
How to deploy an EKS cluster using Terraform

Terraform benefits include scalability, repeatable infrastructure and cost efficiency. Follow this step-by-step tutorial to learn how to deploy an EKS cluster using Terraform.

You need repeatable practices to scale workloads, especially in the cloud. Creating resources manually is no longer the preferred method for engineers; automation frees them to spend more time on value-driven work.

Kubernetes orchestration is application-focused, but in most cases the clusters it manages still need infrastructure to run on. Creating those clusters from scratch or scaling them by hand works against repeatable, consistent processes.

Terraform is a popular option to create an Elastic Kubernetes Service (EKS) cluster in AWS. Learn more about its benefits and follow a step-by-step tutorial on how to deploy an EKS cluster using Terraform.

Why choose Terraform

Infrastructure as code (IaC) manages and provisions infrastructure through code instead of through manual processes. This approach offers several benefits, including the following:

  • repeatable and scalable infrastructure
  • shorter deployment times
  • lower costs

Several IaC and configuration management tools are available today, and each one -- Terraform included -- has its pros and cons. Still, Terraform continues to gain popularity among infrastructure pros, developers, DevOps engineers, site reliability engineers and other engineering roles. Many of Terraform's strengths come from the following:

  • it works across platforms;
  • it's human-readable and does not require advanced skill;
  • it's open source, which means engineers can create their own Terraform providers for specific functionality; and
  • it has both a free version -- Terraform -- and an enterprise version -- Terraform Cloud -- with no functionality lost between the versions.

Deploy an EKS cluster with Terraform

Before you start creating, you'll need the following:

  • an AWS account;
  • identity and access management (IAM) credentials and programmatic access;
  • AWS credentials that are set up locally with aws configure;
  • a virtual private cloud (VPC) configured for EKS -- if you need to create one, see the optional sketch after this list; and
  • a code or text editor, like VS Code.
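
The VPC prerequisite can come from anywhere -- the console, CloudFormation or Terraform itself. If you would rather define it in Terraform, the following is a minimal sketch that uses the community terraform-aws-modules/vpc/aws module. The name, CIDR ranges and Availability Zones shown are placeholders to adjust for your account and Region.

# Optional sketch: create the VPC with the community VPC module instead of
# bringing your own. All names, CIDR ranges and AZs below are placeholders.
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "devopsthehardway-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["us-east-1a", "us-east-1b"]
  public_subnets  = ["10.0.1.0/24", "10.0.2.0/24"]
  private_subnets = ["10.0.101.0/24", "10.0.102.0/24"]

  # A NAT gateway lets worker nodes in private subnets reach the internet.
  enable_nat_gateway = true
  single_nat_gateway = true
}

The module outputs the subnet IDs (module.vpc.public_subnets and module.vpc.private_subnets), which you can feed into the subnet variables created later in this tutorial.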

Once you have the prerequisites, it's time to start writing the code to create an EKS cluster. The following steps show how to set up the main.tf file to create an EKS cluster and the variable files to make the cluster repeatable across environments.

Create the main.tf file

For the purposes of this section, VS Code will be used. However, any text editor will work.

Step 1. Open your text editor and create a new directory. Create a new file called main.tf. In this file, you'll create the following:

  • the AWS Terraform provider;
  • a new IAM role for EKS;
  • the EKS policy for the IAM role; and
  • the EKS cluster itself, including the worker nodes.

Step 2. In the main.tf file, add the provider code. This will ensure that you use the AWS provider.

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}
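
Optionally -- and this is a sketch rather than a required step -- you can pin the provider to a version range and set a default Region in a provider block so runs behave the same on every machine. The version constraint and Region below are example values, not requirements.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # example constraint; pin to a version you have tested
    }
  }
}

provider "aws" {
  region = "us-east-1" # example Region; use the Region where your VPC lives
}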

Step 3. Set up the first resource: the IAM role for the cluster. The assume role policy lets the EKS service assume this role.

resource "aws_iam_role" "eks-iam-role" {
 name = "devopsthehardway-eks-iam-role"

 path = "/"

 assume_role_policy = <<EOF
{
 "Version": "2012-10-17",
 "Statement": [
  {
   "Effect": "Allow",
   "Principal": {
    "Service": "eks.amazonaws.com"
   },
   "Action": "sts:AssumeRole"
  }
 ]
}
EOF

}

Step 4. Once the role is created, attach these two policies to it:

  • AmazonEKSClusterPolicy
  • AmazonEC2ContainerRegistryReadOnly

These policies let the role manage the EKS cluster and read from Amazon Elastic Container Registry, so the EC2 instances where the worker nodes run can pull container images.

resource "aws_iam_role_policy_attachment" "AmazonEKSClusterPolicy" {
 policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
 role    = aws_iam_role.eks-iam-role.name
}
resource "aws_iam_role_policy_attachment" "AmazonEC2ContainerRegistryReadOnly-EKS" {
 policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
 role    = aws_iam_role.eks-iam-role.name
}

Step 5. Once the policies are attached, create the EKS cluster.

resource "aws_eks_cluster" "devopsthehardway-eks" {
 name = "devopsthehardway-cluster"
 role_arn = aws_iam_role.eks-iam-role.arn

 vpc_config {
  subnet_ids = [var.subnet_id_1, var.subnet_id_2]
 }

 depends_on = [
  aws_iam_role.eks-iam-role,
 ]
}
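
As an optional addition, you can output the cluster's connection details. The attributes below are exported by the aws_eks_cluster resource; the output names themselves are arbitrary.

# Optional: surface connection details after the cluster is created.
output "eks_cluster_name" {
  value = aws_eks_cluster.devopsthehardway-eks.name
}

output "eks_cluster_endpoint" {
  value = aws_eks_cluster.devopsthehardway-eks.endpoint
}

output "eks_cluster_ca_data" {
  value = aws_eks_cluster.devopsthehardway-eks.certificate_authority[0].data
}

Once the cluster exists, running aws eks update-kubeconfig --name devopsthehardway-cluster adds it to your local kubeconfig so kubectl can reach it.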

Step 6. Set up an IAM role for the worker nodes. The process is similar to the IAM role you created for the EKS cluster, except this time you attach the policies that the worker nodes need:

  • AmazonEKSWorkerNodePolicy
  • AmazonEKS_CNI_Policy
  • EC2InstanceProfileForImageBuilderECRContainerBuilds
  • AmazonEC2ContainerRegistryReadOnly
resource "aws_iam_role" "workernodes" {
  name = "eks-node-group-example"
 
  assume_role_policy = jsonencode({
   Statement = [{
    Action = "sts:AssumeRole"
    Effect = "Allow"
    Principal = {
     Service = "ec2.amazonaws.com"
    }
   }]
   Version = "2012-10-17"
  })
 }
 
 resource "aws_iam_role_policy_attachment" "AmazonEKSWorkerNodePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role    = aws_iam_role.workernodes.name
 }
 
 resource "aws_iam_role_policy_attachment" "AmazonEKS_CNI_Policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role    = aws_iam_role.workernodes.name
 }
 
 resource "aws_iam_role_policy_attachment" "EC2InstanceProfileForImageBuilderECRContainerBuilds" {
  policy_arn = "arn:aws:iam::aws:policy/EC2InstanceProfileForImageBuilderECRContainerBuilds"
  role    = aws_iam_role.workernodes.name
 }
 
 resource "aws_iam_role_policy_attachment" "AmazonEC2ContainerRegistryReadOnly" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role    = aws_iam_role.workernodes.name
 }

Step 7. The last bit of code creates the worker nodes themselves. For testing purposes, use just one worker node in the scaling_config block. In production, follow best practices and run at least three worker nodes; a short sketch after the code block shows one way to make those counts configurable.

 resource "aws_eks_node_group" "worker-node-group" {
  cluster_name  = aws_eks_cluster.devopsthehardway-eks.name
  node_group_name = "devopsthehardway-workernodes"
  node_role_arn  = aws_iam_role.workernodes.arn
  subnet_ids   = [var.subnet_id_1, var.subnet_id_2]
  instance_types = ["t3.xlarge"]
 
  scaling_config {
   desired_size = 1
   max_size   = 1
   min_size   = 1
  }
 
  depends_on = [
   aws_iam_role_policy_attachment.AmazonEKSWorkerNodePolicy,
   aws_iam_role_policy_attachment.AmazonEKS_CNI_Policy,
   #aws_iam_role_policy_attachment.AmazonEC2ContainerRegistryReadOnly,
  ]
 }

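If you want the node counts to be as repeatable as the subnet IDs, one option -- shown here as a sketch, not part of the original walkthrough -- is to drive scaling_config from variables. The variable names below are arbitrary.

# Optional sketch: make the node counts configurable per environment.
variable "desired_nodes" {
  type    = number
  default = 1
}

variable "min_nodes" {
  type    = number
  default = 1
}

variable "max_nodes" {
  type    = number
  default = 1
}

You would then reference var.desired_nodes, var.min_nodes and var.max_nodes inside scaling_config instead of the hard-coded values, and override them for production-sized clusters. These declarations can sit alongside the subnet variables created in the next section.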

Create the variables.tf file

Once the main.tf file is created, it's time to set up the variables. Variables allow you to pass in values and make your code repeatable. You can use this code for any EKS environment.

Step 1. Create a new file called variables.tf. When setting up the variables.tf file, you'll create the following two variables:

  • subnet_id_1
  • subnet_id_2

The two subnet IDs come from the VPC you set up in the prerequisites. For development purposes, one public subnet and one private subnet are fine.

Step 2. Within the variables.tf file, create the following variables:

 variable "subnet_id_1" {
  type = string
  default = "subnet-your_first_subnet_id"
 }
 
 variable "subnet_id_2" {
  type = string
  default = "subnet-your_second_subnet_id"
 }
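
Rather than editing the defaults in place, you can also keep the real subnet IDs in a terraform.tfvars file, which Terraform loads automatically. The values below are placeholders, not real subnet IDs.

# terraform.tfvars -- example only; substitute your own subnet IDs
subnet_id_1 = "subnet-your_first_subnet_id"
subnet_id_2 = "subnet-your_second_subnet_id"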

Create the EKS environment

To create the environment, make sure you're in the directory that contains the Terraform code you just wrote. Then, run the following commands:

  • terraform init. Initialize the environment and pull down the AWS provider.
  • terraform plan. Plan the environment and ensure no bugs are found.
  • terraform apply --auto-approve. Create the environment with the apply command, combined with auto-approve, to avoid prompts.

When you are ready to destroy the environment, ensure that you're in the Terraform module/directory that you used to create the EKS cluster. Then, run the terraform destroy command.

Next Steps

Deploy and manage Azure Key Vault with Terraform

Manage multiple environments with Terraform workspaces

Learn how to use Terraform modules with examples
