Building Amazon EKS with Terraform

Using Terraform to create an Elastic Kubernetes Service Cluster

In a previous article, I covered how you can create an Amazon EKS (Elastic Kubernetes Service) cluster using eksctl, the Weaveworks tool.

For this article, I will show how you can stand up a basic Amazon EKS cluster using a Terraform module.

Previous Article

In a previous article, I demonstrated how to stand up a basic Kubernetes cluster using the eksctl tool.

Required Tools
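The original tool list did not survive editing. Based on the commands used later in this tutorial, you will presumably need terraform, the aws CLI, and kubectl; a quick check like the following (tool names are assumptions) will confirm what is installed:

```shell
# Check for each assumed prerequisite tool and report its status.
for tool in terraform aws kubectl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```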

Doing it the Terraform Way

I created a small module that sets some defaults similar to the eksctl default settings. We can use this by running through the following steps.

Setup Project Environment

First, let’s set up a project directory and create a Terraform configuration file. These commands will work in a Bash shell:
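The original commands were lost; a minimal sketch follows, where the directory name and the main.tf file name are assumptions:

```shell
# Create a project directory (name is an assumption) and an empty
# Terraform configuration file to edit in the next step.
mkdir -p eks-tf-demo && cd eks-tf-demo
touch main.tf
```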

Now let’s edit the file with the following contents:
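The original file contents did not survive editing. Below is a minimal sketch of what main.tf might contain; the module source, variable names, and defaults are all assumptions — consult the eks-basic module repository for the real inputs:

```hcl
# main.tf — minimal sketch; the module source and inputs are assumptions.
provider "aws" {
  region = var.region
}

variable "region" {
  default = "us-east-1"
}

variable "cluster_name" {
  default = "my-eks-cluster"
}

module "eks" {
  # Placeholder source: point this at the eks-basic module repository.
  source       = "github.com/<your-fork>/eks-basic"
  cluster_name = var.cluster_name
}
```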

With this created, we now want to set up some environment variables:
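The exact variable names were lost; the `TF_VAR_` prefix is Terraform's standard way to pass variable values through the environment, so a sketch might look like this (names and values are assumptions):

```shell
# Hypothetical values; adjust to match the variables your module declares.
export AWS_REGION="us-east-1"
export TF_VAR_cluster_name="my-eks-cluster"
```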

Change the above values to whatever makes sense in your environment, such as the AWS Region.

Preparation for EKS Cluster

Now run the following:
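The missing commands here are presumably the standard Terraform workflow (assuming terraform is on your PATH and AWS credentials are configured):

```shell
terraform init   # download the module and providers
terraform plan   # preview the resources that will be created
```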

This will download the appropriate modules and providers, and show what will be created when we decide to apply this.

Create the EKS Cluster

Once satisfied, we can now create the cluster: an Amazon EKS-managed Kubernetes control plane, plus 2 Amazon EC2 instances that will be the Kubernetes worker nodes.
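The missing command is presumably a Terraform apply:

```shell
# Create the cluster; terraform will show the plan and prompt for confirmation.
terraform apply
```

Creating an EKS control plane typically takes on the order of 10–15 minutes, so expect to wait.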

Configure Kubernetes CLI

Now we need to set up the Kubernetes CLI so that we can interact with the cluster we just created. We’ll reference the values we used in the previous section.

The Terraform module will output a local kubeconfig file, which we can use:
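The exact file path was lost; the `kubeconfig_<cluster_name>` pattern below is an assumption (run `terraform output` to see the path your module actually writes):

```shell
# Point kubectl at the kubeconfig file written by the module.
# The file name here is an assumption; check `terraform output` for the real path.
export KUBECONFIG="$PWD/kubeconfig_my-eks-cluster"
```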

Now we can test to see if everything is working fine:
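Per the caption below, the verification command is:

```shell
# List all resources in the default namespace to confirm connectivity.
kubectl get all
```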

We should see something similar to this:

Output of kubectl get all command

Deploying Applications

Now that we have an available cluster, let’s start deploying some applications. We should have the Kubernetes client tool, kubectl (pronounced koob-cuttle), installed and configured to use the kubeconfig file for our cluster.

Now we can create manifests that describe what we want to deploy.

NOTE: This part of the process is identical to the eksctl article, and should be the same for just about any Kubernetes infrastructure, provided a Service object of type LoadBalancer is available for the cluster.

Create Deploy Manifest

In Kubernetes, we deploy a pod, which is a description of how to run our service with one or more containers. In our case, we’ll use a small demo web application from a public Docker image.

We want to deploy 3 pods for high availability, and there are a few Kubernetes objects that can facilitate this. We’ll use the Deployment controller to deploy a set of three pods.

Create a file with the contents below:
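A sketch of the deployment manifest, saved here as deployment.yaml (the original file name and application were lost, so both the file name and the publicly available paulbouwer/hello-kubernetes image are stand-ins):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes        # stand-in name; the article's app name was lost
spec:
  replicas: 3                   # three pods for high availability
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
        - name: hello-kubernetes
          image: paulbouwer/hello-kubernetes:1.10   # stand-in image
          ports:
            - containerPort: 8080                   # port the app listens on
```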

Deploy the Pods

To deploy our pods using the Deployment controller, run this command:
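Assuming the manifest was saved as deployment.yaml (an assumed file name):

```shell
# Submit the deployment manifest to the cluster.
kubectl apply -f deployment.yaml
```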

This will deploy a set of pods, which we can view by running:
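The missing command is presumably:

```shell
# List the pods created by the deployment.
kubectl get pods
```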

The results will look something like this.

Output of the command above

Create Service Manifest

In order to access your application, you need to define a Service resource. This will create a single endpoint to access any one of the three pods we created.

There are different types of Service objects, and the one we want to use is called simply LoadBalancer, which means an external load balancer. Not all clusters support this feature. Amazon EKS supports the LoadBalancer type using the classic Elastic Load Balancer (ELB). Kubernetes will automatically provision and de-provision an ELB when we create and destroy our application.

Let’s create a service manifest with the following contents:
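A sketch of the service manifest, saved here as service.yaml (an assumed file name; the selector matches the stand-in labels used in the deployment sketch, and the ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes
spec:
  type: LoadBalancer        # provisions an AWS ELB on EKS
  selector:
    app: hello-kubernetes   # routes traffic to pods with this label
  ports:
    - port: 80              # port exposed by the load balancer
      targetPort: 8080      # port the pod's container listens on
```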

Deploy the Service

We can deploy the service with the following command:
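Assuming the manifest was saved as service.yaml (an assumed file name):

```shell
# Submit the service manifest; on EKS this also provisions an ELB.
kubectl apply -f service.yaml
```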

We can view the status of this by typing:
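The missing command is presumably:

```shell
# Show the service, including the external ELB DNS name once assigned.
kubectl get service
```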

This should show something similar to the following image:

Output of the command above

NOTE: On EKS, you’ll notice the port mapping does not show the port used by the pod. The high-numbered port shown is a NodePort opened on the worker node (EC2 instance), which is later mapped to the correct port on the pod.

Connecting to the Application

Copy the long DNS name (ending in amazonaws.com) shown under the EXTERNAL-IP field and paste it into your browser (prefixed by http://).

The endpoint may not be available immediately; it can take about 5 minutes for the load balancer to come online. You will see something similar to the image below.

If you hit refresh, you may see a different pod name, as we connect to a different pod servicing the application.

Cleaning Up

You can delete your application, as well as de-provision the associated ELB resource, with the following command:
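Assuming the manifest file names used earlier (both are assumptions):

```shell
# Delete the service (which de-provisions the ELB) and the deployment.
kubectl delete -f service.yaml -f deployment.yaml
```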

You can delete the whole cluster (about 20 minutes) with this command:
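The missing command is presumably a Terraform destroy:

```shell
# Tear down every resource the module created; terraform will prompt for confirmation.
terraform destroy
```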




This tutorial should help get you up to speed quickly with using Terraform to create Kubernetes clusters on Amazon EKS.

The underlying terraform-aws-eks module is quite robust and will handle most use cases, forgoing the need to write equally complex code yourself, unless you are a highly paid consultant (that’s a joke, in case it’s not obvious) or want to do it as a learning exercise.

The eks-basic module that I created for use with this tutorial is very basic and creates a small 2-node cluster. You can fork this and create your own solution that meets your needs.

Best of success. Remember to delete the cluster when you are finished, as these resources are not free and do cost money.
