Building Amazon EKS with Terraform
Using Terraform to create an Elastic Kubernetes Service Cluster
In a previous article, I covered how you can create an Amazon EKS (Elastic Kubernetes Service) cluster using the Weaveworks eksctl tool. For this article, I will show how you can stand up a basic Amazon EKS cluster using the Terraform module terraform-aws-modules/eks/aws.
Previous Article
In a previous article, I demonstrated how to stand up a basic Kubernetes cluster using the eksctl tool.
Required Tools
- AWS CLI needed to interact with AWS cloud resources. A profile with administrative access should be configured.
- KubeCtl (koob-cuttle), or Kubernetes CLI, to interact with EKS.
- Terraform CLI to manage Kubernetes and AWS resources.
- Bash Shell is not strictly required, but the commands in this tutorial are tested with bash, which is the default in macOS and popular Linux distros, or with MSYS2 on Windows.
Doing it the Terraform Way
I created a small module that sets some defaults similar to the eksctl default settings. We can use it by running through the following steps.
Setup Project Environment
First, let’s set up a project directory and create a file. These commands will work in a Bash shell:
mkdir -p ~/projects/eks-basic && cd ~/projects/eks-basic
touch my_cluster.tf
Now let’s edit the file my_cluster.tf with the following contents:
my_cluster.tf
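The embedded gist didn’t survive here. As a rough sketch, a configuration along the following lines works with the module versions current around the time this article was written; the VPC layout, Kubernetes version, and module version pins are my assumptions rather than the author’s exact module, though the two m5.large workers and the generated kubeconfig file match what the rest of the tutorial expects:

```hcl
# my_cluster.tf -- a minimal sketch, not the author's exact module.

variable "eks_cluster_name" {
  type        = string
  description = "Cluster name, set via TF_VAR_eks_cluster_name"
}

variable "region" {
  type        = string
  description = "AWS region, set via TF_VAR_region"
}

provider "aws" {
  region = var.region
}

data "aws_availability_zones" "available" {}

# A VPC with public/private subnets, tagged so EKS can discover them.
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 2.0"

  name = "${var.eks_cluster_name}-vpc"
  cidr = "10.0.0.0/16"

  azs             = data.aws_availability_zones.available.names
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true

  public_subnet_tags = {
    "kubernetes.io/cluster/${var.eks_cluster_name}" = "shared"
    "kubernetes.io/role/elb"                        = "1"
  }

  private_subnet_tags = {
    "kubernetes.io/cluster/${var.eks_cluster_name}" = "shared"
    "kubernetes.io/role/internal-elb"               = "1"
  }
}

# EKS control plane plus two m5.large worker nodes. This module version
# also writes kubeconfig_<cluster_name> into the working directory.
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 12.2"

  cluster_name    = var.eks_cluster_name
  cluster_version = "1.17"
  vpc_id          = module.vpc.vpc_id
  subnets         = module.vpc.private_subnets

  worker_groups = [
    {
      name                 = "workers"
      instance_type        = "m5.large"
      asg_desired_capacity = 2
      asg_max_size         = 2
    }
  ]
}

# The kubernetes provider lets the module manage the aws-auth ConfigMap
# so the worker nodes can join the cluster.
data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

provider "kubernetes" {
  version                = "~> 1.11"
  load_config_file       = false
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}
```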
With this created, we now want to set up some environment variables:
export TF_VAR_eks_cluster_name=adorable-unicorn
export TF_VAR_region=us-east-2
Change the above values to whatever makes sense in your environment, such as the AWS Region.
Preparation for EKS Cluster
Now run the following:
terraform init
terraform plan
This will download the appropriate modules and providers, and show what will be created when we decide to apply this.
Create the EKS Cluster
Once satisfied, we can now create the cluster: an Amazon EKS Kubernetes master node plus two Amazon EC2 instances that will be the Kubernetes worker nodes.
terraform apply
Configure Kubernetes CLI
Now we need to set up the Kubernetes CLI, kubectl, so that we can interact with the cluster we just created. We’ll reference the values we used in the previous section, such as TF_VAR_eks_cluster_name.
The Terraform module will output a local kubeconfig file, which we can use:
export KUBECONFIG=$PWD/kubeconfig_${TF_VAR_eks_cluster_name}
Now we can test to see if everything is working fine:
kubectl get all --all-namespaces
We should see the built-in Kubernetes resources in the kube-system and default namespaces, which confirms kubectl can talk to the cluster.
Deploying Applications
Now that we have an available cluster, let’s start deploying some applications. We should have the Kubernetes client tool, kubectl (koob-cuttle), installed and KUBECONFIG set to use the kubeconfig file for our cluster.
Now we can create manifests that describe what we want to deploy.
NOTE: This part of the process is identical to the article using the eksctl tool, and should be the same for just about any Kubernetes infrastructure, provided a Service object of type LoadBalancer is available for the cluster.
Create Deploy Manifest
In Kubernetes, we deploy a pod, which is a description of how to run our service with one or more containers. In our case, we’ll use an application called hello-kubernetes (from the Docker image paulbouwer/hello-kubernetes:1.5).
We want to deploy three pods for high availability, and there are a few objects that can facilitate this. We’ll use the Deployment controller to deploy a set of three pods.
Create a file with the contents below and name it hello-k8s-deploy.yaml:
hello-k8s-deploy.yaml
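The original gist isn’t reproduced here. Based on the details given in this article (three replicas of the paulbouwer/hello-kubernetes:1.5 image listening on port 8080), a Deployment along the following lines should work; the app label is my assumption:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  replicas: 3                    # three pods for high availability
  selector:
    matchLabels:
      app: hello-kubernetes      # assumed label; must match the pod template
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
      - name: hello-kubernetes
        image: paulbouwer/hello-kubernetes:1.5
        ports:
        - containerPort: 8080    # the port the application listens on
```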
Deploy the Pods
To deploy our pods using the Deployment controller, run this command:
kubectl apply -f hello-k8s-deploy.yaml
This will deploy a set of pods, which we can view by running:
kubectl get pods
The results will show three hello-kubernetes pods, each with a generated name suffix, eventually in the Running state.
Create Service Manifest
In order to access your application, you need to define a Service resource. This will create a single endpoint to access any one of the three pods we created.
There are different types of Service objects, and the one we want to use is called simply LoadBalancer, which means an external load balancer. Not all clusters support this feature. Amazon EKS supports the LoadBalancer type using the Classic Elastic Load Balancer (ELB). Kubernetes will automatically provision and de-provision an ELB when we create and destroy our application.
Let’s create a service manifest called hello-k8s-svc.yaml with the following contents:
hello-k8s-svc.yaml
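As with the Deployment, the original gist isn’t shown; a Service matching the behavior described here (type LoadBalancer, external port 80 mapped to pod port 8080) would look roughly like this, with the selector matching the label assumed in the Deployment sketch above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes
spec:
  type: LoadBalancer       # ask Kubernetes to provision an external ELB
  ports:
  - port: 80               # port exposed by the load balancer
    targetPort: 8080       # port the pods listen on
  selector:
    app: hello-kubernetes  # routes traffic to pods with this label
```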
Deploy the Service
We can deploy the service with the following command:
kubectl apply -f hello-k8s-svc.yaml
We can view the status of this by typing:
kubectl get svc
This should show the hello-kubernetes service with the load balancer’s DNS name in the EXTERNAL-IP column.
NOTE: On EKS, you’ll notice the port mapping shown is not 80:8080, where 8080 is the port used by the pod. The second port, 31774 in this case, is a node port on the worker node (EC2 instance), which is in turn mapped to the pod’s port 8080.
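If you want to see exactly which node port was assigned, you can query the Service directly using kubectl’s jsonpath output; hello-kubernetes is the service name from the manifest sketch above:

```bash
kubectl get svc hello-kubernetes -o jsonpath='{.spec.ports[0].nodePort}'
```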
Connecting to the Application
Copy the long DNS name ending with elb.amazonaws.com (the region portion will match your TF_VAR_region) under the EXTERNAL-IP field for hello-kubernetes, and paste it into your browser (prefixed by http://).
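Alternatively, you can pull the load balancer’s hostname from the command line instead of copying it from the kubectl get svc output (again assuming the service name hello-kubernetes):

```bash
kubectl get svc hello-kubernetes \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```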
This may not be available immediately, as it can take about 5 minutes for the load balancer to come online. You should then see the hello-kubernetes welcome page, which displays the name of the pod serving the request.
If you hit refresh, you may see a different pod name, as we connect to a different pod servicing the application.
Cleaning Up
You can delete your application, as well as de-provision the associated ELB resource, with the following commands:
kubectl delete -f hello-k8s-deploy.yaml
kubectl delete -f hello-k8s-svc.yaml
You can delete the whole cluster (which takes about 20 minutes) with this command:
terraform destroy
References
Here are some resources I came across on the journey to create this article.
Terraform Related
- Terraform Home: https://www.terraform.io/
- AWS VPC Module: https://registry.terraform.io/modules/terraform-aws-modules/vpc/aws/
- AWS EKS Module: https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/
Kubectl Install
- https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html
- https://kubernetes.io/docs/tasks/tools/install-kubectl/
Conclusion
This tutorial should help you get up to speed quickly with using Terraform to create Kubernetes clusters with Amazon EKS.
The underlying terraform-aws-eks module is quite robust and will handle most use cases, thus forgoing the need to invent equally complex code, unless you are a highly paid consultant (that’s a joke, in case it’s not obvious) or want to do it as a learning exercise.
The eks-basic module that I created for use with this tutorial is very basic and creates a small 2-node cluster. You can fork this and create your own solution that meets your needs.
Best of success. Remember to delete the cluster when you are finished, as these are m5.large instances and do cost money.