Building Kubernetes (EKS) with eksctl
Getting started with AWS Elastic Kubernetes Service using eksctl
Amazon EKS (Elastic Kubernetes Service) is Amazon's managed implementation of Kubernetes. Unlike other implementations, such as Google GKE (Google Kubernetes Engine), batteries are not necessarily included with EKS, so you cannot create a complete cluster with one single command.
This article shows how to build a Kubernetes cluster, batteries included, on Amazon EKS using a tool called eksctl.
But first…
About The Hard Way
With EKS on its own, you will need to do the following:
- Create baseline network infrastructure: VPC (virtual private cloud) with subnets, route table, route association, internet gateway, security groups, etc.
- Create EKS Master Cluster IAM role and IAM Policy to allow EKS service to retrieve data from other AWS services
- Create EKS Master Cluster Security Group to allow cluster communication with worker nodes and worker nodes to communicate with Cluster API server.
- Create the EKS Master node (control plane) itself
- Create Security Group to allow nodes (EC2 instances) to communicate to Kubernetes API.
- Create worker node IAM role and Instance Profile
- Create worker node Security Group to allow worker nodes to communicate with each other, and allow kubelets and pods to receive communication from cluster control plane
- Create Security Group to allow pods to communicate with the cluster API server.
- Create worker node ASG (auto scale group) with worker nodes (EC2 instances) that will install Kubernetes worker node components (such as user-data cloud-init script)
- Create required Kubernetes configuration to allow worker nodes (EC2 instances) to join the cluster through IAM role authentication (authorization config map in the kube-system namespace; see the sketch below)
- Create Kubernetes configuration to allow users (IAM users) to manage the cluster (authorization config map in the kube-system namespace)
This is what I meant by batteries not included.
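To give you a taste of those last two steps, here is a rough sketch of what that authorization config map (commonly named aws-auth) looks like; the account ID and role name below are placeholders, not values from this tutorial:

# aws-auth (sketch): maps the worker node IAM role into Kubernetes groups
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/my-worker-node-role  # placeholder role ARN
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes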
If you thought this was difficult, imagine upgrades. Kubernetes churns out new versions frequently, and AWS deprecates older versions, so you'll need to upgrade (1) the master node (control plane) and (2) the worker nodes.
And this is just a vanilla, no-frills Kubernetes cluster, so if you added features for DNS, ingress (reverse proxy), secrets, and so on, you may need to reinstall or upgrade these as well.
Doing it the Easy Way
Wouldn’t it be nice if we could create a cluster with a single command, like we can do with Google GKE?
gcloud container clusters create \
--cluster-version 1.14.10-gke.36 \
--region us-west1 \
--machine-type n1-standard-2 \
--num-nodes 1 \
--enable-autoscaling \
--min-nodes 1 \
--max-nodes 4 \
my-demo
Well we can!
Weaveworks created a tool called eksctl (pronounced eksey-cuttle or eks-cuttle) that can be used in a similar way, allowing us to create our own cluster with a single command:
eksctl create cluster \
--version 1.14 \
--region us-west-2 \
--node-type t3.medium \
--nodes 3 \
--nodes-min 1 \
--nodes-max 4 \
--name my-demo
The eksctl tool uses CloudFormation under the hood, creating one stack for the EKS master control plane and another stack for the worker nodes. Essentially, we get a similar experience and ease of use when creating Kubernetes clusters on Amazon EKS.
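If you are curious what eksctl builds for you, you can inspect the generated CloudFormation stacks once a cluster exists. For example, assuming the cluster name my-demo-cluster used later in this article:

# List the CloudFormation stacks that eksctl manages for the cluster
eksctl utils describe-stacks --region us-west-2 --cluster my-demo-cluster

# Or look them up with the AWS CLI (eksctl prefixes stack names with eksctl-<cluster name>)
aws cloudformation list-stacks \
  --query "StackSummaries[?contains(StackName, 'eksctl-my-demo-cluster')].StackName"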
Installing The Tools
For this small tutorial, we'll need the tools below. As a prerequisite, you'll need to have the AWS CLI installed and configured with appropriate credentials to create resources on AWS.
Installing eksctl
On macOS, if you have Homebrew installed, you simply run this:
brew tap weaveworks/tap
brew install eksctl
On Windows, you can use Chocolatey to install eksctl:
choco install -y eksctl
On Linux, you can run the following:
TARBALL_NAME="eksctl_$(uname -s)_amd64.tar.gz"
HTTP_PATH="weaveworks/eksctl/releases/download/latest_release"
LOCATION="https://github.com/$HTTP_PATH/$TARBALL_NAME"
curl --silent --location $LOCATION | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
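Whichever method you used, you can verify the installation by printing the eksctl version:

eksctl version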
Install Kubernetes CLI
We will need a client tool to interact with and manage our cluster: the kubectl (koob-cuttle) tool.
On macOS, if you have Homebrew installed, you simply run this:
brew install kubernetes-cli
On Windows, you can use Chocolatey to install the tool:
choco install -y kubernetes-cli
On Debian or Ubuntu, you can install with this:
sudo apt-get update && sudo apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg \
| sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" \
| sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubectl
On Fedora, RHEL, or CentOS, you can run the following (as root):
cat <<KUBEREPO_EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
KUBEREPO_EOF
yum install -y kubectl
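On any of these platforms, you can confirm the client is installed by checking the version:

kubectl version --client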
Creating the EKS Cluster
Now that eksctl is installed, we can create a cluster. We can create the cluster with eksctl using either of these methods:
- using a single command
- using a script (a domain-specific language packaged in YAML)
Store Your Future Kubeconfig
When we create a Kubernetes cluster, we need to create a configuration, called kubeconfig, that stores the credentials to access our Kubernetes cluster.
To get started, let's create a location where we can store our cluster configurations, such as $HOME/kubeconfigs:
mkdir -p ~/kubeconfigs
Creating EKS through pure-CLI Method
Run the command below to create the cluster; expect this to take around 20 minutes:
eksctl create cluster \
--version 1.14 \
--region us-west-2 \
--node-type t3.medium \
--nodes 3 \
--nodes-min 1 \
--nodes-max 4 \
--name my-demo-cluster \
--kubeconfig=$HOME/kubeconfigs/demo-cluster-config.yaml
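Once the command finishes, you can sanity-check that the cluster exists:

eksctl get cluster --region us-west-2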
Creating EKS through YAML Method
As an alternative, we can also use YAML as a sort of DSL (domain-specific language) script for creating Kubernetes clusters with EKS.
We can create an area to save eksctl scripts:
mkdir -p $HOME/eksctl_scripts
touch $HOME/eksctl_scripts/demo_cluster.yaml
In that location, edit the file we created, i.e. $HOME/eksctl_scripts/demo_cluster.yaml, and add the following contents:
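The full configuration used for this article is in the support code linked at the end; a minimal ClusterConfig that mirrors the CLI flags from earlier would look roughly like this (the node group name ng-1 is arbitrary):

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: my-demo-cluster
  region: us-west-2
  version: "1.14"

nodeGroups:
  - name: ng-1                  # arbitrary node group name
    instanceType: t3.medium
    desiredCapacity: 3
    minSize: 1
    maxSize: 4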
Now we can create our cluster by running this command (expect this to take around 20 minutes):
eksctl create cluster \
-f $HOME/eksctl_scripts/demo_cluster.yaml \
--kubeconfig=$HOME/kubeconfigs/demo-cluster-config.yaml
Configure Kubernetes CLI
In the above examples, we added an option to save the configuration to demo-cluster-config.yaml. To access our cluster, we can set this environment variable:
export KUBECONFIG=$HOME/kubeconfigs/demo-cluster-config.yaml
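As a quick check that the credentials work, list the worker nodes (you can also regenerate this kubeconfig at any time with aws eks update-kubeconfig --name my-demo-cluster --region us-west-2):

kubectl get nodes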
Testing the EKS Cluster
We can see everything running on our cluster with this command:
kubectl get all --all-namespaces
This should show the default system workloads, such as the aws-node, coredns, and kube-proxy pods in the kube-system namespace.
Deploying Applications
Now that we have an available cluster, let's start deploying some applications. We should have the Kubernetes client tool, kubectl (koob-cuttle), installed, and KUBECONFIG set to use the kubeconfig file for our cluster.
Now we can create manifests that describe what we want to deploy.
Create Deploy Manifest
In Kubernetes, we deploy a pod, which is a description of how to run our service with one or more containers. In our case, we'll use an application called hello-kubernetes (from the Docker image paulbouwer/hello-kubernetes:1.5).
We want to deploy 3 pods for high availability, and there are a few objects that can facilitate this. We'll use the Deployment controller to deploy a set of three pods.
Create a file with the contents below and name it hello-k8s-deploy.yaml:
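The exact manifest is in the support code linked at the end of this article; a sketch of what it needs is below (the app label hello-kubernetes is an assumption that the service manifest later will rely on):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  replicas: 3                       # three pods for high availability
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
        - name: hello-kubernetes
          image: paulbouwer/hello-kubernetes:1.5
          ports:
            - containerPort: 8080   # port the application listens on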
Deploy the Pods
To deploy our pods using the Deployment controller, run this command:
kubectl apply -f hello-k8s-deploy.yaml
This will deploy a set of pods, which we can view by running:
kubectl get pods
The results will show three hello-kubernetes pods with a status of Running.
Create Service Manifest
In order to access your application, you need to define a Service resource. This will create a single endpoint to access any one of the three pods we created.
There are different types of Service objects, and the one we want is simply called LoadBalancer, which means an external load balancer. Not all clusters support this feature. Amazon EKS supports the LoadBalancer type using the Classic Elastic Load Balancer (ELB). Kubernetes will automatically provision an ELB when we create our application and de-provision it when we destroy the application.
Let’s create a service manifest called hello-k8s-svc.yaml with the following contents:
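Again, the original manifest lives in the support code; a sketch that matches the deployment above would be:

apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes
spec:
  type: LoadBalancer            # provisions an ELB on EKS
  selector:
    app: hello-kubernetes       # matches the pods from the deployment
  ports:
    - port: 80                  # port exposed by the load balancer
      targetPort: 8080          # port the pods listen on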
Deploy the Service
We can deploy the service with the following command:
kubectl apply -f hello-k8s-svc.yaml
We can view the status of this by typing:
kubectl get svc
This should show the hello-kubernetes service with an external hostname under the EXTERNAL-IP column and its port mapping under PORT(S).
NOTE: On EKS, you'll notice the port mapping will not be 80:8080, where 8080 is the port used by the pod. The second port shown is a port on the worker node (EC2 instance), 31774 in this case, which is in turn mapped to the pod's port 8080.
Connecting to the Application
Copy the long DNS name ending with us-west-2.elb.amazonaws.com under the EXTERNAL-IP field for hello-kubernetes and paste it into your browser (prefixed by http://).
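If you prefer the command line, you can also pull the hostname out with a jsonpath query (this assumes the service is named hello-kubernetes, as in the sketch above):

kubectl get svc hello-kubernetes \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'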
This may not be available immediately, as it can take about 5 minutes for the load balancer to come online. Once it does, you will see the hello-kubernetes page, which displays the name of the pod that served the request.
If you hit refresh, you may see a different pod name, as you are connecting to a different pod servicing the application.
Cleaning Up
You can delete your application, as well as de-provision the associated ELB resource, with the following commands:
kubectl delete -f hello-k8s-deploy.yaml
kubectl delete -f hello-k8s-svc.yaml
You can delete the whole cluster (about 20 minutes) with this command:
eksctl delete cluster --region us-west-2 --name my-demo-cluster
Resources
Here are some articles I came across in the journey to create this article.
The eksctl tool
- https://eksctl.io/introduction/getting-started/
- https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html
Kubectl Install
- https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html
- https://kubernetes.io/docs/tasks/tools/install-kubectl/
Creating EKS Cluster without eksctl
Support Code
- Blog Support Code: https://github.com/darkn3rd/blog_tutorials/tree/master/kubernetes/eks_1_provision_eksctl
Conclusion
I hope this can help get you started with Kubernetes right away, so that you can begin building and deploying applications (such as kubectl manifests and helm charts), as well as experimenting with and adding features to EKS, such as (to name a few):
- Route53 integration: external-dns
- Ingress controller with ELB: ingress-nginx
- Ingress controller with ALB: aws-alb-ingress-controller with ingress-merge
- Secrets with Secrets Manager: kubernetes-external-secrets
- Pod Level IAM: kube2iam
- Cluster Autoscaler
In the future, I would like to write articles about using Kubernetes with Terraform (such as the kubernetes provider and helm provider), whether or not the cluster was created with Terraform. This way, you can develop modular infrastructure scripts without unnecessary complexity.
Best of success with your Kubernetes adventures. Enjoy!