Kubernetes (EKS) with eksctl

Provision Elastic Kubernetes Service cluster using eksctl

Joaquín Menchaca (智裕)
8 min read · May 2, 2023


The Kubernetes platform can seem quite intimidating and overwhelming. However, there are tools that make this journey easier. Cloud providers like Azure or Google Cloud have tools that can set up your Kubernetes cluster with a single command.

In contrast, for AWS, the batteries are not included, meaning that, unfortunately, there's no simple built-in tool to build all of the components needed to provision a Kubernetes platform.

Is there really no tool? Well, actually, there is. Thanks to Weaveworks, there is a command-line tool, eksctl, that can build all the necessary components and set up an EKS cluster.

This article will show you how to use eksctl with a minimal set of steps to provision a Kubernetes cluster with support for persistent volumes. There's also a short example of installing Dgraph, a distributed graph database, to demonstrate the functionality.

Related Articles

Installing kubectl with the asdf tool

The kubectl command should match the version of the Kubernetes cluster, so, optionally, you can use asdf to install and switch between desired versions of kubectl. This article shows how to use this tool on Linux (Debian) or macOS (arm64 and amd64).

Earlier eksctl article (2019)

This earlier article was written around the time of Kubernetes 1.14 in 2019 and thus is very outdated. However, some of the explanations may be a useful overview.

Prerequisites

Tools

These are the tools used in this article.

  • AWS CLI [aws] is a tool that interacts with AWS.
  • kubectl client [kubectl] is the tool that interacts with the Kubernetes cluster. This can be installed using the asdf tool.
  • helm [helm] is a tool that can install Kubernetes applications that are packaged as helm charts.
  • eksctl [eksctl] is the tool that can provision an EKS cluster as well as the supporting VPC network infrastructure.
  • asdf [asdf] is a tool that installs versions of popular tools like kubectl.

Additionally, these commands were tested in a POSIX shell, such as bash or zsh. GNU Grep is also used to extract the version of Kubernetes running on the server. Linux users will likely have this installed by default, while macOS users can install it with Homebrew; run brew info grep for more information.
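
For macOS users, here is a quick sketch of installing GNU Grep with Homebrew. Note that Homebrew installs the GNU tools with a g prefix (ggrep), unless you put its gnubin directory on your PATH:

# macOS: install GNU Grep via Homebrew
brew install grep

# put the unprefixed GNU `grep` ahead of the BSD grep on the PATH
export PATH="$(brew --prefix)/opt/grep/libexec/gnubin:$PATH"

# verify that GNU Grep is now the one found first
grep --version | head -1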

AWS Setup

Before getting started on EKS, you will need to set up billing with an AWS account (there's a free tier), and then configure a profile that provides access to an IAM user identity. See Setting up the AWS CLI for more information on configuring a profile.

After setup, you can test it with the following:

export AWS_PROFILE="<your-profile-goes-here>"
aws sts get-caller-identity
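
If the profile is configured correctly, this prints a small JSON document identifying the caller, something like the following (the values here are redacted placeholders):

{
    "UserId": "AIDAXXXXXXXXXXXXXXXXX",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/your-user-name"
}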

Kubernetes Client Setup

If you use asdf to install kubectl, you can get the latest version with the following:

# install kubectl plugin for asdf
asdf plugin-add kubectl \
https://github.com/asdf-community/asdf-kubectl.git

# fetch latest kubectl
asdf install kubectl latest
asdf global kubectl latest

# test results of latest kubectl
kubectl version --short --client 2> /dev/null

This should show something like:

Client Version: v1.27.1
Kustomize Version: v5.0.1

Also, create a directory to store the Kubernetes configurations that will be referenced by the KUBECONFIG environment variable:

mkdir -p $HOME/.kube

Provision an EKS cluster

Here are the minimal steps required to install EKS with functional support for persistent volumes.

Kubernetes 1.22 and earlier

For Kubernetes 1.22 (and earlier versions), the process to create a fully functional EKS cluster with support for storage is straightforward.

First, set up some environment variables:

export EKS_CLUSTER_NAME="my-cluster"
export EKS_REGION="us-east-2"
export EKS_VERSION="1.22"

export KUBECONFIG="$HOME/.kube/$EKS_REGION.$EKS_CLUSTER_NAME.yaml"

When ready you can provision an EKS cluster with the following command:


eksctl create cluster \
--version $EKS_VERSION \
--region $EKS_REGION \
--name $EKS_CLUSTER_NAME
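
eksctl picks reasonable defaults for the node group (at the time of writing, two m5.large worker nodes). If you want to size the node group yourself, flags like these can be added; the instance type and counts below are only example values:

eksctl create cluster \
--version $EKS_VERSION \
--region $EKS_REGION \
--name $EKS_CLUSTER_NAME \
--node-type m5.large \
--nodes 3 \
--nodes-min 3 \
--nodes-max 6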

You can verify your cluster with the following:

# fetch exact version of Kubernetes server (requires GNU Grep)
VER=$(kubectl version --short \
| grep Server \
| grep -oP '(\d{1,2}\.){2}\d{1,2}'
)

# setup kubectl tool
asdf install kubectl $VER
asdf global kubectl $VER

# test EKS cluster
kubectl get nodes
kubectl get all --all-namespaces

Kubernetes 1.23 and later: Part 1

Starting with Kubernetes 1.23, EKS no longer ships with a functional default storage driver. As a result, you cannot install a database or any other application that requires persistent volumes: if you try, the pods will hang in a Pending state until they eventually error out.

This is the new default user experience, by design, from AWS.

To get around this limitation, this set of steps will install a storage driver so that your EKS cluster will support persistent volumes, and later create a new storage class that uses this storage driver.

Similar to before, we first define some environment variables:

export EKS_CLUSTER_NAME="my-cluster"
export EKS_REGION="us-east-2"
export EKS_VERSION="1.25"

export KUBECONFIG="$HOME/.kube/$EKS_REGION.$EKS_CLUSTER_NAME.yaml"

Run these commands to provision the EKS cluster:

# create cluster config file
eksctl create cluster \
--version $EKS_VERSION \
--region $EKS_REGION \
--name $EKS_CLUSTER_NAME \
--dry-run \
| sed 's/ebs: false/ebs: true/' \
> $EKS_CLUSTER_NAME.cluster.yaml

# provision using cluster config file
eksctl create cluster --config-file $EKS_CLUSTER_NAME.cluster.yaml

⚠️ NOTE ⚠️ This process will grant access to create EBS (Elastic Block Store) volumes to all worker nodes on the cluster, so that any container that needs to mount volumes can do so.
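
The sed expression in the pipeline above flips the ebs flag under the node group's iam.withAddonPolicies section of the generated cluster config, which is what attaches that IAM policy. If you are curious, you can inspect this part of the file before provisioning:

# peek at the addon policies in the generated cluster config
grep -B 2 -A 10 'withAddonPolicies' $EKS_CLUSTER_NAME.cluster.yaml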

Kubernetes 1.23 and later: Part 2

Now that the EKS cluster is provisioned with permission to manage EBS volumes, we need to install the storage driver itself and then create a storage class that uses it.

Install kubectl that matches the EKS cluster version and test connectivity:

# fetch exact version of Kubernetes server (requires GNU Grep)
VER=$(kubectl version --short \
| grep Server \
| grep -oP '(\d{1,2}\.){2}\d{1,2}'
)

# setup kubectl tool
asdf install kubectl $VER
asdf global kubectl $VER

# test EKS cluster
kubectl get nodes
kubectl get all --all-namespaces

Afterward, we’ll need to install the new storage driver:

# add remote repo
helm repo add \
"aws-ebs-csi-driver" \
"https://kubernetes-sigs.github.io/aws-ebs-csi-driver"
helm repo update

# deploy new storage driver
helm upgrade --install "aws-ebs-csi-driver" \
--namespace "kube-system" \
aws-ebs-csi-driver/aws-ebs-csi-driver

You can check the status with:

kubectl get pods \
--namespace kube-system \
--selector "app.kubernetes.io/name=aws-ebs-csi-driver"

In the next step, we’ll need to create a storage class that uses the new storage driver:

cat <<EOF > sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
EOF

# create storage class
kubectl apply --filename sc.yaml

You can check on the final result with:

kubectl get storageclass
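
Because the storage class uses volumeBindingMode: WaitForFirstConsumer, a PVC will sit in a Pending state until a pod actually mounts it, so a meaningful smoke test needs both. Here is a minimal sketch; the names test-pvc and test-pod are made up for illustration:

cat <<EOF > test.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - mountPath: /data
          name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-pvc
EOF

# create the test resources
kubectl apply --filename test.yaml

# the PVC should move from Pending to Bound once the pod is scheduled
kubectl get pvc test-pvc

# clean up the smoke test when done
kubectl delete --filename test.yaml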

This next step is completely optional: we can make the new storage class the default for the Kubernetes cluster. You may want to test some deployments using the storage class first (such as the smoke test above) to make sure it works. When ready, you can run this:

# set gp2 to not be the default
kubectl patch storageclass gp2 --patch \
'{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'

# set ebs-sc to the default
kubectl patch storageclass ebs-sc --patch \
'{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

You can check the final results with:

kubectl get storageclass
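
After the patches, the output should look something like this (exact columns and ages will vary with your kubectl version and timing):

NAME               PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
ebs-sc (default)   ebs.csi.aws.com         Delete          WaitForFirstConsumer   false                  10m
gp2                kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   false                  45m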

Test an EKS cluster with Dgraph

Dgraph is a distributed graph database that can be easily installed with a helm chart. This makes it ideal for testing the functionality of a Kubernetes cluster with persistent volume support.

# add remote repo
helm repo add "dgraph" "https://charts.dgraph.io"
helm repo update

# install Dgraph
helm install "my-release" \
--namespace "dgraph" \
--create-namespace \
dgraph/dgraph

When completed, you can check whether Dgraph is up. It may take a few minutes for all of the components to come up.

kubectl get all --namespace "dgraph"
kubectl get pvc --namespace "dgraph"
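
Once the pods are ready, a quick way to poke at Dgraph is to port-forward to the Alpha service and hit its health endpoint. The service name below is assumed from the chart's usual <release>-dgraph-alpha naming convention:

# forward the Dgraph Alpha HTTP port to localhost (run in the background)
kubectl port-forward --namespace dgraph \
service/my-release-dgraph-alpha 8080:8080 &

# query the health endpoint; this should return a small JSON document
curl localhost:8080/health

# stop the background port-forward when finished
kill %1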

Cleanup

When completed, you can remove the components with the following steps. Note that these steps require the environment variables that were set earlier when the cluster was created.

Dgraph

Dgraph can be deleted with the following commands.

helm delete "my-release" --namespace "dgraph"

# IMPORTANT: delete persistent volume claims
kubectl delete pvc --selector "release=my-release" --namespace "dgraph"

⚠️ IMPORTANT ⚠️: The PVC (persistent volume claims) must be deleted, otherwise, there will be leftover volumes eating up costs after the Kubernetes cluster has been removed.
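
If you want extra assurance that nothing was left behind, you can list unattached EBS volumes in the region with the AWS CLI; any leftover Dgraph volumes would show up here as available volumes:

# list unattached (available) EBS volumes remaining in the region
aws ec2 describe-volumes \
--region $EKS_REGION \
--filters "Name=status,Values=available" \
--query 'Volumes[].{ID:VolumeId,Size:Size}'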

Kubernetes 1.22

If this version was used, the EKS cluster can be removed with the following:

eksctl delete cluster \
--region $EKS_REGION \
--name $EKS_CLUSTER_NAME

Remember to delete Dgraph before doing this so that there are no volumes left over eating up costs.

Finally, assuming you are using a unique KUBECONFIG file for this cluster, you can go ahead and delete the file:

rm -f $KUBECONFIG

Kubernetes 1.23+

There are a few more steps involved in cleanup. First, we want to change the default storage class back to its original setting; otherwise, the cluster cannot be deleted cleanly.


# unset ebs-sc
kubectl patch storageclass ebs-sc --patch \
'{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'

# gp2 set to default
kubectl patch storageclass gp2 --patch \
'{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

# remove components
kubectl delete --filename sc.yaml
helm delete "aws-ebs-csi-driver" --namespace "kube-system"

Delete the EKS cluster:

# delete EKS cluster
eksctl delete cluster --config-file $EKS_CLUSTER_NAME.cluster.yaml

Remember to delete Dgraph before doing this so that there are no volumes left over eating up costs.

Finally, assuming you are using a unique KUBECONFIG file for this cluster, you can go ahead and delete the file:

rm -f $KUBECONFIG

Resources

These are some links that may be useful.

Elastic Kubernetes Service (EKS) from AWS

Conclusion

The purpose of this guide is to get you started quickly with a functional Kubernetes cluster using a minimalist approach. The final result is a cluster suitable for test environments, but it should not be used for production without further optimization and security hardening, such as using IRSA (IAM Roles for Service Accounts).

When installing the AWS EBS CSI driver, now required for Kubernetes 1.23+, you may want to enable features like snapshots, volume resizing, and volume scheduling. See Installation for further information.

When configuring a storage class that uses this driver, there are some parameters that can be configured, such as choosing a volume type other than the default gp3.
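
For example, here is a sketch of a storage class that requests a different volume type. The parameters shown (type, iops, encrypted) come from the driver's documented storage class parameters, but the name and values here are only illustrative:

cat <<EOF > sc-io2.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc-io2
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: io2          # provisioned-IOPS volumes instead of the gp3 default
  iops: "4000"
  encrypted: "true"
EOF

kubectl apply --filename sc-io2.yaml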

The eksctl tool uses CloudFormation (CFN) stacks for its automation. The tool has many options, some of which are only available when using a cluster config file. You can, for example, build your VPC infrastructure with CloudFormation or Terraform, and then instruct eksctl to use that existing infrastructure when provisioning EKS.
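
As a sketch of that last scenario, a cluster config file can point eksctl at existing network infrastructure. The fields below follow the eksctl ClusterConfig schema, but the file name, VPC ID, and subnet IDs are placeholders you would replace with your own:

cat <<EOF > existing-vpc.cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster
  region: us-east-2
vpc:
  id: vpc-0123456789abcdef0
  subnets:
    private:
      us-east-2a: { id: subnet-aaaa1111bbbb2222c }
      us-east-2b: { id: subnet-dddd3333eeee4444f }
managedNodeGroups:
- name: workers
  instanceType: m5.large
  desiredCapacity: 2
  privateNetworking: true
EOF

eksctl create cluster --config-file existing-vpc.cluster.yaml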
