Minimalist EKS w AWS LB Controller

Provision EKS cluster + AWS LB Controller using eksctl

Joaquín Menchaca (智裕)
11 min readMay 25, 2023


The AWS managed Kubernetes service, Elastic Kubernetes Service (EKS), comes bundled with the older classic ELB for load-balancing services, and it does not ship a default ingress controller, which is necessary for advanced web reverse-proxy routing.

You can add support for modern AWS load balancing solutions with the AWS Load Balancer Controller add-on, which can be installed using a helm chart.

This guide will demonstrate how to install these components, and then walk through services that use these resources.


This guide will stand up a disposable test cluster suitable for learning activities. See notes about security later in this guide.

This guide was tested using GNU Bash with GNU Grep, but should work with any POSIX shell, such as Zsh. I ran this on macOS Monterey (12.6.3) with the following tool versions:

## Core Tools

* aws (aws-cli) 2.11.6
* eksctl 0.141.0
* kubectl v1.26.4
* helm v3.11.2

## Other Tools

* asdf v0.11.3
* grep (GNU grep) 3.10
* brew (Homebrew) 4.0.18
* bash 5.2.15
* zsh 5.9

These tools were installed using Homebrew (brew), with the exception of kubectl, which was installed using asdf. As long as the versions you are using are not too far off, they should work. They are listed for troubleshooting purposes in case any issues arise.

AWS Elastic Load Balancing family

AWS uses the term ELB (Elastic Load Balancing) rather loosely to describe its umbrella of load balancing solutions. There have been two iterations: ELBv1 (also called classic ELB) and ELBv2, which includes the Network Load Balancer (NLB) and the Application Load Balancer (ALB).

  • Classic ELB: load balances TCP, SSL/TLS, HTTP, and HTTPS traffic without advanced request-routing support. HTTP/2 is not supported.
  • ALB: load balances HTTP, HTTPS, gRPC, and HTTP/2 (h2c or h2) traffic with advanced request routing and support for WAF.
  • NLB: load balances TCP, UDP, and TLS traffic with high performance.

How does this relate to Kubernetes?

In Kubernetes, you can use load balancing features with a service of type LoadBalancer, and layer 7 reverse proxy with an ingress.

After installing the AWS Load Balancer Controller add-on, EKS can provision an NLB for a service of type LoadBalancer and an ALB for an ingress resource.


These are some prerequisites and initial steps needed before provisioning a cluster and installing drivers.


This article requires a basic understanding of TCP/IP networking and the OSI model, specifically Layer 4 (Transport) and Layer 7 (Application, e.g. HTTP). This article covers using load balancing and reverse proxies.

In Kubernetes, familiarity with the service types ClusterIP, NodePort, LoadBalancer, and ExternalName, as well as the ingress resource, is assumed.


These are the tools used in this article.

  • AWS CLI [aws] is a tool that interacts with AWS.
  • kubectl [kubectl] is the client tool that interacts with the Kubernetes cluster. It can be installed using the asdf tool.
  • helm [helm] is a tool that installs Kubernetes applications packaged as helm charts.
  • eksctl [eksctl] is the tool that provisions an EKS cluster as well as the supporting VPC network infrastructure.
  • asdf [asdf] is a tool that installs versions of popular tools like kubectl.

Additionally, these commands were tested in a POSIX shell, such as bash or zsh. GNU Grep was also used to extract the version of Kubernetes used on the server. Linux users will likely have this installed by default, while macOS users can install it with Homebrew; run brew info grep for more information.
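The version-extraction step later in this guide relies on GNU grep's Perl-regex flag (`-oP`); you can sanity-check it locally against a sample version string before relying on it:

```shell
# extract a dotted Kubernetes version with a Perl-compatible regex (GNU grep)
echo 'Server Version: v1.26.4-eks-8ccc7ba' | grep -oP '(\d{1,2}\.){2}\d{1,2}'
# → 1.26.4
```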

AWS Setup

Before getting started with EKS, you will need to set up billing on an AWS account (there’s a free tier), and then configure a profile that provides access to an IAM user identity. See Setting up the AWS CLI for more information on configuring a profile.

After setup, you can test it with the following:

export AWS_PROFILE="<your-profile-goes-here>"
aws sts get-caller-identity

This should show something like the following with values appropriate to your environment:
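The output is JSON similar to this; the account ID, user ID, and ARN below are placeholders standing in for your own values:

```json
{
    "UserId": "AIDAXXXXXXXXXXXXXXXXX",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/my-user"
}
```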

Kubernetes Client Setup

If you use asdf to install kubectl, you can get the latest version with the following:

# install kubectl plugin for asdf
asdf plugin-add kubectl

# fetch latest kubectl
asdf install kubectl latest
asdf global kubectl latest

# test results of latest kubectl
kubectl version --short --client 2> /dev/null

This should show something like:

Client Version: v1.27.1
Kustomize Version: v5.0.1

Also, create a directory to store the Kubernetes configuration that will be referenced by the KUBECONFIG environment variable:

mkdir -p $HOME/.kube

Setup Env Variables

These environment variables will be used throughout this project. If opening up a browser tab, make sure to set the environment variables accordingly.

# variables used to create EKS
export AWS_PROFILE="my-aws-profile" # CHANGEME
export EKS_CLUSTER_NAME="my-lb-cluster" # CHANGEME
export EKS_REGION="us-west-2"
export EKS_VERSION="1.26"

# KUBECONFIG variable (a file path under ~/.kube is a suggested convention)
export KUBECONFIG=$HOME/.kube/$EKS_CLUSTER_NAME.yaml

Setup Helm Repositories

These days helm charts come from a variety of sources. You can add the helm chart repository used in this guide by running the following commands.

# add AWS LB Controller (NLB/ALB) helm charts
helm repo add "eks" "https://aws.github.io/eks-charts"

# download charts
helm repo update

Provision an EKS cluster

The cluster can be brought up with the following commands:

# create configuration
eksctl create cluster \
--name $EKS_CLUSTER_NAME \
--version $EKS_VERSION \
--region $EKS_REGION \
--dry-run \
| sed 's/awsLoadBalancerController: false/awsLoadBalancerController: true/' \
> $EKS_CLUSTER_NAME.cluster.yaml

# provision EKS cluster
eksctl create cluster --config-file $EKS_CLUSTER_NAME.cluster.yaml

Note about Security

The tool eksctl comes with a bundled IAM policy called awsLoadBalancerController that can be added to the cluster. Note that this policy will be attached to all worker nodes, meaning that any container running on EKS can access the ELB family of APIs to create load balancers.

Diagram of Worker Node (EC2) Instance Profile with attached role and policy

If you think this grants too many privileges to containers, you would not be wrong. This violates the principle of least privilege, as privileges should only be granted to the aws-load-balancer-controller pods.

In a future article, I will cover IRSA (IAM Roles for Service Accounts), which is needed to implement least privilege. But for now, for learning purposes and simplicity, we will use the IAM role for the worker node groups.

Testing Results

You can test the results with the commands below.

First, get the kubectl version that matches the Kubernetes server:

# fetch exact version of Kubernetes server (requires GNU Grep)
VER=$(kubectl version --short 2> /dev/null \
| grep Server \
| grep -oP '(\d{1,2}\.){2}\d{1,2}')

# setup kubectl tool
asdf install kubectl $VER
asdf global kubectl $VER

Now you can test the cluster:

kubectl get nodes
kubectl get all --all-namespaces

Afterward, you should see something similar to the following:
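An illustrative sketch of `kubectl get nodes` output; node names, ages, and version suffixes are placeholders and will differ in your environment:

```
NAME                                          STATUS   ROLES    AGE   VERSION
ip-192-168-12-34.us-west-2.compute.internal   Ready    <none>   10m   v1.26.4-eks-0a21954
ip-192-168-56-78.us-west-2.compute.internal   Ready    <none>   10m   v1.26.4-eks-0a21954
```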

Install the AWS loadbalancer add-on

You can install the AWS Load Balancer add-on using the aws-load-balancer-controller helm chart. This allows the cluster to use NLB and ALB external load balancers.

# install AWS Loadbalancer controller add-on
helm install \
aws-load-balancer-controller \
eks/aws-load-balancer-controller \
--namespace kube-system \
--set clusterName=$EKS_CLUSTER_NAME

You can check the results with the following command:

kubectl get all \
--selector "app.kubernetes.io/name=aws-load-balancer-controller" \
--namespace "kube-system"

This should show something like the following:
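An illustrative sketch of the controller's resources (pod hashes and ages are placeholders; the chart deploys two replicas by default):

```
NAME                                               READY   STATUS    RESTARTS   AGE
pod/aws-load-balancer-controller-5c7b8f9d4-abcde   1/1     Running   0          2m
pod/aws-load-balancer-controller-5c7b8f9d4-fghij   1/1     Running   0          2m

NAME                                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/aws-load-balancer-controller   2/2     2            2           2m
```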

Examples with Apache

For demonstrating that these new capabilities are working, we can try with the Apache web server.

Example using NLB

You can use an external load balancer with network load balancer to route traffic from the Internet to one of the Apache web server pods.

When you deploy a Kubernetes service resource of type LoadBalancer, it will provision an external load balancer in AWS cloud. Instead of using the older default classic ELB, you can select network load balancer by configuring the annotations.

Below is a diagram that shows this setup. This setup uses the NLB type of IP, where traffic is routed to the IP address pointed to by the service. Both the pod and node IP addresses are routable within the same VPC due to Amazon VPC CNI driver.

Diagram of Service (LoadBalancer) and NLB

Run the following commands to setup NLB to route to Apache web server pods:

# deploy application
kubectl create namespace httpd-svc
kubectl create deployment httpd \
--image=httpd \
--replicas=3 \
--port=80 \
--namespace httpd-svc

# provision network load balancer
cat <<-EOF > svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: httpd
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: LoadBalancer
  selector:
    app: httpd
EOF

kubectl create --filename=svc.yaml --namespace=httpd-svc

Afterward, you can check the results with the following command:

kubectl get all --namespace=httpd-svc

This should show something like this:

You can test the traffic with the following command:

export SVC_LB=$(kubectl get service httpd \
--namespace "httpd-svc" \
--output jsonpath='{.status.loadBalancer.ingress[0].hostname}')

curl --silent --include $SVC_LB

This should show something like the following:
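With the stock httpd image, the response is Apache's default page; an illustrative response, where the header values are placeholders:

```
HTTP/1.1 200 OK
Server: Apache/2.4.57 (Unix)
Content-Type: text/html

<html><body><h1>It works!</h1></body></html>
```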

Using the same environment variable SVC_LB from above, you can inspect the load balancer that was created with aws elbv2 describe-load-balancers command with a JMESPath query to filter to the desired load balancer:

aws elbv2 describe-load-balancers --region $EKS_REGION \
--query "LoadBalancers[?DNSName==\`$SVC_LB\`]"

This should show something like:

Example using ALB

For an external reverse proxy, which is essentially a load balancer with advanced HTTP routing rules, you will need to create an ingress resource. This will provision the ALB and configure it.

Diagram of Service (ClusterIP), Ingress and ALB

Run the following commands to setup ALB:

# deploy application
kubectl create namespace httpd-ing
kubectl create deployment httpd \
--image=httpd \
--replicas=3 \
--port=80 \
--namespace httpd-ing

# deploy service pointing to pods
kubectl expose deployment httpd \
--port=80 \
--target-port=80 \
--namespace httpd-ing

# provision application load balancer
kubectl create ingress alb-ingress \
--class=alb \
--rule="/=httpd:80" \
--annotation "alb.ingress.kubernetes.io/scheme=internet-facing" \
--annotation "alb.ingress.kubernetes.io/target-type=ip" \
--namespace httpd-ing

ALB is set to IP mode so that traffic is routed directly to the IP address of the pods referenced by the service. This is possible because the Amazon VPC CNI plugin used by EKS will have both the worker nodes (EC2) and pods on the same subnets within the VPC.

You can check results of the deployment with:

kubectl get all,ing --namespace=httpd-ing

This should show something like the following:

You can fetch the address and save it as an environment variable ING_LB:

export ING_LB=$(kubectl get ing alb-ingress \
--namespace "httpd-ing" \
--output jsonpath='{.status.loadBalancer.ingress[0].hostname}')

And test the traffic using the curl command:

curl --silent --include $ING_LB

This should show something like the following:

You can also inspect the load balancer configuration on AWS with the following command:

aws elbv2 describe-load-balancers --region $EKS_REGION \
--query "LoadBalancers[?DNSName==\`$ING_LB\`]"

This should look something like the following:


The following steps will clean up the AWS cloud resources that were used in this guide.

Delete Kubernetes resources

You can delete Kubernetes objects that were created with this guide using the following commands.

IMPORTANT: You want to delete any Kubernetes objects that have provisioned AWS cloud resources; otherwise, these will continue to accrue costs.

# IMPORTANT: delete these to avoid costs 
kubectl delete "ingress/alb-ingress" --namespace "httpd-ing"
kubectl delete "service/httpd" --namespace "httpd-svc"

# deleted when cluster deleted
kubectl delete "deployment/httpd" --namespace "httpd-svc"
kubectl delete "namespace/httpd-svc"

kubectl delete "deployment/httpd" --namespace "httpd-ing"
kubectl delete "svc/httpd" --namespace "httpd-ing"
kubectl delete "namespace/httpd-ing"

Delete AWS cloud resources

You can delete the EKS cluster with the following command.

eksctl delete cluster --config-file $EKS_CLUSTER_NAME.cluster.yaml




This was an introduction to using NLB and ALB with the Elastic Kubernetes Service. Some of the takeaways in this guide are the relationship between ingress, service, and external load balancers on AWS, and of course installing the aws-load-balancer-controller helm chart to enable these features.

Network Policies

When exposing any of these applications outside of Kubernetes, especially to the public Internet, you will want to take security precautions. If you are publishing web services, you will want to use network policies from a solution like Calico to make sure that clients entering from the Internet cannot access other pods on the Kubernetes cluster.

Certificates with TLS

Additionally, all traffic should be encrypted using TLS. This requires owning a domain, so that you can register DNS records as well as issue certificates for the domains you own. For these, you may want to explore the automation that comes with tools like cert-manager or ACM for certificates, and external-dns or other automation with Route53 or another DNS service.

Security with Least Privilege

When using ALB and NLB in a production cluster, you want to apply the principle of least privilege (PoLP) and limit access to only the aws-load-balancer-controller pods that need it. This article, to keep the complexity low, grants access to all of the worker nodes (EC2 instances), which is not suitable for production environments.

Internal vs external ingress controller

Lastly, when using an ingress controller for your cluster, you may decide that an internal load balancer is a better fit than an external load balancer like ALB. There are many solutions for this, such as the free open source ingress-nginx, Traefik, and Ambassador, just to name a few.

ALB with other ingress controllers

One option is to use both, by configuring a hostname of * in the ingress for ALB, so that all traffic is passed to a secondary ingress controller like ingress-nginx. This way, you can use features that come with ALB, such as WAF (web application firewall), TLS termination automated with ACM, and other features.
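As a minimal sketch of that pattern (assuming ingress-nginx is installed and its controller service is named ingress-nginx-controller in the ingress-nginx namespace; those names are hypothetical here), the ALB-managed ingress would forward everything to the nginx service:

```yaml
# hypothetical sketch: ALB ingress passing all traffic to ingress-nginx
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: alb-to-nginx
  namespace: ingress-nginx        # assumes ingress-nginx is installed here
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ingress-nginx-controller   # assumed service name
                port:
                  number: 80
```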

Thank you for reading; best of success in your journeys.



Joaquín Menchaca (智裕)

DevOps/SRE/PlatformEng — k8s, o11y, vault, terraform, ansible