
ALB Ingress with Amazon EKS

Integrating ALB Ingress with Elastic Kubernetes Service

When providing a service on Kubernetes, you can expose it through a Service or Ingress object. For the Service object, you’ll want to use the LoadBalancer type. With the Amazon EKS implementation, a Service of type LoadBalancer will use the classic ELB (Elastic Load Balancer).
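As a quick illustration, a Service of this type might look like the sketch below (the `hello-k8s` name and port numbers are placeholders, not from a real deployment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-k8s          # placeholder service name
spec:
  type: LoadBalancer       # on EKS this provisions a classic ELB
  selector:
    app: hello-k8s         # pods carrying this label receive the traffic
  ports:
    - port: 80             # port exposed by the load balancer
      targetPort: 8080     # port the container listens on
```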

With an Ingress object, you have to install an Ingress controller to provide the facility. This adds a reverse-proxy capability that can route traffic to a service based on virtual hosting or path. In a previous article, I covered the NGiNX Ingress controller, which provides this Ingress capability to Amazon EKS.

Besides the classic ELB, Amazon’s ALB (Application Load Balancer) has reverse-proxy capabilities built into the load balancer. So why can’t we use this for Ingress instead?

Well, you can indeed do this by installing the AWS ALB Ingress Controller, which will provision an ALB load balancer while applying the Ingress rules specified in your Ingress manifest.

This article covers how to do the following with the eksctl, kubectl, and helm tools:

  1. Create an EKS cluster with role policies that allow Kubernetes to provision an ALB as well as create DNS records with Route53.
  2. Add the AWS ALB Ingress Controller with support for TLS termination.
  3. Add the ExternalDNS service to automate creating DNS records with Route53.

Previous Related Articles

Creating Amazon EKS with eksctl

Installing and Using the Client Tools

NGiNX Ingress Controller



You should have the following tools installed and configured:

  • AWS CLI, needed to interact with AWS cloud resources.
  • EKSCtl (eks-cuttle), a tool to easily create an EKS cluster.
  • KubeCtl (koob-cuttle), the Kubernetes client tool, to interact with EKS.
  • Helm, to install applications on a Kubernetes cluster.

Domain Name and TLS Certificate

If you wish to use DNS records, you need a registered domain name on Amazon Route53. We’ll use a fictional domain name for the purposes of this tutorial; replace it with your registered domain name.

For secure traffic with TLS, you’ll need to have registered a certificate matching the domain name with AWS Certificate Manager.

Part 1: Creating EKS Cluster

You can use eksctl to provision your cluster, or elect to use another method.

Creating Cluster with eksctl

We can easily bring up a cluster with all the requirements using the eksctl tool. This process will take about 20 minutes.

First, let’s describe our cluster by creating alb_demo_cluster.yaml with these contents:
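Here is a minimal sketch of such a cluster config. The region, node group name, instance type, and node count are assumptions for this tutorial; the `withAddonPolicies` entries grant the worker-node role the ALB and Route53 permissions described above:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: alb-demo-cluster
  region: us-east-1            # assumed region; use your own
nodeGroups:
  - name: ng-1                 # assumed node group name
    instanceType: t3.medium    # assumed instance type
    desiredCapacity: 2
    iam:
      withAddonPolicies:
        albIngress: true       # allow provisioning ALBs
        externalDNS: true      # allow creating Route53 records
```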

With that file in place, we can run this command:

eksctl create cluster \
  --config-file alb_demo_cluster.yaml \
  --kubeconfig alb_demo_kubeconfig.yaml

Now we can test the cluster with the kubectl tool:

export KUBECONFIG=${PWD}/alb_demo_kubeconfig.yaml
kubectl get all --all-namespaces

Creating Cluster without eksctl

You can use an alternative method to create your cluster, but you need to make sure that your cluster has the following:

  1. Authorization to administer the EKS cluster, which may mean modifying the aws-auth configmap in the kube-system namespace. See these docs.
  2. Role policies on worker nodes that allow access to create, delete, and modify ALBs, plus a role policy that allows creation of records with Route53.
  3. Private subnets with tags of kubernetes.io/cluster/<cluster-name> and kubernetes.io/role/internal-elb, with respective values of shared and 1.
  4. Public subnets with tags of kubernetes.io/cluster/<cluster-name> and kubernetes.io/role/elb, with respective values of shared and 1.

Part 2: Installing Ingress Controller and ExternalDNS

There are many guides to installing the AWS ALB Ingress Controller that vary in complexity. The easiest path is using the incubator/aws-alb-ingress-controller Helm chart.

Setup Helm 2

If this is the first time you are using Helm 2, you need to install Tiller on your cluster. You can do this with these steps:

kubectl create serviceaccount tiller \
  --namespace kube-system
kubectl create clusterrolebinding tiller-admin-binding \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:tiller
helm init --service-account tiller
helm repo add incubator https://charts.helm.sh/incubator
helm repo update


Setup Helm 3

With Helm 3, you do not need Tiller and can install charts directly. All we need to do is add the Helm chart repositories.

helm repo add incubator https://charts.helm.sh/incubator
helm repo add stable https://charts.helm.sh/stable
helm repo update

Install AWS ALB Ingress Controller

Once Helm is set up, you can run the following to install the AWS ALB Ingress Controller:

CLUSTER_NAME=alb-demo-cluster
helm install incubator/aws-alb-ingress-controller \
  --set clusterName=$CLUSTER_NAME \
  --set autoDiscoverAwsRegion=true \
  --set autoDiscoverAwsVpcID=true

Previously, we had to explicitly set the AWS region and VPC ID during installation, but now we can use auto-discovery to avoid wiring in static values.

Install ExternalDNS

We’ll need to install ExternalDNS to support registration of DNS names through Route53.

First, let’s create a small values file called externaldns_values.yaml:

We’ll use a fictional domain here. Replace this with a domain you have registered in Route53.
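A minimal sketch of such a values file for the stable/external-dns chart is shown below; `example.com` and the `txtOwnerId` value are placeholders for this tutorial:

```yaml
provider: aws               # use Route53 as the DNS backend
aws:
  zoneType: public          # only manage public hosted zones
domainFilters:
  - example.com             # placeholder; your Route53 domain
txtOwnerId: alb-demo-cluster  # identifies records owned by this cluster
rbac:
  create: true
```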

Now we can install ExternalDNS into our cluster:

helm2 install stable/external-dns \
  --name demo-externaldns \
  --values externaldns_values.yaml
or with Helm3

helm3 install stable/external-dns \
  --generate-name \
  --values externaldns_values.yaml
Part 3: Deploying Application using ALB Ingress

For this demo, we’ll use a simple web application.

Deployment Manifest

Use this deployment manifest, hello-k8s-deploy.yaml, that describes our application deployment:
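A sketch of such a manifest follows. The `hello-k8s` names and labels are assumptions carried through the rest of this tutorial, and the container image is an assumed public demo image, which you can swap for any web application:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-k8s
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-k8s
  template:
    metadata:
      labels:
        app: hello-k8s
    spec:
      containers:
        - name: hello-k8s
          image: paulbouwer/hello-kubernetes:1.8  # assumed demo image
          ports:
            - containerPort: 8080                 # port the demo app listens on
```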

Then install it:

kubectl apply --filename=hello-k8s-deploy.yaml

Service Manifest

Now we need to create a service that will map to our pods. Create a service manifest called hello-k8s-svc-clusterip.yaml with the following contents:
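A minimal sketch, assuming the `hello-k8s` labels and container port from the deployment above; a ClusterIP service is sufficient because the ALB will route to pod IPs directly:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-k8s
spec:
  type: ClusterIP
  selector:
    app: hello-k8s     # matches the deployment's pod labels
  ports:
    - port: 80         # port the service exposes
      targetPort: 8080 # container port from the deployment
```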

Now install the service:

kubectl apply --filename=hello-k8s-svc-clusterip.yaml

Ingress Manifest

Now for the ingress description, called hello-k8s-alb-ing.yaml, with the following contents.
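Here is a sketch of such an ingress. The hostname, certificate ARN, and `hello-k8s` names are placeholders; the first three annotation lines are enough for plain HTTP, while the last two add TLS on port 443:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-k8s
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    # placeholder ARN; use your certificate from AWS Certificate Manager
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:000000000000:certificate/placeholder
spec:
  rules:
    - host: hello.example.com   # placeholder; your registered domain
      http:
        paths:
          - path: /*
            backend:
              serviceName: hello-k8s
              servicePort: 80
```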

As you might have gathered, when defining an ingress we also indirectly create a load balancer through annotations. This means that currently, every ingress gets its own load balancer.

As part of this definition, we will use a registered certificate stored in AWS Certificate Manager that matches our fictional domain.

If you do not wish to use certificates and would rather serve insecure traffic on port 80, you can include only the first three lines of the annotations.

Note: some tutorials show listing subnets within the annotations as well. This is not required: if your subnets have been tagged appropriately, the AWS ALB Ingress Controller can use auto-discovery to find the appropriate subnets.

Deploy the ingress with:

kubectl apply --filename=hello-k8s-alb-ing.yaml

Testing the Results

Once completed, you can navigate to the site at the hostname configured in the ingress (replacing the fictional domain with one you are using).

Note: as we are creating a new DNS record along with a new ALB, this may not work for about 5 minutes. Give it some time.

Part 4: Introducing Merge Ingress

You might have come to the conclusion that creating a load balancer for each ingress is both cumbersome and expensive. Should you have 60 services across three environments, that would be 180 load balancers. It also takes considerable time to destroy and recreate a load balancer (as well as the DNS record) for any ingress change.

One solution to this is to use a merge ingress, which creates a single ALB ingress rule for all of our ingress definitions.

Installing Merge Ingress

First we need to install the merge ingress into our Kubernetes cluster:

git clone
helm install \
--namespace kube-system \
--name ingress-merge

Setup Merge Ingress

Once installed, to get started we take the annotations that we would normally place on an ALB ingress and put them into a configmap. Create a configmap manifest with these contents:
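A sketch of such a configmap follows, assuming the merge-ingress controller's convention of reading the shared ALB annotations from an `annotations` data key; the `merged-ingress` name is hypothetical:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: merged-ingress   # hypothetical name, referenced by merged ingresses
data:
  annotations: |         # ALB annotations shared by all merged ingresses
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
```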

Deploy Application with Merge Ingress using ALB Ingress

If you followed the steps previously, you’ll want to delete your existing ingress:

kubectl delete --filename=hello-k8s-alb-ing.yaml

Now create a new ingress file called hello-k8s-merge-ing.yaml with the following contents:
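A sketch of such an ingress, assuming the merge-ingress controller's conventions: the `merge` ingress class hands the object to the merge controller, and the config annotation points at the hypothetical configmap created earlier. The hostname and service names remain placeholders:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-k8s-merge
  annotations:
    kubernetes.io/ingress.class: merge
    merge.ingress.kubernetes.io/config: merged-ingress  # hypothetical configmap name
spec:
  rules:
    - host: hello.example.com   # placeholder; your registered domain
      http:
        paths:
          - path: /*
            backend:
              serviceName: hello-k8s
              servicePort: 80
```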

Now install it:

kubectl apply --filename=hello-k8s-merge-ing.yaml

Testing the Results

Once completed, you can see the site at the hostname configured in the ingress (replacing the fictional domain with one you are using).

This may take about 5 minutes, as we previously destroyed an ALB along with its DNS record and need to create them again. But for any other application you deploy with a similar ingress, you’ll only have to deal with DNS updates and won’t have to create new ALB load balancers to support new ingresses.



There you have it: the basics of using the AWS ALB Ingress Controller to allow your Kubernetes services to use both the load balancer and ingress features of the Application Load Balancer.

As you can see, there are some considerations (pros and cons) you may want to weigh before rolling this out to production or other environments:

  • There are requirements on Amazon EKS to prepare a cluster for the AWS ALB Ingress Controller to support provisioning an ALB with auto-discovery.
  • The network layer, configured through annotations, is mixed in with the application layer. The network layer can be hidden by using merge ingress.
  • Every ingress definition creates a load balancer. This can be mitigated through merge ingress, and more recently by an alpha release of the controller.
  • You can attach AWS WAF (Web Application Firewall) rules to the ALB via annotations, which can mitigate DDoS and other attacks.

I hope this is useful, best of success with your Kubernetes journeys.

Written by

Linux NinjaPants Automation Engineering Mutant — exploring DevOps, Kubernetes, CNI, IAC
