ALB Ingress with Amazon EKS

Integrating ALB Ingress with Elastic Kubernetes Service

Joaquín Menchaca (智裕)
7 min read · Dec 7, 2019


When providing a service on Kubernetes, you can expose it through a Service or Ingress object. For the Service object, you’ll want to use the LoadBalancer type. With the Amazon EKS implementation, a Service of type LoadBalancer will provision a classic ELB (Elastic Load Balancer).

With an Ingress object, you have to install an Ingress controller to provide that facility. This adds a reverse-proxy capability that can route traffic to a service based on virtual hosting or path. In a previous article, I covered nginx-ingress, which provides this Ingress capability to Amazon EKS.

Besides the classic ELB, Amazon’s ALB (Application Load Balancer) has reverse-proxy capabilities built into the load balancer. So why can’t we use this for Ingress instead?

Well, you actually can, by installing the AWS ALB Ingress Controller, which will provision an ALB load balancer and apply the Ingress rules specified in your Ingress manifest.

This article covers how to do the following with the eksctl, helm, and kubectl tools:

  1. Create an EKS cluster with role policies that allow Kubernetes to provision an ALB as well as create DNS records with Route53.
  2. Add the AWS ALB Ingress Controller with support for TLS termination.
  3. Add the ExternalDNS service to automate creating DNS records with Route53.

Previous Related Articles

Creating Amazon EKS with eksctl

Installing and Using eksctl and kubectl tools.

NGiNX Ingress Controller

Prerequisites

Tools

You should have the following tools installed and configured:

  • AWS CLI, needed to interact with AWS cloud resources.
  • EKSCtl (eks-cuttle or exseey-cuttle) tool to easily create an EKS cluster.
  • KubeCtl (koob-cuttle), the Kubernetes client tool, to interact with EKS.
  • Helm to install applications on a Kubernetes cluster.

Domain Name and TLS Certificate

If you wish to use DNS records, you need to have a registered domain name on Amazon Route53. We’ll use mycompany.com as a fictional domain name for purposes of this tutorial. Replace this with your registered domain name.

For secure traffic with TLS, you’ll need to have registered a certificate matching the domain name with AWS Certificate Manager.
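If you do not already have one, you can request a certificate from ACM and validate it through DNS. As a hypothetical example for the fictional domain (a wildcard certificate; substitute your own domain):

aws acm request-certificate \
--domain-name '*.mycompany.com' \
--validation-method DNS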

Part 1: Creating EKS Cluster

You can use eksctl to provision your cluster, or elect to use another method.

Creating Cluster with eksctl

We can bring up a cluster with all the requirements easily using eksctl tool. This process will take about 20 minutes.

First let’s describe our cluster by creating alb_demo_cluster.yaml with these contents:

alb_demo_cluster.yaml
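As a rough sketch, an eksctl config with the needed add-on policies might look like the following. The cluster name, region, and node sizing below are assumptions; adjust them to your environment.

# alb_demo_cluster.yaml (sketch)
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: alb-demo-cluster      # matches the CLUSTER_NAME used later in this article
  region: us-west-2           # assumption: use your own region

nodeGroups:
  - name: alb-demo-workers
    instanceType: t3.medium
    desiredCapacity: 2
    iam:
      withAddonPolicies:
        albIngress: true      # lets worker nodes create and manage ALBs
        externalDNS: true     # lets ExternalDNS manage Route53 records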

With that file in place, we can run this command:

eksctl create cluster \
--config-file alb_demo_cluster.yaml \
--kubeconfig alb_demo_kubeconfig.yaml

Now we can test the cluster using the kubectl tool:

export KUBECONFIG=${PWD}/alb_demo_kubeconfig.yaml
kubectl get all --all-namespaces

Creating Cluster without eksctl

You can use an alternative method to create your cluster, but you need to make sure that your cluster has the following:

  1. Authorization to administer the EKS cluster, which may mean modifying the aws-auth configmap in the kube-system namespace. See these docs.
  2. A role policy on worker nodes that allows creating, deleting, and modifying ALBs, and a role policy that allows creating records with Route53.
  3. Private subnets with tags of kubernetes.io/role/internal-elb and kubernetes.io/cluster/<cluster-name>, with respective values of 1 and shared.
  4. Public subnets with tags of kubernetes.io/role/elb and kubernetes.io/cluster/<cluster-name>, with respective values of 1 and shared (see the tagging example after this list).
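If you created the subnets by hand, a hypothetical tagging command for a public subnet might look like this (the subnet ID and cluster name below are placeholders):

aws ec2 create-tags \
--resources subnet-0123456789abcdef0 \
--tags Key=kubernetes.io/role/elb,Value=1 \
       Key=kubernetes.io/cluster/alb-demo-cluster,Value=shared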

Part 2: Installing Ingress Controller and ExternalDNS

There are many guides to install AWS ALB Ingress Controller that vary in complexity. The easiest path is by using the Helm chart incubator/aws-alb-ingress-controller.

Setup Helm 2

If this is the first time you are using Helm 2, you need to install Tiller on your cluster. You can do this with these steps:

kubectl create serviceaccount tiller \
--namespace kube-system
kubectl create clusterrolebinding tiller-admin-binding \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:tiller
helm init \
--service-account=tiller
helm repo add incubator \
http://storage.googleapis.com/kubernetes-charts-incubator
helm repo update


Setup Helm 3

With Helm 3, you do not need Tiller and can install this directly. All we need to do is add the Helm chart repositories.

helm repo add incubator \
http://storage.googleapis.com/kubernetes-charts-incubator
helm repo add stable \
https://kubernetes-charts.storage.googleapis.com/
helm repo update

Install AWS ALB Ingress Controller

Once Helm is set up, you can run the following to install the AWS ALB Ingress Controller (with Helm 3, also pass a release name or the --generate-name flag):

CLUSTER_NAME=alb-demo-cluster
helm install incubator/aws-alb-ingress-controller \
--set clusterName=$CLUSTER_NAME \
--set autoDiscoverAwsRegion=true \
--set autoDiscoverAwsVpcID=true

Previously, we had to explicitly pass the vpc-id and region during installation, but now we can use auto-discovery to avoid wiring in static values.
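For comparison, a hypothetical install without auto-discovery would wire in the static values; the region and VPC ID below are placeholders, and the value names are how I recall the incubator chart exposing them:

helm install incubator/aws-alb-ingress-controller \
--set clusterName=$CLUSTER_NAME \
--set awsRegion=us-west-2 \
--set awsVpcID=vpc-0123456789abcdef0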

Install ExternalDNS

We’ll need to install External DNS to support registration of DNS names through Route53.

First let’s create a small values file called values.external-dns.yaml:

values.external-dns.yaml

We’ll use a fictional domain of mycompany.com. Replace this with a domain you have registered in Route53.
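A minimal sketch of this values file, assuming the usual value names of the stable/external-dns chart, might look like:

# values.external-dns.yaml (sketch)
provider: aws
domainFilters:
  - mycompany.com     # restrict ExternalDNS to this Route53 zone
policy: sync          # create and remove records as ingresses come and go
aws:
  zoneType: public
rbac:
  create: true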

Now we can install ExternalDNS into our cluster:

helm2 install \
--name demo-externaldns \
--values values.external-dns.yaml \
stable/external-dns

or with Helm 3:

helm3 install \
--generate-name \
--values values.external-dns.yaml \
stable/external-dns

Part 3: Deploying Application using ALB Ingress

For this demo, we’ll use the hello-kubernetes application.

Deployment Manifest

Use this deployment manifest hello-k8s-deploy.yaml that describes our application deployment:

hello-k8s-deploy.yaml
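A minimal sketch of such a deployment, using the commonly published paulbouwer/hello-kubernetes image (the replica count and image tag are assumptions), might look like:

# hello-k8s-deploy.yaml (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
        - name: hello-kubernetes
          image: paulbouwer/hello-kubernetes:1.5
          ports:
            - containerPort: 8080   # the app listens on 8080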

Then install it:

kubectl apply --filename=hello-k8s-deploy.yaml

Service Manifest

Now we need to create a service that will map to our pods. Create a service manifest called hello-k8s-svc-clusterip.yaml with the following contents:

hello-k8s-svc-clusterip.yaml
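A matching ClusterIP service sketch, exposing port 80 and forwarding to the container’s 8080 (this pairs with target-type: ip on the ALB ingress below), might look like:

# hello-k8s-svc-clusterip.yaml (sketch)
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes
spec:
  type: ClusterIP
  selector:
    app: hello-kubernetes
  ports:
    - port: 80
      targetPort: 8080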

Now install the service:

kubectl apply --filename=hello-k8s-svc-clusterip.yaml

Ingress Manifest

Now create the ingress manifest called hello-k8s-alb-ing.yaml with the following contents.

hello-k8s-alb-ing.yaml
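A sketch of what an ALB ingress of this shape might look like follows; the certificate ARN is a placeholder, and target-type: ip is an assumption that matches the ClusterIP service above:

# hello-k8s-alb-ing.yaml (sketch)
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-kubernetes
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:<region>:<account-id>:certificate/<certificate-id>
spec:
  rules:
    - host: hello-alb.mycompany.com
      http:
        paths:
          - backend:
              serviceName: hello-kubernetes
              servicePort: 80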

As you might have gathered, when defining an ingress we also create a load balancer indirectly through annotations. This means that currently, for every ingress we’ll have a load balancer.

As part of this definition, we will use a registered certificate stored in AWS Certificate Manager that matches our domain, which in our fictional example is mycompany.com.

If you do not wish to use certificates and instead serve insecure traffic over HTTP port 80, you can include only the first three lines of the annotations.

Note: Some tutorials may show listing subnets within the annotations as well. This is not required if your subnets have been tagged appropriately; the AWS ALB Ingress Controller can use auto-discovery to find the appropriate subnets.

Deploy the ingress with:

kubectl apply --filename=hello-k8s-alb-ing.yaml

Testing the Results

Once completed, you can navigate to the site at hello-alb.mycompany.com (replacing mycompany.com with the domain you are using).

Note: as we are creating a new DNS record along with a new ALB, this may take about five minutes to start working. Give it some time.
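As a quick check once DNS has propagated (assuming dig and curl are installed locally):

dig +short hello-alb.mycompany.com
curl -I https://hello-alb.mycompany.com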

Part 4: Introducing Merge Ingress

You might have come to the conclusion that creating a load balancer for each ingress is both cumbersome and expensive. Should you have 60 services across test, stage, and prod environments, that would be 180 load balancers. It also takes considerable time to destroy and recreate a load balancer (as well as its DNS record) for any ingress change.

One solution to this is to use a merge ingress, which creates a single ALB ingress rule for all of our ingress definitions.

Installing Merge Ingress

First we need to install the merge ingress into our Kubernetes cluster:

git clone https://github.com/jakubkulhan/ingress-merge
cd ingress-merge
helm install \
--namespace kube-system \
--name ingress-merge \
./helm

Setup Merge Ingress

Once installed, to get started, we take the annotations we would otherwise use when defining an ALB ingress and place them into a ConfigMap. Create a configmap manifest called merge-ing-configmap.yaml with these contents:

merge-ing-configmap.yaml
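A sketch of such a configmap follows, assuming (per the ingress-merge README) that the ConfigMap’s data carries the annotations for the merged result under an annotations key; the ConfigMap name is hypothetical and the certificate ARN is a placeholder:

# merge-ing-configmap.yaml (sketch)
apiVersion: v1
kind: ConfigMap
metadata:
  name: alb-merged-ingress     # hypothetical name, referenced by the merge ingresses
data:
  annotations: |
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:<region>:<account-id>:certificate/<certificate-id>

Apply it with kubectl apply --filename=merge-ing-configmap.yaml before creating the merge ingresses.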

Deploy Application with Merge Ingress using ALB Ingress

If you followed the steps previously, you’ll want to delete your existing ingress:

kubectl delete --filename=hello-k8s-alb-ing.yaml

Now create a new ingress file called hello-k8s-merge-ing.yaml with the following contents:

hello-k8s-merge-ing.yaml
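A sketch of the merge ingress follows, assuming the ingress-merge convention of an ingress class of merge plus an annotation pointing at the ConfigMap sketched above:

# hello-k8s-merge-ing.yaml (sketch)
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-kubernetes
  annotations:
    kubernetes.io/ingress.class: merge
    merge.ingress.kubernetes.io/config: alb-merged-ingress
spec:
  rules:
    - host: hello-alb.mycompany.com
      http:
        paths:
          - backend:
              serviceName: hello-kubernetes
              servicePort: 80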

Now install it:

kubectl apply --filename=hello-k8s-merge-ing.yaml

Testing the Results

Once completed, you can see the site at hello-alb.mycompany.com (replacing mycompany.com with a domain you are using).

This may take about 5 minutes, as we previously destroyed the ALB along with its DNS record and need to create them again. But for any other application you deploy with a similar ingress, you’ll only have to deal with DNS updates, and won’t have to create new ALB load balancers to support new ingresses.


Conclusion

There you have it: the basics of using the AWS ALB Ingress Controller to give your Kubernetes services both the load balancer and ingress features of the Application Load Balancer.

As you can see, there are some considerations (pros and cons) to weigh before rolling this out for prod, test, or other environments:

  • The Amazon EKS cluster needs to be prepared (IAM policies, subnet tags) for the AWS ALB Ingress Controller to provision ALBs with auto-discovery.
  • The network layer, expressed through annotations, is mixed in with the application layer. The network layer can be hidden by using merge ingress.
  • Every ingress definition creates a load balancer. This can be mitigated with merge ingress and, more recently, an alpha release of the controller.
  • You can attach AWS WAF (Web Application Firewall) rules to the ALB, which can mitigate DDoS and other attacks.

I hope this is useful, best of success with your Kubernetes journeys.
