Nginx Ingress with Amazon EKS

Using nginx-ingress controller with Elastic Kubernetes Service

Joaquín Menchaca (智裕)
7 min read · Nov 28, 2019


UPDATE: Updated 2020-08-16 for helm3 and corrections

This is a small tutorial on how to install and set up Ingress support with Kubernetes. In particular, we’ll use the popular nginx-ingress ingress controller along with external-dns to register DNS records with the Amazon Route53 service. On Amazon EKS, the ingress will use the default Elastic Load Balancer (now called classic ELB or ELBv1).

Background Overview

Generally, when deploying an application on Kubernetes, you create a Service to expose a set of Pods as a network service. With a cloud provider, it is common to use the LoadBalancer service type to connect your Service to the Internet, and on Amazon EKS, a Service of type LoadBalancer will use an ELB.
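For reference, a minimal Service of this type might look like the sketch below (the name, ports, and selector are illustrative assumptions):

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer    # on EKS, this provisions an ELB
  ports:
    - port: 80          # port exposed by the load balancer
      targetPort: 8080  # port the pods listen on
  selector:
    app: my-app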

The problem with this LoadBalancer solution is that every service you deploy will create a new ELB instance, and these can get numerous. An alternative is to use an Ingress to expose your Service, which allows you to “consolidate your routing rules to a single resource as it can expose multiple services under the same IP”.

An Ingress controller does not come by default with Amazon EKS, so you’ll have to install one. In this article, we will use NGINX Ingress Controller.

Prerequisites

Tools

You should have the following tools configured:

  • AWS CLI, needed to interact with AWS cloud resources
  • EKSCtl (eks-cuttle or exseey-cuttle), a tool to easily create an EKS cluster
  • KubeCtl (koob-cuttle), the Kubernetes client tool, to interact with EKS
  • Helm, to install applications on a Kubernetes cluster

Update: These instructions were originally tested with helm2 in 2019 and have been updated to support helm3, with the new locations of helm chart repositories that have migrated out of the stable repository.

DNS and TLS Certificate

If you wish to use DNS records, you need a registered public domain on Amazon Route53. We’ll use mycompany.com as a fictional domain name for the purposes of this tutorial. Replace this with your registered domain name.

Once created, you can verify the hosted zone:

aws route53 list-hosted-zones

For secure traffic with TLS, you’ll need to have registered a public wildcard certificate matching the domain name with AWS Certificate Manager, e.g. *.mycompany.com.

Once created, you can verify the certificate using:

aws acm list-certificates --region us-west-2

Note that unlike Route53, which is global, certificates are configured per region, so unless you search in the region where you created the certificate, it will not be listed.

Previous Article

I wrote a previous article that delves into installing eksctl and kubectl.

Part I: The Cluster

If you have an existing Amazon EKS cluster with nginx-ingress and external-dns installed, you can use that and skip this part.

Creating the EKS Cluster

You can BYOC (bring your own cluster) and provision an Amazon EKS cluster using the tool of your choice, or use the reference cluster created with eksctl below.

For BYOC, you will need to allow the worker nodes or the external-dns pod to access Amazon Route53, using one of the following (see the policy sketch after this list):

  • Node IAM Role: add a policy to the worker nodes
  • IRSA: set up a trust relationship to a Kubernetes Service Account
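Either way, the Route53 permissions needed are the same. As a sketch, you could create a policy like the following (the policy name my-external-dns-policy is an arbitrary example); these are the permissions documented by the ExternalDNS project:

aws iam create-policy \
  --policy-name my-external-dns-policy \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": ["route53:ChangeResourceRecordSets"],
        "Resource": ["arn:aws:route53:::hostedzone/*"]
      },
      {
        "Effect": "Allow",
        "Action": ["route53:ListHostedZones", "route53:ListResourceRecordSets"],
        "Resource": ["*"]
      }
    ]
  }'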

For the eksctl reference config, create a configuration file called cluster_with_dns.yaml with the contents below:

cluster_with_dns.yaml
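As a sketch, such a config might look like the following (the cluster name, region, and node sizing here are assumptions; the externalDNS add-on policy grants the worker nodes Route53 access):

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: cluster-with-dns    # example cluster name
  region: us-west-2

nodeGroups:
  - name: ng-1
    instanceType: m5.large  # example instance size
    desiredCapacity: 2
    iam:
      withAddonPolicies:
        externalDNS: true   # attach Route53 policy to worker nodes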

Once we have this, we can run this command to create our cluster:

eksctl create cluster \
--config-file cluster_with_dns.yaml

After about 20 minutes, the cluster will be available. Toward the end of this process, eksctl will update your KUBECONFIG, which defaults to $HOME/.kube/config.

Adding Route53 Support

The add-on called ExternalDNS allows a Kubernetes Service or Ingress resource to create DNS records on a provider outside of Kubernetes. In our case, we have ExternalDNS create records on Route53.

In order to get started, we need to create a Helm chart values file. Create a file named values.external-dns.yaml with the following contents:

values.external-dns.yaml
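As a sketch, minimal values for the bitnami/external-dns chart could look like this (the txtOwnerId value is an assumption; it just needs to identify this cluster’s records):

provider: aws
policy: sync            # also remove records when resources are deleted
txtOwnerId: my-extdns   # identifies this cluster's records in the zone
domainFilters:
  - mycompany.com       # replace with your registered domain
aws:
  zoneType: public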

Replace mycompany.com with the domain name you wish to use. Once ready, we can go ahead and install this using helm:

# Recent Update: chart has migrated from stable to bitnami
helm repo add "bitnami" "https://charts.bitnami.com/bitnami"

# Install Chart
helm install my-extdns \
  --values values.external-dns.yaml \
  bitnami/external-dns

You can verify the service is available by running kubectl get svc | grep external-dns.

Adding Ingress Support

Now we can install Ingress support using the Ingress-Nginx controller. Under the hood, the controller runs an OpenResty (NGINX + LuaJIT) reverse proxy and is exposed through a LoadBalancer service, which on EKS is an ELBv1. This allows us to use ACM to terminate TLS at the load balancer.

Before we begin, you will need to get the ACM ARN of the certificate you created for your domain, which looks something like:

arn:aws:acm:us-west-2:XXXXXXXXXXXX:certificate/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX

Once you have this value, you can use it in the Helm chart configuration values. Create a file values.nginx-ingress.yaml with content something like this:

values.nginx-ingress.yaml
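As a sketch, assuming TLS terminates on the ELB via the standard AWS service annotations, the values might look like this (the ARN is the placeholder from above):

controller:
  service:
    targetPorts:
      http: http
      https: http   # ELB terminates TLS and forwards plain HTTP
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-west-2:XXXXXXXXXXXX:certificate/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"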

Replace the certificate manager ARN with the appropriate value to match your domain. Once ready, we can install our ingress:

# Recent Update: chart has migrated from stable to ingress-nginx
helm repo add ingress-nginx \
  https://kubernetes.github.io/ingress-nginx

helm install my-nginx \
  --values values.nginx-ingress.yaml \
  ingress-nginx/ingress-nginx

You can see it running with kubectl get svc | grep ingress-nginx-controller, which will show an EXTERNAL-IP column with the ELB’s FQDN, e.g. XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX-XXXXXXXXX.us-west-2.elb.amazonaws.com. Once this is available, we can begin installing applications that use an ingress.

Part II: The Application

For the demonstration application, we’ll create three manifests:

  • Deployment
  • Service with service type ClusterIP
  • Ingress to point to the Service

Deployment Manifest

Create a file called hello-k8s-deploy.yaml with the following contents.

hello-k8s-deploy.yaml
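As a sketch, using the public paulbouwer/hello-kubernetes demo image (the image tag and replica count here are assumptions):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
        - name: hello-kubernetes
          image: paulbouwer/hello-kubernetes:1.8   # demo web server listening on 8080
          ports:
            - containerPort: 8080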

Now we can run this to install our pods:

kubectl apply --filename hello-k8s-deploy.yaml

Service Manifest

Now we need to create a service manifest that maps our pods to an internal ClusterIP. Create the following file named hello-k8s-svc-clusterip.yaml with the contents:

hello-k8s-svc-clusterip.yaml
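As a sketch matching the deployment above (the port numbers assume the demo image listens on 8080):

apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes
spec:
  type: ClusterIP
  ports:
    - port: 80          # port exposed inside the cluster
      targetPort: 8080  # port the pods listen on
  selector:
    app: hello-kubernetes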

We can apply this service by running:

kubectl apply --filename hello-k8s-svc-clusterip.yaml

Ingress Manifest

Now we can create an ingress that will route traffic for the hello-kubernetes hostname, such as hello-kubernetes.mycompany.com.

Create an ingress manifest file named hello-k8s-ing.yaml with the following contents:

hello-k8s-ing.yaml
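As a sketch, using the networking.k8s.io/v1beta1 Ingress API that was current as of this update (the hostname assumes the fictional domain):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hello-kubernetes
  annotations:
    kubernetes.io/ingress.class: nginx   # route through the nginx ingress controller
spec:
  rules:
    - host: hello-kubernetes.mycompany.com
      http:
        paths:
          - path: /
            backend:
              serviceName: hello-kubernetes
              servicePort: 80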

Make sure to replace mycompany.com with an appropriate domain. We can apply this manifest with:

kubectl apply --filename hello-k8s-ing.yaml

Monitoring Progress

Here are some tips you can use to monitor the process along the way.

LoadBalancer Creation

You can get the status of the ingress and pay attention to the ADDRESS field:

kubectl get ingress

This won’t come up immediately, as it takes roughly 5 minutes for the ELB to be provisioned. Ultimately, you should get an ELB address (fictional name used):

1234567890abcde012345567890abcde-1234567890.us-west-2.elb.amazonaws.com

DNS Record Creation

You can monitor the logs from the external-dns pod; you’ll see events like this (fictional domain name and zone ID used):

time="2020-08-16T22:29:29Z" level=info msg="Desired change: UPSERT hello.example.com A [Id: /hostedzone/Z01234567890ABCDEFGH]"
time="2020-08-16T22:29:29Z" level=info msg="Desired change: UPSERT hello.example.com TXT [Id: /hostedzone/Z01234567890ABCDEFGH]"
time="2020-08-16T22:29:29Z" level=info msg="2 record(s) in zone example.com. [Id: /hostedzone/Z01234567890ABCDEFGH] were successfully updated"

You can verify the creation on the hosted zone:

MY_DOMAIN=mycompany.com
MY_DNS_NAME=hello.mycompany.com

ZONE_ID=$(
  aws route53 list-hosted-zones \
    --query "HostedZones[].[Id,Name]" \
    --output text | awk -F$'\t' "/$MY_DOMAIN./{ print \$1 }"
)

FILTER="ResourceRecordSets[?Name == '$MY_DNS_NAME.']|[?Type == 'A'].[Name,AliasTarget.DNSName]"

aws route53 list-resource-record-sets \
  --hosted-zone-id $ZONE_ID \
  --query "$FILTER" \
  --output text

This should show something like the following, matching the address of the load balancer:

hello.mycompany.com. 1234567890abcde012345567890abcde-1234567890.us-west-2.elb.amazonaws.com.

Testing the Results

When you see the elastic load balancer and the corresponding DNS record added, your server is available online. You can test it using the DNS name you configured, e.g. https://hello.mycompany.com.
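For example, with curl (using the fictional domain from this tutorial):

curl -i https://hello.mycompany.com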

Clean-up

You can delete the previous material you deployed with:

cat hello-k8s-*.yaml | kubectl delete --filename -

For the cluster, if you provisioned EKS using the eksctl config provided, you can delete it using this:

eksctl delete cluster --config-file cluster_with_dns.yaml

Resources

Kubernetes Addons

Source Code

I currently have this living in a branch, as this is work in progress.

Conclusion

With this, you have the basics of using an Ingress that uses a registered domain name to route to the desired service, a process called virtual hosting. ExternalDNS does its magic in the background: it looks at the Ingress host field and creates a corresponding DNS record that points to the ELB address.

This article only covers the basics, but with Ingress you can do fancier things, such as using HTTP paths to map to services, for example having / route to the user interface of an application and /api/v1 map to an API server that the application uses, as sketched below.
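As a sketch of such path-based routing (the hostname and service names here are illustrative assumptions):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: app.mycompany.com
      http:
        paths:
          - path: /          # user interface
            backend:
              serviceName: my-app-ui
              servicePort: 80
          - path: /api/v1    # API server
            backend:
              serviceName: my-app-api
              servicePort: 80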

Thanks for reading, I hope this was useful in your Kubernetes journey.
