ExternalDNS with EKS and Route53
Using ExternalDNS with the Node IAM Role to access Route 53
ExternalDNS automates updating DNS records so that you can deploy a public facing application using just Kubernetes. The alternative is to deploy the application, and then later use a different set of tools to update corresponding DNS records to support the application.
This tutorial shows how to set up and configure this on Amazon EKS using Amazon Route 53 DNS zones.
📔 NOTE: This was tested with the versions below and may not work if your versions are significantly different.
* Kubernetes API v1.22
* kubectl v1.24
* aws v1.24
* eksctl v0.97
* helm v3.8.2
* ExternalDNS v0.11.0
Knowledge requirements
This tutorial requires a basic understanding of the DNS protocol, cloud platforms like AWS, and the container orchestration platform Kubernetes.
Specifically, you should know how to configure AWS profiles or the default profile (aws command), how to configure Kubernetes access using the KUBECONFIG environment variable, and how to use kubectl to deploy resources.
Tool requirements
The following tools are needed for this tutorial:
- AWS CLI (aws) v1.24.2 or higher (tested with Python 3.8 installed via pyenv)
- Kubernetes client (kubectl) v1.24 or higher
- eksctl [optional] v0.97 or higher
- Helm [optional] v3.8.2 or higher
All client scripts were tested using bash v5.1 (other POSIX shells should work) with Kubernetes (EKS) v1.22.6-eks-14c7a48.
About EKSctl
The eksctl tool is a simple CLI tool for creating and managing clusters on EKS. The open source tool is written in Go and uses CloudFormation stacks to provision the cluster. The tool is developed by Weaveworks and is officially supported by AWS.
This tool is not explicitly required; feel free to use other commands to provision the cluster and its network. If you elect to use eksctl, a single command can provision the following:
- provision network infrastructure: VPCs, subnets, route tables, network ACLs, internet gateway, etc.
- provision the control plane and worker nodes: EC2 instances with an instance profile, managed by an ASG (Auto Scaling group).
- apply least privilege security with IAM roles and security groups
About Helm
Helm is a package manager for Kubernetes that allows you to install applications using a single command, reducing the complexity required with kubectl and custom hand-crafted manifests.
Helm is used to install ingress-nginx, as EKS does not come with a default ingress controller.
About node IAM role method
Kubernetes orchestrates the scheduling of containers or pods across several systems called nodes, and on AWS these nodes are EC2 (Elastic Compute Cloud) instances. ExternalDNS needs permissions to access the Route 53 zone, and this access is facilitated through the IAM role associated with the EC2 instance or node; hence, I use the term node IAM role to describe this process.
Access is granted through a policy that is attached to a role, and this role is associated, through an instance profile, with the Kubernetes nodes (or the EC2 instances in the Auto Scaling group that make up the nodes).
This will grant the nodes access to the Route 53 zone, and thus grant all of the containers running on the cluster, including the ExternalDNS container, access to Route 53.
ExternalDNS will use a temporary token provided by the instance to access the Route 53 service.
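If you are curious, you can observe this mechanism from a shell on a node: the instance metadata service hands out temporary credentials for the node role. A quick illustration is below, assuming IMDSv1 is still enabled (IMDSv2 requires fetching a session token first):
# list the role exposed by the instance metadata service, then fetch its temporary credentials
ROLE=$(curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/)
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/$ROLE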
Setup
We’ll use some environment variables to configure the project.
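The exact values are up to you; a sketch along these lines covers the variables referenced in the rest of the snippets (the cluster name, region, and namespace values here are placeholders):
export DOMAIN_NAME="example.com"            # a domain you control in Route 53
export EKS_CLUSTER_NAME="externaldns-demo"  # placeholder cluster name
export EKS_CLUSTER_REGION="us-west-2"       # placeholder region
export EXTERNALDNS_NS="kube-addons"         # namespace for ExternalDNS
export INGRESSNGINX_NS="kube-addons"        # namespace for ingress-nginx
export NGINXDEMO_NS="nginxdemo"             # namespace for the demo web server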
Configure the above as needed. You will want to change DOMAIN_NAME to a domain that you control. The domain example.com is used as an example throughout this tutorial.
Creating a policy
Access to Route 53 is granted by creating a policy and then attaching the policy to an IAM identity (a user, group, or role). For this tutorial, the policy will be attached to the node IAM role.
Save the following below as policy.json:
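A policy along these lines (the same one documented in the upstream ExternalDNS AWS tutorial) grants record changes on hosted zones plus read access to zones and record sets; the heredoc simply writes it to policy.json:
cat > policy.json << 'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["route53:ChangeResourceRecordSets"],
      "Resource": ["arn:aws:route53:::hostedzone/*"]
    },
    {
      "Effect": "Allow",
      "Action": ["route53:ListHostedZones", "route53:ListResourceRecordSets"],
      "Resource": ["*"]
    }
  ]
}
EOF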
Create this policy with the following command:
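Something like the following creates the policy and captures its ARN (the policy name AllowExternalDNSUpdates is just a suggestion):
export POLICY_ARN=$(aws iam create-policy \
  --policy-name AllowExternalDNSUpdates \
  --policy-document file://policy.json \
  --query Policy.Arn --output text)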
For the rest of this tutorial, the POLICY_ARN environment variable will be used.
Route 53 DNS zone
If you do not have a current DNS Zone configured, you can create one with this:
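A sketch of this step is below; it creates the zone, then captures the zone id and the delegated name servers into ZONE_ID and NAME_SERVERS (the variable names are my own convention, reused in later snippets):
aws route53 create-hosted-zone --name $DOMAIN_NAME \
  --caller-reference "externaldns-demo-$(date +%s)"
ZONE_ID=$(aws route53 list-hosted-zones-by-name --dns-name $DOMAIN_NAME \
  --query "HostedZones[0].Id" --output text | cut -d'/' -f3)
NAME_SERVERS=$(aws route53 get-hosted-zone --id $ZONE_ID \
  --query "DelegationSet.NameServers" --output text | tr '\t' '\n')
echo "$NAME_SERVERS"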
In this tutorial, example.com is used as an example domain. If you own a domain that was registered with a third-party domain registrar, you should point your domain’s name servers to the values printed from the above snippet.
Create the cluster
Provision an EKS cluster with your desired provisioning tool. If you use eksctl, you can stand up a cluster easily with:
eksctl create cluster \
--name $EKS_CLUSTER_NAME \
--region $EKS_CLUSTER_REGION
Create namespaces
A common practice is to install applications into separate namespaces.
Personally, I like to put cluster-wide platforms such as an ingress controller and ExternalDNS into a kube-addons namespace, and applications into their own unique namespaces.
Whatever you choose, here’s how you can create all the namespaces that are used in this project with the following commands:
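For example, the following loop creates each namespace referenced by the environment variables, skipping any that already exist:
for NS in ${EXTERNALDNS_NS:-"default"} ${INGRESSNGINX_NS:-"default"} ${NGINXDEMO_NS:-"default"}; do
  kubectl get namespace $NS > /dev/null 2>&1 || kubectl create namespace $NS
done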
Granting access using node IAM role method
For this process, the policy created earlier will be attached to the Node IAM role associated with the Kubernetes nodes (Amazon EC2 instances). This part of the process can happen before or after deploying ExternalDNS.
⚠️ WARNING: This access method grants ALL containers running in the node pool access to the Route 53 zone, not just the ExternalDNS container. This is suitable for disposable test environments, but it is not recommended for production systems.
The policy can be attached to the Node IAM role with this command:
aws iam attach-role-policy \
--role-name $NODE_ROLE_NAME \
--policy-arn $POLICY_ARN
The challenge is: how do you extract the name of the role associated with the node? Unfortunately, in AWS you have to jump through a few hoops to get this. A shortcut would be to log into the web console and copypasta the role name.
Below is a script that can get the role name and attach the policy:
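Here is one way to do it (a sketch; INSTANCE_NAME and NODE_ROLE_NAME are just local variable names): it reads the first node's name, which on EKS is the EC2 private DNS name, follows the instance profile to the role, and attaches the policy.
# on EKS the Kubernetes node name is the EC2 instance's private DNS name
INSTANCE_NAME=$(kubectl get nodes --output jsonpath='{.items[0].metadata.name}')
# look up the instance profile attached to that EC2 instance
INSTANCE_PROFILE_ARN=$(aws ec2 describe-instances \
  --filters Name=private-dns-name,Values=$INSTANCE_NAME \
  --query 'Reservations[].Instances[].IamInstanceProfile.Arn' --output text)
INSTANCE_PROFILE_NAME=${INSTANCE_PROFILE_ARN##*/}
# the instance profile wraps the node IAM role
NODE_ROLE_NAME=$(aws iam get-instance-profile \
  --instance-profile-name $INSTANCE_PROFILE_NAME \
  --query 'InstanceProfile.Roles[0].RoleName' --output text)
# attach the Route 53 policy to the node IAM role
aws iam attach-role-policy --role-name $NODE_ROLE_NAME --policy-arn $POLICY_ARN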
If you provision a cluster using eksctl with the defaults, then this script works fine and does not need to be updated.
However, if you use multiple node groups, then it is important to know which node group you need to update, as a different node IAM role is associated with each node group. In this case, you will need to supply a different INSTANCE_NAME in the above script.
In this scenario, if you deploy ExternalDNS first, you can select the appropriate target node that is running the ExternalDNS pod with this:
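For instance, something like this picks the node that is hosting the ExternalDNS pod (the label selector assumes the app.kubernetes.io/name=external-dns label used in the manifest later in this tutorial):
INSTANCE_NAME=$(kubectl get pods --namespace ${EXTERNALDNS_NS:-"default"} \
  --selector app.kubernetes.io/name=external-dns \
  --output jsonpath='{.items[0].spec.nodeName}')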
Deploy ExternalDNS
Save the following below as externaldns.yaml.
This manifest will have the necessary components to deploy ExternalDNS on a single pod.
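A sketch of such a manifest, adapted from the upstream ExternalDNS AWS tutorial, is shown below; the image tag, labels, and txt-owner-id are reasonable defaults rather than requirements. The heredoc delimiter is quoted so that $DOMAIN_NAME and ${EXTERNALDNS_NS:-"default"} stay literal for you to replace.
cat > externaldns.yaml << 'MANIFEST'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
  labels:
    app.kubernetes.io/name: external-dns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-dns
  labels:
    app.kubernetes.io/name: external-dns
rules:
  - apiGroups: [""]
    resources: ["services", "endpoints", "pods", "nodes"]
    verbs: ["get", "watch", "list"]
  - apiGroups: ["extensions", "networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
  labels:
    app.kubernetes.io/name: external-dns
subjects:
  - kind: ServiceAccount
    name: external-dns
    namespace: ${EXTERNALDNS_NS:-"default"}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
  labels:
    app.kubernetes.io/name: external-dns
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app.kubernetes.io/name: external-dns
  template:
    metadata:
      labels:
        app.kubernetes.io/name: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
        - name: external-dns
          image: k8s.gcr.io/external-dns/external-dns:v0.11.0
          args:
            - --source=service
            - --source=ingress
            - --domain-filter=$DOMAIN_NAME   # limit ExternalDNS to this zone
            - --provider=aws
            - --policy=upsert-only           # create and update records, never delete
            - --aws-zone-type=public
            - --registry=txt                 # track record ownership with TXT records
            - --txt-owner-id=external-dns
MANIFEST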
First replace $DOMAIN_NAME with the domain name, such as example.com, and replace ${EXTERNALDNS_NS:-"default"} with the desired namespace, such as externaldns or kube-addons.
When ready, you can deploy this with:
kubectl create --filename externaldns.yaml \
--namespace ${EXTERNALDNS_NS:-"default"}
Verify with a service object
For a quick demonstration that things are functioning, we can deploy an nginx web server and use an annotation on the service object to configure the FQDN (fully qualified domain name) for the web service.
Save the manifest below as nginx.yaml:
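A minimal sketch of such a manifest is below (the nginx image tag and labels are placeholders); the external-dns.alpha.kubernetes.io/hostname annotation on the service is what tells ExternalDNS which record to create:
cat > nginx.yaml << 'MANIFEST'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.23
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    # ExternalDNS reads this annotation and creates the matching record in Route 53
    external-dns.alpha.kubernetes.io/hostname: nginx.$DOMAIN_NAME
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
MANIFEST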
Replace $DOMAIN_NAME with the domain name, such as example.com.
When ready to deploy, you can do so with this command:
kubectl create --filename nginx.yaml \
--namespace ${NGINXDEMO_NS:-"default"}
Check to see if the service has fully deployed the external load balancer:
kubectl get service --namespace ${NGINXDEMO_NS:-"default"}
You may see something similar to this:
Service: verify record changes on Route 53 zone
Verify the Route 53 records have been updated:
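Assuming the ZONE_ID lookup from the zone setup step, a query along these lines shows the records that ExternalDNS created for the service:
ZONE_ID=$(aws route53 list-hosted-zones-by-name --dns-name $DOMAIN_NAME \
  --query "HostedZones[0].Id" --output text | cut -d'/' -f3)
aws route53 list-resource-record-sets --hosted-zone-id $ZONE_ID \
  --query "ResourceRecordSets[?Name == 'nginx.$DOMAIN_NAME.']"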
This should show something like:
Service: query using dig
You can also use dig to run a query against both the Route 53 name server and the default name server:
NAME_SERVER=$(head -1 <<< $NAME_SERVERS)
dig +short @$NAME_SERVER nginx.$DOMAIN_NAME
dig +short nginx.$DOMAIN_NAME
This should return one or more IP addresses that correspond to the ELB FQDN.
Service: test with curl
Use curl to get a response using the FQDN:
curl nginx.$DOMAIN_NAME
This should show something like this:
Verify with an ingress object
ExternalDNS supports ingress objects as well. An ingress controller will route traffic to the appropriate backend service when it matches the value you set for the host name. On top of this, ExternalDNS will update the zone with a record for that host name.
NOTE: This tutorial creates two endpoints, a service with an external load balancer and an ingress, only for demonstration purposes to show off ExternalDNS. For practical purposes, only one endpoint is needed, so when an ingress is used, the service type can be changed to ClusterIP.
Ingress controller: ingress-nginx
In order for this to work, you will need to install an ingress controller on the Kubernetes cluster. An easy way to do this is to use Helm to install the ingress controller.
helm repo add ingress-nginx \
  https://kubernetes.github.io/ingress-nginx
helm install --namespace ${INGRESSNGINX_NS:-"default"} \
  ingress-nginx ingress-nginx/ingress-nginx
Ingress manifest
Save the following below as ingress.yaml.
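A sketch of the ingress is below; the host server.$DOMAIN_NAME and the nginx backend match the service deployed earlier, and ingressClassName: nginx assumes the ingress-nginx controller installed above:
cat > ingress.yaml << 'MANIFEST'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
spec:
  ingressClassName: nginx
  rules:
    - host: server.$DOMAIN_NAME
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx
                port:
                  number: 80
MANIFEST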
Change $DOMAIN_NAME to a domain, such as example.com. When ready to deploy the ingress, run:
kubectl create --filename ingress.yaml \
--namespace ${NGINXDEMO_NS:-"default"}
Check to see if the ingress has an external address (this may take some seconds):
kubectl get ingress --namespace ${NGINXDEMO_NS:-"default"}
You may see something similar to this:
Ingress: verify record changes on Route 53 zone
Verify the Route 53 records have been updated to reflect the ingress object’s address:
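Again assuming the ZONE_ID lookup from earlier, something like this lists the records for the ingress host:
ZONE_ID=$(aws route53 list-hosted-zones-by-name --dns-name $DOMAIN_NAME \
  --query "HostedZones[0].Id" --output text | cut -d'/' -f3)
aws route53 list-resource-record-sets --hosted-zone-id $ZONE_ID \
  --query "ResourceRecordSets[?Name == 'server.$DOMAIN_NAME.']"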
This should show something like:
Ingress: query using dig
You can use dig to run a query against both the Route 53 name server and the default name server:
NAME_SERVER=$(head -1 <<< $NAME_SERVERS)
dig +short @$NAME_SERVER server.$DOMAIN_NAME
dig +short server.$DOMAIN_NAME
This should return one or more IP addresses that correspond to the ELB FQDN.
Ingress: test with curl
Use curl to get a response using the FQDN:
curl server.$DOMAIN_NAME
This should show something like this:
Cleaning up
You can remove the allocated resources with the steps below. Deleting the load balancers and detaching the policy need to happen before deleting the cluster and the policy.
Policy Attachment
Detach the policy from the Node IAM role so that the cluster can be safely deleted later. Otherwise, the attached policy will block deletion of the role, and with it the cluster.
aws iam detach-role-policy --role-name $NODE_ROLE_NAME \
--policy-arn $POLICY_ARN
Load Balancers
Delete any load balancers that are in use, as these may not be deleted when the cluster is destroyed and will continue to eat up costs.
kubectl delete svc/nginx -n ${NGINXDEMO_NS:-"default"}
kubectl delete ing/nginx -n ${NGINXDEMO_NS:-"default"}
helm -n ${INGRESSNGINX_NS:-"default"} delete ingress-nginx
Kubernetes Cluster (EKS)
Now the Kubernetes cluster can be safely destroyed:
eksctl delete cluster --name $EKS_CLUSTER_NAME \
--region $EKS_CLUSTER_REGION
Route 53 zone
If the Route 53 zone is no longer needed, delete this with:
aws route53 delete-hosted-zone --id $ZONE_ID
Policy
And last but not least, delete the policy if this will no longer be used:
aws iam delete-policy --policy-arn $POLICY_ARN
Resources
These are documentation links I have come across related to this tutorial.
AWS Documentation
ExternalDNS
This tutorial is based on docs I contributed to the ExternalDNS project (in pull request review at the time of writing).
- ExternalDNS AWS tutorial (latest)
Next Article
In this article series, the next article demonstrates an alternative method using static credentials (access keys).
Conclusion
The goal of all of this was to demonstrate ExternalDNS on EKS with Route 53, and to walk through using the aws and kubectl tools.
This tutorial also exposes the identity and authorization (policies) system of AWS. This method of granting access to the Kubernetes nodes is only suitable for something like a private container registry, such as ECR, where all containers need read-only access to the registry. For write access, limiting access to just the service that needs it can be done through a credentials file or, better, IRSA (IAM roles for service accounts). These other methods are covered in follow-up articles.
On a side note, for any fellow script ninjas out there, these scripts use JMESPath to manipulate JSON data through the aws --query argument. The kubectl tool supports a different syntax to do the same thing with JSONPath. Though not used in this tutorial, the very popular jq tool is also useful with tools that don’t have a built-in facility to manipulate their output.
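As a small side-by-side illustration, the first command below uses JMESPath to list EC2 instance IDs, while the second uses JSONPath to list pod names:
aws ec2 describe-instances \
  --query 'Reservations[].Instances[].InstanceId' --output text
kubectl get pods --output jsonpath='{.items[*].metadata.name}'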
Anyhow, I hope this is useful.