ExternalDNS with EKS and Route 53 (Part 3)
Using ExternalDNS with IRSA to access Route 53
In previous articles, I covered how to use ExternalDNS using either role or user identities. Both of these methods are less than ideal, as they either allow access to everything running in the cluster or expose secrets that can be exploited.
📔 NOTE: This was tested with the versions below and may not work if your versions differ significantly.
* Kubernetes API v1.22
* kubectl v1.24
* aws v1.24
* eksctl v0.97
* helm v3.8.2
* ExternalDNS v0.11.0
Enter IAM Roles for Service Accounts
What if there was a way to use the Kubernetes identity, called a service account, to allow access to the Route53 resource?
This is possible with IRSA, or IAM Roles for Service Accounts. IRSA allows a service account to masquerade as an IAM role to access the resource. So only ExternalDNS, using that service account, has access, and there’s no need to manage any secrets, as this is automated behind the scenes.
Naturally, this facility adds some more complexity, and this tutorial will walk you through the process.
Knowledge requirements
This tutorial requires a basic understanding of the DNS protocol, cloud platforms like AWS, and the container orchestration platform Kubernetes.
Specifically, you should know how to configure AWS profiles or the default profile for the aws command, how to point Kubernetes tooling at your cluster with the KUBECONFIG environment variable, and how to deploy resources with kubectl.
Tool requirements
The following tools are needed for this tutorial:
- AWS CLI (aws) v1.24.2 or higher (tested with Python 3.8 installed via pyenv)
- Kubernetes client (kubectl) v1.24 or higher
- eksctl [optional] v0.97 or higher
- Helm [optional] v3.8.2 or higher
All client scripts were tested using bash v5.1 (other POSIX shells should work) with Kubernetes (EKS) v1.22.6-eks-14c7a48.
About EKSctl
The eksctl tool is a simple CLI for creating and managing clusters on EKS. This open source tool is written in Go and uses CloudFormation stacks to provision the cluster. It is developed by Weaveworks and officially supported by AWS.
You can use this tool, or another method of your choice, to provision the cloud resources and associate the OIDC provider.
The following will be provisioned with three simple eksctl commands:
- provision network infrastructure: VPCs, subnets, route tables, network ACLs, internet gateway, etc.
- provision control plane and worker nodes: EC2s with an instance profile managed by ASG.
- apply least privilege security with IAM roles and security groups
- associate an IAM OIDC provider for the cluster.
- create an IAM role with a trust relationship to the associated IAM OIDC provider
- create a service account with annotations that bind the service account to the previously created IAM role.
About Helm
Helm is a package manager for Kubernetes that allows you to install applications using a single command, reducing the complexity required with kubectl and custom hand-crafted manifests.
Helm is used to install ingress-nginx, as EKS does not come with a default ingress controller.
Setup
We’ll use some environment variables to configure the project.
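The exact values depend on your environment; a sketch with placeholder values for the variables referenced throughout this tutorial might look like this:
export DOMAIN_NAME="example.com"          # domain you control
export EKS_CLUSTER_NAME="my-eks-cluster"  # placeholder cluster name
export EKS_CLUSTER_REGION="us-east-1"     # placeholder region
export EXTERNALDNS_NS="kube-addons"       # namespace for ExternalDNS
export NGINXDEMO_NS="nginx-demo"          # namespace for the demo web server
export INGRESSNGINX_NS="kube-addons"      # namespace for ingress-nginx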
Configure the above as needed. You will want to change the DOMAIN_NAME to a domain that you control. The domain example.com is used as an example throughout this tutorial.
Creating a Policy
Access to Route 53 is done by creating a policy and then attaching the policy to an IAM identity (users, groups, or roles). For this tutorial, the policy will be attached to an IAM role.
Save the following below as policy.json:
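A sketch of that policy, based on the permissions the ExternalDNS documentation lists for Route 53 (record changes on hosted zones, plus list access):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["route53:ChangeResourceRecordSets"],
      "Resource": ["arn:aws:route53:::hostedzone/*"]
    },
    {
      "Effect": "Allow",
      "Action": ["route53:ListHostedZones", "route53:ListResourceRecordSets"],
      "Resource": ["*"]
    }
  ]
}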
Create this policy with the following command:
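For example, using AllowExternalDNSUpdates as the policy name (any name works) and capturing the ARN for later use:
export POLICY_ARN=$(aws iam create-policy \
  --policy-name "AllowExternalDNSUpdates" \
  --policy-document file://policy.json \
  --query Policy.Arn --output text)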
IMPORTANT: For the rest of this tutorial, the POLICY_ARN environment variable will be used.
Route 53 DNS zone
If you do not have a current DNS Zone configured, you can create one with this:
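One way to do this (the caller reference is arbitrary; NODE_ID and NAME_SERVERS are captured here because later steps use them):
aws route53 create-hosted-zone --name "$DOMAIN_NAME." \
  --caller-reference "external-dns-$(date +%s)"

# capture the zone id and the zone's name servers for later steps
NODE_ID=$(aws route53 list-hosted-zones-by-name --dns-name "$DOMAIN_NAME." \
  --query "HostedZones[0].Id" --output text)
NAME_SERVERS=$(aws route53 get-hosted-zone --id $NODE_ID \
  --query "DelegationSet.NameServers" --output text | tr '\t' '\n')
echo "$NAME_SERVERS"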
In this tutorial, example.com is used as an example domain. If you own a domain that was registered with a third-party domain registrar, you should point your domain’s name servers to the values printed from the above snippet.
Create the cluster
Provision an EKS cluster with your desired provisioning tool. If you use eksctl, you can stand up a cluster easily with:
eksctl create cluster \
--name $EKS_CLUSTER_NAME \
--region $EKS_CLUSTER_REGION
Create namespaces
A common practice is to install applications into separate namespaces.
Personally, I like to put cluster-wide platforms such as an ingress controller and ExternalDNS into a kube-addons namespace, and applications into their own respective namespaces.
Whatever you choose, you can create all the namespaces used in this project in an idempotent way.
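One way to do that, using the namespace variables from the Setup section (a client-side dry run piped into kubectl apply keeps the creation idempotent):
for ns in ${EXTERNALDNS_NS:-"default"} ${NGINXDEMO_NS:-"default"} ${INGRESSNGINX_NS:-"default"}; do
  kubectl create namespace $ns --dry-run=client --output yaml | kubectl apply --filename -
done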
Granting access using IRSA method
IAM Roles for Service Accounts, or IRSA, allows access to an AWS cloud resource through the Kubernetes identity called a service account. The service account impersonates the IAM role for access to the resource through an OIDC (OpenID Connect) provider associated with the cluster.
This allows the operator to follow the PoLP (Principle of Least Privilege) best practice, where ONLY ExternalDNS is allowed access to the Route 53 resource.
Configure OIDC for the cluster
Verify that OIDC is supported for the cluster with this command:
aws eks describe-cluster --name $EKS_CLUSTER_NAME \
--query cluster.identity.oidc.issuer --output text
Associate OIDC to the cluster
Associate the provider with the EKS cluster. If you use eksctl, you can do this with this command:
eksctl utils associate-iam-oidc-provider \
--cluster $EKS_CLUSTER_NAME --approve
Create an IAM role bound to a service account
Create a new IAM role with a trust relationship to the cluster’s OIDC provider, and then create a service account with the appropriate annotations that will associate the service account used by ExternalDNS (external-dns) with the newly created role.
If you use eksctl, you can do this with the following command:
eksctl create iamserviceaccount \
--cluster $EKS_CLUSTER_NAME \
--name "external-dns" \
--namespace ${EXTERNALDNS_NS:-"default"} \
--attach-policy-arn $POLICY_ARN \
--approve
As an alternative, you can run through this process using only aws and kubectl commands. See Addendum: Role-SA binding with AWS command below for further information.
Deploy ExternalDNS
Save the following below as externaldns.yaml.
A service account called external-dns should have already been created and decorated with the required annotations for this to work. This manifest contains the other necessary components to deploy ExternalDNS on a single pod.
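A sketch of such a manifest, patterned after the upstream ExternalDNS AWS tutorial (ClusterRole, ClusterRoleBinding, and a one-replica Deployment; the ServiceAccount itself is omitted because eksctl created it above, and the image tag matches the ExternalDNS version noted at the top):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-dns
rules:
  - apiGroups: [""]
    resources: ["services", "endpoints", "pods", "nodes"]
    verbs: ["get", "watch", "list"]
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
  - kind: ServiceAccount
    name: external-dns
    namespace: $EXTERNALDNS_NS   # replace with the ExternalDNS namespace
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: external-dns
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns   # the IRSA-annotated service account
      containers:
        - name: external-dns
          image: k8s.gcr.io/external-dns/external-dns:v0.11.0
          args:
            - --source=service
            - --source=ingress
            - --domain-filter=$DOMAIN_NAME   # replace with your domain
            - --provider=aws
            - --registry=txt
            - --txt-owner-id=external-dns
The domain filter restricts ExternalDNS to the one zone it needs, and the TXT registry records ownership of the records it creates.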
First replace $DOMAIN_NAME with the domain name, such as example.com, and replace $EXTERNALDNS_NS with the desired namespace, such as externaldns or kube-addons.
When ready, you can deploy this with:
kubectl create --filename externaldns.yaml \
--namespace ${EXTERNALDNS_NS:-"default"}
Verify with a service object
For a quick demonstration that things are functioning, we can deploy an nginx web server and use an annotation on the service object to configure the FQDN (fully qualified domain name) for the web service.
Save the manifest below as nginx.yaml:
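A sketch of such a manifest, assuming a simple nginx Deployment plus a LoadBalancer Service annotated with the hostname ExternalDNS should create (nginx.$DOMAIN_NAME, matching the queries below):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.23
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    # ExternalDNS picks up this annotation and creates a matching record
    external-dns.alpha.kubernetes.io/hostname: nginx.$DOMAIN_NAME
spec:
  type: LoadBalancer   # provisions an external load balancer (ELB)
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80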
Replace $DOMAIN_NAME with the domain name, such as example.com.
When ready to deploy, you can do so with this command:
kubectl create --filename nginx.yaml \
--namespace ${NGINXDEMO_NS:-"default"}
Check to see if the service has fully deployed the external load balancer:
kubectl get service --namespace ${NGINXDEMO_NS:-"default"}
You may see something similar to this:
Service: verify record changes on Route 53 zone
Verify the Route 53 records have been updated:
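For example, assuming the hosted zone id captured earlier in NODE_ID:
aws route53 list-resource-record-sets --hosted-zone-id $NODE_ID \
  --query "ResourceRecordSets[?Name == 'nginx.$DOMAIN_NAME.']"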
This should show something like:
Service: query using dig
You can also use dig to run a query against both the Route 53 name server and the default name server:
NAME_SERVER=$(head -1 <<< $NAME_SERVERS)
dig +short @$NAME_SERVER nginx.$DOMAIN_NAME
dig +short nginx.$DOMAIN_NAME
This should return one or more IP addresses that correspond to the ELB FQDN.
Service: test with curl
Use curl to get a response using the FQDN:
curl nginx.$DOMAIN_NAME
This should show something like this:
Verify with an ingress object
ExternalDNS supports ingress objects as well. An ingress controller will route traffic to the appropriate backend service when it matches the value you set for the host name. On top of this, ExternalDNS will update the zone with a record for that host name.
NOTE: This tutorial creates two endpoints, a service with an external load balancer and an ingress, only for demonstration purposes to show off ExternalDNS. For practical purposes, only one endpoint is needed, so when an ingress is used, the service type can be changed to ClusterIP.
Ingress controller: ingress-nginx
In order for this to work, you will need to install an ingress controller on the Kubernetes cluster. An easy way to do this is to use Helm to install the ingress controller.
helm repo add ingress-nginx \
  https://kubernetes.github.io/ingress-nginx
helm install --namespace ${INGRESSNGINX_NS:-"default"} \
  ingress-nginx ingress-nginx/ingress-nginx
Ingress manifest
Save the following below as ingress.yaml.
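A sketch of such a manifest, assuming the ingress-nginx class installed above and the nginx service created earlier, with a host of server.$DOMAIN_NAME (matching the queries below):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
spec:
  ingressClassName: nginx   # class registered by the ingress-nginx chart
  rules:
    - host: server.$DOMAIN_NAME   # replace with your domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx
                port:
                  number: 80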
Change $DOMAIN_NAME to a domain, such as example.com. When ready to deploy the ingress, run:
kubectl create --filename ingress.yaml \
--namespace ${NGINXDEMO_NS:-"default"}
Check to see if the ingress has an external address (this may take some seconds):
kubectl get ingress --namespace ${NGINXDEMO_NS:-"default"}
You may see something similar to this:
Ingress: verify record changes on Route 53 zone
Verify the Route 53 records have been updated to reflect the ingress object’s address:
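As before, assuming the zone id in NODE_ID:
aws route53 list-resource-record-sets --hosted-zone-id $NODE_ID \
  --query "ResourceRecordSets[?Name == 'server.$DOMAIN_NAME.']"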
This should show something like:
Ingress: query using dig
You can use dig to run a query against both the Route 53 name server and the default name server:
NAME_SERVER=$(head -1 <<< $NAME_SERVERS)
dig +short @$NAME_SERVER server.$DOMAIN_NAME
dig +short server.$DOMAIN_NAME
This should return one or more IP addresses that correspond to the ELB FQDN.
Ingress: test with curl
Use curl to get a response using the FQDN:
curl server.$DOMAIN_NAME
This should show something like this:
Cleaning up
You can remove the allocated resources with the following steps.
Load Balancers
Delete any load balancers in use, as these may not be removed when the cluster is destroyed and will continue to incur costs.
kubectl delete svc/nginx --namespace ${NGINXDEMO_NS:-"default"}
kubectl delete ing/nginx --namespace ${NGINXDEMO_NS:-"default"}
helm --namespace ${INGRESSNGINX_NS:-"default"} delete ingress-nginx
Kubernetes Cluster (EKS)
Now the Kubernetes cluster can be safely destroyed:
eksctl delete cluster --name $EKS_CLUSTER_NAME \
--region $EKS_CLUSTER_REGION
Route 53 zone
If the Route 53 zone is no longer needed, delete it with:
aws route53 delete-hosted-zone --id $NODE_ID
Policy
And last but not least, delete the policy if this will no longer be used:
aws iam delete-policy --policy-arn $POLICY_ARN
Addendum: Role-SA binding with AWS command
If you are not using the eksctl tool (or want to run these steps manually), and provided you have an EKS cluster with an associated IAM OIDC provider, you can set up IRSA with the following commands:
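A sketch of those steps, with the role name ($IRSA_ROLE) and the trust policy file name as placeholders: look up the cluster's OIDC provider, write a trust policy that lets the external-dns service account assume a role, create and attach the role, then create and annotate the service account.
# 1. Look up the cluster's OIDC provider (issuer URL without the scheme) and account id.
OIDC_PROVIDER=$(aws eks describe-cluster --name $EKS_CLUSTER_NAME \
  --query "cluster.identity.oidc.issuer" --output text | sed 's|^https://||')
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
IRSA_ROLE="external-dns-irsa-role"   # placeholder role name

# 2. Trust policy allowing the external-dns service account to assume the role.
cat > trust.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Federated": "arn:aws:iam::${ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}" },
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {
      "StringEquals": {
        "${OIDC_PROVIDER}:sub": "system:serviceaccount:${EXTERNALDNS_NS}:external-dns"
      }
    }
  }]
}
EOF

# 3. Create the role, attach the Route 53 policy, and create the annotated service account.
aws iam create-role --role-name $IRSA_ROLE \
  --assume-role-policy-document file://trust.json
aws iam attach-role-policy --role-name $IRSA_ROLE --policy-arn $POLICY_ARN
kubectl create serviceaccount external-dns --namespace ${EXTERNALDNS_NS:-"default"}
kubectl annotate serviceaccount external-dns --namespace ${EXTERNALDNS_NS:-"default"} \
  "eks.amazonaws.com/role-arn=arn:aws:iam::${ACCOUNT_ID}:role/${IRSA_ROLE}"
The eks.amazonaws.com/role-arn annotation is what EKS uses to inject the role's web identity credentials into pods that run under this service account.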
Note that during the cleanup phase, you will need to detach the policy from the role and delete the role:
aws iam detach-role-policy --role-name $IRSA_ROLE \
  --policy-arn $POLICY_ARN
aws iam delete-role --role-name $IRSA_ROLE
Resources
These are documentation links I have come across related to this tutorial.
AWS Documentation
- Policies and permissions in IAM
- IAM roles for service accounts
- Creating an IAM role and policy for your service account
Related Projects to IRSA
Before there was IRSA, there were these projects:
ExternalDNS
This tutorial is based on docs I updated for the ExternalDNS project (currently in the pull request review phase):
- ExternalDNS AWS tutorial (latest)
Conclusion
This is the final article in this series demonstrating ExternalDNS on EKS with Route 53, walking through the process using the aws and kubectl tools.
The main takeaway is using IRSA to provide secure access to ExternalDNS without compromising best practices in security and operations.
If more than one service needs access to Route 53, such as CertManager using an ACME certificate authority, you could potentially share the same IAM role, but I would use two different Kubernetes service accounts. I have not yet tested this scenario of using the same role for two services.
For a service where all containers running on the cluster require read-only access, such as a container registry like ECR (Elastic Container Registry), this method may be overkill, as you would have to configure service accounts with annotations for the role binding for every service that requires access.
In that use case, read-only access can be granted at the node level, using the IAM role associated with the EC2 worker nodes. This method was covered in the first article of this series.
If a CI platform is used to publish images to ECR, which requires read-write access from a specific service, then IRSA would be ideal for that use case.
I hope this is interesting and useful for your journey. Thank you for reading.