ExternalDNS with AKS & Azure DNS, Part 2
ExternalDNS with static credentials to access Azure DNS
In the previous article, I covered how to automate DNS record updates during deployment in Kubernetes using ExternalDNS with Azure DNS. This was done by granting the whole cluster access to update records, which is not desirable.
As an alternative, you can give access to only the ExternalDNS container by uploading static credentials containing the secrets that will provide access to the Azure DNS zone. This method has some drawbacks (see below).
This tutorial will show how to set up ExternalDNS using static credentials on Azure Kubernetes Service (AKS) with Azure DNS.
📔 NOTE: This was tested on the versions below and may not work if your versions are significantly different.
* Kubernetes API v1.22
* kubectl v1.24
* az v2.36.0
* helm v3.8.2
* ExternalDNS v0.11.0
Dangers of using static credentials
At some point, the secrets can get compromised, and then unknown threat actors will have the secrets and unmitigated access to the service. This is not a matter of if, but a matter of when.
For this scenario, the secret will need to be rotated: the current secret is replaced with a newly generated secret, and access to the service using the old credentials is removed.
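For illustration, a minimal rotation sketch using the Azure CLI; it assumes the service principal's application ID is stored in PRINCIPAL_ID (created later in this tutorial), and the flag name varies across az releases:

# generate a new secret and invalidate the old one
# (newer az releases use --id; older ones such as v2.36 use --name)
PRINCIPAL_SECRET=$(az ad sp credential reset \
  --id "${PRINCIPAL_ID}" \
  --query password --output tsv)

After rotating, recreate the Kubernetes secret holding the credentials (shown later in this tutorial) and restart ExternalDNS so it picks up the new secret.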
The secret should be delivered to the ExternalDNS service in a secure manner, and not stored anywhere after it has been uploaded.
If the cloud provider supports it, access to the resource should be closely monitored, so that any unauthorized access generates an alert.
⚠️ Mitigating these risks naturally requires automation. Given the risk, and the complexity of the automation required to mitigate it, this is the least desirable method. It should only be used when no other option is available.
Knowledge Requirements
This tutorial requires a basic understanding of DNS, of managing cloud platforms like Azure, and of finding one's way around container orchestration with Kubernetes.
You will use the Azure CLI (az command) to provision cloud resources and kubectl to provision Kubernetes services. This requires using the KUBECONFIG env var to configure access to the Kubernetes cluster.
Tool requirements
The following tools are needed:
- Azure CLI (az) v2.36.0 (Python 3.10.4)
- Kubernetes client (kubectl) v1.24 or higher
- Helm (helm) v3.8.2 or higher (optional, for the ingress)
- jq (jq)
All client scripts were tested using bash v5.1 (other POSIX shells should work) with Kubernetes v1.22.6.
Setup
Use these environment variables and adjust as needed:
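A minimal sketch of these variables, matching the names used in the commands throughout this tutorial; AZ_LOCATION and all of the example values are assumptions to adjust:

export AZ_LOCATION="eastus"                   # assumed name and region; pick your own
export AZ_DNS_RESOURCE_GROUP="my-dns-group"   # resource group for the DNS zone
export AZ_AKS_RESOURCE_GROUP="my-aks-group"   # resource group for the AKS cluster
export AZ_AKS_CLUSTER_NAME="my-aks-cluster"   # assumed cluster name
export DOMAIN_NAME="example.com"              # change to a domain under your control
export EXTERNALDNS_NS="kube-addons"           # namespace for ExternalDNS
export INGRESSNGINX_NS="kube-addons"          # namespace for ingress-nginx
export NGINXDEMO_NS="nginxdemo"               # namespace for the demo web application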
The DOMAIN_NAME will need to be changed to a domain that is under your control. The domain example.com will be used as an example domain for this tutorial.
Azure DNS zone
If you do not yet have an Azure DNS zone available, you can create one through these steps:
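A sketch of those steps, assuming the variables from the setup above:

az group create --name ${AZ_DNS_RESOURCE_GROUP} --location ${AZ_LOCATION}

az network dns zone create \
  --resource-group ${AZ_DNS_RESOURCE_GROUP} \
  --name ${DOMAIN_NAME}

# capture the zone's name servers, one per line, for later verification
NS_LIST=$(az network dns zone show \
  --resource-group ${AZ_DNS_RESOURCE_GROUP} \
  --name ${DOMAIN_NAME} \
  --query "nameServers" --output tsv)

echo "${NS_LIST}"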
This should output a list similar to this:
ns1-06.azure-dns.com.
ns2-06.azure-dns.net.
ns3-06.azure-dns.org.
ns4-06.azure-dns.info.
The variable $NS_LIST will be used later in the verification stages.
Update domain registrar name servers
In this tutorial, example.com is used as an example domain. If you own a domain that was registered with a third-party domain registrar, you should point your domain's name servers to the values printed in the list above.
Create a cluster
The Kubernetes cluster can deploy containers across several virtual machines called nodes. These are managed as a set called a node pool. On Azure, a node pool is implemented as a virtual machine scale set (VMSS).
Create Azure Kubernetes Service cluster
You can create the resource group, the Azure Kubernetes Service cluster, and local cluster operator access (configured through the KUBECONFIG env var) with the following commands.
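A minimal sketch; the node count and the KUBECONFIG path are arbitrary assumptions:

az group create --name ${AZ_AKS_RESOURCE_GROUP} --location ${AZ_LOCATION}

az aks create \
  --resource-group ${AZ_AKS_RESOURCE_GROUP} \
  --name ${AZ_AKS_CLUSTER_NAME} \
  --generate-ssh-keys \
  --node-count 3   # assumed node-pool size

# fetch operator credentials into a dedicated kubeconfig (assumed path)
export KUBECONFIG=${HOME}/.kube/${AZ_AKS_CLUSTER_NAME}.yaml
az aks get-credentials \
  --resource-group ${AZ_AKS_RESOURCE_GROUP} \
  --name ${AZ_AKS_CLUSTER_NAME} \
  --file ${KUBECONFIG}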
Create namespaces
A common practice is to install applications into separate namespaces.
Personally, I like to put cluster-wide platforms such as an ingress controller and ExternalDNS into a kube-addons namespace, and applications into their own unique namespaces.
Whatever namespaces you choose, you can create all the namespaces used in this project with the following commands:
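A sketch that creates each namespace only if it does not already exist (useful when two of the namespace variables share a value, such as kube-addons):

for ns in ${EXTERNALDNS_NS} ${INGRESSNGINX_NS} ${NGINXDEMO_NS}; do
  kubectl get namespace ${ns} > /dev/null 2>&1 || kubectl create namespace ${ns}
done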
Granting access using static credentials
In this method, we need to create a service principal, keep a local copy of its secret, and grant the service principal access to the Azure DNS zone.
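A sketch of these steps; the service principal display name is an assumed example, and jq (from the tool requirements) extracts the two fields:

# look up the DNS zone's resource id to use as the role scope
AZ_DNS_ZONE_ID=$(az network dns zone show \
  --resource-group ${AZ_DNS_RESOURCE_GROUP} \
  --name ${DOMAIN_NAME} \
  --query "id" --output tsv)

# create the service principal (display name is an assumed example)
PRINCIPAL_JSON=$(az ad sp create-for-rbac \
  --name "externaldns-${AZ_AKS_CLUSTER_NAME}" --output json)

PRINCIPAL_ID=$(jq -r '.appId' <<< "${PRINCIPAL_JSON}")
PRINCIPAL_SECRET=$(jq -r '.password' <<< "${PRINCIPAL_JSON}")

# grant write access scoped to only the DNS zone
az role assignment create \
  --assignee ${PRINCIPAL_ID} \
  --role "DNS Zone Contributor" \
  --scope ${AZ_DNS_ZONE_ID}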
Two of these variables, PRINCIPAL_ID and PRINCIPAL_SECRET, will be used to create the Kubernetes secret in the next step.
ExternalDNS Secret
Create and deploy a configuration that tells ExternalDNS to use the service principal.
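A sketch following the azure.json configuration format documented by the ExternalDNS project:

cat > azure.json <<EOF
{
  "tenantId": "$(az account show --query tenantId --output tsv)",
  "subscriptionId": "$(az account show --query id --output tsv)",
  "resourceGroup": "${AZ_DNS_RESOURCE_GROUP}",
  "aadClientId": "${PRINCIPAL_ID}",
  "aadClientSecret": "${PRINCIPAL_SECRET}"
}
EOF

kubectl create secret generic azure-config-file \
  --namespace ${EXTERNALDNS_NS:-"default"} \
  --from-file azure.json

# per the warning above, do not keep the secret on disk after uploading
rm azure.json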
Deploy ExternalDNS
Save the following as externaldns.yaml.
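A sketch of such a manifest, modeled on the ExternalDNS project's Azure tutorial; the $DOMAIN_NAME, $EXTERNALDNS_NS, and $AZ_DNS_RESOURCE_GROUP placeholders are edited below, and the labels match the selector used later to fetch logs:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
  labels:
    app.kubernetes.io/name: external-dns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-dns
  labels:
    app.kubernetes.io/name: external-dns
rules:
  - apiGroups: [""]
    resources: ["services", "endpoints", "pods"]
    verbs: ["get", "watch", "list"]
  - apiGroups: ["extensions", "networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get", "watch", "list"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
  labels:
    app.kubernetes.io/name: external-dns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
  - kind: ServiceAccount
    name: external-dns
    namespace: $EXTERNALDNS_NS
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
  labels:
    app.kubernetes.io/name: external-dns
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app.kubernetes.io/name: external-dns
  template:
    metadata:
      labels:
        app.kubernetes.io/name: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
        - name: external-dns
          image: registry.k8s.io/external-dns/external-dns:v0.11.0
          args:
            - --source=service
            - --source=ingress
            - --domain-filter=$DOMAIN_NAME   # limit records to this zone
            - --provider=azure
            - --azure-resource-group=$AZ_DNS_RESOURCE_GROUP
          volumeMounts:
            - name: azure-config-file
              mountPath: /etc/kubernetes   # provider reads /etc/kubernetes/azure.json
              readOnly: true
      volumes:
        - name: azure-config-file
          secret:
            secretName: azure-config-file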
This manifest will have the necessary components to deploy ExternalDNS on a single pod.
Before deploying, edit the file and replace $DOMAIN_NAME with the domain name, such as example.com, and also replace $EXTERNALDNS_NS with the desired namespace, such as kube-addons.
The variable $AZ_DNS_RESOURCE_GROUP needs to be changed to the DNS resource group, for example: my-dns-group.
When ready, you can deploy this with:
kubectl create --filename externaldns.yaml \
--namespace ${EXTERNALDNS_NS:-"default"}
You can look at the objects deployed with:
kubectl get all --namespace ${EXTERNALDNS_NS:-"default"}
View logs
You should peek at the logs to see the health and success of the ExternalDNS deployment.
POD_NAME=$(kubectl get pods \
  --selector "app.kubernetes.io/name=external-dns" \
  --namespace ${EXTERNALDNS_NS:-"default"} --output name)

kubectl logs $POD_NAME --namespace ${EXTERNALDNS_NS:-"default"}
Verify with a service object
When using a service as an endpoint for your web application, you can have ExternalDNS update DNS records so that users can reach the endpoint. ExternalDNS will scan for annotations in the service object. Here’s how you can set DNS records using a service object.
Save the manifest below as nginx.yaml:
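A sketch of such a manifest; the deployment and service names and the image tag are assumptions, and the annotation tells ExternalDNS to create the nginx.$DOMAIN_NAME record verified below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.21   # assumed image tag
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    # ExternalDNS scans for this annotation to create the DNS record
    external-dns.alpha.kubernetes.io/hostname: nginx.$DOMAIN_NAME
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - name: http
      port: 80
      targetPort: 80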
Change $DOMAIN_NAME to a domain, such as example.com. When ready to deploy the service, run:
kubectl create --filename nginx.yaml \
--namespace ${NGINXDEMO_NS:-"default"}
Check to see if the service has an external address (this may take some seconds):
kubectl get service --namespace ${NGINXDEMO_NS:-"default"}
You may see something similar to this:
Service: verify record changes on Azure DNS zone
Check if the records were updated in the zone:
az network dns record-set a list \
--resource-group ${AZ_DNS_RESOURCE_GROUP} \
--zone-name ${DOMAIN_NAME} \
--query "[?fqdn=='nginx.$DOMAIN_NAME.']" \
--output yaml
This should show something like:
Service: query using dig
Using the list of name servers from before, check that the record is resolved from an Azure DNS name server, and then check that the record is resolved using the default name server configured for your system:
NAME_SERVER=$(head -1 <<< $NS_LIST)

dig +short @$NAME_SERVER nginx.$DOMAIN_NAME
dig +short nginx.$DOMAIN_NAME
Service: test with curl
Use curl to get a response using the FQDN:
curl nginx.$DOMAIN_NAME
This should show something like:
Verify with an ingress object
An ingress controller is a reverse-proxy load balancer that routes traffic to your services based on the FQDN (fully qualified domain name) you set for the host name key. ExternalDNS monitors ingress changes, fetches the host name, and updates the corresponding DNS records.
⚠️ NOTE: This tutorial creates two endpoints for the same web server for demonstration purposes. This is unnecessary, as one endpoint will do, so if you are using an ingress resource, you can change the type of the service to ClusterIP.
Ingress manifest
Save the following as ingress.yaml:
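A sketch of such a manifest, assuming the nginx service from the previous step:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
spec:
  ingressClassName: nginx   # matches the ingress-nginx controller installed below
  rules:
    - host: server.$DOMAIN_NAME   # ExternalDNS picks up this host name
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx
                port:
                  number: 80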
Change $DOMAIN_NAME to a domain, such as example.com.
Ingress controller: ingress-nginx
In order for this to work, you will need to install an ingress controller on the Kubernetes cluster. An easy way to do this is with Helm.
helm repo add ingress-nginx \
  https://kubernetes.github.io/ingress-nginx

helm install --namespace ${INGRESSNGINX_NS:-"default"} \
  ingress-nginx ingress-nginx/ingress-nginx
After a minute, check to see if the ingress controller has a public IP address.
kubectl get service ingress-nginx-controller \
--namespace ${INGRESSNGINX_NS:-"default"}
This should show something like:
Deploy the ingress
When ready to deploy the ingress, run:
kubectl create --filename ingress.yaml \
--namespace ${NGINXDEMO_NS:-"default"}
Check to see if the ingress has an external address (this may take some seconds):
kubectl get ingress --namespace ${NGINXDEMO_NS:-"default"}
You may see something similar to this:
Ingress: Verify record changes on Azure DNS zone
Check if the records were updated in the zone:
az network dns record-set a list \
--resource-group ${AZ_DNS_RESOURCE_GROUP} \
--zone-name ${DOMAIN_NAME} \
--query "[?fqdn=='server.$DOMAIN_NAME.']" \
--output yaml
This should show something like:
Ingress: Query using dig
Using the list of name servers from before, check that the record is resolved from an Azure DNS name server, and then check that the record is resolved using the default name server configured for your system:
NAME_SERVER=$(head -1 <<< $NS_LIST)

dig +short @$NAME_SERVER server.$DOMAIN_NAME
dig +short server.$DOMAIN_NAME
Ingress: test with curl
Use curl to get a response using the FQDN:
curl server.$DOMAIN_NAME
This should show something like:
Cleaning up
The cluster and the resources created by Kubernetes can be destroyed with:
az aks delete \
--resource-group ${AZ_AKS_RESOURCE_GROUP} \
--name ${AZ_AKS_CLUSTER_NAME}
The zone can be removed with:
az network dns zone delete \
--resource-group ${AZ_DNS_RESOURCE_GROUP} \
--name ${DOMAIN_NAME}
The resource groups can be deleted with:
az group delete --name ${AZ_DNS_RESOURCE_GROUP}
az group delete --name ${AZ_AKS_RESOURCE_GROUP}
Resources
These are resources I came across in making this article.
ExternalDNS
The original documentation only covered using static credentials and partially documented managed identities (previously called managed service identities). I recently updated this to fully document managed identities and add AAD Pod Identity.
- Setting up ExternalDNS for Services on Azure (ExternalDNS project)
Conclusion
Similar to the previous article, the goal is to demonstrate ExternalDNS on AKS with Azure DNS, and to walk through using the az and kubectl tools to set all of this up.
The secondary takeaway is exposure to identities (also called principals) that are configured to access a cloud resource. In both articles, service principals are used to configure access. This article shows how to manually manage a service principal, while the previous article used the one that comes with the cluster, or more specifically with the virtual machines managed by the VMSS, called the kubelet identity.
This solution should be used as a last resort or for limited testing. For services that need read-write access to a cloud resource, such as ExternalDNS, the preferred method is to use AAD Pod Identity or the newer Azure Workload Identity. This will be covered in follow-up articles.