AKS with External DNS

Using external-dns add-on with Azure DNS and AKS

Update (2021-06-28): removed envsubst and terraform for simplicity

This article covers using ExternalDNS to automate updating DNS records when applications are deployed on Kubernetes. This is needed if you wish to use a public endpoint and would prefer a friendlier DNS name rather than a public IP address.

This article will configure the following components: an Azure DNS zone, an AKS cluster, ExternalDNS, and the example applications hello-kubernetes and Dgraph.

Blog Source Code

The blog source code for this article has instructions for using other tools like Terraform (terraform) for cloud resources and gettext (envsubst) for using the shell as a template engine to compose raw Kubernetes manifests or Helm chart values for use with either kubectl or helm. I also included snippets showing how to use either jq syntax or the JMESPath syntax used with the az --query flag.

Articles in the Series

These articles are part of a series, and below is a list of articles in the series.

  1. AKS with external-dns: service with LoadBalancer type

Previous Articles

Helmfile: In this article, I covered how to use this incredible tool called helmfile, which will be used in this tutorial.

Azure DNS Automation: In this article, I previously wrote about using Azure DNS for a public domain, and how you either transfer the domain to Azure DNS, or use a subdomain with Azure DNS.

Provision Azure Kubernetes Service: In this article, I cover how to provision a Kubernetes cluster with AKS (Azure Kubernetes Service) using the Azure CLI tool (az).

GKE with ExternalDNS: Previously, I wrote a similar article about how to do the same automation with a Kubernetes cluster on GKE (Google Kubernetes Engine) and Google Cloud DNS.

Requirements

Registered domain name

As this tutorial uses a public domain name, you will need to purchase one from a registrar to follow everything in this tutorial. This generally costs about $2 to $20 per year.

A fictional domain of example.com will be used as an example. Alternatively, you can experiment with a private domain, such as example.dev.

Required Tools

These tools are required.

  • Azure CLI tool (az): a command-line tool that interacts with the Azure API

Optional Tools

I highly recommend these tools:

  • POSIX shell (sh), e.g. GNU Bash (bash) or Zsh (zsh): the scripts in this guide were tested with either of these shells on macOS and Ubuntu Linux.

No Longer Required

These tools were used in an earlier version of this blog. They are no longer required, but can still be used (instructions are in the blog source code).

  • gettext (envsubst): a set of utilities that includes a tool (envsubst) to substitute environment variables in text files. This is useful for creating templates with the shell.

Project Setup

As this project has a few moving parts (Azure DNS, AKS, ExternalDNS with the example applications Dgraph and hello-kubernetes), these next few steps will help keep things consistent.

Project File Structure

The following structure will be used:

~/azure_externaldns/
├── env.sh
├── examples
│   ├── dgraph
│   │   └── helmfile.yaml
│   └── hello
│       └── helmfile.yaml
└── helmfile.yaml

With either Bash or Zsh, you can create the file structure with the following commands:

mkdir -p ~/azure_externaldns/examples/{dgraph,hello}
cd ~/azure_externaldns
touch env.sh helmfile.yaml examples/{dgraph,hello}/helmfile.yaml

Project Environment Variables

Set up these environment variables below to keep things consistent amongst a variety of tools: helm, helmfile, kubectl, jq, and az.

If you are using a POSIX shell, you can save these into a script and source that script whenever needed. Copy this source script and save as env.sh:
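
Below is a minimal sketch of what env.sh could contain. The variable names match the ones used throughout this article, but the values shown are placeholders that you should replace with your own:

#!/usr/bin/env bash
# env.sh -- project environment variables (placeholder values; edit before sourcing)

export AZ_RESOURCE_GROUP="azure-externaldns"   # resource group for this project
export AZ_LOCATION="westus2"                   # Azure region
export AZ_CLUSTER_NAME="aks-externaldns"       # AKS cluster name
export AZ_DNS_DOMAIN="example.com"             # replace with your registered domain
# export AZ_VM_SIZE="Standard_DS2_v2"          # optional: worker node VM size

# tenant and subscription from the current az login context
export AZ_TENANT_ID=$(az account show --query tenantId --output tsv)
export AZ_SUBSCRIPTION_ID=$(az account show --query id --output tsv)

# kubeconfig used by kubectl, helm, and helmfile
export KUBECONFIG=${KUBECONFIG:-$HOME/.kube/config}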

You will be required to change AZ_DNS_DOMAIN to a domain that you have registered, as example.com is already owned. Make sure you transfer domain control to Azure DNS.

Additionally, you can use the defaults or opt to change values for AZ_RESOURCE_GROUP, AZ_LOCATION, and AZ_CLUSTER_NAME.

This env.sh file will be used for the rest of the project. When finished with the necessary edits, source it:

source env.sh

Provisioning Azure Resources

Resource Group

In Azure, resources are organized under resource groups.

source env.sh
az group create -n $AZ_RESOURCE_GROUP -l $AZ_LOCATION

Provisioning Azure DNS Zone

You can create the DNS zone for your domain using the Azure CLI tool (az):

source env.sh
az network dns zone create \
  --resource-group ${AZ_RESOURCE_GROUP} \
  --name ${AZ_DNS_DOMAIN}

Verifying Azure DNS Zone

Gather information on a particular domain by using the built-in --query flag with JMESPath syntax:

az network dns zone list --query "[?name=='$AZ_DNS_DOMAIN']"
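
If the domain is registered outside of Azure, also confirm that the registrar's NS records point to the name servers assigned to this zone:

az network dns zone show \
  --resource-group $AZ_RESOURCE_GROUP \
  --name $AZ_DNS_DOMAIN \
  --query "nameServers" \
  --output tsv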

Provisioning an AKS Cluster

The second half of this exercise requires a Kubernetes cluster: AKS (Azure Kubernetes Service) configured with Managed Identity enabled. If you already have an existing AKS cluster available, you can use this.

In a previous article, I walked through how to set this up using Azure CLI tool (az). You can use that guide, or run through these steps below:

az aks create \
  --resource-group ${AZ_RESOURCE_GROUP} \
  --name ${AZ_CLUSTER_NAME} \
  --generate-ssh-keys \
  --vm-set-type VirtualMachineScaleSets \
  --node-vm-size ${AZ_VM_SIZE:-Standard_DS2_v2} \
  --load-balancer-sku standard \
  --enable-managed-identity \
  --node-count 3 \
  --zones 1 2 3

az aks get-credentials \
  --resource-group ${AZ_RESOURCE_GROUP} \
  --name ${AZ_CLUSTER_NAME} \
  --file ${KUBECONFIG:-$HOME/.kube/config}

When completed, you should be able to see resources already allocated in Kubernetes with this command:

kubectl get all --all-namespaces

The results should show the default system components that AKS deploys in the kube-system namespace.

Managed Identity

The AKS cluster will need to use Managed Identity. If this was not enabled with the creation of the AKS cluster, you can add it now with the following command:

az aks update -g $AZ_RESOURCE_GROUP -n $AZ_CLUSTER_NAME \
--enable-managed-identity

NOTE: This may cause some confusion, but Managed Identity is the new name for MSI (Managed Service Identity). There may be guides or documentation using the earlier term.
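
You can check whether Managed Identity is already enabled on the cluster with:

# shows the cluster identity; a managed-identity cluster typically reports type "SystemAssigned"
az aks show \
  --resource-group $AZ_RESOURCE_GROUP \
  --name $AZ_CLUSTER_NAME \
  --query "identity"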

Authorizing access to Azure DNS

We need to grant the Managed Identity on the VMSS node pool workers access to the Azure DNS zone. This will allow the external-dns automation to work when running on the AKS cluster.

NOTE: A Managed Identity is a wrapper around service principals to make management simpler. Essentially, they are mapped to an Azure resource, so that when the Azure resource no longer exists, the associated service principal will be removed.

ExternalDNS using Managed Identity

First we want to get the scope, which has a format like this:

/subscriptions/<subscription id>/resourceGroups/<resource group name>/providers/Microsoft.Network/dnszones/<zone name>/

Fetch the AZ_DNS_SCOPE and AZ_PRINCIPAL_ID, and then grant access to this specific Azure DNS zone:

export AZ_DNS_SCOPE=$(
  az network dns zone list \
    --query "[?name=='$AZ_DNS_DOMAIN'].id" \
    --output table | tail -1
)

export AZ_PRINCIPAL_ID=$(
  az aks show -g $AZ_RESOURCE_GROUP -n $AZ_CLUSTER_NAME \
    --query "identityProfile.kubeletidentity.objectId" | tr -d '"'
)

az role assignment create \
  --assignee "$AZ_PRINCIPAL_ID" \
  --role "DNS Zone Contributor" \
  --scope "$AZ_DNS_SCOPE"
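
You can confirm that the role assignment was created with:

# should list the "DNS Zone Contributor" role for the kubelet identity on the zone scope
az role assignment list \
  --assignee "$AZ_PRINCIPAL_ID" \
  --scope "$AZ_DNS_SCOPE" \
  --output table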

Installing External DNS

Now comes the fun part, installing the automation so that services with endpoints can automatically register records in the Azure DNS Zone when deployed.

Using Helmfile

Copy the following below and save as helmfile.yaml:
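
A rough sketch of this file, assuming the Bitnami external-dns chart and the kube-addons namespace used later in this article (verify the value names against the chart's documentation), created here with a shell heredoc:

# write a minimal helmfile.yaml (sketch only; adjust to your needs)
cat > helmfile.yaml <<'EOF'
repositories:
  - name: bitnami
    url: https://charts.bitnami.com/bitnami

releases:
  - name: external-dns
    namespace: kube-addons
    chart: bitnami/external-dns
    values:
      - provider: azure
        txtOwnerId: external-dns
        domainFilters:
          - {{ requiredEnv "AZ_DNS_DOMAIN" }}
        azure:
          resourceGroup: {{ requiredEnv "AZ_RESOURCE_GROUP" }}
          tenantId: {{ requiredEnv "AZ_TENANT_ID" }}
          subscriptionId: {{ requiredEnv "AZ_SUBSCRIPTION_ID" }}
          useManagedIdentityExtension: true
EOF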

Make sure that the appropriate environment variables are set up before running this command: AZ_RESOURCE_GROUP, AZ_DNS_DOMAIN, AZ_TENANT_ID, and AZ_SUBSCRIPTION_ID. Otherwise, this script will fail.

Once ready, simply run:

helmfile apply

Testing ExternalDNS is Running

You can test that the external-dns pod is running with:

LABEL_NAME="app.kubernetes.io/name=external-dns"
LABEL_INSTANCE="app.kubernetes.io/instance=external-dns"
EXTERNAL_DNS_POD_NAME=$(
  kubectl \
    --namespace kube-addons get pods \
    --selector "$LABEL_NAME,$LABEL_INSTANCE" \
    --output name
)
kubectl logs --namespace kube-addons $EXTERNAL_DNS_POD_NAME

If there are errors in the logs about authorization, you know immediately that the setup is not working. You’ll need to verify that the appropriate role was assigned on the Azure DNS zone scope to the correct service principal, that is, the managed identity created for the VMSS node pool workers.

Example using hello-kubernetes

hello-kubernetes is a simple application that prints out the pod names. This application can demonstrate that the automation with ExternalDNS and Azure DNS worked correctly.

A service of LoadBalancer type will be configured with the required annotation to tell ExternalDNS the desired DNS A record to configure. ExternalDNS will scan services for this annotation, and then trigger the automation.
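
The annotation that ExternalDNS looks for on Services is external-dns.alpha.kubernetes.io/hostname. For illustration only, here is how it could be attached to an existing Service by hand (the Service name hello-kubernetes and namespace hello are assumptions for this example):

# illustration: annotate an existing Service so ExternalDNS creates hello.<your domain>
kubectl annotate service hello-kubernetes \
  --namespace hello \
  "external-dns.alpha.kubernetes.io/hostname=hello.${AZ_DNS_DOMAIN}"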

hello-kubernetes with LoadBalancer

Deploy hello-kubernetes using helmfile

Copy and paste the following manifest template below as examples/hello/helmfile.yaml:

We can deploy this using the following command:

source env.sh
helmfile --file examples/hello/helmfile.yaml apply

Verify Hello Kubernetes is deployed

kubectl get all --namespace hello

You should see resources similar to this deployed:

hello-kubernetes deployment

Verify Hello Kubernetes works

After a few moments, you can check the results at http://hello.example.com (substituting your domain for example.com).
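
You can also confirm that ExternalDNS created the corresponding record in the zone, and resolve the name directly:

# list A records in the zone; a 'hello' record set should appear after a minute or two
az network dns record-set a list \
  --resource-group $AZ_RESOURCE_GROUP \
  --zone-name $AZ_DNS_DOMAIN \
  --output table

# resolve the new record and fetch the page by name
dig +short hello.$AZ_DNS_DOMAIN
curl --silent http://hello.$AZ_DNS_DOMAIN | head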

Cleanup Hello Kubernetes

helm delete hello-kubernetes --namespace hello

Example using Dgraph

Dgraph is a distributed graph database and has a helm chart that can be used to install Dgraph into a Kubernetes cluster. You can use either helmfile or helm methods to install Dgraph.
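
For reference, a plain helm install with the chart's default values (so without the public LoadBalancer endpoints configured below) would look like this; the helmfile method is what the rest of this section uses:

# plain-helm alternative using the Dgraph chart repository
# (the release name "demo" matches the cleanup steps later)
helm repo add dgraph https://charts.dgraph.io
helm repo update
helm install demo dgraph/dgraph --namespace dgraph --create-namespace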

Dgraph Alpha and Dgraph Ratel with 2 LoadBalancers

Securing Dgraph

Normally, you would only have the database available through private endpoints, not on the public Internet. However, for demonstration purposes, public endpoints through services of type LoadBalancer will be used.

We can take the precaution of adding an allow list (whitelist) that contains your IP address as well as the AKS IP ranges used for pods and services. You can do that in Bash or Zsh with the following commands:

# get AKS pod and service IP addresses
DG_ALLOW_LIST=$(az aks show \
  --name $AZ_CLUSTER_NAME \
  --resource-group $AZ_RESOURCE_GROUP | \
  jq -r '.networkProfile.podCidr,.networkProfile.serviceCidr' | \
  tr '\n' ','
)
# append home office IP address
MY_IP_ADDRESS=$(curl --silent ifconfig.me)
DG_ALLOW_LIST="${DG_ALLOW_LIST}${MY_IP_ADDRESS}/32"
export DG_ALLOW_LIST

Deploy Dgraph with Helmfile

Copy the following gist below and save as examples/dgraph/helmfile.yaml:

When ready, run the following:

helmfile --file examples/dgraph/helmfile.yaml apply

Verify Dgraph is deployed

Check that the services are running:

kubectl --namespace dgraph get all

This should output something similar to the following:

Dgraph deployment with LoadBalancer endpoints

Verify Dgraph Alpha health check

Verify that Dgraph Alpha is accessible via the domain name (substituting your domain for example.com):

curl --silent http://alpha.example.com:8080/health | jq

This should output something similar to the following:

/health (HTTP)

Test solution with Dgraph Ratel web user interface

After a few moments, you can check the results at http://ratel.example.com (substituting your domain for example.com).

In the dialog for the Dgraph Server Connection, configure the domain, e.g. http://alpha.example.com:8080 (substituting your domain for example.com).

From there, you can run through some tutorials like https://dgraph.io/docs/get-started/ to create a small Star Wars graph database and run some queries.
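
Once some data has been loaded (for example, the Star Wars dataset from that tutorial), you can also query Dgraph Alpha directly over HTTP. This is a hypothetical example; depending on your Dgraph version, the Content-Type for DQL queries is application/dql (newer releases) or application/graphql+- (older releases):

# query films and their names, assuming the Star Wars tutorial data was loaded
curl --silent \
  --header "Content-Type: application/dql" \
  --data '{ me(func: has(starring)) { name } }' \
  "http://alpha.${AZ_DNS_DOMAIN}:8080/query" | jq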

Cleanup Dgraph resources

You can remove Dgraph resources, load balancer, and external disks with the following:

helm delete demo --namespace dgraph
kubectl delete pvc --namespace dgraph --selector release=demo

NOTE: If you delete the AKS cluster without going through this step, there will be disks left over that will incur costs.

Cleanup the Project

You can clean up resources that can incur costs with the following:

Remove External Disks

Before deleting the AKS cluster, make sure any disks that were used are removed; otherwise, they will be left behind and incur costs.

kubectl get pvc --all-namespaces

If you see these, then you need to remove the disks associated with the project:

The Dgraph external disks can be removed with:

kubectl delete pvc --namespace dgraph --selector release=demo

Remove the AKS Cluster

This will remove the AKS cluster and associated resources:

az aks delete \
--resource-group $AZ_RESOURCE_GROUP \
--name $AZ_CLUSTER_NAME

Remove Azure DNS Zone

az network dns zone delete \
--resource-group $AZ_RESOURCE_GROUP \
--name $AZ_DNS_DOMAIN


Conclusion

I don’t know where to begin with this. On the surface, this seems easy:

automate DNS record updates with ExternalDNS when deploying applications on the AKS flavor of Kubernetes.

As you can see, it is a little more complex, as configuring cloud resources, both Kubernetes and Azure, can zigzag through a plethora of tools.

And though this may seem intense, this is only scratching the surface, as there are other areas to consider.

In any event, I hope this is useful and best of success in your AKS journey.

Linux NinjaPants Automation Engineering Mutant — exploring DevOps, Kubernetes, CNI, IAC
