AKS with Azure CNI

Configuring Azure CNI Network Plugin with PodSubnetPreview

Joaquín Menchaca (智裕)
Sep 6, 2021


This article covers configuring an AKS cluster with the Azure CNI network plugin.

Default kubenet plugin

The default network plugin that comes with a vanilla Kubernetes cluster, including AKS, is kubenet:

Kubenet is a very basic, simple network plugin, on Linux only. It does not, of itself, implement more advanced features like cross-node networking or network policy. It is typically used together with a cloud provider that sets up routing rules for communication between nodes, or in single-node environments.

Introducing Azure CNI

With the Azure CNI network plugin, you can utilize Azure’s network infrastructure for Kubernetes. In this scenario, both Kubernetes nodes and pods are parked on the same VNET (virtual network).

This setup is not particularly efficient with a large number of pods, which can exhaust IP addresses, and not particularly secure, as there can only be one VNet policy for both pods and nodes.

The Solution: PodSubnetPreview

A solution to ameliorate this scenario is to place pods on their own separate subnet and dynamically allocate their IPs using PodSubnetPreview. This solution grants better IP utilization, is scalable and flexible, has high performance, allows for separate VNet policies for pods, and lets you use Kubernetes network policies (ref Dynamic allocation of IPs and enhanced subnet support).

For this solution to work, we’ll need to run through these four steps:

  1. Create a virtual network with two subnets: one for pods and one for nodes
  2. Create a managed identity, which will be assigned to the AKS cluster
  3. Enable PodSubnetPreview feature
  4. Create an AKS cluster with pod and node subnets and managed identity

Afterward, we can verify that the IP addresses are using separate subnets for the pods and for the nodes.

About the Network Plugins

For further details, the background below describes how traffic works on each of the network plugins.

Kubenet Network Plugin

With the kubenet network plugin, there’s an overlay network on each node with a subnet that begins with 10.244.x.x. Traffic uses IP forwarding on the nodes themselves, and between the nodes it uses Azure UDR (user-defined routing) rules to send the packets to the correct node.

In this scenario (see the image above), traffic heading to pods on the 10.244.2.0/24 subnet will be routed to 10.240.0.4, and traffic bound for pods on the 10.244.0.0/24 subnet will be routed to 10.240.0.5. Once it arrives there, IP forwarding will send it to the bridge (cbr0), and then to the pod on the overlay network.

Azure CNI Network Plugin

With the Azure CNI network plugin (without the PodSubnetPreview feature), both pods and nodes will have IP addresses on the same Azure VNET, which in the diagram is 10.240.0.0/16.

Anything on the subnet outside of the Kubernetes cluster can reach both the pods and nodes equally, without the need to go through a load balancer.

The PodSubnetPreview feature will instead use two subnets within the VNET: one for the pods and one for the nodes. For the diagram, the difference is that the pods are on a different subnet.

Requirements

Provisioning Tools

These are the baseline tools needed to work with Azure and Kubernetes:

  • Azure CLI (az): command line tool to provision and manage Azure resources, including the AKS cluster.
  • Kubectl (kubectl): command line tool to interact with the Kubernetes cluster.

Deployment Tools

Deploying the demo program can be done with the helm and helmfile tools.

  • Helm (helm): command line tool for “templating and sharing Kubernetes manifests” (ref) that are bundled as Helm chart packages.
  • helm-diff plugin: allows you to see the changes made with helm or helmfile before applying the changes.
  • Helmfile (helmfile): command line tool that uses a “declarative specification for deploying Helm charts across many environments” (ref). This tool can deploy Kubernetes manifests, Helm charts, and even kustomize.

Other Tools (optional)

  • A POSIX shell (sh), such as GNU Bash (bash) or Zsh (zsh), is the execution environment used to run all of the above tools on either macOS or Ubuntu Linux.

Project Setup

The following file structure will be used:

~/azure_podsubnet/
├── demos
│   └── hello-kubernetes
│       └── helmfile.yaml
└── env.sh

You can create this with the following commands:
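
For example, matching the tree above:

mkdir -p ~/azure_podsubnet/demos/hello-kubernetes
cd ~/azure_podsubnet
touch env.sh demos/hello-kubernetes/helmfile.yaml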

Project environment variables

Set up these environment variables in a file env.sh, which will be sourced before running the commands in later steps.
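
Here is a minimal sketch of env.sh; the resource names and region are assumptions (adjust to taste), while the subnet ranges match the results verified later:

# env.sh — project environment variables (names are assumptions)
export AZ_RESOURCE_GROUP="azure-podsubnet"
export AZ_LOCATION="eastus2"
export AZ_VNET_NAME="aks-vnet"
export AZ_VNET_CIDR="10.242.0.0/15"
export AZ_NODE_SUBNET_NAME="node-subnet"
export AZ_NODE_SUBNET_CIDR="10.243.0.0/16"
export AZ_POD_SUBNET_NAME="pod-subnet"
export AZ_POD_SUBNET_CIDR="10.242.0.0/16"
export AZ_IDENTITY_NAME="aks-podsubnet-identity"
export AZ_AKS_CLUSTER_NAME="aks-podsubnet-demo"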

Provision Azure resources

Azure resources will be provisioned in two phases:

  1. Network Infrastructure and Managed Identity
  2. Azure Kubernetes Service

Enable Pod Subnet Preview

First, run these commands to enable the pod subnet preview.
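
A sketch of the feature registration, assuming the aks-preview extension for the Azure CLI:

az extension add --name aks-preview

az feature register \
  --namespace "Microsoft.ContainerService" \
  --name "PodSubnetPreview"

# repeat until the state shows "Registered"
az feature show \
  --namespace "Microsoft.ContainerService" \
  --name "PodSubnetPreview" \
  --query "properties.state"

# propagate the feature registration
az provider register --namespace "Microsoft.ContainerService"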

Azure Resources

Run the commands below to create the common resource group, network infrastructure (a virtual network with two subnets), and the managed identity:
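
A sketch using the variables from env.sh above; the address ranges are assumptions chosen to match the subnets verified later:

source env.sh

# common resource group
az group create \
  --name "${AZ_RESOURCE_GROUP}" \
  --location "${AZ_LOCATION}"

# virtual network with the node subnet
az network vnet create \
  --resource-group "${AZ_RESOURCE_GROUP}" \
  --name "${AZ_VNET_NAME}" \
  --address-prefixes "${AZ_VNET_CIDR}" \
  --subnet-name "${AZ_NODE_SUBNET_NAME}" \
  --subnet-prefixes "${AZ_NODE_SUBNET_CIDR}"

# separate subnet for the pods
az network vnet subnet create \
  --resource-group "${AZ_RESOURCE_GROUP}" \
  --vnet-name "${AZ_VNET_NAME}" \
  --name "${AZ_POD_SUBNET_NAME}" \
  --address-prefixes "${AZ_POD_SUBNET_CIDR}"

# managed identity that will be assigned to the AKS cluster
az identity create \
  --resource-group "${AZ_RESOURCE_GROUP}" \
  --name "${AZ_IDENTITY_NAME}"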

Azure Kubernetes Service

Run the commands below to create the AKS cluster and configure access to the cluster.
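
A sketch under the same assumptions. The --pod-subnet-id flag is what places pods on their own subnet, and Calico is included for network policies, as it shows up in the verification below:

source env.sh

# look up the resource IDs needed by az aks create
NODE_SUBNET_ID=$(az network vnet subnet show \
  --resource-group "${AZ_RESOURCE_GROUP}" \
  --vnet-name "${AZ_VNET_NAME}" \
  --name "${AZ_NODE_SUBNET_NAME}" \
  --query id --output tsv)

POD_SUBNET_ID=$(az network vnet subnet show \
  --resource-group "${AZ_RESOURCE_GROUP}" \
  --vnet-name "${AZ_VNET_NAME}" \
  --name "${AZ_POD_SUBNET_NAME}" \
  --query id --output tsv)

IDENTITY_ID=$(az identity show \
  --resource-group "${AZ_RESOURCE_GROUP}" \
  --name "${AZ_IDENTITY_NAME}" \
  --query id --output tsv)

# create the cluster with separate node and pod subnets
az aks create \
  --resource-group "${AZ_RESOURCE_GROUP}" \
  --name "${AZ_AKS_CLUSTER_NAME}" \
  --network-plugin azure \
  --vnet-subnet-id "${NODE_SUBNET_ID}" \
  --pod-subnet-id "${POD_SUBNET_ID}" \
  --enable-managed-identity \
  --assign-identity "${IDENTITY_ID}" \
  --network-policy calico

# configure access to the cluster
az aks get-credentials \
  --resource-group "${AZ_RESOURCE_GROUP}" \
  --name "${AZ_AKS_CLUSTER_NAME}"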

Verify Results: Managed Identity Roles

Afterward, we can see that the automation from the AKS add-on (PodSubnetPreview) will assign the managed identity the Contributor role on the AKS resource group and the Network Contributor role on the node subnet.

You can verify this with the following command:
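
A sketch that lists the role assignments granted to the managed identity created earlier:

source env.sh

PRINCIPAL_ID=$(az identity show \
  --resource-group "${AZ_RESOURCE_GROUP}" \
  --name "${AZ_IDENTITY_NAME}" \
  --query principalId --output tsv)

az role assignment list \
  --assignee "${PRINCIPAL_ID}" \
  --all \
  --output table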

The output should show both of the role assignments described above.

Verify Results: AKS Cluster

You can verify access to the cluster and components that were deployed on the cluster with the following commands:

source env.sh
kubectl get all --all-namespaces

This should look something like the following:

AKS (useast2) with Azure CNI and Calico

Of particular note are azure-cns (Azure Container Networking) for the Azure CNI network and IPAM plugins, and Calico for network policies.

Verify Results: IP Addresses

We want to look at the IP addresses used by both the nodes and the pods and verify that they are on different subnets.

Run the commands below:
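
The wide output format includes each node’s and pod’s IP address:

source env.sh
kubectl get nodes --output wide
kubectl get pods --all-namespaces --output wide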

In the output, you will notice the following:

  • 10.242.0.0/16 subnet will have pods (except daemonset pods)
  • 10.243.0.0/16 subnet will have nodes and pods that are part of a daemonset (Kube Proxy, Azure CNS, and Calico)

The Demo Program: hello-kubernetes

The demo program hello-kubernetes will just show off information about the node and pod that is running the service. These are simple Kubernetes manifests packaged up in a helmfile script.

Copy the following script and save as demos/hello-kubernetes/helmfile.yaml:
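
Here is a minimal sketch of such a script; the paulbouwer/hello-kubernetes image and the incubator raw chart (used to bundle plain manifests) are assumptions, so the original may differ:

repositories:
  - name: incubator
    url: https://charts.helm.sh/incubator

releases:
  - name: hello-kubernetes
    namespace: hello
    chart: incubator/raw
    values:
      - resources:
          # Deployment that runs the hello-kubernetes web application
          - apiVersion: apps/v1
            kind: Deployment
            metadata:
              name: hello-kubernetes
            spec:
              replicas: 3
              selector:
                matchLabels:
                  app: hello-kubernetes
              template:
                metadata:
                  labels:
                    app: hello-kubernetes
                spec:
                  containers:
                    - name: hello-kubernetes
                      image: paulbouwer/hello-kubernetes:1.10
                      ports:
                        - containerPort: 8080
                      env:
                        # surface the node and pod details in the web page
                        - name: KUBERNETES_NODE_NAME
                          valueFrom:
                            fieldRef:
                              fieldPath: spec.nodeName
                        - name: KUBERNETES_POD_NAME
                          valueFrom:
                            fieldRef:
                              fieldPath: metadata.name
                        - name: KUBERNETES_POD_IP
                          valueFrom:
                            fieldRef:
                              fieldPath: status.podIP
          # Service targeted by the port-forward command below
          - apiVersion: v1
            kind: Service
            metadata:
              name: hello-kubernetes
            spec:
              type: ClusterIP
              ports:
                - port: 80
                  targetPort: 8080
              selector:
                app: hello-kubernetes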

Deploy this with the following commands:

helmfile --file demos/hello-kubernetes/helmfile.yaml apply

You can access one of the pods using port-forward:

kubectl port-forward --namespace hello \
service/hello-kubernetes 8080:80

And at http://localhost:8080, you should see a web page showing information about the pod and node serving the request.

Cleanup

AKS Cluster

If you just want to delete the Kubernetes cluster and associated resources, e.g. load balancer, managed identity, VMSS, and NSG, then run this command:

az aks delete \
--resource-group "${AZ_RESOURCE_GROUP}" \
--name "${AZ_AKS_CLUSTER_NAME}"

Azure Resources

To delete everything, including the virtual network and managed identity, run this command:

az group delete --name "${AZ_RESOURCE_GROUP}"

NOTE: Obviously, be very careful with this command; you do NOT want to do the proverbial rm -rf /.

Resources

Blog Source Code

Conclusion

That’s all there is to this: creating the network infrastructure, creating a managed identity to manage that infrastructure, and creating the Kubernetes (AKS) cluster itself.

I have written other articles that use Azure CNI, but do not delve into segregating the VNETs into pod and node subnets, as the articles already have a layer of complexity on the topics they cover.
