AKS with Azure CNI

Configuring Azure CNI Network Plugin with PodSubnetPreview

This article covers configuring an AKS cluster with the Azure CNI network plugin.

Default kubenet plugin

Kubenet is a very basic, simple network plugin, available on Linux only. It does not, by itself, implement more advanced features like cross-node networking or network policy. It is typically used together with a cloud provider that sets up routing rules for communication between nodes, or in single-node environments.

Introducing Azure CNI

This setup is not particularly efficient with a large number of pods, which can exhaust IP addresses, and not particularly secure, as there can only be one VNet policy covering both pods and nodes.

The Solution: PodSubnetPreview

For this solution to work, we’ll need to run through these four steps:

  1. Create a virtual network with two subnets: one for pods and one for nodes
  2. Create a managed identity, which will be assigned to the AKS cluster
  3. Enable PodSubnetPreview feature
  4. Create an AKS cluster with pod and node subnets and managed identity

Afterward, we can verify that the IP addresses are using separate subnets for the pods and for the nodes.

About the Network Plugins

Kubenet Network Plugin

With the kubenet network plugin, there’s an overlay network on each node with a subnet that begins with 10.244.x.x. Traffic on a node uses IP forwarding on the node itself, while traffic between nodes uses Azure UDR (user-defined routing) rules to send the packets to the correct node.

In this scenario (see the image above), traffic bound for pods on a node’s overlay subnet will be routed to that node’s address by the UDR rules. Once it arrives there, IP forwarding will send it to the bridge (cbr0), and then to the pod on the overlay network.

Azure CNI Network Plugin

With the Azure CNI network plugin without the PodSubnetPreview feature, both pods and nodes will have IP addresses on the same Azure VNet subnet (shown in the diagram).

Anything on the subnet outside of the Kubernetes cluster can reach both the pods and the nodes equally, without needing to go through a load balancer.

The PodSubnetPreview feature will use two subnets within a single Azure VNet: one for the pods, and one for the nodes. In the diagram, the difference is that the pods are on a separate subnet.


Provisioning Tools

Deployment Tools

  • Helm (helm): command line tool for “templating and sharing Kubernetes manifests” (ref) that are bundled as Helm chart packages.
  • helm-diff plugin: allows you to see the changes made with helm or helmfile before applying the changes.
  • Helmfile (helmfile): command line tool that uses a “declarative specification for deploying Helm charts across many environments” (ref). This tool can deploy Kubernetes manifests, Helm charts, and even kustomize.

Other Tools (optional)

  • POSIX shell (sh), such as GNU Bash (bash) or Zsh (zsh), is the execution environment used to run all of the above tools on either macOS or Ubuntu Linux.

Project Setup

├── demos
│   └── hello-kubernetes
│       └── helmfile.yaml
└── env.sh

You can create this with the following commands:
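The original listing is not included here; a minimal sketch that creates the layout shown above:

```shell
# Create the project directory layout for this article
mkdir -p demos/hello-kubernetes
touch demos/hello-kubernetes/helmfile.yaml
touch env.sh
```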

Project environment variables
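The original env.sh listing is not included here. A sketch of what it might contain; aside from AZ_RESOURCE_GROUP, which is used later in this article, every variable name and value below is an assumption:

```shell
# env.sh — a minimal sketch; names and values are examples only
export AZ_RESOURCE_GROUP="aks-podsubnet-demo"
export AZ_LOCATION="eastus2"
export AZ_VNET_NAME="aks-vnet"
export AZ_NODE_SUBNET_NAME="node-subnet"
export AZ_POD_SUBNET_NAME="pod-subnet"
export AZ_IDENTITY_NAME="aks-podsubnet-identity"
export AZ_AKS_CLUSTER_NAME="aks-podsubnet"
```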

Provision Azure resources

Azure resources will be provisioned in two phases:

  1. Network Infrastructure and Managed Identity
  2. Azure Kubernetes Service

Enable Pod Subnet Preview
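A sketch of registering the preview feature with the Azure CLI (requires an authenticated az session; registration can take several minutes):

```shell
# Install the aks-preview CLI extension and register the feature flag
az extension add --name aks-preview
az feature register --namespace "Microsoft.ContainerService" --name "PodSubnetPreview"

# Poll until the state shows "Registered", then refresh the provider
az feature show --namespace "Microsoft.ContainerService" --name "PodSubnetPreview" \
  --query properties.state
az provider register --namespace Microsoft.ContainerService
```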

Azure Resources
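The original listing is not included here; a sketch of creating the network infrastructure and managed identity, assuming the variable names from env.sh above (the address ranges are examples only):

```shell
source env.sh

az group create --name "${AZ_RESOURCE_GROUP}" --location "${AZ_LOCATION}"

# Virtual network with a node subnet, plus a second subnet for pods
az network vnet create \
  --resource-group "${AZ_RESOURCE_GROUP}" \
  --name "${AZ_VNET_NAME}" \
  --address-prefixes 10.0.0.0/8 \
  --subnet-name "${AZ_NODE_SUBNET_NAME}" \
  --subnet-prefixes 10.240.0.0/16

az network vnet subnet create \
  --resource-group "${AZ_RESOURCE_GROUP}" \
  --vnet-name "${AZ_VNET_NAME}" \
  --name "${AZ_POD_SUBNET_NAME}" \
  --address-prefixes 10.241.0.0/16

# Managed identity that will be assigned to the AKS cluster
az identity create \
  --resource-group "${AZ_RESOURCE_GROUP}" \
  --name "${AZ_IDENTITY_NAME}"
```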

Azure Kubernetes Service
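The original listing is not included here; a sketch of creating the cluster with the Azure CNI plugin, Calico network policies, separate pod and node subnets, and the managed identity (variable names again assumed from env.sh):

```shell
source env.sh

IDENTITY_ID=$(az identity show \
  --resource-group "${AZ_RESOURCE_GROUP}" \
  --name "${AZ_IDENTITY_NAME}" \
  --query id --output tsv)

NODE_SUBNET_ID=$(az network vnet subnet show \
  --resource-group "${AZ_RESOURCE_GROUP}" \
  --vnet-name "${AZ_VNET_NAME}" \
  --name "${AZ_NODE_SUBNET_NAME}" \
  --query id --output tsv)

POD_SUBNET_ID=$(az network vnet subnet show \
  --resource-group "${AZ_RESOURCE_GROUP}" \
  --vnet-name "${AZ_VNET_NAME}" \
  --name "${AZ_POD_SUBNET_NAME}" \
  --query id --output tsv)

az aks create \
  --resource-group "${AZ_RESOURCE_GROUP}" \
  --name "${AZ_AKS_CLUSTER_NAME}" \
  --network-plugin azure \
  --network-policy calico \
  --vnet-subnet-id "${NODE_SUBNET_ID}" \
  --pod-subnet-id "${POD_SUBNET_ID}" \
  --enable-managed-identity \
  --assign-identity "${IDENTITY_ID}"
```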

Verify Results: Managed Identity Roles

You can verify this with the following command:
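A sketch of listing the role assignments for the cluster’s managed identity (the identity name variable is assumed from env.sh):

```shell
source env.sh

PRINCIPAL_ID=$(az identity show \
  --resource-group "${AZ_RESOURCE_GROUP}" \
  --name "${AZ_IDENTITY_NAME}" \
  --query principalId --output tsv)

az role assignment list --assignee "${PRINCIPAL_ID}" --all --output table
```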

This should look something like the following:

Verify Results: AKS Cluster

source env.sh
kubectl get all --all-namespaces

This should look something like the following:

AKS (useast2) with Azure CNI and Calico

Of particular note is azure-cns, or Azure Container Networking, which provides the Azure CNI network and IPAM plugins, along with Calico for network policies.

Verify Results: IP Addresses

Run the following commands:
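The original listing is not included here; a sketch of commands that show which subnet each pod and node address comes from (requires kubectl configured against the cluster):

```shell
# Wide output includes the IP address assigned to each pod and node
kubectl get pods --all-namespaces --output wide
kubectl get nodes --output wide
```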

Afterward, you should see something like the following:

You will notice the following:

  • the pod subnet will have the pods (except daemonset pods)
  • the node subnet will have the nodes, plus the pods that are part of a daemonset (kube-proxy, Azure CNS, and Calico)

The Demo Program: hello-kubernetes

Copy the following script and save as demos/hello-kubernetes/helmfile.yaml:
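The original listing is not included here; a minimal sketch of what the helmfile.yaml might look like. The chart location is an assumption (e.g. a local copy of the hello-kubernetes chart):

```yaml
# demos/hello-kubernetes/helmfile.yaml — minimal sketch
helmDefaults:
  createNamespace: true

releases:
  - name: hello-kubernetes
    namespace: hello
    chart: ./chart/hello-kubernetes  # hypothetical local chart path
```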

Deploy this with the following commands:

helmfile --file demos/hello-kubernetes/helmfile.yaml apply

You can access one of the pods using port-forward:

kubectl port-forward --namespace hello \
  service/hello-kubernetes 8080:80

And you should see something like this at http://localhost:8080:


AKS Cluster

az aks delete \
  --resource-group "${AZ_RESOURCE_GROUP}" \
  --name "${AZ_AKS_CLUSTER_NAME}"  # cluster name variable assumed from env.sh

Azure Resources

az group delete --name "${AZ_RESOURCE_GROUP}"

NOTE: Obviously, be very careful with this command; you do NOT want to do the proverbial rm -rf /.


Blog Source Code


I have written other articles that use Azure CNI, but do not delve into segregating the VNETs into pod and node subnets, as the articles already have a layer of complexity on the topics they cover.