Set up Azure CNI overlay networking in Azure Kubernetes Service
Date: 15/05/2023 | Categories: azure

When using Azure CNI Overlay, the cluster nodes are deployed into an Azure Virtual Network (VNet) subnet, but pods receive IP addresses from a private CIDR that is logically separate from the VNet hosting the nodes. The overlay network carries pod and node traffic inside the cluster, while Network Address Translation (using the node's IP address) is applied to traffic that leaves the cluster. This approach lets you scale a cluster to very large sizes while consuming only a small number of VNet IP addresses. A further benefit is that the private CIDR can be reused across many AKS clusters, effectively expanding the IP space available to containerized applications in AKS.

In overlay networking, only the Kubernetes cluster nodes receive IP addresses from a subnet. Pods get IP addresses from a private CIDR that is specified when the cluster is created. Each node is allocated a /24 address space from that CIDR, and when you scale out the cluster, additional nodes are automatically given /24 spaces from the same CIDR. Azure CNI assigns pod IP addresses from the node's /24 space.
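To see the per-node allocation in practice, and assuming the node objects surface their /24 block in spec.podCIDR (the field per-node allocators typically populate), a quick kubectl check looks like this:
kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR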

Maximum number of pods per node

You can define the maximum number of pods per node when creating a cluster or adding a new node pool. The default for Azure CNI Overlay is 30, and you can set any value from a minimum of 10 up to a maximum of 250. The maximum-pods-per-node value configured when a node pool is created applies only to the nodes in that node pool.
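As an illustration, using hypothetical cluster and pool names, you could add a node pool with a higher pod density like this:
az aks nodepool add --cluster-name myOverlayCluster --resource-group myResourceGroup \
  --name largepool --max-pods 100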

Azure CNI Overlay limitations

  • You can’t use Application Gateway as an Ingress Controller (AGIC) for an Overlay cluster.
  • Windows support is still in Preview
    • Windows Server 2019 node pools are not supported for Overlay
    • Traffic from host-network pods cannot reach Windows Overlay pods.
  • Sovereign Clouds are not supported
  • Virtual Machine Availability Sets (VMAS) are not supported for Overlay
  • Dual-stack networking is not supported in Overlay
  • You can’t use DCsv2-series virtual machines in node pools. To meet Confidential Computing requirements, consider using DCasv5 or DCadsv5-series confidential VMs instead.

Configure Overlay clusters

Create a cluster with Azure CNI Overlay, using the --network-plugin-mode argument to specify that this is an overlay cluster. If the pod CIDR is not specified, AKS assigns a default space (10.244.0.0/16). Replace the values for the clusterName, resourceGroup, and location variables.

clusterName="myOverlayCluster"
resourceGroup="myResourceGroup"
location="westcentralus"
az aks create -n $clusterName -g $resourceGroup --location $location \
  --network-plugin azure --network-plugin-mode overlay \
  --pod-cidr 192.168.0.0/16
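To sanity-check the result once the cluster is up, you can query the network profile (the field names here follow the standard az aks show output):
az aks show -n $clusterName -g $resourceGroup \
  --query "{mode: networkProfile.networkPluginMode, podCidr: networkProfile.podCidr}" -o table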

Download and install the aks-preview Azure CLI extension (required for the Windows support preview)

To install the aks-preview extension, run the following command:
az extension add --name aks-preview

You can update the extension to the latest version by running the following command:
az extension update --name aks-preview
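You can confirm which version is installed with the az extension show command:
az extension show --name aks-preview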

Register the 'AzureOverlayPreview' feature flag
Register the AzureOverlayPreview feature flag by using the az feature register command, as shown in the following example:
az feature register --namespace "Microsoft.ContainerService" --name "AzureOverlayPreview"

It takes a few minutes for the status to show Registered. Verify the registration status by using the az feature show command:
az feature show --namespace "Microsoft.ContainerService" --name "AzureOverlayPreview"
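To poll just the state field, add a JMESPath query (properties.state is where the feature resource reports it):
az feature show --namespace "Microsoft.ContainerService" --name "AzureOverlayPreview" --query properties.state -o tsv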

When the status reflects Registered, refresh the registration of the Microsoft.ContainerService resource provider by using the az provider register command:
az provider register --namespace Microsoft.ContainerService
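Provider registration can itself take a few minutes; you can check its state with the az provider show command:
az provider show --namespace Microsoft.ContainerService --query registrationState -o tsv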


More details: Azure CNI overlay in AKS

