Suppose you want to resize an existing node pool, called nodepool1, from SKU size Standard_DS3_v2 to Standard_DS4_v2. To complete this task, you need to create a new node pool using Standard_DS4_v2, move the workloads from nodepool1 to the new node pool, and then remove nodepool1. In this example, the new node pool is called mynodepool.
View the existing nodes in your AKS cluster:
kubectl get nodes
kubectl get pods -o wide -A
Create a new node pool with the desired SKU
Use the az aks nodepool add command to create a new node pool called mynodepool with three nodes using the Standard_DS4_v2 VM SKU:
az aks nodepool add \
--resource-group myResourceGroup \
--cluster-name myAKSCluster \
--name mynodepool \
--node-count 3 \
--node-vm-size Standard_DS4_v2 \
--mode System \
--no-wait
When resizing, be sure to consider any other requirements of your workloads (such as taints, labels, or node pool mode) and configure the new node pool accordingly; you may need to modify the above command.
After a few minutes, the new node pool is created. Verify it by viewing the nodes again:
kubectl get nodes
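To list only the nodes in the new pool, you can filter on the pool name, since AKS embeds the pool name in each node's name (the exact names depend on your cluster):

```shell
# AKS names node pool VMs as aks-<poolname>-<hash>-vmss<index>, so
# filtering on the pool name shows only the new pool's nodes.
kubectl get nodes | grep mynodepool
```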
Cordon the existing nodes
Cordoning marks specified nodes as unschedulable and prevents any more pods from being added to the nodes.
First, obtain the names of the nodes you'd like to cordon with kubectl get nodes. Your output should look similar to the following:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
aks-nodepool1-31721111-vmss000000 Ready agent 7d21h v1.21.9
aks-nodepool1-31721111-vmss000001 Ready agent 7d21h v1.21.9
aks-nodepool1-31721111-vmss000002 Ready agent 7d21h v1.21.9
Next, using kubectl cordon <node-names>, specify the desired nodes in a space-separated list:
kubectl cordon aks-nodepool1-31721111-vmss000000 aks-nodepool1-31721111-vmss000001 aks-nodepool1-31721111-vmss000002
node/aks-nodepool1-31721111-vmss000000 cordoned
node/aks-nodepool1-31721111-vmss000001 cordoned
node/aks-nodepool1-31721111-vmss000002 cordoned
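You can confirm that the cordon took effect by listing the nodes again; cordoned nodes report a SchedulingDisabled status:

```shell
# Cordoned nodes show "Ready,SchedulingDisabled" in the STATUS column.
kubectl get nodes
```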
Drain the existing nodes
Draining nodes causes the pods running on them to be evicted and recreated on the other, schedulable nodes.
To drain nodes, use kubectl drain <node-names> --ignore-daemonsets --delete-emptydir-data, again with a space-separated list of node names:
kubectl drain aks-nodepool1-31721111-vmss000000 aks-nodepool1-31721111-vmss000001 aks-nodepool1-31721111-vmss000002 --ignore-daemonsets --delete-emptydir-data
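If the old pool contains many nodes, the per-node steps above can be scripted. A minimal sketch, assuming the default AKS node naming in which each node's name contains its pool name (here, nodepool1); note that kubectl drain cordons each node automatically before evicting its pods, so a separate cordon step is not strictly required:

```shell
# Drain every node belonging to the old pool. "kubectl drain" cordons the
# node first, then evicts its pods onto the remaining schedulable nodes.
for node in $(kubectl get nodes -o name | grep nodepool1); do
  kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data
done
```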
After the drain operation finishes, all pods other than those controlled by daemon sets are running on the new node pool:
kubectl get pods -o wide -A
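To double-check that no workloads (other than daemon-set pods) remain on a specific old node, you can filter pods by node name; the node name below is taken from the earlier example output:

```shell
# List all pods still scheduled on one of the drained nodes.
kubectl get pods -A -o wide \
  --field-selector spec.nodeName=aks-nodepool1-31721111-vmss000000
```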
Remove the existing node pool
To delete the existing node pool, use the Azure portal or the az aks nodepool delete command:
az aks nodepool delete \
--resource-group myResourceGroup \
--cluster-name myAKSCluster \
--name nodepool1
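After the deletion completes, you can confirm which node pools remain in the cluster with az aks nodepool list:

```shell
# List the cluster's remaining node pools; nodepool1 should no longer appear.
az aks nodepool list \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --output table
```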