Kubernetes runtime changes and EKS
In version 1.20, Kubernetes deprecated Dockershim, the component that allows Kubernetes to use Docker as a container runtime. Docker is still fully functional, but users will need to migrate to a different container runtime before Dockershim support is removed in a future Kubernetes release.
We’ve been hard at work making sure there is a clear path for customers to migrate to containerd as a runtime. With the 1.21 release we’re happy to announce that the Amazon Linux 2 EKS optimized AMI will come with containerd support built in. The default runtime for 1.21 will still be Docker, and you can opt in to the containerd runtime by adding the --container-runtime containerd option to your user data.
There are also new versions of the Amazon Linux 2 EKS optimized AMI for EKS 1.16, 1.17, 1.18, 1.19, and 1.20, so you don’t have to wait to try the containerd runtime. You can test it today by adding a node group to an existing cluster, or by creating a new cluster, with the latest AMI and the following bootstrap flag.
# In your node user data, pass the containerd flag to the EKS bootstrap script
/etc/eks/bootstrap.sh $CLUSTER_NAME \
  --b64-cluster-ca $B64_CLUSTER_CA \
  --apiserver-endpoint $API_SERVER_URL \
  --container-runtime containerd
You can also create an EKS cluster with the containerd runtime using eksctl and the following config file.
EKS_VERSION=1.21
AMI_ID=$(aws ssm get-parameter \
--name /aws/service/eks/optimized-ami/${EKS_VERSION}/amazon-linux-2/recommended/image_id \
--query "Parameter.Value" --output text)
AWS_REGION=${AWS_DEFAULT_REGION:-us-west-2}
CLUSTER_NAME=containerd-eks
cat > eksctl-containerd.yaml <<EOF
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: ${CLUSTER_NAME}
  region: ${AWS_REGION}
  version: "${EKS_VERSION}"
managedNodeGroups:
  - name: containerd
    ami: ${AMI_ID}
    overrideBootstrapCommand: |
      #!/bin/bash
      /etc/eks/bootstrap.sh ${CLUSTER_NAME} --container-runtime containerd
EOF
eksctl create cluster -f eksctl-containerd.yaml
In the next release of EKS, with Kubernetes 1.22, we will change the default runtime from Docker to containerd. This means EKS 1.21 will be the last release with Docker container runtime support. We recommend that you test your workloads with containerd during the 1.21 lifecycle so you can make sure you don’t depend on any Docker-specific features, such as mounting the Docker socket or using docker-in-docker for container builds.
Containerd has been widely adopted in the Kubernetes community and is a CNCF graduated project. It’s fully supported with EKS and a great Docker alternative. If you would like to use a container-native operating system, you can also use Bottlerocket OS, which already ships with containerd as its default container runtime.
Kubernetes 1.21 highlights
If you’re interested in all of the notable features, check out the Kubernetes Blog release post and the full release notes. I’ll highlight some of the notable updates here.
CronJob improvements
CronJobs graduated to a stable feature in this release of Kubernetes. The TTLAfterFinished option was enabled in EKS 1.20, and the feature graduates to beta in 1.21 upstream. This means that for the Jobs a CronJob creates you can specify ttlSecondsAfterFinished, which deletes finished Jobs and their pods so you don’t have completed pods left in etcd. This is especially useful for AWS Fargate users because each pod runs on its own Fargate node, and with this setting those pods are automatically deleted after the job completes.
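As a sketch, a CronJob might set ttlSecondsAfterFinished in its Job template like this (the name, schedule, image, and TTL value below are illustrative placeholders, not recommendations):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: ttl-example          # hypothetical name
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      # Delete the finished Job (and its pods) 60 seconds after completion
      ttlSecondsAfterFinished: 60
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: task
              image: busybox
              command: ["sh", "-c", "echo done"]

Note that the TTL is set on the Job template’s spec, not on the CronJob itself.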
Graceful Node Shutdown
This feature, which has graduated to beta, allows the kubelet to block a node shutdown until all of the pods have been given a chance to shut down gracefully. This is helpful for node termination requests that originate outside of Kubernetes (e.g. someone running sudo poweroff) and that may not go through the full cordon-and-drain lifecycle via a lifecycle hook. Managed node groups already have lifecycle hooks for node terminations that are requested through the Kubernetes API, such as Cluster Autoscaler scale-down events.
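The graceful shutdown behavior is controlled by two fields in the kubelet’s KubeletConfiguration file; a minimal sketch (the durations here are illustrative, not recommendations):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Total time the kubelet delays node shutdown to let pods terminate
shutdownGracePeriod: 30s
# Portion of that window reserved for critical pods
shutdownGracePeriodCriticalPods: 10s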
IPv4/IPv6 dual-stack support
Kubernetes now adds the ability for pods to have IPv4 and IPv6 addresses at the same time. Using this feature depends on your cluster’s needs and on the Container Network Interface (CNI) plugin being used. The Amazon VPC CNI plugin will not support dual stack with the 1.21 release. For now we are focusing on enabling IPv6 support in a single interface configuration so pods can take advantage of IPv6 addressing but still route to IPv4 endpoints. Please subscribe to this GitHub issue if you’d like to stay up to date on IPv6 support.
Immutable Secrets and ConfigMaps
It’s now possible to mark ConfigMaps and Secrets as immutable in the Kubernetes API. This is very helpful when rolling out new application versions that require ConfigMap changes. By using immutable ConfigMaps and Secrets you can guarantee they are not changed and that each deployment version has a config that matches. This can prevent outages and errors, but it currently requires manually tracking which applications use which version of each immutable ConfigMap and Secret.
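A minimal sketch of an immutable ConfigMap (the name and data below are placeholders). Once created, its data can no longer be updated, so a config change means creating a new object, typically under a versioned name:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-v1   # hypothetical; version the name, since the data can't change
data:
  LOG_LEVEL: info
immutable: true

The same top-level immutable: true field works for Secrets. A side benefit is that the kubelet stops watching immutable objects, reducing load on the API server.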
Cluster Autoscaler priority expander for managed node groups
Managed node groups in EKS 1.21 will name Auto Scaling groups in the form eks-<managed-node-group-name>-<uuid>. This means you can now take advantage of the Cluster Autoscaler priority expander with your managed node groups.
If you have two managed node groups for On-Demand Instances and Spot Instances named on-demand and spot, you can give Spot Instance scaling a higher priority with the following ConfigMap.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-autoscaler-priority-expander
  namespace: kube-system
data:
  priorities: |-
    20:
      - eks-spot.*
    10:
      - eks-on-demand.*
Feature Deprecations
The Kubernetes API evolves over time, with new features moving from alpha to beta and eventually to stable. In Amazon EKS, beta and stable Kubernetes features are enabled by default.
Features are also deprecated and removed from version to version. Feature removal is a slow and careful process: features are first deprecated, remain that way for several releases, and are eventually removed. Check the upstream docs for more information on the Kubernetes Deprecation Policy.
Two new deprecations in 1.21 are described below, and you should start planning migration from them to prepare for their eventual removal. As a peek into the next release, check out this recent blog regarding removals in Kubernetes 1.22.
PodSecurityPolicy Deprecation
PodSecurityPolicy has been deprecated, which means the functionality is still available, but if you are using this feature you should make plans to migrate off of it. The current plan for the Kubernetes project is to remove PodSecurityPolicy functionality in Kubernetes 1.25. It will be replaced with something new in the future, which you can read more about here.
If you’re already using PodSecurityPolicies and would like to have a drop-in replacement you can use the open source Gatekeeper as an alternative policy enforcement option. We have a walk through on how to use it in this blog post.
TopologyKeys Deprecation
TopologyKeys was an alpha feature in Kubernetes and was never available in EKS. It is being replaced in 1.21 with a new alpha feature called Topology Aware Hints, which will be available in EKS in a future release once it graduates out of alpha status. TopologyKeys is a good example of why EKS does not enable alpha features in the API server: alpha features are not guaranteed to be part of the API long term and don’t have the long deprecation cycles that stable or beta features do.
Upgrading EKS
There are some deprecations in 1.21, but no EKS features were removed, so you should be able to test your workloads and upgrade without disruption. You can learn how to upgrade your EKS version in the EKS documentation.
If you’re still using EKS 1.16, now is the time to upgrade. Support for EKS 1.16 ends on July 25th, and you should migrate to a newer release as soon as possible. See the EKS release calendar for more information.