Connect any Kubernetes cluster to Amazon EKS

Amazon Elastic Kubernetes Service (Amazon EKS) now allows you to connect any Kubernetes cluster to AWS and visualize it in Amazon EKS via the AWS Management Console. You can connect any Kubernetes cluster, including Amazon EKS Anywhere clusters running on-premises, self-managed clusters on Amazon Elastic Compute Cloud (Amazon EC2), and other Kubernetes clusters running outside of AWS. Regardless of where your cluster is running, you can use the AWS console to view all connected clusters and the Kubernetes resources running on them.

To connect your cluster, the first step is to register it with Amazon EKS using the AWS Management Console, Amazon EKS API, or eksctl. After providing required inputs such as a cluster name and an IAM role that includes the required permissions, you will receive a configuration file for Amazon EKS Connector, a software agent that runs on a Kubernetes cluster and enables the cluster to register with Amazon EKS. After you apply the configuration file to the cluster, the registration is complete. Once your cluster is connected, you will be able to see the cluster, its configuration and workloads, and their status in the AWS Management Console.

In this post, we will look under the hood to see how the feature is implemented along with an example tutorial.

How it works

To connect Kubernetes clusters to Amazon EKS, you need to invoke the register-cluster API and deploy the manifest to your clusters. This manifest contains the configurations for the EKS Connector and a proxy agent. While the EKS Connector agent enables connectivity to AWS, the proxy agent interacts with Kubernetes to serve AWS requests. Amazon EKS leverages AWS Systems Manager’s agent to connect to AWS services. There are multiple steps involved in connecting a Kubernetes cluster to Amazon EKS. Let’s dive into them one by one.

Prerequisites

ServiceLinkedRole for EKS Connector

aws iam create-service-linked-role --aws-service-name eks-connector.amazonaws.com

Bash

The EKS Connector service-linked role allows Amazon EKS to call other AWS services to set up the resources required for connecting a cluster to Amazon EKS. This includes managing the Systems Manager activation for the EKS Connector agent and receiving events from Amazon EventBridge whenever the EKS Connector agent is registered or deregistered with the Systems Manager service.
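
If the role already exists, the create call returns an error you can safely ignore; you can confirm the role is present with a get-role call (the default name for this service principal is AWSServiceRoleForAmazonEKSConnector):

aws iam get-role --role-name AWSServiceRoleForAmazonEKSConnector

Bash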

Role for EKS Connector agent

# Define trust policy
cat <<EOF > agent-trust-policy.json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "SSMAccess",
            "Effect": "Allow",
            "Principal": {
                "Service": [
                    "ssm.amazonaws.com"
                ]
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
EOF

# Define policy document
cat <<EOF > agent-policy-document.json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "SsmControlChannel",
            "Effect": "Allow",
            "Action": [
                "ssmmessages:CreateControlChannel"
            ],
            "Resource": "arn:aws:eks:*:*:cluster/*"
        },
        {
            "Sid": "ssmDataplaneOperations",
            "Effect": "Allow",
            "Action": [
                "ssmmessages:CreateDataChannel",
                "ssmmessages:OpenDataChannel",
                "ssmmessages:OpenControlChannel"
            ],
            "Resource": "*"
        }
    ]
}
EOF

# Create eks connector agent role
aws iam create-role \
  --role-name eks-connector-agent \
  --assume-role-policy-document file://agent-trust-policy.json

# Attach policy to eks connector agent role
aws iam put-role-policy \
    --role-name eks-connector-agent \
    --policy-name eks-connector-agent-policy \
    --policy-document file://agent-policy-document.json

Bash

This IAM role is used by the EKS Connector agent to connect to the Systems Manager service. If needed, the role's policy can be scoped down further to a specific EKS cluster name.
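
For example, the SsmControlChannel statement above could be restricted to a single cluster ARN (a sketch; substitute your own Region, account ID, and cluster name):

{
    "Sid": "SsmControlChannel",
    "Effect": "Allow",
    "Action": [
        "ssmmessages:CreateControlChannel"
    ],
    "Resource": "arn:aws:eks:us-west-2:1234567890:cluster/external-k8s-cluster"
}

JSON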

Cluster registration

The cluster can be registered in several ways: using the AWS CLI, an AWS SDK, eksctl, or the console. When registering through eksctl or the console, the manifest file is auto-populated for you, whereas additional manual steps are required when using the AWS CLI or an SDK.
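
For example, eksctl can handle registration and generate the connector manifest in one step (a sketch; flag names may vary by eksctl version, so check eksctl register cluster --help):

eksctl register cluster --name external-k8s-cluster --provider EC2 --region us-west-2

Bash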

Registering a cluster through AWS CLI

aws eks register-cluster --name external-k8s-cluster \
--connector-config provider=EC2,roleArn=arn:aws:iam::1234567890:role/eks-connector-agent

Bash

Output
{
    "cluster": {
        "name": "external-k8s-cluster",
        "arn": "arn:aws:eks:us-west-2:1234567890:cluster/external-k8s-cluster",
        "createdAt": 1.628097936139E9,
        "status": "PENDING",
        "connectorConfig": {
            "*activationId*": "6cda972c-e098-4db6-8c38-cc9811352157",
            "*activationCode*": "fooBar2021",
            "activationExpiry": 1628702.734,
            "provider": "GKE",
            "roleArn": "arn:aws:iam::1234567890:role/eks-connector-agent"
        }
    }
}

Download the manifest template from the Amazon S3 link and update the following values:

  • %EKS_ACTIVATION_ID% with the value of activationId returned by the register-cluster API.
  • %EKS_ACTIVATION_CODE% with the base64-encoded value of activationCode.
  • %AWS_REGION% with the same Region used in the register-cluster call.

sed -i '' "s~%AWS_REGION%~$EKS_AWS_REGION~g; s~%EKS_ACTIVATION_ID%~$EKS_ACTIVATION_ID~g; s~%EKS_ACTIVATION_CODE%~$(echo -n $EKS_ACTIVATION_CODE | base64)~g" eks-connector.yaml

Bash
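
To avoid copying the values by hand, you can capture them from the register-cluster output first. A minimal sketch, assuming the jq CLI is available (the variable names match the sed command above):

# Register the cluster and keep the JSON response
OUTPUT=$(aws eks register-cluster --name external-k8s-cluster \
  --connector-config provider=EC2,roleArn=arn:aws:iam::1234567890:role/eks-connector-agent)

# Pull the activation values out of the connectorConfig block shown above
export EKS_ACTIVATION_ID=$(echo "$OUTPUT" | jq -r '.cluster.connectorConfig.activationId')
export EKS_ACTIVATION_CODE=$(echo "$OUTPUT" | jq -r '.cluster.connectorConfig.activationCode')
export EKS_AWS_REGION=us-west-2

Bash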

Once the manifest file is generated, the next step is to apply it to your cluster.

kubectl apply -f eks-connector.yaml

Bash
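
The manifest creates its resources in the eks-connector namespace; a quick way to confirm the agent started (pod names may differ in your cluster):

kubectl get pods -n eks-connector

Bash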

Connecting the Kubernetes cluster to AWS

The manifest file generated from registering a cluster contains the following components:

InitContainer: This container registers the EKS Connector agent with the Systems Manager control plane and persists the registration information in the Kubernetes backing data store. The init container mounts this data into the EKS Connector agent's volume whenever the pod is recycled, which eliminates the need to re-register on every pod restart.

EKS Connector agent: This is an agent based on the SSM agent, running in container mode. This agent creates an outbound connection from the Kubernetes cluster to the AWS network. All subsequent requests from AWS are performed using the connection channels established by the EKS Connector agent.

Connector proxy: This agent acts as a proxy between the EKS Connector agent and Kubernetes API Server. This proxy agent uses the Kubernetes service account to impersonate the IAM user that accesses the console and fetches information from the Kubernetes API Server.

When this manifest is applied to a Kubernetes cluster, the EKS Connector agent connects to the Systems Manager service, which sends a notification to EKS through Amazon EventBridge. EKS uses this agent information to send requests when a user accesses the cluster through the console.
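
Once EKS receives that notification, the cluster status transitions from PENDING to ACTIVE, which you can confirm from the CLI:

aws eks describe-cluster --name external-k8s-cluster --query 'cluster.status'

Bash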

Granting IAM user access to Kubernetes cluster

To give different IAM users different levels of authorization on the Kubernetes cluster, EKS Connector uses the Kubernetes user impersonation mechanism to authorize requests against the Kubernetes API server. This allows Kubernetes administrators to configure distinct permissions for their IAM users. For example, user 'Alice' can have "read" permission on the "foo" namespace, while user 'Bob' can be configured to only have access to the "bar" namespace. This is done through ClusterRole and ClusterRoleBinding objects in Kubernetes. For example, the following YAML grants arn:aws:iam::1234567890:user/Alice access to only the default namespace.

cat <<EOF > console-restricted-access.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: eks-console-dashboard-restricted-access-clusterrole
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - namespaces
  verbs:
  - get
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: eks-console-dashboard-restricted-access-clusterrole-binding
subjects:
- kind: User
  name: "arn:aws:iam::1234567890:user/Alice"
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: eks-console-dashboard-restricted-access-clusterrole
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: eks-console-dashboard-restricted-access-role
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
- apiGroups:
  - apps
  resources:
  - deployments
  - daemonsets
  - statefulsets
  - replicasets
  verbs:
  - get
  - list
- apiGroups:
  - batch
  resources:
  - jobs
  verbs:
  - get
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: eks-console-dashboard-restricted-access-role-binding
  namespace: default
subjects:
- kind: User
  name: "arn:aws:iam::1234567890:user/Alice"
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: eks-console-dashboard-restricted-access-role
  apiGroup: rbac.authorization.k8s.io
EOF

kubectl apply -f console-restricted-access.yaml

YAML

The cluster role eks-console-dashboard-restricted-access-clusterrole is required for the console to show the list of namespaces available in the cluster. Once a namespace is selected, an IAM user can only view the workloads they have access to in that namespace. The get permission on `nodes` enables users to view node information in the console. The Kubernetes role eks-console-dashboard-restricted-access-role grants access to the minimum set of objects required to view data in the console.
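
You can sanity-check these bindings from the cluster by impersonating the IAM ARN (assuming your current kubectl context has impersonation rights, such as cluster-admin):

# Expect "yes" for the default namespace and "no" elsewhere
kubectl auth can-i list pods --namespace default --as "arn:aws:iam::1234567890:user/Alice"
kubectl auth can-i list pods --namespace kube-system --as "arn:aws:iam::1234567890:user/Alice"

Bash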

Now, the final step is to allow the ServiceAccount used by the proxy agent to impersonate the IAM user ‘Alice.’ For example:

cat <<EOF > connector-additional-binding.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: eks-connector-service
rules:
  - apiGroups: [ "" ]
    resources:
      - users
    verbs:
      - impersonate
    resourceNames:
      - "arn:aws:iam::1234567890:user/Alice"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: eks-connector-service
subjects:
  - kind: ServiceAccount
    name: eks-connector
    namespace: eks-connector
roleRef:
  kind: ClusterRole
  name: eks-connector-service
  apiGroup: rbac.authorization.k8s.io
---
EOF

kubectl apply -f connector-additional-binding.yaml

YAML

Accessing Kubernetes objects through the console

When an IAM user performs an action in the EKS console to view Kubernetes objects, their IAM ARN and the Kubernetes request are sent to the EKS Connector agent through the Systems Manager service. Once the EKS Connector agent receives the request, it connects to the local Unix socket to pass the Kubernetes request and IAM ARN to the proxy agent. The proxy agent uses the mounted ServiceAccount token to authenticate, and it uses the IAM user ARN for authorization when fetching information from Kubernetes. This ensures that IAM users can only access the objects allowed by Kubernetes RBAC.

Note:

  • IAM users invoking actions in the console must have the eks:AccessKubernetesApi permission.
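
A minimal identity-based policy granting this might look like the following (a sketch; you could scope the Resource down to specific cluster ARNs):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ConsoleKubernetesAccess",
            "Effect": "Allow",
            "Action": [
                "eks:AccessKubernetesApi",
                "eks:DescribeCluster",
                "eks:ListClusters"
            ],
            "Resource": "*"
        }
    ]
}

JSON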

To demonstrate how the Kubernetes API is invoked from Amazon EKS, here are the sample curl commands:

Sample command to invoke proxy container

curl -s --unix-socket /var/eks/shared/connector.sock -X 'GET' \
  -H 'x-aws-eks-identity-arn: arn:aws:iam::908176817140:role/IsengardAdministrator' \
  'http://localhost/api/v1/namespaces?limit=100'

Bash

Sample command sent to Kubernetes API server

curl -X 'GET' -H 'Impersonate-User: arn:aws:iam::908176817140:role/IsengardAdministrator' \
  -H 'Authorization: Bearer sEcReT_tOkEn' \
  --cacert ${CACERT} \
  'https://kubernetes.default.svc/api/v1/namespaces?limit=100'

Bash

Kubernetes response from the proxy agent is returned to the console through the EKS Connector agent and the Systems Manager service.

Deregistering a connected cluster

To deregister a connected cluster from EKS, use the deregister-cluster API. This only removes the metadata stored in the EKS and Systems Manager services; it does not delete any cluster resources. All resources created in Kubernetes during registration, including the cluster roles and role bindings, should be deleted from the cluster by the Kubernetes administrator.
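
For example, using the cluster registered earlier:

# Remove the cluster metadata from EKS and Systems Manager
aws eks deregister-cluster --name external-k8s-cluster

# Clean up the in-cluster resources created during registration
kubectl delete -f eks-connector.yaml
kubectl delete -f console-restricted-access.yaml
kubectl delete -f connector-additional-binding.yaml

Bash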