AWS introduced Amazon EKS add-ons to ease the management of cluster add-ons, starting with the release of Kubernetes 1.19.
In this particular use case we had a cluster where the add-ons were self-managed, i.e. installed using Helm. Therefore, with every cluster upgrade the add-ons also had to be upgraded: we had to go through changelogs and other prerequisites to catch any breaking changes, and then make sure the cluster still worked fine after the upgrade.
This management overhead can be largely avoided by migrating to EKS managed add-ons. In this tutorial we will migrate the following add-ons:
- Amazon VPC CNI
- CoreDNS
- Kube-Proxy
Verify add-on status
First, we need to verify which add-ons are already managed by EKS. Use the following command:
$ aws eks list-addons --cluster-name $CLUSTER_NAME
The output may be different in your case. In my cluster, aws-efs-csi-driver and aws-guardduty-agent were already EKS managed add-ons, so the output looked something like this:
$ aws eks list-addons --cluster-name demo-eks
{
    "addons": [
        "aws-efs-csi-driver",
        "aws-guardduty-agent"
    ]
}
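Before migrating, it can also help to check which managed add-on versions are available for your cluster's Kubernetes version, so you know what version the managed add-on will land on. A quick check (the 1.32 here is an assumption; use your own cluster version):
$ aws eks describe-addon-versions --addon-name vpc-cni --kubernetes-version 1.32 \
    --query "addons[].addonVersions[].addonVersion" --output text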
Migrating Amazon VPC CNI Plugin
Amazon VPC CNI is responsible for creating Elastic Network Interfaces (ENIs) and attaching them to your worker nodes.
Let’s first check the version configured in the cluster and create a backup of the configuration. The backup is there so that, if something goes wrong, we can easily restore.
Version of VPC CNI Plugin
$ kubectl describe daemonset aws-node --namespace kube-system | grep amazon-k8s-cni: | cut -d : -f 3
v1.19.5
Verify the VPC CNI Plugin is self-managed (not yet an EKS managed add-on)
$ aws eks describe-addon --cluster-name demo-eks --addon-name vpc-cni --query addon.addonVersion --output text
An error occurred (ResourceNotFoundException) when calling the DescribeAddon operation: No addon: vpc-cni found in cluster: demo-eks
Create a Backup of Configuration
$ kubectl get daemonset aws-node -n kube-system -o yaml > aws-k8s-cni-backup.yaml
Now we have a backup and know the currently configured version. Next, let’s create an IAM role with the AmazonEKS_CNI_Policy attached.
#!/bin/bash
set -euo pipefail

# ==== CONFIGURATION ====
CLUSTER_NAME="your-cluster-name"   # Replace with your EKS cluster name
REGION="your-region"               # Replace with your AWS region (e.g., ap-south-1)
ENV="demo"                         # Environment or suffix
SERVICE_ACCOUNT_NAME="aws-node"
NAMESPACE="kube-system"
ROLE_NAME="AmazonEKSVPCCNIRole-${ENV}"
POLICY_ARN="arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"

# ==== DERIVED VALUES ====
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
OIDC_URL=$(aws eks describe-cluster \
  --name "$CLUSTER_NAME" \
  --region "$REGION" \
  --query "cluster.identity.oidc.issuer" \
  --output text)
OIDC_PROVIDER=$(echo "$OIDC_URL" | sed 's|https://||')
OIDC_PROVIDER_ARN="arn:aws:iam::$ACCOUNT_ID:oidc-provider/$OIDC_PROVIDER"

echo "Creating IAM role for $SERVICE_ACCOUNT_NAME in $NAMESPACE..."
echo "OIDC URL: $OIDC_URL"
echo "OIDC Provider ARN: $OIDC_PROVIDER_ARN"

# ==== CREATE TRUST POLICY JSON ====
cat > trust-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "${OIDC_PROVIDER_ARN}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_PROVIDER}:sub": "system:serviceaccount:${NAMESPACE}:${SERVICE_ACCOUNT_NAME}"
        }
      }
    }
  ]
}
EOF

# ==== CREATE IAM ROLE ====
aws iam create-role \
  --role-name "$ROLE_NAME" \
  --assume-role-policy-document file://trust-policy.json

# ==== ATTACH POLICY ====
aws iam attach-role-policy \
  --role-name "$ROLE_NAME" \
  --policy-arn "$POLICY_ARN"

echo "IAM role '$ROLE_NAME' created and policy attached."

# ==== CLEANUP ====
rm trust-policy.json
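Before proceeding, a quick sanity check that the role exists and has the policy attached (using the role name the script creates):
$ aws iam get-role --role-name AmazonEKSVPCCNIRole-${ENV} --query Role.Arn --output text
$ aws iam list-attached-role-policies --role-name AmazonEKSVPCCNIRole-${ENV}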
This can also be done with Terraform. Sample Terraform code:
variable "env" {
default = "demo"
}
variable "cluster_oidc_provider_arn" {}
variable "cluster_oidc_provider_url" {}
resource "aws_iam_role" "eks_cni_role" {
name = "AmazonEKSVPCCNIRole-${var.env}"
assume_role_policy = jsonencode({
Version = "2012-10-17",
Statement = [
{
Effect = "Allow",
Principal = {
Federated = var.cluster_oidc_provider_arn
},
Action = "sts:AssumeRoleWithWebIdentity",
Condition = {
StringEquals = {
"${replace(var.cluster_oidc_provider_url, "https://", "")}:sub" = "system:serviceaccount:kube-system:aws-node"
}
}
}
]
})
}
resource "aws_iam_role_policy_attachment" "cni_policy_attach" {
role = aws_iam_role.eks_cni_role.name
policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
}
Set cluster_oidc_provider_arn and cluster_oidc_provider_url from your EKS cluster, or reference them from the EKS resource/module block:
aws eks describe-cluster --name <CLUSTER> --query "cluster.identity.oidc.issuer" --output text
Use the returned URL as cluster_oidc_provider_url and the full OIDC provider ARN as cluster_oidc_provider_arn.
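Note that describe-cluster only returns the issuer URL; the provider ARN can be constructed from it the same way the shell script above does, for example:
$ ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
$ OIDC_PROVIDER=$(aws eks describe-cluster --name <CLUSTER> --query "cluster.identity.oidc.issuer" --output text | sed 's|https://||')
$ echo "arn:aws:iam::${ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"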
Next, we need to create (or update) the aws-node ServiceAccount with the IAM role annotation:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: aws-node
  namespace: kube-system
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<AWS_ACCOUNT_ID>:role/AmazonEKSVPCCNIRole-<ENV>
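If the aws-node ServiceAccount already exists in your cluster (it normally does when the CNI was installed via Helm or the upstream manifest), you can annotate it in place instead of applying a full manifest; a sketch using the role created earlier:
$ kubectl annotate serviceaccount aws-node -n kube-system \
    eks.amazonaws.com/role-arn=arn:aws:iam::<AWS_ACCOUNT_ID>:role/AmazonEKSVPCCNIRole-<ENV> \
    --overwrite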
Finally, we migrate the self-managed add-on to the EKS managed one. Keep in mind the keyword here is OVERWRITE: it tells EKS to overwrite any conflicting self-managed configuration.
$ aws eks create-addon \
    --cluster-name $CLUSTER \
    --addon-name vpc-cni \
    --service-account-role-arn arn:aws:iam::${AWS_ACCOUNT_ID}:role/AmazonEKSVPCCNIRole-${ENV} \
    --resolve-conflicts OVERWRITE
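Once the command completes, it is worth confirming that the add-on reaches ACTIVE status and that the aws-node DaemonSet rolls out cleanly before moving on:
$ aws eks describe-addon --cluster-name $CLUSTER --addon-name vpc-cni --query addon.status --output text
$ kubectl rollout status daemonset aws-node -n kube-system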
Migrating CoreDNS
CoreDNS is a flexible, extensible DNS server that can serve as the Kubernetes cluster DNS.
More about CoreDNS
Again, let’s first check the version configured in the cluster and create a backup of the configuration so we can easily restore if something goes wrong.
$ kubectl describe deployment coredns --namespace kube-system | grep coredns: | cut -d : -f 3
v1.11.4-eksbuild.2
$ kubectl get deployment coredns -n kube-system -o yaml > aws-k8s-coredns-backup.yaml
Finally, we migrate the self-managed add-on to the EKS managed one. Again, the keyword here is OVERWRITE.
$ aws eks create-addon --cluster-name $CLUSTER --addon-name coredns --resolve-conflicts OVERWRITE
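As with the VPC CNI, wait for the coredns Deployment to roll out, and optionally run a quick DNS smoke test from inside the cluster (the pod name and busybox image here are arbitrary choices):
$ kubectl rollout status deployment coredns -n kube-system
$ kubectl run dns-test --image=busybox:1.36 --rm -it --restart=Never -- nslookup kubernetes.default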
Migrating kube-proxy
The kube-proxy add-on is deployed on each Amazon EC2 node in your Amazon EKS cluster. It maintains network rules on your nodes and enables network communication to your Pods.
More about kube-proxy
As before, let’s check the version configured in the cluster and create a backup of the configuration so we can easily restore if something goes wrong.
$ kubectl describe daemonset kube-proxy -n kube-system | grep Image
602401143452.dkr.ecr.eu-west-2.amazonaws.com/eks/kube-proxy:v1.32.0-eksbuild.2
$ kubectl get daemonset kube-proxy -n kube-system -o yaml > aws-k8s-kube-proxy-backup.yaml
Finally, we migrate the self-managed add-on to the EKS managed one. Again, the keyword here is OVERWRITE.
$ aws eks create-addon --cluster-name $CLUSTER --addon-name kube-proxy --resolve-conflicts OVERWRITE
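Similarly, confirm the kube-proxy add-on reports ACTIVE and its DaemonSet is healthy on all nodes:
$ aws eks describe-addon --cluster-name $CLUSTER --addon-name kube-proxy --query addon.status --output text
$ kubectl rollout status daemonset kube-proxy -n kube-system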
Verification
Finally, you can verify that all add-ons are now EKS managed using the following command:
$ aws eks list-addons --cluster-name demo-eks
{
    "addons": [
        "coredns",
        "kube-proxy",
        "vpc-cni",
        "aws-efs-csi-driver",
        "aws-guardduty-agent"
    ]
}
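As a final sanity check, you can also confirm the workloads behind each add-on are healthy:
$ kubectl get daemonset aws-node kube-proxy -n kube-system
$ kubectl get deployment coredns -n kube-system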
Happy Migration!!!