<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: harshaway</title>
    <description>The latest articles on Forem by harshaway (@harshaway).</description>
    <link>https://forem.com/harshaway</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1086890%2Fc868a5cf-a4bf-4e22-a785-3441016584f2.png</url>
      <title>Forem: harshaway</title>
      <link>https://forem.com/harshaway</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/harshaway"/>
    <language>en</language>
    <item>
      <title>Amazon EKS now supports Kubernetes version 1.24 (Eks 1.23 To 1.24 Upgrade)</title>
      <dc:creator>harshaway</dc:creator>
      <pubDate>Thu, 13 Jul 2023 05:24:12 +0000</pubDate>
      <link>https://forem.com/harshaway/amazon-eks-now-supports-kubernetes-version-124-eks-123-to-124-upgrade-4aa7</link>
      <guid>https://forem.com/harshaway/amazon-eks-now-supports-kubernetes-version-124-eks-123-to-124-upgrade-4aa7</guid>
      <description>&lt;p&gt;The Amazon Elastic Kubernetes Service (Amazon EKS) team is pleased to announce support for Kubernetes version 1.24 for Amazon EKS and Amazon EKS Distro. We are excited for our customers to experience the power of the “Stargazer” release. Each Kubernetes release is given a name by the release team. The team chose “Stargazer” for this release to honor the work done by hundreds of contributors across the globe: “Every single contributor is a star in our sky, and Amazon EKS extends its sincere thanks to the upstream community and the Kubernetes 1.24 Release Team for bringing this release to the greater cloud-native ecosystem.”&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Updating an Amazon EKS cluster Kubernetes version&lt;/strong&gt;&lt;br&gt;
When a new Kubernetes version is available in Amazon EKS, you can update your Amazon EKS cluster to it.&lt;br&gt;
Before you update, compare the Kubernetes version of your cluster control plane to the Kubernetes version of your nodes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Get the Kubernetes version of your cluster control plane&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl version --short
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Get the Kubernetes version of your nodes. This command returns all self-managed and managed Amazon EC2 and Fargate nodes. Each Fargate Pod is listed as its own node.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
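&lt;p&gt;The two commands above should report matching minor versions before you start. The snippet below is a minimal sketch of that check; the version strings are hard-coded examples standing in for the real output of &lt;code&gt;kubectl version --short&lt;/code&gt; and &lt;code&gt;kubectl get nodes&lt;/code&gt;.&lt;/p&gt;

```shell
#!/bin/sh
# Sketch: compare control-plane and node minor versions before an upgrade.
# The version strings are illustrative; in practice they come from
# "kubectl version --short" and "kubectl get nodes".
minor_of() {
  # Extract the minor version from a string like "v1.23.17-eks-abc123".
  echo "$1" | sed -E 's/^v?1\.([0-9]+).*/\1/'
}

control_plane="v1.23.17-eks-abc123"   # example control plane version
node="v1.23.17-eks-abc123"            # example node version

cp_minor=$(minor_of "$control_plane")
node_minor=$(minor_of "$node")

if [ "$cp_minor" -ne "$node_minor" ]; then
  echo "version skew detected: bring nodes to 1.$cp_minor first"
else
  echo "control plane and nodes are both on 1.$cp_minor"
fi
```

If the minors differ, upgrade the nodes to match the control plane before moving the control plane to 1.24.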



&lt;p&gt;&lt;strong&gt;The major change in EKS 1.24: Amazon EKS ended support for Dockershim&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Why Kubernetes moved away from dockershim&lt;/strong&gt;&lt;br&gt;
Docker was the first container runtime used by Kubernetes, which is one reason Docker is so familiar to many Kubernetes users and enthusiasts. Docker support was hardcoded into Kubernetes, in a component the project refers to as dockershim. As containerization became an industry standard, the Kubernetes project added support for additional runtimes. This culminated in the container runtime interface (CRI), which lets system components (like the kubelet) talk to container runtimes in a standardized way. As a result, dockershim became an anomaly in the Kubernetes project. Dependencies on Docker and dockershim had also crept into various tools and projects in the CNCF ecosystem, resulting in fragile code.&lt;br&gt;
&lt;strong&gt;Docker and containerd in Kubernetes&lt;/strong&gt;&lt;br&gt;
Think of Docker as a big car with all of its parts: the engine, the steering wheel, the pedals, and so on. If we need the engine, we can extract it and move it into another system.&lt;/p&gt;

&lt;p&gt;This is exactly what happened when Kubernetes needed such an engine. They basically said, "Hey, we don't need the entire car that is Docker; let's just pull out its container runtime/engine, Containerd, and install that into Kubernetes."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Docker vs. Containerd: What Is The Difference?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Docker was written with human beings in mind. We can imagine it as a translator that tells an entire factory full of robots what the human wants to build. The Docker CLI is the translator, and some of the other pieces in Docker are the robots in the factory.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3Z_6O0Eu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zhfw4cp6ti9az0ipopb8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3Z_6O0Eu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zhfw4cp6ti9az0ipopb8.png" alt="Image description" width="800" height="419"&gt;&lt;/a&gt;&lt;br&gt;
Keep in mind that Kubernetes is a program and containerd is also a program, and programs can talk to each other directly, even if the language they speak is complex. containerd was developed from the ground up to let other programs give it instructions; it receives those instructions through API calls.&lt;/p&gt;

&lt;p&gt;The messages sent in API calls need to follow a certain format so that the receiving program can understand them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qDuWJpxd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fuyjgg92k5ft5fkly7ut.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qDuWJpxd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fuyjgg92k5ft5fkly7ut.png" alt="Image description" width="800" height="244"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Removal of Dockershim&lt;/strong&gt;&lt;br&gt;
The most significant change in this release is the removal of the Container Runtime Interface (CRI) for Docker (also known as Dockershim). Starting with version 1.24 of Kubernetes, the Amazon Machine Images (AMIs) provided by Amazon EKS only support the containerd runtime. The EKS optimized AMIs for version 1.24 no longer support passing the &lt;code&gt;enable-docker-bridge&lt;/code&gt;, &lt;code&gt;docker-config-json&lt;/code&gt;, and &lt;code&gt;container-runtime&lt;/code&gt; flags.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Before upgrading your worker nodes to Kubernetes 1.24, you must remove all references to these flags&lt;/strong&gt;&lt;/p&gt;
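&lt;p&gt;As a hedged illustration of that check, the sketch below greps a local copy of a node's bootstrap user data for the removed flags. The file name and its contents are made up for the example; in practice you would export the user data from your launch template.&lt;/p&gt;

```shell
#!/bin/sh
# Sketch: scan a (hypothetical) local copy of the node user data for the
# bootstrap flags that the 1.24 EKS optimized AMIs no longer accept.
userdata="./userdata-example.sh"   # illustrative file name

# Example contents standing in for a real launch template's user data.
printf '%s\n' '/etc/eks/bootstrap.sh my-cluster --enable-docker-bridge true' > "$userdata"

# Any hit here must be removed before upgrading worker nodes to 1.24.
if grep -qE 'enable-docker-bridge|docker-config-json|container-runtime' "$userdata"; then
  echo "removed 1.24 flags found: clean up the bootstrap arguments"
else
  echo "no removed flags found"
fi
```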

&lt;p&gt;Open Container Initiative (OCI) images generated by Docker build tools will continue to run in your Amazon EKS clusters as before. As an end user of Kubernetes, you will not experience significant changes.&lt;/p&gt;

&lt;p&gt;For more information, see &lt;a href="https://kubernetes.io/blog/2022/01/07/kubernetes-is-moving-on-from-dockershim/"&gt;Kubernetes is Moving on From Dockershim&lt;/a&gt;: Commitments and Next Steps on the Kubernetes Blog.&lt;br&gt;
&lt;strong&gt;Kubernetes 1.24 features and removals&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Admission controllers enabled&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;CertificateApproval&lt;/code&gt;, &lt;code&gt;CertificateSigning&lt;/code&gt;, &lt;code&gt;CertificateSubjectRestriction&lt;/code&gt;, &lt;code&gt;DefaultIngressClass&lt;/code&gt;, &lt;code&gt;DefaultStorageClass&lt;/code&gt;, &lt;code&gt;DefaultTolerationSeconds&lt;/code&gt;, &lt;code&gt;ExtendedResourceToleration&lt;/code&gt;, &lt;code&gt;LimitRanger&lt;/code&gt;, &lt;code&gt;MutatingAdmissionWebhook&lt;/code&gt;, &lt;code&gt;NamespaceLifecycle&lt;/code&gt;, &lt;code&gt;NodeRestriction&lt;/code&gt;, &lt;code&gt;PersistentVolumeClaimResize&lt;/code&gt;, &lt;code&gt;Priority&lt;/code&gt;, &lt;code&gt;PodSecurityPolicy&lt;/code&gt;, &lt;code&gt;ResourceQuota&lt;/code&gt;, &lt;code&gt;RuntimeClass&lt;/code&gt;, &lt;code&gt;ServiceAccount&lt;/code&gt;, &lt;code&gt;StorageObjectInUseProtection&lt;/code&gt;, &lt;code&gt;TaintNodesByCondition&lt;/code&gt;, and &lt;code&gt;ValidatingAdmissionWebhook&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Important changes&lt;/strong&gt;&lt;br&gt;
Starting with Kubernetes 1.24, new beta APIs are no longer enabled in clusters by default. Existing beta APIs and new versions of existing beta APIs continue to be enabled. Amazon EKS has exactly the same behaviour. For more information, see &lt;a href="https://github.com/kubernetes/enhancements/blob/master/keps/sig-architecture/3136-beta-apis-off-by-default/README.md"&gt;KEP-3136: Beta APIs Are Off by Default&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In Kubernetes 1.23 and earlier, kubelet serving certificates with unverifiable IP and DNS Subject Alternative Names (SANs) were automatically issued, with the unverifiable SANs omitted from the provisioned certificate. Starting from version 1.24, kubelet serving certificates aren't issued at all if any SAN can't be verified. This prevents the &lt;code&gt;kubectl exec&lt;/code&gt; and &lt;code&gt;kubectl logs&lt;/code&gt; commands from working. For more information, see &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/cert-signing.html#csr-considerations"&gt;Certificate signing considerations for Kubernetes 1.24 and later clusters&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Topology Aware Hints&lt;/strong&gt;&lt;br&gt;
It is common practice to deploy Kubernetes workloads to nodes running across different availability zones (AZ) for resiliency and fault isolation. While this architecture provides great benefits, in many scenarios it will also result in cross-AZ data transfer charges. You may refer to this &lt;a href="https://aws.amazon.com/blogs/containers/addressing-latency-and-data-transfer-costs-on-eks-using-istio/"&gt;post&lt;/a&gt; to learn more about common scenarios for data transfer charges on EKS. Amazon EKS customers can now use &lt;a href="https://kubernetes.io/docs/concepts/services-networking/topology-aware-hints/"&gt;Topology Aware Hints&lt;/a&gt;, which are enabled by default, to keep Kubernetes service traffic within the same availability zone. Topology Aware Hints provide a flexible mechanism to provide hints to components, such as kube-proxy, and use them to influence how the traffic is routed within the cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pod Security Policy&lt;/strong&gt; (PSP) was deprecated in Kubernetes version 1.21 and will be removed in version 1.25. PSPs are being replaced by Pod Security Admission (PSA), a built-in admission controller that implements the security controls outlined in the Pod Security Standards (PSS). PSA and PSS both reached beta status as of Kubernetes version 1.23 and are enabled in EKS. When migrating from PSP to PSS, review the Amazon EKS documentation on Pod Security Standards.&lt;/p&gt;

&lt;p&gt;You can also leverage Policy-as-Code (PaC) solutions such as &lt;a href="https://github.com/kyverno/kyverno/"&gt;Kyverno&lt;/a&gt;, and &lt;a href="https://github.com/open-policy-agent/gatekeeper/"&gt;OPA/Gatekeeper&lt;/a&gt; from the Kubernetes ecosystem as an alternative to PSA. &lt;a href="https://aws.github.io/aws-eks-best-practices/security/docs/pods/#migrating-to-a-new-pod-security-solution"&gt;Please visit the Amazon EKS Best Practices Guide&lt;/a&gt; for more information on PaC solutions and help deciding between PSA and PaC.&lt;/p&gt;
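&lt;p&gt;As a sketch of the PSA side, namespace labels select a Pod Security Standard level. The namespace name and chosen levels below are illustrative.&lt;/p&gt;

```yaml
# Sketch: enforcing a Pod Security Standard on a namespace via PSA labels.
# Namespace name and chosen levels are illustrative.
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace
  labels:
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/warn: restricted
```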

&lt;p&gt;&lt;strong&gt;Simplified scaling for EKS Managed Node Groups (MNG)&lt;/strong&gt;&lt;br&gt;
For Kubernetes 1.24, we have contributed a &lt;a href="https://github.com/kubernetes/autoscaler/commit/b4cadfb4e25b6660c41dbe2b73e66e9a2f3a2cc9"&gt;feature&lt;/a&gt; to the upstream &lt;a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider/aws"&gt;Cluster Autoscaler&lt;/a&gt; project that simplifies scaling an Amazon EKS managed node group (MNG) to and from zero nodes. Previously, you had to tag the underlying EC2 Auto Scaling group (ASG) for the Cluster Autoscaler to recognize the resources, labels, and taints of an MNG that was scaled to zero nodes.&lt;/p&gt;

&lt;p&gt;Starting with Kubernetes 1.24, when there are no running nodes in the MNG, the Cluster Autoscaler will call the EKS DescribeNodegroup API to get the information it needs about MNG resources, labels, and taints. When the value of a Cluster Autoscaler tag on the ASG powering an EKS MNG conflicts with the value of the MNG itself, the Cluster Autoscaler will prefer the ASG tag so that customers can override values as necessary.&lt;/p&gt;
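&lt;p&gt;For comparison, the pre-1.24 workaround looked roughly like the fragment below: node-template tags on the ASG so the Cluster Autoscaler could infer labels at zero size. The tag keys follow the Cluster Autoscaler convention; the &lt;code&gt;workload&lt;/code&gt; label and its value are hypothetical.&lt;/p&gt;

```hcl
# Sketch of the pre-1.24 workaround: node-template tags on the ASG so the
# Cluster Autoscaler can infer labels while the group is at zero nodes.
# The "workload" label and its value are hypothetical.
resource "aws_autoscaling_group" "example" {
  # ... other ASG arguments elided ...

  tag {
    key                 = "k8s.io/cluster-autoscaler/node-template/label/workload"
    value               = "batch"
    propagate_at_launch = true
  }
}
```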


&lt;p&gt;&lt;strong&gt;Upgrade your EKS with terraform&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Clone the repo below from Bitbucket, which contains the Terraform files for the eks-test cluster&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;code&gt;git clone git@bitbucket.org:example.eks.module&lt;/code&gt;&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Change the EKS version in the variables.tf file.
In this case, we will change it from 1.23 to 1.24.
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ vi variables.tf

variable "eks_version" {
   default = "1.24"
   description = "kubernetes cluster version provided by AWS EKS"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol start="3"&gt;
&lt;li&gt;Once the above modifications are done, execute terraform plan to verify&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;code&gt;$ terraform plan&lt;/code&gt;&lt;br&gt;
If the only change is the cluster version, the output should be: Plan: 0 to add, 1 to change, 0 to destroy. (If your change also touches a launch configuration and auto scaling group, the add/change counts will be higher.)&lt;/p&gt;

&lt;p&gt;1 to change: the EKS version from 1.23 to 1.24.&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;After verification, it's time to apply the changes
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ terraform apply
# verify again and type 'yes' when prompted.


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Upgrading Managed EKS Add-ons&lt;/strong&gt;&lt;br&gt;
In this case the change is trivial: simply update the version of the add-on. In my case, for this release I utilise kube-proxy, CoreDNS and the EBS CSI driver.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Terraform resources for add-ons&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_eks_addon" "kube_proxy" {
  cluster_name      = aws_eks_cluster.cluster[0].name
  addon_name        = "kube-proxy"
  addon_version     = "1.24.7-eksbuild.2"
  resolve_conflicts = "OVERWRITE"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_eks_addon" "core_dns" {
  cluster_name      = aws_eks_cluster.cluster[0].name
  addon_name        = "coredns"
  addon_version     = "v1.8.7-eksbuild.3"
  resolve_conflicts = "OVERWRITE"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_eks_addon" "aws_ebs_csi_driver" {
  cluster_name      = aws_eks_cluster.cluster[0].name
  addon_name        = "aws-ebs-csi-driver"
  addon_version     = "v1.13.0-eksbuild.1"
  resolve_conflicts = "OVERWRITE"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
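&lt;p&gt;The &lt;code&gt;addon_version&lt;/code&gt; values used above can be discovered with the AWS CLI. A sketch (it requires AWS credentials, so it is shown for illustration only):&lt;/p&gt;

```shell
# Sketch: list add-on versions compatible with a 1.24 cluster.
# Requires AWS credentials; shown for illustration only.
aws eks describe-addon-versions \
  --addon-name coredns \
  --kubernetes-version 1.24 \
  --query 'addons[0].addonVersions[].addonVersion' \
  --output table
```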



&lt;p&gt;&lt;strong&gt;After upgrading the EKS control plane&lt;/strong&gt;&lt;br&gt;
Remember to upgrade the core deployments and daemon sets that are recommended for EKS 1.24.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kube-proxy — 1.24.7-minimal-eksbuild.2&lt;/code&gt; (note the change to the minimal version, which is only stated in the official documentation)&lt;br&gt;
&lt;code&gt;VPC CNI — 1.11.4-eksbuild.1&lt;/code&gt; (version 1.12 is available, but 1.11.4 is the recommended one)&lt;br&gt;
&lt;code&gt;aws-ebs-csi-driver — v1.13.0-eksbuild.1&lt;/code&gt;&lt;br&gt;
The above is just a recommendation from AWS. You should look at upgrading all your components to match Kubernetes version 1.24. They could include:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;cluster-autoscaler or Karpenter&lt;/li&gt;
&lt;li&gt;kube-state-metrics&lt;/li&gt;
&lt;li&gt;metrics-server&lt;/li&gt;
&lt;/ol&gt;
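&lt;p&gt;One hedged way to see what you are currently running is to read the image tags straight off the deployments. The deployment names and namespace below assume common defaults and may differ in your cluster.&lt;/p&gt;

```shell
# Sketch: print image tags of common cluster components to compare against
# 1.24-compatible releases. Deployment names/namespace assume defaults.
for d in cluster-autoscaler kube-state-metrics metrics-server; do
  printf '%s: ' "$d"
  kubectl -n kube-system get deployment "$d" \
    -o jsonpath='{.spec.template.spec.containers[0].image}'
  echo
done
```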

</description>
    </item>
    <item>
      <title>Upgrade AWS Elastic Kubernetes Service (EKS) Cluster Via Terraform 1.22 to 1.23</title>
      <dc:creator>harshaway</dc:creator>
      <pubDate>Mon, 22 May 2023 12:26:52 +0000</pubDate>
      <link>https://forem.com/harshaway/upgrade-aws-elastic-kubernetes-service-eks-cluster-via-terraform-122-to-123-m0f</link>
      <guid>https://forem.com/harshaway/upgrade-aws-elastic-kubernetes-service-eks-cluster-via-terraform-122-to-123-m0f</guid>
      <description>&lt;p&gt;Kubernetes is the new normal when it comes to host your applications.&lt;/p&gt;

&lt;p&gt;AWS Elastic Kubernetes Service is a managed service where the control plane is deployed in a highly available configuration and is completely managed by AWS in the backend, allowing administrators/SREs/DevOps engineers to manage the data plane and the microservices running as pods.&lt;/p&gt;

&lt;p&gt;As of this writing, the Kubernetes community has a cadence of three releases per year. AWS, on the other hand, has its own customized version of Kubernetes (the EKS version) with its own release cadence. You can find this information at &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html"&gt;https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note - an EKS upgrade is a stepped upgrade: you can upgrade one minor version at a time, e.g. 1.22 to 1.23&lt;/p&gt;

&lt;p&gt;Managing AWS EKS via Terraform helps us maintain the desired state and also allows us to perform the cluster upgrade seamlessly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pre-requisites in Terraform&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Verify that the EKS state file does not throw any errors before the upgrade.&lt;/li&gt;
&lt;li&gt;Ensure the state is stored in a remote place such as Amazon S3.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Pre-requisites in EKS&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Ensure 5 free IP addresses in the VPC subnets of the EKS cluster (explained in the section below).&lt;/li&gt;
&lt;li&gt;Ensure the kubelet version is the same as the control plane version.&lt;/li&gt;
&lt;li&gt;Verify the EKS add-on versions and upgrade them if necessary before the start of the cluster upgrade.&lt;/li&gt;
&lt;li&gt;A Pod Disruption Budget (PDB) can sometimes cause errors while draining pods (it is recommended to disable it while upgrading).&lt;/li&gt;
&lt;li&gt;Use a Kubernetes API deprecation finder tool like Pluto to find the API changes required for the newer version.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Upgrade Process&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html"&gt;https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let me break down what happens when we perform the upgrade. This is a sequential process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Control Plane upgrade&lt;/strong&gt;&lt;br&gt;
The control plane upgrade is an in-place upgrade: AWS launches a new control plane with the target version within the same subnets as the existing control plane, which is why we need at least 5 free IPs in the EKS subnets to accommodate it. The new control plane goes through readiness and health checks and, once they pass, replaces the old control plane. This process happens in the backend within the AWS infrastructure, and there is no impact to applications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Node upgrade&lt;/strong&gt;&lt;br&gt;
The node upgrade is also an in-place upgrade: new nodes are launched with the target version, and the pods on the old nodes are evicted and relaunched on the new nodes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Add-ons upgrade&lt;/strong&gt;&lt;br&gt;
The add-ons on your cluster, such as CoreDNS, VPC CNI, and kube-proxy, need to be upgraded accordingly, as per the matrix in &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/managing-vpc-cni.html#vpc-add-on-update"&gt;https://docs.aws.amazon.com/eks/latest/userguide/managing-vpc-cni.html#vpc-add-on-update&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Update system component versions (kube-proxy, CoreDNS, AWS CNI, Cluster Autoscaler)&lt;/strong&gt;&lt;br&gt;
Check the system component versions before upgrading. Refer to the page below for the desired versions of kube-proxy, CoreDNS, and the AWS CNI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html"&gt;Updating an Amazon EKS cluster Kubernetes version - Amazon EKS&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let us take an example of upgrading from 1.22 to 1.23&lt;/p&gt;

&lt;p&gt;Step-1:&lt;br&gt;
Ensure the control plane and nodes are on the same version&lt;br&gt;
&lt;code&gt;kubectl version --short&lt;/code&gt;&lt;br&gt;
&lt;code&gt;kubectl get nodes&lt;/code&gt;&lt;br&gt;
Step-2:&lt;br&gt;
Before updating your cluster, ensure that the proper Pod security policies are in place, to avoid potential security issues&lt;br&gt;
&lt;code&gt;kubectl get psp eks.privileged&lt;/code&gt;&lt;br&gt;
Step-3:&lt;br&gt;
Update the version in your Terraform file to the target version, say 1.23, and then perform a Terraform plan and apply&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ vi variables.tf

variable "eks_version" {
   default = "1.23"
   description = "kubernetes cluster version provided by AWS EKS"
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan 
terraform apply --auto-approve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step-4:&lt;br&gt;
Once the control plane is upgraded, the managed worker node upgrade process is invoked automatically. If you are using self-managed worker nodes, choose the AMI as per your control plane version and region from the matrix at &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/retrieve-ami-id.html"&gt;https://docs.aws.amazon.com/eks/latest/userguide/retrieve-ami-id.html&lt;/a&gt;&lt;br&gt;
Update your worker node Terraform file with the new AMI ID and run a Terraform plan and apply&lt;/p&gt;
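&lt;p&gt;For self-managed nodes, the recommended AMI ID can also be read from the SSM parameter described on that page. A sketch for Amazon Linux 2 (requires AWS credentials; the region is illustrative):&lt;/p&gt;

```shell
# Sketch: fetch the recommended EKS optimized AMI ID for 1.23 from SSM.
# Requires AWS credentials; the region is illustrative.
aws ssm get-parameter \
  --name /aws/service/eks/optimized-ami/1.23/amazon-linux-2/recommended/image_id \
  --region us-east-1 \
  --query Parameter.Value \
  --output text
```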

&lt;p&gt;Step-5:&lt;br&gt;
Once the control plane and worker node upgrades are complete, it is time to upgrade the add-ons. See which add-ons are enabled in your cluster and upgrade each one via the console or eksctl, based on how you manage them.&lt;br&gt;
Each add-on has a compatibility matrix in the AWS documentation and has to be upgraded appropriately&lt;br&gt;
sample ref: &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/managing-vpc-cni.html#vpc-add-on-update"&gt;https://docs.aws.amazon.com/eks/latest/userguide/managing-vpc-cni.html#vpc-add-on-update&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step-6:&lt;br&gt;
From version 1.23 onward, it is mandatory to install the AWS EBS CSI driver on the EKS cluster.&lt;/p&gt;

&lt;p&gt;To deploy the Amazon EBS CSI driver, follow the steps below.&lt;/p&gt;

&lt;p&gt;Creating the Amazon EBS CSI driver IAM role for service accounts - Amazon EKS &lt;/p&gt;

&lt;p&gt;Using AWS Management Console:&lt;/p&gt;

&lt;p&gt;To create your Amazon EBS CSI plugin IAM role with the AWS Management Console&lt;br&gt;
Open the IAM console at &lt;a href="https://console.aws.amazon.com/iam/"&gt;https://console.aws.amazon.com/iam/&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In the left navigation pane, choose Roles.&lt;/p&gt;

&lt;p&gt;On the Roles page, choose Create role.&lt;/p&gt;

&lt;p&gt;On the Select trusted entity page, do the following:&lt;/p&gt;

&lt;p&gt;In the Trusted entity type section, choose Web identity.&lt;/p&gt;

&lt;p&gt;For Identity provider, choose the OpenID Connect provider URL for your cluster (as shown under Overview in Amazon EKS).&lt;/p&gt;

&lt;p&gt;For Audience, choose &lt;strong&gt;sts.amazonaws.com&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Choose Next.&lt;/p&gt;

&lt;p&gt;On the Add permissions page, do the following:&lt;/p&gt;

&lt;p&gt;In the Filter policies box, enter &lt;strong&gt;AmazonEBSCSIDriverPolicy&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Select the check box to the left of the &lt;strong&gt;AmazonEBSCSIDriverPolicy&lt;/strong&gt; returned in the search.&lt;/p&gt;

&lt;p&gt;Choose &lt;strong&gt;Next&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;On the Name, review, and create page, do the following:&lt;/p&gt;

&lt;p&gt;For Role name, enter a unique name for your role, such as &lt;strong&gt;AmazonEKS_EBS_CSI_DriverRole&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Under Add tags (Optional), add metadata to the role by attaching tags as key–value pairs. For more information about using tags in IAM, see Tagging IAM Entities in the IAM User Guide.&lt;/p&gt;

&lt;p&gt;Choose Create role.&lt;/p&gt;

&lt;p&gt;After the role is created, choose the role in the console to open it for editing.&lt;/p&gt;

&lt;p&gt;Choose the Trust relationships tab, and then choose Edit trust policy.&lt;/p&gt;

&lt;p&gt;Find the line that looks similar to the following line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:aud": "sts.amazonaws.com"
"oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:sub": "system:serviceaccount:kube-system:ebs-csi-controller-sa"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
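&lt;p&gt;Those condition lines sit inside the role's trust policy. A full example following the AWS documentation pattern looks like this (the account ID and OIDC ID are the same placeholder values used elsewhere in this post):&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::111122223333:oidc-provider/oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:aud": "sts.amazonaws.com",
          "oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:sub": "system:serviceaccount:kube-system:ebs-csi-controller-sa"
        }
      }
    }
  ]
}
```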



&lt;p&gt;If we are using encryption on EBS, then we need a separate policy for it, attached to the same role, like below.&lt;/p&gt;

&lt;p&gt;Copy and paste the following code into the editor, replacing &lt;strong&gt;custom-key-arn&lt;/strong&gt; with the custom KMS key ARN.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kms:CreateGrant",
        "kms:ListGrants",
        "kms:RevokeGrant"
      ],
      "Resource": ["custom-key-arn"],
      "Condition": {
        "Bool": {
          "kms:GrantIsForAWSResource": "true"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": [
        "kms:Encrypt",
        "kms:Decrypt",
        "kms:ReEncrypt*",
        "kms:GenerateDataKey*",
        "kms:DescribeKey"
      ],
      "Resource": ["custom-key-arn"]
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Choose Next: &lt;strong&gt;Tags&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;On the Add tags (Optional) page, choose Next: &lt;strong&gt;Review&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;For Name, enter a unique name for your policy &lt;strong&gt;(for example, KMS_Key_For_Encryption_On_EBS_Policy).&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Choose Create policy&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In the left navigation pane, choose Roles.&lt;/p&gt;

&lt;p&gt;Choose the AmazonEKS_EBS_CSI_DriverRole in the console to open it for editing.&lt;/p&gt;

&lt;p&gt;From the Add permissions drop-down list, choose Attach policies.&lt;/p&gt;

&lt;p&gt;In the Filter policies box, enter &lt;strong&gt;KMS_Key_For_Encryption_On_EBS_Policy&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Select the check box to the left of the &lt;strong&gt;KMS_Key_For_Encryption_On_EBS_Policy&lt;/strong&gt; that was returned in the search.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Choose Attach policies.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Managing the Amazon EBS CSI driver as an Amazon EKS add-on&lt;/strong&gt;&lt;br&gt;
Prerequisite: an existing cluster. To see the required platform version, run the following command.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws eks describe-addon-versions --addon-name aws-ebs-csi-driver&lt;/code&gt;&lt;br&gt;
To add the Amazon EBS CSI add-on using eksctl:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eksctl create addon --name aws-ebs-csi-driver --cluster my-cluster --service-account-role-arn arn:aws:iam::111122223333:role/AmazonEKS_EBS_CSI_DriverRole --force
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Updating the Amazon EBS CSI driver as an Amazon EKS add-on&lt;br&gt;
Amazon EKS doesn't automatically update Amazon EBS CSI for your cluster when new versions are released or after you update your cluster to a new Kubernetes minor version. To update Amazon EBS CSI on an existing cluster, you must initiate the update and then Amazon EKS updates the add-on for you.&lt;/p&gt;

&lt;p&gt;To update the Amazon EBS CSI add-on using eksctl&lt;br&gt;
Check the current version of your Amazon EBS CSI add-on. Replace my-cluster with your cluster name.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eksctl get addon --name aws-ebs-csi-driver --cluster my-cluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;The example output is as follows.&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                    VERSION                 STATUS  ISSUES  IAMROLE UPDATE AVAILABLE
aws-ebs-csi-driver      v1.11.2-eksbuild.1      ACTIVE  0               v1.11.4-eksbuild.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Update the add-on to the version returned under UPDATE AVAILABLE in the output of the previous step.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eksctl update addon --name aws-ebs-csi-driver --version v1.11.4-eksbuild.1 --cluster my-cluster --force
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the above procedure, the EBS CSI driver installation is complete.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>eks</category>
      <category>aws</category>
      <category>updates</category>
    </item>
  </channel>
</rss>
