<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Ashish Gajjar</title>
    <description>The latest articles on Forem by Ashish Gajjar (@gajjarashish).</description>
    <link>https://forem.com/gajjarashish</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F888566%2F4ae149ec-d6f2-4152-9067-8ebec6e51fd0.png</url>
      <title>Forem: Ashish Gajjar</title>
      <link>https://forem.com/gajjarashish</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/gajjarashish"/>
    <language>en</language>
    <item>
      <title>Cache Me If You Can: Building a Web-Based Redis Upgrade Dashboard on AWS</title>
      <dc:creator>Ashish Gajjar</dc:creator>
      <pubDate>Fri, 03 Apr 2026 10:40:35 +0000</pubDate>
      <link>https://forem.com/aws-builders/cache-me-if-you-can-building-a-web-based-redis-upgrade-dashboard-on-aws-495n</link>
      <guid>https://forem.com/aws-builders/cache-me-if-you-can-building-a-web-based-redis-upgrade-dashboard-on-aws-495n</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr4zas4qgl2feqg47art0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr4zas4qgl2feqg47art0.png" alt=" " width="800" height="800"&gt;&lt;/a&gt;&lt;br&gt;
If you’ve ever upgraded Redis in production, you know one thing — it’s never “just a version change.”&lt;/p&gt;

&lt;p&gt;We were running multiple Redis clusters on Amazon ElastiCache across environments, and every upgrade followed the same painful routine:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Find the right cluster&lt;/li&gt;
&lt;li&gt;Double-check cluster mode&lt;/li&gt;
&lt;li&gt;Take a snapshot&lt;/li&gt;
&lt;li&gt;Validate engine compatibility&lt;/li&gt;
&lt;li&gt;Run CLI commands&lt;/li&gt;
&lt;li&gt;Monitor manually&lt;/li&gt;
&lt;li&gt;Hope nothing triggers a reconnection storm&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After doing this a few times, we realized something obvious:&lt;/p&gt;

&lt;p&gt;The problem wasn’t Redis.&lt;br&gt;
The problem was the process.&lt;/p&gt;

&lt;p&gt;So we built a lightweight web-based dashboard to automate Redis cluster upgrades, snapshot management, and restore workflows — safely and predictably.&lt;/p&gt;

&lt;p&gt;This is the story of that tool.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;The Real Problem&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Redis upgrades in AWS look simple in documentation. But in reality, things get tricky:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cluster-mode enabled vs disabled changes the snapshot flow.&lt;/li&gt;
&lt;li&gt;Some versions can’t upgrade directly.&lt;/li&gt;
&lt;li&gt;In-place upgrades may cause failovers.&lt;/li&gt;
&lt;li&gt;Snapshots don’t overwrite existing clusters.&lt;/li&gt;
&lt;li&gt;Restore always creates a new replication group.&lt;/li&gt;
&lt;li&gt;Someone always forgets to take a fresh snapshot.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We didn’t want another “run this CLI carefully at 11 PM” situation.&lt;/p&gt;

&lt;p&gt;We wanted:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Guardrails&lt;/li&gt;
&lt;li&gt;Visibility&lt;/li&gt;
&lt;li&gt;Repeatability&lt;/li&gt;
&lt;li&gt;Zero-downtime options&lt;/li&gt;
&lt;li&gt;Less human error&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  What We Built
&lt;/h2&gt;

&lt;p&gt;A small web application that sits on top of AWS APIs and acts as a Redis operations control panel.&lt;/p&gt;

&lt;p&gt;Under the hood it uses:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Flask (Python backend)&lt;/li&gt;
&lt;li&gt;Boto3 (AWS SDK)&lt;/li&gt;
&lt;li&gt;Simple HTML/CSS frontend&lt;/li&gt;
&lt;li&gt;IAM roles for authentication&lt;/li&gt;
&lt;li&gt;Server-Sent Events for live progress updates&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Nothing fancy. Just clean automation.&lt;/p&gt;

&lt;p&gt;The goal was simple:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Discover → Snapshot → Upgrade → Restore&lt;/li&gt;
&lt;li&gt;All from one clean UI.&lt;/li&gt;
&lt;/ul&gt;
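&lt;p&gt;The live progress updates ride on Server-Sent Events. A minimal sketch of the event formatting (the step names and JSON payload shape are illustrative, not the tool’s actual protocol):&lt;/p&gt;

```python
import json

def sse_event(payload: dict) -> str:
    """Format one Server-Sent Events message: a 'data:' line
    followed by a blank line, as the browser's EventSource expects."""
    return "data: " + json.dumps(payload) + "\n\n"

def upgrade_progress(steps):
    """Yield one SSE message per upgrade step; in the real app this
    generator would be wrapped in a streaming Flask Response with
    mimetype 'text/event-stream'."""
    for number, message in enumerate(steps, start=1):
        yield sse_event({"step": number, "message": message})
```

&lt;p&gt;The frontend subscribes with a plain EventSource and appends each message to the log panel.&lt;/p&gt;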
&lt;h2&gt;
  
  
  Installation Guide
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Step 1: Download/Clone Project
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/gajjarashish007/GenAI/tree/a206b7598b423946b8dcf25aabe6b0fc3464b24f/Redis_Upgrade
&lt;span class="nb"&gt;cd &lt;/span&gt;redis_upgrade
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Step 2: Install Dependencies
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Required packages:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;flask==3.0.0&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;boto3==1.34.0&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Step 3: Configure AWS Credentials
&lt;/h3&gt;

&lt;p&gt;Choose one method:&lt;/p&gt;
&lt;h4&gt;
  
  
  &lt;strong&gt;Option A: AWS CLI (Recommended)&lt;/strong&gt;
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws configure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Enter when prompted:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS Access Key ID&lt;/li&gt;
&lt;li&gt;AWS Secret Access Key&lt;/li&gt;
&lt;li&gt;Default region (e.g., &lt;code&gt;us-east-1&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Output format (press Enter for default)&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;
  
  
  &lt;strong&gt;Option B: Environment Variables&lt;/strong&gt;
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Linux/Mac&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;AWS_ACCESS_KEY_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"your_access_key"&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;AWS_SECRET_ACCESS_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"your_secret_key"&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;AWS_DEFAULT_REGION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"us-east-1"&lt;/span&gt;

&lt;span class="c"&gt;# Windows PowerShell&lt;/span&gt;
&lt;span class="nv"&gt;$env&lt;/span&gt;:AWS_ACCESS_KEY_ID&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"your_access_key"&lt;/span&gt;
&lt;span class="nv"&gt;$env&lt;/span&gt;:AWS_SECRET_ACCESS_KEY&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"your_secret_key"&lt;/span&gt;
&lt;span class="nv"&gt;$env&lt;/span&gt;:AWS_DEFAULT_REGION&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"us-east-1"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Option C: IAM Role (EC2)&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;If running on EC2, attach an IAM role with required permissions. No configuration needed.&lt;/p&gt;
&lt;h3&gt;
  
  
  Step 4: Start Application
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python app.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Expected output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt; * Serving Flask app 'app'
 * Debug mode: on
 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:3000
 * Running on http://YOUR-IP:3000
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 5: Access Dashboard
&lt;/h3&gt;

&lt;p&gt;Open browser and navigate to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://localhost:3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff2uy3mas6l7oixvlxudq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff2uy3mas6l7oixvlxudq.png" alt=" " width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Feature 1: Auto-Discovery of Redis Clusters&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The first thing the dashboard does is scan the selected region and list all Redis replication groups.&lt;/p&gt;

&lt;p&gt;For each cluster, it shows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Engine version&lt;/li&gt;
&lt;li&gt;Node type&lt;/li&gt;
&lt;li&gt;Cluster mode status&lt;/li&gt;
&lt;li&gt;Number of shards&lt;/li&gt;
&lt;li&gt;Replica count&lt;/li&gt;
&lt;li&gt;Current status&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This removed the need to dig through the AWS console every time.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy7eeiqi437a5pqd8b4cu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy7eeiqi437a5pqd8b4cu.png" alt=" " width="800" height="529"&gt;&lt;/a&gt;&lt;/p&gt;
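&lt;p&gt;A discovery pass like this is a thin wrapper over the ElastiCache API. A Boto3 sketch (the summary-row shape is our own; the field names come from the DescribeReplicationGroups response):&lt;/p&gt;

```python
def summarize_group(rg: dict) -> dict:
    """Flatten one ReplicationGroup record from the
    DescribeReplicationGroups response into a dashboard row."""
    return {
        "id": rg["ReplicationGroupId"],
        "cluster_mode": rg.get("ClusterEnabled", False),
        "shards": len(rg.get("NodeGroups", [])),
        "status": rg["Status"],
    }

def discover_clusters(region: str):
    """List all Redis replication groups in one region (needs AWS creds)."""
    import boto3  # imported lazily so the summarizer is testable offline
    client = boto3.client("elasticache", region_name=region)
    rows = []
    paginator = client.get_paginator("describe_replication_groups")
    for page in paginator.paginate():
        rows.extend(summarize_group(rg) for rg in page["ReplicationGroups"])
    return rows
```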

&lt;h2&gt;
  
  
  &lt;strong&gt;Feature 2: Snapshot Before Anything Risky&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;We enforced a rule in the UI:&lt;/p&gt;

&lt;p&gt;You cannot upgrade without snapshotting first.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The tool automatically:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creates a manual snapshot&lt;/li&gt;
&lt;li&gt;Waits until status becomes “available”&lt;/li&gt;
&lt;li&gt;Logs the snapshot ID&lt;/li&gt;
&lt;li&gt;Proceeds only if backup succeeds&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This one rule eliminated most operational anxiety.&lt;/p&gt;
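&lt;p&gt;The snapshot-then-wait rule can be sketched with Boto3; the naming convention and polling interval here are our assumptions, not the tool’s exact code:&lt;/p&gt;

```python
import datetime
import time

def snapshot_name_for(group_id: str, now=None) -> str:
    """Derive a unique, sortable snapshot name (our own convention)."""
    now = now or datetime.datetime.utcnow()
    return "%s-pre-upgrade-%s" % (group_id, now.strftime("%Y%m%d-%H%M%S"))

def snapshot_and_wait(client, group_id, timeout=1800, poll=30):
    """Create a manual snapshot with a boto3 ElastiCache client and
    poll DescribeSnapshots until its status is 'available'."""
    name = snapshot_name_for(group_id)
    client.create_snapshot(ReplicationGroupId=group_id, SnapshotName=name)
    for _ in range(max(1, timeout // poll)):
        snap = client.describe_snapshots(SnapshotName=name)["Snapshots"][0]
        if snap["SnapshotStatus"] == "available":
            return name
        time.sleep(poll)
    raise TimeoutError("snapshot %s is still not available" % name)
```

&lt;p&gt;The upgrade step only runs once this function returns a snapshot name.&lt;/p&gt;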

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi7yyw9vuiwzmuhfx0yy6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi7yyw9vuiwzmuhfx0yy6.png" alt=" " width="800" height="668"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd2ts10hom78xpuxm0cqq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd2ts10hom78xpuxm0cqq.png" alt=" " width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Feature 3: Safe Engine Upgrade Workflow
&lt;/h2&gt;

&lt;p&gt;We support two upgrade paths.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Direct Upgrade (In-Place)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Best for non-production environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The tool:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Validates version compatibility&lt;/li&gt;
&lt;li&gt;Executes modify-replication-group&lt;/li&gt;
&lt;li&gt;Streams progress logs&lt;/li&gt;
&lt;li&gt;Displays status in real-time&lt;/li&gt;
&lt;/ul&gt;
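&lt;p&gt;An in-place upgrade boils down to one ModifyReplicationGroup call plus a guardrail. A hedged sketch (the version check assumes plain dotted-numeric engine versions such as 6.2 or 7.0):&lt;/p&gt;

```python
def _parse_version(v: str):
    return tuple(int(part) for part in v.split("."))

def is_upgrade(current: str, target: str) -> bool:
    """True only when target is strictly newer; ElastiCache never
    downgrades, so anything else is refused up front."""
    return _parse_version(target) > _parse_version(current)

def upgrade_in_place(client, group_id, current, target):
    """Kick off the in-place engine upgrade via a boto3 ElastiCache client."""
    if not is_upgrade(current, target):
        raise ValueError("refusing %s to %s: not an upgrade" % (current, target))
    return client.modify_replication_group(
        ReplicationGroupId=group_id,
        EngineVersion=target,
        ApplyImmediately=True,
    )
```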

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft6ibicgkux2l48riptri.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft6ibicgkux2l48riptri.png" alt=" " width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F91ma60jmb1fctr5ppo76.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F91ma60jmb1fctr5ppo76.png" alt=" " width="800" height="545"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Blue/Green Restore Strategy (Production)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For production clusters, we rarely do in-place upgrades.&lt;br&gt;
&lt;strong&gt;Instead:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Take snapshot&lt;/li&gt;
&lt;li&gt;Restore snapshot to new cluster&lt;/li&gt;
&lt;li&gt;Validate application connectivity&lt;/li&gt;
&lt;li&gt;Switch endpoint&lt;/li&gt;
&lt;li&gt;Keep old cluster temporarily&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This approach gives near-zero downtime and easy rollback.&lt;br&gt;
The dashboard guides this flow step by step.&lt;/p&gt;
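&lt;p&gt;Because a restore always creates a new replication group, the green cluster is launched with CreateReplicationGroup pointed at the snapshot. A sketch (parameter values are illustrative):&lt;/p&gt;

```python
def restore_params(snapshot_name, new_group_id, engine_version, node_type):
    """Build kwargs for create_replication_group when restoring a
    snapshot into a fresh (green) replication group. AWS always
    restores into a new group, so new_group_id must not exist yet."""
    return {
        "ReplicationGroupId": new_group_id,
        "ReplicationGroupDescription": "restored from " + snapshot_name,
        "SnapshotName": snapshot_name,
        "EngineVersion": engine_version,
        "CacheNodeType": node_type,
    }

def restore_snapshot(client, **kwargs):
    """Launch the green cluster via a boto3 ElastiCache client."""
    return client.create_replication_group(**restore_params(**kwargs))
```

&lt;p&gt;Once the green cluster passes validation, the application endpoint is switched over and the blue cluster is retired.&lt;/p&gt;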

&lt;h2&gt;
  
  
  &lt;strong&gt;Feature 4: Snapshot Restore&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;One common question is:&lt;/p&gt;

&lt;p&gt;“Can we restore over the same cluster?”&lt;/p&gt;

&lt;p&gt;No.&lt;/p&gt;

&lt;p&gt;On Amazon Web Services, restoring a snapshot always creates a new replication group.&lt;/p&gt;

&lt;p&gt;The dashboard enforces unique naming and prevents accidental overwrite attempts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It also validates:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Node type compatibility&lt;/li&gt;
&lt;li&gt;Engine version match&lt;/li&gt;
&lt;li&gt;Memory requirements&lt;/li&gt;
&lt;/ul&gt;
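&lt;p&gt;Those validations can be sketched as a pure pre-flight check; the dict keys here are our own illustration, not an AWS schema:&lt;/p&gt;

```python
def validate_restore(snapshot: dict, target: dict) -> list:
    """Return a list of human-readable problems; an empty list means
    the restore configuration looks safe."""
    problems = []
    if snapshot["engine_version"] != target["engine_version"]:
        problems.append("engine version mismatch")
    if snapshot["node_type"] != target["node_type"]:
        problems.append("node type differs; verify memory headroom")
    if snapshot["memory_bytes"] > target["memory_bytes"]:
        problems.append("target node has less memory than the snapshot needs")
    return problems
```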

&lt;h2&gt;
  
  
  &lt;strong&gt;Feature 5: Architecture Behind the Scenes&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Under the hood, the system is simple by design:&lt;br&gt;
Browser → Flask API → Boto3 → AWS ElastiCache&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Final Thoughts&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;“Cache Me If You Can” wasn’t just a clever title.&lt;/p&gt;

&lt;p&gt;It reflects a mindset shift — from reactive infrastructure to controlled, confident operations.&lt;/p&gt;

&lt;p&gt;If you're managing Redis on AWS, consider building (or adopting) something similar.&lt;/p&gt;

&lt;p&gt;Because in production systems:&lt;/p&gt;

&lt;p&gt;Speed matters.&lt;br&gt;
Stability matters more.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;“Cache Me If You Can” ultimately highlights that successful Redis upgrades on AWS aren’t about mastering commands — they’re about designing a reliable process. By replacing manual, error-prone steps with a simple, automated dashboard, we transformed upgrades into a predictable, repeatable, and safe operation.&lt;/p&gt;

&lt;p&gt;This tool brought structure through guardrails, confidence through enforced snapshots, and flexibility through blue/green deployment strategies — all while reducing downtime and human error.&lt;/p&gt;

&lt;p&gt;In the end, the biggest win wasn’t just automation — it was peace of mind. Because in real-world production systems, it’s not just about moving fast — it’s about moving safely, every single time.&lt;/p&gt;

</description>
      <category>automation</category>
      <category>aws</category>
      <category>database</category>
      <category>showdev</category>
    </item>
    <item>
      <title>🚀 2025 Top 10 Announcements for AWS Cloud Operations (Don’t Miss)</title>
      <dc:creator>Ashish Gajjar</dc:creator>
      <pubDate>Fri, 05 Dec 2025 07:26:58 +0000</pubDate>
      <link>https://forem.com/aws-builders/2025-top-10-announcements-for-aws-cloud-operations-dont-miss-4i04</link>
      <guid>https://forem.com/aws-builders/2025-top-10-announcements-for-aws-cloud-operations-dont-miss-4i04</guid>
      <description>&lt;p&gt;AWS re:Invent 2025 introduced major advancements that will reshape Cloud Operations — especially around AI-powered observability, centralized logging, automated incident response and hybrid multi-account monitoring.&lt;/p&gt;

&lt;p&gt;Modern cloud workloads are growing rapidly, and teams need tools that can scale, automate, and reduce operational friction. These 10 announcements focus exactly on that.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmvoh1i3g3gmp7dsd0g43.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmvoh1i3g3gmp7dsd0g43.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  🎯 Goal of This Article
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Understand the newest AWS Cloud Operations capabilities announced in 2025&lt;/li&gt;
&lt;li&gt;Learn their real-world impact for DevOps, SRE, Platform Engineering &amp;amp; Cloud teams&lt;/li&gt;
&lt;li&gt;Receive clear steps to get started and adopt each feature practically&lt;/li&gt;
&lt;li&gt;Help teams improve observability, automation, performance &amp;amp; resilience&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🧠 Why this matters now
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2v73sbyin1005i5vvxte.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2v73sbyin1005i5vvxte.png" alt=" " width="800" height="515"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjn1i3p4q4zadkos8k893.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjn1i3p4q4zadkos8k893.png" alt=" " width="800" height="1200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  🏆 Top 10 AWS Cloud Operations Announcements — Deep Dive
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Generative-AI Observability for Amazon CloudWatch + AgentCore&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Built-in observability for AI workloads — metrics like token usage, inference latency, agent-workflow tracing, and AI performance visualization.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why it matters&lt;/strong&gt;&lt;br&gt;
AI apps behave differently: latency spikes, token costs, and agent failures require dedicated monitoring. This feature reduces guesswork and debugging time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Steps to Perform&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Enable CloudWatch AI Observability under Application Signals&lt;/li&gt;
&lt;li&gt;Connect to Amazon Bedrock or agent-framework integration&lt;/li&gt;
&lt;li&gt;Create dashboards for:

&lt;ul&gt;
&lt;li&gt;Token usage (cost control)&lt;/li&gt;
&lt;li&gt;Model latency&lt;/li&gt;
&lt;li&gt;Workflow execution paths&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Configure anomaly alerts&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Goal&lt;/strong&gt;&lt;br&gt;
Improve control, reliability, visibility &amp;amp; performance tuning of AI workloads.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. CloudWatch Application Map — Auto-discovers Un-instrumented Services&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why it matters&lt;/strong&gt;&lt;br&gt;
Service dependency maps are hard to maintain manually — auto discovery reveals hidden or undocumented service paths.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Steps&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Enable Application Signals&lt;/li&gt;
&lt;li&gt;Deploy agent to environment (without manual instrumentation)&lt;/li&gt;
&lt;li&gt;Open Application Map for visualization&lt;/li&gt;
&lt;li&gt;Compare detected vs. expected architecture&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Goal&lt;/strong&gt;&lt;br&gt;
Instant architecture awareness &amp;amp; dependency visibility.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. CloudWatch Investigations — AI-generated Incident Reports + “5 Whys” RCA&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why it matters&lt;/strong&gt;&lt;br&gt;
Traditional incident reports are time-consuming; automation reduces MTTR and preserves institutional knowledge.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Steps&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Enable CloudWatch Investigations&lt;/li&gt;
&lt;li&gt;Configure event sources (logs, metrics, CloudTrail, config history)&lt;/li&gt;
&lt;li&gt;Trigger incident report on outage simulation&lt;/li&gt;
&lt;li&gt;Review autogenerated RCA + recommendations&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Goal&lt;/strong&gt;&lt;br&gt;
Automate root cause analysis and accelerate incident recovery.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. MCP Servers for CloudWatch &amp;amp; Application Signals&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why it matters&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Allows AI agents to interact with operations data directly — enabling automated remediation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Steps&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Connect MCP-compatible AI tools/chatbots&lt;/li&gt;
&lt;li&gt;Allow querying of alarms, logs and metrics&lt;/li&gt;
&lt;li&gt;Test automated remediation workflow&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Goal&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create self-healing operations ecosystems.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;5. Application Signals + GitHub Actions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why it matters&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Observability is now built into CI/CD; performance defects can be caught before deployment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Steps&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install GitHub Action extension&lt;/li&gt;
&lt;li&gt;Link CI pipelines to Application Signals&lt;/li&gt;
&lt;li&gt;Block merges if metrics degrade&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Goal&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Shift-left reliability checks.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;6. OpenSearch Enhanced Log Analytics (PPL upgrade)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why it matters&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Faster troubleshooting for distributed systems with cleaner correlations.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Steps&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Enable PPL for log search&lt;/li&gt;
&lt;li&gt;Write multi-service correlation queries&lt;/li&gt;
&lt;li&gt;Build dashboards for repeating patterns&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Goal&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Faster debugging and trend detection.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;7. CloudWatch RUM for iOS &amp;amp; Android&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why it matters&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;End-to-end mobile performance visibility.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Steps&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Add RUM SDK to mobile app&lt;/li&gt;
&lt;li&gt;Track latency, error events, client devices&lt;/li&gt;
&lt;li&gt;Analyze funnels &amp;amp; real-user behavior&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Goal&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Detect UX problems early.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;8. CloudTrail Data-Event Aggregation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why it matters&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Huge logs become simpler with intelligent aggregation and anomaly detection.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Steps&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Enable event aggregation on high-volume services (S3, DynamoDB)&lt;/li&gt;
&lt;li&gt;Turn on anomaly detection&lt;/li&gt;
&lt;li&gt;Connect outputs to OpenSearch / SIEM&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Goal&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Better security &amp;amp; lower logging noise.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;9. Multi-Account + Multi-Region Centralized Log Management&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why it matters&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;One dashboard for all accounts instead of custom pipelines.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Steps&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create central logging account&lt;/li&gt;
&lt;li&gt;Configure log routing via CloudWatch&lt;/li&gt;
&lt;li&gt;Separate dev/stage/prod partitions&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Goal&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Unified observability + simplified compliance.&lt;/li&gt;
&lt;/ul&gt;
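&lt;p&gt;Log routing from source accounts to a central account is typically wired up with CloudWatch Logs subscription filters pointing at a central destination. A sketch of the arguments (names and ARN are illustrative, not from the announcement):&lt;/p&gt;

```python
def subscription_filter_args(log_group: str, destination_arn: str, env: str) -> dict:
    """Build kwargs for logs.put_subscription_filter so a source
    account's log group streams to a central-account destination."""
    return {
        "logGroupName": log_group,
        "filterName": env + "-to-central",
        "filterPattern": "",  # empty pattern forwards every event
        "destinationArn": destination_arn,
    }
```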

&lt;p&gt;&lt;strong&gt;10. CloudWatch Database Insights (Cross-Account &amp;amp; Region)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why it matters&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Databases are performance bottlenecks — unified DB monitoring reduces time to detect slowdowns.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Steps&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Enable DB Insights for RDS/Aurora/DynamoDB&lt;/li&gt;
&lt;li&gt;Centralize accounts &amp;amp; regions&lt;/li&gt;
&lt;li&gt;Correlate DB performance with application metrics&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Goal&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prevent outages &amp;amp; improve performance optimization.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Reference Link:&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/blogs/mt/2025-top-10-announcements-for-aws-cloud-operations/" rel="noopener noreferrer"&gt;https://aws.amazon.com/blogs/mt/2025-top-10-announcements-for-aws-cloud-operations/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloudoperation</category>
      <category>ai</category>
      <category>aiops</category>
    </item>
    <item>
      <title>Migrating AWS IAM Roles via the Web Console: A Step-by-Step Guide</title>
      <dc:creator>Ashish Gajjar</dc:creator>
      <pubDate>Sat, 20 Sep 2025 01:46:07 +0000</pubDate>
      <link>https://forem.com/gajjarashish/migrating-aws-iam-roles-via-the-web-console-a-step-by-step-guide-1igc</link>
      <guid>https://forem.com/gajjarashish/migrating-aws-iam-roles-via-the-web-console-a-step-by-step-guide-1igc</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;br&gt;
Managing and duplicating IAM roles no longer has to be a manual task.&lt;br&gt;
The IAM Role Clone Portal—accessible right from the AWS Management Console and enhanced with Amazon Q—makes cloning roles quick, accurate, and auditable.&lt;/p&gt;

&lt;p&gt;With Amazon Q’s natural-language assistance built into the console, you can simply describe what you need (for example, “Clone the production EC2 role with all tags and inline policies”) and the portal will guide you through each step.&lt;br&gt;
This combination of a clean UI and AI-driven guidance saves time, reduces errors, and ensures every cloned role meets your organization’s security and compliance requirements.&lt;/p&gt;

&lt;p&gt;This guide walks you through the interface, the cloning workflow, and every option available.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftos0xzgdruncvqi1rrip.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftos0xzgdruncvqi1rrip.png" alt=" " width="800" height="1200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🖥 Interface Overview&lt;/p&gt;

&lt;p&gt;The portal is divided into two main panels and a set of handy action buttons.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Left Panel – Role Browser&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Search: Quickly filter roles by name or description.&lt;/li&gt;
&lt;li&gt;Role List: Browse all IAM roles along with a count of attached policies.&lt;/li&gt;
&lt;li&gt;Selection: Click any role to load its details in the right panel.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Right Panel – Role Details&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Overview Tab: Displays basic information and metadata for the selected role.&lt;/li&gt;
&lt;li&gt;Policies Tab: Lists all attached managed and inline policies.&lt;/li&gt;
&lt;li&gt;Trust Policy Tab: Shows the trust relationship document.&lt;/li&gt;
&lt;li&gt;Tags Tab: Presents all associated tags and metadata.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Action Buttons&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🔄 Refresh: Reload the full role list from AWS.&lt;/li&gt;
&lt;li&gt;📥 Export: Download the selected role’s configuration as a JSON file.&lt;/li&gt;
&lt;li&gt;🔄 Clone Role: Launch the cloning wizard to create a duplicate role.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🧩 Cloning a Role: Step-by-Step&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Select the Source Role&lt;/strong&gt;&lt;br&gt;
Choose the IAM role you want to clone from the left panel.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhay2ghxbs3cebnzahpjs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhay2ghxbs3cebnzahpjs.png" alt=" " width="800" height="522"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Click “Clone Role”&lt;/strong&gt;&lt;br&gt;
Opens the cloning configuration modal.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F87rt1bmnh7xk00kpmndm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F87rt1bmnh7xk00kpmndm.png" alt=" " width="800" height="511"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Configure the Clone&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;New Role Name: Provide a unique name for the new role.&lt;/li&gt;
&lt;li&gt;Description: (Optional) Add a short description.&lt;/li&gt;
&lt;li&gt;Path: Set the IAM path (defaults to /).&lt;/li&gt;
&lt;li&gt;Options: Decide whether to clone policies, tags, or both.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4. Start the Clone&lt;/strong&gt;&lt;br&gt;
Confirm the settings and monitor real-time progress.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzfttfjihuvvp8wgo3g05.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzfttfjihuvvp8wgo3g05.png" alt=" " width="800" height="521"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Success!&lt;/strong&gt;&lt;br&gt;
View and manage the newly created role right from the portal.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8du8ihemja7q62p89ppv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8du8ihemja7q62p89ppv.png" alt=" " width="800" height="519"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;⚙️ &lt;strong&gt;Clone Options&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When configuring a clone, you can choose any combination of these options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ Clone Attached Policies: Duplicate all managed policies.&lt;/li&gt;
&lt;li&gt;✅ Clone Inline Policies: Copy inline policy documents.&lt;/li&gt;
&lt;li&gt;✅ Clone Tags: Carry over all existing tags and append clone metadata.&lt;/li&gt;
&lt;/ul&gt;
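&lt;p&gt;Under the hood, a clone with all three options enabled reduces to a handful of IAM API calls. The sketch below (boto3, with placeholder names, not the portal's actual code) mirrors that flow:&lt;/p&gt;

```python
import json

def with_clone_metadata(tags, source_role):
    """Carry over existing tags and append clone-origin metadata."""
    return tags + [{"Key": "ClonedFrom", "Value": source_role}]

def clone_role(src_name, dst_name, policies=True, inline=True, tags=True):
    """Sketch of the clone flow; role names are placeholders."""
    import boto3  # imported lazily so the pure helper above has no dependency
    iam = boto3.client("iam")
    src = iam.get_role(RoleName=src_name)["Role"]
    iam.create_role(
        RoleName=dst_name,
        Path=src.get("Path", "/"),
        AssumeRolePolicyDocument=json.dumps(src["AssumeRolePolicyDocument"]),
        Description=f"Cloned from {src_name}",
    )
    if policies:  # duplicate attached (managed) policies
        attached = iam.list_attached_role_policies(RoleName=src_name)
        for p in attached["AttachedPolicies"]:
            iam.attach_role_policy(RoleName=dst_name, PolicyArn=p["PolicyArn"])
    if inline:    # copy inline policy documents
        for name in iam.list_role_policies(RoleName=src_name)["PolicyNames"]:
            doc = iam.get_role_policy(RoleName=src_name, PolicyName=name)
            iam.put_role_policy(RoleName=dst_name, PolicyName=name,
                                PolicyDocument=json.dumps(doc["PolicyDocument"]))
    if tags:      # carry over tags plus clone metadata
        existing = iam.list_role_tags(RoleName=src_name)["Tags"]
        iam.tag_role(RoleName=dst_name, Tags=with_clone_metadata(existing, src_name))
```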

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The IAM Role Clone Portal turns what used to be a manual, error-prone process into a fast and reliable workflow.&lt;br&gt;
By offering a clear interface, one-click cloning, and automatic metadata tagging, it helps teams:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Save time when duplicating roles.&lt;/li&gt;
&lt;li&gt;Maintain consistent policies and tags across environments.&lt;/li&gt;
&lt;li&gt;Track the origin of every cloned role for better auditing.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Whether you’re scaling applications, enforcing least-privilege practices, or simply streamlining IAM administration, this portal provides a secure and efficient way to replicate IAM roles with confidence.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Converting IAM Users to Roles: A Complete Web-Based Solution</title>
      <dc:creator>Ashish Gajjar</dc:creator>
      <pubDate>Sat, 20 Sep 2025 01:07:55 +0000</pubDate>
      <link>https://forem.com/gajjarashish/converting-iam-users-to-roles-a-complete-web-based-solution-1e0g</link>
      <guid>https://forem.com/gajjarashish/converting-iam-users-to-roles-a-complete-web-based-solution-1e0g</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;AWS Identity and Access Management (IAM) has evolved significantly since its inception, and one of the most important security best practices today is migrating from IAM Users to IAM Roles. This shift isn't just a recommendation—it's becoming essential for modern cloud security architecture.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why This Migration Matters
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Security Benefits:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Temporary Credentials&lt;/strong&gt;: Roles provide temporary, automatically rotating credentials&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No Long-term Keys&lt;/strong&gt;: Eliminates the risk of hardcoded access keys&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Principle of Least Privilege&lt;/strong&gt;: Better control over permission scope and duration&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Audit Trail&lt;/strong&gt;: Enhanced logging and monitoring capabilities&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Operational Benefits:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Simplified Management&lt;/strong&gt;: Centralized permission management&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cross-Account Access&lt;/strong&gt;: Seamless integration across AWS accounts&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Service Integration&lt;/strong&gt;: Native support for AWS services&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compliance&lt;/strong&gt;: Better alignment with security frameworks&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The Challenge
&lt;/h3&gt;

&lt;p&gt;While the benefits are clear, the migration process can be complex:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Manual conversion is time-intensive and error-prone&lt;/li&gt;
&lt;li&gt;Risk of permission gaps during transition&lt;/li&gt;
&lt;li&gt;Difficulty in mapping user policies to appropriate roles&lt;/li&gt;
&lt;li&gt;Need for comprehensive testing and validation&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Our Solution
&lt;/h3&gt;

&lt;p&gt;This article introduces a comprehensive web-based tool that automates the entire IAM User to Role conversion process, providing:&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Visual Interface&lt;/strong&gt;: Intuitive web-based management&lt;br&gt;
✅ &lt;strong&gt;Real-time AWS Integration&lt;/strong&gt;: Live data from your AWS account&lt;br&gt;
✅ &lt;strong&gt;Policy Preservation&lt;/strong&gt;: Maintains all existing permissions&lt;br&gt;
✅ &lt;strong&gt;Trust Policy Templates&lt;/strong&gt;: Pre-configured templates for common scenarios&lt;br&gt;
✅ &lt;strong&gt;Conversion Preview&lt;/strong&gt;: See changes before applying them&lt;br&gt;
✅ &lt;strong&gt;Advanced Role Management&lt;/strong&gt;: Post-conversion policy modification tools&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture Overview
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Core Components
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Frontend (Web Interface)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Modern HTML5/CSS3/JavaScript interface&lt;/li&gt;
&lt;li&gt;Responsive design for desktop and mobile&lt;/li&gt;
&lt;li&gt;Real-time status updates and progress tracking&lt;/li&gt;
&lt;li&gt;Tabbed interface for organized workflow&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Backend (Python Server)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Flask-based REST API&lt;/li&gt;
&lt;li&gt;boto3 integration for AWS API calls&lt;/li&gt;
&lt;li&gt;Authentication and session management&lt;/li&gt;
&lt;li&gt;Real-time logging and error handling&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;AWS Integration&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Direct API calls to IAM services&lt;/li&gt;
&lt;li&gt;Secure credential handling&lt;/li&gt;
&lt;li&gt;Comprehensive permission validation&lt;/li&gt;
&lt;li&gt;CloudTrail integration for audit logging&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Data Flow
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Web Browser] ↔ [Python Backend] ↔ [AWS IAM APIs]
      ↓              ↓                ↓
[User Interface] [Business Logic] [AWS Resources]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Key Features Deep Dive
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. User Discovery and Analysis
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftttenwk6v71zokzf1ujw.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftttenwk6v71zokzf1ujw.jpg" alt=" " width="800" height="583"&gt;&lt;/a&gt;&lt;br&gt;
The tool provides comprehensive user analysis capabilities:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;User Listing&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Displays all IAM users in your AWS account&lt;/li&gt;
&lt;li&gt;Real-time search and filtering&lt;/li&gt;
&lt;li&gt;Shows user creation date and last activity&lt;/li&gt;
&lt;li&gt;Identifies users with console access vs. programmatic access&lt;/li&gt;
&lt;/ul&gt;
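&lt;p&gt;As a rough sketch of how such a listing can be gathered with boto3 (function names here are illustrative, not the tool's actual code):&lt;/p&gt;

```python
def list_iam_users():
    """Sketch: page through every IAM user in the account."""
    import boto3  # imported lazily; no AWS call happens until this runs
    iam = boto3.client("iam")
    users = []
    for page in iam.get_paginator("list_users").paginate():
        users.extend(page["Users"])
    return users

def has_console_access(iam, user_name):
    """A login profile exists only for users with console access."""
    try:
        iam.get_login_profile(UserName=user_name)
        return True
    except iam.exceptions.NoSuchEntityException:
        return False
```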

&lt;p&gt;&lt;strong&gt;Policy Analysis&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lists all attached managed policies&lt;/li&gt;
&lt;li&gt;Displays inline policies with JSON formatting&lt;/li&gt;
&lt;li&gt;Shows group memberships and inherited permissions&lt;/li&gt;
&lt;li&gt;Identifies unused or redundant permissions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Security Assessment&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Flags users with administrative privileges&lt;/li&gt;
&lt;li&gt;Identifies users with long-term access keys&lt;/li&gt;
&lt;li&gt;Shows users without MFA enabled&lt;/li&gt;
&lt;li&gt;Highlights potential security risks&lt;/li&gt;
&lt;/ul&gt;
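&lt;p&gt;The assessment rules above can be sketched as a small pure function over an already-fetched user summary (the dict keys and the 90-day threshold are this sketch's own assumptions):&lt;/p&gt;

```python
def assess_user(user):
    """Return a list of security findings for one user summary dict."""
    findings = []
    if user.get("admin"):
        findings.append("administrative privileges")
    if user.get("access_key_age_days", 0) >= 90:  # illustrative threshold
        findings.append("long-lived access key")
    if not user.get("mfa_enabled", False):
        findings.append("MFA not enabled")
    return findings
```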

&lt;h3&gt;
  
  
  2. Role Configuration Engine
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fllykxx05hoajmftazlnp.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fllykxx05hoajmftazlnp.jpg" alt=" " width="800" height="553"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Smart Role Naming&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automatic role name generation based on user name&lt;/li&gt;
&lt;li&gt;Customizable naming conventions&lt;/li&gt;
&lt;li&gt;Conflict detection and resolution&lt;/li&gt;
&lt;li&gt;Validation against AWS naming requirements&lt;/li&gt;
&lt;/ul&gt;
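&lt;p&gt;A minimal sketch of name generation with conflict resolution (the helper name and the "-role" suffix are illustrative assumptions; the character set and 64-character limit follow IAM's naming rules):&lt;/p&gt;

```python
import re

def suggest_role_name(user_name, existing, suffix="-role"):
    """Derive a valid, unique role name from a user name.

    Replaces characters IAM disallows, enforces the 64-char limit,
    and appends a counter on conflicts with names in `existing`.
    """
    base = re.sub(r"[^A-Za-z0-9+=,.@_-]", "-", user_name) + suffix
    base = base[:64]
    candidate, n = base, 2
    while candidate in existing:
        tail = f"-{n}"
        candidate = base[: 64 - len(tail)] + tail
        n += 1
    return candidate
```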

&lt;p&gt;&lt;strong&gt;Trust Policy Templates&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;EC2 Service Role&lt;/strong&gt;: For applications running on EC2 instances&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lambda Execution Role&lt;/strong&gt;: For serverless functions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ECS Task Role&lt;/strong&gt;: For containerized applications&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cross-Account Role&lt;/strong&gt;: For multi-account architectures&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Custom Templates&lt;/strong&gt;: Fully customizable trust policies&lt;/li&gt;
&lt;/ul&gt;
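&lt;p&gt;Each service template boils down to the service principal in the trust policy. A minimal sketch (the template keys are this sketch's own naming):&lt;/p&gt;

```python
# Service principals for the common templates
TRUST_TEMPLATES = {
    "ec2": "ec2.amazonaws.com",
    "lambda": "lambda.amazonaws.com",
    "ecs-tasks": "ecs-tasks.amazonaws.com",
}

def trust_policy(service):
    """Build a service trust policy from a template key."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": TRUST_TEMPLATES[service]},
            "Action": "sts:AssumeRole",
        }],
    }

def cross_account_trust_policy(account_id):
    """Cross-account variant: trust principals in another AWS account."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{account_id}:root"},
            "Action": "sts:AssumeRole",
        }],
    }
```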

&lt;p&gt;&lt;strong&gt;Policy Migration Strategy&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Preserves all managed policy attachments&lt;/li&gt;
&lt;li&gt;Converts inline policies to role inline policies&lt;/li&gt;
&lt;li&gt;Maintains policy versions and metadata&lt;/li&gt;
&lt;li&gt;Validates policy syntax and permissions&lt;/li&gt;
&lt;/ul&gt;
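&lt;p&gt;The migration strategy above can be sketched with boto3 (a simplified outline, not the tool's actual implementation):&lt;/p&gt;

```python
import json

def migrate_user_policies(user_name, role_name):
    """Sketch: re-create a user's permissions on the new role."""
    import boto3  # lazy import; nothing runs until the function is called
    iam = boto3.client("iam")
    # Preserve managed policy attachments
    attached = iam.list_attached_user_policies(UserName=user_name)
    for p in attached["AttachedPolicies"]:
        iam.attach_role_policy(RoleName=role_name, PolicyArn=p["PolicyArn"])
    # Convert inline user policies to role inline policies
    for name in iam.list_user_policies(UserName=user_name)["PolicyNames"]:
        doc = iam.get_user_policy(UserName=user_name, PolicyName=name)
        iam.put_role_policy(RoleName=role_name, PolicyName=name,
                            PolicyDocument=json.dumps(doc["PolicyDocument"]))
```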

&lt;h3&gt;
  
  
  3. Advanced Role Management System
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftiqn18slalj0rtrjv6px.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftiqn18slalj0rtrjv6px.jpg" alt=" " width="800" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Managed Policies Tab
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Policy Attachment Interface&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Browse AWS managed policies by category&lt;/li&gt;
&lt;li&gt;Search customer managed policies&lt;/li&gt;
&lt;li&gt;Bulk attach/detach operations&lt;/li&gt;
&lt;li&gt;Policy version management&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  4. Click Convert to Role
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4tcwyu9u7lbyxk0ay0sh.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4tcwyu9u7lbyxk0ay0sh.jpg" alt=" " width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Verify Role
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe2e5a33wc5mvqwv6b5y9.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe2e5a33wc5mvqwv6b5y9.jpg" alt=" " width="800" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  6. Conclusion
&lt;/h3&gt;

&lt;p&gt;Migrating from long-lived IAM user credentials to short-lived IAM roles is no longer just a best practice—it’s the new security baseline.&lt;br&gt;
By following a structured four-phase plan—Preparation, Conversion, Cleanup, and Policy Optimization—and leveraging Infrastructure-as-Code tools such as Terraform alongside the AWS Management Console (web UI), you can:&lt;/p&gt;

&lt;p&gt;Eliminate static access keys and reduce the risk of credential leaks.&lt;/p&gt;

&lt;p&gt;Provide seamless, temporary access through the Switch Role feature in the AWS web console.&lt;/p&gt;

&lt;p&gt;Maintain or even improve functionality for applications and team members.&lt;/p&gt;

&lt;p&gt;Automate ongoing governance and policy audits.&lt;/p&gt;

&lt;p&gt;This combined Terraform-plus-console approach delivers a complete, web-based solution: code handles repeatable provisioning while the AWS UI enables quick verification and user-friendly role switching.&lt;br&gt;
The result is a more secure, maintainable, and future-ready AWS environment.&lt;/p&gt;

&lt;p&gt;Source Code : &lt;a href="https://github.com/gajjarashish007/GenAI/tree/main/IAM_User_to_Role_creation" rel="noopener noreferrer"&gt;https://github.com/gajjarashish007/GenAI/tree/main/IAM_User_to_Role_creation&lt;/a&gt;&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>aws</category>
      <category>tutorial</category>
      <category>security</category>
    </item>
    <item>
      <title>Deploying an EKS Cluster 1.31 with Terraform, IRSA, and Cluster Autoscaler: A Step-by-Step Guide</title>
      <dc:creator>Ashish Gajjar</dc:creator>
      <pubDate>Tue, 10 Jun 2025 09:08:23 +0000</pubDate>
      <link>https://forem.com/aws-builders/deploying-an-eks-cluster-131-with-terraform-irsa-and-cluster-autoscaler-a-step-by-step-guide-3i8i</link>
      <guid>https://forem.com/aws-builders/deploying-an-eks-cluster-131-with-terraform-irsa-and-cluster-autoscaler-a-step-by-step-guide-3i8i</guid>
      <description>&lt;p&gt;Deploying an EKS (Elastic Kubernetes Service) cluster 1.31 with Terraform involves using Infrastructure as Code (IaC) to automate the creation and management of the EKS cluster and its associated resources. You'll typically define your EKS cluster configuration in Terraform modules, which are reusable and shareable code blocks. This approach allows for consistent, repeatable, and version-controlled deployments. &lt;/p&gt;

&lt;p&gt;The Terraform AWS provider can obtain credentials from several sources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS shared credentials/configuration files&lt;/li&gt;
&lt;li&gt;Environment variables&lt;/li&gt;
&lt;li&gt;Static credentials&lt;/li&gt;
&lt;li&gt;EC2 instance metadata&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Log in to the AWS console&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open IAM Dashboard&lt;/li&gt;
&lt;li&gt;Create a user (username: ashish).&lt;/li&gt;
&lt;li&gt;Attach AdministratorAccess policy.&lt;/li&gt;
&lt;li&gt;Create access and secret key.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Create an EC2 instance&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open the EC2 Dashboard.&lt;/li&gt;
&lt;li&gt;Launch instance&lt;/li&gt;
&lt;li&gt;Name and Tags : MyTest&lt;/li&gt;
&lt;li&gt;Application and OS Image ( AMI ) : Amazon Linux 2023 AMI&lt;/li&gt;
&lt;li&gt;Instance Type: t2.micro&lt;/li&gt;
&lt;li&gt;Keypair : ashish.pem&lt;/li&gt;
&lt;li&gt;Network Settings : VPC, subnet&lt;/li&gt;
&lt;li&gt;Security Group : 22 - SSH (inbound)&lt;/li&gt;
&lt;li&gt;Storage : Min 8 GiB , GP3&lt;/li&gt;
&lt;li&gt;Click Launch instance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Log in to the EC2 instance and configure the access/secret key.&lt;/strong&gt;&lt;br&gt;
Log in to the EC2 instance.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh -i "myeks.pem" ec2-user@ec2-54-241-103-53.us-west-1.compute.amazonaws.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Configure Access key and Secret key using AWS CLI.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@ip-172-31-88-31 ~]# aws configure
AWS Access Key ID [None]: ****************4E4R
AWS Secret Access Key [None]: ****************HRJx
Default region name [None]:
Default output format [None]:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install terraform&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo yum install -y yum-utils shadow-utils
sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/AmazonLinux/hashicorp.repo
sudo yum -y install terraform
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Deploying an EKS Cluster 1.31 with Terraform&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create a folder&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@ip-172-31-6-151 ~]# mkdir eks_terraform
[root@ip-172-31-6-151 ~]# cd eks_terraform
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Create the AWS provider configuration and save it as eks_terraform/provider.tf&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;provider.tf&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@ip-172-31-6-151 eks_terraform]# ls -lth
total 4.0K
-rw-r--r--. 1 root root 188 Jun 10 07:04 provider.tf
[root@ip-172-31-6-151 eks_terraform]# cat provider.tf
# Configure the AWS Provider
provider "aws" {
  region = "us-east-1"
}

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&amp;gt; 4.0"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Notes:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the file above, you only need to specify the region where you want to create the VPC and EKS cluster.&lt;/li&gt;
&lt;li&gt;Additionally, you can set version constraints on the AWS provider and on any other providers you use in your code.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Output :&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@ip-172-31-6-151 eks_terraform]# terraform init
Initializing the backend...
Initializing provider plugins...
- Finding hashicorp/aws versions matching "~&amp;gt; 4.0"...
- Installing hashicorp/aws v4.67.0...
- Installed hashicorp/aws v4.67.0 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
[root@ip-172-31-6-151 eks_terraform]# terraform plan

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The next step is to create a virtual private cloud in AWS using the aws_vpc resource.&lt;/p&gt;

&lt;p&gt;There is one required argument: the CIDR block, which sets the size of your network. A 10.0.0.0/16 block gives you approximately 65 thousand IP addresses. For your convenience, you can also tag it, for example, myvpc.&lt;/p&gt;

&lt;p&gt;Save it as&lt;br&gt;
&lt;code&gt;vpc.tf&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create a VPC
resource "aws_vpc" "myvpc" {
  cidr_block = "10.0.0.0/16"
  tags = {
    Name = "myvpc"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Internet Gateway AWS using Terraform&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To provide internet access for your services, we need an internet gateway in our VPC. Attach it to the VPC we just created; it will serve as the default route in the public subnets. Give it a name, eks_terraform/igw.tf:
&lt;code&gt;igw.tf&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_internet_gateway" "myvpc-igw" {
  vpc_id = aws_vpc.myvpc.id

  tags = {
    Name = "myvpc-igw"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Create private and public subnets.&lt;/strong&gt;&lt;br&gt;
Now we need to create four subnets.&lt;/p&gt;

&lt;p&gt;To meet EKS requirements, we must have two public and two private subnets in different availability zones.&lt;br&gt;
&lt;code&gt;subnets.tf&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create a VPC
resource "aws_vpc" "myvpc" {
  cidr_block = "10.0.0.0/16"
  tags = {
    Name = "myvpc"
  }
}
[root@ip-172-31-6-151 eks_terraform]# cat igw.tf
resource "aws_internet_gateway" "myvpc-igw" {
  vpc_id = aws_vpc.myvpc.id

  tags = {
    Name = "myvpc-igw"
  }
}
[root@ip-172-31-6-151 eks_terraform]# cat subnets.tf
# private subnet 01

resource "aws_subnet" "private-us-east-1a" {
  vpc_id            = aws_vpc.myvpc.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-east-1a"

  tags = {
    Name                              = "private-us-east-1a"
    "kubernetes.io/role/internal-elb" = "1"
    "kubernetes.io/cluster/demo"      = "owned"
  }
}
# private subnet 02

resource "aws_subnet" "private-us-east-1b" {
  vpc_id            = aws_vpc.myvpc.id
  cidr_block        = "10.0.2.0/24"
  availability_zone = "us-east-1b"

  tags = {
    Name                              = "private-us-east-1b"
    "kubernetes.io/role/internal-elb" = "1"
    "kubernetes.io/cluster/demo"      = "owned"
  }
}

# public subnet 01

resource "aws_subnet" "public-us-east-1a" {
  vpc_id                  = aws_vpc.myvpc.id
  cidr_block              = "10.0.3.0/24"
  availability_zone       = "us-east-1a"
  map_public_ip_on_launch = true

  tags = {
    Name                         = "public-us-east-1a"
    "kubernetes.io/role/elb"     = "1" #this instruct the kubernetes to create public load balancer in these subnets
    "kubernetes.io/cluster/demo" = "owned"
  }
}
# public subnet 02

resource "aws_subnet" "public-us-east-1b" {
  vpc_id                  = aws_vpc.myvpc.id
  cidr_block              = "10.0.4.0/24"
  availability_zone       = "us-east-1b"
  map_public_ip_on_launch = true

  tags = {
    Name                         = "public-us-east-1b"
    "kubernetes.io/role/elb"     = "1" #this instruct the kubernetes to create public load balancer in these subnets
    "kubernetes.io/cluster/demo" = "owned"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In contrast to the VPC resources above, where tags are only for our convenience, EKS requires certain tags on the subnets to function properly.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For the private subnets, Name is just a simple display tag, while the “kubernetes.io/role/internal-elb” tag is used by Kubernetes to discover the subnets where private (internal) load balancers should be created. You must also tag each subnet with “kubernetes.io/cluster/&amp;lt;cluster-name&amp;gt;” (here the cluster name is demo), and the value can be owned if the subnet is used only by this cluster.&lt;/li&gt;
&lt;li&gt;The availability_zone differs between the two private subnets, as EKS requires subnets in at least two availability zones.&lt;/li&gt;
&lt;li&gt;Each subnet also has its own cidr_block: a /24 block such as 10.0.1.0/24 provides 256 IP addresses.&lt;/li&gt;
&lt;li&gt;The public subnets use the same two availability zones as the private subnets.&lt;/li&gt;
&lt;/ul&gt;
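&lt;p&gt;These tagging rules are easy to verify programmatically. A small Python sketch (helper names are illustrative) that reports which required EKS tags a subnet is missing:&lt;/p&gt;

```python
def required_subnet_tags(cluster, public):
    """Tags EKS load-balancer discovery expects on a subnet."""
    role = "kubernetes.io/role/elb" if public else "kubernetes.io/role/internal-elb"
    return {role: "1", f"kubernetes.io/cluster/{cluster}": "owned"}

def missing_tags(subnet_tags, cluster, public):
    """Return the required tags a subnet is missing or has set wrongly."""
    required = required_subnet_tags(cluster, public)
    return {k: v for k, v in required.items() if subnet_tags.get(k) != v}
```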

&lt;p&gt;&lt;strong&gt;Create NAT Gateway&lt;/strong&gt;&lt;br&gt;
Now it's time to create a NAT gateway. It allows services in the private subnets to reach the internet. An important note: the NAT gateway itself must be placed in a public subnet, because it forwards outbound packets to the internet through the internet gateway.&lt;/p&gt;

&lt;p&gt;For the NAT gateway we first need to allocate an Elastic IP address. Then we can use it in the aws_nat_gateway resource.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;nat.tf&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_eip" "nat" {
  vpc = true

  tags = {
    Name = "nat"
  }
}

resource "aws_nat_gateway" "k8s-nat" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public-us-east-1a.id

  tags = {
    Name = "k8s-nat"
  }

  depends_on = [aws_internet_gateway.myvpc-igw]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The important part above is subnet_id = aws_subnet.public-us-east-1a.id: the NAT gateway must be placed in a public subnet, one whose default route points to the internet gateway.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;By now, we have created subnets, an internet gateway, and a NAT gateway. It’s time to create routing tables and associate the subnets with them.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;routes.tf&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# routing table
resource "aws_route_table" "private" {
  vpc_id = aws_vpc.myvpc.id

  route {
      cidr_block                 = "0.0.0.0/0"
      nat_gateway_id             = aws_nat_gateway.k8s-nat.id
    }

  tags = {
    Name = "private"
  }
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.myvpc.id

  route {
      cidr_block                 = "0.0.0.0/0"
      gateway_id                 = aws_internet_gateway.myvpc-igw.id
    }

  tags = {
    Name = "public"
  }
}


# routing table association

resource "aws_route_table_association" "private-us-east-1a" {
  subnet_id      = aws_subnet.private-us-east-1a.id
  route_table_id = aws_route_table.private.id
}

resource "aws_route_table_association" "private-us-east-1b" {
  subnet_id      = aws_subnet.private-us-east-1b.id
  route_table_id = aws_route_table.private.id
}

resource "aws_route_table_association" "public-us-east-1a" {
  subnet_id      = aws_subnet.public-us-east-1a.id
  route_table_id = aws_route_table.public.id
}

resource "aws_route_table_association" "public-us-east-1b" {
  subnet_id      = aws_subnet.public-us-east-1b.id
  route_table_id = aws_route_table.public.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The VPC configuration is complete. We have created the VPC using Terraform.&lt;/p&gt;

&lt;p&gt;Directory structure :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@ip-172-31-6-151 eks_terraform]# tree
.
├── igw.tf
├── nat.tf
├── provider.tf
├── routes.tf
├── subnets.tf
└── vpc.tf

0 directories, 6 files

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;Terraform plan&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@ip-172-31-6-151 eks_terraform]# tree
.
├── igw.tf
├── nat.tf
├── provider.tf
├── routes.tf
├── subnets.tf
└── vpc.tf

0 directories, 6 files
[root@ip-172-31-6-151 eks_terraform]# terraform plan

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_eip.nat will be created
  + resource "aws_eip" "nat" {
      + allocation_id        = (known after apply)
      + association_id       = (known after apply)
      + carrier_ip           = (known after apply)
      + customer_owned_ip    = (known after apply)
      + domain               = (known after apply)
      + id                   = (known after apply)
      + instance             = (known after apply)
      + network_border_group = (known after apply)
      + network_interface    = (known after apply)
      + private_dns          = (known after apply)
      + private_ip           = (known after apply)
      + public_dns           = (known after apply)
      + public_ip            = (known after apply)
      + public_ipv4_pool     = (known after apply)
      + tags                 = {
          + "Name" = "nat"
        }
      + tags_all             = {
          + "Name" = "nat"
        }
      + vpc                  = true
    }

  # aws_internet_gateway.myvpc-igw will be created
  + resource "aws_internet_gateway" "myvpc-igw" {
      + arn      = (known after apply)
      + id       = (known after apply)
      + owner_id = (known after apply)
      + tags     = {
          + "Name" = "myvpc-igw"
        }
      + tags_all = {
          + "Name" = "myvpc-igw"
        }
      + vpc_id   = (known after apply)
    }

  # aws_nat_gateway.k8s-nat will be created
  + resource "aws_nat_gateway" "k8s-nat" {
      + allocation_id        = (known after apply)
      + association_id       = (known after apply)
      + connectivity_type    = "public"
      + id                   = (known after apply)
      + network_interface_id = (known after apply)
      + private_ip           = (known after apply)
      + public_ip            = (known after apply)
      + subnet_id            = (known after apply)
      + tags                 = {
          + "Name" = "k8s-nat"
        }
      + tags_all             = {
          + "Name" = "k8s-nat"
        }
    }

  # aws_route_table.private will be created
  + resource "aws_route_table" "private" {
      + arn              = (known after apply)
      + id               = (known after apply)
      + owner_id         = (known after apply)
      + propagating_vgws = (known after apply)
      + route            = [
          + {
              + cidr_block                 = "0.0.0.0/0"
              + nat_gateway_id             = (known after apply)
                # (12 unchanged attributes hidden)
            },
        ]
      + tags             = {
          + "Name" = "private"
        }
      + tags_all         = {
          + "Name" = "private"
        }
      + vpc_id           = (known after apply)
    }

  # aws_route_table.public will be created
  + resource "aws_route_table" "public" {
      + arn              = (known after apply)
      + id               = (known after apply)
      + owner_id         = (known after apply)
      + propagating_vgws = (known after apply)
      + route            = [
          + {
              + cidr_block                 = "0.0.0.0/0"
              + gateway_id                 = (known after apply)
                # (12 unchanged attributes hidden)
            },
        ]
      + tags             = {
          + "Name" = "public"
        }
      + tags_all         = {
          + "Name" = "public"
        }
      + vpc_id           = (known after apply)
    }

  # aws_route_table_association.private-us-east-1a will be created
  + resource "aws_route_table_association" "private-us-east-1a" {
      + id             = (known after apply)
      + route_table_id = (known after apply)
      + subnet_id      = (known after apply)
    }

  # aws_route_table_association.private-us-east-1b will be created
  + resource "aws_route_table_association" "private-us-east-1b" {
      + id             = (known after apply)
      + route_table_id = (known after apply)
      + subnet_id      = (known after apply)
    }

  # aws_route_table_association.public-us-east-1a will be created
  + resource "aws_route_table_association" "public-us-east-1a" {
      + id             = (known after apply)
      + route_table_id = (known after apply)
      + subnet_id      = (known after apply)
    }

  # aws_route_table_association.public-us-east-1b will be created
  + resource "aws_route_table_association" "public-us-east-1b" {
      + id             = (known after apply)
      + route_table_id = (known after apply)
      + subnet_id      = (known after apply)
    }

  # aws_subnet.private-us-east-1a will be created
  + resource "aws_subnet" "private-us-east-1a" {
      + arn                                            = (known after apply)
      + assign_ipv6_address_on_creation                = false
      + availability_zone                              = "us-east-1a"
      + availability_zone_id                           = (known after apply)
      + cidr_block                                     = "10.0.1.0/24"
      + enable_dns64                                   = false
      + enable_resource_name_dns_a_record_on_launch    = false
      + enable_resource_name_dns_aaaa_record_on_launch = false
      + id                                             = (known after apply)
      + ipv6_cidr_block_association_id                 = (known after apply)
      + ipv6_native                                    = false
      + map_public_ip_on_launch                        = false
      + owner_id                                       = (known after apply)
      + private_dns_hostname_type_on_launch            = (known after apply)
      + tags                                           = {
          + "Name"                            = "private-us-east-1a"
          + "kubernetes.io/cluster/demo"      = "owned"
          + "kubernetes.io/role/internal-elb" = "1"
        }
      + tags_all                                       = {
          + "Name"                            = "private-us-east-1a"
          + "kubernetes.io/cluster/demo"      = "owned"
          + "kubernetes.io/role/internal-elb" = "1"
        }
      + vpc_id                                         = (known after apply)
    }

  # aws_subnet.private-us-east-1b will be created
  + resource "aws_subnet" "private-us-east-1b" {
      + arn                                            = (known after apply)
      + assign_ipv6_address_on_creation                = false
      + availability_zone                              = "us-east-1b"
      + availability_zone_id                           = (known after apply)
      + cidr_block                                     = "10.0.2.0/24"
      + enable_dns64                                   = false
      + enable_resource_name_dns_a_record_on_launch    = false
      + enable_resource_name_dns_aaaa_record_on_launch = false
      + id                                             = (known after apply)
      + ipv6_cidr_block_association_id                 = (known after apply)
      + ipv6_native                                    = false
      + map_public_ip_on_launch                        = false
      + owner_id                                       = (known after apply)
      + private_dns_hostname_type_on_launch            = (known after apply)
      + tags                                           = {
          + "Name"                            = "private-us-east-1b"
          + "kubernetes.io/cluster/demo"      = "owned"
          + "kubernetes.io/role/internal-elb" = "1"
        }
      + tags_all                                       = {
          + "Name"                            = "private-us-east-1b"
          + "kubernetes.io/cluster/demo"      = "owned"
          + "kubernetes.io/role/internal-elb" = "1"
        }
      + vpc_id                                         = (known after apply)
    }

  # aws_subnet.public-us-east-1a will be created
  + resource "aws_subnet" "public-us-east-1a" {
      + arn                                            = (known after apply)
      + assign_ipv6_address_on_creation                = false
      + availability_zone                              = "us-east-1a"
      + availability_zone_id                           = (known after apply)
      + cidr_block                                     = "10.0.3.0/24"
      + enable_dns64                                   = false
      + enable_resource_name_dns_a_record_on_launch    = false
      + enable_resource_name_dns_aaaa_record_on_launch = false
      + id                                             = (known after apply)
      + ipv6_cidr_block_association_id                 = (known after apply)
      + ipv6_native                                    = false
      + map_public_ip_on_launch                        = true
      + owner_id                                       = (known after apply)
      + private_dns_hostname_type_on_launch            = (known after apply)
      + tags                                           = {
          + "Name"                       = "public-us-east-1a"
          + "kubernetes.io/cluster/demo" = "owned"
          + "kubernetes.io/role/elb"     = "1"
        }
      + tags_all                                       = {
          + "Name"                       = "public-us-east-1a"
          + "kubernetes.io/cluster/demo" = "owned"
          + "kubernetes.io/role/elb"     = "1"
        }
      + vpc_id                                         = (known after apply)
    }

  # aws_subnet.public-us-east-1b will be created
  + resource "aws_subnet" "public-us-east-1b" {
      + arn                                            = (known after apply)
      + assign_ipv6_address_on_creation                = false
      + availability_zone                              = "us-east-1b"
      + availability_zone_id                           = (known after apply)
      + cidr_block                                     = "10.0.4.0/24"
      + enable_dns64                                   = false
      + enable_resource_name_dns_a_record_on_launch    = false
      + enable_resource_name_dns_aaaa_record_on_launch = false
      + id                                             = (known after apply)
      + ipv6_cidr_block_association_id                 = (known after apply)
      + ipv6_native                                    = false
      + map_public_ip_on_launch                        = true
      + owner_id                                       = (known after apply)
      + private_dns_hostname_type_on_launch            = (known after apply)
      + tags                                           = {
          + "Name"                       = "public-us-east-1b"
          + "kubernetes.io/cluster/demo" = "owned"
          + "kubernetes.io/role/elb"     = "1"
        }
      + tags_all                                       = {
          + "Name"                       = "public-us-east-1b"
          + "kubernetes.io/cluster/demo" = "owned"
          + "kubernetes.io/role/elb"     = "1"
        }
      + vpc_id                                         = (known after apply)
    }

  # aws_vpc.myvpc will be created
  + resource "aws_vpc" "myvpc" {
      + arn                                  = (known after apply)
      + cidr_block                           = "10.0.0.0/16"
      + default_network_acl_id               = (known after apply)
      + default_route_table_id               = (known after apply)
      + default_security_group_id            = (known after apply)
      + dhcp_options_id                      = (known after apply)
      + enable_classiclink                   = (known after apply)
      + enable_classiclink_dns_support       = (known after apply)
      + enable_dns_hostnames                 = (known after apply)
      + enable_dns_support                   = true
      + enable_network_address_usage_metrics = (known after apply)
      + id                                   = (known after apply)
      + instance_tenancy                     = "default"
      + ipv6_association_id                  = (known after apply)
      + ipv6_cidr_block                      = (known after apply)
      + ipv6_cidr_block_network_border_group = (known after apply)
      + main_route_table_id                  = (known after apply)
      + owner_id                             = (known after apply)
      + tags                                 = {
          + "Name" = "myvpc"
        }
      + tags_all                             = {
          + "Name" = "myvpc"
        }
    }

Plan: 14 to add, 0 to change, 0 to destroy.

─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Create EKS cluster&lt;/strong&gt;&lt;br&gt;
Amazon EKS manages the Kubernetes control plane for you and makes calls to other AWS services on your behalf to manage the resources your cluster uses.&lt;/p&gt;

&lt;p&gt;Before you can create an Amazon EKS cluster, you must create an IAM role that EKS can assume, with the AmazonEKSClusterPolicy attached.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;eks.tf&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# IAM role for eks

resource "aws_iam_role" "demo" {
  name = "ashish"
  tags = {
    tag-key = "ashish"
  }

  assume_role_policy = &amp;lt;&amp;lt;POLICY
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": [
                    "eks.amazonaws.com"
                ]
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
POLICY
}
# eks policy attachment

resource "aws_iam_role_policy_attachment" "demo-AmazonEKSClusterPolicy" {
  role       = aws_iam_role.demo.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
}
# bare minimum requirement of eks

resource "aws_eks_cluster" "demo" {
  name     = "ashish"
  version  = "1.31"
  role_arn = aws_iam_role.demo.arn

  vpc_config {
    subnet_ids = [
      aws_subnet.private-us-east-1a.id,
      aws_subnet.private-us-east-1b.id,
      aws_subnet.public-us-east-1a.id,
      aws_subnet.public-us-east-1b.id
    ]
  }

  depends_on = [aws_iam_role_policy_attachment.demo-AmazonEKSClusterPolicy]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Notes:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Here, depends_on in the aws_eks_cluster resource ensures the EKS cluster is not created until the IAM role and its policy attachment are ready.&lt;/li&gt;
&lt;li&gt;Next, we are going to create a single instance group for Kubernetes. Similar to the EKS cluster, it requires an IAM role as well.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;nodes.tf&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@ip-172-31-6-151 eks_terraform]# cat nodes.tf
# role for nodegroup

resource "aws_iam_role" "nodes" {
  name = "eks-node-group-nodes"

  assume_role_policy = jsonencode({
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Principal = {
        Service = "ec2.amazonaws.com"
      }
    }]
    Version = "2012-10-17"
  })
}

# IAM policy attachment to nodegroup

resource "aws_iam_role_policy_attachment" "nodes-AmazonEKSWorkerNodePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = aws_iam_role.nodes.name
}

resource "aws_iam_role_policy_attachment" "nodes-AmazonEKS_CNI_Policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = aws_iam_role.nodes.name
}

resource "aws_iam_role_policy_attachment" "nodes-AmazonEC2ContainerRegistryReadOnly" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.nodes.name
}


# aws node group

resource "aws_eks_node_group" "private-nodes" {
  cluster_name    = aws_eks_cluster.demo.name
  node_group_name = "private-nodes"
  node_role_arn   = aws_iam_role.nodes.arn

  subnet_ids = [
    aws_subnet.private-us-east-1a.id,
    aws_subnet.private-us-east-1b.id
  ]

  capacity_type  = "ON_DEMAND"
  instance_types = ["t2.medium"]

  scaling_config {
    desired_size = 1
    max_size     = 10
    min_size     = 0
  }

  update_config {
    max_unavailable = 1
  }

  labels = {
    node = "kubenode02"
  }

  # taint {
  #   key    = "team"
  #   value  = "devops"
  #   effect = "NO_SCHEDULE"
  # }

  # launch_template {
  #   name    = aws_launch_template.eks-with-disks.name
  #   version = aws_launch_template.eks-with-disks.latest_version
  # }

  depends_on = [
    aws_iam_role_policy_attachment.nodes-AmazonEKSWorkerNodePolicy,
    aws_iam_role_policy_attachment.nodes-AmazonEKS_CNI_Policy,
    aws_iam_role_policy_attachment.nodes-AmazonEC2ContainerRegistryReadOnly,
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Notes:&lt;/strong&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The first policy is AmazonEKSWorkerNodePolicy, which is required to allow EC2 instances to interact with the EKS cluster.&lt;/li&gt;
&lt;li&gt;The second policy is AmazonEKS_CNI_Policy, which the VPC CNI plugin needs to manage pod networking (ENIs and IP addresses).&lt;/li&gt;
&lt;li&gt;The last one is AmazonEC2ContainerRegistryReadOnly, which allows nodes to pull container images from Amazon ECR.&lt;/li&gt;
&lt;li&gt;In the aws_eks_node_group resource, you have many options to configure the Kubernetes worker nodes.&lt;/li&gt;
&lt;li&gt;Here, we specify the cluster name, node group name, and IAM role, along with two private subnets.&lt;/li&gt;
&lt;li&gt;If you need nodes with public IPs, simply replace the private subnet IDs with public ones.&lt;/li&gt;
&lt;li&gt;For capacity, you can choose between on-demand and spot instances (spot instances are much cheaper but can be terminated by AWS at any time).&lt;/li&gt;
&lt;li&gt;When it comes to scaling, it's important to understand the scaling configuration.&lt;/li&gt;
&lt;li&gt;By default, EKS will not auto-scale your nodes.&lt;/li&gt;
&lt;li&gt;To enable auto-scaling, you need to deploy an additional component in Kubernetes called the Cluster Autoscaler.&lt;/li&gt;
&lt;li&gt;You can define the minimum and maximum number of nodes using the min_size and max_size attributes.&lt;/li&gt;
&lt;li&gt;EKS uses these settings to create an Auto Scaling Group, and then the Cluster Autoscaler adjusts the desired_size based on load.&lt;/li&gt;
&lt;li&gt;You can also define labels and taints for your nodes.&lt;/li&gt;
&lt;li&gt;Labels can be used by the Kubernetes scheduler to place pods on specific node groups using node affinity or node selectors.&lt;/li&gt;
&lt;li&gt;To manage application permissions within Kubernetes, you can either attach IAM policies directly to the node role — in which case all pods will have the same access to AWS resources — or, a better option is to create an OpenID Connect (OIDC) provider.&lt;/li&gt;
&lt;li&gt;This allows granting IAM permissions based on the service account used by each pod.&lt;/li&gt;
&lt;li&gt;In our case, we'll use an OIDC provider to grant permissions specifically to the service account used by the Cluster Autoscaler so it can scale our nodes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;iam-oidc.tf&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "tls_certificate" "eks" {
  url = aws_eks_cluster.demo.identity[0].oidc[0].issuer
}

resource "aws_iam_openid_connect_provider" "eks" {
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = [data.tls_certificate.eks.certificates[0].sha1_fingerprint]
  url             = aws_eks_cluster.demo.identity[0].oidc[0].issuer
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;autoscaler.tf&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "tls_certificate" "eks" {
  url = aws_eks_cluster.demo.identity[0].oidc[0].issuer
}

resource "aws_iam_openid_connect_provider" "eks" {
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = [data.tls_certificate.eks.certificates[0].sha1_fingerprint]
  url             = aws_eks_cluster.demo.identity[0].oidc[0].issuer
}
[root@ip-172-31-6-151 eks_terraform]# cp ../my_eks/autoscaler.tf .
[root@ip-172-31-6-151 eks_terraform]# cat autoscaler.tf
data "aws_iam_policy_document" "eks_cluster_autoscaler_assume_role_policy" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]
    effect  = "Allow"

    condition {
      test     = "StringEquals"
      variable = "${replace(aws_iam_openid_connect_provider.eks.url, "https://", "")}:sub"
      values   = ["system:serviceaccount:kube-system:cluster-autoscaler"]
    }

    principals {
      identifiers = [aws_iam_openid_connect_provider.eks.arn]
      type        = "Federated"
    }
  }
}

resource "aws_iam_role" "eks_cluster_autoscaler" {
  assume_role_policy = data.aws_iam_policy_document.eks_cluster_autoscaler_assume_role_policy.json
  name               = "eks-cluster-autoscaler"
}

resource "aws_iam_policy" "eks_cluster_autoscaler" {
  name = "eks-cluster-autoscaler"

  policy = jsonencode({
    Statement = [{
      Action = [
                "autoscaling:DescribeAutoScalingGroups",
                "autoscaling:DescribeAutoScalingInstances",
                "autoscaling:DescribeLaunchConfigurations",
                "autoscaling:DescribeTags",
                "autoscaling:SetDesiredCapacity",
                "autoscaling:TerminateInstanceInAutoScalingGroup",
                "ec2:DescribeLaunchTemplateVersions"
            ]
      Effect   = "Allow"
      Resource = "*"
    }]
    Version = "2012-10-17"
  })
}

resource "aws_iam_role_policy_attachment" "eks_cluster_autoscaler_attach" {
  role       = aws_iam_role.eks_cluster_autoscaler.name
  policy_arn = aws_iam_policy.eks_cluster_autoscaler.arn
}

output "eks_cluster_autoscaler_arn" {
  value = aws_iam_role.eks_cluster_autoscaler.arn
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
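&lt;p&gt;After applying, the role ARN printed by the &lt;code&gt;eks_cluster_autoscaler_arn&lt;/code&gt; output has to be attached to the Cluster Autoscaler's service account for IRSA to take effect. A minimal sketch, assuming the service account is named &lt;code&gt;cluster-autoscaler&lt;/code&gt; in &lt;code&gt;kube-system&lt;/code&gt; (matching the trust policy above) and using a placeholder account ID:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Annotate the Cluster Autoscaler's service account with the IAM role ARN
# (111122223333 is a placeholder; substitute your AWS account ID)
kubectl annotate serviceaccount cluster-autoscaler \
  --namespace kube-system \
  eks.amazonaws.com/role-arn=arn:aws:iam::111122223333:role/eks-cluster-autoscaler
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With this annotation in place, pods using that service account receive temporary credentials scoped to the autoscaler policy instead of inheriting the node role's permissions.&lt;/p&gt;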



&lt;p&gt;Everything is in place, so create the infrastructure with the &lt;code&gt;terraform apply&lt;/code&gt; command.&lt;/p&gt;
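&lt;p&gt;A typical workflow saves the plan with &lt;code&gt;-out&lt;/code&gt; so that &lt;code&gt;terraform apply&lt;/code&gt; executes exactly the reviewed actions (the file name &lt;code&gt;tfplan&lt;/code&gt; is an arbitrary choice):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Initialize providers, review the plan, then apply the saved plan file
terraform init
terraform plan -out=tfplan
terraform apply tfplan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;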

&lt;p&gt;&lt;code&gt;Directory Structure&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@ip-172-31-6-151 eks_terraform]# tree
.
├── autoscaler.tf
├── eks.tf
├── iam-oidc.tf
├── igw.tf
├── nat.tf
├── nodes.tf
├── provider.tf
├── routes.tf
├── subnets.tf
└── vpc.tf

0 directories, 10 files

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;terraform plan&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@ip-172-31-6-151 eks_terraform]# terraform plan

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create
 &amp;lt;= read (data resources)

Terraform will perform the following actions:

  # data.aws_iam_policy_document.eks_cluster_autoscaler_assume_role_policy will be read during apply
  # (config refers to values not yet known)
 &amp;lt;= data "aws_iam_policy_document" "eks_cluster_autoscaler_assume_role_policy" {
      + id   = (known after apply)
      + json = (known after apply)

      + statement {
          + actions = [
              + "sts:AssumeRoleWithWebIdentity",
            ]
          + effect  = "Allow"

          + condition {
              + test     = "StringEquals"
              + values   = [
                  + "system:serviceaccount:kube-system:cluster-autoscaler",
                ]
              + variable = (known after apply)
            }

          + principals {
              + identifiers = [
                  + (known after apply),
                ]
              + type        = "Federated"
            }
        }
    }

  # data.tls_certificate.eks will be read during apply
  # (config refers to values not yet known)
 &amp;lt;= data "tls_certificate" "eks" {
      + certificates = (known after apply)
      + id           = (known after apply)
      + url          = (known after apply)
    }

  # aws_eip.nat will be created
  + resource "aws_eip" "nat" {
      + allocation_id        = (known after apply)
      + association_id       = (known after apply)
      + carrier_ip           = (known after apply)
      + customer_owned_ip    = (known after apply)
      + domain               = (known after apply)
      + id                   = (known after apply)
      + instance             = (known after apply)
      + network_border_group = (known after apply)
      + network_interface    = (known after apply)
      + private_dns          = (known after apply)
      + private_ip           = (known after apply)
      + public_dns           = (known after apply)
      + public_ip            = (known after apply)
      + public_ipv4_pool     = (known after apply)
      + tags                 = {
          + "Name" = "nat"
        }
      + tags_all             = {
          + "Name" = "nat"
        }
      + vpc                  = true
    }

  # aws_eks_cluster.demo will be created
  + resource "aws_eks_cluster" "demo" {
      + arn                   = (known after apply)
      + certificate_authority = (known after apply)
      + cluster_id            = (known after apply)
      + created_at            = (known after apply)
      + endpoint              = (known after apply)
      + id                    = (known after apply)
      + identity              = (known after apply)
      + name                  = "ashish"
      + platform_version      = (known after apply)
      + role_arn              = (known after apply)
      + status                = (known after apply)
      + tags_all              = (known after apply)
      + version               = "1.31"

      + kubernetes_network_config (known after apply)

      + vpc_config {
          + cluster_security_group_id = (known after apply)
          + endpoint_private_access   = false
          + endpoint_public_access    = true
          + public_access_cidrs       = (known after apply)
          + subnet_ids                = (known after apply)
          + vpc_id                    = (known after apply)
        }
    }

  # aws_eks_node_group.private-nodes will be created
  + resource "aws_eks_node_group" "private-nodes" {
      + ami_type               = (known after apply)
      + arn                    = (known after apply)
      + capacity_type          = "ON_DEMAND"
      + cluster_name           = "ashish"
      + disk_size              = (known after apply)
      + id                     = (known after apply)
      + instance_types         = [
          + "t2.medium",
        ]
      + labels                 = {
          + "node" = "kubenode02"
        }
      + node_group_name        = "private-nodes"
      + node_group_name_prefix = (known after apply)
      + node_role_arn          = (known after apply)
      + release_version        = (known after apply)
      + resources              = (known after apply)
      + status                 = (known after apply)
      + subnet_ids             = (known after apply)
      + tags_all               = (known after apply)
      + version                = (known after apply)

      + scaling_config {
          + desired_size = 1
          + max_size     = 10
          + min_size     = 0
        }

      + update_config {
          + max_unavailable = 1
        }
    }

  # aws_iam_openid_connect_provider.eks will be created
  + resource "aws_iam_openid_connect_provider" "eks" {
      + arn             = (known after apply)
      + client_id_list  = [
          + "sts.amazonaws.com",
        ]
      + id              = (known after apply)
      + tags_all        = (known after apply)
      + thumbprint_list = (known after apply)
      + url             = (known after apply)
    }

  # aws_iam_policy.eks_cluster_autoscaler will be created
  + resource "aws_iam_policy" "eks_cluster_autoscaler" {
      + arn         = (known after apply)
      + id          = (known after apply)
      + name        = "eks-cluster-autoscaler"
      + name_prefix = (known after apply)
      + path        = "/"
      + policy      = jsonencode(
            {
              + Statement = [
                  + {
                      + Action   = [
                          + "autoscaling:DescribeAutoScalingGroups",
                          + "autoscaling:DescribeAutoScalingInstances",
                          + "autoscaling:DescribeLaunchConfigurations",
                          + "autoscaling:DescribeTags",
                          + "autoscaling:SetDesiredCapacity",
                          + "autoscaling:TerminateInstanceInAutoScalingGroup",
                          + "ec2:DescribeLaunchTemplateVersions",
                        ]
                      + Effect   = "Allow"
                      + Resource = "*"
                    },
                ]
              + Version   = "2012-10-17"
            }
        )
      + policy_id   = (known after apply)
      + tags_all    = (known after apply)
    }

  # aws_iam_role.demo will be created
  + resource "aws_iam_role" "demo" {
      + arn                   = (known after apply)
      + assume_role_policy    = jsonencode(
            {
              + Statement = [
                  + {
                      + Action    = "sts:AssumeRole"
                      + Effect    = "Allow"
                      + Principal = {
                          + Service = [
                              + "eks.amazonaws.com",
                            ]
                        }
                    },
                ]
              + Version   = "2012-10-17"
            }
        )
      + create_date           = (known after apply)
      + force_detach_policies = false
      + id                    = (known after apply)
      + managed_policy_arns   = (known after apply)
      + max_session_duration  = 3600
      + name                  = "ashish"
      + name_prefix           = (known after apply)
      + path                  = "/"
      + role_last_used        = (known after apply)
      + tags                  = {
          + "tag-key" = "ashish"
        }
      + tags_all              = {
          + "tag-key" = "ashish"
        }
      + unique_id             = (known after apply)

      + inline_policy (known after apply)
    }

  # aws_iam_role.eks_cluster_autoscaler will be created
  + resource "aws_iam_role" "eks_cluster_autoscaler" {
      + arn                   = (known after apply)
      + assume_role_policy    = (known after apply)
      + create_date           = (known after apply)
      + force_detach_policies = false
      + id                    = (known after apply)
      + managed_policy_arns   = (known after apply)
      + max_session_duration  = 3600
      + name                  = "eks-cluster-autoscaler"
      + name_prefix           = (known after apply)
      + path                  = "/"
      + role_last_used        = (known after apply)
      + tags_all              = (known after apply)
      + unique_id             = (known after apply)

      + inline_policy (known after apply)
    }

  # aws_iam_role.nodes will be created
  + resource "aws_iam_role" "nodes" {
      + arn                   = (known after apply)
      + assume_role_policy    = jsonencode(
            {
              + Statement = [
                  + {
                      + Action    = "sts:AssumeRole"
                      + Effect    = "Allow"
                      + Principal = {
                          + Service = "ec2.amazonaws.com"
                        }
                    },
                ]
              + Version   = "2012-10-17"
            }
        )
      + create_date           = (known after apply)
      + force_detach_policies = false
      + id                    = (known after apply)
      + managed_policy_arns   = (known after apply)
      + max_session_duration  = 3600
      + name                  = "eks-node-group-nodes"
      + name_prefix           = (known after apply)
      + path                  = "/"
      + role_last_used        = (known after apply)
      + tags_all              = (known after apply)
      + unique_id             = (known after apply)

      + inline_policy (known after apply)
    }

  # aws_iam_role_policy_attachment.demo-AmazonEKSClusterPolicy will be created
  + resource "aws_iam_role_policy_attachment" "demo-AmazonEKSClusterPolicy" {
      + id         = (known after apply)
      + policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
      + role       = "ashish"
    }

  # aws_iam_role_policy_attachment.eks_cluster_autoscaler_attach will be created
  + resource "aws_iam_role_policy_attachment" "eks_cluster_autoscaler_attach" {
      + id         = (known after apply)
      + policy_arn = (known after apply)
      + role       = "eks-cluster-autoscaler"
    }

  # aws_iam_role_policy_attachment.nodes-AmazonEC2ContainerRegistryReadOnly will be created
  + resource "aws_iam_role_policy_attachment" "nodes-AmazonEC2ContainerRegistryReadOnly" {
      + id         = (known after apply)
      + policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
      + role       = "eks-node-group-nodes"
    }

  # aws_iam_role_policy_attachment.nodes-AmazonEKSWorkerNodePolicy will be created
  + resource "aws_iam_role_policy_attachment" "nodes-AmazonEKSWorkerNodePolicy" {
      + id         = (known after apply)
      + policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
      + role       = "eks-node-group-nodes"
    }

  # aws_iam_role_policy_attachment.nodes-AmazonEKS_CNI_Policy will be created
  + resource "aws_iam_role_policy_attachment" "nodes-AmazonEKS_CNI_Policy" {
      + id         = (known after apply)
      + policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
      + role       = "eks-node-group-nodes"
    }

  # aws_internet_gateway.myvpc-igw will be created
  + resource "aws_internet_gateway" "myvpc-igw" {
      + arn      = (known after apply)
      + id       = (known after apply)
      + owner_id = (known after apply)
      + tags     = {
          + "Name" = "myvpc-igw"
        }
      + tags_all = {
          + "Name" = "myvpc-igw"
        }
      + vpc_id   = (known after apply)
    }

  # aws_nat_gateway.k8s-nat will be created
  + resource "aws_nat_gateway" "k8s-nat" {
      + allocation_id        = (known after apply)
      + association_id       = (known after apply)
      + connectivity_type    = "public"
      + id                   = (known after apply)
      + network_interface_id = (known after apply)
      + private_ip           = (known after apply)
      + public_ip            = (known after apply)
      + subnet_id            = (known after apply)
      + tags                 = {
          + "Name" = "k8s-nat"
        }
      + tags_all             = {
          + "Name" = "k8s-nat"
        }
    }

  # aws_route_table.private will be created
  + resource "aws_route_table" "private" {
      + arn              = (known after apply)
      + id               = (known after apply)
      + owner_id         = (known after apply)
      + propagating_vgws = (known after apply)
      + route            = [
          + {
              + cidr_block                 = "0.0.0.0/0"
              + nat_gateway_id             = (known after apply)
                # (12 unchanged attributes hidden)
            },
        ]
      + tags             = {
          + "Name" = "private"
        }
      + tags_all         = {
          + "Name" = "private"
        }
      + vpc_id           = (known after apply)
    }

  # aws_route_table.public will be created
  + resource "aws_route_table" "public" {
      + arn              = (known after apply)
      + id               = (known after apply)
      + owner_id         = (known after apply)
      + propagating_vgws = (known after apply)
      + route            = [
          + {
              + cidr_block                 = "0.0.0.0/0"
              + gateway_id                 = (known after apply)
                # (12 unchanged attributes hidden)
            },
        ]
      + tags             = {
          + "Name" = "public"
        }
      + tags_all         = {
          + "Name" = "public"
        }
      + vpc_id           = (known after apply)
    }

  # aws_route_table_association.private-us-east-1a will be created
  + resource "aws_route_table_association" "private-us-east-1a" {
      + id             = (known after apply)
      + route_table_id = (known after apply)
      + subnet_id      = (known after apply)
    }

  # aws_route_table_association.private-us-east-1b will be created
  + resource "aws_route_table_association" "private-us-east-1b" {
      + id             = (known after apply)
      + route_table_id = (known after apply)
      + subnet_id      = (known after apply)
    }

  # aws_route_table_association.public-us-east-1a will be created
  + resource "aws_route_table_association" "public-us-east-1a" {
      + id             = (known after apply)
      + route_table_id = (known after apply)
      + subnet_id      = (known after apply)
    }

  # aws_route_table_association.public-us-east-1b will be created
  + resource "aws_route_table_association" "public-us-east-1b" {
      + id             = (known after apply)
      + route_table_id = (known after apply)
      + subnet_id      = (known after apply)
    }

  # aws_subnet.private-us-east-1a will be created
  + resource "aws_subnet" "private-us-east-1a" {
      + arn                                            = (known after apply)
      + assign_ipv6_address_on_creation                = false
      + availability_zone                              = "us-east-1a"
      + availability_zone_id                           = (known after apply)
      + cidr_block                                     = "10.0.1.0/24"
      + enable_dns64                                   = false
      + enable_resource_name_dns_a_record_on_launch    = false
      + enable_resource_name_dns_aaaa_record_on_launch = false
      + id                                             = (known after apply)
      + ipv6_cidr_block_association_id                 = (known after apply)
      + ipv6_native                                    = false
      + map_public_ip_on_launch                        = false
      + owner_id                                       = (known after apply)
      + private_dns_hostname_type_on_launch            = (known after apply)
      + tags                                           = {
          + "Name"                            = "private-us-east-1a"
          + "kubernetes.io/cluster/demo"      = "owned"
          + "kubernetes.io/role/internal-elb" = "1"
        }
      + tags_all                                       = {
          + "Name"                            = "private-us-east-1a"
          + "kubernetes.io/cluster/demo"      = "owned"
          + "kubernetes.io/role/internal-elb" = "1"
        }
      + vpc_id                                         = (known after apply)
    }

  # aws_subnet.private-us-east-1b will be created
  + resource "aws_subnet" "private-us-east-1b" {
      + arn                                            = (known after apply)
      + assign_ipv6_address_on_creation                = false
      + availability_zone                              = "us-east-1b"
      + availability_zone_id                           = (known after apply)
      + cidr_block                                     = "10.0.2.0/24"
      + enable_dns64                                   = false
      + enable_resource_name_dns_a_record_on_launch    = false
      + enable_resource_name_dns_aaaa_record_on_launch = false
      + id                                             = (known after apply)
      + ipv6_cidr_block_association_id                 = (known after apply)
      + ipv6_native                                    = false
      + map_public_ip_on_launch                        = false
      + owner_id                                       = (known after apply)
      + private_dns_hostname_type_on_launch            = (known after apply)
      + tags                                           = {
          + "Name"                            = "private-us-east-1b"
          + "kubernetes.io/cluster/demo"      = "owned"
          + "kubernetes.io/role/internal-elb" = "1"
        }
      + tags_all                                       = {
          + "Name"                            = "private-us-east-1b"
          + "kubernetes.io/cluster/demo"      = "owned"
          + "kubernetes.io/role/internal-elb" = "1"
        }
      + vpc_id                                         = (known after apply)
    }

  # aws_subnet.public-us-east-1a will be created
  + resource "aws_subnet" "public-us-east-1a" {
      + arn                                            = (known after apply)
      + assign_ipv6_address_on_creation                = false
      + availability_zone                              = "us-east-1a"
      + availability_zone_id                           = (known after apply)
      + cidr_block                                     = "10.0.3.0/24"
      + enable_dns64                                   = false
      + enable_resource_name_dns_a_record_on_launch    = false
      + enable_resource_name_dns_aaaa_record_on_launch = false
      + id                                             = (known after apply)
      + ipv6_cidr_block_association_id                 = (known after apply)
      + ipv6_native                                    = false
      + map_public_ip_on_launch                        = true
      + owner_id                                       = (known after apply)
      + private_dns_hostname_type_on_launch            = (known after apply)
      + tags                                           = {
          + "Name"                       = "public-us-east-1a"
          + "kubernetes.io/cluster/demo" = "owned"
          + "kubernetes.io/role/elb"     = "1"
        }
      + tags_all                                       = {
          + "Name"                       = "public-us-east-1a"
          + "kubernetes.io/cluster/demo" = "owned"
          + "kubernetes.io/role/elb"     = "1"
        }
      + vpc_id                                         = (known after apply)
    }

  # aws_subnet.public-us-east-1b will be created
  + resource "aws_subnet" "public-us-east-1b" {
      + arn                                            = (known after apply)
      + assign_ipv6_address_on_creation                = false
      + availability_zone                              = "us-east-1b"
      + availability_zone_id                           = (known after apply)
      + cidr_block                                     = "10.0.4.0/24"
      + enable_dns64                                   = false
      + enable_resource_name_dns_a_record_on_launch    = false
      + enable_resource_name_dns_aaaa_record_on_launch = false
      + id                                             = (known after apply)
      + ipv6_cidr_block_association_id                 = (known after apply)
      + ipv6_native                                    = false
      + map_public_ip_on_launch                        = true
      + owner_id                                       = (known after apply)
      + private_dns_hostname_type_on_launch            = (known after apply)
      + tags                                           = {
          + "Name"                       = "public-us-east-1b"
          + "kubernetes.io/cluster/demo" = "owned"
          + "kubernetes.io/role/elb"     = "1"
        }
      + tags_all                                       = {
          + "Name"                       = "public-us-east-1b"
          + "kubernetes.io/cluster/demo" = "owned"
          + "kubernetes.io/role/elb"     = "1"
        }
      + vpc_id                                         = (known after apply)
    }

  # aws_vpc.myvpc will be created
  + resource "aws_vpc" "myvpc" {
      + arn                                  = (known after apply)
      + cidr_block                           = "10.0.0.0/16"
      + default_network_acl_id               = (known after apply)
      + default_route_table_id               = (known after apply)
      + default_security_group_id            = (known after apply)
      + dhcp_options_id                      = (known after apply)
      + enable_classiclink                   = (known after apply)
      + enable_classiclink_dns_support       = (known after apply)
      + enable_dns_hostnames                 = (known after apply)
      + enable_dns_support                   = true
      + enable_network_address_usage_metrics = (known after apply)
      + id                                   = (known after apply)
      + instance_tenancy                     = "default"
      + ipv6_association_id                  = (known after apply)
      + ipv6_cidr_block                      = (known after apply)
      + ipv6_cidr_block_network_border_group = (known after apply)
      + main_route_table_id                  = (known after apply)
      + owner_id                             = (known after apply)
      + tags                                 = {
          + "Name" = "myvpc"
        }
      + tags_all                             = {
          + "Name" = "myvpc"
        }
    }

Plan: 26 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + eks_cluster_autoscaler_arn = (known after apply)

─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
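
&lt;p&gt;As the note at the end of the plan output suggests, you can pin the exact set of actions by saving the plan with &lt;code&gt;-out&lt;/code&gt; and then applying that saved file. A minimal sketch — the &lt;code&gt;tfplan&lt;/code&gt; filename is just an example, any path works:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Save the plan to a file, then apply exactly that plan.
# Applying a saved plan skips the interactive approval prompt,
# so Terraform performs precisely the actions shown above.
terraform plan -out=tfplan
terraform apply tfplan
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;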



&lt;p&gt;Next, run &lt;code&gt;terraform apply&lt;/code&gt; to create the resources. Terraform regenerates the same plan and prompts for confirmation before making any changes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@ip-172-31-6-151 eks_terraform]# terraform apply

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create
 &amp;lt;= read (data resources)

Terraform will perform the following actions:

  # data.aws_iam_policy_document.eks_cluster_autoscaler_assume_role_policy will be read during apply
  # (config refers to values not yet known)
 &amp;lt;= data "aws_iam_policy_document" "eks_cluster_autoscaler_assume_role_policy" {
      + id   = (known after apply)
      + json = (known after apply)

      + statement {
          + actions = [
              + "sts:AssumeRoleWithWebIdentity",
            ]
          + effect  = "Allow"

          + condition {
              + test     = "StringEquals"
              + values   = [
                  + "system:serviceaccount:kube-system:cluster-autoscaler",
                ]
              + variable = (known after apply)
            }

          + principals {
              + identifiers = [
                  + (known after apply),
                ]
              + type        = "Federated"
            }
        }
    }

  # data.tls_certificate.eks will be read during apply
  # (config refers to values not yet known)
 &amp;lt;= data "tls_certificate" "eks" {
      + certificates = (known after apply)
      + id           = (known after apply)
      + url          = (known after apply)
    }

  # aws_eip.nat will be created
  + resource "aws_eip" "nat" {
      + allocation_id        = (known after apply)
      + association_id       = (known after apply)
      + carrier_ip           = (known after apply)
      + customer_owned_ip    = (known after apply)
      + domain               = (known after apply)
      + id                   = (known after apply)
      + instance             = (known after apply)
      + network_border_group = (known after apply)
      + network_interface    = (known after apply)
      + private_dns          = (known after apply)
      + private_ip           = (known after apply)
      + public_dns           = (known after apply)
      + public_ip            = (known after apply)
      + public_ipv4_pool     = (known after apply)
      + tags                 = {
          + "Name" = "nat"
        }
      + tags_all             = {
          + "Name" = "nat"
        }
      + vpc                  = true
    }

  # aws_eks_cluster.demo will be created
  + resource "aws_eks_cluster" "demo" {
      + arn                   = (known after apply)
      + certificate_authority = (known after apply)
      + cluster_id            = (known after apply)
      + created_at            = (known after apply)
      + endpoint              = (known after apply)
      + id                    = (known after apply)
      + identity              = (known after apply)
      + name                  = "ashish"
      + platform_version      = (known after apply)
      + role_arn              = (known after apply)
      + status                = (known after apply)
      + tags_all              = (known after apply)
      + version               = "1.31"

      + kubernetes_network_config (known after apply)

      + vpc_config {
          + cluster_security_group_id = (known after apply)
          + endpoint_private_access   = false
          + endpoint_public_access    = true
          + public_access_cidrs       = (known after apply)
          + subnet_ids                = (known after apply)
          + vpc_id                    = (known after apply)
        }
    }

  # aws_eks_node_group.private-nodes will be created
  + resource "aws_eks_node_group" "private-nodes" {
      + ami_type               = (known after apply)
      + arn                    = (known after apply)
      + capacity_type          = "ON_DEMAND"
      + cluster_name           = "ashish"
      + disk_size              = (known after apply)
      + id                     = (known after apply)
      + instance_types         = [
          + "t2.medium",
        ]
      + labels                 = {
          + "node" = "kubenode02"
        }
      + node_group_name        = "private-nodes"
      + node_group_name_prefix = (known after apply)
      + node_role_arn          = (known after apply)
      + release_version        = (known after apply)
      + resources              = (known after apply)
      + status                 = (known after apply)
      + subnet_ids             = (known after apply)
      + tags_all               = (known after apply)
      + version                = (known after apply)

      + scaling_config {
          + desired_size = 1
          + max_size     = 10
          + min_size     = 0
        }

      + update_config {
          + max_unavailable = 1
        }
    }

  # aws_iam_openid_connect_provider.eks will be created
  + resource "aws_iam_openid_connect_provider" "eks" {
      + arn             = (known after apply)
      + client_id_list  = [
          + "sts.amazonaws.com",
        ]
      + id              = (known after apply)
      + tags_all        = (known after apply)
      + thumbprint_list = (known after apply)
      + url             = (known after apply)
    }

  # aws_iam_policy.eks_cluster_autoscaler will be created
  + resource "aws_iam_policy" "eks_cluster_autoscaler" {
      + arn         = (known after apply)
      + id          = (known after apply)
      + name        = "eks-cluster-autoscaler"
      + name_prefix = (known after apply)
      + path        = "/"
      + policy      = jsonencode(
            {
              + Statement = [
                  + {
                      + Action   = [
                          + "autoscaling:DescribeAutoScalingGroups",
                          + "autoscaling:DescribeAutoScalingInstances",
                          + "autoscaling:DescribeLaunchConfigurations",
                          + "autoscaling:DescribeTags",
                          + "autoscaling:SetDesiredCapacity",
                          + "autoscaling:TerminateInstanceInAutoScalingGroup",
                          + "ec2:DescribeLaunchTemplateVersions",
                        ]
                      + Effect   = "Allow"
                      + Resource = "*"
                    },
                ]
              + Version   = "2012-10-17"
            }
        )
      + policy_id   = (known after apply)
      + tags_all    = (known after apply)
    }

  # aws_iam_role.demo will be created
  + resource "aws_iam_role" "demo" {
      + arn                   = (known after apply)
      + assume_role_policy    = jsonencode(
            {
              + Statement = [
                  + {
                      + Action    = "sts:AssumeRole"
                      + Effect    = "Allow"
                      + Principal = {
                          + Service = [
                              + "eks.amazonaws.com",
                            ]
                        }
                    },
                ]
              + Version   = "2012-10-17"
            }
        )
      + create_date           = (known after apply)
      + force_detach_policies = false
      + id                    = (known after apply)
      + managed_policy_arns   = (known after apply)
      + max_session_duration  = 3600
      + name                  = "ashish"
      + name_prefix           = (known after apply)
      + path                  = "/"
      + role_last_used        = (known after apply)
      + tags                  = {
          + "tag-key" = "ashish"
        }
      + tags_all              = {
          + "tag-key" = "ashish"
        }
      + unique_id             = (known after apply)

      + inline_policy (known after apply)
    }

  # aws_iam_role.eks_cluster_autoscaler will be created
  + resource "aws_iam_role" "eks_cluster_autoscaler" {
      + arn                   = (known after apply)
      + assume_role_policy    = (known after apply)
      + create_date           = (known after apply)
      + force_detach_policies = false
      + id                    = (known after apply)
      + managed_policy_arns   = (known after apply)
      + max_session_duration  = 3600
      + name                  = "eks-cluster-autoscaler"
      + name_prefix           = (known after apply)
      + path                  = "/"
      + role_last_used        = (known after apply)
      + tags_all              = (known after apply)
      + unique_id             = (known after apply)

      + inline_policy (known after apply)
    }

  # aws_iam_role.nodes will be created
  + resource "aws_iam_role" "nodes" {
      + arn                   = (known after apply)
      + assume_role_policy    = jsonencode(
            {
              + Statement = [
                  + {
                      + Action    = "sts:AssumeRole"
                      + Effect    = "Allow"
                      + Principal = {
                          + Service = "ec2.amazonaws.com"
                        }
                    },
                ]
              + Version   = "2012-10-17"
            }
        )
      + create_date           = (known after apply)
      + force_detach_policies = false
      + id                    = (known after apply)
      + managed_policy_arns   = (known after apply)
      + max_session_duration  = 3600
      + name                  = "eks-node-group-nodes"
      + name_prefix           = (known after apply)
      + path                  = "/"
      + role_last_used        = (known after apply)
      + tags_all              = (known after apply)
      + unique_id             = (known after apply)

      + inline_policy (known after apply)
    }

  # aws_iam_role_policy_attachment.demo-AmazonEKSClusterPolicy will be created
  + resource "aws_iam_role_policy_attachment" "demo-AmazonEKSClusterPolicy" {
      + id         = (known after apply)
      + policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
      + role       = "ashish"
    }

  # aws_iam_role_policy_attachment.eks_cluster_autoscaler_attach will be created
  + resource "aws_iam_role_policy_attachment" "eks_cluster_autoscaler_attach" {
      + id         = (known after apply)
      + policy_arn = (known after apply)
      + role       = "eks-cluster-autoscaler"
    }

  # aws_iam_role_policy_attachment.nodes-AmazonEC2ContainerRegistryReadOnly will be created
  + resource "aws_iam_role_policy_attachment" "nodes-AmazonEC2ContainerRegistryReadOnly" {
      + id         = (known after apply)
      + policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
      + role       = "eks-node-group-nodes"
    }

  # aws_iam_role_policy_attachment.nodes-AmazonEKSWorkerNodePolicy will be created
  + resource "aws_iam_role_policy_attachment" "nodes-AmazonEKSWorkerNodePolicy" {
      + id         = (known after apply)
      + policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
      + role       = "eks-node-group-nodes"
    }

  # aws_iam_role_policy_attachment.nodes-AmazonEKS_CNI_Policy will be created
  + resource "aws_iam_role_policy_attachment" "nodes-AmazonEKS_CNI_Policy" {
      + id         = (known after apply)
      + policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
      + role       = "eks-node-group-nodes"
    }

  # aws_internet_gateway.myvpc-igw will be created
  + resource "aws_internet_gateway" "myvpc-igw" {
      + arn      = (known after apply)
      + id       = (known after apply)
      + owner_id = (known after apply)
      + tags     = {
          + "Name" = "myvpc-igw"
        }
      + tags_all = {
          + "Name" = "myvpc-igw"
        }
      + vpc_id   = (known after apply)
    }

  # aws_nat_gateway.k8s-nat will be created
  + resource "aws_nat_gateway" "k8s-nat" {
      + allocation_id        = (known after apply)
      + association_id       = (known after apply)
      + connectivity_type    = "public"
      + id                   = (known after apply)
      + network_interface_id = (known after apply)
      + private_ip           = (known after apply)
      + public_ip            = (known after apply)
      + subnet_id            = (known after apply)
      + tags                 = {
          + "Name" = "k8s-nat"
        }
      + tags_all             = {
          + "Name" = "k8s-nat"
        }
    }

  # aws_route_table.private will be created
  + resource "aws_route_table" "private" {
      + arn              = (known after apply)
      + id               = (known after apply)
      + owner_id         = (known after apply)
      + propagating_vgws = (known after apply)
      + route            = [
          + {
              + cidr_block                 = "0.0.0.0/0"
              + nat_gateway_id             = (known after apply)
                # (12 unchanged attributes hidden)
            },
        ]
      + tags             = {
          + "Name" = "private"
        }
      + tags_all         = {
          + "Name" = "private"
        }
      + vpc_id           = (known after apply)
    }

  # aws_route_table.public will be created
  + resource "aws_route_table" "public" {
      + arn              = (known after apply)
      + id               = (known after apply)
      + owner_id         = (known after apply)
      + propagating_vgws = (known after apply)
      + route            = [
          + {
              + cidr_block                 = "0.0.0.0/0"
              + gateway_id                 = (known after apply)
                # (12 unchanged attributes hidden)
            },
        ]
      + tags             = {
          + "Name" = "public"
        }
      + tags_all         = {
          + "Name" = "public"
        }
      + vpc_id           = (known after apply)
    }

  # aws_route_table_association.private-us-east-1a will be created
  + resource "aws_route_table_association" "private-us-east-1a" {
      + id             = (known after apply)
      + route_table_id = (known after apply)
      + subnet_id      = (known after apply)
    }

  # aws_route_table_association.private-us-east-1b will be created
  + resource "aws_route_table_association" "private-us-east-1b" {
      + id             = (known after apply)
      + route_table_id = (known after apply)
      + subnet_id      = (known after apply)
    }

  # aws_route_table_association.public-us-east-1a will be created
  + resource "aws_route_table_association" "public-us-east-1a" {
      + id             = (known after apply)
      + route_table_id = (known after apply)
      + subnet_id      = (known after apply)
    }

  # aws_route_table_association.public-us-east-1b will be created
  + resource "aws_route_table_association" "public-us-east-1b" {
      + id             = (known after apply)
      + route_table_id = (known after apply)
      + subnet_id      = (known after apply)
    }

  # aws_subnet.private-us-east-1a will be created
  + resource "aws_subnet" "private-us-east-1a" {
      + arn                                            = (known after apply)
      + assign_ipv6_address_on_creation                = false
      + availability_zone                              = "us-east-1a"
      + availability_zone_id                           = (known after apply)
      + cidr_block                                     = "10.0.1.0/24"
      + enable_dns64                                   = false
      + enable_resource_name_dns_a_record_on_launch    = false
      + enable_resource_name_dns_aaaa_record_on_launch = false
      + id                                             = (known after apply)
      + ipv6_cidr_block_association_id                 = (known after apply)
      + ipv6_native                                    = false
      + map_public_ip_on_launch                        = false
      + owner_id                                       = (known after apply)
      + private_dns_hostname_type_on_launch            = (known after apply)
      + tags                                           = {
          + "Name"                            = "private-us-east-1a"
          + "kubernetes.io/cluster/demo"      = "owned"
          + "kubernetes.io/role/internal-elb" = "1"
        }
      + tags_all                                       = {
          + "Name"                            = "private-us-east-1a"
          + "kubernetes.io/cluster/demo"      = "owned"
          + "kubernetes.io/role/internal-elb" = "1"
        }
      + vpc_id                                         = (known after apply)
    }

  # aws_subnet.private-us-east-1b will be created
  + resource "aws_subnet" "private-us-east-1b" {
      + arn                                            = (known after apply)
      + assign_ipv6_address_on_creation                = false
      + availability_zone                              = "us-east-1b"
      + availability_zone_id                           = (known after apply)
      + cidr_block                                     = "10.0.2.0/24"
      + enable_dns64                                   = false
      + enable_resource_name_dns_a_record_on_launch    = false
      + enable_resource_name_dns_aaaa_record_on_launch = false
      + id                                             = (known after apply)
      + ipv6_cidr_block_association_id                 = (known after apply)
      + ipv6_native                                    = false
      + map_public_ip_on_launch                        = false
      + owner_id                                       = (known after apply)
      + private_dns_hostname_type_on_launch            = (known after apply)
      + tags                                           = {
          + "Name"                            = "private-us-east-1b"
          + "kubernetes.io/cluster/demo"      = "owned"
          + "kubernetes.io/role/internal-elb" = "1"
        }
      + tags_all                                       = {
          + "Name"                            = "private-us-east-1b"
          + "kubernetes.io/cluster/demo"      = "owned"
          + "kubernetes.io/role/internal-elb" = "1"
        }
      + vpc_id                                         = (known after apply)
    }

  # aws_subnet.public-us-east-1a will be created
  + resource "aws_subnet" "public-us-east-1a" {
      + arn                                            = (known after apply)
      + assign_ipv6_address_on_creation                = false
      + availability_zone                              = "us-east-1a"
      + availability_zone_id                           = (known after apply)
      + cidr_block                                     = "10.0.3.0/24"
      + enable_dns64                                   = false
      + enable_resource_name_dns_a_record_on_launch    = false
      + enable_resource_name_dns_aaaa_record_on_launch = false
      + id                                             = (known after apply)
      + ipv6_cidr_block_association_id                 = (known after apply)
      + ipv6_native                                    = false
      + map_public_ip_on_launch                        = true
      + owner_id                                       = (known after apply)
      + private_dns_hostname_type_on_launch            = (known after apply)
      + tags                                           = {
          + "Name"                       = "public-us-east-1a"
          + "kubernetes.io/cluster/demo" = "owned"
          + "kubernetes.io/role/elb"     = "1"
        }
      + tags_all                                       = {
          + "Name"                       = "public-us-east-1a"
          + "kubernetes.io/cluster/demo" = "owned"
          + "kubernetes.io/role/elb"     = "1"
        }
      + vpc_id                                         = (known after apply)
    }

  # aws_subnet.public-us-east-1b will be created
  + resource "aws_subnet" "public-us-east-1b" {
      + arn                                            = (known after apply)
      + assign_ipv6_address_on_creation                = false
      + availability_zone                              = "us-east-1b"
      + availability_zone_id                           = (known after apply)
      + cidr_block                                     = "10.0.4.0/24"
      + enable_dns64                                   = false
      + enable_resource_name_dns_a_record_on_launch    = false
      + enable_resource_name_dns_aaaa_record_on_launch = false
      + id                                             = (known after apply)
      + ipv6_cidr_block_association_id                 = (known after apply)
      + ipv6_native                                    = false
      + map_public_ip_on_launch                        = true
      + owner_id                                       = (known after apply)
      + private_dns_hostname_type_on_launch            = (known after apply)
      + tags                                           = {
          + "Name"                       = "public-us-east-1b"
          + "kubernetes.io/cluster/demo" = "owned"
          + "kubernetes.io/role/elb"     = "1"
        }
      + tags_all                                       = {
          + "Name"                       = "public-us-east-1b"
          + "kubernetes.io/cluster/demo" = "owned"
          + "kubernetes.io/role/elb"     = "1"
        }
      + vpc_id                                         = (known after apply)
    }

  # aws_vpc.myvpc will be created
  + resource "aws_vpc" "myvpc" {
      + arn                                  = (known after apply)
      + cidr_block                           = "10.0.0.0/16"
      + default_network_acl_id               = (known after apply)
      + default_route_table_id               = (known after apply)
      + default_security_group_id            = (known after apply)
      + dhcp_options_id                      = (known after apply)
      + enable_classiclink                   = (known after apply)
      + enable_classiclink_dns_support       = (known after apply)
      + enable_dns_hostnames                 = (known after apply)
      + enable_dns_support                   = true
      + enable_network_address_usage_metrics = (known after apply)
      + id                                   = (known after apply)
      + instance_tenancy                     = "default"
      + ipv6_association_id                  = (known after apply)
      + ipv6_cidr_block                      = (known after apply)
      + ipv6_cidr_block_network_border_group = (known after apply)
      + main_route_table_id                  = (known after apply)
      + owner_id                             = (known after apply)
      + tags                                 = {
          + "Name" = "myvpc"
        }
      + tags_all                             = {
          + "Name" = "myvpc"
        }
    }

Plan: 26 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + eks_cluster_autoscaler_arn = (known after apply)

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_eip.nat: Creating...
aws_vpc.myvpc: Creating...
aws_iam_policy.eks_cluster_autoscaler: Creating...
aws_iam_role.demo: Creating...
aws_iam_role.nodes: Creating...
aws_iam_policy.eks_cluster_autoscaler: Creation complete after 0s [id=arn:aws:iam::256050093938:policy/eks-cluster-autoscaler]
aws_iam_role.nodes: Creation complete after 0s [id=eks-node-group-nodes]
aws_iam_role_policy_attachment.nodes-AmazonEC2ContainerRegistryReadOnly: Creating...
aws_iam_role_policy_attachment.nodes-AmazonEKS_CNI_Policy: Creating...
aws_iam_role_policy_attachment.nodes-AmazonEKSWorkerNodePolicy: Creating...
aws_iam_role.demo: Creation complete after 0s [id=ashish]
aws_iam_role_policy_attachment.demo-AmazonEKSClusterPolicy: Creating...
aws_iam_role_policy_attachment.nodes-AmazonEKS_CNI_Policy: Creation complete after 0s [id=eks-node-group-nodes-20250610082751337600000001]
aws_eip.nat: Creation complete after 1s [id=eipalloc-0eea3bf78b492fbfd]
aws_iam_role_policy_attachment.nodes-AmazonEKSWorkerNodePolicy: Creation complete after 1s [id=eks-node-group-nodes-20250610082751375100000002]
aws_iam_role_policy_attachment.demo-AmazonEKSClusterPolicy: Creation complete after 1s [id=ashish-20250610082751453900000003]
aws_iam_role_policy_attachment.nodes-AmazonEC2ContainerRegistryReadOnly: Creation complete after 1s [id=eks-node-group-nodes-20250610082751582900000004]
aws_vpc.myvpc: Creation complete after 2s [id=vpc-0ba03a84ccfd83d30]
aws_subnet.private-us-east-1a: Creating...
aws_subnet.public-us-east-1a: Creating...
aws_internet_gateway.myvpc-igw: Creating...
aws_subnet.public-us-east-1b: Creating...
aws_subnet.private-us-east-1b: Creating...
aws_internet_gateway.myvpc-igw: Creation complete after 0s [id=igw-00d4e76abce23a7bd]
aws_route_table.public: Creating...
aws_subnet.private-us-east-1a: Creation complete after 0s [id=subnet-0a24ca86181eef50c]
aws_subnet.private-us-east-1b: Creation complete after 1s [id=subnet-040a887feb7b2af36]
aws_route_table.public: Creation complete after 2s [id=rtb-0f1236cd61c6b3915]
aws_subnet.public-us-east-1a: Still creating... [00m10s elapsed]
aws_subnet.public-us-east-1b: Still creating... [00m10s elapsed]
aws_subnet.public-us-east-1b: Creation complete after 11s [id=subnet-0ff0bdd792e4a95cb]
aws_route_table_association.public-us-east-1b: Creating...
aws_route_table_association.public-us-east-1b: Creation complete after 1s [id=rtbassoc-0e840b6fa06c1c731]
aws_subnet.public-us-east-1a: Creation complete after 12s [id=subnet-00a93e051039f58ee]
aws_route_table_association.public-us-east-1a: Creating...
aws_nat_gateway.k8s-nat: Creating...
aws_eks_cluster.demo: Creating...
aws_route_table_association.public-us-east-1a: Creation complete after 1s [id=rtbassoc-0cd739f5e147182e9]
aws_nat_gateway.k8s-nat: Still creating... [00m10s elapsed]
aws_eks_cluster.demo: Still creating... [00m10s elapsed]
aws_nat_gateway.k8s-nat: Still creating... [00m20s elapsed]
aws_eks_cluster.demo: Still creating... [00m20s elapsed]
aws_nat_gateway.k8s-nat: Still creating... [00m30s elapsed]
aws_eks_cluster.demo: Still creating... [00m30s elapsed]
aws_nat_gateway.k8s-nat: Still creating... [00m40s elapsed]
aws_eks_cluster.demo: Still creating... [00m40s elapsed]
aws_eks_cluster.demo: Still creating... [00m50s elapsed]
aws_nat_gateway.k8s-nat: Still creating... [00m50s elapsed]
aws_eks_cluster.demo: Still creating... [01m00s elapsed]
aws_nat_gateway.k8s-nat: Still creating... [01m00s elapsed]
aws_nat_gateway.k8s-nat: Still creating... [01m10s elapsed]
aws_eks_cluster.demo: Still creating... [01m10s elapsed]
aws_eks_cluster.demo: Still creating... [01m20s elapsed]
aws_nat_gateway.k8s-nat: Still creating... [01m20s elapsed]
aws_nat_gateway.k8s-nat: Still creating... [01m30s elapsed]
aws_eks_cluster.demo: Still creating... [01m30s elapsed]
aws_nat_gateway.k8s-nat: Still creating... [01m40s elapsed]
aws_eks_cluster.demo: Still creating... [01m40s elapsed]
aws_nat_gateway.k8s-nat: Creation complete after 1m45s [id=nat-01f8aa45e5dd791b0]
aws_route_table.private: Creating...
aws_route_table.private: Creation complete after 1s [id=rtb-0541bac1f61ca686b]
aws_route_table_association.private-us-east-1a: Creating...
aws_route_table_association.private-us-east-1b: Creating...
aws_route_table_association.private-us-east-1b: Creation complete after 1s [id=rtbassoc-05822d5b565210dcc]
aws_eks_cluster.demo: Still creating... [01m50s elapsed]
aws_route_table_association.private-us-east-1a: Still creating... [00m10s elapsed]
aws_eks_cluster.demo: Still creating... [02m00s elapsed]
aws_route_table_association.private-us-east-1a: Creation complete after 14s [id=rtbassoc-035354c28db8e553c]
aws_eks_cluster.demo: Still creating... [02m10s elapsed]
aws_eks_cluster.demo: Still creating... [02m20s elapsed]
aws_eks_cluster.demo: Still creating... [04m00s elapsed]
aws_eks_cluster.demo: Still creating... [06m50s elapsed]
aws_eks_cluster.demo: Creation complete after 6m54s [id=ashish]
data.tls_certificate.eks: Reading...
aws_eks_node_group.private-nodes: Creating...
data.tls_certificate.eks: Read complete after 0s [id=922877a0975ad078a65b8ff11ebc47b8311945c7]
aws_iam_openid_connect_provider.eks: Creating...
aws_iam_openid_connect_provider.eks: Creation complete after 1s [id=arn:aws:iam::256050093938:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/E07152AC5B9A7239FB346A9681C1994E]
data.aws_iam_policy_document.eks_cluster_autoscaler_assume_role_policy: Reading...
data.aws_iam_policy_document.eks_cluster_autoscaler_assume_role_policy: Read complete after 0s [id=119306707]
aws_iam_role.eks_cluster_autoscaler: Creating...
aws_iam_role.eks_cluster_autoscaler: Creation complete after 0s [id=eks-cluster-autoscaler]
aws_iam_role_policy_attachment.eks_cluster_autoscaler_attach: Creating...
aws_iam_role_policy_attachment.eks_cluster_autoscaler_attach: Creation complete after 0s [id=eks-cluster-autoscaler-20250610083500259000000007]
aws_eks_node_group.private-nodes: Still creating... [00m10s elapsed]
aws_eks_node_group.private-nodes: Still creating... [00m20s elapsed]
aws_eks_node_group.private-nodes: Still creating... [00m30s elapsed]
aws_eks_node_group.private-nodes: Still creating... [00m40s elapsed]
aws_eks_node_group.private-nodes: Still creating... [00m50s elapsed]
aws_eks_node_group.private-nodes: Still creating... [01m00s elapsed]
aws_eks_node_group.private-nodes: Still creating... [01m10s elapsed]
aws_eks_node_group.private-nodes: Still creating... [01m20s elapsed]
aws_eks_node_group.private-nodes: Still creating... [01m30s elapsed]
aws_eks_node_group.private-nodes: Still creating... [01m40s elapsed]
aws_eks_node_group.private-nodes: Still creating... [01m50s elapsed]
aws_eks_node_group.private-nodes: Creation complete after 1m57s [id=ashish:private-nodes]

Apply complete! Resources: 26 added, 0 changed, 0 destroyed.

Outputs:

eks_cluster_autoscaler_arn = "arn:aws:iam::256050093938:role/eks-cluster-autoscaler"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now update your system's kubeconfig with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws eks --region us-east-1 update-kubeconfig --name ashish
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Output:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ec2-user@ip-172-31-6-151 ~]$ aws eks --region us-east-1 update-kubeconfig --name ashish
Added new context arn:aws:eks:us-east-1:256050093938:cluster/ashish to /home/ec2-user/.kube/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To verify that the EKS cluster is up, list the pods across all namespaces:&lt;br&gt;
&lt;code&gt;kubectl get po -A&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ec2-user@ip-172-31-6-151 ~]$ kubectl get po -A
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   aws-node-jppdr             2/2     Running   0          6m53s
kube-system   coredns-789f8477df-lgzbp   1/1     Running   0          8m43s
kube-system   coredns-789f8477df-lw56r   1/1     Running   0          8m43s
kube-system   kube-proxy-mmvfk           1/1     Running   0          6m53s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, check the cluster services:&lt;br&gt;
&lt;code&gt;kubectl get svc&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ec2-user@ip-172-31-6-151 ~]$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   172.20.0.1   &amp;lt;none&amp;gt;        443/TCP   11m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create the &lt;code&gt;cluster-autoscler.yaml&lt;/code&gt; manifest and specify the ARN of the autoscaler IAM role in the ServiceAccount annotation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;`[ec2-user@ip-172-31-6-151 ~]$ cat cluster-autoscler.yaml`
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::&amp;lt;your-account-id&amp;gt;:role/eks-cluster-autoscaler
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-autoscaler
rules:
  - apiGroups: [""]
    resources: ["events", "endpoints"]
    verbs: ["create", "patch"]
  - apiGroups: [""]
    resources: ["pods/eviction"]
    verbs: ["create"]
  - apiGroups: [""]
    resources: ["pods/status"]
    verbs: ["update"]
  - apiGroups: [""]
    resources: ["endpoints"]
    resourceNames: ["cluster-autoscaler"]
    verbs: ["get", "update"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["watch", "list", "get", "update"]
  - apiGroups: [""]
    resources: ["namespaces", "pods", "services", "replicationcontrollers", "persistentvolumeclaims", "persistentvolumes"]
    verbs: ["watch", "list", "get"]
  - apiGroups: ["extensions"]
    resources: ["replicasets", "daemonsets"]
    verbs: ["watch", "list", "get"]
  - apiGroups: ["policy"]
    resources: ["poddisruptionbudgets"]
    verbs: ["watch", "list"]
  - apiGroups: ["apps"]
    resources: ["statefulsets", "replicasets", "daemonsets"]
    verbs: ["watch", "list", "get"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses", "csinodes", "csidrivers", "csistoragecapacities"]
    verbs: ["watch", "list", "get"]
  - apiGroups: ["batch", "extensions"]
    resources: ["jobs"]
    verbs: ["get", "list", "watch", "patch"]
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["create"]
  - apiGroups: ["coordination.k8s.io"]
    resourceNames: ["cluster-autoscaler"]
    resources: ["leases"]
    verbs: ["get", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cluster-autoscaler
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["create","list","watch"]
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["cluster-autoscaler-status", "cluster-autoscaler-priority-expander"]
    verbs: ["delete", "get", "update", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-autoscaler
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-autoscaler
subjects:
  - kind: ServiceAccount
    name: cluster-autoscaler
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cluster-autoscaler
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cluster-autoscaler
subjects:
  - kind: ServiceAccount
    name: cluster-autoscaler
    namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  labels:
    app: cluster-autoscaler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
    spec:
      serviceAccountName: cluster-autoscaler
      containers:
        - image: k8s.gcr.io/autoscaling/cluster-autoscaler:v1.21.0
          name: cluster-autoscaler
          resources:
            limits:
              cpu: 100m
              memory: 600Mi
            requests:
              cpu: 100m
              memory: 600Mi
          # https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md
          command:
            - ./cluster-autoscaler
            - --v=4
            - --stderrthreshold=info
            - --cloud-provider=aws
            - --skip-nodes-with-local-storage=false
            - --expander=least-waste
            - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/cluster-irsa # replace cluster-irsa with your cluster name
            - --balance-similar-node-groups
            - --skip-nodes-with-system-pods=false
          volumeMounts:
            - name: ssl-certs
              mountPath: /etc/ssl/certs/ca-certificates.crt
              readOnly: true
          imagePullPolicy: "Always"
      volumes:
        - name: ssl-certs
          hostPath:
            path: "/etc/ssl/certs/ca-bundle.crt"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the manifest and verify that the Cluster Autoscaler pod is running.&lt;br&gt;
&lt;strong&gt;Outputs:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ec2-user@ip-172-31-6-151 ~]$ kubectl apply -f cluster-autoscler.yaml
serviceaccount/cluster-autoscaler created
clusterrole.rbac.authorization.k8s.io/cluster-autoscaler created
role.rbac.authorization.k8s.io/cluster-autoscaler created
clusterrolebinding.rbac.authorization.k8s.io/cluster-autoscaler created
rolebinding.rbac.authorization.k8s.io/cluster-autoscaler created
deployment.apps/cluster-autoscaler created
[ec2-user@ip-172-31-6-151 ~]$ kubectl get po -A
NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
kube-system   aws-node-jppdr                        2/2     Running   0          15m
kube-system   cluster-autoscaler-6748b474f6-hc8n4   1/1     Running   0          7s
kube-system   coredns-789f8477df-lgzbp              1/1     Running   0          17m
kube-system   coredns-789f8477df-lw56r              1/1     Running   0          17m
kube-system   kube-proxy-mmvfk                      1/1     Running   0          15m
[ec2-user@ip-172-31-6-151 ~]$ kubectl get po -A
NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
kube-system   aws-node-jppdr                        2/2     Running   0          15m
kube-system   cluster-autoscaler-6748b474f6-hc8n4   1/1     Running   0          11s
kube-system   coredns-789f8477df-lgzbp              1/1     Running   0          17m
kube-system   coredns-789f8477df-lw56r              1/1     Running   0          17m
kube-system   kube-proxy-mmvfk                      1/1     Running   0          15m

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
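&lt;p&gt;To see the autoscaler in action, you can create a test workload and scale it past the current node capacity; the resulting Pending pods should trigger a scale-up. This is only a sketch — the deployment name &lt;code&gt;autoscaler-test&lt;/code&gt; and replica count are arbitrary:&lt;/p&gt;

```shell
# Hypothetical smoke test for the Cluster Autoscaler
# (deployment name and replica count are arbitrary).
kubectl create deployment autoscaler-test --image=nginx

# Request more replicas than the existing nodes can schedule;
# Pending pods should trigger the autoscaler to add nodes.
kubectl scale deployment autoscaler-test --replicas=20

# Follow the autoscaler's scale-up decisions in its logs.
kubectl -n kube-system logs -l app=cluster-autoscaler --tail=20

# Clean up; idle nodes are scaled back down after the cooldown.
kubectl delete deployment autoscaler-test
```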



&lt;p&gt;Verify the resources in the AWS console.&lt;br&gt;
&lt;strong&gt;VPC:&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvr0dn5r3fv9itf73s2s1.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvr0dn5r3fv9itf73s2s1.jpg" alt=" " width="800" height="256"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Subnets:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxmsu4ztiwcmdcpj37lzk.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxmsu4ztiwcmdcpj37lzk.jpg" alt=" " width="800" height="208"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;EKS Cluster&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9bmst373fex8dnihrw9w.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9bmst373fex8dnihrw9w.jpg" alt=" " width="800" height="145"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwsqq4az0rd46p6yk8t8e.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwsqq4az0rd46p6yk8t8e.jpg" alt=" " width="800" height="277"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ASG:&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8fd75eqiskf27botj5mg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8fd75eqiskf27botj5mg.png" alt=" " width="800" height="267"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, tear down the infrastructure with &lt;code&gt;terraform destroy&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws_vpc.myvpc: Refreshing state... [id=vpc-0ba03a84ccfd83d30]
aws_iam_role.nodes: Refreshing state... [id=eks-node-group-nodes]
aws_iam_role.demo: Refreshing state... [id=ashish]
aws_eip.nat: Refreshing state... [id=eipalloc-0eea3bf78b492fbfd]
aws_iam_policy.eks_cluster_autoscaler: Refreshing state... [id=arn:aws:iam::256050093938:policy/eks-cluster-autoscaler]
aws_iam_role_policy_attachment.demo-AmazonEKSClusterPolicy: Refreshing state... [id=ashish-20250610082751453900000003]
aws_iam_role_policy_attachment.nodes-AmazonEKSWorkerNodePolicy: Refreshing state... [id=eks-node-group-nodes-20250610082751375100000002]
aws_iam_role_policy_attachment.nodes-AmazonEKS_CNI_Policy: Refreshing state... [id=eks-node-group-nodes-20250610082751337600000001]
aws_iam_role_policy_attachment.nodes-AmazonEC2ContainerRegistryReadOnly: Refreshing state... [id=eks-node-group-nodes-20250610082751582900000004]
aws_subnet.private-us-east-1a: Refreshing state... [id=subnet-0a24ca86181eef50c]
aws_subnet.public-us-east-1a: Refreshing state... [id=subnet-00a93e051039f58ee]
aws_internet_gateway.myvpc-igw: Refreshing state... [id=igw-00d4e76abce23a7bd]
aws_subnet.public-us-east-1b: Refreshing state... [id=subnet-0ff0bdd792e4a95cb]
aws_subnet.private-us-east-1b: Refreshing state... [id=subnet-040a887feb7b2af36]
aws_route_table.public: Refreshing state... [id=rtb-0f1236cd61c6b3915]
aws_nat_gateway.k8s-nat: Refreshing state... [id=nat-01f8aa45e5dd791b0]
aws_eks_cluster.demo: Refreshing state... [id=ashish]
.......

........
Plan: 0 to add, 0 to change, 26 to destroy.

Changes to Outputs:
  - eks_cluster_autoscaler_arn = "arn:aws:iam::256050093938:role/eks-cluster-autoscaler" -&amp;gt; null

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

aws_iam_role_policy_attachment.eks_cluster_autoscaler_attach: Destroying... [id=eks-cluster-autoscaler-20250610083500259000000007]
aws_route_table_association.public-us-east-1b: Destroying... [id=rtbassoc-0e840b6fa06c1c731]
aws_eks_node_group.private-nodes: Destroying... [id=ashish:private-nodes]
aws_route_table_association.private-us-east-1b: Destroying... [id=rtbassoc-05822d5b565210dcc]
aws_route_table_association.private-us-east-1a: Destroying... [id=rtbassoc-035354c28db8e553c]
aws_route_table_association.public-us-east-1a: Destroying... [id=rtbassoc-0cd739f5e147182e9]
aws_iam_role_policy_attachment.eks_cluster_autoscaler_attach: Destruction complete after 0s
aws_iam_policy.eks_cluster_autoscaler: Destroying... [id=arn:aws:iam::256050093938:policy/eks-cluster-autoscaler]
aws_iam_role.eks_cluster_autoscaler: Destroying... [id=eks-cluster-autoscaler]
aws_route_table_association.private-us-east-1a: Destruction complete after 0s
aws_route_table_association.public-us-east-1a: Destruction complete after 0s
aws_eks_node_group.private-nodes: Still destroying... [id=ashish:private-nodes, 01m40s elapsed]
aws_eks_node_group.private-nodes: Still destroying... [id=ashish:private-nodes, 01m50s elapsed]
aws_eks_node_group.private-nodes: Still destroying... [id=ashish:private-nodes, 02m00s elapsed]
aws_eks_node_group.private-nodes: Still destroying... [id=ashish:private-nodes, 02m10s elapsed]
aws_subnet.private-us-east-1a: Destruction complete after 1s
aws_vpc.myvpc: Destroying... [id=vpc-078475545ade76529]
aws_vpc.myvpc: Destruction complete after 1s

Destroy complete! Resources: 26 destroyed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this step-by-step guide, you’ve learned how to efficiently deploy an Amazon EKS 1.31 cluster using Terraform, implement &lt;strong&gt;IAM Roles for Service Accounts (IRSA)&lt;/strong&gt; for secure and fine-grained permission control, and configure the &lt;strong&gt;Cluster Autoscaler&lt;/strong&gt; to automatically adjust your cluster size based on real-time demand.&lt;/p&gt;

&lt;p&gt;By combining these powerful tools, you now have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A production-ready, scalable Kubernetes environment&lt;/li&gt;
&lt;li&gt;Infrastructure defined as code for repeatability and version control&lt;/li&gt;
&lt;li&gt;Secure workload access to AWS services through IRSA&lt;/li&gt;
&lt;li&gt;Automated scaling to optimize cost and performance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This setup lays a strong foundation for running resilient and efficient containerized applications on AWS. Going forward, you can extend this architecture with monitoring, CI/CD pipelines, and additional security policies tailored to your workloads.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reference:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/gajjarashish007/aws-eks-terraform" rel="noopener noreferrer"&gt;https://github.com/gajjarashish007/aws-eks-terraform&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;“Don’t just tell people what to do — show them how to do it, and let the results speak for themselves.” - Ashish Gajjar&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>aws</category>
      <category>eks</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Amazon EKS (Auto Mode) Infrastructure as Code with Terraform</title>
      <dc:creator>Ashish Gajjar</dc:creator>
      <pubDate>Sun, 11 May 2025 00:29:35 +0000</pubDate>
      <link>https://forem.com/gajjarashish/amazon-eks-auto-mode-infrastructure-as-code-with-terraform-31l</link>
      <guid>https://forem.com/gajjarashish/amazon-eks-auto-mode-infrastructure-as-code-with-terraform-31l</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction:&lt;/strong&gt;&lt;br&gt;
Click here: &lt;a href="https://dev.to/aws-builders/enable-eks-auto-mode-on-an-existing-cluster-1j5m"&gt;https://dev.to/aws-builders/enable-eks-auto-mode-on-an-existing-cluster-1j5m&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Terraform Implementation of Amazon EKS Auto Mode&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  cluster_name = "my-vpc-eks-test"
}

module "vpc_eks" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.18.1"

  name = "my-vpc-eks-test"

  cidr                  = "10.20.0.0/19"

  azs             = ["eu-west-2a", "eu-west-2b", "eu-west-2c"]
  private_subnets = ["10.20.0.0/21", "10.20.8.0/21", "10.20.16.0/21"]
  public_subnets  = ["10.20.24.0/23", "10.20.26.0/23", "10.20.28.0/23"]

  enable_nat_gateway     = true
  single_nat_gateway     = true
  one_nat_gateway_per_az = false

  enable_vpn_gateway = true

  enable_dns_hostnames = true
  enable_dns_support   = true

  propagate_private_route_tables_vgw = true
  propagate_public_route_tables_vgw  = true

  private_subnet_tags = {
    "kubernetes.io/role/internal-elb" = "1",
    "mapPublicIpOnLaunch"             = "FALSE"
    "karpenter.sh/discovery"          = local.cluster_name
    "kubernetes.io/role/cni"          = "1"
  }

  public_subnet_tags = {
    "kubernetes.io/role/elb" = "1",
    "mapPublicIpOnLaunch"    = "TRUE"
  }

  tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
  }
}

resource "aws_eks_cluster" "cluster" {
  name     = local.cluster_name
  role_arn = aws_iam_role.cluster.arn
  version  = "1.32"

  vpc_config {
    subnet_ids              = module.vpc_eks.private_subnets
    security_group_ids      = []
    endpoint_private_access = "true"
    endpoint_public_access  = "true"
  }

  access_config {
    authentication_mode                         = "API"
    bootstrap_cluster_creator_admin_permissions = false
  }

  bootstrap_self_managed_addons = false

  zonal_shift_config {
    enabled = true
  }

  compute_config {
    enabled       = true
    node_pools    = ["general-purpose", "system"]
    node_role_arn = aws_iam_role.node.arn
  }

  kubernetes_network_config {
    elastic_load_balancing {
      enabled = true
    }
  }

  storage_config {
    block_storage {
      enabled = true
    }
  }
}

resource "aws_iam_role" "cluster" {
  name = "eks-test-cluster-role"

  assume_role_policy = data.aws_iam_policy_document.cluster_role_assume_role_policy.json
}

resource "aws_iam_role_policy_attachments_exclusive" "cluster" {
  role_name = aws_iam_role.cluster.name
  policy_arns = [
    "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy",
    "arn:aws:iam::aws:policy/AmazonEKSComputePolicy",
    "arn:aws:iam::aws:policy/AmazonEKSBlockStoragePolicy",
    "arn:aws:iam::aws:policy/AmazonEKSLoadBalancingPolicy",
    "arn:aws:iam::aws:policy/AmazonEKSNetworkingPolicy",
    "arn:aws:iam::aws:policy/AmazonEKSServicePolicy",
    "arn:aws:iam::aws:policy/AmazonEKSVPCResourceController"
  ]
}

data "aws_iam_policy_document" "cluster_role_assume_role_policy" {
  statement {
    actions = ["sts:AssumeRole", "sts:TagSession"]

    principals {
      type        = "Service"
      identifiers = ["eks.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "node" {
  name = "eks-auto-node-example"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = ["sts:AssumeRole"]
        Effect = "Allow"
        Principal = {
          Service = "ec2.amazonaws.com"
        }
      },
    ]
  })
}

resource "aws_iam_role_policy_attachment" "node_AmazonEKSWorkerNodeMinimalPolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodeMinimalPolicy"
  role       = aws_iam_role.node.name
}

resource "aws_iam_role_policy_attachment" "node_AmazonEC2ContainerRegistryPullOnly" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPullOnly"
  role       = aws_iam_role.node.name
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
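&lt;p&gt;Provisioning this follows the usual Terraform workflow; the cluster name and region below come from the configuration above (&lt;code&gt;my-vpc-eks-test&lt;/code&gt; in &lt;code&gt;eu-west-2&lt;/code&gt;):&lt;/p&gt;

```shell
# Standard Terraform workflow for the configuration above.
terraform init
terraform plan -out=tfplan
terraform apply tfplan

# Point kubectl at the new Auto Mode cluster
# (name and region taken from the config above).
aws eks --region eu-west-2 update-kubeconfig --name my-vpc-eks-test
kubectl get nodes
```

&lt;p&gt;Note that because the configuration sets &lt;code&gt;bootstrap_cluster_creator_admin_permissions = false&lt;/code&gt; with API authentication mode, you must also create an EKS access entry for your IAM principal before &lt;code&gt;kubectl&lt;/code&gt; can reach the cluster.&lt;/p&gt;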



&lt;p&gt;&lt;strong&gt;Ref:&lt;/strong&gt; &lt;a href="https://github.com/terraform-aws-modules/terraform-aws-eks/blob/v20.36.0/examples/eks-auto-mode/main.tf" rel="noopener noreferrer"&gt;https://github.com/terraform-aws-modules/terraform-aws-eks/blob/v20.36.0/examples/eks-auto-mode/main.tf&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Cross-account VPC sharing using AWS RAM</title>
      <dc:creator>Ashish Gajjar</dc:creator>
      <pubDate>Sun, 11 May 2025 00:09:35 +0000</pubDate>
      <link>https://forem.com/aws-builders/cross-account-vpc-sharing-using-aws-ram-3f0d</link>
      <guid>https://forem.com/aws-builders/cross-account-vpc-sharing-using-aws-ram-3f0d</guid>
      <description>&lt;p&gt;AWS Resource Access Manager (AWS RAM) simplifies sharing AWS resources across different AWS accounts, including within an organization and with IAM roles and users. It enables you to create a resource once and use it across multiple accounts, reducing operational overhead and duplication. AWS RAM is a centralized service that provides a consistent experience for sharing various AWS resources. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbgnd68piqh0v24zgo0d2.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbgnd68piqh0v24zgo0d2.gif" alt=" " width="560" height="420"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Key Features and Benefits:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Resource Sharing:&lt;/strong&gt; AWS RAM allows you to share resources like Route 53 Resolver Rules, Transit Gateways, Subnets, and License Manager Configurations with other AWS accounts. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Centralized Management:&lt;/strong&gt; It provides a single place to manage resource sharing, making it easier to control access and permissions. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Reduced Overhead:&lt;/strong&gt; By sharing resources, you eliminate the need to duplicate them across multiple accounts, saving time and resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Security and Control:&lt;/strong&gt; Access to shared resources is governed by IAM policies and Service Control Policies, ensuring secure and controlled access.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Below is a list of AWS services whose resources can be shared using AWS RAM:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;AWS App Mesh&lt;/li&gt;
&lt;li&gt;Amazon Aurora&lt;/li&gt;
&lt;li&gt;AWS Private Certificate Authority&lt;/li&gt;
&lt;li&gt;AWS CodeBuild&lt;/li&gt;
&lt;li&gt;Amazon EC2&lt;/li&gt;
&lt;li&gt;EC2 Image Builder&lt;/li&gt;
&lt;li&gt;AWS Glue&lt;/li&gt;
&lt;li&gt;AWS License Manager&lt;/li&gt;
&lt;li&gt;AWS Migration Hub Refactor Spaces&lt;/li&gt;
&lt;li&gt;AWS Network Firewall&lt;/li&gt;
&lt;li&gt;AWS Outposts&lt;/li&gt;
&lt;li&gt;Amazon S3 on Outposts&lt;/li&gt;
&lt;li&gt;AWS Resource Groups&lt;/li&gt;
&lt;li&gt;Amazon Route 53&lt;/li&gt;
&lt;li&gt;Amazon SageMaker&lt;/li&gt;
&lt;li&gt;AWS Service Catalog AppRegistry&lt;/li&gt;
&lt;li&gt;AWS Systems Manager Incident Manager&lt;/li&gt;
&lt;li&gt;Amazon VPC&lt;/li&gt;
&lt;li&gt;AWS Cloud WAN&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt;&lt;br&gt;
When you share a resource with another account, that account receives access to the resource, and its existing policies and permissions will apply to the shared resource.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyjlouj6kfd6qnyr9sn9h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyjlouj6kfd6qnyr9sn9h.png" alt=" " width="800" height="193"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I will now share subnets from account A (the owner account) with account B (the participant account).&lt;/p&gt;

&lt;p&gt;Setting up AWS organization:&lt;/p&gt;

&lt;p&gt;Create an AWS organization in account A and add the participant account B to it.&lt;/p&gt;

&lt;p&gt;Invite account B to the organization by sending an invitation request from the console.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffzoh8d3loqbbk2ygztrr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffzoh8d3loqbbk2ygztrr.png" alt=" " width="800" height="301"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create a custom VPC and several subnets in the owner account to be shared with the participant account.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp24y764lq9sc8vl85qqk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp24y764lq9sc8vl85qqk.png" alt=" " width="800" height="187"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, enable the resource sharing for your organization from the AWS Resource Access Manager settings in account A.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe0ey77c5pjacsrdd3d5w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe0ey77c5pjacsrdd3d5w.png" alt=" " width="800" height="227"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now let’s start resource sharing by creating a resource share in the “Shared by me” tab.&lt;/p&gt;

&lt;p&gt;After providing a description for the resource share, select “Subnets” in the resource tab, then select the subnets you wish to share with the participant account.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp0nrdn9s2lsqs24dt4rg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp0nrdn9s2lsqs24dt4rg.png" alt=" " width="800" height="413"&gt;&lt;/a&gt;&lt;br&gt;
The principal will be the destination account or the AWS Organization to which the subnets will be shared. I will go with AWS organization and select account B in the organization.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6wuc91v2erqufr9hauwu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6wuc91v2erqufr9hauwu.png" alt=" " width="800" height="231"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After creating the resource share in owner account A, go to participant account B and check that the resource share is visible in the AWS RAM dashboard’s “Shared with me” tab.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0uz8epcajk1wxxrxwqri.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0uz8epcajk1wxxrxwqri.png" alt=" " width="800" height="172"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The shared subnets will now appear in the participant account B along with the VPC.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbii60g082mg1t1lyb6uk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbii60g082mg1t1lyb6uk.png" alt=" " width="800" height="216"&gt;&lt;/a&gt;&lt;br&gt;
Let’s use this VPC to launch resources in the participant account. Navigate to the EC2 dashboard and, while launching an instance, verify in the configure-instance section that the shared VPC and subnets are available.&lt;/p&gt;
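&lt;p&gt;If you prefer the CLI over the console, here is a hedged sketch of launching an instance into the shared subnet from the participant account — the subnet and AMI IDs are placeholders, and commands are printed rather than executed unless &lt;code&gt;APPLY=1&lt;/code&gt; is set:&lt;/p&gt;

```shell
# Dry-run helper: print each AWS CLI call; execute it only when APPLY=1.
run() {
  echo "+ $*"
  if [ "${APPLY:-0}" = "1" ]; then "$@"; fi
}

# Placeholders -- use the shared subnet ID and an AMI valid in your region.
SHARED_SUBNET_ID="subnet-0123456789abcdef0"
AMI_ID="ami-0123456789abcdef0"

# In the participant account, the shared subnet behaves like a local one.
run aws ec2 run-instances \
  --image-id "${AMI_ID}" \
  --instance-type t3.micro \
  --subnet-id "${SHARED_SUBNET_ID}"
```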

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
AWS Resource Access Manager (RAM) removes the need to replicate resources across multiple accounts, reducing the operational overhead of managing them individually.&lt;/p&gt;

&lt;p&gt;With built-in integration with Amazon CloudWatch and AWS CloudTrail, RAM offers clear visibility into shared resources and the accounts accessing them.&lt;/p&gt;

&lt;p&gt;Access to shared resources is governed by existing policies and permissions, ensuring security and control. RAM delivers a consistent experience for sharing a wide range of AWS resources.&lt;/p&gt;

&lt;p&gt;By creating resources centrally and sharing them through RAM, you can streamline resource management in a multi-account environment.&lt;/p&gt;

&lt;p&gt;RAM enables efficient resource utilization across different parts of your organization, helping improve performance and reduce costs.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Migrating from EKS Cluster Autoscaler to Karpenter</title>
      <dc:creator>Ashish Gajjar</dc:creator>
      <pubDate>Sat, 18 Jan 2025 12:20:47 +0000</pubDate>
      <link>https://forem.com/aws-builders/migrating-from-eks-cluster-autoscaler-to-karpenter-3h17</link>
      <guid>https://forem.com/aws-builders/migrating-from-eks-cluster-autoscaler-to-karpenter-3h17</guid>
      <description>&lt;p&gt;Karpenter is an open-source, high-performance Kubernetes cluster autoscaler developed by AWS. Amazon Elastic Kubernetes Service (EKS) provides a powerful and flexible platform for running containerized applications. A key component in ensuring your cluster scales appropriately is the use of an autoscaler. Traditionally, EKS has relied on the Cluster Autoscaler (CA) to dynamically adjust node capacity based on the demand for resources. However, a newer tool called Karpenter is gaining traction due to its enhanced capabilities and efficiency.&lt;/p&gt;

&lt;p&gt;In this blog, we will guide you through the process of migrating from EKS Cluster Autoscaler to Karpenter and explore the benefits of making the switch.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fakl8mtjh84dfiii86x0r.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fakl8mtjh84dfiii86x0r.gif" alt=" " width="400" height="204"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Why Migrate from EKS Cluster Autoscaler to Karpenter?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Improved Node Scheduling Efficiency&lt;/strong&gt;&lt;br&gt;
Karpenter automatically optimizes the type, size, and number of nodes required for your workloads. Unlike the Cluster Autoscaler, which operates on a fixed set of predefined node groups, Karpenter provides greater flexibility by dynamically selecting the most appropriate instance types and scaling in real-time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Faster Scaling&lt;/strong&gt;&lt;br&gt;
Karpenter scales faster than Cluster Autoscaler. It responds to changes in your cluster within seconds, compared to the Cluster Autoscaler’s typically slower reactions to scaling events. This is especially helpful for workloads that need to scale quickly in response to demand spikes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Cost Optimization&lt;/strong&gt;&lt;br&gt;
Karpenter is designed to maximize the use of available resources by selecting the most cost-efficient instance types and ensuring that only the resources necessary for your workload are provisioned. This makes Karpenter particularly beneficial for cost-conscious organizations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Simpler Configuration&lt;/strong&gt;&lt;br&gt;
With Karpenter, you don’t have to manage separate node groups. Karpenter automatically adjusts the instance size and types needed based on your workloads. It simplifies the configuration process, making it more developer-friendly.&lt;/p&gt;

&lt;h2&gt;Step-by-Step Guide to Migrating from EKS Cluster Autoscaler to Karpenter&lt;/h2&gt;
&lt;h2&gt;
  
  
  Step 1: Pre-Requisites
&lt;/h2&gt;

&lt;p&gt;Before you start the migration process, ensure that the following pre-requisites are met:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You are running an EKS Cluster.&lt;/li&gt;
&lt;li&gt;You have kubectl access to your cluster.&lt;/li&gt;
&lt;li&gt;You have AWS CLI configured with the necessary permissions to manage your EKS cluster and resources.&lt;/li&gt;
&lt;li&gt;You are familiar with the basics of both Cluster Autoscaler and Karpenter.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Step 2: Create an IAM Role
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5gacwfqm16v33pzwfeoc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5gacwfqm16v33pzwfeoc.png" alt=" " width="800" height="393"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a role and select the EC2 service&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9jkzac4fzd3krppjgbej.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9jkzac4fzd3krppjgbej.png" alt=" " width="800" height="394"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Role Name:&lt;/strong&gt; KarpenterNodeRole-ashish&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiioy5zjjon7u4gqya4kv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiioy5zjjon7u4gqya4kv.png" alt=" " width="800" height="397"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;KarpenterNodeRole-ashish" role attached below policy 

&lt;ul&gt;
&lt;li&gt;AmazonEKSWorkerNodePolicy&lt;/li&gt;
&lt;li&gt;AmazonEKS_CNI_Policy&lt;/li&gt;
&lt;li&gt;AmazonEC2ContainerRegistryReadOnly&lt;/li&gt;
&lt;li&gt;AmazonSSMManagedInstanceCore&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx9jeivov2m5howt94duh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx9jeivov2m5howt94duh.png" alt=" " width="800" height="399"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 3: Create a Controller IAM Role
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Create a role&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5gacwfqm16v33pzwfeoc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5gacwfqm16v33pzwfeoc.png" alt=" " width="800" height="393"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select Web identity and provide the identity provider name.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff377x0j8lh84iprc70in.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff377x0j8lh84iprc70in.png" alt=" " width="800" height="249"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Role Name:&lt;/strong&gt; KarpenterControllerRole-ashish&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fln3d4gxc4sust50gk30u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fln3d4gxc4sust50gk30u.png" alt=" " width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Modify Trust relationships

&lt;ul&gt;
&lt;li&gt;Add OIDC&lt;/li&gt;
&lt;li&gt;Account ID
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/oidc.eks.ap-south-1.amazonaws.com/id/6B407ED9BFC9CE681546033D7AD4156A"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "oidc.eks.ap-south-1.amazonaws.com/id/6B407ED9BFC9CE681546033D7AD4156A:aud": "sts.amazonaws.com",
                    "oidc.eks.ap-south-1.amazonaws.com/id/6B407ED9BFC9CE681546033D7AD4156A:sub": "system:serviceaccount:karpenter:karpenter"
                }
            }
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
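&lt;p&gt;If you save the trust policy above to a file (with your own account ID and OIDC provider ID substituted), the role can also be created from the CLI — a dry-run sketch; nothing executes until &lt;code&gt;APPLY=1&lt;/code&gt;:&lt;/p&gt;

```shell
# Dry-run helper: print each AWS CLI call; execute it only when APPLY=1.
run() {
  echo "+ $*"
  if [ "${APPLY:-0}" = "1" ]; then "$@"; fi
}

# Assumes the trust policy above was saved as controller-trust.json
# with your account ID and OIDC provider ID filled in.
run aws iam create-role \
  --role-name "KarpenterControllerRole-ashish" \
  --assume-role-policy-document file://controller-trust.json
```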


&lt;ul&gt;
&lt;li&gt;Create a KarpenterControllerPolicy-ashish" policy
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt; EOF &amp;gt; controller-policy.json
{
    "Statement": [
        {
            "Action": [
                "ssm:GetParameter",
                "ec2:DescribeImages",
                "ec2:RunInstances",
                "ec2:DescribeSubnets",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeLaunchTemplates",
                "ec2:DescribeInstances",
                "ec2:DescribeInstanceTypes",
                "ec2:DescribeInstanceTypeOfferings",
                "ec2:DeleteLaunchTemplate",
                "ec2:CreateTags",
                "ec2:CreateLaunchTemplate",
                "ec2:CreateFleet",
                "ec2:DescribeSpotPriceHistory",
                "pricing:GetProducts"
            ],
            "Effect": "Allow",
            "Resource": "*",
            "Sid": "Karpenter"
        },
        {
            "Action": "ec2:TerminateInstances",
            "Condition": {
                "StringLike": {
                    "ec2:ResourceTag/karpenter.sh/nodepool": "*"
                }
            },
            "Effect": "Allow",
            "Resource": "*",
            "Sid": "ConditionalEC2Termination"
        },
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:${AWS_PARTITION}:iam::${AWS_ACCOUNT_ID}:role/KarpenterNodeRole-ashish",
            "Sid": "PassNodeIAMRole"
        },
        {
            "Effect": "Allow",
            "Action": "eks:DescribeCluster",
            "Resource": "arn:${AWS_PARTITION}:eks:${AWS_REGION}:${AWS_ACCOUNT_ID}:cluster/ashish",
            "Sid": "EKSClusterEndpointLookup"
        },
        {
            "Sid": "AllowScopedInstanceProfileCreationActions",
            "Effect": "Allow",
            "Resource": "*",
            "Action": [
            "iam:CreateInstanceProfile"
            ],
            "Condition": {
            "StringEquals": {
                "aws:RequestTag/kubernetes.io/cluster/ashish": "owned",
                "aws:RequestTag/topology.kubernetes.io/region": "${AWS_REGION}"
            },
            "StringLike": {
                "aws:RequestTag/karpenter.k8s.aws/ec2nodeclass": "*"
            }
            }
        },
        {
            "Sid": "AllowScopedInstanceProfileTagActions",
            "Effect": "Allow",
            "Resource": "*",
            "Action": [
            "iam:TagInstanceProfile"
            ],
            "Condition": {
            "StringEquals": {
                "aws:ResourceTag/kubernetes.io/cluster/ashish": "owned",
                "aws:ResourceTag/topology.kubernetes.io/region": "${AWS_REGION}",
                "aws:RequestTag/kubernetes.io/cluster/ashish": "owned",
                "aws:RequestTag/topology.kubernetes.io/region": "${AWS_REGION}"
            },
            "StringLike": {
                "aws:ResourceTag/karpenter.k8s.aws/ec2nodeclass": "*",
                "aws:RequestTag/karpenter.k8s.aws/ec2nodeclass": "*"
            }
            }
        },
        {
            "Sid": "AllowScopedInstanceProfileActions",
            "Effect": "Allow",
            "Resource": "*",
            "Action": [
            "iam:AddRoleToInstanceProfile",
            "iam:RemoveRoleFromInstanceProfile",
            "iam:DeleteInstanceProfile"
            ],
            "Condition": {
            "StringEquals": {
                "aws:ResourceTag/kubernetes.io/cluster/ashish": "owned",
                "aws:ResourceTag/topology.kubernetes.io/region": "${AWS_REGION}"
            },
            "StringLike": {
                "aws:ResourceTag/karpenter.k8s.aws/ec2nodeclass": "*"
            }
            }
        },
        {
            "Sid": "AllowInstanceProfileReadActions",
            "Effect": "Allow",
            "Resource": "*",
            "Action": "iam:GetInstanceProfile"
        }
    ],
    "Version": "2012-10-17"
}
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
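&lt;p&gt;The heredoc above writes &lt;code&gt;controller-policy.json&lt;/code&gt;; creating the policy and attaching it to the controller role can then be done from the CLI. A dry-run sketch follows — the account ID is a placeholder, and commands are printed rather than executed unless &lt;code&gt;APPLY=1&lt;/code&gt;:&lt;/p&gt;

```shell
# Dry-run helper: print each AWS CLI call; execute it only when APPLY=1.
run() {
  echo "+ $*"
  if [ "${APPLY:-0}" = "1" ]; then "$@"; fi
}

AWS_ACCOUNT_ID="111111111111"   # placeholder account ID

run aws iam create-policy \
  --policy-name "KarpenterControllerPolicy-ashish" \
  --policy-document file://controller-policy.json

run aws iam attach-role-policy \
  --role-name "KarpenterControllerRole-ashish" \
  --policy-arn "arn:aws:iam::${AWS_ACCOUNT_ID}:policy/KarpenterControllerPolicy-ashish"
```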


&lt;ul&gt;
&lt;li&gt;Attach KarpenterControllerPolicy-ashish to the controller role
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fme47ayrw5rrj24bx300d.png" alt=" " width="800" height="397"&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Step 4: Add tags to subnets and security groups
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Collect the subnet details
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ec2-user@ip-172-31-0-244 ~]$ aws eks describe-nodegroup --cluster-name "ashish" --nodegroup-name "ashish-workers" --query 'nodegroup.subnets' --output text
subnet-0a968db0a4c73858d        subnet-0bcd684f5878c3282        subnet-061e107c1f8ebc361
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Collect the security group&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ec2-user@ip-172-31-0-244 ~]$ aws eks describe-cluster --name "ashish" --query "cluster.resourcesVpcConfig.clusterSecurityGroupId" --output text
sg-0e0ac4fa44824e1aa
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Add the karpenter.sh/discovery tag to the security group and subnets.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 create-tags --tags "Key=karpenter.sh/discovery,Value=ashish" --resources "sg-0e0ac4fa44824e1aa"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
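&lt;p&gt;The subnets collected earlier need the same discovery tag. A loop over the three subnet IDs takes care of it — printed as a dry run here; set &lt;code&gt;APPLY=1&lt;/code&gt; to execute:&lt;/p&gt;

```shell
# Dry-run helper: print each AWS CLI call; execute it only when APPLY=1.
run() {
  echo "+ $*"
  if [ "${APPLY:-0}" = "1" ]; then "$@"; fi
}

# Subnet IDs returned by describe-nodegroup above.
for subnet in subnet-0a968db0a4c73858d subnet-0bcd684f5878c3282 subnet-061e107c1f8ebc361; do
  run aws ec2 create-tags \
    --tags "Key=karpenter.sh/discovery,Value=ashish" \
    --resources "${subnet}"
done
```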



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxwweizvgnzr8yuy9n6e8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxwweizvgnzr8yuy9n6e8.png" alt=" " width="800" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fasgats63zhtmgqmkyxng.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fasgats63zhtmgqmkyxng.png" alt=" " width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5: Update aws-auth ConfigMap
&lt;/h2&gt;

&lt;p&gt;We need to allow nodes that are using the node IAM role we just created to join the cluster. To do that we have to modify the aws-auth ConfigMap in the cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl edit configmap aws-auth -n kube-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will need to add a section to mapRoles that looks something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- groups:
  - system:bootstrappers
  - system:nodes
  # - eks:kube-proxy-windows
  rolearn: arn:aws:iam::${AWS_ACCOUNT_ID}:role/KarpenterNodeRole-ashish
  username: system:node:{{EC2PrivateDNSName}}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The full aws-auth ConfigMap should contain two role mappings: one for your Karpenter node role and one for your existing node group.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 6: Deploy Karpenter
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install karpenter oci://public.ecr.aws/karpenter/karpenter  --namespace "karpenter" --create-namespace \
    --set "settings.clusterName=ashish" \
    --set "serviceAccount.annotations.eks\.amazonaws\.com/role-arn=arn:aws:iam::${AWS_ACCOUNT_ID}:role/KarpenterControllerRole-ashish" \
    --set controller.resources.requests.cpu=1 \
    --set controller.resources.requests.memory=1Gi \
    --set controller.resources.limits.cpu=1 \
    --set controller.resources.limits.memory=1Gi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ec2-user@ip-172-31-0-244 ~]$ helm install karpenter oci://public.ecr.aws/karpenter/karpenter  --namespace "karpenter" --create-namespace \
    --set "settings.clusterName=ashish" \
    --set "serviceAccount.annotations.eks\.amazonaws\.com/role-arn=arn:aws:iam::256050093938:role/KarpenterControllerRole-ashish" \
    --set controller.resources.requests.cpu=1 \
    --set controller.resources.requests.memory=1Gi \
    --set controller.resources.limits.cpu=1 \
    --set controller.resources.limits.memory=1Gi
Pulled: public.ecr.aws/karpenter/karpenter:1.1.1
Digest: sha256:b42c6d224e7b19eafb65e2d440734027a8282145569d4d142baf10ba495e90d0
NAME: karpenter
LAST DEPLOYED: Sat Jan 18 01:51:41 2025
NAMESPACE: karpenter
STATUS: deployed
REVISION: 1
TEST SUITE: None

[ec2-user@ip-172-31-0-244 ~]$  kubectl get po -A
NAMESPACE     NAME                         READY   STATUS    RESTARTS   AGE
karpenter     karpenter-7d4c9cbd84-vpbfw   1/1     Running   0          29m
karpenter     karpenter-7d4c9cbd84-zjwz4   1/1     Running   0          29m
kube-system   aws-node-889mt               2/2     Running   0          16m
kube-system   aws-node-rnzsk               2/2     Running   0          51m
kube-system   coredns-6c55b85fbb-4cj87     1/1     Running   0          54m
kube-system   coredns-6c55b85fbb-nxwrg     1/1     Running   0          54m
kube-system   kube-proxy-8jmbr             1/1     Running   0          16m
kube-system   kube-proxy-mt4nt             1/1     Running   0          51m
kube-system   metrics-server-5-4zwff       1/1     Running   0          54m
kube-system   cluster-autoscaler-lb7cw     1/1     Running   0          54m

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 7: Create a NodePool
&lt;/h2&gt;

&lt;p&gt;We need to create a default NodePool so Karpenter knows what types of nodes to provision for pods that cannot be scheduled on existing capacity.&lt;/p&gt;

&lt;p&gt;You can retrieve the image IDs of the latest recommended Amazon EKS optimized Amazon Linux AMIs with the following commands.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fetch the AMI IDs using the command line:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ssm get-parameter --name /aws/service/eks/optimized-ami/1.30/amazon-linux-2-arm64/recommended/image_id --query Parameter.Value --output text
aws ssm get-parameter --name /aws/service/eks/optimized-ami/1.30/amazon-linux-2/recommended/image_id --query Parameter.Value --output text
aws ssm get-parameter --name /aws/service/eks/optimized-ami/1.30/amazon-linux-2-gpu/recommended/image_id --query Parameter.Value --output text
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
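&lt;p&gt;The EC2NodeClass manifest below substitutes &lt;code&gt;${ARM_AMI_ID}&lt;/code&gt; and &lt;code&gt;${AMD_AMI_ID}&lt;/code&gt;. Here is a small sketch that builds the SSM parameter paths those exports would query — the actual &lt;code&gt;aws ssm get-parameter&lt;/code&gt; calls are shown as comments, and 1.30 is the cluster version used in this walkthrough:&lt;/p&gt;

```shell
# Build the SSM parameter paths for the EKS-optimized AMIs of a given
# Kubernetes version; the EC2NodeClass manifest consumes the resulting IDs.
K8S_VERSION="1.30"
ssm_path() {
  echo "/aws/service/eks/optimized-ami/${K8S_VERSION}/$1/recommended/image_id"
}

# With credentials configured you would export, for example:
#   export ARM_AMI_ID="$(aws ssm get-parameter --name "$(ssm_path amazon-linux-2-arm64)" \
#       --query Parameter.Value --output text)"
echo "ARM path: $(ssm_path amazon-linux-2-arm64)"
echo "AMD path: $(ssm_path amazon-linux-2)"
echo "GPU path: $(ssm_path amazon-linux-2-gpu)"
```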



&lt;p&gt;Note: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Please change the instance types as per your requirements.&lt;/li&gt;
&lt;li&gt;This is unsafe for production workloads. Validate AMIs in lower environments before deploying them to production.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
        - key: kubernetes.io/os
          operator: In
          values: ["linux"]
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot"]
        - key: karpenter.k8s.aws/instance-category
          operator: In
          values: ["t","m"]
        - key: karpenter.k8s.aws/instance-generation
          operator: Gt
          values: ["2"]
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
      expireAfter: 720h # 30 * 24h = 720h
  limits:
    cpu: 1000
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
    consolidateAfter: 1m
####
apiVersion: karpenter.k8s.aws/v1
kind: EC2NodeClass
metadata:
  name: default
spec:
  amiFamily: AL2 # Amazon Linux 2
  role: "KarpenterNodeRole-ashish" # replace with your cluster name
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: "ashish" # replace with your cluster name
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: "ashish" # replace with your cluster name
  amiSelectorTerms:
    - id: "${ARM_AMI_ID}"
    - id: "${AMD_AMI_ID}"
#   - id: "${GPU_AMI_ID}" # &amp;lt;- GPU Optimized AMD AMI 
#   - name: "amazon-eks-node-${K8S_VERSION}-*" # &amp;lt;- automatically upgrade when a new AL2 EKS Optimized AMI is released. 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  output:
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ec2-user@ip-172-31-0-244 ~]$ vim nodepool.yaml
[ec2-user@ip-172-31-0-244 ~]$ kubectl apply -f nodepool.yaml
nodepool.karpenter.sh/default created
ec2nodeclass.karpenter.k8s.aws/default created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s test scaling by deploying Nginx and increasing its replica count.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deploy an Nginx deployment
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ec2-user@ip-172-31-0-244 ~]$ kubectl create deploy nginx --image=nginx:1.7.8 --replicas=2
deployment.apps/nginx created
[ec2-user@ip-172-31-0-244 ~]$ kubectl edit deployment nginx
deployment.apps/nginx edited
[ec2-user@ip-172-31-0-244 ~]$ kubectl get po -A
NAMESPACE     NAME                         READY   STATUS    RESTARTS   AGE
default       nginx-65757d685b-74h84       1/1     Running   0          78s
default       nginx-65757d685b-jzxrv       1/1     Running   0          78s
default       nginx-65757d685b-lqwf4       1/1     Running   0          3m11s
default       nginx-65757d685b-qfgcq       1/1     Running   0          78s
default       nginx-65757d685b-ssrrk       1/1     Running   0          78s
karpenter     karpenter-7d4c9cbd84-vpbfw   1/1     Running   0          25m
karpenter     karpenter-7d4c9cbd84-zjwz4   1/1     Running   0          25m
kube-system   aws-node-889mt               2/2     Running   0          11m
kube-system   aws-node-rnzsk               2/2     Running   0          46m
kube-system   coredns-6c55b85fbb-4cj87     1/1     Running   0          50m
kube-system   coredns-6c55b85fbb-nxwrg     1/1     Running   0          50m
kube-system   kube-proxy-8jmbr             1/1     Running   0          11m
kube-system   kube-proxy-mt4nt             1/1     Running   0          46m
kube-system   metrics-server-5-4zwff       1/1     Running   0          32m
kube-system   cluster-autoscaler-lb7cw     1/1     Running   0          35m
[ec2-user@ip-172-31-0-244 ~]$ kubectl get nodes
NAME                                            STATUS   ROLES    AGE   VERSION
ip-192-168-54-99.ap-south-1.compute.internal    Ready    &amp;lt;none&amp;gt;   51m   v1.30.8-eks-aeac579
ip-192-168-72-178.ap-south-1.compute.internal   Ready    &amp;lt;none&amp;gt;   17m   v1.30.8-eks-aeac579
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Increase the load on Nginx by scaling up the deployment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ec2-user@ip-172-31-0-244 ~]$ kubectl edit deployment nginx
deployment.apps/nginx edited
[ec2-user@ip-172-31-0-244 ~]$ kubectl get po -A
NAMESPACE     NAME                         READY   STATUS    RESTARTS   AGE
default       nginx-65757d685b-5q9dn       1/1     Running   0          2m12s
default       nginx-65757d685b-6twr7       0/1     Pending   0          4s
default       nginx-65757d685b-74h84       1/1     Running   0          10m
default       nginx-65757d685b-7fqwh       0/1     Pending   0          4s
default       nginx-65757d685b-8s4vx       1/1     Running   0          4s
default       nginx-65757d685b-b46x9       1/1     Running   0          2m12s
default       nginx-65757d685b-b4vxx       1/1     Running   0          2m12s
default       nginx-65757d685b-c9xk2       0/1     Pending   0          4s
default       nginx-65757d685b-cfsg9       0/1     Pending   0          4s
default       nginx-65757d685b-cwcz4       0/1     Pending   0          4s
default       nginx-65757d685b-f9z6f       1/1     Running   0          3m38s
default       nginx-65757d685b-gprq7       0/1     Pending   0          4s
default       nginx-65757d685b-hcqlq       0/1     Pending   0          4s
default       nginx-65757d685b-jcd2b       0/1     Pending   0          4s
default       nginx-65757d685b-m6kbf       1/1     Running   0          3m38s
default       nginx-65757d685b-mvpcf       0/1     Pending   0          4s
default       nginx-65757d685b-nshbx       1/1     Running   0          2m12s
default       nginx-65757d685b-pt7fj       1/1     Running   0          2m12s
default       nginx-65757d685b-q6vnq       0/1     Pending   0          4s
default       nginx-65757d685b-qcx94       0/1     Pending   0          4s
default       nginx-65757d685b-qfgcq       1/1     Running   0          10m
default       nginx-65757d685b-sfhsn       0/1     Pending   0          4s
default       nginx-65757d685b-sj9vd       1/1     Running   0          3m38s
default       nginx-65757d685b-sk74g       0/1     Pending   0          4s
default       nginx-65757d685b-vptn5       1/1     Running   0          4s
karpenter     karpenter-7d4c9cbd84-74527   0/1     Pending   0          2m12s
karpenter     karpenter-7d4c9cbd84-zjwz4   1/1     Running   0          34m
kube-system   aws-node-rnzsk               2/2     Running   0          55m
kube-system   coredns-6c55b85fbb-4cj87     1/1     Running   0          59m
kube-system   coredns-6c55b85fbb-nxwrg     1/1     Running   0          59m
kube-system   kube-proxy-mt4nt             1/1     Running   0          55m
kube-system   metrics-server-5-4zwff       1/1     Running   0          54m
kube-system   cluster-autoscaler-lb7cw     1/1     Running   0          54m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
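

&lt;p&gt;The interactive &lt;code&gt;kubectl edit&lt;/code&gt; steps above can also be done non-interactively with &lt;code&gt;kubectl scale&lt;/code&gt;; the replica count here is illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Non-interactive alternative to editing the Deployment by hand
kubectl scale deployment nginx --replicas=20
kubectl get deployment nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;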



&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;New node created&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                                            STATUS   ROLES    AGE   VERSION
ip-192-168-54-99.ap-south-1.compute.internal    Ready    &amp;lt;none&amp;gt;   57m   v1.30.8-eks-aeac579
ip-192-168-75-159.ap-south-1.compute.internal   Ready    &amp;lt;none&amp;gt;   95s   v1.30.8-eks-aeac579

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsbaw4ikprmfvuib574o0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsbaw4ikprmfvuib574o0.png" alt=" " width="800" height="391"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 8: Remove Cluster Autoscaler
&lt;/h2&gt;

&lt;p&gt;Now that Karpenter is running, we can disable the Cluster Autoscaler by scaling its Deployment down to zero replicas.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl scale deploy/cluster-autoscaler -n kube-system --replicas=0

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you have a single multi-AZ node group, we suggest a minimum of 2 instances.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws eks update-nodegroup-config --cluster-name "ashish" \
    --nodegroup-name "ashish-workers" \
    --scaling-config "minSize=2,maxSize=2,desiredSize=2"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you have multiple single-AZ node groups, we suggest a minimum of 1 instance each.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for NODEGROUP in $(aws eks list-nodegroups --cluster-name "ashish" \
    --query 'nodegroups' --output text); do aws eks update-nodegroup-config --cluster-name "ashish" \
    --nodegroup-name "ashish-workers" \
    --scaling-config "minSize=1,maxSize=1,desiredSize=1"
done

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 9: Verify Karpenter
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl logs -f -n karpenter -c controller -l app.kubernetes.io/name=karpenter

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Pricing
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;EKS Management Fees:&lt;/strong&gt;&lt;br&gt;
Both EKS Cluster Autoscaler and Karpenter incur a $0.10 per hour fee for EKS cluster management.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;EKS Management Fee = $0.10 per hour&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;EC2 instance cost:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Assume you're running an instance type m5.large at $0.096 per hour for 5 nodes.&lt;/li&gt;
&lt;li&gt;Cluster Autoscaler EC2 Cost = $0.096 * 5 * 24 hours = $11.52 per day&lt;/li&gt;
&lt;li&gt;Karpenter EC2 Cost (optimized with Spot Instances): Assume a 60% discount on Spot pricing, so the cost would be $0.0384 per hour per instance.

&lt;ul&gt;
&lt;li&gt; Karpenter EC2 Cost = $0.0384 * 5 * 24 hours = $4.61 per day&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cost Comparison for 30 Days:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cluster Autoscaler (with On-Demand EC2 instances):

&lt;ul&gt;
&lt;li&gt;$11.52 * 30 = $345.60 per month&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Karpenter (with Spot Instances):

&lt;ul&gt;
&lt;li&gt;$4.61 * 30 = $138.30 per month&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Total Cost Calculation (for one month):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cluster Autoscaler Total:

&lt;ul&gt;
&lt;li&gt;EKS Fee ($0.10 * 24 hours * 30 days) = $72&lt;/li&gt;
&lt;li&gt;EC2 Cost = $345.60&lt;/li&gt;
&lt;li&gt;$72 + $345.60 = $417.60 per month&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Karpenter Total:

&lt;ul&gt;
&lt;li&gt;EKS Fee ($0.10 * 24 hours * 30 days) = $72&lt;/li&gt;
&lt;li&gt;EC2 Cost = $138.30&lt;/li&gt;
&lt;li&gt;Total = $72 + $138.30 = $210.30 per month&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cost Savings with Karpenter:&lt;/strong&gt;&lt;br&gt;
By migrating to Karpenter, you can save approximately:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;$417.60 - $210.30 = $207.30 per month&lt;/li&gt;
&lt;/ul&gt;
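&lt;p&gt;As a sanity check, the arithmetic above can be reproduced in a single awk one-liner. The rates are the same illustrative assumptions (m5.large at $0.096/hr on-demand, a 60% Spot discount, 5 nodes, 30 days); real prices vary by region and over time, and the unrounded totals differ by a few cents from the rounded per-day figures above:&lt;/p&gt;

```shell
# Illustrative cost comparison -- same assumed rates as the tables above;
# real prices vary by region and change over time.
awk 'BEGIN {
  eks_fee  = 0.10 * 24 * 30           # EKS control-plane fee per month
  ondemand = 0.096  * 5 * 24 * 30     # 5 x m5.large on-demand for 30 days
  spot     = 0.0384 * 5 * 24 * 30     # same fleet at a 60% Spot discount
  printf "Cluster Autoscaler total: $%.2f\n", eks_fee + ondemand
  printf "Karpenter total:          $%.2f\n", eks_fee + spot
  printf "Monthly savings:          $%.2f\n", ondemand - spot
}'
```

&lt;p&gt;This prints $417.60, $210.24, and $207.36; the small delta from the $210.30 and $207.30 figures above comes from rounding the Spot cost to $4.61 per day.&lt;/p&gt;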

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Migrating from EKS Cluster Autoscaler to Karpenter offers several benefits, including improved scaling speed, cost efficiency, and simplified management. By following the steps outlined in this blog, you should be able to successfully migrate your cluster to use Karpenter, enhancing both performance and scalability.&lt;/p&gt;

&lt;p&gt;Remember, while Karpenter provides a more dynamic scaling solution, it’s important to continuously monitor your cluster to ensure it is optimizing resources effectively and making adjustments as your workloads evolve.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ref:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://karpenter.sh/docs" rel="noopener noreferrer"&gt;https://karpenter.sh/docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/autoscaling.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/eks/latest/userguide/autoscaling.html&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>kubernetes</category>
      <category>awseks</category>
      <category>aws</category>
      <category>devops</category>
    </item>
    <item>
      <title>Configure monit service in AL2023</title>
      <dc:creator>Ashish Gajjar</dc:creator>
      <pubDate>Sun, 05 Jan 2025 04:44:20 +0000</pubDate>
      <link>https://forem.com/aws-builders/configure-monit-services-in-al2023-2d5o</link>
      <guid>https://forem.com/aws-builders/configure-monit-services-in-al2023-2d5o</guid>
      <description>&lt;h2&gt;
  
  
  Introduction :
&lt;/h2&gt;

&lt;p&gt;Monit is a free, open-source process supervision tool for Unix and Linux. With Monit, system status can be viewed directly from the command line or via the native HTTP(S) web server. Monit can perform automatic maintenance and repair, and execute meaningful causal actions in error situations.&lt;/p&gt;

&lt;p&gt;Here's an overview of Monit and how it works:&lt;/p&gt;

&lt;h3&gt;
  
  
  What is Monit?
&lt;/h3&gt;

&lt;p&gt;Monit is a small, open-source, and lightweight monitoring tool used primarily on Unix-based systems (Linux, macOS, BSD). It’s designed to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Monitor system processes (e.g., web servers, databases).&lt;/li&gt;
&lt;li&gt;Check resource usage (e.g., CPU, memory).&lt;/li&gt;
&lt;li&gt;Watch files and directories for changes.&lt;/li&gt;
&lt;li&gt;Restart failed services automatically to ensure high availability.&lt;/li&gt;
&lt;li&gt;Send alerts via email or other methods when a service fails or when certain thresholds are exceeded.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Why Use Monit on AWS EC2?
&lt;/h3&gt;

&lt;p&gt;AWS EC2 instances can run various services and applications that need constant monitoring. While AWS offers CloudWatch for monitoring resources like CPU, memory, and disk usage, Monit provides more granular control for monitoring services like web servers (Nginx, Apache), databases (MySQL, PostgreSQL), and other background processes. Some key features include:&lt;/p&gt;

&lt;p&gt;Automatic service restarts when services fail or stop.&lt;br&gt;
Customizable monitoring thresholds for CPU, memory, disk space, and more.&lt;br&gt;
Real-time alerts for failures or resource usage violations.&lt;/p&gt;
&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;An AWS EC2 instance running any supported Linux AMI (Amazon Linux 2, Ubuntu, etc.).&lt;/li&gt;
&lt;li&gt;Basic familiarity with Linux command-line tools and SSH access to the EC2 instance.&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;
  
  
  Step 1: Launch an EC2 Instance with a Suitable AMI
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Log in to your AWS Management Console and navigate to EC2.&lt;/li&gt;
&lt;li&gt;Select Launch Instance.&lt;/li&gt;
&lt;li&gt;Choose an Amazon Machine Image (AMI), such as Amazon Linux 2023 or Ubuntu (based on your preference).&lt;/li&gt;
&lt;li&gt;Configure instance details, add storage, configure security groups (ensure SSH access), and launch the instance.&lt;/li&gt;
&lt;li&gt;After the instance is launched, make a note of the Public IP or Public DNS to SSH into the instance.&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;
  
  
  Step 2: SSH into Your EC2 Instance
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Once your EC2 instance is running, SSH into the instance using the following command (replace your-key.pem and your-ec2-ip with your actual private key and EC2 IP address):
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh -i your-key.pem ec2-user@your-ec2-ip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  Step 3: Install Monit on Your EC2 Instance
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Monit can be installed using your system's package manager. Follow the steps based on your EC2 instance's AMI.&lt;/li&gt;
&lt;li&gt;Monit Packages : &lt;a href="https://mmonit.com/monit/dist/" rel="noopener noreferrer"&gt;https://mmonit.com/monit/dist/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget  https://mmonit.com/monit/dist/monit-5.34.3.tar.gz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;You can also create an RPM package for CentOS/RHEL/Fedora from the source code directly using rpmbuild:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rpmbuild -tb monit-x.y.z.tar.gz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; $ tar zxvf monit-x.y.z.tar.gz (where x.y.z denotes version numbers)
 $ cd monit-x.y.z
 $ ./configure (use ./configure --help to view available options)
 $ make &amp;amp;&amp;amp; make install
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;./configure output :&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@ip-172-31-11-168 monit-5.34.3]# ./configure
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a race-free mkdir -p... /usr/bin/mkdir -p
........
........
config.status: executing libtool commands
+------------------------------------------------------------+
| License:                                                   |
| This is Open Source Software and use is subject to the GNU |
| AFFERO GENERAL PUBLIC LICENSE version 3, available in this |
| distribution in the file COPYING.                          |
|                                                            |
| By continuing this installation process, you are bound by  |
| the terms of this license agreement. If you do not agree   |
| with the terms of this license, you must abort the         |
| installation process at this point.                        |
+------------------------------------------------------------+
| Libmonit is configured as follows:                         |
|                                                            |
|   Optimized:                                    DISABLED   |
|   Profiling:                                    DISABLED   |
|   Compression:                                  ENABLED    |
+------------------------------------------------------------+

Monit Build Information:

                Architecture: LINUX
       SSL include directory: /usr/include
       SSL library directory: /lib64
              Compiler flags: -g -O2 -Wextra -fstack-protector-all -D_GNU_SOURCE -Wall -Wunused  -std=c11 -D _REENTRANT -I/usr/include -I/usr/include
                Linker flags: -lpam -lz -lpthread -lcrypt -lresolv  -L/lib64 -lssl -lcrypto -L/lib64
           pid file location: /run
           Install directory: /usr/local

+------------------------------------------------------------+
| License:                                                   |
| This is Open Source Software and use is subject to the GNU |
| AFFERO GENERAL PUBLIC LICENSE version 3, available in this |
| distribution in the file COPYING.                          |
|                                                            |
| By continuing this installation process, you are bound by  |
| the terms of this license agreement. If you do not agree   |
| with the terms of this license, you must abort the         |
| installation process at this point.                        |
+------------------------------------------------------------+
| Monit has been configured with the following options:      |
|                                                            |
|  Compression:                                  ENABLED     |
|  PAM support:                                  ENABLED     |
|  SSL support:                                  ENABLED     |
|  Large files support:                          ENABLED     |
|  ASAN support:                                 DISABLED    |
|  IPv6 support:                                 ENABLED     |
|  Optimized:                                    DISABLED    |
|  Profiling:                                    DISABLED    |
+------------------------------------------------------------+

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;make output :&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@ip-172-31-11-168 monit-5.34.3]# make
make  all-recursive
make[1]: Entering directory '/root/rpmbuild/BUILD/monit-5.34.3'
Making all in libmonit
make[2]: Entering directory '/root/rpmbuild/BUILD/monit-5.34.3/libmonit'
Making all in .
make[3]: Entering directory '/root/rpmbuild/BUILD/monit-5.34.3/libmonit'
make[3]: Nothing to be done for 'all-am'.
make[3]: Leaving directory '/root/rpmbuild/BUILD/monit-5.34.3/libmonit'
Making all in test
make[3]: Entering directory '/root/rpmbuild/BUILD/monit-5.34.3/libmonit/test'
make[3]: Nothing to be done for 'all'.
make[3]: Leaving directory '/root/rpmbuild/BUILD/monit-5.34.3/libmonit/test'
make[2]: Leaving directory '/root/rpmbuild/BUILD/monit-5.34.3/libmonit'
make[2]: Entering directory '/root/rpmbuild/BUILD/monit-5.34.3'
make[2]: Nothing to be done for 'all-am'.
make[2]: Leaving directory '/root/rpmbuild/BUILD/monit-5.34.3'
make[1]: Leaving directory '/root/rpmbuild/BUILD/monit-5.34.3'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;make install output :&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@ip-172-31-11-168 monit-5.34.3]# make install
make  install-recursive
make[1]: Entering directory '/root/rpmbuild/BUILD/monit-5.34.3'
Making install in libmonit
make[2]: Entering directory '/root/rpmbuild/BUILD/monit-5.34.3/libmonit'
Making install in .
make[3]: Entering directory '/root/rpmbuild/BUILD/monit-5.34.3/libmonit'
make[4]: Entering directory '/root/rpmbuild/BUILD/monit-5.34.3/libmonit'
make[4]: Nothing to be done for 'install-exec-am'.
make[4]: Nothing to be done for 'install-data-am'.
make[4]: Leaving directory '/root/rpmbuild/BUILD/monit-5.34.3/libmonit'
make[3]: Leaving directory '/root/rpmbuild/BUILD/monit-5.34.3/libmonit'
Making install in test
make[3]: Entering directory '/root/rpmbuild/BUILD/monit-5.34.3/libmonit/test'
make[4]: Entering directory '/root/rpmbuild/BUILD/monit-5.34.3/libmonit/test'
make[4]: Nothing to be done for 'install-exec-am'.
make[4]: Nothing to be done for 'install-data-am'.
make[4]: Leaving directory '/root/rpmbuild/BUILD/monit-5.34.3/libmonit/test'
make[3]: Leaving directory '/root/rpmbuild/BUILD/monit-5.34.3/libmonit/test'
make[2]: Leaving directory '/root/rpmbuild/BUILD/monit-5.34.3/libmonit'
make[2]: Entering directory '/root/rpmbuild/BUILD/monit-5.34.3'
make[3]: Entering directory '/root/rpmbuild/BUILD/monit-5.34.3'
 /usr/bin/mkdir -p '/usr/local/bin'
  /bin/sh ./libtool   --mode=install /usr/bin/install -c monit '/usr/local/bin'
libtool: install: /usr/bin/install -c monit /usr/local/bin/monit
 /usr/bin/mkdir -p '/usr/local/share/man/man1'
 /usr/bin/install -c -m 644 monit.1 '/usr/local/share/man/man1'
make[3]: Leaving directory '/root/rpmbuild/BUILD/monit-5.34.3'
make[2]: Leaving directory '/root/rpmbuild/BUILD/monit-5.34.3'
make[1]: Leaving directory '/root/rpmbuild/BUILD/monit-5.34.3'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
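

&lt;p&gt;The source install above does not register Monit with systemd. A minimal unit file, modeled on the sample unit shipped in the Monit source tree, might look like the following; the &lt;code&gt;/usr/local/bin&lt;/code&gt; path matches the install directory reported by &lt;code&gt;./configure&lt;/code&gt; above, so adjust it if yours differs:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical /etc/systemd/system/monit.service
[Unit]
Description=Pro-active monitoring utility for Unix systems
After=network.target

[Service]
Type=simple
ExecStart=/usr/local/bin/monit -I
ExecStop=/usr/local/bin/monit quit
ExecReload=/usr/local/bin/monit reload
Restart=on-failure

[Install]
WantedBy=multi-user.target
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Enable it with &lt;code&gt;systemctl daemon-reload&lt;/code&gt; followed by &lt;code&gt;systemctl enable --now monit&lt;/code&gt;.&lt;/p&gt;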


&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Monit basic command&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Optional commands are as follows:
 start all             - Start all services
 start &amp;lt;name&amp;gt;          - Only start the named service
 stop all              - Stop all services
 stop &amp;lt;name&amp;gt;           - Stop the named service
 restart all           - Stop and start all services
 restart &amp;lt;name&amp;gt;        - Only restart the named service
 monitor all           - Enable monitoring of all services
 monitor &amp;lt;name&amp;gt;        - Only enable monitoring of the named service
 unmonitor all         - Disable monitoring of all services
 unmonitor &amp;lt;name&amp;gt;      - Only disable monitoring of the named service
 reload                - Reinitialize monit
 status [name]         - Print full status information for service(s)
 summary [name]        - Print short status information for service(s)
 report [up|down|..]   - Report state of services. See manual for options
 quit                  - Kill the monit daemon process
 validate              - Check all services and start if not running
 procmatch &amp;lt;pattern&amp;gt;   - Test process matching pattern
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Output:&lt;/strong&gt; &lt;br&gt;
&lt;strong&gt;1. monit --version&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@ip-172-31-11-168 tmp]# monit --version
This is Monit version 5.34.3
Built with ssl, with ipv6, with compression, with pam and with large files
Copyright (C) 2001-2024 Tildeslash Ltd. All Rights Reserved.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. Test Monit&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@ip-172-31-11-168 monit-5.34.3]# monit status
Monit 5.34.3 uptime: 57m

System 'ip-172-31-11-168.ap-south-1.compute.internal'
  status                       OK
  monitoring status            Monitored
  monitoring mode              active
  on reboot                    start
  load average                 [0.00] [0.00] [0.00]
  cpu                          0.2%usr 0.1%sys 0.0%nice 0.1%iowait 0.0%hardirq 0.0%softirq 3.6%steal 0.0%guest 0.0%guestnice
  memory usage                 375.2 MB [39.5%]
  swap usage                   0 B [0.0%]
  uptime                       1h 8m
  boot time                    Sun, 05 Jan 2025 03:28:13
  filedescriptors              1248 [0.0% of 9223372036854775807 limit]
  data collected               Sun, 05 Jan 2025 04:37:04
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
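

&lt;p&gt;To have Monit actually supervise a service, add a check stanza to &lt;code&gt;/etc/monitrc&lt;/code&gt; (or a file under its include directory). The stanza below is a hypothetical sketch for Nginx; the pidfile path, port, and thresholds are examples to adapt:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;set daemon 30                    # poll every 30 seconds
set log /var/log/monit.log

check process nginx with pidfile /var/run/nginx.pid
  start program = "/usr/bin/systemctl start nginx"
  stop program  = "/usr/bin/systemctl stop nginx"
  if failed port 80 protocol http then restart
  if cpu &gt; 80% for 2 cycles then alert
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After editing, validate the control file with &lt;code&gt;monit -t&lt;/code&gt; and apply it with &lt;code&gt;monit reload&lt;/code&gt;.&lt;/p&gt;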



&lt;h4&gt;
  
  
  Step 4: Troubleshooting
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;If Monit is not behaving as expected, you can check the logs to troubleshoot:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo tail -f /var/log/monit.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;By setting up Monit on your AWS EC2 AMI, you gain the ability to monitor critical services and system resources in real time. Monit’s lightweight nature and easy-to-configure alerting system make it an excellent tool for ensuring that your AWS infrastructure remains healthy and operational. Additionally, with features like service restarts and resource usage thresholds, Monit will help automate the maintenance of your EC2 instance and keep services running smoothly.&lt;/p&gt;

</description>
      <category>linux</category>
      <category>monitoring</category>
      <category>aws</category>
      <category>operations</category>
    </item>
    <item>
      <title>Enable EKS Auto Mode on an existing cluster</title>
      <dc:creator>Ashish Gajjar</dc:creator>
      <pubDate>Mon, 23 Dec 2024 08:58:04 +0000</pubDate>
      <link>https://forem.com/aws-builders/enable-eks-auto-mode-on-an-existing-cluster-1j5m</link>
      <guid>https://forem.com/aws-builders/enable-eks-auto-mode-on-an-existing-cluster-1j5m</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;Introduction&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Amazon Elastic Kubernetes Service (Amazon EKS) Auto Mode is a new capability that streamlines Kubernetes cluster management for compute, storage, and networking. You can get started quickly, improve performance, and reduce overhead, letting you focus on building applications that drive innovation while offloading cluster management to AWS.&lt;/p&gt;

&lt;p&gt;Amazon EKS Auto Mode streamlines Kubernetes cluster management by automatically provisioning infrastructure, dynamically scaling resources, continually optimizing compute costs, and keeping the cluster patched and ready for deployment.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Challenges Before EKS Auto Mode&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Before the advent of EKS Auto Mode, users of Amazon EKS had to deal with several complex tasks related to the infrastructure management of their Kubernetes clusters. Even though the Kubernetes control plane itself was managed, the underlying worker node infrastructure remained the user's responsibility. This required a significant amount of time and expertise, leading to several challenges:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Selecting and Provisioning EC2 Instances&lt;br&gt;
Users were required to choose the appropriate EC2 instances for their Kubernetes clusters. This involved balancing resource optimization with cost considerations—an often tricky task that required deep knowledge of instance types and workload requirements.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Installing and Maintaining Plug-ins&lt;br&gt;
Kubernetes clusters often need additional plug-ins for networking, storage, and monitoring. Users had to ensure that these plug-ins were correctly installed, updated, and maintained to ensure the smooth operation of their clusters.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ongoing Maintenance and Security Updates&lt;br&gt;
In addition to managing EC2 instances and plug-ins, users were also tasked with performing routine maintenance such as OS patching and cluster upgrades. These activities were essential for maintaining security, but they also added to the operational overhead, as each upgrade had to be carefully planned and executed to avoid disruptions.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Introducing EKS Auto Mode: Automating Kubernetes Management&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The introduction of EKS Auto Mode represents a significant shift in how Kubernetes clusters are managed on AWS. With Auto Mode, much of the manual effort that previously fell on users is now automated, making it easier to run and scale Kubernetes workloads. Let’s look at how EKS Auto Mode addresses the challenges we just discussed:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Automatic EC2 Instance Provisioning&lt;br&gt;
With EKS Auto Mode, users no longer need to manually provision EC2 instances. The service automatically selects the best-suited instances for the Kubernetes cluster based on the workload's needs, optimizing both performance and cost. This removes the guesswork from EC2 selection and saves users valuable time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Streamlined Plug-in Management&lt;br&gt;
EKS Auto Mode also takes care of plug-in management. It ensures that essential Kubernetes plug-ins are installed and updated automatically, reducing the maintenance burden and ensuring that the cluster is always running the latest and most secure versions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Automated Cluster Upgrades and Security Patching&lt;br&gt;
One of the most time-consuming tasks for administrators is ensuring that the Kubernetes control plane and worker nodes are kept up-to-date with the latest patches. EKS Auto Mode automates these upgrades and security patches, ensuring that clusters remain secure without manual intervention.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;How EKS Auto Mode Benefits Users&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;By automating critical infrastructure management tasks, EKS Auto Mode allows Kubernetes users to focus more on their applications rather than the underlying infrastructure. The key benefits include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Reduced Operational Overhead: Automated provisioning, patching, and scaling eliminate much of the time-consuming management tasks, freeing up resources for more important work.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Improved Security: Automatic updates and patching ensure that security vulnerabilities are addressed in a timely manner, without requiring manual intervention.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cost Optimization: By automatically selecting the right EC2 instances for the workload, users can optimize both performance and cost, making it easier to run Kubernetes efficiently at scale.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Getting started&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://dev.to/aws-builders/creating-a-eks-cluster-124-version-from-scratch-using-eksctl-1k2p"&gt;&lt;strong&gt;Step 1: Create a EKS Cluster&lt;/strong&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Step 2: Verify EKS Cluster&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4tmzrbco5vxj6yqkaabc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4tmzrbco5vxj6yqkaabc.png" alt=" " width="800" height="163"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Upgrade the cluster using the Auto Mode option.&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select a cluster and click “Create cluster”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fevmal9wm7d8z7lwrepm9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fevmal9wm7d8z7lwrepm9.png" alt=" " width="800" height="143"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Configure cluster&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open your cluster overview page in the AWS Management Console.&lt;/li&gt;
&lt;li&gt;Under “EKS Auto Mode,” select “Manage”

&lt;ul&gt;
&lt;li&gt;Quick configuration (with EKS Auto Mode) - new
Quickly create a cluster with production-grade default settings. The configuration uses EKS Auto Mode to automate infrastructure tasks like creating nodes and provisioning storage.&lt;/li&gt;
&lt;li&gt;Custom configuration
To change default settings prior to creation, choose this option. This configuration gives the option to use EKS Auto Mode and customize the cluster’s configuration.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Cluster IAM Role of the existing EKS Cluster must include sufficient permissions for EKS Auto Mode, such as the following policies:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AmazonEKSComputePolicy&lt;/li&gt;
&lt;li&gt;AmazonEKSBlockStoragePolicy&lt;/li&gt;
&lt;li&gt;AmazonEKSLoadBalancingPolicy&lt;/li&gt;
&lt;li&gt;AmazonEKSNetworkingPolicy&lt;/li&gt;
&lt;li&gt;AmazonEKSClusterPolicy&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ozpukgjtdzgfwd7pv1r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ozpukgjtdzgfwd7pv1r.png" alt="Cluster configuration" width="800" height="460"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8gk2oxjal35ax2gmyu2e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8gk2oxjal35ax2gmyu2e.png" alt=" " width="800" height="459"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Verify cluster&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs4mnt7awn7f8yvg1jyh0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs4mnt7awn7f8yvg1jyh0.png" alt=" " width="800" height="108"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Command Line Procedure&lt;/strong&gt;&lt;br&gt;
Use the following commands to enable EKS Auto Mode on an existing cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws eks update-cluster-config \
 --name $CLUSTER_NAME \
 --compute-config enabled=true \
 --kubernetes-network-config '{"elasticLoadBalancing":{"enabled": true}}' \
 --storage-config '{"blockStorage":{"enabled": true}}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
EKS Auto Mode significantly reduces the operational complexity of running Kubernetes clusters on AWS. It automates key tasks like instance provisioning, plug-in management, and cluster upgrades, making it easier to maintain secure, up-to-date, and cost-optimized Kubernetes environments. If you’re looking for a simpler, more efficient way to manage your Kubernetes infrastructure, EKS Auto Mode is the solution you’ve been waiting for.&lt;/p&gt;

</description>
      <category>eks</category>
      <category>recap</category>
      <category>kubernetes</category>
      <category>aws</category>
    </item>
    <item>
      <title>Upgrading Lambda function from Python 3.8 to Higher</title>
      <dc:creator>Ashish Gajjar</dc:creator>
      <pubDate>Sat, 16 Nov 2024 20:03:36 +0000</pubDate>
      <link>https://forem.com/aws-builders/upgrading-lambda-function-from-python-38-to-higher-emc</link>
      <guid>https://forem.com/aws-builders/upgrading-lambda-function-from-python-38-to-higher-emc</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvdy82dll9n9lel94iaep.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvdy82dll9n9lel94iaep.gif" alt=" " width="400" height="225"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This guide walks you through the process of upgrading AWS Lambda Python code from version 3.8 to 3.11. While there are several approaches to achieving this, I will focus on one method: deploying a new Lambda function.&lt;/p&gt;

&lt;p&gt;The reason for this upgrade is that AWS Lambda will deprecate Python 3.8 on October 14, 2024, in line with Python 3.8’s End-Of-Life (EOL), also scheduled for October 2024 [1]. Migrating to a newer version ensures continued support and access to the latest features and security updates.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Background&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Our existing Lambda function is based on a Get S3 Object example, an existing deployment provided by AWS. This Python code was deployed with Python 3.8.&lt;/p&gt;
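&lt;p&gt;Before and after migrating, it can help to confirm which runtime actually serves an invocation. A minimal, hypothetical handler (not the article's actual function) that reports its interpreter version might look like this:&lt;/p&gt;

```python
import sys

def lambda_handler(event, context):
    # Report the interpreter version so you can confirm which
    # runtime (3.8 vs 3.11) actually executed this invocation.
    version = "{}.{}.{}".format(*sys.version_info[:3])
    print(f"Running on Python {version}")
    return {"statusCode": 200, "body": version}
```

&lt;p&gt;Invoking it once after the upgrade (or locally with &lt;code&gt;lambda_handler({}, None)&lt;/code&gt;) tells you immediately whether the new runtime is in place.&lt;/p&gt;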

&lt;h2&gt;
  
  
  &lt;strong&gt;Create a New Lambda Function&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The first option we will explore is deploying a new Lambda function and updating the existing code to be compatible with Python 3.11. One change worth calling out is how print works: print was a statement in Python 2, but in Python 3 it became a function. Since print is commonly used for debugging and logging, you’ll often encounter it in older code, and any legacy print statements must be updated to function calls before the code will run on Python 3.11.&lt;/p&gt;

&lt;p&gt;The screenshot below is an example of the existing Lambda function definition. You will notice that the code is deployed as a “Zip” package type.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdhr4ciziup2pq6bwagh4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdhr4ciziup2pq6bwagh4.png" alt="Function deployed as a “Zip” package." width="800" height="201"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In Python 3, the print statement from Python 2 was replaced by the print() function, which means you need to use parentheses around what you want to print. If you want to print an exception error message (such as when catching an exception), you can print the exception object itself.&lt;/p&gt;

&lt;p&gt;Here’s how you can print an exception using Python 3:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import traceback

try:
    # Code that might raise an exception
    1 / 0  # This will raise a ZeroDivisionError
except Exception as e:
    # Print the exception details
    print(f"An error occurred: {e}")

    # Alternatively, you can also print the full traceback:
    print("Full traceback:")
    traceback.print_exc()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;br&gt;
In Python 3, e is the exception object in the except block. To print its details, use &lt;code&gt;print(f"An error occurred: {e}")&lt;/code&gt;. The f-string allows for string interpolation, inserting the value of e into the string.&lt;br&gt;
&lt;code&gt;traceback.print_exc()&lt;/code&gt; will print the full traceback of the exception if you want more detailed debugging information.&lt;br&gt;
This is the modern way to handle and print errors in Python 3.&lt;/p&gt;
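&lt;p&gt;When the traceback needs to go into a log record rather than straight to the console, &lt;code&gt;traceback.format_exc()&lt;/code&gt; returns the same information as a string. A small sketch:&lt;/p&gt;

```python
import traceback

def risky():
    return 1 / 0  # raises ZeroDivisionError

try:
    risky()
except Exception:
    # format_exc() returns the traceback as a string instead of
    # printing it, so it can be embedded in a structured log entry.
    details = traceback.format_exc()
    print(f"Full traceback captured:\n{details}")
```

&lt;p&gt;This is often more convenient in Lambda, where anything printed ends up in CloudWatch Logs and a single multi-line string keeps the traceback in one log event.&lt;/p&gt;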

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flo5xu4avtv8nbuoynqlp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flo5xu4avtv8nbuoynqlp.png" alt=" " width="800" height="488"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 1: Backup your existing function code&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;As a precaution, as with everything we do in the IT space, it is good to start with a backup. In the top right corner of the screen there is an “Actions” drop-down list. Click “Actions” and select “Download function zip”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhj3b8efmd66cuyyzk8en.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhj3b8efmd66cuyyzk8en.png" alt=" " width="531" height="249"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then we will select “Download deployment package”. This will download a Zip file with the original deployment package and any code changes that have occurred. Save this file somewhere you can find it in the future, in case there are any issues.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 2: Create a new version of your code&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Navigate to Lambda Functions. On the top of the screen you will see your existing functions. We will create a new function with the “Create function” orange button on the top right.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5csgb2rukfwabv7qo257.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5csgb2rukfwabv7qo257.png" alt=" " width="800" height="100"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the create screen we will give a new “Function name”, select the “Runtime” of “Python 3.11”, and in this case reuse the existing role from our previous Lambda function, since we are duplicating that code. Then press the “Create function” button at the bottom.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Runtime changes are shown below&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqxbyrc89ncze1tjz3mk5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqxbyrc89ncze1tjz3mk5.png" alt=" " width="800" height="329"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqes5fb4nmw75ub3sjq7m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqes5fb4nmw75ub3sjq7m.png" alt=" " width="800" height="391"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Deploy a new Lambda function&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuhodcqnmscz4jznb252p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuhodcqnmscz4jznb252p.png" alt=" " width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5jo00f7m0cjiygty7226.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5jo00f7m0cjiygty7226.png" alt=" " width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhunla0er6w7vinbuyxcw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhunla0er6w7vinbuyxcw.png" alt=" " width="800" height="383"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next we will need to import your code as a Zip file, or, if your code does not require supporting libraries, you can add it directly into the editor. While converting your code, make sure you update any legacy Python 2-style print statements to print() function calls.&lt;/p&gt;
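&lt;p&gt;Converting simple one-line print statements can be partly automated. A naive sketch (assuming bare, single-line statements; it does not handle trailing commas, &lt;code&gt;print &amp;gt;&amp;gt; f&lt;/code&gt; redirection, or multi-line prints, for which the standard 2to3 tool is the safer choice):&lt;/p&gt;

```python
import re

# Naive converter: wraps the argument of a bare Python 2 style
# print statement in parentheses. Lines that already call the
# print() function are left untouched.
PRINT_RE = re.compile(r'^(\s*)print\s+(?!\()(.+)$')

def convert_prints(source: str) -> str:
    lines = []
    for line in source.splitlines():
        lines.append(PRINT_RE.sub(r'\1print(\2)', line))
    return "\n".join(lines)

legacy = 'print "uploading object"'
print(convert_prints(legacy))  # print("uploading object")
```

&lt;p&gt;Running this over a copy of lambda_function.py gives you a starting point; you should still review every changed line by hand.&lt;/p&gt;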

&lt;p&gt;After you have changed your lambda_function.py code to incorporate all of the Python 3 changes, you will need to upload a Zip of the code. If you need help creating a zip file for Python libraries that are not part of the AWS Lambda default footprint, you can follow this article:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/python-package.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/lambda/latest/dg/python-package.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Also, don’t forget to edit the basic settings if your code takes a while to execute and you get a timeout message. The default Lambda timeout is 3 seconds.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fplrwmp6of9lxxz0mhi9w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fplrwmp6of9lxxz0mhi9w.png" alt=" " width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 3: Test your Code&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;We will then test the Lambda function by pressing the “Test” button on the function’s “Code” screen.&lt;/p&gt;

&lt;p&gt;Name your test and provide any JSON data objects that you normally test your code with. This opens an execution results tab that you can debug your code from.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmv0k4tj4jkkjvm6o5kns.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmv0k4tj4jkkjvm6o5kns.png" alt=" " width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hope this helped you migrate your Lambda function to Python 3 by creating a new Lambda function and changing the trigger to call the new function.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Conclusion:&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Upgrading the Python version in an AWS Lambda function is beneficial for staying current with new features, performance improvements, and security updates. However, it requires careful testing of your codebase, dependencies, and Lambda configuration to avoid breaking changes and ensure a seamless transition. Following best practices like using Lambda versioning, testing in a staging environment, and monitoring post-upgrade performance will help ensure that the upgrade is successful and provides the expected benefits.&lt;/p&gt;

</description>
      <category>lambda</category>
      <category>serverless</category>
      <category>community</category>
      <category>devops</category>
    </item>
    <item>
      <title>Cluster Autoscaler configure on AWS EKS -1.24</title>
      <dc:creator>Ashish Gajjar</dc:creator>
      <pubDate>Fri, 01 Nov 2024 12:47:21 +0000</pubDate>
      <link>https://forem.com/aws-builders/cluster-autoscaler-configure-on-aws-eks-130-22eg</link>
      <guid>https://forem.com/aws-builders/cluster-autoscaler-configure-on-aws-eks-130-22eg</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction :&lt;/strong&gt;&lt;br&gt;
The Kubernetes Cluster Autoscaler automatically adjusts the number of nodes in your cluster when pods fail or are rescheduled onto other nodes. The Cluster Autoscaler uses Auto Scaling groups. For more information, see Cluster Autoscaler on AWS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm7lk53g4cnxzztixbpjl.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm7lk53g4cnxzztixbpjl.gif" alt=" " width="853" height="480"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Step 1: Create an EKS Cluster&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Perform Step 1 to Step 5 of the EKS cluster creation article: Click here&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Verify how many nodes and pods are running&lt;/strong&gt;&lt;br&gt;
Nodes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@ip-172-31-18-194 ~]# kubectl get nodes
NAME                            STATUS     ROLES    AGE     VERSION
ip-192-168-5-245.ec2.internal   Ready      &amp;lt;none&amp;gt;   4m19s   v1.24.17-eks-e71965b
ip-192-168-63-39.ec2.internal   Ready      &amp;lt;none&amp;gt;   2s      v1.24.17-eks-e71965b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Pods:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@ip-172-31-18-194 ~]# kubectl get po -A
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   aws-node-4fdfg             1/1     Running   0          2m50s
kube-system   aws-node-mm84r             1/1     Running   0          2m53s
kube-system   coredns-79989457d9-798tx   1/1     Running   0          10m
kube-system   coredns-79989457d9-7fhzl   1/1     Running   0          10m
kube-system   kube-proxy-rkbzz           1/1     Running   0          2m50s
kube-system   kube-proxy-vfq7k           1/1     Running   0          2m53s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 3: Create an IAM OIDC Provider&lt;/strong&gt;&lt;br&gt;
The IAM OIDC provider authorizes the Cluster Autoscaler to launch or terminate instances under an Auto Scaling group.&lt;br&gt;
Open the EKS dashboard and copy the OpenID Connect provider URL&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fok37ablqi66958y0xpak.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fok37ablqi66958y0xpak.png" alt=" " width="750" height="297"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open the IAM Identity providers page
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuzt17xbmn2h15c4fle13.png" alt=" " width="750" height="316"&gt;
&lt;/li&gt;
&lt;li&gt;Click “Add provider,” select “OpenID Connect,” and click “Get thumbprint” as shown below:
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faoa3m4wlrwisiy02e8nf.png" alt=" " width="750" height="664"&gt;
&lt;/li&gt;
&lt;li&gt;Then enter the “Audience” (sts.amazonaws.com in our example, pointing to AWS STS, also known as the Security Token Service) and click “Add provider”
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkasyam8fitbx827kse8a.png" alt=" " width="750" height="713"&gt;
&lt;/li&gt;
&lt;li&gt;The identity information is added to the identity providers list
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftqwxc1p3ir2wp36mz6jw.png" alt=" " width="750" height="257"&gt;
&lt;strong&gt;Step 4: Create IAM Policy&lt;/strong&gt;
Create a policy with the necessary permissions.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnx6ugz5ue8m7kbm7x0bn.png" alt=" " width="750" height="113"&gt;
&lt;/li&gt;
&lt;li&gt;To create the policy with the necessary permissions, save the below file as AmazonEKSClusterAutoscalerPolicy
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "autoscaling:DescribeAutoScalingGroups",
                "autoscaling:DescribeAutoScalingInstances",
                "autoscaling:DescribeLaunchConfigurations",
                "autoscaling:DescribeScalingActivities",
                "autoscaling:DescribeTags",
                "ec2:DescribeInstanceTypes",
                "ec2:DescribeLaunchTemplateVersions"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "autoscaling:SetDesiredCapacity",
                "autoscaling:TerminateInstanceInAutoScalingGroup",
                "ec2:DescribeImages",
                "ec2:GetInstanceTypesFromInstanceRequirements",
                "eks:DescribeNodegroup"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
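&lt;p&gt;Before creating the policy, it can be worth sanity-checking that the saved file parses as valid JSON, since a stray comma will make the console reject it. A quick sketch (the abbreviated policy text here is illustrative; use your full AmazonEKSClusterAutoscalerPolicy file):&lt;/p&gt;

```python
import json

# Sanity-check the policy document before pasting it into the
# IAM console. json.loads raises ValueError on any syntax error.
policy_text = """
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["autoscaling:DescribeAutoScalingGroups"],
            "Resource": ["*"]
        }
    ]
}
"""

policy = json.loads(policy_text)
print(f"Policy OK: {len(policy['Statement'])} statement(s)")
```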


&lt;ul&gt;
&lt;li&gt;Review and create a policy
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk19i5jyqb86ky6l39959.png" alt=" " width="750" height="311"&gt;
&lt;strong&gt;Step 5: Create an IAM Role for the provider.&lt;/strong&gt;
Create role
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frkcw3fz1jiywjpy3mwtk.png" alt=" " width="750" height="209"&gt;
Select the “Web identity” type&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the identity provider and audience, then click Next.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F37nxldy5gr0v2dacdbjr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F37nxldy5gr0v2dacdbjr.png" alt=" " width="750" height="393"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Attach the AmazonEKSClusterAutoscalerPolicy policy&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbnf162v9oj7xon1ninwe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbnf162v9oj7xon1ninwe.png" alt=" " width="750" height="425"&gt;&lt;/a&gt;&lt;br&gt;
Click Next and provide the role name: EKS_Autoscaler&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu77j22j4t53wuxsnwiqy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu77j22j4t53wuxsnwiqy.png" alt=" " width="750" height="427"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5dw1b224lq7wv8x83sn5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5dw1b224lq7wv8x83sn5.png" alt=" " width="750" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Verify the IAM role and make sure the policy is attached.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fad1sa0vb4j97qkwjydaj.png" alt=" " width="750" height="364"&gt;
Then edit the “Trust relationships.”
Before the edit:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::256050093938:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/CD6440D4E14822FC649C070BD8C41A96"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "oidc.eks.us-east-1.amazonaws.com/id/CD6440D4E14822FC649C070BD8C41A96:aud": "sts.amazonaws.com"
                }
            }
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;After the edit:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::256050093938:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/CD6440D4E14822FC649C070BD8C41A96"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "oidc.eks.us-east-1.amazonaws.com/id/CD6440D4E14822FC649C070BD8C41A96:aud": "sts.amazonaws.com",
                    "oidc.eks.us-east-1.amazonaws.com/id/CD6440D4E14822FC649C070BD8C41A96:sub": "system:serviceaccount:kube-system:cluster-autoscaler"
                }
            }
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Step 6: Deploy the Cluster Autoscaler&lt;/strong&gt;&lt;br&gt;
Next, we deploy the Cluster Autoscaler. To do so, you must use the Amazon Resource Name (ARN) of the IAM role created in the earlier step.&lt;br&gt;
Save the content below into a file, making sure you copy all of it.&lt;br&gt;
&lt;strong&gt;Modify the following two lines&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Line 8: change the IAM role ARN to the role created above&lt;/li&gt;
&lt;li&gt;Line 159: --node-group-auto-discovery, which the Cluster Autoscaler uses to discover the Auto Scaling group based on its tags
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::256050093938:role/EKS_Autoscaler
  name: cluster-autoscaler
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-autoscaler
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
rules:
  - apiGroups: [""]
    resources: ["events", "endpoints"]
    verbs: ["create", "patch"]
  - apiGroups: [""]
    resources: ["pods/eviction"]
    verbs: ["create"]
  - apiGroups: [""]
    resources: ["pods/status"]
    verbs: ["update"]
  - apiGroups: [""]
    resources: ["endpoints"]
    resourceNames: ["cluster-autoscaler"]
    verbs: ["get", "update"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["watch", "list", "get", "update"]
  - apiGroups: [""]
    resources:
      - "pods"
      - "services"
      - "replicationcontrollers"
      - "persistentvolumeclaims"
      - "persistentvolumes"
    verbs: ["watch", "list", "get"]
  - apiGroups: ["extensions"]
    resources: ["replicasets", "daemonsets"]
    verbs: ["watch", "list", "get"]
  - apiGroups: ["policy"]
    resources: ["poddisruptionbudgets"]
    verbs: ["watch", "list"]
  - apiGroups: ["apps"]
    resources: ["statefulsets", "replicasets", "daemonsets"]
    verbs: ["watch", "list", "get"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses", "csinodes"]
    verbs: ["watch", "list", "get"]
  - apiGroups: ["batch", "extensions"]
    resources: ["jobs"]
    verbs: ["get", "list", "watch", "patch"]
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["create"]
  - apiGroups: ["coordination.k8s.io"]
    resourceNames: ["cluster-autoscaler"]
    resources: ["leases"]
    verbs: ["get", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["create","list","watch"]
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["cluster-autoscaler-status", "cluster-autoscaler-priority-expander"]
    verbs: ["delete", "get", "update", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-autoscaler
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-autoscaler
subjects:
  - kind: ServiceAccount
    name: cluster-autoscaler
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cluster-autoscaler
subjects:
  - kind: ServiceAccount
    name: cluster-autoscaler
    namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  labels:
    app: cluster-autoscaler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
      annotations:
        cluster-autoscaler.kubernetes.io/safe-to-evict: 'false'
    spec:
      serviceAccountName: cluster-autoscaler
      containers:
        - image: k8s.gcr.io/autoscaling/cluster-autoscaler:v1.20.0
          name: cluster-autoscaler
          resources:
            limits:
              cpu: 100m
              memory: 500Mi
            requests:
              cpu: 100m
              memory: 500Mi
          command:
            - ./cluster-autoscaler
            - --v=4
            - --stderrthreshold=info
            - --cloud-provider=aws
            - --skip-nodes-with-local-storage=false
            - --expander=least-waste
            - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/ashish
            - --balance-similar-node-groups
            - --skip-nodes-with-system-pods=false
          volumeMounts:
            - name: ssl-certs
              mountPath: /etc/ssl/certs/ca-certificates.crt #/etc/ssl/certs/ca-bundle.crt for Amazon Linux Worker Nodes
              readOnly: true
          imagePullPolicy: "Always"
      volumes:
        - name: ssl-certs
          hostPath:
            path: "/etc/ssl/certs/ca-bundle.crt"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;To deploy the CA, save the manifest above as &lt;code&gt;cluster-autoscaler.yaml&lt;/code&gt; and apply it with the command below; the expected output follows the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f cluster-autoscaler.yaml
serviceaccount/cluster-autoscaler created
clusterrole.rbac.authorization.k8s.io/cluster-autoscaler created
role.rbac.authorization.k8s.io/cluster-autoscaler created
clusterrolebinding.rbac.authorization.k8s.io/cluster-autoscaler created
rolebinding.rbac.authorization.k8s.io/cluster-autoscaler created
deployment.apps/cluster-autoscaler created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify that the cluster-autoscaler pod is running; the expected output is shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@ip-172-31-18-194 ~]# kubectl get po -A
NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
kube-system   aws-node-2frzk                        1/1     Running   0          67m
kube-system   aws-node-drmtr                        1/1     Running   0          63m
kube-system   cluster-autoscaler-657d67cd5d-l7q4m   1/1     Running   0          8s
kube-system   coredns-79989457d9-89f48              1/1     Running   0          75m
kube-system   coredns-79989457d9-ddvvb              1/1     Running   0          75m
kube-system   kube-proxy-hpzxj                      1/1     Running   0          63m
kube-system   kube-proxy-vb2gj                      1/1     Running   0          67m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then confirm the worker nodes are in the Ready state.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@ip-172-31-18-194 ~]# kubectl get nodes
NAME                            STATUS   ROLES    AGE   VERSION
ip-192-168-5-245.ec2.internal   Ready    &amp;lt;none&amp;gt;   76m   v1.24.17-eks-e71965b
ip-192-168-63-39.ec2.internal   Ready    &amp;lt;none&amp;gt;   72m   v1.24.17-eks-e71965b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
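

&lt;p&gt;As an optional sanity check, the CA maintains a status ConfigMap in &lt;code&gt;kube-system&lt;/code&gt; (assuming the default ConfigMap name) that reports the health and size of each discovered node group:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl describe configmap cluster-autoscaler-status -n kube-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If your Auto Scaling group does not appear here, re-check the discovery tags from the earlier step.&lt;/p&gt;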



&lt;p&gt;&lt;strong&gt;Troubleshoot :&lt;/strong&gt;&lt;br&gt;
If something goes wrong, verify the CA logs by issuing this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl logs -l app=cluster-autoscaler -n kubesystem -f
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Conclusion :&lt;/strong&gt;&lt;br&gt;
The Cluster Autoscaler plays a vital role in a Kubernetes cluster: it ensures adequate compute capacity is available by adding nodes when pods cannot be scheduled, and it keeps infrastructure costs down by removing under-utilized nodes.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
