<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Amit Kushwaha</title>
    <description>The latest articles on Forem by Amit Kushwaha (@amit_kumar_7db8e36a64dd45).</description>
    <link>https://forem.com/amit_kumar_7db8e36a64dd45</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2119881%2Faffced24-e9fb-44e1-b5ce-9cf5e23fa92d.jpg</url>
      <title>Forem: Amit Kushwaha</title>
      <link>https://forem.com/amit_kumar_7db8e36a64dd45</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/amit_kumar_7db8e36a64dd45"/>
    <language>en</language>
    <item>
      <title>-&gt;&gt; Day-27 Automating AWS Infrastructure Using Terraform &amp; Github Actions</title>
      <dc:creator>Amit Kushwaha</dc:creator>
      <pubDate>Tue, 03 Mar 2026 08:12:53 +0000</pubDate>
      <link>https://forem.com/amit_kumar_7db8e36a64dd45/-day-27-automating-aws-infrastructure-using-terraform-github-actions-g42</link>
      <guid>https://forem.com/amit_kumar_7db8e36a64dd45/-day-27-automating-aws-infrastructure-using-terraform-github-actions-g42</guid>
      <description>&lt;p&gt;In modern cloud environments, manually provisioning infrastructure is inefficient, error-prone, and not scalable.&lt;/p&gt;

&lt;p&gt;To solve this, I built a fully automated AWS infrastructure using Terraform integrated with GitHub Actions for CI/CD.&lt;/p&gt;

&lt;p&gt;This project provisions a production-style architecture including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Custom VPC&lt;/li&gt;
&lt;li&gt;Application Load Balancer&lt;/li&gt;
&lt;li&gt;Auto Scaling Group&lt;/li&gt;
&lt;li&gt;EC2 instances&lt;/li&gt;
&lt;li&gt;Remote backend using S3&lt;/li&gt;
&lt;li&gt;Multi-environment configuration (dev, test, prod)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All infrastructure is defined as code and deployed automatically via GitHub.&lt;/p&gt;

&lt;p&gt;No manual console clicks. Just version-controlled automation.&lt;/p&gt;




&lt;h2&gt;
  
  
  Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7xpbndwwuc9595lmzc2o.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7xpbndwwuc9595lmzc2o.gif" alt=" " width="800" height="785"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deployment Flow&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Developer pushes Terraform code to GitHub&lt;/li&gt;
&lt;li&gt;GitHub Actions workflow triggers&lt;/li&gt;
&lt;li&gt;Terraform executes:

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;terraform init&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;terraform validate&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;terraform plan&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Manual approval required&lt;/li&gt;
&lt;li&gt;&lt;code&gt;terraform apply&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;AWS infrastructure is then provisioned automatically.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tech Stack
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Terraform (Infrastructure as Code)&lt;/li&gt;
&lt;li&gt;GitHub Actions (CI/CD automation)&lt;/li&gt;
&lt;li&gt;AWS (VPC, EC2, ASG, ALB, S3)&lt;/li&gt;
&lt;li&gt;Remote backend with S3 for state management&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Project Structure
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="err"&gt;.&lt;/span&gt;
&lt;span class="err"&gt;├──&lt;/span&gt; &lt;span class="nx"&gt;terraform&lt;/span&gt;&lt;span class="err"&gt;/&lt;/span&gt;
&lt;span class="err"&gt;│&lt;/span&gt;   &lt;span class="err"&gt;├──&lt;/span&gt; &lt;span class="nx"&gt;main&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;tf&lt;/span&gt;
&lt;span class="err"&gt;│&lt;/span&gt;   &lt;span class="err"&gt;├──&lt;/span&gt; &lt;span class="nx"&gt;vpc&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;tf&lt;/span&gt;
&lt;span class="err"&gt;│&lt;/span&gt;   &lt;span class="err"&gt;├──&lt;/span&gt; &lt;span class="nx"&gt;security_groups&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;tf&lt;/span&gt;
&lt;span class="err"&gt;│&lt;/span&gt;   &lt;span class="err"&gt;├──&lt;/span&gt; &lt;span class="nx"&gt;alb&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;tf&lt;/span&gt;
&lt;span class="err"&gt;│&lt;/span&gt;   &lt;span class="err"&gt;├──&lt;/span&gt; &lt;span class="nx"&gt;asg&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;tf&lt;/span&gt;
&lt;span class="err"&gt;│&lt;/span&gt;   &lt;span class="err"&gt;├──&lt;/span&gt; &lt;span class="nx"&gt;s3&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;tf&lt;/span&gt;
&lt;span class="err"&gt;│&lt;/span&gt;   &lt;span class="err"&gt;├──&lt;/span&gt; &lt;span class="nx"&gt;backend&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;tf&lt;/span&gt;
&lt;span class="err"&gt;│&lt;/span&gt;   &lt;span class="err"&gt;├──&lt;/span&gt; &lt;span class="nx"&gt;dev&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;tfvars&lt;/span&gt;
&lt;span class="err"&gt;│&lt;/span&gt;   &lt;span class="err"&gt;├──&lt;/span&gt; &lt;span class="nx"&gt;test&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;tfvars&lt;/span&gt;
&lt;span class="err"&gt;│&lt;/span&gt;   &lt;span class="err"&gt;└──&lt;/span&gt; &lt;span class="nx"&gt;prod&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;tfvars&lt;/span&gt;
&lt;span class="err"&gt;│&lt;/span&gt;
&lt;span class="err"&gt;├──&lt;/span&gt; &lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;github&lt;/span&gt;&lt;span class="err"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;workflows&lt;/span&gt;&lt;span class="err"&gt;/&lt;/span&gt;
&lt;span class="err"&gt;│&lt;/span&gt;   &lt;span class="err"&gt;├──&lt;/span&gt; &lt;span class="nx"&gt;terraform&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;yaml&lt;/span&gt;
&lt;span class="err"&gt;│&lt;/span&gt;   &lt;span class="err"&gt;└──&lt;/span&gt; &lt;span class="nx"&gt;terraform-destroy&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;yaml&lt;/span&gt;
&lt;span class="err"&gt;│&lt;/span&gt;
&lt;span class="err"&gt;├──&lt;/span&gt; &lt;span class="nx"&gt;scripts&lt;/span&gt;&lt;span class="err"&gt;/&lt;/span&gt;
&lt;span class="err"&gt;│&lt;/span&gt;   &lt;span class="err"&gt;└──&lt;/span&gt; &lt;span class="nx"&gt;user_data&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sh&lt;/span&gt;
&lt;span class="err"&gt;│&lt;/span&gt;
&lt;span class="err"&gt;└──&lt;/span&gt; &lt;span class="nx"&gt;README&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;md&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Multi Environment Deployment
&lt;/h2&gt;

&lt;p&gt;One of the key design decisions was environment separation.&lt;/p&gt;

&lt;p&gt;This project supports:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;dev&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;test&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;prod&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each environment has its own &lt;code&gt;.tfvars&lt;/code&gt; file, allowing controlled configuration changes without modifying core infrastructure code.&lt;/p&gt;
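&lt;p&gt;As an illustrative sketch (the variable names below are placeholders, not necessarily the exact ones in the repository), a &lt;code&gt;dev.tfvars&lt;/code&gt; might look like:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# dev.tfvars - example values (placeholder variable names)
environment   = "dev"
instance_type = "t3.micro"
asg_min_size  = 1
asg_max_size  = 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The file is then selected at run time, e.g. &lt;code&gt;terraform plan -var-file=dev.tfvars&lt;/code&gt;, so the same core code serves every environment.&lt;/p&gt;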

&lt;h2&gt;
  
  
  Remote State Management
&lt;/h2&gt;

&lt;p&gt;Terraform state is stored in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;S3 (remote backend)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This ensures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Centralized state storage&lt;/li&gt;
&lt;li&gt;Team collaboration support&lt;/li&gt;
&lt;li&gt;State consistency&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This avoids local state conflicts and improves production readiness.&lt;/p&gt;
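&lt;p&gt;A minimal &lt;code&gt;backend.tf&lt;/code&gt; for this setup could look like the following sketch (the bucket name and key are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket" # placeholder - must already exist
    key    = "day-27/terraform.tfstate"  # placeholder state path
    region = "ap-south-1"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In team settings, state locking (for example via DynamoDB) is commonly added on top of the S3 backend to prevent concurrent writes.&lt;/p&gt;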

&lt;h2&gt;
  
  
  GitHub Actions Workflows
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Two workflows were implemented:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Deployment Workflow&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Triggers on push and performs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Checkout repository&lt;/li&gt;
&lt;li&gt;Configure AWS credentials via GitHub Secrets&lt;/li&gt;
&lt;li&gt;Set up Terraform&lt;/li&gt;
&lt;li&gt;Initialize backend&lt;/li&gt;
&lt;li&gt;Validate configuration&lt;/li&gt;
&lt;li&gt;Plan and apply infrastructure&lt;/li&gt;
&lt;/ul&gt;
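&lt;p&gt;As a rough sketch of such a workflow (action versions, secret names, and the region are assumptions; adapt them to your repository):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Terraform Deploy
on:
  push:
    branches: [main]

jobs:
  terraform:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ap-south-1
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init
        working-directory: terraform
      - run: terraform validate
        working-directory: terraform
      - run: terraform plan -var-file=dev.tfvars
        working-directory: terraform
      - run: terraform apply -auto-approve -var-file=dev.tfvars
        working-directory: terraform
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;A manual-approval gate before &lt;code&gt;terraform apply&lt;/code&gt; is typically implemented with GitHub Environments protection rules.&lt;/p&gt;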

&lt;p&gt;&lt;strong&gt;2. Destroy Workflow&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Allows controlled teardown of infrastructure using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;terraform&lt;/span&gt; &lt;span class="nx"&gt;destroy&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;This helps prevent unnecessary AWS costs.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Implemented Features
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Infrastructure as Code using Terraform&lt;/li&gt;
&lt;li&gt;CI/CD pipeline integration&lt;/li&gt;
&lt;li&gt;Auto Scaling architecture&lt;/li&gt;
&lt;li&gt;Application Load Balancer routing&lt;/li&gt;
&lt;li&gt;EC2 bootstrapping via &lt;code&gt;user_data&lt;/code&gt; script&lt;/li&gt;
&lt;li&gt;Multi-environment deployment&lt;/li&gt;
&lt;li&gt;Remote backend configuration&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This project demonstrates how Terraform and GitHub Actions can be combined to build a fully automated, scalable AWS infrastructure.&lt;/p&gt;

&lt;p&gt;By eliminating manual provisioning and adopting Infrastructure as Code, we achieve:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Consistency&lt;/li&gt;
&lt;li&gt;Scalability&lt;/li&gt;
&lt;li&gt;Faster deployments&lt;/li&gt;
&lt;li&gt;Reduced human error&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/Amitkushwaha7/TerraformFullCourse/tree/d4a744cba6fa385810b2f6cfa6f8a25124a272b2/Day-27" rel="noopener noreferrer"&gt;GitHub Repository&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html" rel="noopener noreferrer"&gt;Auto Scaling Docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/vpc-security-best-practices.html" rel="noopener noreferrer"&gt;AWS VPC Best Practices&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs" rel="noopener noreferrer"&gt;Terraform AWS Provider Docs&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &amp;gt;&amp;gt; Connect With Me
&lt;/h2&gt;

&lt;p&gt;If you enjoyed this post or want to follow my #30DaysOfAWSTerraformChallenge journey, feel free to connect with me here:&lt;/p&gt;

&lt;p&gt;💼 LinkedIn: &lt;a href="https://www.linkedin.com/in/amitkushwaha7/" rel="noopener noreferrer"&gt;Amit Kushwaha&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🐙 GitHub: &lt;a href="https://github.com/Amitkushwaha7" rel="noopener noreferrer"&gt;Amit Kushwaha&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;📝 Dev.to / &lt;a href="https://dev.to/amit_kumar_7db8e36a64dd45"&gt;Amit Kushwaha&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🐦 Twitter/X: &lt;a href="https://x.com/AmitKum43380951" rel="noopener noreferrer"&gt;Amit Kushwaha&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Found this helpful? Drop a ❤️ and follow for more AWS and Terraform tutorials!&lt;/p&gt;

&lt;p&gt;Questions? Drop them in the comments below! 👇&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>devops</category>
      <category>aws</category>
      <category>cicd</category>
    </item>
    <item>
      <title>-&gt;&gt; Day-26 Provisioning an AWS S3 Bucket using HCP Terraform</title>
      <dc:creator>Amit Kushwaha</dc:creator>
      <pubDate>Mon, 02 Mar 2026 12:26:02 +0000</pubDate>
      <link>https://forem.com/amit_kumar_7db8e36a64dd45/-provisioning-an-aws-s3-bucket-using-hcp-terraform-1mjc</link>
      <guid>https://forem.com/amit_kumar_7db8e36a64dd45/-provisioning-an-aws-s3-bucket-using-hcp-terraform-1mjc</guid>
      <description>&lt;p&gt;In this blog, I implemented a cloud-based Terraform workflow using HCP Terraform integrated with Github to provision an AWS S3 in a prodcution style setup.&lt;/p&gt;

&lt;h2&gt;
  
  
  &amp;gt;&amp;gt; Project Objective:
&lt;/h2&gt;

&lt;p&gt;The goal was to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Define AWS infrastructure using Terraform&lt;/li&gt;
&lt;li&gt;Store and version-control the code in GitHub&lt;/li&gt;
&lt;li&gt;Execute Terraform runs remotely using HCP Terraform&lt;/li&gt;
&lt;li&gt;Implement a VCS-driven automated workflow&lt;/li&gt;
&lt;li&gt;Manage state securely in the cloud&lt;/li&gt;
&lt;li&gt;Isolate environments using Projects and Workspaces&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  &amp;gt;&amp;gt; Architecture Overview:
&lt;/h2&gt;

&lt;p&gt;The deployment workflow follows this structure:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnvd6w285y196p1197cb0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnvd6w285y196p1197cb0.png" alt=" " width="618" height="684"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;Developer&lt;/span&gt; &lt;span class="nx"&gt;-&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;Github&lt;/span&gt; &lt;span class="nx"&gt;-&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;HCP&lt;/span&gt; &lt;span class="nx"&gt;-&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;Terraform&lt;/span&gt; &lt;span class="nx"&gt;-&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;AWS&lt;/span&gt; &lt;span class="nx"&gt;-&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;S3&lt;/span&gt; &lt;span class="nx"&gt;Bucket&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Execution Flow&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Write Terraform configuration for S3 bucket.&lt;/li&gt;
&lt;li&gt;Push the code to GitHub.&lt;/li&gt;
&lt;li&gt;HCP Terraform detects the change.&lt;/li&gt;
&lt;li&gt;Automatically runs &lt;code&gt;terraform init&lt;/code&gt; and &lt;code&gt;terraform plan&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Review the plan in the UI.&lt;/li&gt;
&lt;li&gt;Confirm and apply the changes.&lt;/li&gt;
&lt;li&gt;AWS provisions the S3 bucket.&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Step-by-Step Guide: Deploying an AWS S3 Bucket Using HCP Terraform
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;Before starting, make sure you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS Account&lt;/li&gt;
&lt;li&gt;GitHub Account&lt;/li&gt;
&lt;li&gt;HCP Terraform Account&lt;/li&gt;
&lt;li&gt;Basic knowledge of Terraform syntax&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Step 1: Create a GitHub Repository
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Log in to GitHub.&lt;/li&gt;
&lt;li&gt;Create a new repository (e.g., &lt;code&gt;terraform-s3-demo&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Clone it locally:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/your-username/terraform-s3-demo.git
cd terraform-s3-demo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  Step 2: Write Terraform Configuration
&lt;/h3&gt;

&lt;p&gt;Create the following files:&lt;/p&gt;

&lt;h3&gt;
  
  
  main.tf
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"aws"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;region&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;region&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_s3_bucket"&lt;/span&gt; &lt;span class="s2"&gt;"mybucket"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;bucket&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;bucket_name&lt;/span&gt;

  &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;Name&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;bucket_name&lt;/span&gt;
    &lt;span class="nx"&gt;Environment&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;environment&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  variables.tf
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"region"&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
&lt;span class="nx"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"bucket_name"&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
&lt;span class="nx"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"environment"&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  Step 3: Push Code to GitHub
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git add &lt;span class="nb"&gt;.&lt;/span&gt;
git commit &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"Initial S3 bucket Terraform configuration"&lt;/span&gt;
git push origin main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your Terraform code is now version-controlled.&lt;/p&gt;




&lt;h3&gt;
  
  
  Step 4: Set Up HCP Terraform
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Log in to HCP Terraform&lt;/li&gt;
&lt;li&gt;Create a new Organization&lt;/li&gt;
&lt;li&gt;Inside the organization, create a Project &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Projects help logically group infrastructure.&lt;/p&gt;




&lt;h3&gt;
  
  
  Step 5: Create a VCS-Driven Workspace
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Click &lt;strong&gt;Create Workspace&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Version Control Workflow&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Connect your &lt;strong&gt;GitHub account&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Choose the &lt;strong&gt;repository&lt;/strong&gt; (&lt;code&gt;terraform-s3-demo&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Set working directory (if needed)&lt;/li&gt;
&lt;li&gt;Create &lt;strong&gt;workspace&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;em&gt;Now your repo is linked to HCP Terraform.&lt;/em&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Step 6: Configure Variables in the Workspace
&lt;/h3&gt;

&lt;p&gt;Inside the Workspace -&amp;gt; Variables section:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Add Environment Variables&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;AWS_ACCESS_KEY_ID&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;AWS_SECRET_ACCESS_KEY&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;Mark them as sensitive.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Add Terraform variables&lt;/strong&gt;&lt;br&gt;
Example:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;region&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;ap-south-1&lt;/span&gt;
&lt;span class="nx"&gt;bucket_name&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;amit-terraform-demo-bucket&lt;/span&gt;
&lt;span class="nx"&gt;environment&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;dev&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Do NOT hardcode credentials in code.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 7: Trigger the First Run
&lt;/h3&gt;

&lt;p&gt;Now go back to GitHub and make a small change (or re-push code).&lt;/p&gt;

&lt;p&gt;HCP Terraform will automatically:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Clone the repository&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;terraform init&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;terraform plan&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Show the execution plan in the UI&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Step 8: Review and Apply
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Review the plan output.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Confirm &amp;amp; Apply&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Wait for execution to complete.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If successful, your S3 bucket will be created in AWS.&lt;/p&gt;




&lt;h3&gt;
  
  
  Step 9: Verify in AWS Console
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Log in to AWS.&lt;/li&gt;
&lt;li&gt;Navigate to S3.&lt;/li&gt;
&lt;li&gt;Confirm the bucket is created.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Congratulations - your infrastructure is now deployed using a cloud-based Terraform workflow.&lt;/p&gt;

&lt;h3&gt;
  
  
  &amp;gt;&amp;gt; Secure Credential Management:
&lt;/h3&gt;

&lt;p&gt;AWS credentials were added as sensitive environment variables inside the HCP Terraform workspace.&lt;/p&gt;

&lt;p&gt;This ensures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No secrets in source code&lt;/li&gt;
&lt;li&gt;Secure execution&lt;/li&gt;
&lt;li&gt;Production-aligned security practice&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;Resources:&lt;br&gt;
GitHub Repo: &lt;a href="https://github.com/Amitkushwaha7/TerraformFullCourse/tree/f148fb7e496484a6172e490639de1e8b3784d7ec/Day-26" rel="noopener noreferrer"&gt;Day-26 code&lt;/a&gt;&lt;br&gt;
HashiCorp: &lt;a href="https://app.terraform.io/app/organizations" rel="noopener noreferrer"&gt;HCP Terraform&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This project showcases how to provision AWS infrastructure using a cloud-native Terraform workflow powered by HCP Terraform and GitHub.&lt;/p&gt;

&lt;p&gt;By combining Infrastructure as Code with automated VCS-driven execution, the deployment process becomes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Repeatable&lt;/li&gt;
&lt;li&gt;Secure&lt;/li&gt;
&lt;li&gt;Collaborative&lt;/li&gt;
&lt;li&gt;Production-ready&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &amp;gt;&amp;gt; Connect With Me
&lt;/h2&gt;

&lt;p&gt;If you enjoyed this post or want to follow my #30DaysOfAWSTerraformChallenge journey, feel free to connect with me here:&lt;/p&gt;

&lt;p&gt;💼 LinkedIn: &lt;a href="https://www.linkedin.com/in/amitkushwaha7/" rel="noopener noreferrer"&gt;Amit Kushwaha&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🐙 GitHub: &lt;a href="https://github.com/Amitkushwaha7" rel="noopener noreferrer"&gt;Amit Kushwaha&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;📝 Hashnode / &lt;a href="https://hashnode.com/@amit902" rel="noopener noreferrer"&gt;Amit Kushwaha&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🐦 Twitter/X: &lt;a href="https://x.com/AmitKum43380951" rel="noopener noreferrer"&gt;Amit Kushwaha&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Found this helpful? Drop a ❤️ and follow for more AWS and Terraform tutorials!&lt;/p&gt;

&lt;p&gt;Questions? Drop them in the comments below! 👇&lt;/p&gt;

</description>
      <category>devops</category>
      <category>automation</category>
      <category>aws</category>
      <category>cloud</category>
    </item>
    <item>
      <title>-&gt;&gt; Day-25 Terraform Import In AWS</title>
      <dc:creator>Amit Kushwaha</dc:creator>
      <pubDate>Sat, 28 Feb 2026 12:38:42 +0000</pubDate>
      <link>https://forem.com/amit_kumar_7db8e36a64dd45/-day-25-terraform-import-in-aws-fim</link>
      <guid>https://forem.com/amit_kumar_7db8e36a64dd45/-day-25-terraform-import-in-aws-fim</guid>
      <description>&lt;h2&gt;
  
  
  Managing Existing AWS Infrastructure Using Terraform Import!
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcna1zjsv4z75dvhexykw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcna1zjsv4z75dvhexykw.png" alt=" " width="599" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When working in real-world cloud environments, infrastructure is not always created using Infrastructure as Code from day one.&lt;/p&gt;

&lt;p&gt;Sometimes resources already exist - created manually through the AWS Console.&lt;/p&gt;

&lt;p&gt;So the question becomes:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How do we bring those existing resources under Terraform management safely?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That’s exactly what this project demonstrates.&lt;/p&gt;




&lt;h3&gt;
  
  
  The Problem
&lt;/h3&gt;

&lt;p&gt;You already have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A VPC&lt;/li&gt;
&lt;li&gt;A Security Group&lt;/li&gt;
&lt;li&gt;Possibly EC2 instances&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But none of them are managed through Terraform.&lt;/p&gt;

&lt;p&gt;Managing infrastructure manually:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is not version controlled&lt;/li&gt;
&lt;li&gt;Is not reproducible&lt;/li&gt;
&lt;li&gt;Is error-prone&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We need a structured way to manage it using IaC - without recreating everything.&lt;/p&gt;


&lt;h3&gt;
  
  
  The Solution: &lt;code&gt;terraform import&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;Terraform provides a command that allows you to map existing cloud resources into Terraform state:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform import &amp;lt;resource_type.resource_name&amp;gt; &amp;lt;resource_id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
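&lt;p&gt;Note: since Terraform 1.5 there is also a declarative alternative - an &lt;code&gt;import&lt;/code&gt; block in configuration, applied via a normal plan/apply (the resource address and ID below are placeholders matching this article's example):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import {
  to = aws_security_group.app_sg
  id = "sg-xxxxxxxx" # placeholder security group ID
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;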



&lt;p&gt;&lt;strong&gt;This command does not create infrastructure.&lt;/strong&gt;&lt;br&gt;
It simply tells Terraform:&lt;/p&gt;

&lt;p&gt;| "This resource already exists. start managing it."&lt;/p&gt;




&lt;h3&gt;
  
  
  Architecture Overview
&lt;/h3&gt;

&lt;p&gt;The workflow looks like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Write Terraform configuration files (&lt;code&gt;.tf&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Configure AWS provider&lt;/li&gt;
&lt;li&gt;Reference existing VPC using a data source&lt;/li&gt;
&lt;li&gt;Define the Security Group in Terraform&lt;/li&gt;
&lt;li&gt;Use &lt;code&gt;terraform import&lt;/code&gt; to attach the real AWS resource to Terraform state&lt;/li&gt;
&lt;li&gt;Validate using &lt;code&gt;terraform plan&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;
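&lt;p&gt;Steps 3-4 above might be sketched like this (the CIDR block, port, and name are illustrative assumptions; the imported definition must match the real resource):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# vpc.tf - reference the existing VPC instead of creating one
data "aws_vpc" "existing" {
  id = var.vpc_id
}

# security_group.tf - define the Security Group to be imported
resource "aws_security_group" "app_sg" {
  name   = "app-sg" # placeholder - must match the real SG
  vpc_id = data.aws_vpc.existing.id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If the configuration drifts from the real resource, &lt;code&gt;terraform plan&lt;/code&gt; will show the differences after import.&lt;/p&gt;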

&lt;p&gt;Once imported, Terraform can now track and manage that resource.&lt;/p&gt;




&lt;h3&gt;
  
  
  Project Structure
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform/
├── main.tf           # Provider configuration
├── variables.tf      # Region and VPC input
├── vpc.tf            # Fetch existing VPC using data source
├── security_group.tf # Define Security Group to import
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Import Workflow
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Initialize Terraform
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;Import Existing Security Group
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform import aws_security_group.app_sg sg-xxxxxxxx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Terraform now maps the real AWS Security Group to the resource block.&lt;/p&gt;




&lt;h3&gt;
  
  
  Validate the Import
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If everything matches, you’ll see &lt;strong&gt;No changes&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;That means Terraform and AWS are in sync.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Adopting Infrastructure as Code doesn’t mean you need to rebuild everything from scratch.&lt;/p&gt;

&lt;p&gt;With &lt;code&gt;terraform import&lt;/code&gt;, you can gradually transition manual cloud infrastructure into a version-controlled, structured Terraform workflow.&lt;/p&gt;

&lt;p&gt;This is a practical and realistic DevOps approach - especially in environments where infrastructure already exists.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/Amitkushwaha7/TerraformFullCourse/tree/1809804d9b5fd72e2d9cda2ce7409a2fd1ddb976/Day-25" rel="noopener noreferrer"&gt;Github Repository&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest" rel="noopener noreferrer"&gt;Terraform AWS Provider Docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developer.hashicorp.com/terraform/cli/commands/import" rel="noopener noreferrer"&gt;Terraform Import Command&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developer.hashicorp.com/terraform/language/state" rel="noopener noreferrer"&gt;Terraform state Management&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &amp;gt;&amp;gt; Connect With Me
&lt;/h2&gt;

&lt;p&gt;If you enjoyed this post or want to follow my #30DaysOfAWSTerraformChallenge journey, feel free to connect with me here:&lt;/p&gt;

&lt;p&gt;💼 LinkedIn: &lt;a href="https://www.linkedin.com/in/amitkushwaha7/" rel="noopener noreferrer"&gt;Amit Kushwaha&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🐙 GitHub: &lt;a href="https://github.com/Amitkushwaha7" rel="noopener noreferrer"&gt;Amit Kushwaha&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;📝 Hashnode / &lt;a href="https://hashnode.com/@amit902" rel="noopener noreferrer"&gt;Amit Kushwaha&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🐦 Twitter/X: &lt;a href="https://x.com/AmitKum43380951" rel="noopener noreferrer"&gt;Amit Kushwaha&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Found this helpful? Drop a ❤️ and follow for more AWS and Terraform tutorials!&lt;/p&gt;

&lt;p&gt;Questions? Drop them in the comments below! 👇&lt;/p&gt;

</description>
      <category>devops</category>
      <category>aws</category>
      <category>cloud</category>
      <category>github</category>
    </item>
    <item>
      <title>-&gt;&gt; Day-24 Highly Available and Scalable Architecture Using Terraform</title>
      <dc:creator>Amit Kushwaha</dc:creator>
      <pubDate>Fri, 27 Feb 2026 20:10:41 +0000</pubDate>
      <link>https://forem.com/amit_kumar_7db8e36a64dd45/-day-24-highly-available-and-scalable-architecture-using-terraform-3508</link>
      <guid>https://forem.com/amit_kumar_7db8e36a64dd45/-day-24-highly-available-and-scalable-architecture-using-terraform-3508</guid>
      <description>&lt;h1&gt;
  
  
  Deploying a Scalable Dockerized Django Application on AWS using Terraform
&lt;/h1&gt;

&lt;p&gt;Modern applications are no longer deployed on a single EC2 instance.&lt;/p&gt;

&lt;p&gt;In this project, I built a production-style architecture on AWS using Terraform, where:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Dockerized Django application runs on EC2 instances&lt;/li&gt;
&lt;li&gt;Instances are deployed in private subnets&lt;/li&gt;
&lt;li&gt;Application Load Balancer handles incoming traffic&lt;/li&gt;
&lt;li&gt;Auto Scaling Group maintains availability&lt;/li&gt;
&lt;li&gt;NAT Gateways provide outbound internet access&lt;/li&gt;
&lt;li&gt;Docker images are pulled from Docker Hub&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Everything was provisioned using Infrastructure as Code.&lt;/p&gt;




&lt;h2&gt;
  
  
  Project Goal
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Deploy a containerized Django app securely&lt;/li&gt;
&lt;li&gt;Follow real production architecture patterns&lt;/li&gt;
&lt;li&gt;Keep EC2 instances private&lt;/li&gt;
&lt;li&gt;Enable horizontal scaling&lt;/li&gt;
&lt;li&gt;Automate the full infrastructure using Terraform&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Architecture Diagram
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fikxx3vuf08qbbuhxfqc1.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fikxx3vuf08qbbuhxfqc1.jpg" alt=" " width="800" height="488"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Internet → ALB (Public) → EC2 Instances (Private) → NAT Gateways → Internet
                                ↓
                          Django Docker App
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Networking Design
&lt;/h2&gt;

&lt;h2&gt;
  
  
  VPC
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Custom VPC created using Terraform&lt;/li&gt;
&lt;li&gt;CIDR-based subnet planning&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Public Subnets (AZ1 &amp;amp; AZ2)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Host Application Load Balancer&lt;/li&gt;
&lt;li&gt;Host NAT Gateways&lt;/li&gt;
&lt;li&gt;Connected to Internet Gateway&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Private Subnets (AZ1 &amp;amp; AZ2)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Host EC2 instances&lt;/li&gt;
&lt;li&gt;No direct internet access&lt;/li&gt;
&lt;li&gt;Outbound access via NAT Gateway&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why this matters:
&lt;/h2&gt;

&lt;p&gt;Private subnets ensure your application servers are not publicly exposed.&lt;/p&gt;
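&lt;p&gt;As a rough Terraform sketch (the CIDR math, resource names, and AZ lookup are illustrative, not the project's exact values), private subnets without public IP assignment can be declared like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "aws_availability_zones" "available" {}

resource "aws_subnet" "private" {
  count             = 2
  vpc_id            = aws_vpc.main.id
  cidr_block        = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index + 10)
  availability_zone = data.aws_availability_zones.available.names[count.index]

  # No map_public_ip_on_launch here - instances in these subnets stay private
  tags = { Name = "private-${count.index + 1}" }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;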




&lt;h2&gt;
  
  
  Application Load Balancer (ALB)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The ALB:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Listens on HTTP (80) / HTTPS (if configured)&lt;/li&gt;
&lt;li&gt;Forwards traffic to target group&lt;/li&gt;
&lt;li&gt;Performs health checks&lt;/li&gt;
&lt;li&gt;Distributes traffic across instances in multiple AZs&lt;/li&gt;
&lt;/ul&gt;
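&lt;p&gt;A minimal Terraform sketch of this ALB setup (resource and security group names are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_lb" "app" {
  name               = "django-alb"
  load_balancer_type = "application"
  subnets            = aws_subnet.public[*].id       # public subnets in both AZs
  security_groups    = [aws_security_group.alb.id]
}

resource "aws_lb_target_group" "app" {
  name     = "django-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.main.id

  health_check {
    path    = "/"
    matcher = "200"
  }
}

resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.app.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.app.arn
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;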




&lt;h2&gt;
  
  
  Auto Scaling Group (ASG)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Configured with:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Minimum: 1 instance&lt;/li&gt;
&lt;li&gt;Desired: 2 instances&lt;/li&gt;
&lt;li&gt;Maximum: 5 instances&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The ASG:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Maintains desired capacity&lt;/li&gt;
&lt;li&gt;Replaces unhealthy instances&lt;/li&gt;
&lt;li&gt;Works across multiple availability zones&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If one AZ fails, traffic shifts automatically.&lt;/p&gt;
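&lt;p&gt;The ASG configuration above can be sketched in Terraform roughly like this (names are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_autoscaling_group" "app" {
  min_size            = 1
  desired_capacity    = 2
  max_size            = 5
  vpc_zone_identifier = aws_subnet.private[*].id      # span both private subnets
  target_group_arns   = [aws_lb_target_group.app.arn]
  health_check_type   = "ELB"                         # use ALB health checks

  launch_template {
    id      = aws_launch_template.app.id
    version = "$Latest"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;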




&lt;h2&gt;
  
  
  EC2 + Docker Deployment
&lt;/h2&gt;

&lt;p&gt;EC2 instances are launched using a Launch Template.&lt;/p&gt;

&lt;p&gt;Inside the user data script:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker is installed&lt;/li&gt;
&lt;li&gt;Django image is pulled from Docker Hub&lt;/li&gt;
&lt;li&gt;Container is started with port mapping
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -d -p 80:8000 your-django-image
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Port mapping:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Django runs internally on 8000&lt;/li&gt;
&lt;li&gt;Exposed externally via port 80&lt;/li&gt;
&lt;li&gt;ALB forwards traffic to port 80&lt;/li&gt;
&lt;/ul&gt;
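&lt;p&gt;Putting these pieces together, the Launch Template with its user data script looks roughly like this (the AMI data source, instance type, and image name are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_launch_template" "app" {
  name_prefix   = "django-"
  image_id      = data.aws_ami.ubuntu.id   # assumes an Ubuntu AMI data source
  instance_type = "t3.micro"

  # Install Docker and start the container on first boot
  user_data = base64encode(&lt;&lt;-EOF
    #!/bin/bash
    apt-get update -y
    apt-get install -y docker.io
    docker run -d -p 80:8000 your-django-image
  EOF
  )
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;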




&lt;h2&gt;
  
  
  NAT Gateway Usage
&lt;/h2&gt;

&lt;p&gt;Since the EC2 instances are in private subnets, they cannot directly access the internet.&lt;/p&gt;

&lt;h3&gt;
  
  
  NAT Gateway allows:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Pulling Docker images from Docker Hub&lt;/li&gt;
&lt;li&gt;Installing updates&lt;/li&gt;
&lt;li&gt;Accessing external services&lt;/li&gt;
&lt;/ul&gt;
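&lt;p&gt;A NAT Gateway and the private route that sends outbound traffic through it can be sketched like this (the Elastic IP and route table names are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_nat_gateway" "az1" {
  allocation_id = aws_eip.nat_az1.id        # assumes an EIP reserved for the NAT
  subnet_id     = aws_subnet.public[0].id   # NAT lives in a public subnet
}

# Default route for the private subnet: all outbound traffic goes via the NAT
resource "aws_route" "private_outbound" {
  route_table_id         = aws_route_table.private_az1.id
  destination_cidr_block = "0.0.0.0/0"
  nat_gateway_id         = aws_nat_gateway.az1.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;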




&lt;h2&gt;
  
  
  Deployment Steps
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init
terraform plan
terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4nuj7vcp1emow9z58ddq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4nuj7vcp1emow9z58ddq.png" alt=" " width="800" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/Amitkushwaha7/TerraformFullCourse/tree/42050208d93d8acbaded26823f0abfa4c2cc1e7d/Day-24" rel="noopener noreferrer"&gt;Github Repo&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &amp;gt;&amp;gt; Connect With Me
&lt;/h2&gt;

&lt;p&gt;If you enjoyed this post or want to follow my #30DaysOfAWSTerraformChallenge journey, feel free to connect with me here:&lt;/p&gt;

&lt;p&gt;💼 LinkedIn: &lt;a href="https://www.linkedin.com/in/amitkushwaha7/" rel="noopener noreferrer"&gt;Amit Kushwaha&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🐙 GitHub: &lt;a href="https://github.com/Amitkushwaha7" rel="noopener noreferrer"&gt;Amit Kushwaha&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;📝 Hashnode / &lt;a href="https://hashnode.com/@amit902" rel="noopener noreferrer"&gt;Amit Kushwaha&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🐦 Twitter/X: &lt;a href="https://x.com/AmitKum43380951" rel="noopener noreferrer"&gt;Amit Kushwaha&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Found this helpful? Drop a ❤️ and follow for more AWS and Terraform tutorials!&lt;/p&gt;

&lt;h2&gt;
  
  
  Questions? Drop them in the comments below! 👇
&lt;/h2&gt;

</description>
      <category>devops</category>
      <category>automation</category>
      <category>aws</category>
      <category>cloud</category>
    </item>
    <item>
      <title>-&gt;&gt; Day-23 Setup End-to-End Observability in AWS Using Terraform</title>
      <dc:creator>Amit Kushwaha</dc:creator>
      <pubDate>Mon, 23 Feb 2026 12:25:11 +0000</pubDate>
      <link>https://forem.com/amit_kumar_7db8e36a64dd45/-day-23-setup-end-to-end-observability-in-aws-using-terraform-252f</link>
      <guid>https://forem.com/amit_kumar_7db8e36a64dd45/-day-23-setup-end-to-end-observability-in-aws-using-terraform-252f</guid>
      <description>&lt;p&gt;A production-ready AWS Lambda function for automated image processing with enterprise-grade CloudWatch monitoring, implemented using modular Terraform.&lt;/p&gt;

&lt;h2&gt;
  
  
  Overview:
&lt;/h2&gt;

&lt;p&gt;This project demonstrates AWS serverless best practices by combining:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lambda-based image processing (resize, compress, format conversion)&lt;/li&gt;
&lt;li&gt;S3 event-driven architecture (automatic triggering)&lt;/li&gt;
&lt;li&gt;Comprehensive CloudWatch monitoring (metrics, alarms, dashboards)&lt;/li&gt;
&lt;li&gt;SNS alerting (email/SMS notifications)&lt;/li&gt;
&lt;li&gt;Modular Terraform (reusable, maintainable infrastructure)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What It Does
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Upload an image to S3 upload bucket&lt;/li&gt;
&lt;li&gt;Lambda function automatically triggers&lt;/li&gt;
&lt;li&gt;Processes image (creates 5 variants: compressed, low-quality, WebP, PNG, thumbnail)&lt;/li&gt;
&lt;li&gt;Saves processed images to destination bucket&lt;/li&gt;
&lt;li&gt;Monitors everything with CloudWatch metrics and alarms&lt;/li&gt;
&lt;li&gt;Sends alerts via SNS when issues occur&lt;/li&gt;
&lt;/ul&gt;
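&lt;p&gt;Under the hood, the S3-to-Lambda trigger can be wired up in Terraform roughly like this (resource names are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Allow the upload bucket to invoke the function
resource "aws_lambda_permission" "allow_s3" {
  statement_id  = "AllowS3Invoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.image_processor.function_name
  principal     = "s3.amazonaws.com"
  source_arn    = aws_s3_bucket.uploads.arn
}

# Fire the function on every new object
resource "aws_s3_bucket_notification" "on_upload" {
  bucket = aws_s3_bucket.uploads.id

  lambda_function {
    lambda_function_arn = aws_lambda_function.image_processor.arn
    events              = ["s3:ObjectCreated:*"]
  }

  depends_on = [aws_lambda_permission.allow_s3]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;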




&lt;h2&gt;
  
  
  Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3bqfshjhperbclvaye8m.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3bqfshjhperbclvaye8m.jpg" alt=" " width="800" height="518"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Features:
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Image Processing
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Support for multiple formats (JPEG, PNG, WebP, BMP, TIFF)&lt;/li&gt;
&lt;li&gt;Automatic format conversion&lt;/li&gt;
&lt;li&gt;Quality-based compression (85%, 60%)&lt;/li&gt;
&lt;li&gt;Thumbnail generation (300x300)&lt;/li&gt;
&lt;li&gt;Large image resizing (max 4096px)&lt;/li&gt;
&lt;li&gt;Automatic color space conversion&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Monitoring &amp;amp; Observability
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;12 CloudWatch Alarms:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Error rate monitoring&lt;/li&gt;
&lt;li&gt;Duration/timeout warnings&lt;/li&gt;
&lt;li&gt;Throttle detection&lt;/li&gt;
&lt;li&gt;Memory usage tracking&lt;/li&gt;
&lt;li&gt;Concurrent execution limits&lt;/li&gt;
&lt;li&gt;Log-based error patterns&lt;/li&gt;
&lt;/ul&gt;
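&lt;p&gt;One of these alarms - error monitoring - might look roughly like this in Terraform (names and thresholds are illustrative, not the project's exact values):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_cloudwatch_metric_alarm" "lambda_errors" {
  alarm_name          = "image-processor-errors"
  namespace           = "AWS/Lambda"
  metric_name         = "Errors"
  statistic           = "Sum"
  period              = 300                  # evaluate over 5-minute windows
  evaluation_periods  = 1
  threshold           = 1                    # alert on the first error
  comparison_operator = "GreaterThanOrEqualToThreshold"

  dimensions = {
    FunctionName = aws_lambda_function.image_processor.function_name
  }

  alarm_actions = [aws_sns_topic.alerts.arn] # fan out to email/SMS via SNS
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;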

&lt;p&gt;&lt;strong&gt;Custom Metrics:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Image processing time&lt;/li&gt;
&lt;li&gt;Image sizes processed&lt;/li&gt;
&lt;li&gt;Success/failure rates&lt;/li&gt;
&lt;li&gt;Business-level insights&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Comprehensive Dashboard:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Real-time metrics visualization&lt;/li&gt;
&lt;li&gt;AWS metrics + custom metrics&lt;/li&gt;
&lt;li&gt;Log insights integration&lt;/li&gt;
&lt;li&gt;Performance trends&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Log-Based Alerts:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Timeout detection&lt;/li&gt;
&lt;li&gt;Memory errors&lt;/li&gt;
&lt;li&gt;S3 permission issues&lt;/li&gt;
&lt;li&gt;Image processing failures&lt;/li&gt;
&lt;li&gt;Critical application errors&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Infrastructure
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Modular Terraform (6 reusable modules)&lt;/li&gt;
&lt;li&gt;Security best practices (IAM least privilege, S3 encryption)&lt;/li&gt;
&lt;li&gt;Scalable architecture (auto-scaling Lambda)&lt;/li&gt;
&lt;li&gt;Cost-optimized (pay per use)&lt;/li&gt;
&lt;li&gt;Environment-agnostic (dev/staging/prod)&lt;/li&gt;
&lt;/ul&gt;




</description>
      <category>devops</category>
      <category>aws</category>
      <category>automation</category>
      <category>cloud</category>
    </item>
    <item>
      <title>-&gt;&gt; Day-22 2-Tier Architecture Setup on AWS Using Terraform</title>
      <dc:creator>Amit Kushwaha</dc:creator>
      <pubDate>Sun, 08 Feb 2026 13:30:11 +0000</pubDate>
      <link>https://forem.com/amit_kumar_7db8e36a64dd45/-day-22-2-tier-architecture-setup-on-aws-using-terraform-363j</link>
      <guid>https://forem.com/amit_kumar_7db8e36a64dd45/-day-22-2-tier-architecture-setup-on-aws-using-terraform-363j</guid>
      <description>&lt;h2&gt;
  
  
  Introduction:
&lt;/h2&gt;

&lt;p&gt;In the world of DevOps, building secure, scalable, and automated infrastructure is a superpower. Today, we are going to provision a classic &lt;strong&gt;Two-Tier Architecture&lt;/strong&gt; on AWS completely from scratch using Terraform. We won't just launch an EC2 instance; we are going to build a production-ready environment with custom VPC networking, private subnets for the database, secret management, and automated application bootstrapping.&lt;/p&gt;

&lt;p&gt;-&amp;gt;&amp;gt; What We Are Building:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Web Tier&lt;/strong&gt;: A Python Flask application running on an Ubuntu EC2 instance in a public subnet.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Database Tier:&lt;/strong&gt; A managed MySQL RDS instance hidden securely in private subnets.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security:&lt;/strong&gt; Least-privilege Security Groups and AWS Secrets Manager for credential handling.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automation:&lt;/strong&gt; Zero manual configuration inside the server; everything is scripted via Terraform &lt;code&gt;user_data&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Before writing code, let's visualize what we are building.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1o0cc34v2i7sz3o51abg.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1o0cc34v2i7sz3o51abg.jpg" alt=" " width="800" height="560"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;VPC: Our own isolated network &lt;code&gt;(10.0.0.0/16)&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Public Subnet: For the Web Server (Internet accessible).&lt;/li&gt;
&lt;li&gt;Private Subnets: Two of them across different Availability Zones for the RDS database (No direct Internet access).&lt;/li&gt;
&lt;li&gt;Internet Gateway: To allow the web server to talk to the world.&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Step 1: The Networking Foundation (VPC)
&lt;/h2&gt;

&lt;p&gt;We need a VPC, one public subnet for the web server, and two private subnets for the RDS instance (AWS RDS requires at least two AZs for high availability).&lt;/p&gt;

&lt;p&gt;Resources: &lt;code&gt;aws_vpc&lt;/code&gt;, &lt;code&gt;aws_subnet&lt;/code&gt;, &lt;code&gt;aws_internet_gateway&lt;/code&gt;, &lt;code&gt;aws_route_table&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# VPC Module
module "vpc" {
  source = "./modules/vpc"

  project_name         = var.project_name
  environment          = var.environment
  aws_region           = var.aws_region
  vpc_cidr             = var.vpc_cidr
  public_subnet_cidr   = var.public_subnet_cidr
  private_subnet_cidrs = var.private_subnet_cidrs
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Step 2: Security &amp;amp; Firewall Rules
&lt;/h2&gt;

&lt;p&gt;Security Groups act as our virtual firewalls. We follow the Principle of Least Privilege.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Web SG:&lt;/strong&gt; Allows HTTP (80) from anywhere (0.0.0.0/0) and SSH (22) for management.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Database SG:&lt;/strong&gt; This is where the magic happens. We do not open port 3306 to the world. We only allow traffic from the Web Security Group.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Security Groups Module
module "security_groups" {
  source = "./modules/security_groups"

  project_name = var.project_name
  environment  = var.environment
  vpc_id       = module.vpc.vpc_id
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
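&lt;p&gt;Inside the module, the key rule is the one that lets only the web tier reach MySQL. A minimal sketch (security group names are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Allow MySQL (3306) only from instances in the web security group -
# no CIDR ranges, no public exposure
resource "aws_security_group_rule" "db_from_web" {
  type                     = "ingress"
  from_port                = 3306
  to_port                  = 3306
  protocol                 = "tcp"
  security_group_id        = aws_security_group.db.id
  source_security_group_id = aws_security_group.web.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Referencing a source security group instead of an IP range means the rule keeps working even as instances are replaced and their IPs change.&lt;/p&gt;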






&lt;h2&gt;
  
  
  Step 3: Managing Secrets
&lt;/h2&gt;

&lt;p&gt;Never, ever hardcode database passwords in your &lt;code&gt;main.tf&lt;/code&gt; files. We use the Terraform Random Provider to generate a password and store it immediately in AWS Secrets Manager.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Secrets Module
module "secrets" {
  source = "./modules/secrets"

  project_name = var.project_name
  environment  = var.environment
  db_username  = var.db_username
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
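&lt;p&gt;Inside the secrets module, the generate-and-store pattern looks roughly like this (resource names and password settings are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Generate a random password - it never appears in the .tf files
resource "random_password" "db" {
  length  = 20
  special = false
}

resource "aws_secretsmanager_secret" "db" {
  name = "${var.project_name}-db-credentials"
}

# Store the credentials as a JSON blob in Secrets Manager
resource "aws_secretsmanager_secret_version" "db" {
  secret_id = aws_secretsmanager_secret.db.id
  secret_string = jsonencode({
    username = var.db_username
    password = random_password.db.result
  })
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Note that the generated password still ends up in the Terraform state file, so the state backend itself must be kept private and encrypted.&lt;/p&gt;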






&lt;h2&gt;
  
  
  Step 4: The Database (RDS MySQL)
&lt;/h2&gt;

&lt;p&gt;We provision an AWS RDS instance running MySQL. We place it in the private subnets using a &lt;code&gt;db_subnet_group&lt;/code&gt; so it's not accessible from the public internet.&lt;/p&gt;

&lt;p&gt;The password? It's pulled dynamically from our Secrets module!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_db_instance" "main" {
  identifier             = "${var.project_name}-db"
  allocated_storage      = var.allocated_storage
  storage_type           = "gp2"
  engine                 = "mysql"
  engine_version         = var.engine_version
  instance_class         = var.instance_class
  db_name                = var.db_name
  username               = var.db_username
  password               = var.db_password
  parameter_group_name   = "default.mysql8.0"
  skip_final_snapshot    = true
  vpc_security_group_ids = [var.db_security_group_id]
  db_subnet_group_name   = aws_db_subnet_group.main.name
  publicly_accessible    = false

  tags = {
    Name        = "${var.project_name}-rds"
    Environment = var.environment
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Step 5: The Application Server (EC2 + User Data)
&lt;/h2&gt;

&lt;p&gt;This is the coolest part. We don't want to SSH in and manually install Python, Flask, and Git. We use Terraform's &lt;code&gt;user_data&lt;/code&gt; to script the entire setup process.&lt;/p&gt;

&lt;p&gt;When the instance launches, it:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Updates Ubuntu packages.&lt;/li&gt;
&lt;li&gt;Installs Python3 and Flask.&lt;/li&gt;
&lt;li&gt;Creates a simple Flask App connecting to our MySQL DB.&lt;/li&gt;
&lt;li&gt;Injects the Database Host and Credentials (passed from Terraform variables) directly into the Python code.&lt;/li&gt;
&lt;li&gt;Starts the web server as a systemd service.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# modules/ec2/templates/user_data.sh
#!/bin/bash
pip install flask mysql-connector-python

# Terraform Template Injection happening here:
DB_CONFIG = {
    "host": "${db_host}",
    "user": "${db_username}",
    "password": "${db_password}",
    "database": "${db_name}"
}
# ... application logic ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Deployment Time!
&lt;/h2&gt;

&lt;p&gt;With our modular structure in place (&lt;code&gt;vpc&lt;/code&gt;, &lt;code&gt;ec2&lt;/code&gt;, &lt;code&gt;rds&lt;/code&gt;, &lt;code&gt;secrets&lt;/code&gt;, &lt;code&gt;security_groups&lt;/code&gt;), deploying the entire stack is as simple as:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Initialize: &lt;code&gt;terraform init&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Plan: &lt;code&gt;terraform plan&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Apply: &lt;code&gt;terraform apply -auto-approve&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;em&gt;Wait about 5-8 minutes (RDS takes a while to spin up), and Terraform will output your application URL.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmp39h9wdsse9c1duaqa6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmp39h9wdsse9c1duaqa6.png" alt=" " width="723" height="254"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Verification
&lt;/h2&gt;

&lt;p&gt;Open the &lt;code&gt;application_url&lt;/code&gt; in your browser. You should see the nice blue and white "Terraform RDS Demo" dashboard.&lt;/p&gt;

&lt;p&gt;Try typing a message and clicking Save.&lt;br&gt;
If it appears in "Recent Messages", congratulations! Your EC2 instance successfully talked to your RDS database in the private subnet!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkuh4rl8vwegzdh3i30sr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkuh4rl8vwegzdh3i30sr.png" alt=" " width="800" height="523"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyokhggu0hrka19h0y9y2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyokhggu0hrka19h0y9y2.png" alt=" " width="800" height="522"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0rwq0sxf5xlecfw90izr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0rwq0sxf5xlecfw90izr.png" alt=" " width="800" height="261"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx7zp8p9564i1gc41mh8a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx7zp8p9564i1gc41mh8a.png" alt=" " width="800" height="214"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0qy2eeoxp0ss1dpcy7cx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0qy2eeoxp0ss1dpcy7cx.png" alt=" " width="800" height="378"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this project, we moved beyond simple resource creation and built a fully integrated environment.&lt;/p&gt;

&lt;p&gt;Key Takeaways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Modular Design: Reusable code makes infrastructure manageable.&lt;/li&gt;
&lt;li&gt;Security First: Private subnets and strict Security Groups are essential.&lt;/li&gt;
&lt;li&gt;Automation: user_data saves hours of manual configuration.&lt;/li&gt;
&lt;li&gt;Secret Management: Never store secrets in plain text.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://github.com/Amitkushwaha7/TerraformFullCourse/tree/1cc325bcf765b6a539c0d15557249c73d5ae57fa/Day-22" rel="noopener noreferrer"&gt;GitHub Repository&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/Amitkushwaha7/Deploy-a-Two-Tier-Web-Application-on-AWS-with-Terraform.git" rel="noopener noreferrer"&gt;Github Repository&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  &amp;gt;&amp;gt; Connect With Me
&lt;/h2&gt;

&lt;p&gt;If you enjoyed this post or want to follow my #30DaysOfAWSTerraformChallenge journey, feel free to connect with me here:&lt;/p&gt;

&lt;p&gt;💼 LinkedIn: &lt;a href="https://www.linkedin.com/in/amitkushwaha7/" rel="noopener noreferrer"&gt;Amit Kushwaha&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🐙 GitHub: &lt;a href="https://github.com/Amitkushwaha7" rel="noopener noreferrer"&gt;Amit Kushwaha&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;📝 Hashnode / &lt;a href="https://hashnode.com/@amit902" rel="noopener noreferrer"&gt;Amit Kushwaha&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🐦 Twitter/X: &lt;a href="https://x.com/AmitKum43380951" rel="noopener noreferrer"&gt;Amit Kushwaha&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Found this helpful? Drop a ❤️ and follow for more AWS and Terraform tutorials!&lt;/p&gt;

&lt;h2&gt;
  
  
  Questions? Drop them in the comments below! 👇
&lt;/h2&gt;

&lt;p&gt;Thanks for reading! Happy Terraforming! &lt;/p&gt;

</description>
      <category>devops</category>
      <category>automation</category>
      <category>aws</category>
      <category>devchallenge</category>
    </item>
    <item>
      <title>-&gt;&gt; Day-21 AWS Policy and Governance Setup Using Terraform</title>
      <dc:creator>Amit Kushwaha</dc:creator>
      <pubDate>Sat, 07 Feb 2026 20:05:39 +0000</pubDate>
      <link>https://forem.com/amit_kumar_7db8e36a64dd45/-day-21-aws-policy-and-governance-setup-using-terraform-2aae</link>
      <guid>https://forem.com/amit_kumar_7db8e36a64dd45/-day-21-aws-policy-and-governance-setup-using-terraform-2aae</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In this blog, I share my experience implementing AWS Policy and Governance using Terraform as part of my #30DaysOfAWSTerraform journey. The goal was to build a secure-by-default foundation that enforces policies and continuously monitors compliance.&lt;/p&gt;

&lt;p&gt;This project combines IAM guardrails, AWS Config, and a secure S3 bucket for configuration history. It helped me learn how prevention (IAM policies) and detection (Config rules) work together in real-world cloud governance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Project Objective
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Implement IAM policies for security guardrails&lt;/li&gt;
&lt;li&gt;Enable AWS Config for continuous monitoring&lt;/li&gt;
&lt;li&gt;Store configuration history securely in S3&lt;/li&gt;
&lt;li&gt;Enforce tagging standards&lt;/li&gt;
&lt;li&gt;Track compliance and violations&lt;/li&gt;
&lt;li&gt;Automate governance using Terraform&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Architecture
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;IAM Policies&lt;/strong&gt; to prevent risky actions (MFA delete, TLS-only S3 access, required tags).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS Config&lt;/strong&gt; to record configuration changes and evaluate compliance rules.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;S3 Bucket&lt;/strong&gt; to store AWS Config snapshots securely with encryption and versioning.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;The IAM policies enforce guardrails upfront, AWS Config continuously checks resource compliance, and S3 stores audit data.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxz96zcxukdlkmefet8sr.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxz96zcxukdlkmefet8sr.jpg" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementation Steps:
&lt;/h2&gt;

&lt;p&gt;Step 1: IAM Policy Setup&lt;br&gt;
I created policies for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;MFA Delete Policy to block S3 object deletion without MFA&lt;/li&gt;
&lt;li&gt;S3 Encryption in Transit to enforce HTTPS/TLS&lt;/li&gt;
&lt;li&gt;Required Tags Policy to ensure resources include Environment and Owner&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;These policies matter because they stop risky actions before they happen.&lt;/em&gt;&lt;/p&gt;
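&lt;p&gt;As an example, the TLS-only guardrail can be expressed as an IAM policy document that denies any non-HTTPS S3 access (the bucket reference is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "aws_iam_policy_document" "tls_only" {
  statement {
    sid     = "DenyInsecureTransport"
    effect  = "Deny"
    actions = ["s3:*"]
    resources = [
      aws_s3_bucket.config_history.arn,
      "${aws_s3_bucket.config_history.arn}/*",
    ]

    principals {
      type        = "*"
      identifiers = ["*"]
    }

    # Deny any request that did not arrive over TLS
    condition {
      test     = "Bool"
      variable = "aws:SecureTransport"
      values   = ["false"]
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;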

&lt;p&gt;Step 2: AWS Config Setup&lt;br&gt;
I configured:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Config Recorder to track resource changes&lt;/li&gt;
&lt;li&gt;Delivery Channel to store snapshots in S3&lt;/li&gt;
&lt;li&gt;Recorder Status to start compliance tracking&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0gidt5yc8zvgyq35b1e1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0gidt5yc8zvgyq35b1e1.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 3: Adding Config Rules&lt;br&gt;
I added AWS managed rules to validate governance:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;S3 Public Write Prohibited - Prevents public write access to S3 buckets&lt;/li&gt;
&lt;li&gt;S3 Encryption Enabled - Ensures server-side encryption on S3 buckets&lt;/li&gt;
&lt;li&gt;S3 Public Read Prohibited - Blocks public read access to S3 buckets&lt;/li&gt;
&lt;li&gt;EBS Volumes Encrypted - Verifies all EBS volumes are encrypted&lt;/li&gt;
&lt;li&gt;Required Tags - Checks for Environment and Owner tags&lt;/li&gt;
&lt;li&gt;IAM Password Policy - Enforces strong password requirements&lt;/li&gt;
&lt;li&gt;Root MFA Enabled - Ensures root account has MFA configured&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;em&gt;Non-compliant means the resource violates a rule (for example, missing tags or encryption)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcz31ltmc3rdzhnagp4gj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcz31ltmc3rdzhnagp4gj.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 4: Terraform Automation&lt;/p&gt;

&lt;p&gt;Terraform let me define everything as code: IAM policies, the Config recorder, rules, and the S3 bucket. This made the setup repeatable and version controlled.&lt;/p&gt;

&lt;p&gt;Benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Faster deployments&lt;/li&gt;
&lt;li&gt;Consistent governance&lt;/li&gt;
&lt;li&gt;Easy auditing and updates&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Step 5: Testing &amp;amp; Validation&lt;/p&gt;

&lt;p&gt;I ran &lt;code&gt;terraform plan&lt;/code&gt; and &lt;code&gt;terraform apply&lt;/code&gt;, then verified compliance using AWS Config. The dashboard showed compliant and non-compliant resources clearly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmc5avw8fuqh5c9b8d07f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmc5avw8fuqh5c9b8d07f.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkmbu0d5b0l238igb6ev6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkmbu0d5b0l238igb6ev6.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Monitoring Dashboard&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AWS Config provides a central view of compliance status across rules and resources. It helps quickly identify violations and track fixes over time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cost Considerations&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AWS Config is a paid service, so I kept the scope small and cleaned up resources when done using &lt;code&gt;terraform destroy&lt;/code&gt;. This helps control costs while still learning the full workflow.&lt;/p&gt;
&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This project showed how governance can be automated with Terraform by combining IAM guardrails, AWS Config compliance checks, and secure S3 storage. It reinforced the value of policy‑as‑code, continuous monitoring, and defense‑in‑depth in real AWS environments. Most importantly, it mirrors how cloud teams enforce security at scale—making it a practical and recruiter‑relevant demonstration of cloud governance skills.&lt;/p&gt;
&lt;h2&gt;
  
  
  Reference:
&lt;/h2&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/sAtbDGi-82A"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/Amitkushwaha7/TerraformFullCourse/tree/dfc210b76b68ad7c747b961b548ba97601f14ed5/Day-21" rel="noopener noreferrer"&gt;GitHub Repository&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/config/" rel="noopener noreferrer"&gt;AWS Config Docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html" rel="noopener noreferrer"&gt;AWS IAM Best Practices&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs" rel="noopener noreferrer"&gt;Terraform AWS Provider&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/config/latest/developerguide/managed-rules-by-aws-config.html" rel="noopener noreferrer"&gt;AWS Config Rule&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &amp;gt;&amp;gt; Connect With Me
&lt;/h2&gt;

&lt;p&gt;If you enjoyed this post or want to follow my #30DaysOfAWSTerraformChallenge journey, feel free to connect with me here:&lt;/p&gt;

&lt;p&gt;💼 LinkedIn: &lt;a href="https://www.linkedin.com/in/amitkushwaha7/" rel="noopener noreferrer"&gt;Amit Kushwaha&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🐙 GitHub: &lt;a href="https://github.com/Amitkushwaha7" rel="noopener noreferrer"&gt;Amit Kushwaha&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;📝 Hashnode: &lt;a href="https://hashnode.com/@amit902" rel="noopener noreferrer"&gt;Amit Kushwaha&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🐦 Twitter/X: &lt;a href="https://x.com/AmitKum43380951" rel="noopener noreferrer"&gt;Amit Kushwaha&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Found this helpful? Drop a ❤️ and follow for more AWS and Terraform tutorials!&lt;/p&gt;

&lt;p&gt;Questions? Drop them in the comments below! 👇&lt;/p&gt;




&lt;p&gt;Happy Terraforming and Deploying!!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>aws</category>
      <category>cloud</category>
      <category>learning</category>
    </item>
    <item>
      <title>-&gt;&gt; Day-20 Terraform Custom Modules for EKS - From Zero to Production</title>
      <dc:creator>Amit Kushwaha</dc:creator>
      <pubDate>Tue, 03 Feb 2026 00:01:13 +0000</pubDate>
      <link>https://forem.com/amit_kumar_7db8e36a64dd45/-day-20-terraform-custom-modules-for-eks-from-zero-to-production-4j92</link>
      <guid>https://forem.com/amit_kumar_7db8e36a64dd45/-day-20-terraform-custom-modules-for-eks-from-zero-to-production-4j92</guid>
      <description>&lt;p&gt;Kubernetes (K8s) has become the de facto standard for orchestrating containerized applications. It provides powerful primitives for deploying, scaling, and managing containerized workloads, making it a top choice for modern DevOps teams and cloud-native development.&lt;/p&gt;

&lt;p&gt;In this blog series, we’ll explore how to set up a production-ready Kubernetes environment on AWS using Amazon Elastic Kubernetes Service (EKS) and Terraform, starting with the foundational infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why EKS?
&lt;/h2&gt;

&lt;p&gt;Amazon EKS is a fully managed Kubernetes service that makes it easy to run Kubernetes on AWS without needing to install or operate your own control plane or nodes. EKS handles high availability, scalability, and patching of the Kubernetes control plane, so you can focus on running your applications instead of managing infrastructure.&lt;/p&gt;

&lt;p&gt;-&amp;gt; Benefits of using EKS:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Managed control plane: No need to run your own etcd or master nodes.&lt;/li&gt;
&lt;li&gt;Native AWS integration: IAM, VPC, CloudWatch, EC2, ECR and more.&lt;/li&gt;
&lt;li&gt;Secure by default: Runs in a dedicated, isolated VPC.&lt;/li&gt;
&lt;li&gt;Scalable and production-ready.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;-&amp;gt; In our setup:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The VPC module creates a network with public and private subnets.&lt;/li&gt;
&lt;li&gt;The IAM module creates cluster roles, node roles, and OIDC provider for Kubernetes-AWS integration.&lt;/li&gt;
&lt;li&gt;The ECR module creates a container registry to store and manage Docker images.&lt;/li&gt;
&lt;li&gt;The EKS module provisions the EKS control plane and worker nodes in private subnets.&lt;/li&gt;
&lt;li&gt;The Secrets Manager module stores optional database, API, and application configuration secrets.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Architecture Overview
&lt;/h2&gt;

&lt;p&gt;Here's how the setup works at a high level:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;VPC is created with 3 Availability Zones for high availability.&lt;/li&gt;
&lt;li&gt;Each AZ contains both a public and a private subnet.&lt;/li&gt;
&lt;li&gt;EKS worker nodes (EC2 instances) are launched in private subnets for better security.&lt;/li&gt;
&lt;li&gt;A NAT Gateway is provisioned in a public subnet to allow worker nodes in private subnets to pull images and updates from the internet (e.g., from ECR, Docker Hub).&lt;/li&gt;
&lt;li&gt;EKS control plane (managed by AWS) communicates with the worker nodes securely within the VPC.&lt;/li&gt;
&lt;li&gt;The Internet Gateway in the public subnet provides external users access to the Kubernetes LoadBalancer service for the demo website.&lt;/li&gt;
&lt;li&gt;IAM roles and OIDC provider enable pod-level permissions through IRSA (IAM Roles for Service Accounts).&lt;/li&gt;
&lt;li&gt;KMS encryption secures the etcd database at rest on the EKS control plane.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This setup ensures that your nodes are not directly exposed to the internet while still having outbound internet access via the NAT gateway.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzbmpashuuomtspz1vkm6.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzbmpashuuomtspz1vkm6.jpg" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  The Five Custom Terraform Modules
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Step 1: Create the VPC
&lt;/h2&gt;

&lt;p&gt;The foundation of our infrastructure. Creates networking with high availability across multiple AZs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Custom VPC Module
module "vpc" {
  source = "./modules/vpc"

  name_prefix     = var.cluster_name
  vpc_cidr        = var.vpc_cidr
  azs             = slice(data.aws_availability_zones.available.names, 0, 3)
  private_subnets = var.private_subnets
  public_subnets  = var.public_subnets

  enable_nat_gateway = true
  single_nat_gateway = true

  # Required tags for EKS
  public_subnet_tags = {
    "kubernetes.io/role/elb"                    = "1"
    "kubernetes.io/cluster/${var.cluster_name}" = "shared"
  }

  private_subnet_tags = {
    "kubernetes.io/role/internal-elb"           = "1"
    "kubernetes.io/cluster/${var.cluster_name}" = "shared"
  }

  tags = {
    Environment = var.environment
    Terraform   = "true"
    Project     = "EKS-Cluster"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What it creates:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;VPC with CIDR 10.0.0.0/16&lt;/li&gt;
&lt;li&gt;3 public subnets (10.0.1-3.0/24) for NAT Gateway and Internet Gateway&lt;/li&gt;
&lt;li&gt;3 private subnets (10.0.11-13.0/24) for EKS nodes&lt;/li&gt;
&lt;li&gt;Single NAT Gateway for cost optimization&lt;/li&gt;
&lt;li&gt;Internet Gateway for public internet access&lt;/li&gt;
&lt;li&gt;20 total resources&lt;/li&gt;
&lt;/ul&gt;
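&lt;p&gt;The subnet layout above corresponds to variable values along these lines (an illustrative &lt;code&gt;terraform.tfvars&lt;/code&gt;; the actual file in the repo may differ):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vpc_cidr        = "10.0.0.0/16"
public_subnets  = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
private_subnets = ["10.0.11.0/24", "10.0.12.0/24", "10.0.13.0/24"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;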




&lt;h2&gt;
  
  
  Step 2: IAM Module
&lt;/h2&gt;

&lt;p&gt;Handles all identity and access management. It enables secure communication between Kubernetes and AWS services.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Custom IAM Module
module "iam" {
  source = "./modules/iam"

  cluster_name = var.cluster_name

  tags = {
    Environment = var.environment
    Terraform   = "true"
    Project     = "EKS-Cluster"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What it creates:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;EKS cluster IAM role with necessary permissions&lt;/li&gt;
&lt;li&gt;EC2 node IAM role for worker nodes&lt;/li&gt;
&lt;li&gt;OIDC Provider for Kubernetes-AWS integration&lt;/li&gt;
&lt;li&gt;IRSA (IAM Roles for Service Accounts) configuration&lt;/li&gt;
&lt;li&gt;Inline policies for EKS and node permissions&lt;/li&gt;
&lt;li&gt;7 total resources&lt;/li&gt;
&lt;/ul&gt;
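&lt;p&gt;The piece that makes IRSA possible is the OIDC provider. Inside the module it can be defined roughly like this (a sketch; &lt;code&gt;var.oidc_issuer_url&lt;/code&gt; and &lt;code&gt;var.oidc_thumbprint&lt;/code&gt; are assumed inputs fed from the cluster):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# OIDC provider that lets Kubernetes service accounts assume IAM roles (IRSA)
resource "aws_iam_openid_connect_provider" "eks" {
  url             = var.oidc_issuer_url    # the cluster's OIDC issuer URL
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = [var.oidc_thumbprint]  # CA thumbprint of the issuer
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;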




&lt;h2&gt;
  
  
  Step 3: Create the EKS Cluster
&lt;/h2&gt;

&lt;p&gt;Provisions the Kubernetes cluster with managed control plane and worker nodes.&lt;/p&gt;

&lt;p&gt;We use our custom EKS module to spin up the cluster. This will provision:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A managed EKS control plane&lt;/li&gt;
&lt;li&gt;A node group with autoscaling enabled&lt;/li&gt;
&lt;li&gt;Nodes inside private subnets with internet access via NAT Gateway
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Custom EKS Module
module "eks" {
  source = "./modules/eks"

  cluster_name       = var.cluster_name
  kubernetes_version = var.kubernetes_version
  vpc_id             = module.vpc.vpc_id
  subnet_ids         = module.vpc.private_subnets

  cluster_role_arn = module.iam.cluster_role_arn
  node_role_arn    = module.iam.node_group_role_arn

  endpoint_public_access  = true
  endpoint_private_access = true
  public_access_cidrs     = ["0.0.0.0/0"]

  enable_irsa = true

  # Node groups configuration
  node_groups = {
    general = {
      instance_types = ["t3.medium"]
      desired_size   = 2
      min_size       = 2
      max_size       = 4
      capacity_type  = "ON_DEMAND"
      disk_size      = 20

      labels = {
        role = "general"
      }

      tags = {
        NodeGroup = "general"
      }
    }

    spot = {
      instance_types = ["t3.medium", "t3a.medium"]
      desired_size   = 1
      min_size       = 1
      max_size       = 3
      capacity_type  = "SPOT"
      disk_size      = 20

      labels = {
        role = "spot"
      }

      taints = [{
        key    = "spot"
        value  = "true"
        effect = "NO_SCHEDULE"
      }]

      tags = {
        NodeGroup = "spot"
      }
    }
  }

  tags = {
    Environment = var.environment
    Terraform   = "true"
    Project     = "EKS-Cluster"
  }

  depends_on = [module.iam]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What it creates:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;EKS cluster control plane (Kubernetes 1.31)&lt;/li&gt;
&lt;li&gt;2 managed node groups: general (2 on-demand t3.medium nodes, scaling 2-4) and spot (1 cost-optimized spot node, scaling 1-3)&lt;/li&gt;
&lt;li&gt;Cluster security groups and node security groups&lt;/li&gt;
&lt;li&gt;CloudWatch logging configuration&lt;/li&gt;
&lt;li&gt;Add-ons (CoreDNS, kube-proxy, VPC CNI, EBS CSI)&lt;/li&gt;
&lt;li&gt;KMS encryption for etcd&lt;/li&gt;
&lt;li&gt;17 total resources&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Step 4: ECR Module
&lt;/h2&gt;

&lt;p&gt;Container registry for storing and managing Docker images.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "ecr" {
  source = "./modules/ecr"

  repository_name = "demo-website"

  tags = {
    Environment = var.environment
    Terraform   = "true"
    Project     = "EKS-Cluster"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What it creates:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Elastic Container Registry repository&lt;/li&gt;
&lt;li&gt;Image scanning on push&lt;/li&gt;
&lt;li&gt;Lifecycle policies for image retention&lt;/li&gt;
&lt;li&gt;1 total resource&lt;/li&gt;
&lt;/ul&gt;
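&lt;p&gt;Inside the module, scan-on-push and image retention can be expressed roughly like this (a sketch; the retention count of 10 is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_ecr_repository" "this" {
  name = var.repository_name

  # Scan every image automatically when it is pushed
  image_scanning_configuration {
    scan_on_push = true
  }
}

# Expire old images so the repository does not grow unbounded
resource "aws_ecr_lifecycle_policy" "this" {
  repository = aws_ecr_repository.this.name

  policy = jsonencode({
    rules = [{
      rulePriority = 1
      description  = "Keep only the last 10 images"
      selection = {
        tagStatus   = "any"
        countType   = "imageCountMoreThan"
        countNumber = 10
      }
      action = { type = "expire" }
    }]
  })
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;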




&lt;h2&gt;
  
  
  Step 5: Secrets Manager Module
&lt;/h2&gt;

&lt;p&gt;Securely stores sensitive data like database credentials and API keys (Optional).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "secrets_manager" {
  source = "./modules/secrets-manager"

  name_prefix = var.cluster_name

  # Enable secrets as needed
  create_db_secret         = var.enable_db_secret
  create_api_secret        = var.enable_api_secret
  create_app_config_secret = var.enable_app_config_secret

  # Database credentials (if enabled)
  db_username = var.db_username
  db_password = var.db_password
  db_engine   = var.db_engine
  db_host     = var.db_host
  db_port     = var.db_port
  db_name     = var.db_name

  # API keys (if enabled)
  api_key    = var.api_key
  api_secret = var.api_secret

  # App config (if enabled)
  app_config = var.app_config

  tags = {
    Environment = var.environment
    Terraform   = "true"
    Project     = "EKS-Cluster"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What it creates:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Optional database secrets&lt;/li&gt;
&lt;li&gt;Optional API secrets&lt;/li&gt;
&lt;li&gt;Optional application configuration secrets&lt;/li&gt;
&lt;li&gt;0-3 total resources (optional)&lt;/li&gt;
&lt;/ul&gt;
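&lt;p&gt;The conditional-creation pattern inside the module looks roughly like this for the database secret (a sketch; the secret name is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Created only when var.create_db_secret is true
resource "aws_secretsmanager_secret" "db" {
  count = var.create_db_secret ? 1 : 0
  name  = "${var.name_prefix}-db-credentials"
  tags  = var.tags
}

resource "aws_secretsmanager_secret_version" "db" {
  count     = var.create_db_secret ? 1 : 0
  secret_id = aws_secretsmanager_secret.db[0].id

  secret_string = jsonencode({
    username = var.db_username
    password = var.db_password
  })
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;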




&lt;h2&gt;
  
  
  How Modules Work Together
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;In our setup:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The VPC module creates a network with public and private subnets&lt;/li&gt;
&lt;li&gt;The EKS module provisions the EKS control plane and worker nodes in private subnets&lt;/li&gt;
&lt;li&gt;The IAM module creates cluster roles, node roles, and OIDC provider for Kubernetes-AWS integration&lt;/li&gt;
&lt;li&gt;The ECR module creates a container registry to store and manage Docker images&lt;/li&gt;
&lt;li&gt;The Secrets Manager module stores optional database, API, and application configuration secrets.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Deploying the Infrastructure
&lt;/h2&gt;

&lt;p&gt;Step 1: Initialize Terraform&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd terraform
terraform init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This downloads the required Terraform providers and initializes the working directory.&lt;/p&gt;

&lt;p&gt;Step 2: Review the Plan&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This shows all 45 resources that will be created.&lt;/p&gt;

&lt;p&gt;Step 3: Apply Configuration&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Apply complete! Resources: 45 added, 0 changed, 0 destroyed.


![ ](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2e4tc8uo4l56lb4ahmgm.png)


![ ](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9dx9ngv84i012smoxw9j.png)

Outputs:
cluster_endpoint = "https://EA6F63CF5CF44B594EA9533013CF21C4.gr7.us-east-1.eks.amazonaws.com"
cluster_name = "eks-custom-modules-cluster"
ecr_repository_url = "123456789.dkr.ecr.us-east-1.amazonaws.com/demo-website"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;Step 4: Configure Kubectl&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform output -raw configure_kubectl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This outputs the aws eks update-kubeconfig command. Run it to connect kubectl to your cluster.&lt;/p&gt;
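&lt;p&gt;The command it prints has this shape (region and cluster name here match the outputs from the apply; yours may differ):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws eks update-kubeconfig --region us-east-1 --name eks-custom-modules-cluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;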




&lt;p&gt;Step 5: Deploy Demo Application&lt;br&gt;
Build and push a Docker Image to ECR:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ../demo-website

# Build Docker image
docker build -t demo-website:latest .

# Get ECR login command
cd ../terraform
$(terraform output -raw ecr_login_command)

# Tag and push to ECR
docker tag demo-website:latest &amp;lt;ECR_URL&amp;gt;:latest
docker push &amp;lt;ECR_URL&amp;gt;:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Deploy to Kubernetes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ../demo-website
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml

# Get LoadBalancer URL
kubectl get svc demo-website -o wide
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check deployment status:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
demo-website-5d9c8d7f6-2m4kl    1/1     Running   0          30s
demo-website-5d9c8d7f6-7p9q2    1/1     Running   0          30s

$ kubectl get svc
NAME           TYPE           CLUSTER-IP    EXTERNAL-IP                                        PORT(S)        AGE
demo-website   LoadBalancer   172.20.0.1    a1234567890.elb.us-east-1.amazonaws.com           80:31234/TCP   45s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Access the demo website at the LoadBalancer URL!&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8eogjtrze7jfswjquao1.png" alt=" "&gt;
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Cleanup
&lt;/h2&gt;

&lt;p&gt;Once you're done experimenting, clean up resources to avoid charges:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Delete Kubernetes resources
kubectl delete svc demo-website
kubectl delete deployment demo-website

# Destroy infrastructure
cd terraform
terraform destroy -auto-approve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;We've successfully set up a production-grade Kubernetes cluster on AWS using custom Terraform modules. By building our own modules, we achieved a repeatable, version-controlled setup with a clear separation of concerns between networking, IAM, the cluster, the registry, and secrets.&lt;/p&gt;

&lt;p&gt;References: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/Amitkushwaha7/TerraformFullCourse/tree/4a0fc343a58b419ca138c26403e63e2ce0c25644/Day-20" rel="noopener noreferrer"&gt;GitHub Repo&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/eks/" rel="noopener noreferrer"&gt;AWS EKS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs" rel="noopener noreferrer"&gt;Terraform AWS Provider&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/a_j6Gq-KtxE"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;h2&gt;
  
  
  &amp;gt;&amp;gt; Connect With Me
&lt;/h2&gt;

&lt;p&gt;If you enjoyed this post or want to follow my #30DaysOfAWSTerraformChallenge journey, feel free to connect with me here:&lt;/p&gt;

&lt;p&gt;💼 LinkedIn: &lt;a href="https://www.linkedin.com/in/amitkushwaha7/" rel="noopener noreferrer"&gt;Amit Kushwaha&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🐙 GitHub: &lt;a href="https://github.com/Amitkushwaha7" rel="noopener noreferrer"&gt;Amit Kushwaha&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;📝 Hashnode: &lt;a href="https://hashnode.com/@amit902" rel="noopener noreferrer"&gt;Amit Kushwaha&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🐦 Twitter/X: &lt;a href="https://x.com/AmitKum43380951" rel="noopener noreferrer"&gt;Amit Kushwaha&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Found this helpful? Drop a ❤️ and follow for more AWS and Terraform tutorials!&lt;/p&gt;

&lt;p&gt;Questions? Drop them in the comments below! 👇&lt;/p&gt;




&lt;p&gt;Happy Terraforming and Deploying!!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>career</category>
      <category>aws</category>
      <category>cloud</category>
    </item>
    <item>
      <title>-&gt;&gt; Day-19 Terraform Provisioners in Action: A Hands-On Demo (local-exec, remote-exec &amp; file)</title>
      <dc:creator>Amit Kushwaha</dc:creator>
      <pubDate>Tue, 27 Jan 2026 11:09:12 +0000</pubDate>
      <link>https://forem.com/amit_kumar_7db8e36a64dd45/-day-19-terraform-provisioners-in-action-a-hands-on-demo-local-exec-remote-exec-file-510f</link>
      <guid>https://forem.com/amit_kumar_7db8e36a64dd45/-day-19-terraform-provisioners-in-action-a-hands-on-demo-local-exec-remote-exec-file-510f</guid>
      <description>&lt;p&gt;When learning Terraform, most people stop at creating infrastructure.&lt;/p&gt;

&lt;p&gt;But what if you want to &lt;strong&gt;run scripts&lt;/strong&gt;, &lt;strong&gt;install packages&lt;/strong&gt;, or &lt;strong&gt;trigger actions&lt;/strong&gt; after a resource is created?&lt;/p&gt;

&lt;p&gt;That's where &lt;strong&gt;Terraform Provisioners&lt;/strong&gt; come in.&lt;/p&gt;

&lt;p&gt;In this blog, I will walk through:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What provisioners really are&lt;/li&gt;
&lt;li&gt;When (and when not) to use them&lt;/li&gt;
&lt;li&gt;A hands-on AWS EC2 demo using:
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;local-exec&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;remote-exec&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;file&lt;/code&gt; + &lt;code&gt;remote-exec&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2718rqkfix3g4pmkoykp.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2718rqkfix3g4pmkoykp.jpg" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  What are Terraform Provisioners?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Provisioners&lt;/strong&gt; let Terraform execute scripts or commands during resource creation or destruction.&lt;/p&gt;

&lt;p&gt;They’re useful for tasks that Terraform can’t model declaratively, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Installing software&lt;/li&gt;
&lt;li&gt;Running bootstrap scripts&lt;/li&gt;
&lt;li&gt;Registering resources in external systems&lt;/li&gt;
&lt;li&gt;Sending notifications&lt;/li&gt;
&lt;li&gt;Copying files to servers&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;⚠️ Terraform officially recommends using provisioners as a last resort.&lt;br&gt;
Prefer &lt;code&gt;user_data&lt;/code&gt;, cloud-init, Packer, or configuration management tools for serious production setups.&lt;/p&gt;
&lt;/blockquote&gt;
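&lt;p&gt;For comparison, the same kind of bootstrap can be done declaratively with &lt;code&gt;user_data&lt;/code&gt; (a minimal sketch; the AMI data source and values are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_instance" "web" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t3.micro"

  # Runs once at first boot via cloud-init -- no SSH access or provisioner needed
  user_data = &amp;lt;&amp;lt;-EOF
    #!/bin/bash
    apt-get update
    echo 'Hello from user_data' &amp;gt; /tmp/user_data.txt
  EOF
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;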




&lt;h2&gt;
  
  
  Types of Provisioners
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. local-exec
&lt;/h3&gt;

&lt;p&gt;Runs commands on your &lt;strong&gt;local machine&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use case:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Trigger webhooks&lt;/li&gt;
&lt;li&gt;Call APIs&lt;/li&gt;
&lt;li&gt;Write to local inventory files&lt;/li&gt;
&lt;li&gt;Send Slack notifications
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   provisioner "local-exec" {
   command = "echo 'Local-exec: created instance ${self.id} with IP ${self.public_ip}'"
   }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  2. remote-exec
&lt;/h3&gt;

&lt;p&gt;Runs commands on the &lt;strong&gt;remote resource&lt;/strong&gt; via &lt;strong&gt;SSH&lt;/strong&gt; (Linux) or &lt;strong&gt;WinRM&lt;/strong&gt; (Windows).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use Cases:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install packages (nginx, docker, node, etc.)&lt;/li&gt;
&lt;li&gt;Configure OS settings&lt;/li&gt;
&lt;li&gt;Start services&lt;/li&gt;
&lt;li&gt;Quick bootstrap scripts
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  provisioner "remote-exec" {
    inline = [
      "sudo apt-get update",
      "echo 'Hello from remote-exec' | sudo tee /tmp/remote_exec.txt",
    ]
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  3. file
&lt;/h3&gt;

&lt;p&gt;Copies files from your machine to the remote resource.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use Cases:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Copy setup scripts&lt;/li&gt;
&lt;li&gt;Upload config files&lt;/li&gt;
&lt;li&gt;Transfer certificates&lt;/li&gt;
&lt;li&gt;Deploy small binaries
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  provisioner "file" {
    source      = "${path.module}/scripts/welcome.sh"
    destination = "/tmp/welcome.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo chmod +x /tmp/welcome.sh",
      "sudo /tmp/welcome.sh"
    ]
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Architecture
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Laptop / CI
     │
     ▼
Terraform
     │
     ▼
AWS EC2 (Ubuntu)
     │
     ├── local-exec   → runs locally
     ├── remote-exec  → runs via SSH on EC2
     └── file         → copies scripts to EC2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  -&amp;gt; Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before running the demo:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS credentials configured&lt;/li&gt;
&lt;li&gt;Terraform v1.0+ installed&lt;/li&gt;
&lt;li&gt;AWS CLI installed&lt;/li&gt;
&lt;li&gt;SSH client installed&lt;/li&gt;
&lt;li&gt;An EC2 key pair created
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 create-key-pair --key-name terraform-demo-key \
  --query 'KeyMaterial' --output text &amp;gt; terraform-demo-key.pem

chmod 400 terraform-demo-key.pem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;&lt;strong&gt;main.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&amp;gt; 4.0"
    }
  }
}

# EC2 instance used for provisioner demos.
# Each provisioner block is included below but wrapped in block comments (/* ... */).
# For the demo, uncomment one provisioner block at a time, then `terraform apply`.

data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical (Ubuntu official) - Current owner ID

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
}

resource "aws_security_group" "ssh" {
  name        = "tf-prov-demo-ssh"
  description = "Allow SSH inbound"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "demo" {
  ami                    = data.aws_ami.ubuntu.id
  instance_type          = var.instance_type
  key_name               = var.key_name
  vpc_security_group_ids = [aws_security_group.ssh.id]

  tags = {
    Name = "terraform-provisioner-demo"
  }

  connection {
    type        = "ssh"
    user        = var.ssh_user
    private_key = file(var.private_key_path)
    host        = self.public_ip
    timeout     = "5m"
  }

  /*
  ------------------------------------------------------------------
  Provisioner 1: local-exec
  - Runs on the machine where you run Terraform (your laptop/CI agent).
  - Useful for local tasks, logging, calling local scripts, etc.
  - To demo: uncomment this block, then run `terraform apply`.
  ------------------------------------------------------------------
  */

  # provisioner "local-exec" {
  #   command = "echo 'Local-exec: created instance ${self.id} with IP ${self.public_ip}'"
  # }


  /*
  ------------------------------------------------------------------
  Provisioner 2: remote-exec
  - Runs commands on the remote instance over SSH.
  - Requires SSH access (security group + key pair + reachable IP).
  - To demo: uncomment this block, ensure `var.private_key_path` is correct, then run `terraform apply`.
  ------------------------------------------------------------------
  */

  provisioner "remote-exec" {
    inline = [
      "sudo apt-get update",
      "echo 'Hello from remote-exec' | sudo tee /tmp/remote_exec.txt",
    ]
  }


  /*
  ------------------------------------------------------------------
  Provisioner 3: file + remote-exec
  - Copies a script (scripts/welcome.sh) to the instance, then executes it.
  - Good pattern for more complex bootstrapping when script files are preferred.
  - To demo: uncomment both the file provisioner and the remote-exec block below.
  ------------------------------------------------------------------
  */

  # provisioner "file" {
  #   source      = "${path.module}/scripts/welcome.sh"
  #   destination = "/tmp/welcome.sh"
  # }

  # provisioner "remote-exec" {
  #   inline = [
  #     "sudo chmod +x /tmp/welcome.sh",
  #     "sudo /tmp/welcome.sh"
  #   ]
  # }

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Demo 1: local-exec
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provisioner "local-exec" {
  command = "echo 'Created ${self.id} with IP ${self.public_ip}'"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What happens:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;EC2 is created &lt;/li&gt;
&lt;li&gt;Terraform prints a message on your computer&lt;/li&gt;
&lt;li&gt;Nothing changes inside the EC2&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Demo 2: remote-exec
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provisioner "remote-exec" {
  inline = [
    "sudo apt-get update",
    "echo 'Hello from remote-exec' | sudo tee /tmp/remote_exec.txt"
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What happens:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Terraform waits for SSH&lt;/li&gt;
&lt;li&gt;Connects to EC2&lt;/li&gt;
&lt;li&gt;Runs the commands remotely&lt;/li&gt;
&lt;li&gt;Creates &lt;code&gt;/tmp/remote_exec.txt&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Verify:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh -i terraform-demo-key.pem ubuntu@&amp;lt;PUBLIC_IP&amp;gt;

cat /tmp/remote_exec.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Expected output:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Hello from remote-exec
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Demo 3: file + remote-exec
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provisioner "file" {
  source      = "scripts/welcome.sh"
  destination = "/tmp/welcome.sh"
}

provisioner "remote-exec" {
  inline = [
    "sudo chmod +x /tmp/welcome.sh",
    "sudo /tmp/welcome.sh"
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;welcome.sh&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
echo "Welcome to the Provisioner Demo" | sudo tee /tmp/welcome_msg.txt
uname -a | sudo tee -a /tmp/welcome_msg.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What happens:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Script is copied to EC2&lt;/li&gt;
&lt;li&gt;Script is executed&lt;/li&gt;
&lt;li&gt;Output file is created at &lt;code&gt;/tmp/welcome_msg.txt&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Resources
&lt;/h1&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/DkhAgYa0448"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;h2&gt;
  
  
  &amp;gt;&amp;gt; Connect With Me
&lt;/h2&gt;

&lt;p&gt;If you enjoyed this post or want to follow my #30DaysOfAWSTerraformChallenge journey, feel free to connect with me here:&lt;/p&gt;

&lt;p&gt;💼 LinkedIn: &lt;a href="https://www.linkedin.com/in/amitkushwaha7/" rel="noopener noreferrer"&gt;Amit Kushwaha&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🐙 GitHub: &lt;a href="https://github.com/Amitkushwaha7" rel="noopener noreferrer"&gt;Amit Kushwaha&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;📝 Hashnode: &lt;a href="https://hashnode.com/@amit902" rel="noopener noreferrer"&gt;Amit Kushwaha&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🐦 Twitter/X: &lt;a href="https://x.com/AmitKum43380951" rel="noopener noreferrer"&gt;Amit Kushwaha&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Found this helpful? Drop a ❤️ and follow for more AWS and Terraform tutorials!&lt;/p&gt;

&lt;p&gt;Questions? Drop them in the comments below! 👇&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>devops</category>
      <category>aws</category>
      <category>linux</category>
    </item>
    <item>
      <title>Cloud Cost Optimization Using Boto3: Automating EC2 Management with AWS Lambda</title>
      <dc:creator>Amit Kushwaha</dc:creator>
      <pubDate>Thu, 22 Jan 2026 08:24:35 +0000</pubDate>
      <link>https://forem.com/amit_kumar_7db8e36a64dd45/cloud-cost-optimization-using-boto3-automating-ec2-management-with-aws-lambda-5g86</link>
      <guid>https://forem.com/amit_kumar_7db8e36a64dd45/cloud-cost-optimization-using-boto3-automating-ec2-management-with-aws-lambda-5g86</guid>
      <description>&lt;h2&gt;
  
  
  -&amp;gt;&amp;gt; Introduction
&lt;/h2&gt;

&lt;p&gt;Why do companies migrate from on-premises servers to the cloud?&lt;/p&gt;

&lt;p&gt;Simple reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;High maintenance overhead&lt;/li&gt;
&lt;li&gt;Expensive infrastructure&lt;/li&gt;
&lt;li&gt;Poor scalability&lt;/li&gt;
&lt;li&gt;Operational inefficiency&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Cloud Platforms like &lt;strong&gt;AWS&lt;/strong&gt;, &lt;strong&gt;Azure&lt;/strong&gt;, and &lt;strong&gt;GCP&lt;/strong&gt; promise:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pay-as-you-go pricing&lt;/li&gt;
&lt;li&gt;Elastic scalability&lt;/li&gt;
&lt;li&gt;Managed services&lt;/li&gt;
&lt;li&gt;Faster innovation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Sounds perfect, right?&lt;br&gt;
But here's the reality check:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Migrating to the cloud does not automatically mean your costs will go down.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Cloud cost optimization is a shared responsibility.&lt;/p&gt;

&lt;p&gt;Cloud providers give you powerful tools.&lt;br&gt;
It's &lt;em&gt;your&lt;/em&gt; job to use them responsibly.&lt;/p&gt;

&lt;p&gt;As a DevOps Engineer, one of your core responsibilities is:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Managing resources efficiently and cleaning up unused or stale infrastructure&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And that's exactly what this project is about.&lt;/p&gt;


&lt;h2&gt;
  
  
  The Real Problem
&lt;/h2&gt;

&lt;p&gt;Let's take a very common scenario.&lt;/p&gt;

&lt;p&gt;You spin up an EC2 instance to host an application.&lt;br&gt;
It runs in a &lt;strong&gt;Dev/Test&lt;/strong&gt; environment.&lt;br&gt;
The workday ends...&lt;br&gt;
But the instance keeps running overnight.&lt;br&gt;
And the next night.&lt;br&gt;
And the next week.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Money is burning silently.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now multiply this by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;20 developers&lt;/li&gt;
&lt;li&gt;Multiple AWS regions&lt;/li&gt;
&lt;li&gt;Multiple projects&lt;/li&gt;
&lt;li&gt;Multiple Environments&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Manually stopping instances?&lt;br&gt;
Not scalable.&lt;br&gt;
Not reliable.&lt;br&gt;
Not realistic.&lt;/p&gt;
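&lt;p&gt;A quick back-of-the-envelope calculation shows why this matters. The hourly rate below is an assumption (roughly an on-demand t3.medium in us-east-1), and the helper is purely illustrative:&lt;/p&gt;

```python
# Rough estimate of money burned by idle Dev/Test instances.
# The $0.0416/hr rate is an assumption (about a t3.medium on-demand price);
# substitute your own instance types and counts.

def idle_cost(instances: int, idle_hours_per_day: float,
              hourly_rate: float = 0.0416, days: int = 30) -> float:
    """Estimated monthly cost of instances left running while idle."""
    return round(instances * idle_hours_per_day * hourly_rate * days, 2)

# 20 developers, one instance each, idle 14 hours every night:
print(idle_cost(20, 14))
```

&lt;p&gt;That is hundreds of dollars a month spent on machines doing nothing.&lt;/p&gt;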


&lt;h2&gt;
  
  
  Solution Overview
&lt;/h2&gt;

&lt;p&gt;I built a serverless AWS cost optimization system that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automatically detects non-critical EC2 instances&lt;/li&gt;
&lt;li&gt;Checks business hours&lt;/li&gt;
&lt;li&gt;Analyzes CPU usage&lt;/li&gt;
&lt;li&gt;Stops underutilized machines&lt;/li&gt;
&lt;li&gt;Logs every action&lt;/li&gt;
&lt;li&gt;Sends email alerts&lt;/li&gt;
&lt;li&gt;Runs on a schedule&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All powered by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS Lambda&lt;/li&gt;
&lt;li&gt;Boto3 (Python SDK)&lt;/li&gt;
&lt;li&gt;CloudWatch Metrics&lt;/li&gt;
&lt;li&gt;DynamoDB&lt;/li&gt;
&lt;li&gt;SNS&lt;/li&gt;
&lt;li&gt;EventBridge&lt;/li&gt;
&lt;/ul&gt;



&lt;h2&gt;
  
  
  Why Serverless? (Why Lambda?)
&lt;/h2&gt;

&lt;p&gt;Lambda is perfect for this use case because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No server management&lt;/li&gt;
&lt;li&gt;Pay only for execution time&lt;/li&gt;
&lt;li&gt;Event-driven automation&lt;/li&gt;
&lt;li&gt;Auto scales&lt;/li&gt;
&lt;li&gt;Secure with IAM&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Instead of running a VM 24/7 just to check EC2 usage...&lt;/p&gt;

&lt;p&gt;We let AWS run code only when needed.&lt;/p&gt;

&lt;p&gt;That's peak cloud efficiency.&lt;/p&gt;


&lt;h2&gt;
  
  
  Architecture
&lt;/h2&gt;

&lt;p&gt;Here's how the system works:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fako6olz8a1qx6ejsbbub.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fako6olz8a1qx6ejsbbub.png" alt=" " width="620" height="379"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;EventBridge (Schedule)
        ↓
AWS Lambda (Cost Optimizer)
        ↓
EC2 Instances  ←→  CloudWatch Metrics
        ↓
DynamoDB Logs
        ↓
SNS Email Alerts

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Flow Explanation:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;EventBridge&lt;/strong&gt; triggers Lambda on a Schedule&lt;/li&gt;
&lt;li&gt;Lambda scans EC2 instances using tags&lt;/li&gt;
&lt;li&gt;Lambda checks:

&lt;ul&gt;
&lt;li&gt;Business hours&lt;/li&gt;
&lt;li&gt;CPU utilization&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;If eligible, stops the EC2 instance&lt;/li&gt;
&lt;li&gt;Logs action into DynamoDB&lt;/li&gt;
&lt;li&gt;Sends email alert using SNS&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Tag-Based Safety Layer
&lt;/h2&gt;

&lt;p&gt;One of the smartest design choices was tag-based filtering.&lt;/p&gt;

&lt;p&gt;Only EC2 instances with these tags are processed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;AutoStop&lt;/code&gt; = &lt;code&gt;Yes&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Environment&lt;/code&gt; = &lt;code&gt;Dev&lt;/code&gt; or &lt;code&gt;Test&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Critical&lt;/code&gt; = &lt;code&gt;No&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This ensures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Production systems are never touched.&lt;/li&gt;
&lt;li&gt;Critical workloads are protected.&lt;/li&gt;
&lt;li&gt;Only disposable environments are optimized.&lt;/li&gt;
&lt;/ul&gt;
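&lt;p&gt;The filtering rule can be sketched as a single predicate. This is a sketch (the helper name is mine, not part of the Lambda code), but the tag keys and values match the table above:&lt;/p&gt;

```python
# Sketch of the tag-based safety filter. Tag keys/values mirror the table
# above; should_process() is an illustrative helper, not the production code.

def should_process(tags: dict) -> bool:
    """True only for disposable, explicitly opted-in instances."""
    return (
        tags.get("AutoStop") == "Yes"
        and tags.get("Environment") in ("Dev", "Test")
        and tags.get("Critical") == "No"
    )

print(should_process({"AutoStop": "Yes", "Environment": "Dev", "Critical": "No"}))
print(should_process({"AutoStop": "Yes", "Environment": "Prod", "Critical": "No"}))
```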


&lt;h2&gt;
  
  
  Time-Based Optimization
&lt;/h2&gt;

&lt;p&gt;Business hours rules:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;If current time &amp;gt;= 8 PM OR &amp;lt; 8 AM
→ instance is eligible for stopping
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Time is calculated using a configurable time zone, so the system works globally.&lt;/p&gt;
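&lt;p&gt;The rule can be sketched as a tiny helper. The real Lambda localizes the clock with pytz and a &lt;code&gt;TIMEZONE&lt;/code&gt; variable; this sketch only applies the 8 PM / 8 AM window:&lt;/p&gt;

```python
# Sketch of the business-hours window from the rule above.
from datetime import datetime

def is_after_hours(now: datetime) -> bool:
    """True outside the 8 AM - 8 PM working window."""
    return now.hour >= 20 or now.hour < 8

print(is_after_hours(datetime(2026, 1, 22, 21, 0)))  # 9 PM
print(is_after_hours(datetime(2026, 1, 22, 12, 0)))  # noon
```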




&lt;h2&gt;
  
  
  CPU-Based Optimization
&lt;/h2&gt;

&lt;p&gt;Even during business hours, some machines are just... idle.&lt;/p&gt;

&lt;p&gt;Lambda fetches CPU metrics from CloudWatch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Average CPU &amp;lt; CPU_THRESHOLD
→ instance is eligible for stopping

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This eliminates waste from zombie EC2 instances that are doing absolutely nothing.&lt;/p&gt;
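&lt;p&gt;The CPU rule, including the "no datapoints" fallback, can be sketched like this (an illustrative helper, not the exact Lambda code):&lt;/p&gt;

```python
# A brand-new instance with no CloudWatch datapoints yet is left alone
# rather than stopped - the safe fallback.
def eligible_by_cpu(avg_cpu, threshold: float) -> bool:
    return avg_cpu is not None and avg_cpu < threshold

print(eligible_by_cpu(4.2, 10))   # idle box
print(eligible_by_cpu(None, 10))  # no metrics yet
```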




&lt;h2&gt;
  
  
  DRY_RUN Mode
&lt;/h2&gt;

&lt;p&gt;One of the most important features:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DRY_RUN = true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this mode:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lambda does NOT stop EC2&lt;/li&gt;
&lt;li&gt;Logs and alerts still run&lt;/li&gt;
&lt;li&gt;You can safely test logic&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When ready:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DRY_RUN = false
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now Lambda actually stops EC2 instances.&lt;br&gt;
This prevents accidental disasters.&lt;/p&gt;
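&lt;p&gt;A defensive way to read the flag (an illustrative variant, not the exact code from the Lambda) is to default to dry-run, so a missing or misspelled variable can never stop instances:&lt;/p&gt;

```python
import os

# Default to dry-run when DRY_RUN is missing or misspelled, so the safe
# mode is also the failure mode. Illustrative helper, not the exact code.
def dry_run_enabled(env=os.environ) -> bool:
    return env.get("DRY_RUN", "true").strip().lower() != "false"

print(dry_run_enabled({}))                    # missing var -> safe
print(dry_run_enabled({"DRY_RUN": "false"}))  # stops enabled
```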


&lt;h2&gt;
  
  
  Core Lambda Code
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
import datetime
import os
import pytz

ec2 = boto3.client('ec2')
cloudwatch = boto3.client('cloudwatch')
sns = boto3.client('sns')
ddb = boto3.resource('dynamodb')

TABLE_NAME = os.environ['TABLE_NAME']
SNS_TOPIC_ARN = os.environ['SNS_TOPIC_ARN']
DRY_RUN = os.environ['DRY_RUN'].lower() == "true"
CPU_THRESHOLD = int(os.environ['CPU_THRESHOLD'])
TIMEZONE = os.environ['TIMEZONE']

table = ddb.Table(TABLE_NAME)

def get_cpu_utilization(instance_id):
    response = cloudwatch.get_metric_statistics(
        Namespace='AWS/EC2',
        MetricName='CPUUtilization',
        Dimensions=[{'Name': 'InstanceId', 'Value': instance_id}],
        StartTime=datetime.datetime.utcnow() - datetime.timedelta(minutes=30),
        EndTime=datetime.datetime.utcnow(),
        Period=300,
        Statistics=['Average']
    )

    if not response['Datapoints']:
        return None

    latest = sorted(response['Datapoints'], key=lambda x: x['Timestamp'])[-1]
    return latest['Average']

def lambda_handler(event, context):
    tz = pytz.timezone(TIMEZONE)
    now = datetime.datetime.now(tz)
    hour = now.hour

    instances = ec2.describe_instances(Filters=[
        {'Name': 'tag:AutoStop', 'Values': ['Yes']},
        {'Name': 'tag:Environment', 'Values': ['Dev', 'Test']},
        {'Name': 'tag:Critical', 'Values': ['No']},
        {'Name': 'instance-state-name', 'Values': ['running']}
    ])

    for res in instances['Reservations']:
        for inst in res['Instances']:
            instance_id = inst['InstanceId']
            cpu = get_cpu_utilization(instance_id)

            reason = None
            if hour &amp;gt;= 20 or hour &amp;lt; 8:
                reason = "After hours"
            elif cpu is not None and cpu &amp;lt; CPU_THRESHOLD:
                reason = f"Low CPU: {cpu}%"

            if reason:
                if not DRY_RUN:
                    ec2.stop_instances(InstanceIds=[instance_id])

                table.put_item(Item={
                    "InstanceId": instance_id,
                    "Timestamp": str(now),
                    "Reason": reason
                })

                sns.publish(
                    TopicArn=SNS_TOPIC_ARN,
                    Subject="EC2 Optimization Alert",
                    Message=f"Stopped {instance_id} due to {reason}"
                )

    return {"statusCode": 200, "body": "Optimization complete"}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  DynamoDB Logging
&lt;/h2&gt;

&lt;p&gt;Each stop action is logged:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "InstanceId": "i-0abc12345",
  "Timestamp": "2026-01-22 20:15:04",
  "Reason": "Low CPU: 4.2%"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This gives:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Audit history&lt;/li&gt;
&lt;li&gt;Compliance data&lt;/li&gt;
&lt;li&gt;Debugging visibility&lt;/li&gt;
&lt;li&gt;Cost optimization reports&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  SNS Notifications
&lt;/h2&gt;

&lt;p&gt;Every action triggers an email alert:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Subject:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;EC2 Optimization Alert
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Message:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Stopped i-0abc12345 due to After hours
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This ensures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Transparency&lt;/li&gt;
&lt;li&gt;Human awareness&lt;/li&gt;
&lt;li&gt;Manual override if needed&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Automation with EventBridge
&lt;/h2&gt;

&lt;p&gt;EventBridge schedules Lambda:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;EC2AutoStopRule&lt;/strong&gt;: runs every hour (or at 8 PM) to stop eligible instances&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;EC2AutoStartRule&lt;/strong&gt;: starts instances at 8 AM&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now the system is fully hands-free.&lt;/p&gt;
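&lt;p&gt;EventBridge schedules for rules like these are typically written as cron expressions. The exact strings are not listed in this post, so the hours below are assumptions matching the 8 PM stop / 8 AM start window:&lt;/p&gt;

```python
# EventBridge cron fields: minute hour day-of-month month day-of-week year.
# The specific hours are assumptions based on the 8 PM / 8 AM window above.
def daily_cron(hour: int) -> str:
    return f"cron(0 {hour} * * ? *)"

print(daily_cron(20))  # EC2AutoStopRule
print(daily_cron(8))   # EC2AutoStartRule
```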




&lt;h2&gt;
  
  
  Challenges I Faced
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Runtime mismatch&lt;/strong&gt;&lt;br&gt;
A Python 3.14 Lambda with a Python 3.10 layer crashed.&lt;br&gt;
&lt;strong&gt;Fix:&lt;/strong&gt; matched the runtime versions&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;IAM permission hell&lt;/strong&gt;&lt;br&gt;
Missing permissions broke EC2 stop, DynamoDB logging, and SNS alerts.&lt;br&gt;
&lt;strong&gt;Fix:&lt;/strong&gt; attached least-privilege IAM policies&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Environment variable bugs&lt;/strong&gt;&lt;br&gt;
Wrong variable names caused KeyErrors.&lt;br&gt;
&lt;strong&gt;Fix:&lt;/strong&gt; standardized env vars (&lt;code&gt;TABLE_NAME&lt;/code&gt;, &lt;code&gt;SNS_TOPIC_ARN&lt;/code&gt;)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Lambda working but EC2 not stopping&lt;/strong&gt;&lt;br&gt;
It turned out &lt;code&gt;DRY_RUN&lt;/code&gt; was still true.&lt;br&gt;
&lt;strong&gt;Fix:&lt;/strong&gt; flipped it to false&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;No CPU metrics available&lt;/strong&gt;&lt;br&gt;
CloudWatch had no datapoints.&lt;br&gt;
&lt;strong&gt;Fix:&lt;/strong&gt; added safe fallback logic&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;Cloud cost optimization is not just about cutting bills - it’s about building responsible, automated, and scalable systems.&lt;/p&gt;

&lt;p&gt;In this project, we proved that with the right mix of AWS serverless services and Python automation, it’s possible to create a production-grade cost optimization system that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automatically stops non-essential EC2 instances&lt;/li&gt;
&lt;li&gt;Protects critical workloads using tags&lt;/li&gt;
&lt;li&gt;Makes smart decisions using time and CPU metrics&lt;/li&gt;
&lt;li&gt;Logs every action for audit and visibility&lt;/li&gt;
&lt;li&gt;Sends real-time alerts for transparency&lt;/li&gt;
&lt;li&gt;Runs fully hands-free using EventBridge&lt;/li&gt;
&lt;li&gt;Includes a DRY_RUN safety mode to prevent accidents&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This approach eliminates manual intervention, reduces human error, and ensures that cloud resources are used only when they are truly needed.&lt;/p&gt;

&lt;p&gt;By combining Lambda, Boto3, CloudWatch, DynamoDB, SNS, and EventBridge, we created a lightweight yet powerful solution that can be easily extended to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Auto-start instances in the morning&lt;/li&gt;
&lt;li&gt;Optimize EBS volumes and snapshots&lt;/li&gt;
&lt;li&gt;Integrate Slack or Teams notifications&lt;/li&gt;
&lt;li&gt;Track cost trends using AWS Cost Explorer&lt;/li&gt;
&lt;li&gt;Manage multi-account AWS environments&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Resources:
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/Amitkushwaha7/cloud-cost-optimization.git" rel="noopener noreferrer"&gt;Github Repository&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/Amitkushwaha7/cloud-cost-optimization/blob/cdeb5d6c5d2cfe97e4ca8044264fb7bb8492a0d2/Demo.md" rel="noopener noreferrer"&gt;Project Demo&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.linkedin.com/in/amitkushwaha7/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://x.com/AmitKum43380951" rel="noopener noreferrer"&gt;X&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>automation</category>
      <category>aws</category>
      <category>devops</category>
      <category>python</category>
    </item>
    <item>
      <title>-&gt;&gt; Day-18 Image Processing Serverless Project using AWS Lambda (with terraform)</title>
      <dc:creator>Amit Kushwaha</dc:creator>
      <pubDate>Tue, 13 Jan 2026 22:12:33 +0000</pubDate>
      <link>https://forem.com/amit_kumar_7db8e36a64dd45/image-processing-serverless-project-using-aws-lambda-with-terraform-53je</link>
      <guid>https://forem.com/amit_kumar_7db8e36a64dd45/image-processing-serverless-project-using-aws-lambda-with-terraform-53je</guid>
      <description>&lt;p&gt;In this tutorial, I'll show you how to build a production-ready serverless image processing pipeline that automatically creates multiple image variants when you upload a photo to S3.&lt;/p&gt;

&lt;h2&gt;
  
  
  What we'll build:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Automatic image processing triggered by S3 uploads&lt;/li&gt;
&lt;li&gt;5 different image variants (compressed, low-quality, WebP, PNG, thumbnail)&lt;/li&gt;
&lt;li&gt;Email notifications via SNS&lt;/li&gt;
&lt;li&gt;Complete Infrastructure as Code using Terraform&lt;/li&gt;
&lt;li&gt;Cross-platform Lambda Layer build with Docker&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Tech Stack:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;AWS Lambda (Python 3.12)&lt;/li&gt;
&lt;li&gt;AWS S3 (storage)&lt;/li&gt;
&lt;li&gt;AWS SNS (notifications)&lt;/li&gt;
&lt;li&gt;Terraform (infrastructure)&lt;/li&gt;
&lt;li&gt;Docker (Lambda layer build)&lt;/li&gt;
&lt;li&gt;Pillow (image processing)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Architecture Overview
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbnrk740kvo1i25wpywqr.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbnrk740kvo1i25wpywqr.jpg" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The flow is simple:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;User uploads an image to the Source S3 Bucket&lt;/li&gt;
&lt;li&gt;S3 event triggers the Lambda Function&lt;/li&gt;
&lt;li&gt;Lambda (with Pillow layer) processes the image into 5 variants&lt;/li&gt;
&lt;li&gt;Processed images are saved to the Destination S3 Bucket&lt;/li&gt;
&lt;li&gt;SNS sends an email notification with processing details&lt;/li&gt;
&lt;li&gt;CloudWatch logs everything for monitoring&lt;/li&gt;
&lt;/ol&gt;
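&lt;p&gt;For step 2, the S3 event the Lambda receives looks roughly like this, trimmed to the two fields the handler actually reads (the bucket name and key here are made up):&lt;/p&gt;

```python
# Minimal sketch of an S3 ObjectCreated event payload, trimmed to the
# fields the handler uses. Bucket name and key are illustrative.
event = {
    "Records": [{
        "s3": {
            "bucket": {"name": "my-upload-bucket"},
            "object": {"key": "photos/cat.jpg"},
        }
    }]
}

record = event["Records"][0]
bucket = record["s3"]["bucket"]["name"]
key = record["s3"]["object"]["key"]
print(bucket, key)
```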

&lt;h2&gt;
  
  
  Why This Architecture?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Serverless Benefits&lt;/strong&gt;&lt;br&gt;
No Server Management&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No EC2 instances to maintain&lt;/li&gt;
&lt;li&gt;No patching or updates&lt;/li&gt;
&lt;li&gt;Automatic scaling&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cost-Effective&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pay only for execution time&lt;/li&gt;
&lt;li&gt;~$0.14/month for 1,000 images&lt;/li&gt;
&lt;li&gt;Free tier covers most small projects&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Event-Driven&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automatic processing on upload&lt;/li&gt;
&lt;li&gt;No polling or cron jobs needed&lt;/li&gt;
&lt;li&gt;Real-time processing&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before we start, make sure you have:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;AWS Account with CLI configured
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws configure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol start="2"&gt;
&lt;li&gt;Terraform (v1.0+)
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform --version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol start="3"&gt;
&lt;li&gt;Docker Desktop (running)
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker info
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol start="4"&gt;
&lt;li&gt;Basic knowledge of:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;AWS services (S3, Lambda, SNS)&lt;/li&gt;
&lt;li&gt;Terraform Basics&lt;/li&gt;
&lt;li&gt;Python&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Project Structure
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Day-18/
├── Assets/
│   ├── architecture-diagram.jpg
│   └── ... (screenshots)
├── lambda/
│   ├── lambda_function.py
│   └── requirements.txt
├── scripts/
│   ├── build_layer_docker.sh
│   ├── deploy.sh
│   └── destroy.sh
├── terraform/
│   ├── main.tf
│   ├── variables.tf
│   ├── outputs.tf
│   ├── provider.tf
│   └── terraform.tfvars.example
└── Readme.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;h2&gt;
  
  
  Step 1: The Lambda Function
&lt;/h2&gt;

&lt;p&gt;Let's start with the core: the Lambda function that processes images.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Image Processor&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
import boto3
import os
from PIL import Image
from io import BytesIO
import uuid

s3_client = boto3.client('s3')
sns_client = boto3.client('sns')

def lambda_handler(event, context):
    """Process uploaded images into multiple variants"""

    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']

        # Download image
        response = s3_client.get_object(Bucket=bucket, Key=key)
        image_data = response['Body'].read()

        # Process image
        processed_images = process_image(image_data, key)

        # Upload variants
        processed_bucket = os.environ['PROCESSED_BUCKET']
        for img in processed_images:
            s3_client.put_object(
                Bucket=processed_bucket,
                Key=img['key'],
                Body=img['data'],
                ContentType=img['content_type']
            )

        # Send notification
        send_notification(key, processed_images, processed_bucket)

    return {'statusCode': 200}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Creating Image Variants&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def process_image(image_data, original_key):
    """Create 5 variants of the image"""
    processed_images = []
    image = Image.open(BytesIO(image_data))

    # Convert RGBA to RGB for JPEG compatibility
    if image.mode in ('RGBA', 'LA', 'P'):
        background = Image.new('RGB', image.size, (255, 255, 255))
        if image.mode == 'P':
            image = image.convert('RGBA')
        background.paste(image, mask=image.split()[-1])
        image = background

    # Auto-resize large images
    if image.size[0] &amp;gt; 4096 or image.size[1] &amp;gt; 4096:
        ratio = min(4096 / image.size[0], 4096 / image.size[1])
        new_size = (int(image.size[0] * ratio), int(image.size[1] * ratio))
        image = image.resize(new_size, Image.Resampling.LANCZOS)

    base_name = os.path.splitext(original_key)[0]
    unique_id = str(uuid.uuid4())[:8]

    # Create variants
    variants = [
        {'format': 'JPEG', 'quality': 85, 'suffix': 'compressed'},
        {'format': 'JPEG', 'quality': 60, 'suffix': 'low'},
        {'format': 'WEBP', 'quality': 85, 'suffix': 'webp'},
        {'format': 'PNG', 'quality': None, 'suffix': 'png'}
    ]

    for variant in variants:
        output = BytesIO()
        if variant['quality']:
            image.save(output, format=variant['format'], 
                      quality=variant['quality'], optimize=True)
        else:
            image.save(output, format=variant['format'], optimize=True)

        output.seek(0)
        extension = variant['format'].lower()
        if extension == 'jpeg':
            extension = 'jpg'

        processed_images.append({
            'key': f"{base_name}_{variant['suffix']}_{unique_id}.{extension}",
            'data': output.getvalue(),
            'content_type': f"image/{variant['format'].lower()}"
        })

    # Create thumbnail
    thumbnail = image.copy()
    thumbnail.thumbnail((300, 300), Image.Resampling.LANCZOS)
    thumb_output = BytesIO()
    thumbnail.save(thumb_output, format='JPEG', quality=80, optimize=True)
    thumb_output.seek(0)

    processed_images.append({
        'key': f"{base_name}_thumbnail_{unique_id}.jpg",
        'data': thumb_output.getvalue(),
        'content_type': 'image/jpeg'
    })

    return processed_images
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
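&lt;p&gt;To make the naming concrete, here is how one variant key is composed, using &lt;code&gt;os.path.splitext&lt;/code&gt; the same way &lt;code&gt;process_image()&lt;/code&gt; does. The uuid suffix is random in practice; it is fixed here for clarity:&lt;/p&gt;

```python
import os

base_name = os.path.splitext("photos/cat.jpg")[0]  # 'photos/cat'
unique_id = "1a2b3c4d"                             # stand-in for uuid4()[:8]
key = f"{base_name}_compressed_{unique_id}.jpg"
print(key)
```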



&lt;h2&gt;
  
  
  Step 2: Building the Lambda Layer
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The Docker Challenge&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Problem:&lt;/strong&gt; AWS Lambda runs on Linux, but you might be developing on Windows or Mac. The Pillow library has C dependencies that must be compiled for the target OS.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt; Use Docker to create a Linux environment and build the layer there.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Build Script&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
set -e

echo "🚀 Building Lambda Layer with Pillow using Docker..."

SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" &amp;amp;&amp;amp; pwd )"
PROJECT_DIR="$( cd "$SCRIPT_DIR/.." &amp;amp;&amp;amp; pwd )"
TERRAFORM_DIR="$PROJECT_DIR/terraform"

# Check Docker is running
if ! docker info &amp;amp;&amp;gt; /dev/null 2&amp;gt;&amp;amp;1; then
    echo "❌ Docker is not running. Please start Docker first."
    exit 1
fi

# Get Windows-compatible path
if command -v cygpath &amp;amp;&amp;gt; /dev/null; then
    DOCKER_MOUNT_PATH=$(cygpath -w "$TERRAFORM_DIR")
elif [[ -n "$WINDIR" ]]; then
    DOCKER_MOUNT_PATH=$(cd "$TERRAFORM_DIR" &amp;amp;&amp;amp; pwd -W 2&amp;gt;/dev/null || pwd)
else
    DOCKER_MOUNT_PATH="$TERRAFORM_DIR"
fi

# Build layer in Linux container
docker run --rm \
  --platform linux/amd64 \
  -v "$DOCKER_MOUNT_PATH":/output \
  python:3.12-slim \
  bash -c "
    pip install --quiet Pillow==10.4.0 -t /tmp/python/lib/python3.12/site-packages/ &amp;amp;&amp;amp; \
    cd /tmp &amp;amp;&amp;amp; \
    apt-get update -qq &amp;amp;&amp;amp; apt-get install -y -qq zip &amp;gt; /dev/null 2&amp;gt;&amp;amp;1 &amp;amp;&amp;amp; \
    zip -q -r pillow_layer.zip python/ &amp;amp;&amp;amp; \
    cp pillow_layer.zip /output/ &amp;amp;&amp;amp; \
    echo '✅ Layer built successfully!'
  "

echo "📍 Location: $TERRAFORM_DIR/pillow_layer.zip"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 3: Terraform Infrastructure
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Main Infrastructure (main.tf)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# S3 Upload Bucket
resource "aws_s3_bucket" "upload_bucket" {
  bucket        = local.upload_bucket_name
  force_destroy = true  # Allows easy cleanup
}

resource "aws_s3_bucket_versioning" "upload_bucket" {
  bucket = aws_s3_bucket.upload_bucket.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "upload_bucket" {
  bucket = aws_s3_bucket.upload_bucket.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

# S3 Processed Bucket
resource "aws_s3_bucket" "processed_bucket" {
  bucket        = local.processed_bucket_name
  force_destroy = true
}

# Lambda Function
resource "aws_lambda_function" "image_processor" {
  filename         = data.archive_file.lambda_zip.output_path
  function_name    = local.lambda_function_name
  role             = aws_iam_role.lambda_role.arn
  handler          = "lambda_function.lambda_handler"
  runtime          = "python3.12"
  timeout          = var.lambda_timeout
  memory_size      = var.lambda_memory_size
  layers           = [aws_lambda_layer_version.pillow_layer.arn]

  environment {
    variables = {
      PROCESSED_BUCKET = aws_s3_bucket.processed_bucket.id
      SNS_TOPIC_ARN    = var.notification_email != "" ? aws_sns_topic.processing_notifications[0].arn : ""
    }
  }
}

# Lambda Layer
resource "aws_lambda_layer_version" "pillow_layer" {
  filename            = "${path.module}/pillow_layer.zip"
  layer_name          = "${var.project_name}-pillow-layer"
  compatible_runtimes = ["python3.12"]
  description         = "Pillow library for image processing"
}

# S3 Event Trigger
resource "aws_s3_bucket_notification" "upload_bucket_notification" {
  bucket = aws_s3_bucket.upload_bucket.id

  lambda_function {
    lambda_function_arn = aws_lambda_function.image_processor.arn
    events              = ["s3:ObjectCreated:*"]
  }

  depends_on = [aws_lambda_permission.allow_s3]
}

# SNS Topic
resource "aws_sns_topic" "processing_notifications" {
  count        = var.notification_email != "" ? 1 : 0
  name         = "${var.project_name}-${var.environment}-notifications"
  display_name = "Image Processing Notifications"
}

resource "aws_sns_topic_subscription" "email_subscription" {
  count     = var.notification_email != "" ? 1 : 0
  topic_arn = aws_sns_topic.processing_notifications[0].arn
  protocol  = "email"
  endpoint  = var.notification_email
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;IAM Permissions&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_iam_role" "lambda_role" {
  name = "${local.lambda_function_name}-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Principal = {
        Service = "lambda.amazonaws.com"
      }
    }]
  })
}

resource "aws_iam_role_policy" "lambda_policy" {
  name = "${local.lambda_function_name}-policy"
  role = aws_iam_role.lambda_role.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "logs:CreateLogGroup",
          "logs:CreateLogStream",
          "logs:PutLogEvents"
        ]
        Resource = "arn:aws:logs:${var.aws_region}:*:*"
      },
      {
        Effect = "Allow"
        Action = [
          "s3:GetObject",
          "s3:GetObjectVersion"
        ]
        Resource = "${aws_s3_bucket.upload_bucket.arn}/*"
      },
      {
        Effect = "Allow"
        Action = [
          "s3:PutObject",
          "s3:PutObjectAcl"
        ]
        Resource = "${aws_s3_bucket.processed_bucket.arn}/*"
      },
      {
        Effect = "Allow"
        Action = ["sns:Publish"]
        Resource = var.notification_email != "" ? aws_sns_topic.processing_notifications[0].arn : "*"
      }
    ]
  })
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 4: Deployment
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Configuration&lt;/strong&gt;&lt;br&gt;
Create &lt;code&gt;terraform.tfvars&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws_region         = "us-east-1"
environment        = "dev"
project_name       = "serverless-image-processor"
lambda_timeout     = 60
lambda_memory_size = 1024
notification_email = "your-email@example.com"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Deploy with Scripts&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# 1. Build Lambda Layer
cd scripts
./build_layer_docker.sh

# 2. Deploy Infrastructure
./deploy.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Manual Deployment&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# 1. Build layer
cd scripts
./build_layer_docker.sh

# 2. Initialize Terraform
cd ../terraform
terraform init

# 3. Plan
terraform plan

# 4. Apply
terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Testing the Pipeline&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Confirm SNS Subscription&lt;/strong&gt;&lt;br&gt;
Check your email for the AWS SNS confirmation and click "Confirm subscription".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsn8j52obyjrmowj9c8p4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsn8j52obyjrmowj9c8p4.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd6frfmhhm6hpp4pmu66q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd6frfmhhm6hpp4pmu66q.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Upload a Test Image&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Get bucket name
terraform output upload_bucket_name

# Upload image
aws s3 cp test-image.jpg s3://YOUR-UPLOAD-BUCKET/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Check Processed Images&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# List processed variants
aws s3 ls s3://YOUR-PROCESSED-BUCKET/ --recursive
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Expected output:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;test-image_compressed_a1b2c3d4.jpg
test-image_low_a1b2c3d4.jpg
test-image_webp_a1b2c3d4.webp
test-image_png_a1b2c3d4.png
test-image_thumbnail_a1b2c3d4.jpg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Lessons:
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Docker is Essential for Lambda Layers&lt;/strong&gt;&lt;br&gt;
Initially, I tried installing Pillow directly on Windows. The layer worked locally but failed on Lambda with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Unable to import module 'lambda_function': No module named '_imaging'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt; Always use Docker to build layers for Lambda, regardless of your development OS.&lt;/p&gt;
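A minimal sketch of what a layer-build script like <code>build_layer_docker.sh</code> does (the SAM build image name and output path here are assumptions — the actual script in the repo may differ):

```shell
#!/usr/bin/env bash
# Sketch: build a Pillow Lambda layer inside a Lambda-compatible container.
# Assumes Docker is installed; the SAM build image matches python3.12 on Lambda,
# so Pillow's compiled extensions (the '_imaging' module) are built for Linux.
set -euo pipefail

rm -rf python pillow_layer.zip
docker run --rm -v "$PWD":/var/task public.ecr.aws/sam/build-python3.12 \
  /bin/sh -c "pip install pillow -t python/"

# Lambda layers must place packages under a top-level python/ directory.
# Copy the resulting zip to wherever your Terraform module expects it.
zip -r pillow_layer.zip python/
```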

&lt;p&gt;&lt;strong&gt;2. Force Destroy is Your Friend (in Dev)&lt;/strong&gt;&lt;br&gt;
Without &lt;code&gt;force_destroy = true&lt;/code&gt; on S3 buckets, &lt;code&gt;terraform destroy&lt;/code&gt; fails if buckets contain objects.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_s3_bucket" "upload_bucket" {
  bucket        = local.upload_bucket_name
  force_destroy = true  # Enables easy cleanup
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Warning:&lt;/strong&gt; Never use this in production!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Image Format Conversion is Tricky&lt;/strong&gt;&lt;br&gt;
JPEG doesn't support transparency. Converting RGBA images directly to JPEG results in black backgrounds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution:&lt;/strong&gt; Create a white background and paste the image onto it, using the alpha channel as the mask:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if image.mode == 'P':
    image = image.convert('RGBA')  # palette images need a real alpha band
if image.mode in ('RGBA', 'LA'):
    background = Image.new('RGB', image.size, (255, 255, 255))
    background.paste(image, mask=image.split()[-1])
    image = background
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
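As a self-contained illustration (the helper name <code>flatten_to_rgb</code> is mine, not from the Lambda source), the whole flattening step looks like this with Pillow:

```python
from PIL import Image

def flatten_to_rgb(image, background_color=(255, 255, 255)):
    """Return an RGB copy of `image`, compositing transparency onto a solid color."""
    if image.mode == "P":
        # Palette images may carry transparency; promote them to RGBA first
        # so split() yields a real alpha band usable as a paste mask.
        image = image.convert("RGBA")
    if image.mode in ("RGBA", "LA"):
        background = Image.new("RGB", image.size, background_color)
        background.paste(image, mask=image.split()[-1])  # last band is alpha
        return background
    return image.convert("RGB")

# Demo: a half-transparent red square blends toward white, not black.
rgba = Image.new("RGBA", (4, 4), (255, 0, 0, 128))
flat = flatten_to_rgb(rgba)
```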



&lt;p&gt;&lt;strong&gt;4. SNS Requires Email Confirmation&lt;/strong&gt;&lt;br&gt;
SNS subscriptions aren't active until the user confirms via email. Make sure to mention this in documentation!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Unique Filenames Prevent Conflicts&lt;/strong&gt;&lt;br&gt;
Using UUIDs in filenames prevents overwriting when processing multiple images with the same name:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;unique_id = str(uuid.uuid4())[:8]
output_key = f"{base_name}_{variant['suffix']}_{unique_id}.{extension}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
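Putting that snippet in context, a naming helper along these lines (the function name is illustrative) keeps every processed variant unique:

```python
import uuid
from pathlib import PurePosixPath

def make_output_key(source_key, suffix, extension):
    """Build an S3 key like "name_variant_1a2b3c4d.ext" so repeated uploads
    of identically named files never overwrite each other."""
    base_name = PurePosixPath(source_key).stem
    unique_id = str(uuid.uuid4())[:8]
    return f"{base_name}_{suffix}_{unique_id}.{extension}"

key = make_output_key("uploads/test-image.jpg", "thumbnail", "jpg")
# e.g. "test-image_thumbnail_9f3a1c2e.jpg"
```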



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;We've built a production-ready serverless image processing pipeline that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automatically processes images on upload&lt;/li&gt;
&lt;li&gt;Creates 5 optimized variants&lt;/li&gt;
&lt;li&gt;Sends email notifications&lt;/li&gt;
&lt;li&gt;Costs less than $0.15/month for 1,000 images&lt;/li&gt;
&lt;li&gt;Scales automatically&lt;/li&gt;
&lt;li&gt;Requires zero server management&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Resources:
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/Amitkushwaha7/TerraformFullCourse/tree/adc5bfd9c6c1fc4cec2a55a682ca8a1c5efb3d96/Day-18" rel="noopener noreferrer"&gt;Github Repository&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/lambda/" rel="noopener noreferrer"&gt;AWS Lambda&lt;/a&gt;&lt;br&gt;
&lt;a href="https://pillow.readthedocs.io/en/stable/" rel="noopener noreferrer"&gt;Pillow Docs&lt;/a&gt;&lt;br&gt;
&lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs" rel="noopener noreferrer"&gt;Terraform AWS Provider&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Reference:
&lt;/h2&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/l0RYCxczgyk"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;h2&gt;
  
  
  &amp;gt;&amp;gt; Connect With Me
&lt;/h2&gt;

&lt;p&gt;If you enjoyed this post or want to follow my #30DaysOfAWSTerraformChallenge journey, feel free to connect with me here:&lt;/p&gt;

&lt;p&gt;💼 LinkedIn: &lt;a href="https://www.linkedin.com/in/amitkushwaha7/" rel="noopener noreferrer"&gt;Amit Kushwaha&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🐙 GitHub: &lt;a href="https://github.com/Amitkushwaha7" rel="noopener noreferrer"&gt;Amit Kushwaha&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;📝 Hashnode: &lt;a href="https://hashnode.com/@amit902" rel="noopener noreferrer"&gt;Amit Kushwaha&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🐦 Twitter/X: &lt;a href="https://x.com/AmitKum43380951" rel="noopener noreferrer"&gt;Amit Kushwaha&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Found this helpful? Drop a ❤️ and follow for more AWS and Terraform tutorials!&lt;/p&gt;

&lt;p&gt;Questions? Drop them in the comments below! 👇&lt;/p&gt;

</description>
      <category>aws</category>
      <category>python</category>
      <category>serverless</category>
      <category>terraform</category>
    </item>
    <item>
      <title>-&gt;&gt; Day-17 AWS Terraform Blue-Green Deployment Using Elastic Beanstalk</title>
      <dc:creator>Amit Kushwaha</dc:creator>
      <pubDate>Sun, 11 Jan 2026 08:16:36 +0000</pubDate>
      <link>https://forem.com/amit_kumar_7db8e36a64dd45/aws-terraform-blue-green-deployment-using-elastic-beanstalk-5647</link>
      <guid>https://forem.com/amit_kumar_7db8e36a64dd45/aws-terraform-blue-green-deployment-using-elastic-beanstalk-5647</guid>
      <description>&lt;h2&gt;
  
  
  Introduction:
&lt;/h2&gt;

&lt;p&gt;In a traditional approach to application deployment, you typically fix a failed deployment by redeploying an earlier, stable version of the application. Redeployment in traditional data centers is typically done on the same set of resources due to the cost and effort of provisioning additional resources. Although this approach works, it has many shortcomings. Rollback isn’t easy because it’s implemented by redeployment of an earlier version from scratch. This process takes time, making the application potentially unavailable for long periods. Even in situations where the application is only impaired, a rollback is required, which overwrites the faulty version. As a result, you have no opportunity to debug the faulty application in place.&lt;/p&gt;

&lt;p&gt;Applying the principles of agility, scalability, utility consumption, as well as the automation capabilities of Amazon Web Services can shift the paradigm of application deployment. This enables a better deployment technique called &lt;em&gt;blue/green deployment&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Blue/Green Deployment Methodology:
&lt;/h2&gt;

&lt;p&gt;Blue/green deployments provide releases with near zero-downtime and rollback capabilities. The fundamental idea behind blue/green deployment is to shift traffic between two identical environments that are running different versions of your application. The blue environment represents the current application version serving production traffic. In parallel, the green environment is staged running a different version of your application. After the green environment is ready and tested, production traffic is redirected from blue to green. If any problems are identified, you can roll back by reverting traffic back to the blue environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture diagram
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fejnqg9do6xserojvpcos.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fejnqg9do6xserojvpcos.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits of Blue/Green Deployment:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Zero Downtime:&lt;/strong&gt; Seamless traffic transition ensures uninterrupted service.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Safe Rollbacks:&lt;/strong&gt; Easy reversion to the Blue environment if issues arise.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Efficient Testing:&lt;/strong&gt; Allows comprehensive testing of the new environment without affecting live users.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability:&lt;/strong&gt; Works well for both monolithic and microservices architectures.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;AWS Elastic Beanstalk:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Because AWS Elastic Beanstalk performs an in-place update when you update your application version, your application might become unavailable to users for a short period of time. To avoid this, perform a blue/green deployment: deploy the new version to a separate environment, and then swap the CNAMEs of the two environments to redirect traffic to the new version instantly.&lt;/p&gt;

&lt;p&gt;To validate this approach, &lt;strong&gt;I tested and deployed a blue-green deployment with a JavaScript application.&lt;/strong&gt; This hands-on implementation ensured real-world feasibility and reliability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Project Highlights:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tech Stack: Terraform, Elastic Beanstalk, EC2, S3.&lt;/li&gt;
&lt;li&gt;Goal: Achieve zero downtime while updating my application.&lt;/li&gt;
&lt;li&gt;Key Challenges: Ensuring that visitors never experience an outage during deployments.&lt;/li&gt;
&lt;li&gt;Solution: Implementing Blue-Green Deployment using Elastic Beanstalk and EC2.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Project Structure&lt;/strong&gt;&lt;br&gt;
My project is structured in the following way:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Day-17/
├── Readme.md
├── Assets/
└── terraform/
    ├── .terraform/
    ├── .terraform.lock.hcl
    ├── app-v1/
    │   ├── app.js
    │   ├── package.json
    │   └── app-v1.zip
    ├── app-v2/
    │   ├── app.js
    │   ├── package.json
    │   └── app-v2.zip
    ├── blue-environments.tf
    ├── green-environments.tf
    ├── main.tf
    ├── outputs.tf
    ├── package-apps.ps1
    ├── package-apps.sh
    ├── swap-environments.ps1
    ├── swap-environments.sh
    ├── terraform.tfstate
    ├── terraform.tfstate.backup
    ├── terraform.tfvars.example
    └── variables.tf

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check out my &lt;a href="https://github.com/Amitkushwaha7/TerraformFullCourse/tree/96479ff67cc0e962c6a50e294def6488cc44cc59/Day-17" rel="noopener noreferrer"&gt;Github Repository&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step-by-step Implementation:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Package Your Applications&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before deploying, you need to package your application code into ZIP files. The repository includes packaging scripts for different operating systems.&lt;/p&gt;

&lt;p&gt;For Linux/Mac:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chmod +x package-apps.sh
./package-apps.sh

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For Windows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.\package-apps.ps1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These scripts create &lt;code&gt;app-v1.zip&lt;/code&gt; and &lt;code&gt;app-v2.zip&lt;/code&gt; files that Terraform will upload to S3.&lt;/p&gt;
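What the packaging scripts do can be sketched in a few lines of standard-library Python (illustrative only — the repo ships shell/PowerShell versions):

```python
import shutil
import tempfile
import zipfile
from pathlib import Path

def package_app(app_dir):
    """Zip an app directory so app.js/package.json sit at the archive root,
    the layout Elastic Beanstalk expects for a Node.js source bundle."""
    app_dir = Path(app_dir)
    return shutil.make_archive(str(app_dir), "zip", root_dir=app_dir)

# Demo with a throwaway app directory.
tmp = Path(tempfile.mkdtemp())
app = tmp / "app-v1"
app.mkdir()
(app / "app.js").write_text("console.log('This is version 1.0');\n")
(app / "package.json").write_text('{"name": "demo", "version": "1.0.0"}\n')

zip_path = package_app(app)
names = zipfile.ZipFile(zip_path).namelist()
```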

&lt;p&gt;&lt;strong&gt;Step 2: Core Infrastructure Setup (main.tf)&lt;/strong&gt;&lt;br&gt;
The &lt;code&gt;main.tf&lt;/code&gt; file establishes the foundational infrastructure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&amp;gt; 5.0"
    }
  }
  required_version = "&amp;gt;= 1.0"
}

provider "aws" {
  region = var.aws_region
}

# IAM Role for Elastic Beanstalk EC2 instances
resource "aws_iam_role" "eb_ec2_role" {
  name = "${var.app_name}-eb-ec2-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "ec2.amazonaws.com"
        }
      }
    ]
  })

  tags = var.tags
}

# Attach the AWS managed policy for Web Tier
resource "aws_iam_role_policy_attachment" "eb_web_tier" {
  role       = aws_iam_role.eb_ec2_role.name
  policy_arn = "arn:aws:iam::aws:policy/AWSElasticBeanstalkWebTier"
}

# Attach the AWS managed policy for Worker Tier
resource "aws_iam_role_policy_attachment" "eb_worker_tier" {
  role       = aws_iam_role.eb_ec2_role.name
  policy_arn = "arn:aws:iam::aws:policy/AWSElasticBeanstalkWorkerTier"
}

# Attach the AWS managed policy for Multicontainer Docker
resource "aws_iam_role_policy_attachment" "eb_multicontainer_docker" {
  role       = aws_iam_role.eb_ec2_role.name
  policy_arn = "arn:aws:iam::aws:policy/AWSElasticBeanstalkMulticontainerDocker"
}

# Instance Profile
resource "aws_iam_instance_profile" "eb_ec2_profile" {
  name = "${var.app_name}-eb-ec2-profile"
  role = aws_iam_role.eb_ec2_role.name

  tags = var.tags
}

# IAM Role for Elastic Beanstalk Service
resource "aws_iam_role" "eb_service_role" {
  name = "${var.app_name}-eb-service-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "elasticbeanstalk.amazonaws.com"
        }
      }
    ]
  })

  tags = var.tags
}

# Attach Enhanced Health Reporting policy
resource "aws_iam_role_policy_attachment" "eb_service_health" {
  role       = aws_iam_role.eb_service_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSElasticBeanstalkEnhancedHealth"
}

# Attach Managed Updates policy
resource "aws_iam_role_policy_attachment" "eb_service_managed_updates" {
  role       = aws_iam_role.eb_service_role.name
  policy_arn = "arn:aws:iam::aws:policy/AWSElasticBeanstalkManagedUpdatesCustomerRolePolicy"
}

# Elastic Beanstalk Application
resource "aws_elastic_beanstalk_application" "app" {
  name        = var.app_name
  description = "Blue-Green Deployment Demo Application"

  tags = var.tags
}

# S3 Bucket for application versions
resource "aws_s3_bucket" "app_versions" {
  bucket = "${var.app_name}-versions-${data.aws_caller_identity.current.account_id}"

  tags = var.tags
}

# Block public access to S3 bucket
resource "aws_s3_bucket_public_access_block" "app_versions" {
  bucket = aws_s3_bucket.app_versions.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

# Data source for current AWS account
data "aws_caller_identity" "current" {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 3: Blue Environment Configuration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;blue-environments.tf&lt;/code&gt; file defines the production environment running version 1.0:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Application Version 1.0 (Blue Environment - Production)
resource "aws_s3_object" "app_v1" {
  bucket = aws_s3_bucket.app_versions.id
  key    = "app-v1.zip"
  source = "${path.module}/app-v1/app-v1.zip"
  etag   = filemd5("${path.module}/app-v1/app-v1.zip")

  tags = var.tags
}

resource "aws_elastic_beanstalk_application_version" "v1" {
  name        = "${var.app_name}-v1"
  application = aws_elastic_beanstalk_application.app.name
  description = "Application Version 1.0 - Initial Release"
  bucket      = aws_s3_bucket.app_versions.id
  key         = aws_s3_object.app_v1.id

  tags = var.tags
}

# Blue Environment (Production)
resource "aws_elastic_beanstalk_environment" "blue" {
  name                = "${var.app_name}-blue"
  application         = aws_elastic_beanstalk_application.app.name
  solution_stack_name = var.solution_stack_name
  tier                = "WebServer"
  version_label       = aws_elastic_beanstalk_application_version.v1.name

  # IAM Settings
  setting {
    namespace = "aws:autoscaling:launchconfiguration"
    name      = "IamInstanceProfile"
    value     = aws_iam_instance_profile.eb_ec2_profile.name
  }

  setting {
    namespace = "aws:elasticbeanstalk:environment"
    name      = "ServiceRole"
    value     = aws_iam_role.eb_service_role.name
  }

  # Instance Settings
  setting {
    namespace = "aws:autoscaling:launchconfiguration"
    name      = "InstanceType"
    value     = var.instance_type
  }

  # Environment Type (Load Balanced)
  setting {
    namespace = "aws:elasticbeanstalk:environment"
    name      = "EnvironmentType"
    value     = "LoadBalanced"
  }

  setting {
    namespace = "aws:elasticbeanstalk:environment"
    name      = "LoadBalancerType"
    value     = "application"
  }

  # Auto Scaling Settings
  setting {
    namespace = "aws:autoscaling:asg"
    name      = "MinSize"
    value     = "1"
  }

  setting {
    namespace = "aws:autoscaling:asg"
    name      = "MaxSize"
    value     = "2"
  }

  # Health Reporting
  setting {
    namespace = "aws:elasticbeanstalk:healthreporting:system"
    name      = "SystemType"
    value     = "enhanced"
  }

  # Platform Settings
  setting {
    namespace = "aws:elasticbeanstalk:environment:process:default"
    name      = "HealthCheckPath"
    value     = "/"
  }

  setting {
    namespace = "aws:elasticbeanstalk:environment:process:default"
    name      = "Port"
    value     = "8080"
  }

  setting {
    namespace = "aws:elasticbeanstalk:environment:process:default"
    name      = "Protocol"
    value     = "HTTP"
  }

  # Environment Variables
  setting {
    namespace = "aws:elasticbeanstalk:application:environment"
    name      = "ENVIRONMENT"
    value     = "blue"
  }

  setting {
    namespace = "aws:elasticbeanstalk:application:environment"
    name      = "VERSION"
    value     = "1.0"
  }

  # Deployment Policy
  setting {
    namespace = "aws:elasticbeanstalk:command"
    name      = "DeploymentPolicy"
    value     = "Rolling"
  }

  setting {
    namespace = "aws:elasticbeanstalk:command"
    name      = "BatchSizeType"
    value     = "Percentage"
  }

  setting {
    namespace = "aws:elasticbeanstalk:command"
    name      = "BatchSize"
    value     = "50"
  }

  # Managed Updates
  setting {
    namespace = "aws:elasticbeanstalk:managedactions"
    name      = "ManagedActionsEnabled"
    value     = "false"
  }

  tags = merge(
    var.tags,
    {
      Environment = "blue"
      Role        = "production"
    }
  )
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 4: Green Environment Configuration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;green-environments.tf&lt;/code&gt; file creates an identical staging environment with version 2.0:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Application Version 2.0 (Green Environment - Staging)
resource "aws_s3_object" "app_v2" {
  bucket = aws_s3_bucket.app_versions.id
  key    = "app-v2.zip"
  source = "${path.module}/app-v2/app-v2.zip"
  etag   = filemd5("${path.module}/app-v2/app-v2.zip")

  tags = var.tags
}

resource "aws_elastic_beanstalk_application_version" "v2" {
  name        = "${var.app_name}-v2"
  application = aws_elastic_beanstalk_application.app.name
  description = "Application Version 2.0 - New Feature Release"
  bucket      = aws_s3_bucket.app_versions.id
  key         = aws_s3_object.app_v2.id

  tags = var.tags
}

# Green Environment (Staging/Pre-production)
resource "aws_elastic_beanstalk_environment" "green" {
  name                = "${var.app_name}-green"
  application         = aws_elastic_beanstalk_application.app.name
  solution_stack_name = var.solution_stack_name
  tier                = "WebServer"
  version_label       = aws_elastic_beanstalk_application_version.v2.name

  # IAM Settings
  setting {
    namespace = "aws:autoscaling:launchconfiguration"
    name      = "IamInstanceProfile"
    value     = aws_iam_instance_profile.eb_ec2_profile.name
  }

  setting {
    namespace = "aws:elasticbeanstalk:environment"
    name      = "ServiceRole"
    value     = aws_iam_role.eb_service_role.name
  }

  # Instance Settings
  setting {
    namespace = "aws:autoscaling:launchconfiguration"
    name      = "InstanceType"
    value     = var.instance_type
  }

  # Environment Type (Load Balanced)
  setting {
    namespace = "aws:elasticbeanstalk:environment"
    name      = "EnvironmentType"
    value     = "LoadBalanced"
  }

  setting {
    namespace = "aws:elasticbeanstalk:environment"
    name      = "LoadBalancerType"
    value     = "application"
  }

  # Auto Scaling Settings
  setting {
    namespace = "aws:autoscaling:asg"
    name      = "MinSize"
    value     = "1"
  }

  setting {
    namespace = "aws:autoscaling:asg"
    name      = "MaxSize"
    value     = "2"
  }

  # Health Reporting
  setting {
    namespace = "aws:elasticbeanstalk:healthreporting:system"
    name      = "SystemType"
    value     = "enhanced"
  }

  # Platform Settings
  setting {
    namespace = "aws:elasticbeanstalk:environment:process:default"
    name      = "HealthCheckPath"
    value     = "/"
  }

  setting {
    namespace = "aws:elasticbeanstalk:environment:process:default"
    name      = "Port"
    value     = "8080"
  }

  setting {
    namespace = "aws:elasticbeanstalk:environment:process:default"
    name      = "Protocol"
    value     = "HTTP"
  }

  # Environment Variables
  setting {
    namespace = "aws:elasticbeanstalk:application:environment"
    name      = "ENVIRONMENT"
    value     = "green"
  }

  setting {
    namespace = "aws:elasticbeanstalk:application:environment"
    name      = "VERSION"
    value     = "2.0"
  }

  # Deployment Policy
  setting {
    namespace = "aws:elasticbeanstalk:command"
    name      = "DeploymentPolicy"
    value     = "Rolling"
  }

  setting {
    namespace = "aws:elasticbeanstalk:command"
    name      = "BatchSizeType"
    value     = "Percentage"
  }

  setting {
    namespace = "aws:elasticbeanstalk:command"
    name      = "BatchSize"
    value     = "50"
  }

  # Managed Updates
  setting {
    namespace = "aws:elasticbeanstalk:managedactions"
    name      = "ManagedActionsEnabled"
    value     = "false"
  }

  tags = merge(
    var.tags,
    {
      Environment = "green"
      Role        = "staging"
    }
  )
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 5: Deploy the Infrastructure&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Execute the Terraform workflow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Initialize Terraform
terraform init

# Review the planned changes
terraform plan

# Apply the configuration
terraform apply

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates both environments and uploads your application versions to S3. The output will display the URLs for both environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6: Verify Deployment&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After deployment, access both environments using the provided URLs:&lt;/p&gt;

&lt;p&gt;Blue Environment: &lt;a href="http://blue-environment.xxx.elasticbeanstalk.com" rel="noopener noreferrer"&gt;http://blue-environment.xxx.elasticbeanstalk.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Displays: "This is version 1.0"&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foemkbnw4cpjwmkb2dhvk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foemkbnw4cpjwmkb2dhvk.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Status: Current production environment&lt;/p&gt;

&lt;p&gt;Green Environment: &lt;a href="http://green-environment.xxx.elasticbeanstalk.com" rel="noopener noreferrer"&gt;http://green-environment.xxx.elasticbeanstalk.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Displays: "This is version 2.0 with new features"&lt;/p&gt;

&lt;p&gt;Status: Staging environment&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu7qm5oogwkshfer6ztqy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu7qm5oogwkshfer6ztqy.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 7: Perform the Blue-Green Swap&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Method: AWS Console&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to Elastic Beanstalk → Environments&lt;/li&gt;
&lt;li&gt;Select the blue-environment&lt;/li&gt;
&lt;li&gt;Click Actions → Swap environment URLs&lt;/li&gt;
&lt;li&gt;Choose green-environment as the target&lt;/li&gt;
&lt;li&gt;Click Swap&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After swapping, verify that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The blue URL now serves version 2.0&lt;/li&gt;
&lt;li&gt;The green URL now serves version 1.0&lt;/li&gt;
&lt;/ul&gt;
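The console steps above can also be scripted — <code>swap-environments.sh</code> in the repository presumably wraps the AWS CLI's CNAME-swap operation, along these lines (the environment names are placeholders and must match your deployment):

```shell
# Swap the CNAMEs of the two Elastic Beanstalk environments (zero-downtime cutover).
# After this call the former production URL serves the green (v2.0) stack;
# rolling back is simply running the same command again.
aws elasticbeanstalk swap-environment-cnames \
  --source-environment-name blue-green-demo-blue \
  --destination-environment-name blue-green-demo-green
```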

&lt;p&gt;&lt;strong&gt;Cleanup&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After completing your testing, destroy all resources to avoid charges:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform destroy

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feyvxg5dots7k4a003d28.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feyvxg5dots7k4a003d28.jpg" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Blue-green deployments with Terraform and AWS Elastic Beanstalk provide a robust framework for releasing applications with confidence. This approach eliminates downtime, enables instant rollbacks, and maintains high availability during deployments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reference:
&lt;/h2&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/fTVx2m5fEbQ"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;h2&gt;
  
  
  &amp;gt;&amp;gt; Connect With Me
&lt;/h2&gt;

&lt;p&gt;If you enjoyed this post or want to follow my #30DaysOfAWSTerraformChallenge journey, feel free to connect with me here:&lt;/p&gt;

&lt;p&gt;💼 LinkedIn: &lt;a href="https://www.linkedin.com/in/amitkushwaha7/" rel="noopener noreferrer"&gt;Amit Kushwaha&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🐙 GitHub: &lt;a href="https://github.com/Amitkushwaha7" rel="noopener noreferrer"&gt;Amit Kushwaha&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;📝 Hashnode: &lt;a href="https://hashnode.com/@amit902" rel="noopener noreferrer"&gt;Amit Kushwaha&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🐦 Twitter/X: &lt;a href="https://x.com/AmitKum43380951" rel="noopener noreferrer"&gt;Amit Kushwaha&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>aws</category>
      <category>learning</category>
      <category>kubernetes</category>
    </item>
  </channel>
</rss>
