<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Anil KUMAR</title>
    <description>The latest articles on Forem by Anil KUMAR (@anil_kumar_noolu).</description>
    <link>https://forem.com/anil_kumar_noolu</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3628928%2F0dc9e903-bf85-4daa-a653-b11ce5aabeda.png</url>
      <title>Forem: Anil KUMAR</title>
      <link>https://forem.com/anil_kumar_noolu</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/anil_kumar_noolu"/>
    <language>en</language>
    <item>
      <title>AWS Lambda: Deactivate Inactive IAM Keys</title>
      <dc:creator>Anil KUMAR</dc:creator>
      <pubDate>Wed, 28 Jan 2026 11:03:40 +0000</pubDate>
      <link>https://forem.com/anil_kumar_noolu/aws-lambda-deactivate-inactive-iam-keys-26gh</link>
      <guid>https://forem.com/anil_kumar_noolu/aws-lambda-deactivate-inactive-iam-keys-26gh</guid>
      <description>&lt;p&gt;This marks the first blog of AWS Lambda Series. I will be doing some automations on AWS using Lambda and will be posting them here with a blog. In this blog, we will use Lambda and Event driven functions to deactivate/disable the keys which are older than 3 months to keep our AWS account safe and secure.&lt;/p&gt;

&lt;p&gt;Imagine you are working in a big team with multiple people working across various environments. You will end up with a lot of unused security credentials lying around. For those cases, consider this automation: it deactivates the access keys and secret access keys of users in lower environments once they are older than the configured threshold.&lt;/p&gt;

&lt;p&gt;Consider this automation in three phases:&lt;/p&gt;

&lt;h2&gt;
  
  
  PHASE-1: Notification &amp;amp; Test Setup
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Create an SNS topic and subscribe your email address to it. Whenever a key is disabled, you will then be informed via email.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8c2mxxdc8vag1iwsomma.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8c2mxxdc8vag1iwsomma.jpg" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu8c4ldmdubnawyow7n66.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu8c4ldmdubnawyow7n66.jpg" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the SNS is created and your email has been added to the subscription, then go to the email and confirm the subscription.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Now create a dummy IAM user for practice purposes and generate an access key (access key ID and secret access key) for it.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  PHASE-2: Lambda &amp;amp; Event Configuration
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Create an IAM execution role for Lambda that grants all the permissions the function needs to execute.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "iam:ListUsers",
                "iam:ListAccessKeys",
                "iam:UpdateAccessKey",
                "sns:Publish",
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "*"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can see that the above role has access to list users and their access keys, deactivate keys, publish to SNS, and create logs for the same.&lt;/p&gt;
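&lt;p&gt;If you keep this policy document in version control, a quick sanity check (a minimal sketch of my own, not part of the original setup) can confirm it grants exactly the actions the function needs:&lt;/p&gt;

```python
import json

# The execution-role policy from above, kept as a string for illustration.
POLICY = json.loads("""
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "iam:ListUsers",
                "iam:ListAccessKeys",
                "iam:UpdateAccessKey",
                "sns:Publish",
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "*"
        }
    ]
}
""")

def granted_actions(policy):
    """Collect every action allowed by the policy's statements."""
    actions = set()
    for stmt in policy["Statement"]:
        if stmt["Effect"] == "Allow":
            acts = stmt["Action"]
            actions.update([acts] if isinstance(acts, str) else acts)
    return actions

# Actions the auditor Lambda actually calls at runtime.
REQUIRED = {"iam:ListUsers", "iam:ListAccessKeys", "iam:UpdateAccessKey", "sns:Publish"}
assert REQUIRED.issubset(granted_actions(POLICY))
```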

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqhp6g6r2rlqdhbkq39ub.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqhp6g6r2rlqdhbkq39ub.jpg" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;&lt;p&gt;Develop the Lambda function logic with all the code and prerequisites.&lt;br&gt;
Set the runtime to Python 3.12 and extend the timeout to 1 minute, as the default of 3 seconds will not be sufficient.&lt;br&gt;
Copy the code from this &lt;a href="https://github.com/anilkumar-noolu/mastering-lambda/blob/Master/01-iam_keys_inactive/terraform/modules/lambda/iam_key_auditor.py" rel="noopener noreferrer"&gt;repo&lt;/a&gt; and save it in the Lambda.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Now create the EventBridge rule that triggers the Lambda. While creating it, select an AWS service target such as Lambda and create a role for it. Select a rate of 1 minute so that it fires every minute, which is convenient for testing.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
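&lt;p&gt;The linked repo contains the full function. As a rough sketch of the core logic (names such as is_stale and the SNS_TOPIC_ARN environment variable are my assumptions for illustration, not necessarily the repo's exact code), it looks like this:&lt;/p&gt;

```python
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=90)  # deactivate keys older than ~3 months

def is_stale(create_date, now=None, max_age=MAX_AGE):
    """Return True when an access key is older than the allowed age."""
    now = now or datetime.now(timezone.utc)
    return now - create_date > max_age

def lambda_handler(event, context):
    # boto3 ships with the Lambda Python runtime; imported here so the
    # age check above stays testable without AWS credentials.
    import os
    import boto3
    iam = boto3.client("iam")
    sns = boto3.client("sns")
    for user in iam.list_users()["Users"]:  # pagination omitted for brevity
        name = user["UserName"]
        for key in iam.list_access_keys(UserName=name)["AccessKeyMetadata"]:
            if key["Status"] == "Active" and is_stale(key["CreateDate"]):
                # Deactivate rather than delete, so the key can be restored.
                iam.update_access_key(UserName=name,
                                      AccessKeyId=key["AccessKeyId"],
                                      Status="Inactive")
                sns.publish(TopicArn=os.environ["SNS_TOPIC_ARN"],
                            Message=f"Deactivated stale key {key['AccessKeyId']} for {name}")
```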

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9typ6cjzxot4liz6awle.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9typ6cjzxot4liz6awle.jpg" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Phase 3: Automated Remediation
&lt;/h2&gt;

&lt;p&gt;Now, to test the function, go to the Lambda function and click on Deploy and then Test. You will see that the security keys get disabled; you can verify the same in the Monitoring tab of the Lambda function.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhd584sfvpqks4kwzsw5g.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhd584sfvpqks4kwzsw5g.jpg" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we will add the above trigger to the Lambda function we have created, so that everything runs automatically instead of us going into the Lambda console to deploy and test manually.&lt;/p&gt;

&lt;p&gt;Go to Lambda -&amp;gt; Select the function -&amp;gt; Click on Add Trigger and select the EventBridge rule created earlier.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3iaiznhtv1gavuk37a7a.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3iaiznhtv1gavuk37a7a.jpg" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8qidx79dcend0371mj25.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8qidx79dcend0371mj25.jpg" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhcvho9bm6hey3ewdi0zr.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhcvho9bm6hey3ewdi0zr.jpg" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I created sample security keys for a user.&lt;br&gt;
As the two pictures above show, the keys were disabled automatically after one minute.&lt;/p&gt;
&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;p&gt;Exposed IAM access keys are one of the most common causes of AWS account compromise. Manual monitoring does not scale, and delayed response often leads to serious incidents.&lt;/p&gt;

&lt;p&gt;The automation above addresses these problems by helping us to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;React automatically&lt;/li&gt;
&lt;li&gt;Notify the right people&lt;/li&gt;
&lt;li&gt;Remove the risk without human intervention&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Thanks to Sai Kiran Pinapathruni for the YouTube videos. If you have any doubts, refer to the video below:&lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/fg6n-SNaXdQ"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

</description>
      <category>lambda</category>
      <category>aws</category>
    </item>
    <item>
      <title>Day 22: Two-Tier Architecture Setup On AWS</title>
      <dc:creator>Anil KUMAR</dc:creator>
      <pubDate>Mon, 22 Dec 2025 07:53:23 +0000</pubDate>
      <link>https://forem.com/anil_kumar_noolu/day-22-two-tier-architecture-setup-on-aws-lci</link>
      <guid>https://forem.com/anil_kumar_noolu/day-22-two-tier-architecture-setup-on-aws-lci</guid>
      <description>&lt;p&gt;Today marks the Day 22 of 30 Days of AWS Terraform Challenge by Piyush Sachdeva. Today we will discuss about the designing and implementation of a secure, modular two-tier AWS architecture using Terraform. Think of this like a secure building with a public lobby and a high-security vault.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture Flow:
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffmvzjen21b6imd8vz3z4.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffmvzjen21b6imd8vz3z4.jpg" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The design follows a classic two-tier application model, implemented with security and scalability in mind:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;┌─────────────────────────────────────────────────────────────┐
│                         VPC (10.0.0.0/16)                   │
│  ┌─────────────────────┐   ┌─────────────────────────────┐  │
│  │   Public Subnet     │   │     Private Subnets         │  │
│  │   (10.0.1.0/24)     │   │  (10.0.2.0/24, 10.0.3.0/24) │  │
│  │                     │   │                             │  │
│  │  ┌───────────────┐  │   │    ┌──────────────────┐     │  │
│  │  │  EC2 (Flask)  │──│───│───►│   RDS MySQL      │     │  │
│  │  │  Web Server   │  │   │    │   Database       │     │  │
│  │  └───────────────┘  │   │    └──────────────────┘     │  │
│  └─────────────────────┘   └─────────────────────────────┘  │
│           │                                                  │
│           ▼                                                  │
│  ┌─────────────────┐                                        │
│  │ Internet Gateway│                                        │
│  └─────────────────┘                                        │
└─────────────────────────────────────────────────────────────┘
           │
           ▼
      Internet (Users)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Web Tier (Public)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;EC2 instance running a Flask application&lt;/li&gt;
&lt;li&gt;Deployed in a public subnet&lt;/li&gt;
&lt;li&gt;Internet access via Internet Gateway&lt;/li&gt;
&lt;li&gt;Listens on port 80&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Data Tier (Private)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;RDS MySQL instance&lt;/li&gt;
&lt;li&gt;Deployed in private subnets across multiple AZs&lt;/li&gt;
&lt;li&gt;No direct inbound internet access&lt;/li&gt;
&lt;li&gt;Outbound connectivity via NAT Gateway (for patching, backups, etc.)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Secrets Management&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS Secrets Manager stores database credentials&lt;/li&gt;
&lt;li&gt;Terraform generates a strong random password&lt;/li&gt;
&lt;li&gt;Secrets stored as JSON (username, password, engine, host)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Network &amp;amp; Security&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Custom VPC with public and private subnets&lt;/li&gt;
&lt;li&gt;NAT Gateway for private subnet outbound traffic&lt;/li&gt;
&lt;li&gt;Tight security group rules enforcing least privilege&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The Public Lobby (The Web Tier)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is where the "front door" of the application lives. It hosts the website that users interact with.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What it is:&lt;/strong&gt; A computer (EC2) running a simple website (Flask).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security:&lt;/strong&gt; It is open to the public so people can visit the site, but it is heavily guarded to only allow web traffic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Private Vault (The Data Tier)&lt;/strong&gt;&lt;br&gt;
This is where all the important information is stored.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What it is:&lt;/strong&gt; A database (RDS) that holds all the user records.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security:&lt;/strong&gt; This area has no internet access. It is buried deep inside the network. The only way to get in is through the "Public Lobby." If a hacker tries to find the database from the outside, it simply doesn’t exist to them.&lt;/p&gt;
&lt;h2&gt;
  
  
  Secrets Manager:
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;No Written Passwords&lt;/strong&gt;: I told the system to create a random, 16-character password automatically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hidden Keys&lt;/strong&gt;: The password is kept in a digital safe. When the website starts up, it asks the safe for the key, uses it to talk to the database, and then throws the "memory" of it away. It’s never saved in a file where people can see it.&lt;/p&gt;
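&lt;p&gt;In application code, "asking the safe for the key" amounts to one Secrets Manager call plus JSON parsing. A minimal sketch of my own (assuming boto3 and the JSON payload shape described above: username, password, engine, host):&lt;/p&gt;

```python
import json

def parse_db_secret(secret_string):
    """Turn the Secrets Manager JSON payload into connection settings."""
    payload = json.loads(secret_string)
    return {"user": payload["username"],
            "password": payload["password"],
            "host": payload["host"]}

def fetch_db_secret(secret_name, region="us-east-1"):
    # Live AWS call; needs credentials and the boto3 SDK at runtime.
    import boto3
    client = boto3.client("secretsmanager", region_name=region)
    raw = client.get_secret_value(SecretId=secret_name)["SecretString"]
    return parse_db_secret(raw)
```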
&lt;h2&gt;
  
  
  Modules:
&lt;/h2&gt;

&lt;p&gt;To keep the code clean, reusable, and production-ready, the project is broken into custom Terraform modules. Each module owns a single responsibility.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;secret&lt;/strong&gt;&lt;br&gt;
Generates a secure database password and stores credentials in AWS Secrets Manager&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;vpc&lt;/strong&gt;&lt;br&gt;
Provisions the VPC, public &amp;amp; private subnets, Internet Gateway, route tables, and NAT Gateway&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;security_group&lt;/strong&gt;&lt;br&gt;
Creates separate security groups for the web and database tiers&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;rds&lt;/strong&gt;&lt;br&gt;
Deploys a private RDS MySQL instance using credentials from Secrets Manager&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ec2&lt;/strong&gt;&lt;br&gt;
Provisions the web server and deploys the Flask app via user data&lt;/p&gt;

&lt;p&gt;The root module wires everything together using outputs and input variables.&lt;/p&gt;
&lt;h2&gt;
  
  
  Implementation:
&lt;/h2&gt;
&lt;h2&gt;
  
  
  Secure Secrets Handling:
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "random_password" "db_password" {
  length           = 16
  special          = true
  override_special = "!#$%&amp;amp;*()-_=+[]{}&amp;lt;&amp;gt;:?"
}

resource "random_id" "suffix" {
  byte_length = 4
}

resource "aws_secretsmanager_secret" "db_password" {
  name        = "${var.project_name}-${var.environment}-db-password-${random_id.suffix.hex}"
  description = "Database password for ${var.project_name}"

  tags = {
    Name        = "${var.project_name}-db-password"
    Environment = var.environment
  }
}

resource "aws_secretsmanager_secret_version" "db_password" {
  secret_id = aws_secretsmanager_secret.db_password.id
  secret_string = jsonencode({
    username = var.db_username
    password = random_password.db_password.result
    engine   = "mysql"
    host     = "" # Will be populated by application or looked up
  })
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Instead of hardcoding credentials:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Terraform generates a 16-character random password resource.&lt;/li&gt;
&lt;li&gt;Credentials are stored securely in AWS Secrets Manager&lt;/li&gt;
&lt;li&gt;Secret payload is structured JSON&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Only the required outputs are passed to dependent modules, keeping inter-module communication clean.&lt;/p&gt;
&lt;h2&gt;
  
  
  VPC, Subnets, and NAT Gateway:
&lt;/h2&gt;

&lt;p&gt;We use a custom vpc module to create all of these networking components.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Public subnets host the EC2 instance.&lt;/li&gt;
&lt;li&gt;Private subnets host the RDS instance.&lt;/li&gt;
&lt;li&gt;An Internet Gateway lets the instance in the public subnet reach the internet.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Security Groups:
&lt;/h2&gt;

&lt;p&gt;We use the security_group module to keep this project least-privilege by design.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Web Security Group&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Allow HTTP (80) from anywhere&lt;/li&gt;
&lt;li&gt;SSH restricted to a specific IP only&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Database Security Group&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Allow MySQL (3306) only from the web security group&lt;/li&gt;
&lt;li&gt;No public exposure&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Security groups reference each other instead of CIDR blocks — a cleaner and safer approach.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;RDS Module&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;We use this module to create the RDS database.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deployed across multiple private subnets&lt;/li&gt;
&lt;li&gt;Not publicly accessible&lt;/li&gt;
&lt;li&gt;Ingress limited strictly to web tier&lt;/li&gt;
&lt;li&gt;Designed for availability and isolation&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  EC2 Module:
&lt;/h2&gt;

&lt;p&gt;The EC2 instance uses a user data template to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install system dependencies&lt;/li&gt;
&lt;li&gt;Deploy a Flask application&lt;/li&gt;
&lt;li&gt;Inject database connection details&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The app exposes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;/ – homepage&lt;/li&gt;
&lt;li&gt;/health – database connectivity check&lt;/li&gt;
&lt;li&gt;/db/info – database metadata&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It performs basic insert and read operations to validate end-to-end connectivity.&lt;/p&gt;
&lt;h2&gt;
  
  
  Deployment:
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init
terraform plan
terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;RDS takes time to come up, so expect terraform apply to run for several minutes while the instance is provisioned.&lt;/p&gt;
&lt;h2&gt;
  
  
  Validation Steps
&lt;/h2&gt;

&lt;p&gt;After deployment:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Verified Flask app via EC2 public DNS&lt;/li&gt;
&lt;li&gt;Tested /health and /db/info endpoints&lt;/li&gt;
&lt;li&gt;Confirmed RDS had no public access&lt;/li&gt;
&lt;li&gt;Checked Secrets Manager values&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;p&gt;This concludes the mini project: a two-tier architecture using RDS, EC2, and networking components.&lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/7XcqRDVMv3o"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

</description>
      <category>aws</category>
      <category>terraform</category>
      <category>architecture</category>
      <category>devops</category>
    </item>
    <item>
      <title>Day 21: AWS IAM Policy and Governance Setup Using Terraform</title>
      <dc:creator>Anil KUMAR</dc:creator>
      <pubDate>Fri, 19 Dec 2025 12:29:01 +0000</pubDate>
      <link>https://forem.com/anil_kumar_noolu/day-21-aws-iam-policy-and-governance-setup-using-terraform-2jh5</link>
      <guid>https://forem.com/anil_kumar_noolu/day-21-aws-iam-policy-and-governance-setup-using-terraform-2jh5</guid>
      <description>&lt;p&gt;Today marks the Day 21 of 30 days of AWS Terraform challeneg Initiative by Piyush Sachdeva. Today we will learn about the AWS IAM Policy and Governance Setup using Terraform.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Policy and Governance?
&lt;/h2&gt;

&lt;p&gt;Policy enforces rules to prevent non-compliant actions in AWS:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Users without MFA cannot delete resources&lt;/li&gt;
&lt;li&gt;S3 uploads must use HTTPS (encrypted in transit)&lt;/li&gt;
&lt;li&gt;Resources like S3 buckets/EC2 must have required tags (e.g., environment=dev)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;IAM policies&lt;/strong&gt; block bad actions before they occur.&lt;/p&gt;

&lt;p&gt;If anyone tries to delete an S3 bucket without the required permissions in their IAM policy, the action is blocked and the bucket cannot be deleted.&lt;/p&gt;

&lt;p&gt;Governance tracks compliance via &lt;strong&gt;AWS Config&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Monitors resources after creation.&lt;/li&gt;
&lt;li&gt;Logs compliant/non-compliant status.&lt;/li&gt;
&lt;li&gt;Stores audit logs in secure S3 bucket.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Governance, on the other hand, will not prevent the action; instead, AWS Config records everything and delivers the configuration history to an S3 bucket (or whichever delivery target is configured).&lt;/p&gt;

&lt;p&gt;Example: if we try to create an S3 bucket or an EC2 instance without tags, the IAM policy prevents the action, whereas governance does not block it but logs it with a non-compliant status in Config.&lt;br&gt;
&lt;/p&gt;
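&lt;p&gt;For EC2, a tag-enforcement policy evaluates the tags supplied at launch. A hypothetical boto3 sketch of a launch request that would satisfy such a policy (the aws:RequestTag condition keys evaluate against TagSpecifications; function names are mine):&lt;/p&gt;

```python
def tag_spec(environment, owner):
    """Build the TagSpecifications block a compliant RunInstances call needs."""
    return [{"ResourceType": "instance",
             "Tags": [{"Key": "Environment", "Value": environment},
                      {"Key": "Owner", "Value": owner}]}]

def launch_compliant_instance(ami_id, environment="dev", owner="anil"):
    # Live AWS call; shown only to illustrate where the tags go.
    import boto3
    ec2 = boto3.client("ec2")
    return ec2.run_instances(ImageId=ami_id, InstanceType="t3.micro",
                             MinCount=1, MaxCount=1,
                             TagSpecifications=tag_spec(environment, owner))
```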

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;┌─────────────────────────────────────────────────────┐
│                                                     │
│   1. PREVENTIVE (IAM Policies)                     │
│      → Blocks bad actions BEFORE they happen       │
│      → Example: "Cannot delete S3 without MFA"     │
│                                                     │
│   2. DETECTIVE (AWS Config)                        │
│      → Finds violations AFTER they happen          │
│      → Example: "This bucket is not encrypted"     │
│                                                     │
└─────────────────────────────────────────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Project Architecture Overview:
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd8moxp2k5fhao360slw1.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd8moxp2k5fhao360slw1.jpg" alt=" "&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Policy (Prevent) → AWS Config (Detect) → S3 Audit Logs (Store)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Key Components Created:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Encrypted, versioned S3 bucket for audit logs (public access blocked).&lt;/li&gt;
&lt;li&gt;3 IAM policies for enforcement.&lt;/li&gt;
&lt;li&gt;6 AWS Config rules for compliance monitoring.&lt;/li&gt;
&lt;li&gt;IAM roles/users for service access
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;         ┌──────────────────┐
         │   IAM POLICIES   │  ◄── PREVENT bad actions
         │  • MFA Delete    │
         │  • Encryption    │
         │  • Required Tags │
         └────────┬─────────┘
                  │
                  ▼
         ┌──────────────────┐
         │   AWS CONFIG     │  ◄── DETECT violations
         │   6 Rules        │
         │  (Compliance)    │
         └────────┬─────────┘
                  │
                  ▼
         ┌──────────────────┐
         │    S3 BUCKET     │  ◄── STORE logs
         │  🔒 Encrypted    │
         │  🔒 Versioned    │
         └──────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Project Objectives:
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Policy Creation&lt;/strong&gt;: Implement IAM policies to enforce security best practices&lt;br&gt;
&lt;strong&gt;Governance Setup&lt;/strong&gt;: Configure AWS Config for continuous compliance monitoring&lt;br&gt;
&lt;strong&gt;Resource Tagging&lt;/strong&gt;: Demonstrate tagging strategies for resource management&lt;br&gt;
&lt;strong&gt;S3 Security&lt;/strong&gt;: Apply encryption, versioning, and access controls&lt;br&gt;
&lt;strong&gt;Compliance Monitoring&lt;/strong&gt;: Track configuration changes and detect violations&lt;/p&gt;
&lt;h2&gt;
  
  
  Terraform Implementation Breakdown:
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;day21/
├── provider.tf       # AWS provider configuration
├── variables.tf      # Input variables
├── main.tf          # S3 bucket and shared resources
├── iam.tf           # IAM policies and roles
├── config.tf        # AWS Config recorder and rules
├── outputs.tf       # Output values
└── README.md        # This file
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;provider.tf&lt;/strong&gt; — AWS Provider&lt;br&gt;
What it does: Tells Terraform to use AWS.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;variables.tf&lt;/strong&gt; — Inputs&lt;br&gt;
What it does: Makes the code reusable&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;main.tf&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;In this file, we will be creating S3 Bucket for Audit Logs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# S3 Bucket to store AWS Config history
resource "aws_s3_bucket" "config_bucket" {
  bucket        = "${var.project_name}-config-bucket-${random_string.suffix.result}"
  force_destroy = true

  tags = {
    Name        = "${var.project_name}-config-bucket"
    Environment = "governance"
    Purpose     = "aws-config-storage"
    ManagedBy   = "terraform"
  }
}

resource "random_string" "suffix" {
  length  = 6
  special = false
  upper   = false
}

# Enable versioning on Config bucket
resource "aws_s3_bucket_versioning" "config_bucket_versioning" {
  bucket = aws_s3_bucket.config_bucket.id
  versioning_configuration {
    status = "Enabled"
  }
}

# Enable encryption on Config bucket
resource "aws_s3_bucket_server_side_encryption_configuration" "config_bucket_encryption" {
  bucket = aws_s3_bucket.config_bucket.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

# Block public access to Config bucket
resource "aws_s3_bucket_public_access_block" "config_bucket_public_access" {
  bucket = aws_s3_bucket.config_bucket.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

# S3 Bucket Policy for Config
resource "aws_s3_bucket_policy" "config_bucket_policy" {
  bucket = aws_s3_bucket.config_bucket.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid    = "AWSConfigBucketPermissionsCheck"
        Effect = "Allow"
        Principal = {
          Service = "config.amazonaws.com"
        }
        Action   = "s3:GetBucketAcl"
        Resource = aws_s3_bucket.config_bucket.arn
      },
      {
        Sid    = "AWSConfigBucketExistenceCheck"
        Effect = "Allow"
        Principal = {
          Service = "config.amazonaws.com"
        }
        Action   = "s3:ListBucket"
        Resource = aws_s3_bucket.config_bucket.arn
      },
      {
        Sid    = "AWSConfigBucketPutObject"
        Effect = "Allow"
        Principal = {
          Service = "config.amazonaws.com"
        }
        Action   = "s3:PutObject"
        Resource = "${aws_s3_bucket.config_bucket.arn}/*"
        Condition = {
          StringEquals = {
            "s3:x-amz-acl" = "bucket-owner-full-control"
          }
        }
      },
      {
        Sid       = "DenyInsecureTransport"
        Effect    = "Deny"
        Principal = "*"
        Action    = "s3:*"
        Resource = [
          aws_s3_bucket.config_bucket.arn,
          "${aws_s3_bucket.config_bucket.arn}/*"
        ]
        Condition = {
          Bool = {
            "aws:SecureTransport" = "false"
          }
        }
      }
    ]
  })

  depends_on = [aws_s3_bucket_public_access_block.config_bucket_public_access]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For the S3 bucket above, we use a random_string suffix to give the bucket a globally unique name, and we enable versioning and server-side encryption and block all public access on it.&lt;/p&gt;

&lt;p&gt;We also attach an S3 bucket policy for Config so that the AWS Config service can write its logs to this bucket; without it, Config cannot deliver logs there.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;iam.tf&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;In this file, we create example IAM policies; any action that violates them should fail with an explicit error.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ------------------------------------------------------------------------------
# 1. IAM Policy Examples
# ------------------------------------------------------------------------------

# Create a custom IAM policy that enforces MFA for deleting S3 objects
resource "aws_iam_policy" "mfa_delete_policy" {
  name        = "${var.project_name}-mfa-delete-policy"
  description = "Policy that requires MFA to delete S3 objects"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid      = "DenyDeleteWithoutMFA"
        Effect   = "Deny"
        Action   = "s3:DeleteObject"
        Resource = "*"
        Condition = {
          BoolIfExists = {
            "aws:MultiFactorAuthPresent" = "false"
          }
        }
      }
    ]
  })
}

# IAM Policy: Enforce encryption in transit for S3
resource "aws_iam_policy" "enforce_s3_encryption_transit" {
  name        = "${var.project_name}-s3-encryption-transit"
  description = "Deny S3 actions without encryption in transit"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid      = "DenyUnencryptedObjectUploads"
        Effect   = "Deny"
        Action   = "s3:PutObject"
        Resource = "*"
        Condition = {
          Bool = {
            "aws:SecureTransport" = "false"
          }
        }
      }
    ]
  })
}

# IAM Policy: Require tagging for resource creation
resource "aws_iam_policy" "require_tags_policy" {
  name        = "${var.project_name}-require-tags"
  description = "Require specific tags when creating resources"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid    = "RequireTagsOnEC2"
        Effect = "Deny"
        Action = [
          "ec2:RunInstances"
        ]
        Resource = "arn:aws:ec2:*:*:instance/*"
        Condition = {
          StringNotLike = {
            "aws:RequestTag/Environment" = ["dev", "staging", "prod"]
          }
        }
      },
      {
        Sid    = "RequireOwnerTag"
        Effect = "Deny"
        Action = [
          "ec2:RunInstances"
        ]
        Resource = "arn:aws:ec2:*:*:instance/*"
        Condition = {
          "Null" = {
            "aws:RequestTag/Owner" = "true"
          }
        }
      }
    ]
  })
}

# IAM User for demonstration
resource "aws_iam_user" "demo_user" {
  name = "${var.project_name}-demo-user"
  path = "/governance/"

  tags = {
    Environment = "demo"
    Purpose     = "governance-training"
  }
}

# Attach MFA delete policy to demo user
resource "aws_iam_user_policy_attachment" "demo_user_mfa" {
  user       = aws_iam_user.demo_user.name
  policy_arn = aws_iam_policy.mfa_delete_policy.arn
}

# ------------------------------------------------------------------------------
# 2. IAM Role for AWS Config Service
# ------------------------------------------------------------------------------

# IAM Role for AWS Config Service
resource "aws_iam_role" "config_role" {
  name = "${var.project_name}-config-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "config.amazonaws.com"
        }
      }
    ]
  })
}

# Attach managed policy to Config Role
resource "aws_iam_role_policy_attachment" "config_policy_attach" {
  role       = aws_iam_role.config_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWS_ConfigRole"
}

# Additional policy for Config to write to S3
resource "aws_iam_role_policy" "config_s3_policy" {
  name = "${var.project_name}-config-s3-policy"
  role = aws_iam_role.config_role.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "s3:GetBucketVersioning",
          "s3:PutObject",
          "s3:GetObject"
        ]
        Resource = [
          aws_s3_bucket.config_bucket.arn,
          "${aws_s3_bucket.config_bucket.arn}/*"
        ]
      }
    ]
  })
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the first block, we create a custom IAM policy that enforces MFA for deleting S3 objects; if MFA is absent, the deletion is denied.&lt;/p&gt;

&lt;p&gt;In the second block, we enforce encryption in transit for S3: whenever you try to upload an object over plain HTTP instead of HTTPS, the request is denied.&lt;/p&gt;

&lt;p&gt;The third block requires tags at resource creation time: launching an EC2 instance fails unless it carries an Environment tag set to one of "dev", "staging", or "prod" and also an Owner tag.&lt;/p&gt;

&lt;p&gt;The fourth block creates an IAM user and attaches the MFA-delete policy from above to that demo user.&lt;/p&gt;

&lt;p&gt;The fifth block creates an IAM role for the AWS Config service, attaches the managed Config policy to it, and then adds an additional inline policy so Config can write its logs to the S3 bucket.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;config.tf&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;This file creates the Config recorder plus 6 compliance rules.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ------------------------------------------------------------------------------
# AWS Config Recorder and Delivery Channel
# ------------------------------------------------------------------------------

# AWS Config Recorder
resource "aws_config_configuration_recorder" "main" {
  name     = "${var.project_name}-recorder"
  role_arn = aws_iam_role.config_role.arn

  recording_group {
    all_supported                 = true
    include_global_resource_types = true
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This records every configuration change in your AWS account.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# AWS Config Delivery Channel
resource "aws_config_delivery_channel" "main" {
  name           = "${var.project_name}-delivery-channel"
  s3_bucket_name = aws_s3_bucket.config_bucket.bucket
  depends_on     = [aws_config_configuration_recorder.main]
}

# Start the Config Recorder
resource "aws_config_configuration_recorder_status" "main" {
  name       = aws_config_configuration_recorder.main.name
  is_enabled = true
  depends_on = [aws_config_delivery_channel.main]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above block creates a Config delivery channel, which delivers all recorded configuration information to the S3 bucket, and then starts the Config recorder.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Config Rule: Ensure S3 buckets do not allow public write
resource "aws_config_config_rule" "s3_public_write_prohibited" {
  name = "s3-bucket-public-write-prohibited"

  source {
    owner             = "AWS"
    source_identifier = "S3_BUCKET_PUBLIC_WRITE_PROHIBITED"
  }

  depends_on = [aws_config_configuration_recorder.main]
}

# Config Rule: Ensure S3 buckets have encryption enabled
resource "aws_config_config_rule" "s3_encryption" {
  name = "s3-bucket-server-side-encryption-enabled"

  source {
    owner             = "AWS"
    source_identifier = "S3_BUCKET_SERVER_SIDE_ENCRYPTION_ENABLED"
  }

  depends_on = [aws_config_configuration_recorder.main]
}

# Config Rule: Ensure S3 buckets block public access
resource "aws_config_config_rule" "s3_public_read_prohibited" {
  name = "s3-bucket-public-read-prohibited"

  source {
    owner             = "AWS"
    source_identifier = "S3_BUCKET_PUBLIC_READ_PROHIBITED"
  }

  depends_on = [aws_config_configuration_recorder.main]
}

# Config Rule: Ensure EBS volumes are encrypted
resource "aws_config_config_rule" "ebs_encryption" {
  name = "encrypted-volumes"

  source {
    owner             = "AWS"
    source_identifier = "ENCRYPTED_VOLUMES"
  }

  depends_on = [aws_config_configuration_recorder.main]
}

# Config Rule: Ensure EC2 instances have required tags
resource "aws_config_config_rule" "required_tags" {
  name = "required-tags"

  source {
    owner             = "AWS"
    source_identifier = "REQUIRED_TAGS"
  }

  input_parameters = jsonencode({
    tag1Key = "Environment"
    tag2Key = "Owner"
  })

  scope {
    compliance_resource_types = [
      "AWS::EC2::Instance",
      "AWS::S3::Bucket"
    ]
  }

  depends_on = [aws_config_configuration_recorder.main]
}

# Config Rule: Ensure root account has MFA enabled
resource "aws_config_config_rule" "root_mfa_enabled" {
  name = "root-account-mfa-enabled"

  source {
    owner             = "AWS"
    source_identifier = "ROOT_ACCOUNT_MFA_ENABLED"
  }

  depends_on = [aws_config_configuration_recorder.main]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above block contains all the Config rules we created so that Config can detect compliance issues.&lt;/p&gt;

&lt;p&gt;It defines six compliance rules:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Rule&lt;/th&gt;
&lt;th&gt;Source Identifier&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;S3 Public Write Prohibited&lt;/td&gt;
&lt;td&gt;S3_BUCKET_PUBLIC_WRITE_PROHIBITED&lt;/td&gt;
&lt;td&gt;Block public writes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;S3 Encryption Enabled&lt;/td&gt;
&lt;td&gt;S3_BUCKET_SERVER_SIDE_ENCRYPTION_ENABLED&lt;/td&gt;
&lt;td&gt;Enforce SSE&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;S3 Public Read Prohibited&lt;/td&gt;
&lt;td&gt;S3_BUCKET_PUBLIC_READ_PROHIBITED&lt;/td&gt;
&lt;td&gt;Block public reads&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;EBS Volumes Encrypted&lt;/td&gt;
&lt;td&gt;ENCRYPTED_VOLUMES&lt;/td&gt;
&lt;td&gt;Encrypted volumes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Required Tags&lt;/td&gt;
&lt;td&gt;Custom parameters&lt;/td&gt;
&lt;td&gt;Tag validation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Root MFA Enabled&lt;/td&gt;
&lt;td&gt;ROOT_ACCOUNT_MFA_ENABLED&lt;/td&gt;
&lt;td&gt;Root account security&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;output.tf&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It surfaces important information after deployment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "config_rules" {
  value = [
    aws_config_config_rule.s3_public_write_prohibited.name,
    aws_config_config_rule.s3_encryption.name,
    aws_config_config_rule.s3_public_read_prohibited.name,
    aws_config_config_rule.ebs_encryption.name,
    aws_config_config_rule.required_tags.name,
    aws_config_config_rule.root_mfa_enabled.name,
  ]
}
output "config_recorder_status" {
  value = aws_config_configuration_recorder_status.main.is_enabled
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Verification &amp;amp; Testing:
&lt;/h2&gt;

&lt;p&gt;Now trigger the creation of all the resources using terraform commands.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init
terraform plan
terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After terraform apply, we can see 23 resources being created:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;IAM policies&lt;/li&gt;
&lt;li&gt;S3 bucket with security settings&lt;/li&gt;
&lt;li&gt;Config recorder&lt;/li&gt;
&lt;li&gt;6 Config rules&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  ✅ S3 Bucket Properties
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;✅ Versioning enabled - ✅ AES256 encryption - ✅ Public access blocked - ✅ AWS logs folder created&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  ✅ AWS Config Dashboard:
&lt;/h2&gt;

&lt;p&gt;We can see each resource with a status of compliant or non-compliant.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Compliant: 4 rules ✓
Non-Compliant: 1 resource (missing tags) ✗
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Test Compliance Detection:
&lt;/h2&gt;

&lt;p&gt;To test, create an untagged, unencrypted S3 bucket; Config detects it as non-compliant after about 2 minutes.&lt;/p&gt;

&lt;p&gt;Check Config → wait 2–3 minutes, then:&lt;/p&gt;

&lt;p&gt;Go to AWS Config → Rules&lt;br&gt;
Click “s3-bucket-server-side-encryption-enabled”&lt;br&gt;
See the new bucket flagged as NON_COMPLIANT (red)&lt;br&gt;
Config detected the violation automatically!&lt;/p&gt;
&lt;h2&gt;
  
  
  Key Learnings
&lt;/h2&gt;
&lt;h2&gt;
  
  
  Policy vs Config Rules:
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;IAM Policy&lt;/strong&gt;: prevents actions before they execute (preventive control)&lt;br&gt;
&lt;strong&gt;Config Rules&lt;/strong&gt;: detect issues after resources are created (detective control)&lt;/p&gt;
&lt;h2&gt;
  
  
  Terraform Best Practices:
&lt;/h2&gt;

&lt;p&gt;Use random_string for unique names&lt;br&gt;
Explicit dependencies between resources&lt;br&gt;
Reference Terraform Registry for resource syntax&lt;/p&gt;
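
&lt;p&gt;As a minimal sketch of the random_string pattern we used earlier for the Config bucket (resource bodies abbreviated here):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Random suffix keeps the bucket name globally unique
resource "random_string" "suffix" {
  length  = 6
  special = false
  upper   = false
}

resource "aws_s3_bucket" "config_bucket" {
  bucket = "${var.project_name}-config-${random_string.suffix.result}"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
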
&lt;h2&gt;
  
  
  Security Tip:
&lt;/h2&gt;

&lt;p&gt;Always use the Principle of Least Privilege. While we used Resource: "*" for this lab, in production, always restrict policies to specific ARNs!&lt;/p&gt;
&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;p&gt;See the below video for more understanding about AWS IAM Policy and Governance Setup using Terraform.&lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/sAtbDGi-82A"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>aws</category>
    </item>
    <item>
      <title>Day 20: Terraform Custom Modules for EKS</title>
      <dc:creator>Anil KUMAR</dc:creator>
      <pubDate>Wed, 17 Dec 2025 17:45:00 +0000</pubDate>
      <link>https://forem.com/anil_kumar_noolu/day-20-terraform-custom-modules-for-eks-31lh</link>
      <guid>https://forem.com/anil_kumar_noolu/day-20-terraform-custom-modules-for-eks-31lh</guid>
<description>&lt;p&gt;Welcome to Day 20 of the 30 Days of AWS Terraform challenge, an initiative by Piyush Sachdeva. In this blog, we will look at modules: what exactly they are, when to use them, and their real-life use cases in Terraform.&lt;/p&gt;

&lt;h2&gt;
  
  
  Modules:
&lt;/h2&gt;

&lt;p&gt;Modules are reusable pieces of Terraform code that encapsulate complexity so the same configuration can be reused wherever it fits.&lt;/p&gt;

&lt;p&gt;In other words, modules are self-contained packages of configuration code that group related resources together. Think of them as custom functions for your infrastructure: you define the logic once, then call it multiple times with different parameters. Where other programming languages have functions, Terraform has modules.&lt;/p&gt;
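
&lt;p&gt;As a minimal sketch (the module path and variables here are hypothetical), calling a module looks much like calling a function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# "Define" the logic once under ./modules/web_server,
# then "call" it with different parameters per environment.
module "web_server" {
  source        = "./modules/web_server"  # the function body
  instance_type = "t3.micro"              # the arguments
  environment   = "dev"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
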

&lt;p&gt;&lt;strong&gt;Use Cases of Modules&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Abstraction&lt;/strong&gt; - You want to hide 50 lines of complex networking code behind a simple 5-line block.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reduced Complexity&lt;/strong&gt; — A custom module lets you expose only the few variables your team actually cares about.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Consistency&lt;/strong&gt; — ensuring every EKS cluster in your company is built exactly the same way across Dev, Staging, and Prod.&lt;/p&gt;

&lt;h2&gt;
  
  
  Types of Modules:
&lt;/h2&gt;

&lt;p&gt;Modules are further divided into 3 types based on their use cases and functionality.&lt;/p&gt;

&lt;h2&gt;
  
  
  Public Modules:
&lt;/h2&gt;

&lt;p&gt;Public modules are community-driven modules hosted on the Terraform Registry. They are publicly accessible and free for anyone to use, without restrictions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Maintained by&lt;/strong&gt;: Individual contributors, the open-source community, or smaller organizations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Feature&lt;/strong&gt;: Great for common tasks (e.g., setting up a basic S3 bucket or a generic VPC).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Risk&lt;/strong&gt;: Quality and maintenance can vary; always check the download count and "stars" before using.&lt;/p&gt;

&lt;h2&gt;
  
  
  Partner Modules:
&lt;/h2&gt;

&lt;p&gt;Partner modules are maintained by HashiCorp technology partners and verified by HashiCorp, which reviews them alongside the partner.&lt;/p&gt;

&lt;p&gt;These are a subset of public modules but carry a "Verified" or "Partner" badge on the Terraform Registry.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Maintained by&lt;/strong&gt;: Major technology companies (like AWS, Azure, Google Cloud, or HashiCorp itself) in partnership with HashiCorp.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Feature&lt;/strong&gt;: These undergo a rigorous verification process to ensure they follow best practices and are actively maintained.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use Case&lt;/strong&gt;: Ideal for mission-critical infrastructure where stability and official support are required.&lt;/p&gt;

&lt;h2&gt;
  
  
  Custom Modules:
&lt;/h2&gt;

&lt;p&gt;Custom modules are modules developed by you or your organization (or by individual practitioners).&lt;br&gt;
We can customize these modules to our requirements and use them for our own use cases.&lt;/p&gt;

&lt;p&gt;These are modules created internally by you or your organization to meet specific business needs or security standards.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Maintained by&lt;/strong&gt;: Your internal DevOps or Platform teams.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Feature&lt;/strong&gt;: They often wrap public/partner modules to "hard-code" organizational standards (e.g., always enabling encryption or specific tagging).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Source&lt;/strong&gt;: Usually stored in a Private Module Registry (via HCP Terraform/Enterprise) or directly in a private Git repository.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxmc9cvlhcyi0aylo8vrl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxmc9cvlhcyi0aylo8vrl.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Modules working flow Architecture:
&lt;/h2&gt;

&lt;p&gt;In today's blog, we will see how this Terraform configuration demonstrates custom module creation for EKS cluster deployment.&lt;/p&gt;

&lt;p&gt;Creating an EKS cluster mainly needs a VPC for networking, IAM for cluster and worker-node roles, EKS for the control plane and worker nodes, and EC2 for the underlying instances.&lt;/p&gt;
&lt;h2&gt;
  
  
  Structure:
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;modules/
├── vpc/              # Custom VPC module
│   ├── main.tf
│   ├── variables.tf
│   └── outputs.tf
├── iam/              # Custom IAM roles module
│   ├── main.tf
│   ├── variables.tf
│   └── outputs.tf
├── eks/              # Custom EKS cluster module
│   ├── main.tf
│   ├── variables.tf
│   ├── outputs.tf
│   └── templates/
│       └── userdata.sh
└── secrets-manager/  # Custom Secrets Manager module
    ├── main.tf
    ├── variables.tf
    └── outputs.tf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Modules Overview:
&lt;/h2&gt;
&lt;h2&gt;
  
  
  1. VPC Module (modules/vpc/):
&lt;/h2&gt;

&lt;p&gt;Creates networking infrastructure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;VPC with custom CIDR&lt;/li&gt;
&lt;li&gt;Public subnets (3 AZs) with Internet Gateway&lt;/li&gt;
&lt;li&gt;Private subnets (3 AZs) with NAT Gateway&lt;/li&gt;
&lt;li&gt;Route tables and associations&lt;/li&gt;
&lt;li&gt;EKS-required subnet tags&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  2. IAM Module (modules/iam/)
&lt;/h2&gt;

&lt;p&gt;Creates IAM resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;EKS cluster IAM role with policies&lt;/li&gt;
&lt;li&gt;Node group IAM role with policies&lt;/li&gt;
&lt;li&gt;OIDC provider for IRSA (IAM Roles for Service Accounts)&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  3. EKS Module (modules/eks/)
&lt;/h2&gt;

&lt;p&gt;Creates EKS cluster resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;EKS control plane with KMS encryption&lt;/li&gt;
&lt;li&gt;CloudWatch log group&lt;/li&gt;
&lt;li&gt;Security groups (cluster + nodes)&lt;/li&gt;
&lt;li&gt;EKS addons (CoreDNS, kube-proxy, VPC CNI)&lt;/li&gt;
&lt;li&gt;Managed node groups with launch templates&lt;/li&gt;
&lt;li&gt;Customizable node group configurations&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  4. Secrets Manager Module (modules/secrets-manager/)
&lt;/h2&gt;

&lt;p&gt;Creates secrets management resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;KMS key for secrets encryption&lt;/li&gt;
&lt;li&gt;Database credentials secret (optional)&lt;/li&gt;
&lt;li&gt;API keys secret (optional)&lt;/li&gt;
&lt;li&gt;Application config secret (optional)&lt;/li&gt;
&lt;li&gt;IAM policy for reading secrets&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  The setup includes:
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;VPC&lt;/strong&gt;: Custom VPC with public and private subnets across 3 availability zones&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;EKS Cluster&lt;/strong&gt;: Managed Kubernetes cluster with latest version&lt;br&gt;
&lt;strong&gt;Node Groups&lt;/strong&gt;: General purpose node group (on-demand instances), Spot instance node group for cost optimization&lt;br&gt;
&lt;strong&gt;Add-ons:&lt;/strong&gt; CoreDNS, kube-proxy, VPC CNI, and EBS CSI driver&lt;br&gt;
&lt;strong&gt;IRSA&lt;/strong&gt;: IAM Roles for Service Accounts enabled for fine-grained permissions&lt;/p&gt;

&lt;p&gt;Think of this structure as Parent and Child Modules.&lt;/p&gt;

&lt;p&gt;Inside each module, be it the parent (root) module or a child module such as VPC, IAM, or EKS, we have separate Terraform files like main.tf, variables.tf, output.tf, providers.tf, and so on.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft9sqatu141ni1v4w5jbc.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft9sqatu141ni1v4w5jbc.jpg" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;variables.tf and outputs.tf play a key role: they define how the root and child modules communicate with each other.&lt;/p&gt;

&lt;p&gt;The root module's main.tf also passes values into each child module's main.tf, much like calling a function with parameters.&lt;/p&gt;

&lt;p&gt;Let's deep-dive into the VPC module now.&lt;/p&gt;

&lt;p&gt;Below is the &lt;strong&gt;main.tf&lt;/strong&gt; code for root Module:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Custom VPC Module
module "vpc" {
  source = "./modules/vpc"

  name_prefix     = var.cluster_name
  vpc_cidr        = var.vpc_cidr
  azs             = slice(data.aws_availability_zones.available.names, 0, 3)
  private_subnets = var.private_subnets
  public_subnets  = var.public_subnets

  enable_nat_gateway = true
  single_nat_gateway = true

  # Required tags for EKS
  public_subnet_tags = {
    "kubernetes.io/role/elb"                    = "1"
    "kubernetes.io/cluster/${var.cluster_name}" = "shared"
  }

  private_subnet_tags = {
    "kubernetes.io/role/internal-elb"           = "1"
    "kubernetes.io/cluster/${var.cluster_name}" = "shared"
  }

  tags = {
    Environment = var.environment
    Terraform   = "true"
    Project     = "EKS-Day20"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Instead of writing out the entire VPC resource set, your root configuration (where you run terraform apply) looks like the block above.&lt;/p&gt;

&lt;p&gt;Notice that we have not written the VPC resources inline; the source argument simply points at the custom module we wrote for the VPC:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;source = "./modules/vpc"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's take one argument as an example and see how parameters are passed between the root and child modules.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  azs = slice(data.aws_availability_zones.available.names, 0, 3)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above block, you can see that we take the availability zones from a data source instead of hardcoding them, and pass the result to the module input named azs.&lt;/p&gt;

&lt;p&gt;Now, to use azs inside the VPC module, we declare the variable azs in the VPC module's variables.tf and then reference it in the module's main.tf, so Terraform knows its data type and the value it receives from the root module.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "azs" {
  description = "List of availability zones"
  type        = list(string)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# VPC
resource "aws_vpc" "main" {
  cidr_block           = var.vpc_cidr
  enable_dns_hostnames = true
  enable_dns_support   = true
}

# Public Subnets
resource "aws_subnet" "public" {
  count                   = length(var.public_subnets)
  vpc_id                  = aws_vpc.main.id
  cidr_block              = var.public_subnets[count.index]
  availability_zone       = var.azs[count.index]
  map_public_ip_on_launch = true
}

# Private Subnets
resource "aws_subnet" "private" {
  count             = length(var.private_subnets)
  vpc_id            = aws_vpc.main.id
  cidr_block        = var.private_subnets[count.index]
  availability_zone = var.azs[count.index]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every value the root module passes to a child module must have a matching variable declaration in the child module's variables.tf.&lt;/p&gt;

&lt;p&gt;Child modules do not talk to each other directly; the VPC module never communicates with the EKS module. Each child communicates only with the root module, and the root wires them together.&lt;/p&gt;

&lt;p&gt;To pass a value from a child module back to the root, declare it as an output in the child module and reference it in the root module's main.tf.&lt;/p&gt;
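
&lt;p&gt;For example, once the VPC child module outputs vpc_id, the root module can read it and re-expose it (a minimal sketch):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# In the root module: consume a child output and surface it again
output "vpc_id" {
  description = "VPC ID from the child vpc module"
  value       = module.vpc.vpc_id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
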

&lt;p&gt;Also we can add dependency in modules:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Custom EKS Module
module "eks" {
  source = "./modules/eks"

  cluster_name       = var.cluster_name
  kubernetes_version = var.kubernetes_version
  vpc_id             = module.vpc.vpc_id
  subnet_ids         = module.vpc.private_subnets

  cluster_role_arn = module.iam.cluster_role_arn
  node_role_arn    = module.iam.node_group_role_arn

  endpoint_public_access  = true
  endpoint_private_access = true
  public_access_cidrs     = ["0.0.0.0/0"]

  enable_irsa = true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above block, the EKS module references vpc_id and subnet_ids from the VPC module, so the EKS module is created only after the VPC module has been fully created.&lt;/p&gt;

&lt;p&gt;You can also see that vpc_id and subnet_ids reference outputs declared in the VPC module's outputs.tf:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "vpc_id" {
  description = "The ID of the VPC"
  value       = aws_vpc.main.id
}

output "vpc_cidr_block" {
  description = "The CIDR block of the VPC"
  value       = aws_vpc.main.cidr_block
}

output "public_subnets" {
  description = "List of IDs of public subnets"
  value       = aws_subnet.public[*].id
}

output "private_subnets" {
  description = "List of IDs of private subnets"
  value       = aws_subnet.private[*].id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So the values in the VPC module's outputs.tf are consumed in the root module's main.tf, with variables.tf and outputs.tf wiring the two together.&lt;/p&gt;

&lt;p&gt;Hope you understand the flow.&lt;/p&gt;

&lt;h2&gt;
  
  
  Execution:
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init
terraform plan
terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Executing the above commands creates the infrastructure: VPC and IAM are created in parallel first, followed by EKS.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;p&gt;Custom Terraform modules transform infrastructure from scripts into systems. For EKS in particular, modular design is the difference between a demo cluster and a production-grade platform.&lt;/p&gt;

&lt;p&gt;This concludes Day 20 of custom modules for EKS Cluster. See you in the next blog.&lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/a_j6Gq-KtxE"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>aws</category>
    </item>
    <item>
      <title>Day 19: Terraform Provisioners</title>
      <dc:creator>Anil KUMAR</dc:creator>
      <pubDate>Mon, 15 Dec 2025 17:37:06 +0000</pubDate>
      <link>https://forem.com/anil_kumar_noolu/day-19-terraform-provisioners-1hlp</link>
      <guid>https://forem.com/anil_kumar_noolu/day-19-terraform-provisioners-1hlp</guid>
<description>&lt;p&gt;Today marks Day 19 of the 30 Days of AWS Terraform challenge, an initiative by Piyush Sachdeva. Today we will deep-dive into Terraform provisioners: what exactly a provisioner is, the different types of provisioners, and how provisioners help you write clean, effective code for real-world use cases.&lt;/p&gt;

&lt;h2&gt;
  
  
Provisioner:
&lt;/h2&gt;

&lt;p&gt;Think of a provisioner as something that performs a task: executing a script, running a command, or doing some other operation.&lt;/p&gt;

&lt;p&gt;Provisioners are Terraform's way to execute scripts or commands during resource creation or destruction. They enable you to perform actions that go beyond Terraform's declarative resource management.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Concepts:
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Bootstrapping&lt;/strong&gt;: Performing initial setup like installing software, configuring services, or preparing an instance for service.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;File Transfer&lt;/strong&gt;: Copying files or directories between the machine running Terraform and the newly created remote resource.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Post-Deployment Cleanup/Operations&lt;/strong&gt;: Executing final commands or scripts after a resource is created or before it is destroyed.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provisioners run during resource lifecycle events (creation or destruction)&lt;/li&gt;
&lt;li&gt;They are a "last resort" - Terraform recommends using native cloud-init, user_data, or configuration management tools when possible&lt;/li&gt;
&lt;li&gt;They execute only once during resource creation (not on updates)&lt;/li&gt;
&lt;li&gt;Failure handling: By default, if a provisioner fails, the resource is marked as "tainted" and will be recreated on next apply&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faz0mgfyzrws1lgirhij2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faz0mgfyzrws1lgirhij2.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Types of Provisioners:
&lt;/h2&gt;

&lt;p&gt;There are 3 types of provisioners available based on their use cases such as local-exec, remote-exec and file provisioner.&lt;/p&gt;

&lt;h2&gt;
  
  
  Local-exec:
&lt;/h2&gt;

&lt;p&gt;Local-exec provisioners run commands on the machine where Terraform itself runs (the Terraform host). They require no remote connection such as SSH or WinRM.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use cases:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Trigger webhooks or API calls&lt;/li&gt;
&lt;li&gt;Update local inventory files&lt;/li&gt;
&lt;li&gt;Run local scripts for orchestration&lt;/li&gt;
&lt;li&gt;Send notifications (Slack, email)&lt;/li&gt;
&lt;li&gt;Register resources in external systems&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Syntax:
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provisioner "local-exec" {
  command = "echo ${self.public_ip} &amp;gt;&amp;gt; inventory.txt"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above block, we are simply executing an echo command on the Terraform host machine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tip:&lt;/strong&gt; Always remember that provisioners must be defined inside Terraform resource blocks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Remote-exec:
&lt;/h2&gt;

&lt;p&gt;We use the remote-exec provisioner when we want to perform tasks on a remote machine over an SSH connection.&lt;br&gt;
For example, say you have created an EC2 instance with Terraform and want to install nginx or Apache on it in the same code. The remote-exec provisioner waits until the EC2 instance is created and then runs the installation commands.&lt;/p&gt;

&lt;p&gt;The connection we will be using for this is SSH for Linux and WinRM for Windows.&lt;/p&gt;
&lt;h2&gt;
  
  
  Use cases:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Install packages (nginx, docker, etc.)&lt;/li&gt;
&lt;li&gt;Run initialization commands&lt;/li&gt;
&lt;li&gt;Configure system settings&lt;/li&gt;
&lt;li&gt;Start services or daemons&lt;/li&gt;
&lt;li&gt;Quick bootstrap tasks&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;
  
  
  Syntax:
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provisioner "remote-exec" {
  inline = [
    "sudo apt-get update",
    "sudo apt-get install -y nginx",
    "sudo systemctl start nginx"
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;We use an inline block when executing remote-exec provisioners.&lt;/p&gt;
&lt;h2&gt;
  
  
  File Provisioner:
&lt;/h2&gt;

&lt;p&gt;We use the file provisioner when we want to copy a file from one machine to another. For example, you have a file on your local machine and want to copy it to the remote instance you created with your Terraform code.&lt;/p&gt;

&lt;p&gt;In this case, we can use the file provisioner. It needs SSH connectivity just like remote-exec, and to connect to an instance you also need a key pair.&lt;/p&gt;
&lt;h2&gt;
  
  
  Use cases:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Copy configuration files&lt;/li&gt;
&lt;li&gt;Deploy scripts for execution&lt;/li&gt;
&lt;li&gt;Transfer SSL certificates&lt;/li&gt;
&lt;li&gt;Upload application binaries&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;
  
  
  Syntax:
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provisioner "file" {
  source      = "scripts/setup.sh"
  destination = "/tmp/setup.sh"
}

provisioner "remote-exec" {
  inline = [
    "chmod +x /tmp/setup.sh",
    "/tmp/setup.sh"
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;In the above block, we use the file provisioner to copy the file from source to destination, then use the remote-exec provisioner to make it executable and run it.&lt;/p&gt;
&lt;h2&gt;
  
  
  Connection Block:
&lt;/h2&gt;

&lt;p&gt;For remote-exec and file provisioners, you need a connection block:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;connection {
  type        = "ssh"              # or "winrm" for Windows
  user        = "ubuntu"           # SSH user
  private_key = file("~/.ssh/id_rsa")  # SSH private key
  host        = self.public_ip     # Target host
  timeout     = "5m"               # Connection timeout
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Best Practices:
&lt;/h2&gt;

&lt;h2&gt;
  
  
  DO:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Use provisioners as a last resort&lt;/li&gt;
&lt;li&gt;Prefer cloud-init, user_data, or AMI baking (Packer)&lt;/li&gt;
&lt;li&gt;Keep provisioner scripts idempotent&lt;/li&gt;
&lt;li&gt;Handle errors gracefully with on_failure parameter&lt;/li&gt;
&lt;li&gt;Use connection timeouts to avoid hanging&lt;/li&gt;
&lt;li&gt;Test thoroughly in non-production environments&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  DON'T:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Use provisioners when native Terraform resources exist&lt;/li&gt;
&lt;li&gt;Rely on provisioners for critical configuration&lt;/li&gt;
&lt;li&gt;Forget that provisioners only run on creation&lt;/li&gt;
&lt;li&gt;Store sensitive data in provisioner commands&lt;/li&gt;
&lt;li&gt;Use complex logic - move to proper config management tools&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Code Execution:
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;main.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&amp;gt; 4.0"
    }
  }
}

data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical (Ubuntu official)

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
}

resource "aws_security_group" "ssh" {
  name        = "tf-prov-demo-ssh"
  description = "Allow SSH inbound"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "demo" {
  ami                    = data.aws_ami.ubuntu.id
  instance_type          = var.instance_type
  key_name               = var.key_name
  vpc_security_group_ids = [aws_security_group.ssh.id]

  tags = {
    Name = "terraform-provisioner-demo"
  }

}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above code, we have written basic Terraform code to create an EC2 instance using an AMI data source, with a security group that allows inbound SSH on port 22 and all outbound traffic.&lt;/p&gt;

&lt;h2&gt;
  
  
  Local-exec:
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_instance" "demo" {
  ami                    = data.aws_ami.ubuntu.id
  instance_type          = var.instance_type
  key_name               = var.key_name
  vpc_security_group_ids = [aws_security_group.ssh.id]

  tags = {
    Name = "terraform-provisioner-demo"
  }
  provisioner "local-exec" {
    command = "echo 'Local-exec: created instance ${self.id} with IP ${self.public_ip}'"
 }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We have added a local-exec provisioner that runs a simple echo command with the instance ID and public IP of the created EC2 instance.&lt;/p&gt;

&lt;h2&gt;
  
  
  remote-exec:
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_instance" "demo" {
  ami                    = data.aws_ami.ubuntu.id
  instance_type          = var.instance_type
  key_name               = var.key_name
  vpc_security_group_ids = [aws_security_group.ssh.id]

  tags = {
    Name = "terraform-provisioner-demo"
  }
  provisioner "remote-exec" {
    inline = [
      "sudo apt-get update",
      "echo 'Hello from remote-exec' | sudo tee /tmp/remote_exec.txt",
    ]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We have added a remote-exec provisioner that updates the packages on the remote aws_instance.demo server and then echoes a simple line to the file "/tmp/remote_exec.txt".&lt;/p&gt;

&lt;p&gt;After the resource is created, you can SSH into the instance using the key pair and check whether the file has been created at that location.&lt;/p&gt;
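
&lt;p&gt;For example (the key file name, user, and IP below are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh -i my-keypair.pem ubuntu@INSTANCE_PUBLIC_IP
cat /tmp/remote_exec.txt
# the file should contain the line echoed by the provisioner
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;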

&lt;h2&gt;
  
  
  File Provisioner:
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_instance" "demo" {
  ami                    = data.aws_ami.ubuntu.id
  instance_type          = var.instance_type
  key_name               = var.key_name
  vpc_security_group_ids = [aws_security_group.ssh.id]

  tags = {
    Name = "terraform-provisioner-demo"
  }
  provisioner "file" {
    source      = "${path.module}/scripts/welcome.sh"
    destination = "/tmp/welcome.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo chmod +x /tmp/welcome.sh",
      "sudo /tmp/welcome.sh"
    ]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we are using both remote-exec and file provisioners. The above script copies a script (scripts/welcome.sh) to the instance, then executes it. Good pattern for more complex bootstrapping when script files are preferred.&lt;/p&gt;

&lt;p&gt;After the script executes, you can SSH into the instance and check that everything works as specified in the provisioners.&lt;/p&gt;

&lt;p&gt;Also, while doing hands-on exercises with provisioners, avoid running terraform destroy followed by terraform apply every time, as this is unnecessary and a waste of time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Taint:
&lt;/h2&gt;

&lt;p&gt;Instead, use taint: when you mark a resource as tainted, Terraform will recreate it on the next terraform apply.&lt;/p&gt;

&lt;p&gt;So just taint aws_instance.demo and then run terraform apply; the instance will be recreated and the local-exec, remote-exec, and file provisioners will run again.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform taint aws_instance.demo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
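
&lt;p&gt;Note: on Terraform v0.15.2 and later, terraform taint is deprecated in favor of the -replace option, which plans and forces the recreation in a single step:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform apply -replace="aws_instance.demo"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;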



&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;p&gt;This marks Day 19 of the 30 Days of AWS Terraform challenge. We did a deep dive into Terraform provisioners: what exactly provisioners are, the different types, the use cases for each, and when to use a specific provisioner.&lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/DkhAgYa0448"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>aws</category>
    </item>
    <item>
      <title>Day 18: Image Processing Serverless Project using AWS Lambda</title>
      <dc:creator>Anil KUMAR</dc:creator>
      <pubDate>Sun, 14 Dec 2025 17:41:59 +0000</pubDate>
      <link>https://forem.com/anil_kumar_noolu/day-18-image-processing-serverless-project-using-aws-lambda-4732</link>
      <guid>https://forem.com/anil_kumar_noolu/day-18-image-processing-serverless-project-using-aws-lambda-4732</guid>
      <description>&lt;p&gt;Today marks the Day 18 of 30 Days of Terraform challenge by Piyush Sachdeva. In this Blog, we will deep dive into a project of Images Processing Serverless Project using AWS Lambda entirely using Terraform. We’ll walk through an end-to-end image processing project i.e. from uploading a file to S3, to automatically processing it using a Lambda function, all orchestrated through Terraform.&lt;/p&gt;

&lt;p&gt;Before diving deep into the project, lets first understand what exactly is AWS Lambda and why it is used and what is the significance of that.&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS Lambda:
&lt;/h2&gt;

&lt;p&gt;At its core, AWS Lambda is a serverless function. What does serverless mean? You might imagine that for any service to be up and running, we need a server, and that's true.&lt;/p&gt;

&lt;p&gt;When you want to host any app, you need to set up a server, such as an EC2 instance, and deploy the app on it.&lt;/p&gt;

&lt;p&gt;So does serverless mean there are no servers at all? Not really. There are servers involved, but the difference is that we don’t manage them.&lt;/p&gt;

&lt;p&gt;With Lambda, we don’t provision servers at all. Instead of thinking about machines, we think about functions. We simply write our application code, package it, and upload it as a Lambda function.&lt;/p&gt;

&lt;p&gt;AWS takes care of everything else.&lt;/p&gt;

&lt;p&gt;The servers still exist, but AWS manages them for us. We don’t worry about operating systems, scaling, or uptime. Our responsibility ends with the code.&lt;/p&gt;

&lt;p&gt;And here’s the key difference:&lt;br&gt;
a Lambda function does not run all the time; it runs only when an event triggers it.&lt;/p&gt;

&lt;p&gt;An event could be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A file being uploaded to an S3 bucket&lt;/li&gt;
&lt;li&gt;A scheduled time (for example, every Monday at 7 AM)&lt;/li&gt;
&lt;li&gt;A real-time system event&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqrrizi9d7n6mjouc5eij.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqrrizi9d7n6mjouc5eij.jpg" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Project Architecture:
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;┌─────────────────┐
│  Upload Image   │  You upload image via AWS CLI or SDK
│   to S3 Bucket  │
└────────┬────────┘
         │ s3:ObjectCreated:* event
         ↓
┌─────────────────┐
│ Lambda Function │  Automatically triggered
│ Image Processor │  - Compresses JPEG (quality 85)
└────────┬────────┘  - Low quality JPEG (quality 60)
         │            - WebP format
         │            - PNG format
         │            - Thumbnail (200x200)
         ↓
┌─────────────────┐
│ Processed S3    │  5 variants saved automatically
│    Bucket       │
└─────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;We’ll have two S3 buckets:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;One bucket where we upload the original image&lt;/li&gt;
&lt;li&gt;Another bucket where the processed images will be stored&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The first bucket is our source bucket. Whenever we upload an image to this bucket, that upload creates an S3 event.&lt;/p&gt;

&lt;p&gt;And remember what we discussed earlier, events are exactly what serverless functions like Lambda are waiting for.&lt;/p&gt;

&lt;p&gt;So as soon as an image is uploaded, that S3 event will trigger our Lambda function.&lt;/p&gt;

&lt;p&gt;This Lambda function is where all the image processing logic lives. It will take the original image and automatically generate:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A JPEG image with 85% quality&lt;/li&gt;
&lt;li&gt;Another JPEG image with 60% quality&lt;/li&gt;
&lt;li&gt;A WebP version&lt;/li&gt;
&lt;li&gt;A PNG version&lt;/li&gt;
&lt;li&gt;A thumbnail image resized to 200 by 200&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All of this happens without us clicking any extra buttons or running any manual commands.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1g3chr93bzoibyjqh9ss.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1g3chr93bzoibyjqh9ss.jpg" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Components:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Upload S3 Bucket: Source bucket for original images&lt;/li&gt;
&lt;li&gt;Processed S3 Bucket: Destination bucket for processed variants&lt;/li&gt;
&lt;li&gt;Lambda Function: Image processor with Pillow library&lt;/li&gt;
&lt;li&gt;Lambda Layer: Pillow 10.4.0 for image manipulation&lt;/li&gt;
&lt;li&gt;S3 Event Trigger: Automatically invokes Lambda on upload&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;
  
  
  Terraform Code:
&lt;/h2&gt;

&lt;p&gt;Now we will go through the code for the project execution. &lt;/p&gt;

&lt;p&gt;The first step is to clone the repository and move into the Day 18 directory.&lt;/p&gt;

&lt;p&gt;The repository lives &lt;a href="https://github.com/piyushsachdeva/Terraform-Full-Course-Aws" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once we clone it, navigate into the day-18 folder and then into the terraform directory. This is where all the Terraform files for today’s project live.&lt;/p&gt;
&lt;h2&gt;
  
  
  1. Making unique resource names:
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "random_id" "suffix" {
  byte_length = 4
}

locals {
  bucket_prefix         = "${var.project_name}-${var.environment}"
  upload_bucket_name    = "${local.bucket_prefix}-upload-${random_id.suffix.hex}"
  processed_bucket_name = "${local.bucket_prefix}-processed-${random_id.suffix.hex}"
  lambda_function_name  = "${var.project_name}-${var.environment}-processor"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Since S3 bucket names must be globally unique, we need to name our buckets carefully so they do not clash with existing ones. For this we use Terraform's random_id resource, which generates random characters that we append to our bucket names as a suffix.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We build a common bucket prefix using the project name and environment&lt;/li&gt;
&lt;li&gt;We create two bucket names i.e. one for uploads and one for processed images&lt;/li&gt;
&lt;li&gt;We append the random suffix so the names stay unique&lt;/li&gt;
&lt;li&gt;We also define a clear name for our Lambda function&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  2. Creating the Source S3 Bucket
&lt;/h2&gt;

&lt;p&gt;This project begins with an S3 bucket that acts as the source bucket. We will upload an image here, which starts the entire image processing workflow.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# S3 Bucket for uploading original images (SOURCE)
resource "aws_s3_bucket" "upload_bucket" {
  bucket = local.upload_bucket_name
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We’re simply creating an S3 bucket and giving it the name we already prepared using locals.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Enabling Versioning:
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_s3_bucket_versioning" "upload_bucket" {
  bucket = aws_s3_bucket.upload_bucket.id

  versioning_configuration {
    status = "Enabled"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Versioning helps us keep track of changes. If the same file name is uploaded again, S3 doesn’t overwrite the old object; it stores a new version instead. Even though it is not strictly needed in this project, we keep it because it is an industry best practice.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Enabling Server-Side Encryption:
&lt;/h2&gt;

&lt;p&gt;Next, we enable server-side encryption.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_s3_bucket_server_side_encryption_configuration" "upload_bucket" {
  bucket = aws_s3_bucket.upload_bucket.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we enable server-side encryption on the bucket so that any image uploaded to it is encrypted at rest using AES-256. We don’t need to manage encryption keys manually; AWS takes care of that for us.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Making Bucket Private:
&lt;/h2&gt;

&lt;p&gt;We make the source bucket private so that no one else can access it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_s3_bucket_public_access_block" "upload_bucket" {
  bucket = aws_s3_bucket.upload_bucket.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By blocking public ACLs and policies, we make sure the bucket isn’t accidentally exposed. In real production systems, public access is usually handled through controlled layers in front of S3, not directly on the bucket itself.&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Creating Destination S3 Bucket:
&lt;/h2&gt;

&lt;p&gt;Now we repeat the same steps for the destination bucket.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_s3_bucket" "processed_bucket" {
  bucket = local.processed_bucket_name
}

resource "aws_s3_bucket_versioning" "processed_bucket" {
  bucket = aws_s3_bucket.processed_bucket.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "processed_bucket" {
  bucket = aws_s3_bucket.processed_bucket.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

resource "aws_s3_bucket_public_access_block" "processed_bucket" {
  bucket = aws_s3_bucket.processed_bucket.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We have created the bucket, enabled versioning, enabled server-side encryption, and made access to the bucket private.&lt;/p&gt;

&lt;h2&gt;
  
  
  IAM Roles and Policies:
&lt;/h2&gt;

&lt;p&gt;This is an important section, and we need to be very careful about exactly what access the Lambda role needs for this project.&lt;/p&gt;

&lt;p&gt;Instead of hard-coding permissions or credentials, AWS uses roles to define what a service is allowed to do. In our case, we want the Lambda function to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Write logs to CloudWatch so we can see what’s happening&lt;/li&gt;
&lt;li&gt;Read images from the source bucket&lt;/li&gt;
&lt;li&gt;Write processed images to the destination bucket&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  1. Creating the IAM Role for Lambda:
&lt;/h2&gt;

&lt;p&gt;We start by creating an IAM role that Lambda can assume.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_iam_role" "lambda_role" {
  name = "${local.lambda_function_name}-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "lambda.amazonaws.com"
        }
      }
    ]
  })
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This role doesn’t give any permissions yet. It simply says:&lt;br&gt;
This role can be assumed by AWS Lambda. &lt;br&gt;
AWS even provides a policy generator to help create these documents, which makes life easier when you’re starting out.&lt;/p&gt;
&lt;h2&gt;
  
  
  2. Defining the Permissions with an IAM Policy:
&lt;/h2&gt;

&lt;p&gt;Next, we create a policy that tells AWS exactly what this Lambda function is allowed to do.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_iam_role_policy" "lambda_policy" {
  name = "${local.lambda_function_name}-policy"
  role = aws_iam_role.lambda_role.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "logs:CreateLogGroup",
          "logs:CreateLogStream",
          "logs:PutLogEvents"
        ]
        Resource = "arn:aws:logs:${var.aws_region}:*:*"
      },
      {
        Effect = "Allow"
        Action = [
          "s3:GetObject",
          "s3:GetObjectVersion"
        ]
        Resource = "${aws_s3_bucket.upload_bucket.arn}/*"
      },
      {
        Effect = "Allow"
        Action = [
          "s3:PutObject",
          "s3:PutObjectAcl"
        ]
        Resource = "${aws_s3_bucket.processed_bucket.arn}/*"
      }
    ]
  })
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are 3 blocks in the above IAM JSON Policy:&lt;/p&gt;

&lt;p&gt;The first block allows the Lambda function to create log groups, log streams, and write logs. Without this, we’d have no visibility into what the function is doing, especially if something goes wrong.&lt;/p&gt;

&lt;p&gt;The second block allows Lambda to read objects from the source bucket. This is how it gets access to the uploaded image.&lt;/p&gt;

&lt;p&gt;The third block allows Lambda to write objects to the destination bucket. This is where all the processed images will be stored.&lt;/p&gt;

&lt;p&gt;We could also grant full S3 access, but that is not recommended; best practice is to follow the principle of least privilege.&lt;/p&gt;

&lt;p&gt;With this IAM role and policy in place, our Lambda function will be able to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Read images from S3 bucket&lt;/li&gt;
&lt;li&gt;Process them using Pillow Libraries&lt;/li&gt;
&lt;li&gt;Store the results to Destination Bucket&lt;/li&gt;
&lt;li&gt;Write logs to Cloudwatch to inspect&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  3. LAMBDA LAYER (Pillow):
&lt;/h2&gt;

&lt;p&gt;A Lambda layer is a way to package external libraries and dependencies separately from our function code. Instead of bundling everything inside the function zip, we place shared or heavy dependencies into a layer and then attach that layer to the Lambda function.&lt;/p&gt;

&lt;p&gt;This keeps the function code clean and makes dependencies easier to manage.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_lambda_layer_version" "pillow_layer" {
  filename            = "${path.module}/pillow_layer.zip"
  layer_name          = "${var.project_name}-pillow-layer"
  compatible_runtimes = ["python3.12"]
  description         = "Pillow library for image processing"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here’s what’s happening:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;filename points to a zip file that contains the Pillow library&lt;/li&gt;
&lt;li&gt;layer_name gives the layer a clear, readable name&lt;/li&gt;
&lt;li&gt;compatible_runtimes ensures this layer works with Python 3.12&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;How do we create the pillow_layer.zip file in the first place?&lt;/p&gt;

&lt;p&gt;Because AWS Lambda runs on Linux, the dependencies inside the layer must also be built for a Linux environment. This is important, especially if you’re working on macOS or Windows.&lt;/p&gt;

&lt;p&gt;To solve this, we use Docker.&lt;/p&gt;
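
&lt;p&gt;A minimal sketch of that build step (the image name and commands here are illustrative; the repository ships a helper script that does this for you, and the directory must be named python/ so Lambda can find the layer contents):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Install Pillow for a Linux/x86_64 Python 3.12 environment into ./python
docker run --platform linux/amd64 --rm -v "$PWD":/work -w /work python:3.12 \
  pip install pillow -t python

# Zip it in the layout Lambda layers expect (python/...)
zip -r pillow_layer.zip python
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;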

&lt;h2&gt;
  
  
  4. LAMBDA FUNCTION (Image Processor):
&lt;/h2&gt;

&lt;p&gt;Our Lambda function is written in Python and lives inside the repository. To package it correctly, we use a Terraform data source.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Data source for Lambda function zip
data "archive_file" "lambda_zip" {
  type        = "zip"
  source_file = "${path.module}/../lambda/lambda_function.py"
  output_path = "${path.module}/lambda_function.zip"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This data source takes the Python file, compresses it into a zip archive, and makes it ready for deployment.&lt;/p&gt;

&lt;p&gt;Even though we’re working with a local file here, Terraform treats this as data it needs to reference during deployment which is exactly what data sources are designed for.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Defining the Lambda Function:
&lt;/h2&gt;

&lt;p&gt;Now we define the Lambda function itself.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_lambda_function" "image_processor" {
  filename         = data.archive_file.lambda_zip.output_path
  function_name    = local.lambda_function_name
  role             = aws_iam_role.lambda_role.arn
  handler          = "lambda_function.lambda_handler"
  source_code_hash = data.archive_file.lambda_zip.output_base64sha256
  runtime          = "python3.12"
  timeout          = 60
  memory_size      = 1024

  layers = [aws_lambda_layer_version.pillow_layer.arn]

  environment {
    variables = {
      PROCESSED_BUCKET = aws_s3_bucket.processed_bucket.id
      LOG_LEVEL        = "INFO"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above block:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;filename points to the zip file created earlier&lt;/li&gt;
&lt;li&gt;function_name gives the Lambda function a clear identity&lt;/li&gt;
&lt;li&gt;role attaches the IAM role we created, allowing the function to access S3 and logs&lt;/li&gt;
&lt;li&gt;handler tells Lambda where execution begins in the Python file&lt;/li&gt;
&lt;li&gt;runtime specifies Python 3.12&lt;/li&gt;
&lt;li&gt;timeout is set to 60 seconds, which is more than enough for image processing&lt;/li&gt;
&lt;li&gt;memory_size is set to 1024 MB to give the function enough resources&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  6. CloudWatch Logs:
&lt;/h2&gt;

&lt;p&gt;We will create a cloudwatch log group to make sure logs are retained in a predictable way.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_cloudwatch_log_group" "lambda_processor" {
  name              = "/aws/lambda/${local.lambda_function_name}"
  retention_in_days = 7
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  7. S3 EVENT TRIGGER:
&lt;/h2&gt;

&lt;p&gt;Now, we will give S3 permission to invoke our Lambda function.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Lambda permission to be invoked by S3
resource "aws_lambda_permission" "allow_s3" {
  statement_id  = "AllowExecutionFromS3"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.image_processor.function_name
  principal     = "s3.amazonaws.com"
  source_arn    = aws_s3_bucket.upload_bucket.arn
}

# S3 bucket notification to trigger Lambda
resource "aws_s3_bucket_notification" "upload_bucket_notification" {
  bucket = aws_s3_bucket.upload_bucket.id

  lambda_function {
    lambda_function_arn = aws_lambda_function.image_processor.arn
    events              = ["s3:ObjectCreated:*"]
  }

  depends_on = [aws_lambda_permission.allow_s3]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Without the above permission, S3 events would never be able to trigger the function, even if everything else was configured correctly.&lt;/p&gt;

&lt;p&gt;The notification rule says: whenever an object is created in this bucket, by any means, invoke this Lambda function. As long as an object is created in the bucket, the event fires.&lt;/p&gt;
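
&lt;p&gt;If you want the function to fire only for certain file types, the notification block also accepts optional filters. A sketch (the suffix shown is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;lambda_function {
  lambda_function_arn = aws_lambda_function.image_processor.arn
  events              = ["s3:ObjectCreated:*"]
  filter_suffix       = ".jpg" # only invoke for keys ending in .jpg
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;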

&lt;h2&gt;
  
  
  Deployment:
&lt;/h2&gt;

&lt;p&gt;Now everything is set; all that is left is to deploy it. We will deploy this entire project using a shell script named &lt;strong&gt;deploy.sh&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When we run this script, here’s what it does.&lt;/p&gt;

&lt;p&gt;First, it performs a few basic checks. It makes sure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS CLI is installed&lt;/li&gt;
&lt;li&gt;Terraform is installed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If either of these is missing, the script stops and tells us exactly what’s wrong. This saves time and avoids confusion later.&lt;/p&gt;

&lt;p&gt;Next, the script builds the Lambda layer.&lt;/p&gt;

&lt;p&gt;This is an important step. Remember, the Pillow library needs to be compiled in a Linux environment to work correctly with AWS Lambda. Instead of doing this manually, the script calls another helper script that uses Docker to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Spin up a Linux-based Python environment&lt;/li&gt;
&lt;li&gt;Install Pillow in the correct directory structure&lt;/li&gt;
&lt;li&gt;Package everything into a pillow_layer.zip file&lt;/li&gt;
&lt;/ul&gt;
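&lt;p&gt;On the Terraform side, the resulting zip is registered as a Lambda layer. A minimal sketch, assuming a layer named pillow and a Python 3.12 runtime (both illustrative, not necessarily the project's exact values):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_lambda_layer_version" "pillow" {
  layer_name          = "pillow"
  filename            = "${path.module}/pillow_layer.zip"
  compatible_runtimes = ["python3.12"] # must match the function's runtime
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;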

&lt;p&gt;Once that’s done, the script moves into the Terraform directory and runs the familiar commands:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;terraform init&lt;/li&gt;
&lt;li&gt;terraform plan&lt;/li&gt;
&lt;li&gt;terraform apply&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Terraform then takes over and creates every AWS resource we discussed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Both S3 buckets&lt;/li&gt;
&lt;li&gt;IAM roles and policies&lt;/li&gt;
&lt;li&gt;Lambda layer&lt;/li&gt;
&lt;li&gt;Lambda function&lt;/li&gt;
&lt;li&gt;CloudWatch log group&lt;/li&gt;
&lt;li&gt;S3 event trigger&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When the deployment finishes, the script prints the Terraform outputs, which contain useful information for verifying and testing the setup.&lt;/p&gt;
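&lt;p&gt;A sketch of what such outputs might look like (the output names are illustrative, not necessarily the project's exact ones):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "upload_bucket_name" {
  value = aws_s3_bucket.upload_bucket.id
}

output "lambda_function_name" {
  value = aws_lambda_function.image_processor.function_name
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;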

&lt;h2&gt;
  
  
  Testing / Verification:
&lt;/h2&gt;

&lt;p&gt;The setup is done. Now all we need to do is upload an image to the upload bucket.&lt;/p&gt;

&lt;p&gt;It can be any JPG or JPEG file. We can upload it using:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The AWS Console&lt;/li&gt;
&lt;li&gt;The AWS CLI&lt;/li&gt;
&lt;li&gt;Any other method that creates an object in the bucket&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The moment the file is uploaded, the event is triggered.&lt;/p&gt;

&lt;p&gt;Behind the scenes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lambda starts&lt;/li&gt;
&lt;li&gt;The image is processed&lt;/li&gt;
&lt;li&gt;Five new images are generated&lt;/li&gt;
&lt;li&gt;All processed files appear in the destination bucket&lt;/li&gt;
&lt;li&gt;If we open CloudWatch, we can also see the logs generated by the Lambda function, helpful for understanding what happened and for troubleshooting if something goes wrong.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;p&gt;And with that, we’ve completed Day 18 of the 30 Days of Terraform Challenge.&lt;/p&gt;

&lt;p&gt;We took a deep dive into a serverless image-processing project using AWS Lambda, S3 buckets, and CloudWatch.&lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/l0RYCxczgyk"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>aws</category>
      <category>lambda</category>
    </item>
    <item>
      <title>Day 17: AWS Blue/Green with Terraform and Elastic Beanstalk</title>
      <dc:creator>Anil KUMAR</dc:creator>
      <pubDate>Fri, 12 Dec 2025 12:09:53 +0000</pubDate>
      <link>https://forem.com/anil_kumar_noolu/day-17-aws-bluegreen-with-terraform-and-elastic-beanstalk-313m</link>
      <guid>https://forem.com/anil_kumar_noolu/day-17-aws-bluegreen-with-terraform-and-elastic-beanstalk-313m</guid>
<description>&lt;p&gt;As part of the 30-Day AWS Terraform Challenge, Day 17 focuses on one of the most critical topics in modern infrastructure: achieving zero-downtime updates. This deep dive covers implementing a Blue-Green Deployment strategy on AWS using Terraform and Elastic Beanstalk.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Blue-Green Deployment:
&lt;/h2&gt;

&lt;p&gt;Blue-Green Deployment is an infrastructure strategy designed to virtually eliminate downtime during application updates. It operates on a simple, powerful premise: maintain two identical production environments, nicknamed “blue” and “green.”&lt;/p&gt;

&lt;p&gt;Blue Environment (The Live): This is your current, live production environment, actively receiving all user traffic (e.g., Application Version 1.0).&lt;/p&gt;

&lt;p&gt;Green Environment (The Standby): This is an exact, identical copy of the Blue environment. We deploy the new application version (e.g., Application Version 2.0) here. Since it's not receiving live traffic, we can test it thoroughly without impacting the live users.&lt;/p&gt;

&lt;p&gt;The real magic is the Swap Process:&lt;/p&gt;

&lt;p&gt;Once the Green environment (v2.0) is fully tested and stable, we simply swap the DNS pointer that routes traffic from the Blue environment to the Green environment.&lt;/p&gt;

&lt;p&gt;The Green environment immediately becomes the new live "Blue" environment (v2.0).&lt;/p&gt;

&lt;p&gt;The old Blue environment (v1.0) becomes the standby "Green," ready for instant rollback or eventual decommissioning.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Benefits:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Zero/Minimal Downtime: The only "downtime" is the minimal DNS propagation time.&lt;/li&gt;
&lt;li&gt;Safe Testing: Changes are tested on a non-live environment.&lt;/li&gt;
&lt;li&gt;Instant Rollback: If v2.0 has an issue, you can instantly revert by swapping the DNS back to the original v1.0 environment.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This approach also has disadvantages: since we maintain two production-sized environments, the infrastructure costs are significantly higher.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7d26qqg7mw8maslqhc0q.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7d26qqg7mw8maslqhc0q.jpg" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this project, we will be packaging the application -&amp;gt; uploading the zip file to S3 -&amp;gt; deploying that file to multiple environments using Blue/Green Environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Terraform Implementation:
&lt;/h2&gt;

&lt;p&gt;Make sure you clone the &lt;a href="https://github.com/piyushsachdeva/Terraform-Full-Course-Aws" rel="noopener noreferrer"&gt;github_repo&lt;/a&gt; to your local machine and go to the folder: lessons/day-17&lt;/p&gt;

&lt;p&gt;main.tf: Defines the AWS Provider, IAM roles (EC2 profile, Service role), and the private S3 bucket to store the application zips.&lt;/p&gt;

&lt;p&gt;blue-environment.tf: Defines the initial production environment (Blue) using app-v1.zip.&lt;/p&gt;

&lt;p&gt;green-environment.tf: Defines the staging environment (Green) using app-v2.zip.&lt;/p&gt;

&lt;h2&gt;
  
  
  Blue Environment Setup
&lt;/h2&gt;

&lt;p&gt;We set up the Blue environment with version 1.0 in AWS Elastic Beanstalk.&lt;/p&gt;

&lt;p&gt;1. Uploads the app code to S3:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_s3_object" "app_v1" {
  bucket = aws_s3_bucket.app_versions.id
  key    = "app-v1.zip"
  source = "${path.module}/app-v1/app-v1.zip"
  etag   = filemd5("${path.module}/app-v1/app-v1.zip")

  tags = var.tags
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Takes app-v1.zip from your local folder or from the repo folder.&lt;/li&gt;
&lt;li&gt;Uploads it to an S3 bucket (named app_versions); make sure the bucket is created first.&lt;/li&gt;
&lt;li&gt;Calculates an MD5 hash (the etag) to detect changes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;2. Creates an Elastic Beanstalk application version:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_elastic_beanstalk_application_version" "v1" {
  name        = "${var.app_name}-v1"
  application = aws_elastic_beanstalk_application.app.name
  description = "Application Version 1.0 - Initial Release"
  bucket      = aws_s3_bucket.app_versions.id
  key         = aws_s3_object.app_v1.id

  tags = var.tags
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Registers app-v1.zip as version your-app-name-v1&lt;/li&gt;
&lt;li&gt;Links it to your main EB application&lt;/li&gt;
&lt;li&gt;Stores it in S3 with the version label&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;3. Launches the production environment (Blue):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_elastic_beanstalk_environment" "blue" {
  name                = "${var.app_name}-blue"
  application         = aws_elastic_beanstalk_application.app.name
  solution_stack_name = var.solution_stack_name
  tier                = "WebServer"
  version_label       = aws_elastic_beanstalk_application_version.v1.name

  # IAM Settings
  setting {
    ....
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Creates an environment called your-app-name-blue&lt;/li&gt;
&lt;li&gt;Deploys version 1.0 to it&lt;/li&gt;
&lt;li&gt;Uses your specified platform (solution_stack_name)&lt;/li&gt;
&lt;li&gt;Sets it as a WebServer tier (production-ready)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In simple terms: It bundles the app → uploads to S3 → registers as EB version 1.0 → deploys to production (blue environment).&lt;/p&gt;

&lt;h2&gt;
  
  
  Green Environment Setup:
&lt;/h2&gt;

&lt;p&gt;We will use the same setup as above for the green environment, changing the names from blue to green and the application version from v1.0 to v2.0.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Application Version 2.0 (Green Environment - Staging)
resource "aws_s3_object" "app_v2" {
  bucket = aws_s3_bucket.app_versions.id
  key    = "app-v2.zip"
  source = "${path.module}/app-v2/app-v2.zip"
  etag   = filemd5("${path.module}/app-v2/app-v2.zip")

  tags = var.tags
}

resource "aws_elastic_beanstalk_application_version" "v2" {
  name        = "${var.app_name}-v2"
  application = aws_elastic_beanstalk_application.app.name
  description = "Application Version 2.0 - New Feature Release"
  bucket      = aws_s3_bucket.app_versions.id
  key         = aws_s3_object.app_v2.id

  tags = var.tags
}

# Green Environment (Staging/Pre-production)
resource "aws_elastic_beanstalk_environment" "green" {
  name                = "${var.app_name}-green"
  application         = aws_elastic_beanstalk_application.app.name
  solution_stack_name = var.solution_stack_name
  tier                = "WebServer"
  version_label       = aws_elastic_beanstalk_application_version.v2.name

  # IAM Settings
  ...
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  EC2 Instance Role Creation:
&lt;/h2&gt;

&lt;p&gt;We will be creating EC2 Instance Roles for our app servers.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# IAM Role for Elastic Beanstalk EC2 instances
resource "aws_iam_role" "eb_ec2_role" {
  name = "${var.app_name}-eb-ec2-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "ec2.amazonaws.com"
        }
      }
    ]
  })

  tags = var.tags
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Creates an IAM role ${your-app}-eb-ec2-role&lt;/li&gt;
&lt;li&gt;Allows EC2 instances to "assume" this role&lt;/li&gt;
&lt;li&gt;Gives your running app permissions to access AWS services (databases, S3, etc.). For this project, we granted only permissions to access the EC2 service.&lt;/li&gt;
&lt;/ul&gt;
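&lt;p&gt;For the role to be usable, it is typically attached to an AWS managed policy and wrapped in an instance profile that Elastic Beanstalk hands to the EC2 instances. A sketch under those assumptions (resource names are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_iam_role_policy_attachment" "eb_web_tier" {
  role       = aws_iam_role.eb_ec2_role.name
  policy_arn = "arn:aws:iam::aws:policy/AWSElasticBeanstalkWebTier"
}

resource "aws_iam_instance_profile" "eb_ec2_profile" {
  name = "${var.app_name}-eb-ec2-profile"
  role = aws_iam_role.eb_ec2_role.name
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;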

&lt;h2&gt;
  
  
  Elastic Beanstalk Service Role:
&lt;/h2&gt;

&lt;p&gt;We will also create Elastic Beanstalk Service Role for AWS control plane.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# IAM Role for Elastic Beanstalk Service
resource "aws_iam_role" "eb_service_role" {
  name = "${var.app_name}-eb-service-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "elasticbeanstalk.amazonaws.com"
        }
      }
    ]
  })

  tags = var.tags
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Creates an IAM role ${your-app}-eb-service-role&lt;/li&gt;
&lt;li&gt;Allows the Elastic Beanstalk service itself to manage your environments&lt;/li&gt;
&lt;li&gt;AWS needs this role to deploy, scale, and monitor your app; it carries all the permissions Elastic Beanstalk requires.&lt;/li&gt;
&lt;/ul&gt;
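&lt;p&gt;As with the EC2 role, managed policies are normally attached to the service role before Elastic Beanstalk can use it. A sketch (the attachment name is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_iam_role_policy_attachment" "eb_enhanced_health" {
  role       = aws_iam_role.eb_service_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSElasticBeanstalkEnhancedHealth"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;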

&lt;h2&gt;
  
  
  Locks down S3 bucket security:
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Block public access to S3 bucket
resource "aws_s3_bucket_public_access_block" "app_versions" {
  bucket = aws_s3_bucket.app_versions.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Blocks all public access to your app_versions S3 bucket, keeping it private and unreachable from the public Internet.&lt;/li&gt;
&lt;li&gt;Prevents accidental public exposure of the application zips such as app-v1.zip.&lt;/li&gt;
&lt;li&gt;All four public-access settings enabled = maximum security.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Final Deployment:
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo chmod 755 package-apps.sh
./package-apps.sh 
terraform init
terraform plan
terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the deployment is complete, Terraform provides us the URLs for both environments as outputs:&lt;/p&gt;

&lt;p&gt;Blue URL (Production): Showed "Welcome to blue-green demo v1.0"&lt;/p&gt;

&lt;p&gt;Green URL (Staging): Showed "New features in v2.0"&lt;/p&gt;

&lt;p&gt;The next step is the actual, zero-downtime Blue-Green swap.&lt;br&gt;
You can also find the URLs in the Elastic Beanstalk section of the AWS Console.&lt;/p&gt;
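&lt;p&gt;Such output values can be built from the cname attribute exported by aws_elastic_beanstalk_environment (the output names here are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "blue_environment_url" {
  value = aws_elastic_beanstalk_environment.blue.cname
}

output "green_environment_url" {
  value = aws_elastic_beanstalk_environment.green.cname
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;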
&lt;h2&gt;
  
  
  Doing SWAP:
&lt;/h2&gt;

&lt;p&gt;While the process can be scripted using the AWS CLI or advanced Terraform modules, the simplest way is via the AWS Console:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to the Elastic Beanstalk service.&lt;/li&gt;
&lt;li&gt;Select the Actions dropdown for one of the environments (e.g., the Blue one).&lt;/li&gt;
&lt;li&gt;Click "Swap environment domains."&lt;/li&gt;
&lt;li&gt;Select the Green environment as the target for the swap.&lt;/li&gt;
&lt;li&gt;This single action exchanges the CNAMEs of both environments, so it does not need to be repeated from the Green side.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The DNS swap takes effect quickly. After the propagation time:&lt;/p&gt;

&lt;p&gt;The Blue app URL now shows the new content: "New features in v2.0" (it became the new production site).&lt;/p&gt;

&lt;p&gt;The Green app URL now shows the old content: "Welcome to blue-green demo v1.0" (it became the standby environment).&lt;/p&gt;
&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;p&gt;Today's blog is a bit more complex than the previous ones because we have moved past the beginner stage; it assumes some familiarity with Blue-Green Deployment and Elastic Beanstalk. We covered the key benefits of Blue-Green Deployment, zero downtime and instant rollback, and why companies prefer it in production.&lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/fTVx2m5fEbQ"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

</description>
      <category>devops</category>
      <category>aws</category>
      <category>devchallenge</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Day 16/30: Mastering AWS IAM User Management with Terraform</title>
      <dc:creator>Anil KUMAR</dc:creator>
      <pubDate>Thu, 11 Dec 2025 13:43:32 +0000</pubDate>
      <link>https://forem.com/anil_kumar_noolu/day-1630-mastering-aws-iam-user-management-with-terraform-24a7</link>
      <guid>https://forem.com/anil_kumar_noolu/day-1630-mastering-aws-iam-user-management-with-terraform-24a7</guid>
<description>&lt;p&gt;Today marks Day 16 of the 30 Days of AWS Terraform challenge, an initiative by Piyush Sachdeva. Today, we will work through a mini-project on one of AWS's core services, IAM: bulk-creating AWS IAM users from a CSV file, assigning them dynamically to groups based on their attributes (like department and job title), and enabling secure console login.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture:
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz2b461xlaifo5jvn7ao7.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz2b461xlaifo5jvn7ao7.jpg" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As we all know, AWS IAM is the service for creating users and managing their access. For any large organization, IAM is a key service: it lets us create users with adequate permissions and place them into the right groups so that no unauthorized access to other AWS services takes place.&lt;/p&gt;

&lt;p&gt;In this demo, I will show how to manage AWS IAM users, groups, and group memberships using Terraform. It's an AWS equivalent of Azure AD user management, demonstrating Infrastructure as Code (IaC) best practices. This project will solidify your understanding of Terraform's powerful iteration constructs: for_each, for expressions, and conditional filtering.&lt;/p&gt;

&lt;h2&gt;
  
  
  What This Demo Does
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Retrieves AWS Account Information - Gets your current AWS Account ID&lt;/li&gt;
&lt;li&gt;Reads User Data from CSV - Loads user information from a CSV file&lt;/li&gt;
&lt;li&gt;Creates IAM Users - Automatically creates IAM users with proper naming conventions&lt;/li&gt;
&lt;li&gt;Sets Up Login Profiles - Configures console access with password reset requirement&lt;/li&gt;
&lt;li&gt;Creates IAM Groups - Sets up organizational groups (Education, Managers, Engineers)&lt;/li&gt;
&lt;li&gt;Manages Group Memberships - Automatically assigns users to appropriate groups.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Mini Project Overview
&lt;/h2&gt;

&lt;p&gt;Goal: Bulk create 26 IAM users (based on a users.csv).&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Features:
&lt;/h2&gt;

&lt;p&gt;Dynamic user assignment to groups (education, engineers, managers).&lt;/p&gt;

&lt;p&gt;Enable console login with mandatory temporary passwords.&lt;/p&gt;

&lt;p&gt;Utilize an S3 remote backend for state management.&lt;/p&gt;

&lt;p&gt;Username Format: First initial + Last name (e.g., Michael Scott -&amp;gt; mscott).&lt;/p&gt;

&lt;h2&gt;
  
  
  What Gets Created:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;26 IAM Users with console access&lt;/li&gt;
&lt;li&gt;3 IAM Groups (Education, Managers, Engineers)&lt;/li&gt;
&lt;li&gt;Group Memberships based on user attributes&lt;/li&gt;
&lt;li&gt;User Tags with metadata (DisplayName, Department, JobTitle)&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Project Setup:
&lt;/h2&gt;

&lt;p&gt;Below is the project structure for this mini-project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;day16/
├── backend.tf          # S3 backend configuration for state 
├── provider.tf         # AWS provider configuration
├── versions.tf         # Terraform version and required providers
├── main.tf            # Main user creation logic
├── groups.tf          # IAM groups and membership management
├── users.csv          # User data source
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  1. CSV Processing:
&lt;/h2&gt;

&lt;p&gt;The csvdecode() function is the magic here. It takes the CSV file and transforms it into a list of maps, which is a perfect data structure for Terraform to iterate over.&lt;/p&gt;

&lt;p&gt;In the CSV file we will have data coming in the format:&lt;br&gt;
&lt;code&gt;first_name,last_name,department,job_title&lt;br&gt;
Michael,Scott,Education,Regional Manager&lt;br&gt;
Dwight,Schrute,Sales,Assistant to the Regional Manager&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To make that usable, we will convert it into a list of maps so that we can retrieve any value by its key.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  users = csvdecode(file("users.csv"))
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make sure you have the users.csv located in the same folder.&lt;/p&gt;
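&lt;p&gt;For intuition, given the sample rows above, local.users evaluates to roughly the following (shown as comments):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# local.users is a list of maps, one per CSV row:
# [
#   { first_name = "Michael", last_name = "Scott",
#     department = "Education", job_title = "Regional Manager" },
#   { first_name = "Dwight", last_name = "Schrute",
#     department = "Sales", job_title = "Assistant to the Regional Manager" },
# ]
# so local.users[0].department evaluates to "Education"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;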

&lt;h2&gt;
  
  
  2. IAM User Creation:
&lt;/h2&gt;

&lt;p&gt;This is the main block where we write the code for creating the IAM users.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_iam_user" "users" {
  for_each = { for user in local.users : user.first_name =&amp;gt; user }

  name = lower("${substr(each.value.first_name, 0, 1)}${each.value.last_name}")
  path = "/users/"

  tags = {
    "DisplayName" = "${each.value.first_name} ${each.value.last_name}"
    "Department"  = each.value.department
    "JobTitle"    = each.value.job_title
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Users are created with a username format: {first_initial}{lastname} (e.g., mscott)&lt;/p&gt;

&lt;p&gt;This is where the for_each pattern shines. We transform the list of maps from our CSV into a map of objects, keyed here by first_name.&lt;/p&gt;

&lt;p&gt;Username: the name is dynamically generated using lower() and substr() to enforce our first-initial + last-name format (e.g., mscott).&lt;br&gt;
We have used 2 Terraform functions here, lower and substr:&lt;br&gt;
substr() extracts the first letter of first_name, which is concatenated with last_name, and lower() then makes the entire string lowercase.&lt;/p&gt;
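&lt;p&gt;You can verify the expression step by step in terraform console:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&gt; substr("Michael", 0, 1)
"M"
&gt; lower("${substr("Michael", 0, 1)}Scott")
"mscott"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;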

&lt;p&gt;Tags: We propagate the user attributes (department, job_title) into AWS tags. This is critical for the next step: dynamic group assignment.&lt;/p&gt;
&lt;h2&gt;
  
  
  3. Console Login Profiles:
&lt;/h2&gt;

&lt;p&gt;We need to provide console access and enforce a password change on first login for security.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_iam_user_login_profile" "users" {
  for_each = aws_iam_user.users

  user                    = each.value.name
  password_reset_required = true

  lifecycle {
    ignore_changes = [password_reset_required, password_length]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Login profiles are created for console access with password reset required.&lt;/p&gt;

&lt;p&gt;Again we use a for_each block, this time iterating over the users created by the aws_iam_user.users resource.&lt;/p&gt;

&lt;p&gt;We have used the lifecycle meta-argument here because it prevents Terraform from destroying and recreating the login profile every time the user changes their password.&lt;/p&gt;

&lt;h2&gt;
  
  
  4: Create Groups and Memberships:
&lt;/h2&gt;

&lt;p&gt;This is the most advanced section, showcasing conditional logic to manage group membership. &lt;/p&gt;

&lt;h2&gt;
  
  
  Create IAM Groups:
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_iam_group" "education" {
  name = "Education"
  path = "/groups/"
}

resource "aws_iam_group_membership" "education_members" {
  name  = "education-group-membership"
  group = aws_iam_group.education.name

  users = [
    for user in aws_iam_user.users : user.name 
    if user.tags.Department == "Education"
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Similar to the block above for Education, we can create separate blocks for Engineers and Managers.&lt;/p&gt;

&lt;p&gt;In the block above, aws_iam_group creates the group with a name and path.&lt;/p&gt;

&lt;p&gt;In aws_iam_group_membership, we will associate the users with the required groups based on the Department value. We have used a for expression with an if clause to filter the list of all created users, assigning them to a specific group based on their tags.&lt;/p&gt;

&lt;h2&gt;
  
  
  Manager Group Logic (Advanced):
&lt;/h2&gt;

&lt;p&gt;For the managers group, you might need more complex logic and error handling:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_iam_group_membership" "managers_members" {
  name  = "managers-group-membership"
  group = aws_iam_group.managers.name

  users = [
    for user in aws_iam_user.users : user.name if contains(keys(user.tags), "JobTitle") &amp;amp;&amp;amp; can(regex("Manager|CEO", user.tags.JobTitle))
  ]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The can() function is excellent for safely evaluating an expression that might fail: regex() raises an error when there is no match, and can() converts that error into false. Members are those whose JobTitle contains "Manager" or "CEO".&lt;br&gt;
In simple terms, it checks that a tag named "JobTitle" exists, looks for "Manager" or "CEO" inside the job title, and assigns matching users to the Managers group.&lt;/p&gt;
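&lt;p&gt;You can check this behaviour in terraform console:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&gt; can(regex("Manager|CEO", "Regional Manager"))
true
&gt; can(regex("Manager|CEO", "Accountant"))
false
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;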
&lt;h2&gt;
  
  
  Execution Commands:
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Initialize terraform
terraform init

# Review the changes (check for 58 resources created)
terraform plan 

# Apply the configuration
terraform apply 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;With this configuration, you will provision 58 resources in total:&lt;/p&gt;

&lt;p&gt;(26 users + 26 login profiles + 3 groups + 3 group memberships = 58)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Plan: 58 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + account_id     = "716145636736"
  + user_names     = [
      + "Michael Scott",
      + "Dwight Schrute",
      + "Jim Halpert",
      + "Pam Beesly",
      + "Ryan Howard",
      + "Andy Bernard",
      + "Robert California",
      + "Stanley Hudson",
      + "Kevin Malone",
      + "Angela Martin",
      + "Oscar Martinez",
      + "Phyllis Vance",
      + "Toby Flenderson",
      + "Kelly Kapoor",
      + "Darryl Philbin",
      + "Creed Bratton",
      + "Meredith Palmer",
      + "Erin Hannon",
      + "Gabe Lewis",
      + "Jan Levinson",
      + "David Wallace",
      + "Holly Flax",
      + "Charles Miner",
      + "Jo Bennett",
      + "Clark Green",
      + "Pete Miller",
    ]
  + user_passwords = (sensitive value)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For reference, before terraform apply only 2 IAM users existed in the account.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwe5og626d9ojlymem8oa.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwe5og626d9ojlymem8oa.jpg" alt=" "&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform apply
.
.
.
Apply complete! Resources: 58 added, 0 changed, 0 destroyed.

Outputs:

account_id = "716145636736"
user_names = [
  "Michael Scott",
  "Dwight Schrute",
  "Jim Halpert",
  "Pam Beesly",
  "Ryan Howard",
  "Andy Bernard",
  "Robert California",
  "Stanley Hudson",
  "Kevin Malone",
  "Angela Martin",
  "Oscar Martinez",
  "Phyllis Vance",
  "Toby Flenderson",
  "Kelly Kapoor",
  "Darryl Philbin",
  "Creed Bratton",
  "Meredith Palmer",
  "Erin Hannon",
  "Gabe Lewis",
  "Jan Levinson",
  "David Wallace",
  "Holly Flax",
  "Charles Miner",
  "Jo Bennett",
  "Clark Green",
  "Pete Miller",
]
user_passwords = &amp;lt;sensitive&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you can see that the 26 new users have been created as per the Terraform code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpf3kei6n64hzlffcq17i.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpf3kei6n64hzlffcq17i.jpg" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjm50e0d1du57rfnecgtw.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjm50e0d1du57rfnecgtw.jpg" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Ingesting external data using csvdecode().&lt;/li&gt;
&lt;li&gt;Dynamic resource creation using for_each.&lt;/li&gt;
&lt;li&gt;Enabling secure console access for users.&lt;/li&gt;
&lt;li&gt;Using tags to propagate data for later use.&lt;/li&gt;
&lt;li&gt;Creating dynamic group membership lists using for/if expressions.&lt;/li&gt;
&lt;li&gt;Configuring a resilient resource using lifecycle ignore_changes.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Don't forget to destroy all the resources you created using the terraform destroy command.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;p&gt;AWS IAM User Management with Terraform offers a robust and scalable solution for handling access control within your AWS environment. By embracing Infrastructure as Code (IaC) principles, organizations can move beyond manual configurations and achieve a more secure, efficient, and auditable management of user identities and permissions.&lt;br&gt;
This concludes Day 16 of the 30 Days of AWS Terraform challenge. See you in the next blog.&lt;/p&gt;

&lt;p&gt;Below is the Youtube Video for Reference:&lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/33dWo4esH1U"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>aws</category>
    </item>
    <item>
      <title>Day15: Mini-Project: 2 VPC-Peering</title>
      <dc:creator>Anil KUMAR</dc:creator>
      <pubDate>Wed, 10 Dec 2025 11:57:20 +0000</pubDate>
      <link>https://forem.com/anil_kumar_noolu/day15-mini-project-2-vpc-peering-3e0c</link>
      <guid>https://forem.com/anil_kumar_noolu/day15-mini-project-2-vpc-peering-3e0c</guid>
<description>&lt;p&gt;Today marks Day 15 of the 30 Days of AWS Terraform challenge, an initiative by Piyush Sachdeva. Today we will dive deep into the concept of VPC Peering: what exactly VPC Peering is, what its use cases are, and why we need it in the first place.&lt;/p&gt;

&lt;h2&gt;
  
  
  VPC-Peering:
&lt;/h2&gt;

&lt;p&gt;As we all know, a VPC is nothing but a Virtual Private Cloud where we host our apps or services within the assigned IP address range of that VPC. Suppose you have 2 VPCs in 2 different regions and you want to have communication between them.&lt;/p&gt;

&lt;p&gt;For example, an app service on an EC2 instance in one region's VPC may need to communicate with a DB service in another region's VPC. You then need to establish VPC Peering for that communication to happen; otherwise, no traffic will flow between them.&lt;/p&gt;

&lt;p&gt;Simply put, in the world of cloud infrastructure, network isolation is the default. But what happens when your app service in us-east-1 needs to fetch data from a database in us-west-2?&lt;/p&gt;

&lt;p&gt;Routing this traffic over the public internet is slow, insecure, and expensive (NAT Gateway costs add up!). The solution is VPC Peering - a networking connection that allows two VPCs to communicate as if they are in the same network.&lt;/p&gt;

&lt;h2&gt;
  
  
  Low-Level Architecture (LLD):
&lt;/h2&gt;

&lt;p&gt;Before we write code, let's visualize the packet flow. Below is the architecture we are building. Note how traffic flows privately through the AWS backbone, bypassing the public internet entirely for inter-VPC communication.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The Traffic Flow: EC2 Instance (East) sends a packet to 10.1.x.x (West).&lt;/li&gt;
&lt;li&gt;Route Table (East): sees the destination is the Peering Connection (pcx-id), not the Internet Gateway.&lt;/li&gt;
&lt;li&gt;AWS Backbone routes traffic securely across regions.&lt;/li&gt;
&lt;li&gt;Security Group (West) validates the inbound request (is it from 10.0.0.0/16?).&lt;/li&gt;
&lt;li&gt;EC2 Instance (West) processes the request.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsbh2xbduu5tgq55qp9ws.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsbh2xbduu5tgq55qp9ws.jpg" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Below are the resources that will be created during this project:&lt;/p&gt;

&lt;h2&gt;
  
  
  Networking Components
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Two VPCs:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Primary VPC in us-east-1 (10.0.0.0/16)&lt;br&gt;
Secondary VPC in us-west-2 (10.1.0.0/16)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Subnets:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One public subnet in each VPC&lt;br&gt;
Configured with auto-assign public IP&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Internet Gateways:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One for each VPC to allow internet access&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Route Tables:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Custom route tables with routes to internet and peered VPC&lt;br&gt;
Routes for VPC peering traffic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;VPC Peering Connection:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Cross-region peering between the two VPCs&lt;br&gt;
Automatic acceptance configured&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Compute Resources&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;EC2 Instances:&lt;/p&gt;

&lt;p&gt;One t2.micro instance in each VPC&lt;br&gt;
Running Amazon Linux 2&lt;br&gt;
Apache web server installed&lt;br&gt;
Custom web page showing VPC information&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security Groups:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;SSH access from anywhere (port 22)&lt;br&gt;
ICMP (ping) allowed from peered VPC&lt;br&gt;
All TCP traffic allowed between VPCs&lt;/p&gt;
&lt;h2&gt;
  
  
  The Terraform Implementation(Step-by-Step):
&lt;/h2&gt;

&lt;p&gt;First, create two new key pairs for the EC2 instances that will be launched later:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# For us-east-1
aws ec2 create-key-pair --key-name vpc-peering-demo --region us-east-1 --query 'KeyMaterial' --output text &amp;gt; vpc-peering-demo-east.pem

# For us-west-2 (write to a separate file so the first key is not overwritten)
aws ec2 create-key-pair --key-name vpc-peering-demo --region us-west-2 --query 'KeyMaterial' --output text &amp;gt; vpc-peering-demo-west.pem

# Set permissions (on Linux/Mac)
chmod 400 vpc-peering-demo-east.pem vpc-peering-demo-west.pem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;provider.tf:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "aws" {
  region = "us-east-1"
  alias  = "primary"
}

provider "aws" {
  region = "us-west-2"
  alias  = "secondary"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
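
&lt;p&gt;The resource blocks that follow reference variables such as var.primary_region and var.primary_vpc_cidr. A minimal variables.tf to match could look like this; the subnet CIDRs are assumptions based on the instance IPs shown later, so adjust them to your setup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "primary_region" {
  default = "us-east-1"
}

variable "secondary_region" {
  default = "us-west-2"
}

variable "primary_vpc_cidr" {
  default = "10.0.0.0/16"
}

variable "secondary_vpc_cidr" {
  default = "10.1.0.0/16"
}

# Assumed /24 subnets inside each VPC CIDR
variable "primary_subnet_cidr" {
  default = "10.0.1.0/24"
}

variable "secondary_subnet_cidr" {
  default = "10.1.1.0/24"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;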



&lt;h2&gt;
  
  
  Creating 2 VPC's:
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Primary VPC in us-east-1
resource "aws_vpc" "primary_vpc" {
  provider             = aws.primary
  cidr_block           = var.primary_vpc_cidr
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = {
    Name        = "Primary-VPC-${var.primary_region}"
    Environment = "Demo"
    Purpose     = "VPC-Peering-Demo"
  }
}

# Secondary VPC in us-west-2
resource "aws_vpc" "secondary_vpc" {
  provider             = aws.secondary
  cidr_block           = var.secondary_vpc_cidr
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = {
    Name        = "Secondary-VPC-${var.secondary_region}"
    Environment = "Demo"
    Purpose     = "VPC-Peering-Demo"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above block creates two new VPCs in two different regions with different CIDR blocks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating 2 Subnets:
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Subnet in Primary VPC
resource "aws_subnet" "primary_subnet" {
  provider                = aws.primary
  vpc_id                  = aws_vpc.primary_vpc.id
  cidr_block              = var.primary_subnet_cidr
  availability_zone       = data.aws_availability_zones.primary.names[0]
  map_public_ip_on_launch = true

  tags = {
    Name        = "Primary-Subnet-${var.primary_region}"
    Environment = "Demo"
  }
}

# Subnet in Secondary VPC
resource "aws_subnet" "secondary_subnet" {
  provider                = aws.secondary
  vpc_id                  = aws_vpc.secondary_vpc.id
  cidr_block              = var.secondary_subnet_cidr
  availability_zone       = data.aws_availability_zones.secondary.names[0]
  map_public_ip_on_launch = true

  tags = {
    Name        = "Secondary-Subnet-${var.secondary_region}"
    Environment = "Demo"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above block creates two subnets, one per region, each within its VPC's CIDR block.&lt;/p&gt;
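
&lt;p&gt;The subnet blocks above reference data.aws_availability_zones. A matching pair of data sources, one per provider alias, would look like this (a sketch; the names must match the references above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Look up the available AZs in each region
data "aws_availability_zones" "primary" {
  provider = aws.primary
  state    = "available"
}

data "aws_availability_zones" "secondary" {
  provider = aws.secondary
  state    = "available"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;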

&lt;h2&gt;
  
  
  Creating Internet Gateways:
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Internet Gateway for Primary VPC
resource "aws_internet_gateway" "primary_igw" {
  provider = aws.primary
  vpc_id   = aws_vpc.primary_vpc.id

  tags = {
    Name        = "Primary-IGW"
    Environment = "Demo"
  }
}

# Internet Gateway for Secondary VPC
resource "aws_internet_gateway" "secondary_igw" {
  provider = aws.secondary
  vpc_id   = aws_vpc.secondary_vpc.id

  tags = {
    Name        = "Secondary-IGW"
    Environment = "Demo"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above block creates two Internet Gateways, one attached to each of the VPCs created earlier.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating Route Tables:
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Route table for Primary VPC
resource "aws_route_table" "primary_rt" {
  provider = aws.primary
  vpc_id   = aws_vpc.primary_vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.primary_igw.id
  }

  tags = {
    Name        = "Primary-Route-Table"
    Environment = "Demo"
  }
}

# Route table for Secondary VPC
resource "aws_route_table" "secondary_rt" {
  provider = aws.secondary
  vpc_id   = aws_vpc.secondary_vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.secondary_igw.id
  }

  tags = {
    Name        = "Secondary-Route-Table"
    Environment = "Demo"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above block uses aws_route_table to create a route table for each VPC, each with a default route (0.0.0.0/0) pointing to its Internet Gateway.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating Route_table_association
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Associate route table with Primary subnet
resource "aws_route_table_association" "primary_rta" {
  provider       = aws.primary
  subnet_id      = aws_subnet.primary_subnet.id
  route_table_id = aws_route_table.primary_rt.id
}

# Associate route table with Secondary subnet
resource "aws_route_table_association" "secondary_rta" {
  provider       = aws.secondary
  subnet_id      = aws_subnet.secondary_subnet.id
  route_table_id = aws_route_table.secondary_rt.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this block, we are associating route tables with the subnets.&lt;/p&gt;

&lt;h2&gt;
  
  
  VPC Peering Connection:
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# VPC Peering Connection (Requester side - Primary VPC)
resource "aws_vpc_peering_connection" "primary_to_secondary" {
  provider    = aws.primary
  vpc_id      = aws_vpc.primary_vpc.id
  peer_vpc_id = aws_vpc.secondary_vpc.id
  peer_region = var.secondary_region
  auto_accept = false

  tags = {
    Name        = "Primary-to-Secondary-Peering"
    Environment = "Demo"
    Side        = "Requester"
  }
}

# VPC Peering Connection Accepter (Accepter side - Secondary VPC)
resource "aws_vpc_peering_connection_accepter" "secondary_accepter" {
  provider                  = aws.secondary
  vpc_peering_connection_id = aws_vpc_peering_connection.primary_to_secondary.id
  auto_accept               = true

  tags = {
    Name        = "Secondary-Peering-Accepter"
    Environment = "Demo"
    Side        = "Accepter"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;VPC Peering is a two-step process: Request and Accept.&lt;/p&gt;

&lt;p&gt;The Requester (aws_vpc_peering_connection): Initiates the call from the Primary VPC. We must specify the peer_region because this is a cross-region peer.&lt;/p&gt;

&lt;p&gt;The Accepter (aws_vpc_peering_connection_accepter): Lives in the Secondary region. It "picks up the phone" and establishes the tunnel.&lt;/p&gt;
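
&lt;p&gt;Once applied, you can confirm the connection reached the active state with the AWS CLI (a quick check; it requires configured credentials):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Shows the peering connection ID and its status (should be "active")
aws ec2 describe-vpc-peering-connections \
  --region us-east-1 \
  --query 'VpcPeeringConnections[].{Id:VpcPeeringConnectionId,Status:Status.Code}' \
  --output table
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;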

&lt;h2&gt;
  
  
  Adding routes for VPC-Peering
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Add route to Secondary VPC in Primary route table
resource "aws_route" "primary_to_secondary" {
  provider                  = aws.primary
  route_table_id            = aws_route_table.primary_rt.id
  destination_cidr_block    = var.secondary_vpc_cidr
  vpc_peering_connection_id = aws_vpc_peering_connection.primary_to_secondary.id

  depends_on = [aws_vpc_peering_connection_accepter.secondary_accepter]
}

# Add route to Primary VPC in Secondary route table
resource "aws_route" "secondary_to_primary" {
  provider                  = aws.secondary
  route_table_id            = aws_route_table.secondary_rt.id
  destination_cidr_block    = var.primary_vpc_cidr
  vpc_peering_connection_id = aws_vpc_peering_connection.primary_to_secondary.id

  depends_on = [aws_vpc_peering_connection_accepter.secondary_accepter]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Creating the peering connection is not enough. You must tell the VPCs how to use it. We modify the Route Tables in both VPCs to point traffic destined for the other VPC's CIDR block to the peering connection.&lt;/p&gt;

&lt;p&gt;Many engineers create the peering link but forget to update the Route Tables. Without these routes, packets will be dropped.&lt;/p&gt;
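
&lt;p&gt;A quick way to confirm the routes actually landed is to inspect the route table by its Name tag (again requires configured AWS credentials):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# The output should include a route for 10.1.0.0/16 via the pcx- connection
aws ec2 describe-route-tables \
  --region us-east-1 \
  --filters "Name=tag:Name,Values=Primary-Route-Table" \
  --query 'RouteTables[].Routes'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;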

&lt;p&gt;Now create Terraform blocks for the Security Groups and the EC2 instances as well. I am not including them here, as they would make the blog too long. We will use data source blocks for the availability zones and the AMI during EC2 instance creation.&lt;/p&gt;
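
&lt;p&gt;For reference, the primary-side security group with the rules described earlier (SSH from anywhere, ICMP and all TCP from the peered VPC) could be sketched like this; the resource name and exact port ranges are assumptions, so adapt them to your setup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_security_group" "primary_sg" {
  provider = aws.primary
  vpc_id   = aws_vpc.primary_vpc.id

  # SSH access from anywhere (port 22)
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # ICMP (ping) allowed from the peered VPC
  ingress {
    from_port   = -1
    to_port     = -1
    protocol    = "icmp"
    cidr_blocks = [var.secondary_vpc_cidr]
  }

  # All TCP traffic allowed from the peered VPC
  ingress {
    from_port   = 0
    to_port     = 65535
    protocol    = "tcp"
    cidr_blocks = [var.secondary_vpc_cidr]
  }

  # Allow all outbound traffic
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "Primary-SG"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The secondary-side group mirrors this, with var.primary_vpc_cidr as the allowed peer CIDR.&lt;/p&gt;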

&lt;h2&gt;
  
  
  Deployment &amp;amp; Verification
&lt;/h2&gt;

&lt;p&gt;Once all the files are ready, run the three commands below to create the infrastructure.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init
terraform plan
terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After running terraform plan or apply, you should see 18 resources to be created.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Plan: 18 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + primary_instance_id           = (known after apply)
  + primary_instance_private_ip   = (known after apply)
  + primary_instance_public_ip    = (known after apply)
  + primary_vpc_cidr              = "10.0.0.0/16"
  + primary_vpc_id                = (known after apply)
  + secondary_instance_id         = (known after apply)
  + secondary_instance_private_ip = (known after apply)
  + secondary_instance_public_ip  = (known after apply)
  + secondary_vpc_cidr            = "10.1.0.0/16"
  + secondary_vpc_id              = (known after apply)
  + test_connectivity_command     = (known after apply)
  + vpc_peering_connection_id     = (known after apply)
  + vpc_peering_status            = (known after apply)

────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The images below show the VPC Peering connection created from Primary to Secondary.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo6cdyrlf4ff3ejnx6a98.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo6cdyrlf4ff3ejnx6a98.jpg" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9jrs5yvrc8ufo415jb1n.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9jrs5yvrc8ufo415jb1n.jpg" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can see the two EC2 instances created through Terraform.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F80029j8yr9md9zzhbx53.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F80029j8yr9md9zzhbx53.jpg" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl6zxxyanksopim8h86a5.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl6zxxyanksopim8h86a5.jpg" alt=" "&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ubuntu@ip-10-0-1-126:~$ ping 10.1.1.113
PING 10.1.1.113 (10.1.1.113) 56(84) bytes of data.
64 bytes from 10.1.1.113: icmp_seq=1 ttl=64 time=63.0 ms
64 bytes from 10.1.1.113: icmp_seq=2 ttl=64 time=62.8 ms
64 bytes from 10.1.1.113: icmp_seq=3 ttl=64 time=62.9 ms
64 bytes from 10.1.1.113: icmp_seq=4 ttl=64 time=62.8 ms
64 bytes from 10.1.1.113: icmp_seq=5 ttl=64 time=62.8 ms
^C
--- 10.1.1.113 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4007ms
rtt min/avg/max/mdev = 62.752/62.830/63.021/0.101 ms
ubuntu@ip-10-0-1-126:~$ 
ubuntu@ip-10-0-1-126:~$ 
ubuntu@ip-10-0-1-126:~$ curl 10.1.1.113
&amp;lt;h1&amp;gt;Secondary VPC Instance - us-west-2&amp;lt;/h1&amp;gt;
&amp;lt;p&amp;gt;Private IP: 10.1.1.113 &amp;lt;/p
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As shown above, I pinged (and curled) the secondary EC2 instance from the primary EC2 instance, confirming that the two can communicate over the peering connection.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo5bxpwns08rhcong5z9u.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo5bxpwns08rhcong5z9u.jpg" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The screenshot above also shows the primary EC2 instance being accessed from the secondary one.&lt;/p&gt;

&lt;p&gt;Make sure to terminate all the resources once the project is done.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform destroy --auto-approve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This marks Day 15 of the 30 Days of AWS Terraform challenge. Today we did a mini-project on VPC Peering, establishing a connection between two VPCs in two different regions.&lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/WGt000THDmQ"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>aws</category>
    </item>
    <item>
      <title>Day 14— AWS Terraform Static Website Hosting</title>
      <dc:creator>Anil KUMAR</dc:creator>
      <pubDate>Tue, 09 Dec 2025 10:33:19 +0000</pubDate>
      <link>https://forem.com/anil_kumar_noolu/day-14-aws-terraform-static-website-hosting-hbk</link>
      <guid>https://forem.com/anil_kumar_noolu/day-14-aws-terraform-static-website-hosting-hbk</guid>
      <description>&lt;p&gt;This blog will mark the Day 14 of 30 days of AWS Terraform Challenge Initiative by Piyush Sachdeva. In this blog, we will be doing a hands-on exercise of hosting a Static website on AWS S3 and accessing that through CloudFront Distribution.&lt;/p&gt;

&lt;p&gt;This mini project demonstrates how to deploy a static website on AWS using Terraform. We'll create a complete static website hosting solution using S3 for storage and CloudFront for global content delivery.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture:
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Internet → CloudFront Distribution → S3 Bucket (Static Website)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqsvw2tmk2a3xwqn2mwxe.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqsvw2tmk2a3xwqn2mwxe.jpg" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Components:
&lt;/h2&gt;

&lt;p&gt;S3 Bucket: Hosts static website files (HTML, CSS, JS)&lt;br&gt;
CloudFront Distribution: Global CDN for fast content delivery&lt;br&gt;
Public Access Configuration: Allows public reading of website files.&lt;/p&gt;
&lt;h2&gt;
  
  
  Workflow:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Prepare your static website files: Create your HTML, CSS, JavaScript, and image files.&lt;/li&gt;
&lt;li&gt;Write Terraform configuration: Define the AWS resources (S3 bucket, CloudFront, and optionally Route 53 for a custom domain) in .tf files.&lt;/li&gt;
&lt;li&gt;Initialize Terraform: Run terraform init to download necessary providers.&lt;/li&gt;
&lt;li&gt;Plan changes: Run terraform plan to see what changes will be applied.&lt;/li&gt;
&lt;li&gt;Apply changes: Run terraform apply to provision the resources in AWS.&lt;/li&gt;
&lt;li&gt;Upload content: If not using Terraform to upload objects, upload your static files to the S3 bucket.&lt;/li&gt;
&lt;li&gt;Access your website: Access your website using the S3 static website endpoint or your custom domain if configured.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  CloudFront with S3 — Why It is Needed and How It Works:
&lt;/h2&gt;

&lt;p&gt;Hosting a static website directly on Amazon S3 is simple and cost-effective, but it has significant limitations when serving real users at scale.&lt;/p&gt;
&lt;h2&gt;
  
  
  Problems with S3-only Hosting:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;High latency for global users:&lt;br&gt;
S3 buckets exist in a single AWS region. Users located far from that region experience slower load times because every request must travel long physical distances to reach the S3 data center.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Higher data-transfer costs&lt;br&gt;
When traffic grows, serving all content from one region increases inter-region data transfer costs. S3 also does not cache responses, which means every request hits the origin, increasing overall expenses.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Security challenges&lt;br&gt;
Traditional S3 static hosting requires making the bucket public, exposing your content directly to the internet. This introduces risks such as:&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Direct object access outside your website&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Potential misuse of public URLs&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Difficulty enforcing fine-grained access control&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Amazon CloudFront, AWS’s global CDN, solves these issues by acting as a secure, fast, distributed caching layer in front of your S3 bucket.&lt;/p&gt;
&lt;h2&gt;
  
  
  Key Benefits
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Global low-latency delivery&lt;br&gt;
CloudFront uses edge locations worldwide to cache content close to users.&lt;br&gt;
First request: Served from S3&lt;br&gt;
Subsequent requests: Served from nearest CloudFront edge&lt;br&gt;
This dramatically reduces page load time for users anywhere in the world.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Lower cost with caching&lt;br&gt;
Since CloudFront serves most requests from its cache, fewer requests reach S3. This reduces S3 data-transfer and request costs. CloudFront’s global data transfer rates are also cheaper than S3’s regional egress charges.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enhanced security with Origin Access Control (OAC)&lt;br&gt;
Modern deployments use OAC, which:&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Gives CloudFront permission to read from your private S3 bucket&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Removes the need for ANY public bucket permissions&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ensures users can ONLY access S3 content through CloudFront&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Complete Architecture Diagram
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftn8mjq4yfr09t2tfo7su.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftn8mjq4yfr09t2tfo7su.jpg" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Key Components:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Private S3 Bucket with static files (HTML, CSS, JS, images).&lt;/li&gt;
&lt;li&gt;Origin Access Control (OAC) - The modern, secure way to authorize CloudFront to access the private S3 bucket. This replaces the deprecated Origin Access Identity (OAI).&lt;/li&gt;
&lt;li&gt;S3 Bucket Policy - Explicitly authorizes the CloudFront Distribution's service principal and restricts access using the OAC condition.&lt;/li&gt;
&lt;li&gt;CloudFront Distribution - Caches content and serves it globally over HTTPS.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;
  
  
  Terraform Code:
&lt;/h2&gt;
&lt;h2&gt;
  
  
  1. Setting the static files and variables.tf
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir www
# Add static files: index.html, style.css, script.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "bucket_name" {
  default = "my-static-website-bucket-anilkumar"
}

locals {
  origin_id = "s3-static-site-origin"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  2. Creating S3 with public access blocked
&lt;/h2&gt;

&lt;p&gt;The bucket is defined as a standard S3 bucket, and then an access block resource is used to explicitly block all public access, ensuring it can only be accessed by CloudFront.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// first we create s3 resource 
resource "aws_s3_bucket" "my_first_bucket" {
  bucket = var.bucket_name
}

// here we make the bucket private
resource "aws_s3_bucket_public_access_block" "example" {
  bucket = aws_s3_bucket.my_first_bucket.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  3.  Origin Access Control (OAC)
&lt;/h2&gt;

&lt;p&gt;This resource creates the secure identity that CloudFront will use when communicating with S3.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// now we allow the origin to access the bucket
resource "aws_cloudfront_origin_access_control" "my_origin_access_control" {
  name                              = "my_origin_access_control"
  description                       = "OAC for S3 Static Site"
  origin_access_control_origin_type = "s3"
  signing_behavior                  = "always"
  signing_protocol                  = "sigv4"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Purpose: Secure identity for CloudFront S3 communication, using the modern SigV4 signing protocol.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. S3 Bucket Policy (Important):
&lt;/h2&gt;

&lt;p&gt;This policy is the crucial authorization step. It only allows s3:GetObject (read-access for files) and only from the CloudFront service, specifically the ARN of our distribution using the AWS:SourceArn condition.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// bucket policy
resource "aws_s3_bucket_policy" "allow_access_from_cloudfront" {
  bucket = aws_s3_bucket.my_first_bucket.id

  depends_on = [aws_s3_bucket_public_access_block.example]

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid    = "AllowCloudFrontS3Access"
        Effect = "Allow"

        # FIXED → Correct Principal for CloudFront Service
        Principal = {
          Service = "cloudfront.amazonaws.com"
        }

        Action = [
          "s3:GetObject",
        ]

        Resource = [
          "${aws_s3_bucket.my_first_bucket.arn}/*"
        ]

        # FIXED → Required OAC condition to ensure only THIS distribution can access
        Condition = {
          StringEquals = {
            "AWS:SourceArn" = aws_cloudfront_distribution.s3_distribution.arn
          }
        }
      }
    ]
  })
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  5. Uploading Static files:
&lt;/h2&gt;

&lt;p&gt;This resource uses the fileset function to iterate over all files in the local www directory and uploads them to the S3 bucket with the correct MIME type (Content-Type) based on the file extension.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// s3 bucket object 
resource "aws_s3_object" "object" {

  bucket   = aws_s3_bucket.my_first_bucket.id
  for_each = fileset("${path.module}/www", "**/*")
  key      = each.value
  source   = "${path.module}/www/${each.value}"

  etag = filemd5("${path.module}/www/${each.value}")
  content_type = lookup({
    # Map common file extensions to MIME types
    "html"         = "text/html"
    "css"          = "text/css"
    "js"           = "application/javascript"
    "jpeg"         = "image/jpeg"
    "png"          = "image/png"
    "gif"          = "image/gif"
    "jpg"          = "image/jpeg"
    # ... other types
  }, split(".", each.value)[length(split(".", each.value)) - 1], "application/octet-stream")
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  6. CloudFront Distribution
&lt;/h2&gt;

&lt;p&gt;This is the main resource that ties everything together. It defines the caching rules, specifies the S3 bucket as the origin, and links the OAC for secure access.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_cloudfront_distribution" "s3_distribution" {
  origin {
    # Use bucket_regional_domain_name for private S3 origin with OAC/OAI
    domain_name              = aws_s3_bucket.my_first_bucket.bucket_regional_domain_name
    origin_access_control_id = aws_cloudfront_origin_access_control.my_origin_access_control.id
    origin_id                = local.origin_id
  }

  enabled             = true
  is_ipv6_enabled     = true
  comment             = "CloudFront distribution for static site"
  default_root_object = "index.html" # Important for serving the index file

  default_cache_behavior {
    allowed_methods    = ["GET", "HEAD"]
    cached_methods     = ["GET", "HEAD"]
    target_origin_id   = local.origin_id

    forwarded_values {
      query_string = false

      cookies {
        forward = "none" # Static sites don't need cookies forwarded
      }
    }

    # IMPROVED: Changed from "allow-all" to "redirect-to-https" for security best practice
    viewer_protocol_policy = "redirect-to-https" 
    min_ttl                = 0
    default_ttl            = 3600 # 1 hour default cache
    max_ttl                = 86400 # 24 hours max cache
  }

  price_class = "PriceClass_100" # Lowest cost, basic global coverage

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    # Use AWS's default certificate for HTTPS
    cloudfront_default_certificate = true 
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  7. Outputs Declaration:
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "website_url" {
  description = "The URL of the static website"
  value       = "https://${aws_cloudfront_distribution.s3_distribution.domain_name}"
}

output "cloudfront_distribution_id" {
  description = "The ID of the CloudFront distribution"
  value       = aws_cloudfront_distribution.s3_distribution.id
}

output "s3_bucket_name" {
  description = "The name of the S3 bucket"
  value       = aws_s3_bucket.my_first_bucket.bucket
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  8. Deploy &amp;amp; Test
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init
terraform plan
terraform apply --auto-approve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform apply 


Plan: 8 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + cloudfront_distribution_id = (known after apply)
  + s3_bucket_name             = (known after apply)
  + website_url                = (known after apply)
aws_cloudfront_origin_access_control.oac: Creating...
aws_s3_bucket.website: Creating...
aws_cloudfront_origin_access_control.oac: Creation complete after 2s [id=E1KGAVQXOCS06C]
aws_s3_bucket.website: Creation complete after 7s [id=my-static-website-anilkumar20251209102509049200000001]
aws_s3_bucket_public_access_block.website: Creating...
aws_s3_object.website_files["style.css"]: Creating...
aws_s3_object.website_files["script.js"]: Creating...
aws_s3_object.website_files["index.html"]: Creating...
aws_cloudfront_distribution.s3_distribution: Creating...
aws_s3_bucket_public_access_block.website: Creation complete after 1s [id=my-static-website-anilkumar20251209102509049200000001]
aws_s3_object.website_files["script.js"]: Creation complete after 1s [id=script.js]
aws_s3_object.website_files["style.css"]: Creation complete after 1s [id=style.css]
aws_s3_object.website_files["index.html"]: Creation complete after 1s [id=index.html]
aws_cloudfront_distribution.s3_distribution: Still creating... [10s elapsed]
aws_cloudfront_distribution.s3_distribution: Still creating... [20s elapsed]
aws_cloudfront_distribution.s3_distribution: Still creating... [30s elapsed]
aws_cloudfront_distribution.s3_distribution: Still creating... [40s elapsed]
aws_cloudfront_distribution.s3_distribution: Still creating... [50s elapsed]
aws_cloudfront_distribution.s3_distribution: Still creating... [1m0s elapsed]
aws_cloudfront_distribution.s3_distribution: Still creating... [1m10s elapsed]
aws_cloudfront_distribution.s3_distribution: Still creating... [1m20s elapsed]
aws_cloudfront_distribution.s3_distribution: Still creating... [1m30s elapsed]
aws_cloudfront_distribution.s3_distribution: Still creating... [1m40s elapsed]
aws_cloudfront_distribution.s3_distribution: Still creating... [1m50s elapsed]
aws_cloudfront_distribution.s3_distribution: Still creating... [2m0s elapsed]
aws_cloudfront_distribution.s3_distribution: Still creating... [2m10s elapsed]
aws_cloudfront_distribution.s3_distribution: Still creating... [2m20s elapsed]
aws_cloudfront_distribution.s3_distribution: Still creating... [2m30s elapsed]
aws_cloudfront_distribution.s3_distribution: Still creating... [2m40s elapsed]
aws_cloudfront_distribution.s3_distribution: Still creating... [2m50s elapsed]
aws_cloudfront_distribution.s3_distribution: Still creating... [3m0s elapsed]
aws_cloudfront_distribution.s3_distribution: Still creating... [3m10s elapsed]
aws_cloudfront_distribution.s3_distribution: Still creating... [3m20s elapsed]
aws_cloudfront_distribution.s3_distribution: Creation complete after 3m27s [id=E3PI31IGA77HJ]
aws_s3_bucket_policy.website: Creating...
aws_s3_bucket_policy.website: Creation complete after 5s [id=my-static-website-anilkumar20251209102509049200000001]

Apply complete! Resources: 8 added, 0 changed, 0 destroyed.

Outputs:

cloudfront_distribution_id = "E3PI31IGA77HJ"
s3_bucket_name = "my-static-website-anilkumar20251209102509049200000001"
website_url = "https://d2u2vdb3lrnvju.cloudfront.net"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here are the outputs of the S3 bucket creation and the CloudFront distribution creation:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh42egci2gbgqozgktpqk.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh42egci2gbgqozgktpqk.jpg" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can see the S3 bucket created with the bucket prefix we set.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4blhs28ofea1msmudbkr.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4blhs28ofea1msmudbkr.jpg" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can see the CloudFront distribution that was created.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy3n07ahf1m5s4gbpfhy0.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy3n07ahf1m5s4gbpfhy0.jpg" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can see the static website being served from the S3 bucket through the CloudFront distribution.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Important Tip&lt;/strong&gt;: Always pin provider versions in your configurations to prevent unexpected changes in resource behavior.&lt;/p&gt;
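&lt;p&gt;As a rough sketch of that tip (the version numbers here are illustrative, not from this project), pinning looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_version = "&amp;gt;= 1.5.0" # illustrative minimum Terraform version

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&amp;gt; 5.0" # pessimistic constraint: allows any 5.x, blocks 6.0
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With this in place, terraform init will refuse to pull a provider release outside the constraint, so a new major version can never change your plan unexpectedly.&lt;/p&gt;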

&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;p&gt;This concludes Day 14 of the 30 Days of Terraform challenge. Today we built a mini project: hosting a static website through AWS S3 and a CloudFront distribution. See you tomorrow in a new blog!&lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/bK6RimAv2nQ"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>aws</category>
      <category>security</category>
    </item>
    <item>
      <title>Day 13: Terraform Data Sources</title>
      <dc:creator>Anil KUMAR</dc:creator>
      <pubDate>Sun, 07 Dec 2025 17:34:09 +0000</pubDate>
      <link>https://forem.com/anil_kumar_noolu/day-13-terraform-data-sources-4m1k</link>
      <guid>https://forem.com/anil_kumar_noolu/day-13-terraform-data-sources-4m1k</guid>
      <description>&lt;p&gt;Today marks the Day 3 of 30 Days of AWS Terraform Challenge Initiative by Piyush Sachdev. Today we will do deep dive into the Terraform Data Sources, what exactly is a Data Source and how it will help us in writing terraform code for AWS resources such as EC2, VPC, subnet and so..&lt;/p&gt;

&lt;h2&gt;
  
  
  Data Source:
&lt;/h2&gt;

&lt;p&gt;Think of a data source like a phone directory: usernames and phone numbers form key/value pairs behind an API, so whenever you need a value you can retrieve it by key instead of hardcoding it.&lt;/p&gt;

&lt;p&gt;In short, Terraform data sources for AWS allow you to retrieve information about existing AWS resources or external data, which can then be referenced within your Terraform configurations.&lt;/p&gt;

&lt;p&gt;For example, while creating an EC2 instance we need an AMI ID. Going to the AMI release page, finding the latest AMI ID, and copying it by hand is not the best approach, so we need a way to fetch that AMI ID automatically while creating EC2 instances, without manual intervention or hardcoding. For this, Terraform provides a construct named the "Data Source". Almost all AWS resources have a data source, and you can get their IDs and other details through it; for example, ask for an Amazon Linux 2 AMI and it will fetch that AMI ID for you.&lt;/p&gt;

&lt;p&gt;In short, Data sources allow Terraform to read information about existing infrastructure. They:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Don't create, update, or delete resources&lt;/li&gt;
&lt;li&gt;Allow you to reference resources managed elsewhere&lt;/li&gt;
&lt;li&gt;Enable sharing infrastructure between teams&lt;/li&gt;
&lt;li&gt;Are defined with data blocks instead of resource blocks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpa8x68s6jeqfdfyawypg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpa8x68s6jeqfdfyawypg.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Use AWS Data Sources:
&lt;/h2&gt;

&lt;p&gt;You define a data source using the data block in your Terraform configuration. The syntax is as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "provider_type" "name" {
  # Configuration settings for filtering or identifying the data source
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;provider_type: Specifies the type of AWS data source (e.g., aws_ami, aws_vpc, aws_s3_bucket).&lt;/p&gt;

&lt;p&gt;name: A local name used to reference this data source within your configuration.&lt;/p&gt;

&lt;p&gt;Configuration Settings: These vary depending on the data source and are used to filter or identify the specific resource you want to retrieve information about. This often includes id, name, or filter blocks with name and values arguments.&lt;/p&gt;
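&lt;p&gt;As a minimal sketch (the tag value "main" here is hypothetical), once a data source is declared you reference its attributes elsewhere with the data. prefix:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "aws_vpc" "selected" {
  filter {
    name   = "tag:Name"
    values = ["main"] # hypothetical tag on an existing VPC
  }
}

# Referenced elsewhere in the configuration:
# data.aws_vpc.selected.id         - the looked-up VPC's ID
# data.aws_vpc.selected.cidr_block - the looked-up VPC's CIDR block
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;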

&lt;h2&gt;
  
  
  Examples between Data vs. Resource Block:
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Resource Block - Terraform MANAGES this
resource "aws_vpc" "my_vpc" {
  cidr_block = "10.0.0.0/16"
}

# Data Block - Terraform READS this
data "aws_vpc" "existing_vpc" {
  filter {
    name   = "tag:Name"
    values = ["shared-network-vpc"]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above code block, you can see some key differences between resources block and data sources block.&lt;/p&gt;

&lt;p&gt;In the resource block, Terraform entirely manages that VPC based on our CIDR block, whereas in the data source block we simply reference an existing VPC in our AWS account under the local name "existing_vpc" and use it while creating other resources.&lt;/p&gt;

&lt;p&gt;In the filter section, we tell Terraform to retrieve data from the VPC that matches the filters provided.&lt;/p&gt;

&lt;h2&gt;
  
  
  Task:
&lt;/h2&gt;

&lt;p&gt;In this task, we will first create a VPC using Terraform. Then we will use the ami data source to fetch the AMI ID and create an EC2 instance inside the VPC we created initially, referencing it through data sources.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd2oxz0ks7uyfmsgrtkqm.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd2oxz0ks7uyfmsgrtkqm.jpg" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Creation of VPC :
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# This simulates infrastructure created by another team
provider "aws" {
  region = "us-east-1"
}

resource "aws_vpc" "shared" {
  cidr_block = "10.0.0.0/16"
  tags = {
    Name = "shared-network-vpc"  
  }
}

resource "aws_subnet" "shared" {
  vpc_id     = aws_vpc.shared.id
  cidr_block = "10.0.1.0/24"
  tags = {
    Name = "shared-primary-subnet"  # ← This tag is important!
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Initially we didn't have any VPC with the CIDR block 10.0.0.0/16; now we will create one with that CIDR and the tag shared-network-vpc.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
  + create

Terraform will perform the following actions:

  # aws_subnet.shared will be created
  + resource "aws_subnet" "shared" {
      + arn                                            = (known after apply)
      + assign_ipv6_address_on_creation                = false
      + availability_zone                              = (known after apply)
      + availability_zone_id                           = (known after apply)
      + cidr_block                                     = "10.0.1.0/24"
      + enable_dns64                                   = false
      + enable_resource_name_dns_a_record_on_launch    = false
      + enable_resource_name_dns_aaaa_record_on_launch = false
      + id                                             = (known after apply)
      + ipv6_cidr_block_association_id                 = (known after apply)
      + ipv6_native                                    = false
      + map_public_ip_on_launch                        = false
      + owner_id                                       = (known after apply)
      + private_dns_hostname_type_on_launch            = (known after apply)
      + region                                         = "us-east-1"
      + tags                                           = {
          + "Name" = "shared-primary-subnet"
        }
      + tags_all                                       = {
          + "Name" = "shared-primary-subnet"
        }
      + vpc_id                                         = (known after apply)
    }

  # aws_vpc.shared will be created
  + resource "aws_vpc" "shared" {
      + arn                                  = (known after apply)
      + cidr_block                           = "10.0.0.0/16"
      + default_network_acl_id               = (known after apply)
      + default_route_table_id               = (known after apply)
      + default_security_group_id            = (known after apply)
      + dhcp_options_id                      = (known after apply)
      + enable_dns_hostnames                 = (known after apply)
      + enable_dns_support                   = true
      + enable_network_address_usage_metrics = (known after apply)
      + id                                   = (known after apply)
      + instance_tenancy                     = "default"
      + ipv6_association_id                  = (known after apply)
      + ipv6_cidr_block                      = (known after apply)
      + ipv6_cidr_block_network_border_group = (known after apply)
      + main_route_table_id                  = (known after apply)
      + owner_id                             = (known after apply)
      + region                               = "us-east-1"
      + tags                                 = {
          + "Name" = "shared-network-vpc"
        }
      + tags_all                             = {
          + "Name" = "shared-network-vpc"
        }
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fde908mc87kq9w32y9q1u.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fde908mc87kq9w32y9q1u.jpg" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdcrabjdvazy49hbdayyz.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdcrabjdvazy49hbdayyz.jpg" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we can see that a new VPC named/tagged "shared-network-vpc" is created.&lt;/p&gt;

&lt;p&gt;Now we will create an EC2 instance using the ami, vpc, and subnet data sources.&lt;/p&gt;

&lt;p&gt;Below are the Data source blocks for VPC and Subnet resources:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Data source to get the existing VPC
data "aws_vpc" "shared" {
  filter {
    name   = "tag:Name"
    values = ["shared-network-vpc"]
  }
}

# Data source to get the existing subnet
data "aws_subnet" "shared" {
  filter {
    name   = "tag:Name"
    values = ["shared-primary-subnet"]
  }
  vpc_id = data.aws_vpc.shared.id  # ← Using aws_vpc data source
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;data "aws_vpc" - reads VPC information from AWS&lt;/li&gt;
&lt;li&gt;filter - searches for the VPC with a specific tag&lt;/li&gt;
&lt;li&gt;shared - local name used to reference this data source&lt;/li&gt;
&lt;li&gt;Returns the VPC ID, CIDR block, and other attributes&lt;/li&gt;
&lt;li&gt;data "aws_subnet" - searches for the subnet with a specific tag&lt;/li&gt;
&lt;li&gt;vpc_id - narrows the search to our VPC&lt;/li&gt;
&lt;/ul&gt;
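&lt;p&gt;If you want to verify what these data sources actually resolved to, you can expose them as outputs (a sketch; the output names are my own):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "shared_vpc_id" {
  value = data.aws_vpc.shared.id
}

output "shared_vpc_cidr" {
  value = data.aws_vpc.shared.cidr_block
}

output "shared_subnet_id" {
  value = data.aws_subnet.shared.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;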

&lt;p&gt;Now we will go through the data source for the AMI, which helps us fetch the AMI ID for the Amazon Linux 2 OS.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Data source for the latest Amazon Linux 2 AMI
data "aws_ami" "amazon_linux_2" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;most_recent = true - gets latest matching AMI&lt;/li&gt;
&lt;li&gt;owners = ["amazon"] - only official Amazon AMIs&lt;/li&gt;
&lt;li&gt;Multiple filters for precise matching: the name filter matches the Amazon Linux 2 (amzn2) AMI naming pattern&lt;/li&gt;
&lt;li&gt;Wildcards (*) allow flexible pattern matching&lt;/li&gt;
&lt;/ul&gt;
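&lt;p&gt;To check which AMI those filters resolved to before creating anything, you can output its ID and name (a sketch; the output names are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "resolved_ami_id" {
  value = data.aws_ami.amazon_linux_2.id
}

output "resolved_ami_name" {
  value = data.aws_ami.amazon_linux_2.name
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Because most_recent = true re-evaluates on every plan, the resolved ID can change over time as Amazon publishes new AMIs; the output makes that drift easy to spot.&lt;/p&gt;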

&lt;p&gt;Now in main.tf, we will create an EC2 instance utilizing all the above Data Sources of VPC, Subnet and AMI ID.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_instance" "main" {
  ami           = data.aws_ami.amazon_linux_2.id    # AMI - Data source
  instance_type = "t2.micro"
  subnet_id     = data.aws_subnet.shared.id           # Subnet - Data source
  private_ip    = "10.0.1.50"

  tags = {
    Name = "day13-instance"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;data.aws_ami.amazon_linux_2.id - references AMI data source matching with Linux_2 OS.&lt;/li&gt;
&lt;li&gt;data.aws_subnet.shared.id - references subnet data source&lt;/li&gt;
&lt;li&gt;Instance will be created in existing infrastructure&lt;/li&gt;
&lt;li&gt;Private IP must be within subnet's CIDR range
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan
data.aws_vpc.shared: Reading...
data.aws_ami.amazon_linux_2: Reading...
aws_vpc.shared: Refreshing state... [id=vpc-09527ed20e76d002e]
data.aws_ami.amazon_linux_2: Read complete after 3s [id=ami-0156001f0548e90b1]
data.aws_vpc.shared: Read complete after 3s [id=vpc-09527ed20e76d002e]
data.aws_subnet.shared: Reading...
data.aws_subnet.shared: Read complete after 1s [id=subnet-0e8357d0d5a07c57b]
aws_subnet.shared: Refreshing state... [id=subnet-0e8357d0d5a07c57b]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
  + create

Terraform will perform the following actions:

  # aws_instance.main will be created
  + resource "aws_instance" "main" {
      + ami                                  = "ami-0156001f0548e90b1"
      + arn                                  = (known after apply)
      + associate_public_ip_address          = (known after apply)
      + availability_zone                    = (known after apply)
      + disable_api_stop                     = (known after apply)
      + disable_api_termination              = (known after apply)
      + ebs_optimized                        = (known after apply)
      + enable_primary_ipv6                  = (known after apply)
      + force_destroy                        = false
      + get_password_data                    = false
      + host_id                              = (known after apply)
      + host_resource_group_arn              = (known after apply)
      + iam_instance_profile                 = (known after apply)
      + id                                   = (known after apply)
      + instance_initiated_shutdown_behavior = (known after apply)
      + instance_lifecycle                   = (known after apply)
      + instance_state                       = (known after apply)
      + instance_type                        = "t2.micro"
      + ipv6_address_count                   = (known after apply)
      + ipv6_addresses                       = (known after apply)
      + key_name                             = (known after apply)
      + monitoring                           = (known after apply)
      + outpost_arn                          = (known after apply)
      + password_data                        = (known after apply)
      + placement_group                      = (known after apply)
      + placement_group_id                   = (known after apply)
      + placement_partition_number           = (known after apply)
      + primary_network_interface_id         = (known after apply)
      + private_dns                          = (known after apply)
      + private_ip                           = "10.0.1.50"
      + public_dns                           = (known after apply)
      + public_ip                            = (known after apply)
      + region                               = "us-east-1"
      + secondary_private_ips                = (known after apply)
      + security_groups                      = (known after apply)
      + source_dest_check                    = true
      + spot_instance_request_id             = (known after apply)
      + subnet_id                            = "subnet-0e8357d0d5a07c57b"
      + tags                                 = {
          + "Name" = "day13-instance"
        }
      + tags_all                             = {
          + "Name" = "day13-instance"
        }
      + tenancy                              = (known after apply)
      + user_data_base64                     = (known after apply)
      + user_data_replace_on_change          = false
      + vpc_security_group_ids               = (known after apply)

      + capacity_reservation_specification (known after apply)

      + cpu_options (known after apply)

      + ebs_block_device (known after apply)

      + enclave_options (known after apply)

      + ephemeral_block_device (known after apply)

      + instance_market_options (known after apply)

      + maintenance_options (known after apply)

      + metadata_options (known after apply)

      + network_interface (known after apply)

      + primary_network_interface (known after apply)

      + private_dns_name_options (known after apply)

      + root_block_device (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above terraform plan execution, you can see that it first reads the ami, vpc, and subnet data sources and then plans the creation of the EC2 instance.&lt;/p&gt;

&lt;p&gt;After running terraform apply, we can see that the EC2 instance is created.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fej114fe9ktnxz9assee7.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fej114fe9ktnxz9assee7.jpg" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As everything has been created using the AMI, VPC, and subnet data sources, we can finally delete the resources using terraform destroy.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform destroy


Plan: 0 to add, 0 to change, 3 to destroy.
aws_subnet.shared: Destroying... [id=subnet-0e8357d0d5a07c57b]
aws_instance.main: Destroying... [id=i-057687d8f123d3e00]
aws_subnet.shared: Still destroying... [id=subnet-0e8357d0d5a07c57b, 10s elapsed]
aws_instance.main: Still destroying... [id=i-057687d8f123d3e00, 10s elapsed]
aws_subnet.shared: Still destroying... [id=subnet-0e8357d0d5a07c57b, 20s elapsed]
aws_instance.main: Still destroying... [id=i-057687d8f123d3e00, 20s elapsed]
aws_instance.main: Still destroying... [id=i-057687d8f123d3e00, 30s elapsed]
aws_subnet.shared: Still destroying... [id=subnet-0e8357d0d5a07c57b, 30s elapsed]
aws_instance.main: Still destroying... [id=i-057687d8f123d3e00, 40s elapsed]
aws_subnet.shared: Still destroying... [id=subnet-0e8357d0d5a07c57b, 40s elapsed]
aws_subnet.shared: Still destroying... [id=subnet-0e8357d0d5a07c57b, 50s elapsed]
aws_instance.main: Still destroying... [id=i-057687d8f123d3e00, 50s elapsed]
aws_instance.main: Still destroying... [id=i-057687d8f123d3e00, 1m0s elapsed]
aws_subnet.shared: Still destroying... [id=subnet-0e8357d0d5a07c57b, 1m0s elapsed]
aws_instance.main: Destruction complete after 1m10s
aws_subnet.shared: Still destroying... [id=subnet-0e8357d0d5a07c57b, 1m10s elapsed]
aws_subnet.shared: Destruction complete after 1m15s
aws_vpc.shared: Destroying... [id=vpc-09527ed20e76d002e]
aws_vpc.shared: Destruction complete after 1s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;p&gt;This marks the conclusion of Day 13 of the 30 Days of Terraform Challenge by Piyush Sachdev, in which we took a deep dive into Terraform data sources. We have seen what exactly a data source is and how it helps us create resources efficiently.&lt;/p&gt;

&lt;p&gt;Below is the YouTube video for reference:&lt;br&gt;


  &lt;iframe src="https://www.youtube.com/embed/MSr67lWCyD8"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>aws</category>
    </item>
    <item>
      <title>Day 12 - Terraform Functions - Part 2</title>
      <dc:creator>Anil KUMAR</dc:creator>
      <pubDate>Fri, 05 Dec 2025 18:03:56 +0000</pubDate>
      <link>https://forem.com/anil_kumar_noolu/day-12-terraform-functions-part-2-1g3d</link>
      <guid>https://forem.com/anil_kumar_noolu/day-12-terraform-functions-part-2-1g3d</guid>
      <description>&lt;p&gt;Today marks the day 12 of 30 days of Terraform Challenge initiative by Piyush Sachdev. This blog is in continuation from previous blog of Day 11 - Terraform Functions of Part 1. In Part 1, we have discussed about String Functions, Numeric Functions, Collection Functions, Type Conversion, Date/Time Functions.&lt;/p&gt;

&lt;p&gt;In this blog, we will discuss the remaining built-in functions in Terraform, such as file functions, validation functions, and lookup functions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqnp1tpm7iy7pxrq5lh8p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqnp1tpm7iy7pxrq5lh8p.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Validation Functions:
&lt;/h2&gt;

&lt;p&gt;In Terraform, you can define validation rules for variables to ensure that inputs conform to your specific requirements. This is especially useful when you want to enforce constraints like valid string lengths, specific patterns, or predefined allowed values.&lt;/p&gt;

&lt;p&gt;Validation blocks allow you to define strict rules for your variables. Instead of waiting for AWS to reject a resource creation (which takes time), Terraform can catch errors immediately during the plan phase.&lt;/p&gt;

&lt;h2&gt;
  
  
  Examples:
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Example 1: Validating Instance Type
&lt;/h2&gt;

&lt;p&gt;Let’s start by looking at a common validation scenario: checking the instance_type variable to ensure that only certain values are allowed. You might want to validate that the instance type name is between 2 and 20 characters and matches a specific pattern (e.g., t2 or t3 instances).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "aws_ami" "validated_ami" {
  most_recent = true
  owners      = ["amazon"]
  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

resource "aws_instance" "validated_instance" {
  ami           = data.aws_ami.validated_ami.id
  instance_type = var.instance_type
  tags = {
    Name = "validated-instance"
    Type = var.instance_type
  }
}
variable "instance_type" {
  default = "t2.micro"

  validation {
    condition     = length(var.instance_type) &amp;gt;= 2 &amp;amp;&amp;amp; length(var.instance_type) &amp;lt;= 20
    error_message = "Instance type must be between 2 and 20 characters"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above example, we have set a validation that the instance type length must be between 2 and 20 characters, accommodating values like t2.micro or t3.micro while rejecting overly long strings.&lt;br&gt;
I have given a longer instance_type, and it leads to the error below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan
data.aws_ami.validated_ami: Reading...
data.aws_ami.validated_ami: Read complete after 6s [id=ami-02610f36df0c59544]

Planning failed. Terraform encountered an error while generating this plan.

╷
│ Error: Invalid value for variable
│
│   on valid.tf line 19:
│   19: variable "instance_type" {
│     ├────────────────
│     │ var.instance_type is "t2.micromediaughgfcghjfd"
│
│ Instance type must be between 2 and 20 characters
│
│ This was checked by the validation rule at valid.tf:22,3-13.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "aws_ami" "validated_ami" {
  most_recent = true
  owners      = ["amazon"]
  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

resource "aws_instance" "validated_instance" {
  ami           = data.aws_ami.validated_ami.id
  instance_type = var.instance_type
  tags = {
    Name = "validated-instance"
    Type = var.instance_type
  }
}
variable "instance_type" {
  default = "t2.micro"

  validation {
    condition     = can(regex("^t[2-3]\\.", var.instance_type))
    error_message = "Instance type must start with t2 or t3"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When I set instance_type as t4.micro, then:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan
data.aws_ami.validated_ami: Reading...
data.aws_ami.validated_ami: Read complete after 5s [id=ami-02610f36df0c59544]

Planning failed. Terraform encountered an error while generating this plan.

╷
│ Error: Invalid value for variable
│
│   on valid.tf line 19:
│   19: variable "instance_type" {
│     ├────────────────
│     │ var.instance_type is "t4.micro"
│
│ Instance type must start with t2 or t3
│
│ This was checked by the validation rule at valid.tf:27,3-13.
╵
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Example 2: Validating String Suffix
&lt;/h2&gt;

&lt;p&gt;Another common validation is checking if a string ends with a specific suffix, such as validating backup names.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "backup_name" {
  default = "daily_backup"

  validation {
    condition     = endswith(var.backup_name, "_backup")
    error_message = "Backup name must end with '_backup'."
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above example, you can see that any value which does not end with _backup leads to an error.&lt;/p&gt;
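
&lt;p&gt;As a related sketch (not part of the original walkthrough), Terraform also provides a startswith function, so prefixes can be validated the same way. The prefix value here is an illustrative assumption:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "backup_name" {
  default = "daily_backup"

  validation {
    # startswith checks the prefix, just as endswith checks the suffix
    condition     = startswith(var.backup_name, "daily_")
    error_message = "Backup name must start with 'daily_'."
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;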

&lt;h2&gt;
  
  
  Sensitive Variables:
&lt;/h2&gt;

&lt;p&gt;There are times when you pass a sensitive value to Terraform and you don't want that value displayed anywhere in the console output. In those cases, you can set the parameter "sensitive = true" on the variable, which prevents Terraform from showing its value in the console after executing terraform plan or apply. Note that the value is still stored in plain text in the state file, so the state itself must be kept secure.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "db_password" {
  type      = string
  default   = "password"
  sensitive = true
}

output "db_pass_output" {
  value     = var.db_password
  sensitive = true   # By enabling here, the Output must also be marked sensitive
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
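
&lt;p&gt;As a side note (typical usage, assumed rather than shown above): instead of hardcoding a secret as a default, you can supply it at runtime through an environment variable with the TF_VAR_ prefix, which Terraform picks up automatically:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export TF_VAR_db_password="super-secret"
terraform plan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;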





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan

Changes to Outputs:
  + db_pass_output = (sensitive value)

You can apply this plan to save these new output values to the Terraform state, without changing any real infrastructure.

────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you don't enable the sensitive=true flag in the output block, then terraform plan fails with the error below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; terraform plan
╷
│ Error: Output refers to sensitive values
│
│   on main.tf line 7:
│    7: output "db_pass_output" {
│
│ To reduce the risk of accidentally exporting sensitive data that was intended to be only internal, Terraform requires that any
│ root module output containing sensitive data be explicitly marked as sensitive, to confirm your intent.
│
│ If you do intend to export this data, annotate the output value as sensitive by adding the following argument:
│     sensitive = true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Type Conversion and Concat:
&lt;/h2&gt;

&lt;p&gt;The main point to remember here is that we use merge for combining maps and concat for combining lists.&lt;/p&gt;
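
&lt;p&gt;Since merge is mentioned but not shown below, here is a minimal sketch (the variable names and tag values are illustrative assumptions):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "default_tags" {
  default = { Environment = "dev", Team = "platform" }
}

variable "extra_tags" {
  default = { Environment = "prod", Owner = "anil" }
}

locals {
  # Later maps win on key conflicts, so Environment becomes "prod"
  merged_tags = merge(var.default_tags, var.extra_tags)
}

output "tags" {
  value = local.merged_tags
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;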

&lt;p&gt;We use type conversions when we want to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Clean up data automatically&lt;/li&gt;
&lt;li&gt;Make further operations easier&lt;/li&gt;
&lt;li&gt;Prevent mistakes when working with lists that shouldn’t have duplicates&lt;/li&gt;
&lt;li&gt;Bridge differences between Terraform’s list and set types
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "user_locations" {
  default = ["us-east-1", "us-west-2", "us-east-1"]  
}

variable "default_locations" {
  default = ["us-west-1"]
}

locals {
  all_locations    = concat(var.user_locations, var.default_locations)
  unique_locations = toset(local.all_locations)
}

output "location" {
  value = local.unique_locations
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan

Changes to Outputs:
  + location = [
      + "us-east-1",
      + "us-west-1",
      + "us-west-2",
    ]

You can apply this plan to save these new output values to the Terraform state, without changing any real infrastructure 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can see in the above example that concat merged the two variables into a single list, and toset converted that list to a set, eliminating the duplicate entries.&lt;/p&gt;
&lt;h2&gt;
  
  
  Number Functions: Sum, Max, Min, Absolute, and Average:
&lt;/h2&gt;

&lt;p&gt;Terraform provides several handy number functions that help us perform calculations on variables, especially lists of numbers.&lt;/p&gt;

&lt;p&gt;These functions are useful for things like monthly costs, resource counts, or any scenario where we need math on our infrastructure data.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "monthly_costs" {
  default = [-50, 100, 75, 200]  # -50 is a credit
}

locals {
  positive_costs = [for cost in var.monthly_costs : abs(cost)]
  max_cost       = max(local.positive_costs...)
  total_cost     = sum(local.positive_costs)
  avg_cost       = local.total_cost / length(local.positive_costs)
}

output "max_cost" {
  value = local.max_cost
}

output "positive_cost" {
  value = local.positive_costs
}

output "total_costs" {
  value = local.total_cost
}

output "avg_costs" {
  value = local.avg_cost
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan

Changes to Outputs:
  + avg_costs     = 106.25
  + max_cost      = 200
  + positive_cost = [
      + 50,
      + 100,
      + 75,
      + 200,
    ]
  + total_costs   = 425

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
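
&lt;p&gt;The heading also mentions min, which the example above doesn't use; it works the same way as max (this snippet is a sketch, not part of the original walkthrough):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  costs    = [50, 100, 75, 200]
  min_cost = min(local.costs...)  # the ... expands the list into arguments; yields 50
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;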



&lt;h2&gt;
  
  
  File Handling Functions:
&lt;/h2&gt;

&lt;p&gt;Terraform provides file handling functions that let us read, decode, and use data from files directly in our configurations.&lt;/p&gt;

&lt;p&gt;This is especially useful when working with JSON configuration files or other structured data.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;locals {
  config_file_exists = fileexists("./config.json")
  config_data        = local.config_file_exists ? jsondecode(file("./config.json")) : {}
}

output "config" {
  value = local.config_data
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Breaking it down:&lt;br&gt;
fileexists("./config.json")&lt;br&gt;
Checks if the file exists. Returns true or false.&lt;/p&gt;

&lt;p&gt;file("config.json")&lt;br&gt;
Reads the contents of the file.&lt;/p&gt;

&lt;p&gt;jsondecode(...)&lt;br&gt;
Converts the JSON text into a Terraform map or object.&lt;/p&gt;

&lt;p&gt;Conditional operator (? :)&lt;br&gt;
If the file exists → decode it&lt;br&gt;
If not → return an empty map {}&lt;br&gt;
&lt;/p&gt;
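
&lt;p&gt;For reference, this example assumes a config.json whose contents match the decoded structure in the plan output (reconstructed here, not taken from the original post):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "api": {
    "endpoint": "https://api.example.com",
    "timeout": "30"
  },
  "database": {
    "host": "db.example.com",
    "password": "super-secret",
    "port": "5432",
    "username": "admin"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;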

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan

Changes to Outputs:
  + config = {
      + api      = {
          + endpoint = "https://api.example.com"
          + timeout  = "30"
        }
      + database = {
          + host     = "db.example.com"
          + password = "super-secret"
          + port     = "5432"
          + username = "admin"
        }
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;p&gt;And with this, Day 12 of our 30-day Terraform journey by Piyush Sachdev comes to an end.&lt;br&gt;
Today we covered the remaining Terraform functions; everything we couldn’t finish in Part 1 finally found its place here in Part 2.&lt;/p&gt;

&lt;p&gt;We explored validations, sensitive values, type conversions, numeric functions, timestamps and file handling.&lt;/p&gt;

&lt;p&gt;Below is the YouTube video for reference: &lt;br&gt;


  &lt;iframe src="https://www.youtube.com/embed/ZYCCu9rZkU8"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>aws</category>
    </item>
  </channel>
</rss>
