<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Hemant Patil</title>
    <description>The latest articles on Forem by Hemant Patil (@hemantpatil).</description>
    <link>https://forem.com/hemantpatil</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3814700%2F29337264-e26f-4872-84e0-3b45476f7de6.jpg</url>
      <title>Forem: Hemant Patil</title>
      <link>https://forem.com/hemantpatil</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/hemantpatil"/>
    <language>en</language>
    <item>
      <title>Why are Containers so much lighter than VMs?</title>
      <dc:creator>Hemant Patil</dc:creator>
      <pubDate>Mon, 04 May 2026 14:04:46 +0000</pubDate>
      <link>https://forem.com/hemantpatil/why-are-containers-so-much-lighter-than-vms-103b</link>
      <guid>https://forem.com/hemantpatil/why-are-containers-so-much-lighter-than-vms-103b</guid>
      <description>&lt;p&gt;While learning about Docker, I wanted to understand exactly why containers are so much lighter than Virtual Machines (VMs).&lt;/p&gt;

&lt;p&gt;If you don't know what a VM is, it’s basically like a computer inside a computer. You have your physical hardware and your main Operating System (OS). To run another computer inside it, you use a Hypervisor.&lt;/p&gt;

&lt;p&gt;The problem is that a VM is heavy. It has its own virtualized hardware, its own software stack, and its own full OS. This is why a VM takes a long time to boot and uses a lot of hardware resources.&lt;/p&gt;

&lt;p&gt;Docker containers, on the other hand, are light and fast. Here is why:&lt;/p&gt;

&lt;p&gt;Sharing the Kernel: Containers don't need their own OS. They use the host kernel (the brain of the OS already running on your laptop), which makes them much smaller.&lt;/p&gt;
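
&lt;p&gt;You can see this kernel machinery on any Linux machine without Docker. A quick sketch (the paths are Linux-specific; a container is, roughly, a process given its own private copies of these namespaces):&lt;/p&gt;

```shell
# Every Linux process already runs inside kernel namespaces;
# a container is "just" a process with its own set of them.
ls /proc/self/ns
```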

&lt;p&gt;Images are Read-Only: A Docker image is just a read-only blueprint. When you start a container, you are just adding a thin, writable layer (a wrapper) on top of that image.&lt;/p&gt;

&lt;p&gt;Smart Space Usage: If you have an image that is 500MB and you create 10 containers from it, they don't take up 5GB. They still only use 500MB because they all share that same base image.&lt;/p&gt;

&lt;p&gt;Because the container layer is the only writable part, it is temporary. This is exactly where Volumes come in. We use volumes to store important things like source code or databases so the data stays safe even if the container is deleted.&lt;/p&gt;

&lt;p&gt;Understanding this difference helps you see why we use containers to move fast and save space.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>beginners</category>
      <category>devops</category>
      <category>docker</category>
    </item>
    <item>
      <title>Bind Volumes vs. Named Volumes: Which one do you need?</title>
      <dc:creator>Hemant Patil</dc:creator>
      <pubDate>Sun, 03 May 2026 16:41:00 +0000</pubDate>
      <link>https://forem.com/hemantpatil/bind-volumes-vs-named-volumes-which-one-do-you-need-4o0</link>
      <guid>https://forem.com/hemantpatil/bind-volumes-vs-named-volumes-which-one-do-you-need-4o0</guid>
      <description>&lt;p&gt;If you are working with Docker, you need to know which type of volume to use for your application. There are two main types: Bind Volumes and Named Volumes.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Bind Volumes (The Two-Way Mirror)
Imagine you are building a web app and constantly adding new features. Instead of rebuilding the image every time you update your code, you use a Bind Volume: you attach a folder on your laptop directly to a folder inside the container. If you change the code on your laptop, it automatically reflects inside the container.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# 1. Create a folder on your laptop
mkdir -p $(pwd)/laptop-dir

# 2. Run a container and "bind" that specific folder
docker run -d --name bind-test -v $(pwd)/laptop-dir:/data alpine sh -c "echo 'Data from Mac' &amp;gt; /data/hello.txt &amp;amp;&amp;amp; sleep 1000"
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Here:&lt;br&gt;
-v: mounts the laptop folder into the container (the volume flag).&lt;br&gt;
/data: the folder inside the container.&lt;br&gt;
sh -c: starts a shell inside the container to run our commands.&lt;br&gt;
sleep 1000: a container only runs as long as its main process is alive, so we sleep to keep it running long enough to see the file on our laptop.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Named Volumes (The Persistent Vault)
For Named Volumes, the Docker engine creates storage internally. You use this for important things like production databases. If a container is deleted, the data usually goes with it—but not if you use a Named Volume. You can simply attach that same storage to a new container.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# 1. Create the volume
docker volume create my-sre-vault

# 2. Use it in a container
docker run -d --name temp-db -v my-sre-vault:/data alpine sh -c "echo 'Hemant was here' &amp;gt; /data/file.txt &amp;amp;&amp;amp; sleep 1000"

# 3. Delete the container
docker rm -f temp-db

# 4. Prove the data is still there in a new container
docker run --rm -v my-sre-vault:/data alpine cat /data/file.txt
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Even after deleting the container, we can still get the data. That is the beauty of a Named Volume—it saves your data even if the container is gone.&lt;/p&gt;

&lt;p&gt;The Summary:&lt;br&gt;
Use Bind Volumes for things like source code or files that change often during development.&lt;br&gt;
Use Named Volumes for private, stable things like databases where data persistence is a must.&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>devops</category>
      <category>docker</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Docker network mystery</title>
      <dc:creator>Hemant Patil</dc:creator>
      <pubDate>Fri, 01 May 2026 17:01:48 +0000</pubDate>
      <link>https://forem.com/hemantpatil/docker-network-mystery-5866</link>
      <guid>https://forem.com/hemantpatil/docker-network-mystery-5866</guid>
      <description>&lt;p&gt;When I was learning Docker, I ran this command to create a container:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -d --name my-first-container -p 8080:80 nginx
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;I knew -d meant the process runs in the background and -p maps a laptop port to a container port. But I had to ask: why do we actually need to do this?&lt;/p&gt;

&lt;p&gt;The answer forced me to understand networking.&lt;/p&gt;

&lt;p&gt;The Problem: Isolation&lt;br&gt;
Docker is a thin wrapper around Linux kernel features: namespaces and cgroups. By default, containers are isolated. They have their own IP addresses, but your laptop's network doesn't know about them. Without a connection between your laptop and the container, you cannot open that container in your browser.&lt;/p&gt;

&lt;p&gt;The Solution: The Middleman&lt;br&gt;
You install a Docker engine (like Docker Desktop or OrbStack) on your laptop. That engine acts as the middleman between your laptop and the container. This is why you map ports. For example, nginx listens on port 80 inside the container, and you pick port 8080 on your laptop. By writing 8080:80, you are saying: if any request comes to laptop port 8080, forward it to container port 80. This is the concept of a Bridge Network.&lt;/p&gt;
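
&lt;p&gt;As a sketch (this assumes a running Docker engine; the container names are just examples):&lt;/p&gt;

```shell
# Bridge network (default): laptop port 8080 forwards to container port 80
docker run -d --name web-bridge -p 8080:80 nginx

# Ask the engine which ports are mapped
docker port web-bridge

# Host network (Linux only): the container binds the host's port 80 directly
docker run -d --name web-host --network host nginx
```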

&lt;p&gt;Three Concepts I've Mastered:&lt;/p&gt;

&lt;p&gt;Bridge Network: The default way to connect your laptop to a container using port mapping.&lt;/p&gt;

&lt;p&gt;Host Network: no port mapping is needed, because the container uses the host's network stack directly.&lt;/p&gt;

&lt;p&gt;Overlay Network: what if 10 different containers on 10 different machines need to work as if they were on the same network? An overlay creates a tunnel on top of all the machines, so every container behaves as if it were running on the same host.&lt;/p&gt;

&lt;p&gt;The Overlay concept is heavily used in Kubernetes. Understanding these basics is how I'm moving toward Mastery in SRE and Platform Engineering! 🚀&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>devops</category>
      <category>docker</category>
      <category>networking</category>
    </item>
    <item>
      <title>The SRE Handshake: Securing GitHub Actions with OIDC and Terraform Remote State</title>
      <dc:creator>Hemant Patil</dc:creator>
      <pubDate>Sun, 22 Mar 2026 11:47:38 +0000</pubDate>
      <link>https://forem.com/hemantpatil/the-sre-handshake-securing-github-actions-with-oidc-and-terraform-remote-state-44jp</link>
      <guid>https://forem.com/hemantpatil/the-sre-handshake-securing-github-actions-with-oidc-and-terraform-remote-state-44jp</guid>
      <description>&lt;p&gt;With this project, I’m creating AWS resources—specifically EC2 instances—using GitHub Actions. That’s the core of it, but I’m using advanced methodologies to get it done.&lt;/p&gt;

&lt;p&gt;Normally, people just hardcode the AWS Access Key ID and Secret Access Key. This is not a good practice! If someone gets these values, your resources are compromised. To avoid this issue, I used AWS OpenID Connect (OIDC). With this, there’s no need to add hardcoded AWS secrets to GitHub Actions as secret variables. It’s a much more secure "handshake" between GitHub and AWS.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_version = "~&amp;gt; 1.14" // version of Terraform installed on my Mac

  backend "s3" {
    bucket         = "terraform-state-hemantpatil-123456789"
    key            = "sre-project/terraform.tfstate"
    region         = "ap-south-1"
    dynamodb_table = "terraform_state_lock"
    encrypt        = true
  }

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&amp;gt; 6.0" // latest major version of the AWS provider
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Those who already know Terraform will follow this easily, but I will explain it once more. When we want to create cloud resources, we need to pin two versions: the Terraform version we downloaded on our laptop, and the version of the AWS provider that Terraform should download.&lt;/p&gt;

&lt;p&gt;In this code, the main and special thing I did was storing the state file in an S3 bucket with a DynamoDB table locking mechanism. First, let’s understand what a state file is. The state file in Terraform is a real-time view of your infrastructure: it records, in JSON, everything you have already created in the cloud, and it is updated whenever you move or change a resource. Most of the time, we store the state file on our local machine (laptop), but that is not good practice: if you lose the state file, you lose control of the cloud resources you created. So it is good practice to store it in an S3 bucket.&lt;/p&gt;

&lt;p&gt;I also used DynamoDB table locking. The problem it solves is this: imagine two engineers, A and B, run “terraform apply” at the same time. Without a lock, both could write to the state at once and corrupt it. With locking, Terraform writes a LockID entry to the table before applying, so only one apply can run at a time.&lt;/p&gt;

&lt;p&gt;resource "aws_s3_bucket" "terraform_state" {&lt;br&gt;
    bucket = "terraform-state-hemantpatil-123456789" //name of the s3 bucket where i want to store my terraform state file&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;lifecycle {
  prevent_destroy = true 
}

tags = {
  name = "Terraform state bucket"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;}&lt;/p&gt;

&lt;p&gt;resource "aws_s3_bucket_versioning" "versioning" {&lt;br&gt;
  bucket = aws_s3_bucket.terraform_state.id&lt;br&gt;
  versioning_configuration {&lt;br&gt;
    status = "Enabled"&lt;br&gt;
  }&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;resource "aws_dynamodb_table" "terraform_lock" {&lt;br&gt;
    name = "terraform_state_lock"&lt;br&gt;
    billing_mode = "PAY_PER_REQUEST"&lt;br&gt;
    hash_key = "LockID"&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;attribute {
    name = "LockID"
    type = "S"
}

tags = {
  name = "Terraform state lock table"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;}&lt;/p&gt;

&lt;p&gt;This is the code for creating the S3 bucket and the DynamoDB table. For the S3 bucket, I added the lifecycle attribute prevent_destroy = true because the state stored there is too important to lose; this ensures no one can accidentally destroy that resource. For backup, I enabled versioning on the bucket so every change to the state is kept, and in the DynamoDB table I use LockID as the hash key.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# 1. THE SCANNER: Go to the internet and get GitHub's current security certificate.
data "tls_certificate" "github" {
  url = "https://token.actions.githubusercontent.com/.well-known/openid-configuration"
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# 2. THE TRUST CENTER: Tell AWS IAM to recognize GitHub as a valid login source.
resource "aws_iam_openid_connect_provider" "github" {
  url            = "https://token.actions.githubusercontent.com"
  client_id_list = ["sts.amazonaws.com"] # the standard "audience" for AWS

  # Use the thumbprint we just fetched automatically
  thumbprint_list = [data.tls_certificate.github.certificates[0].sha1_fingerprint]
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# 3. THE UNIFORM: Create the role that GitHub will "assume".
resource "aws_iam_role" "github_actions_role" {
  name = "GitHubActionsTerraformRole"

  # THE TRUST POLICY: the logic that checks the "ID card" at the gate
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRoleWithWebIdentity" # allows login via GitHub token
        Effect = "Allow"
        Principal = {
          Federated = aws_iam_openid_connect_provider.github.arn
        }
        Condition = {
          StringLike = {
            # THE LOCK: only workflows from this repo can assume the role
            "token.actions.githubusercontent.com:sub" : "repo:Hemantp1234/sre-project1:*"
          }
        }
      }
    ]
  })
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# 4. THE PERMISSIONS: Give this role the "master keys" (admin access).
resource "aws_iam_role_policy_attachment" "admin_access" {
  role       = aws_iam_role.github_actions_role.name
  policy_arn = "arn:aws:iam::aws:policy/AdministratorAccess"
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;This code is huge and looks scary, but don’t worry, I will explain each portion. Let’s begin.&lt;/p&gt;

&lt;p&gt;Step 1) Getting the TLS certificate. Every secure web app has TLS (the successor to SSL). To authenticate GitHub, AWS needs GitHub’s current TLS certificate; in the next step, we will see why.&lt;/p&gt;

&lt;p&gt;Step 2) Getting the thumbprint (the hash of that certificate). Here we are connecting GitHub and AWS so that GitHub Actions can create cloud resources on AWS. For that, AWS and GitHub Actions authenticate through the aws_iam_openid_connect_provider resource. We register the hash of GitHub’s TLS certificate with AWS, so there is no need to authenticate manually every time.&lt;/p&gt;

&lt;p&gt;Step 3) Creating the role. We create a role for GitHub to assume. In that role, I restrict access so that only my GitHub account with this username and repo can assume it. I’m giving “AdministratorAccess” to this role because only then can that repo’s code create the resources in the cloud.&lt;/p&gt;
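
&lt;p&gt;The Terraform above covers the AWS side of the handshake. For completeness, here is a minimal sketch of the GitHub Actions side (the account ID and workflow layout are illustrative, not from this project; the role name matches the Terraform above):&lt;/p&gt;

```yaml
name: terraform-deploy

permissions:
  id-token: write   # lets the job request an OIDC token from GitHub
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Exchange the OIDC token for temporary AWS credentials (no stored secrets)
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/GitHubActionsTerraformRole
          aws-region: ap-south-1

      - run: terraform init
      - run: terraform apply -auto-approve
```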

&lt;p&gt;resource "aws_key_pair" "deployer" {&lt;br&gt;
  key_name   = "sre-project-key"&lt;br&gt;
  public_key = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA5b4UiDxZlMZt+xFyvfUfpWnx8jSwqmvJeXwoRLddPW &lt;a href="mailto:hemantpatil@Hemants-MacBook-Air.local"&gt;hemantpatil@Hemants-MacBook-Air.local&lt;/a&gt;"&lt;br&gt;
}&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# 1. First, tell Terraform to find your default VPC
data "aws_vpc" "default" {
  default = true
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# 2. Define the security group
resource "aws_security_group" "allow_ssh" {
  name        = "allow_ssh_access"
  description = "Allow SSH inbound traffic"
  vpc_id      = data.aws_vpc.default.id # links it to the default VPC

  ingress {
    description = "SSH from anywhere"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "allow_ssh"
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;With this code, I prepare SSH access for the EC2 instance. To get into an EC2 instance over the SSH protocol, we need a key pair, meaning both a public key and a private key.&lt;/p&gt;

&lt;p&gt;When we create these keys in the AWS Console, we download the private key to our machine. However, if you want to create EC2 instances with the help of Terraform, first you need to create both public and private keys on your laptop. After that, you need to send the public key to AWS using the aws_key_pair resource. Remember, the private key always stays on your laptop. In the code, I am sending the public key to AWS so it can authenticate with the private key on my machine.&lt;/p&gt;

&lt;p&gt;In this code, I’m also using the Default VPC. I am allowing traffic from the internet for both ingress and egress. Here, Ingress means network traffic coming into the EC2 instance from the outside, and Egress is the opposite (traffic going out from the instance). I am allowing everyone, which is why I am choosing the CIDR block as ["0.0.0.0/0"].&lt;/p&gt;

&lt;p&gt;data "aws_ami" "amazon_linux_2023" {&lt;br&gt;
  most_recent = true&lt;br&gt;
  owners      = ["amazon"]&lt;/p&gt;

&lt;p&gt;filter {&lt;br&gt;
    name   = "name"&lt;br&gt;
    values = ["al2023-ami-2023*-kernel-6.1-x86_64"]&lt;br&gt;
  }&lt;br&gt;
}&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# 2. Launch the EC2 instance
resource "aws_instance" "sre_server" {
  ami           = data.aws_ami.amazon_linux_2023.id
  instance_type = "t3.micro" # free-tier eligible

  # Attach the SSH key
  key_name = aws_key_pair.deployer.key_name

  # Attach the security group
  vpc_security_group_ids = [aws_security_group.allow_ssh.id]

  # Tagging is essential for SREs to track resources
  tags = {
    Name        = "SRE-Project-Server"
    Environment = "Dev"
    ManagedBy   = "Terraform"
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# 3. Output the public IP so you can SSH into it later
output "instance_public_ip" {
  value = aws_instance.sre_server.public_ip
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;In this code, I’m launching a fresh EC2 instance from AWS. I added conditions so that only the latest Amazon Linux 2023 image is used. After that, I attach my key pair and the security group from the default VPC. Finally, I added an output so I can see the public IP address in my terminal as soon as the instance is ready.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>githubactions</category>
      <category>security</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Automating EC2 instance creation with the “data” keyword in Terraform</title>
      <dc:creator>Hemant Patil</dc:creator>
      <pubDate>Sat, 14 Mar 2026 11:43:28 +0000</pubDate>
      <link>https://forem.com/hemantpatil/automation-of-creating-an-ec2-instances-with-the-help-of-data-key-word-in-terraform-nlg</link>
      <guid>https://forem.com/hemantpatil/automation-of-creating-an-ec2-instances-with-the-help-of-data-key-word-in-terraform-nlg</guid>
      <description>&lt;p&gt;We all know that with Terraform we can create infrastructure on different cloud providers (AWS, Azure, GCP). For that, we first need to authenticate Terraform with the cloud provider. For AWS, we run the "aws configure" command, which asks for a set of values: the Access Key ID, the Secret Access Key, the region, and the output type (usually json). After that, you can confirm it with the "aws sts get-caller-identity" command, which shows the details of your account.&lt;/p&gt;

&lt;p&gt;Suppose we want to create an EC2 instance with the Ubuntu OS. Then we need to focus on two main things: the instance type (size) and the AMI (Amazon Machine Image).&lt;/p&gt;

&lt;p&gt;Today I will explain how we can create an EC2 instance with hardcoded values, and without them.&lt;/p&gt;

&lt;p&gt;When we create it with hardcoded values, we need to write every value of the EC2 instance ourselves. For example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_instance" "web_server" {
  ami           = "ami-0123456789abcdef0" # a hardcoded AMI ID (placeholder)
  instance_type = "t3.micro"              # the type of instance you need
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;The problem with this method is that we have to look up and write values like the AMI ID and instance type manually, so it is not automation.&lt;/p&gt;

&lt;p&gt;There is another method that lets us create an EC2 instance with few or no hardcoded values. Here we use the "data" keyword, which fetches information from AWS, so there is no need for hardcoded values.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;p&gt;data "aws_ami" "latest_al2023" {&lt;br&gt;
 most_recent = true&lt;br&gt;
 owners   = ["amazon"]&lt;/p&gt;

&lt;p&gt;filter {&lt;br&gt;
  name  = "name"&lt;br&gt;
  values = ["al2023-ami-2023.*-x86_64"]&lt;br&gt;
 }&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;Here we are fetching the most recent AMI info from AWS. We use a filter to describe which kind of image we need: in this case, I’m asking AWS for the latest Amazon Linux 2023 image for the x86_64 architecture. Later, we can use this fetched info in the resource block that creates the EC2 instance.&lt;/p&gt;
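
&lt;p&gt;Under the hood, this data block is roughly equivalent to asking AWS for all matching images and picking the newest one. A sketch with the AWS CLI (assumes configured credentials):&lt;/p&gt;

```shell
# List AMIs matching the same name pattern and keep the newest one
aws ec2 describe-images \
  --owners amazon \
  --filters "Name=name,Values=al2023-ami-2023.*-x86_64" \
  --query "sort_by(Images, &amp;amp;CreationDate)[-1].ImageId" \
  --output text
```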

&lt;p&gt;For example:&lt;/p&gt;

&lt;p&gt;resource "aws_instance" "web_server" {&lt;br&gt;
 ami          =  This value is fetched from keyword "data"&lt;br&gt;
 instance_type     = "t3.micro"&lt;br&gt;
tags = {&lt;br&gt;
  Name    = "SRE-Automated-Web"&lt;br&gt;
  ManagedBy  = "Terraform"&lt;br&gt;
 }&lt;br&gt;
}&lt;br&gt;
Why I like this: By using the data keyword, Terraform "queries" AWS for the most recent image. It makes our infrastructure more reliable and saves us from searching for IDs in the console.&lt;/p&gt;

</description>
      <category>automation</category>
      <category>aws</category>
      <category>devops</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Permission bits</title>
      <dc:creator>Hemant Patil</dc:creator>
      <pubDate>Tue, 10 Mar 2026 01:39:24 +0000</pubDate>
      <link>https://forem.com/hemantpatil/permission-bits-35pi</link>
      <guid>https://forem.com/hemantpatil/permission-bits-35pi</guid>
      <description>&lt;p&gt;In Linux, every file and directory has a set of permissions.&lt;/p&gt;

&lt;p&gt;The structure is a 10-character string, broken down into 4 parts.&lt;/p&gt;

&lt;p&gt;Character 1: the type. - for a normal file, d for a directory, l for a symbolic link.&lt;/p&gt;

&lt;p&gt;Characters 2 to 4: permissions for the user (owner).&lt;/p&gt;

&lt;p&gt;Characters 5 to 7: permissions for the group.&lt;/p&gt;

&lt;p&gt;Characters 8 to 10: permissions for others.&lt;br&gt;
The basic permissions are read (r), write (w), and execute (x).&lt;/p&gt;

&lt;p&gt;The kernel sees these as binary bits. Binary is too hard for humans to read, which is why we use octal numbers to represent the permissions.&lt;/p&gt;

&lt;p&gt;rwx = 111 = 4+2+1 = 7&lt;br&gt;
rw- = 110 = 4+2+0 = 6&lt;br&gt;
r-x = 101 = 4+0+1 = 5&lt;br&gt;
r-- = 100 = 4+0+0 = 4&lt;/p&gt;

&lt;p&gt;Example: if file1 has permission 755, then&lt;br&gt;
7 (owner): read, write, execute&lt;br&gt;
5 (group): read, execute&lt;br&gt;
5 (others): read, execute&lt;/p&gt;

&lt;p&gt;Managing permissions:&lt;br&gt;
chmod: changes a file's permission bits.&lt;br&gt;
umask: controls the default permissions of newly created files.&lt;/p&gt;
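
&lt;p&gt;A quick umask sketch (Linux, GNU coreutils): regular files are created with mode 666, minus whatever bits the umask removes:&lt;/p&gt;

```shell
# With umask 022, new files get 666 with the group/other write bits cleared = 644
d=$(mktemp -d)
(
  umask 022
  touch "$d/report.txt"
)
stat -c '%a' "$d/report.txt"   # 644
rm -r "$d"
```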

</description>
      <category>beginners</category>
      <category>linux</category>
      <category>security</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
