<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Hemant Patil</title>
    <description>The latest articles on Forem by Hemant Patil (@hemantpatil).</description>
    <link>https://forem.com/hemantpatil</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3814700%2F29337264-e26f-4872-84e0-3b45476f7de6.jpg</url>
      <title>Forem: Hemant Patil</title>
      <link>https://forem.com/hemantpatil</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/hemantpatil"/>
    <language>en</language>
    <item>
      <title>The SRE Handshake: Securing GitHub Actions with OIDC and Terraform Remote State</title>
      <dc:creator>Hemant Patil</dc:creator>
      <pubDate>Sun, 22 Mar 2026 11:47:38 +0000</pubDate>
      <link>https://forem.com/hemantpatil/the-sre-handshake-securing-github-actions-with-oidc-and-terraform-remote-state-44jp</link>
      <guid>https://forem.com/hemantpatil/the-sre-handshake-securing-github-actions-with-oidc-and-terraform-remote-state-44jp</guid>
      <description>&lt;p&gt;With this project, I’m creating AWS resources—specifically EC2 instances—using GitHub Actions. That’s the core of it, but I’m using advanced methodologies to get it done.&lt;/p&gt;

&lt;p&gt;Normally, people just hardcode the AWS Access Key ID and Secret Access Key. This is not good practice: if someone gets these values, your resources are compromised. To avoid this issue, I used AWS OpenID Connect (OIDC). With OIDC, there is no need to store long-lived AWS secrets in GitHub Actions as secret variables. It is a much more secure "handshake" between GitHub and AWS.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_version = "~&amp;gt; 1.14" // version of Terraform installed on my Mac

  backend "s3" {
    bucket         = "terraform-state-hemantpatil-123456789"
    key            = "sre-project/terraform.tfstate"
    region         = "ap-south-1"
    dynamodb_table = "terraform_state_lock"
    encrypt        = true
  }

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&amp;gt; 6.0" // latest major version of the AWS provider
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Those who already know Terraform will follow this easily, but I will explain it once again. When we want to create cloud resources, we need to pin two versions: the Terraform version installed on our laptop, and the version of the AWS provider that Terraform should download.&lt;/p&gt;

&lt;p&gt;The main and special thing I did in this code was storing the state file in an S3 bucket, with a DynamoDB table as a locking mechanism. First, let's understand what a state file is. The state file is Terraform's real-time view of your infrastructure: it records, in JSON format, what you have already created in the cloud, and if you change or move a resource, it notes that down. Most of the time we store the state file on our local machine (laptop), but that is not good practice: if you lose the state file, you lose control of the cloud resources you have created. So it is good practice to store it in an S3 bucket.&lt;/p&gt;

&lt;p&gt;I also used DynamoDB table locking. This locking system ensures that only one "terraform apply" can run against the state at a time. The problem it solves: imagine two engineers, A and B, run "terraform apply" at the same moment; with the lock, only one of them can proceed. Terraform writes a LockID item to the table for the running operation, so there are no conflicting concurrent writes to the state.&lt;/p&gt;

&lt;p&gt;resource "aws_s3_bucket" "terraform_state" {&lt;br&gt;
    bucket = "terraform-state-hemantpatil-123456789" //name of the s3 bucket where i want to store my terraform state file&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;lifecycle {
  prevent_destroy = true 
}

tags = {
  name = "Terraform state bucket"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;}&lt;/p&gt;

&lt;p&gt;resource "aws_s3_bucket_versioning" "versioning" {&lt;br&gt;
  bucket = aws_s3_bucket.terraform_state.id&lt;br&gt;
  versioning_configuration {&lt;br&gt;
    status = "Enabled"&lt;br&gt;
  }&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;resource "aws_dynamodb_table" "terraform_lock" {&lt;br&gt;
    name = "terraform_state_lock"&lt;br&gt;
    billing_mode = "PAY_PER_REQUEST"&lt;br&gt;
    hash_key = "LockID"&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;attribute {
    name = "LockID"
    type = "S"
}

tags = {
  name = "Terraform state lock table"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;}&lt;/p&gt;

&lt;p&gt;This is the code for creating the S3 bucket and the DynamoDB table. For the S3 bucket in particular, I added the lifecycle attribute prevent_destroy = true: the state stored there is too important, and this ensures no one can accidentally destroy the bucket. For backup, I also enabled versioning on the S3 bucket, so every change to the state is kept as a new version. In the DynamoDB table, I use LockID as the hash key.&lt;/p&gt;

&lt;h1&gt;
  
  
  1. THE SCANNER: Go to the internet and get GitHub's current security certificate.
&lt;/h1&gt;

&lt;p&gt;data "tls_certificate" "github" {&lt;br&gt;
  url = "&lt;a href="https://token.actions.githubusercontent.com/.well-known/openid-configuration" rel="noopener noreferrer"&gt;https://token.actions.githubusercontent.com/.well-known/openid-configuration&lt;/a&gt;"&lt;br&gt;
}&lt;/p&gt;

&lt;h1&gt;
  
  
  2. THE TRUST CENTER: Tell AWS IAM to recognize GitHub as a valid login source.
&lt;/h1&gt;

&lt;p&gt;resource "aws_iam_openid_connect_provider" "github" {&lt;br&gt;
  url             = "&lt;a href="https://token.actions.githubusercontent.com" rel="noopener noreferrer"&gt;https://token.actions.githubusercontent.com&lt;/a&gt;"&lt;br&gt;
  client_id_list  = ["sts.amazonaws.com"] # The standard "Audience" for AWS&lt;/p&gt;

&lt;p&gt;# Use the thumbprint we just fetched automatically&lt;br&gt;
  thumbprint_list = [data.tls_certificate.github.certificates[0].sha1_fingerprint]&lt;br&gt;
}&lt;/p&gt;

&lt;h1&gt;
  
  
  3. THE UNIFORM: Create the Role that GitHub will "assume"
&lt;/h1&gt;

&lt;p&gt;resource "aws_iam_role" "github_actions_role" {&lt;br&gt;
  name = "GitHubActionsTerraformRole"&lt;/p&gt;

&lt;p&gt;# 2. THE TRUST POLICY: The logic that checks the "ID Card" at the gate&lt;br&gt;
  assume_role_policy = jsonencode({&lt;br&gt;
    Version = "2012-10-17"&lt;br&gt;
    Statement = [&lt;br&gt;
      {&lt;br&gt;
        Action = "sts:AssumeRoleWithWebIdentity" # Allows login via GitHub Token&lt;br&gt;
        Effect = "Allow"&lt;br&gt;
        Principal = {&lt;br&gt;
          Federated = aws_iam_openid_connect_provider.github.arn&lt;br&gt;
        }&lt;br&gt;
        Condition = {&lt;br&gt;
          StringLike = {&lt;br&gt;
            # THE LOCK: Replace 'YOUR_USERNAME' with your actual GitHub handle&lt;br&gt;
            "token.actions.githubusercontent.com:sub": "repo:Hemantp1234/sre-project1:*"&lt;br&gt;
          }&lt;br&gt;
        }&lt;br&gt;
      }&lt;br&gt;
    ]&lt;br&gt;
  })&lt;br&gt;
}&lt;/p&gt;

&lt;h1&gt;
  
  
  4. THE PERMISSIONS: Give this role the "Master Keys" (Admin Access)
&lt;/h1&gt;

&lt;p&gt;resource "aws_iam_role_policy_attachment" "admin_access" {&lt;br&gt;
  role       = aws_iam_role.github_actions_role.name&lt;br&gt;
  policy_arn = "arn:aws:iam::aws:policy/AdministratorAccess"&lt;br&gt;
}&lt;br&gt;
 &lt;br&gt;
This code is huge and it looks scary but don’t worry I will explain why portion , let’s begin.  Step 1) Getting the TLS certificate Every secure web app has TLS (the supreme version of an SSL certificate). In order to authenticate with GitHub, we need to get its newer TLS certificate. In the next step, we will understand why we need that.&lt;/p&gt;

&lt;p&gt;Step 2) Getting the thumbprint (hash value) of that certificate. Here we are connecting GitHub and AWS so that GitHub Actions can create cloud resources on AWS. For that, we establish trust between AWS and GitHub Actions using the aws_iam_openid_connect_provider resource. We register the hash of GitHub's TLS certificate, so there is no need to authenticate manually every time.&lt;/p&gt;

&lt;p&gt;Step 3) Creating a role for GitHub to assume. In the role's trust policy, I restrict access so that only workflows from my GitHub account and this specific repo can assume it. I attach "AdministratorAccess" to the role so the pipeline can create any resource in the cloud; for a real production setup you would scope this down to only the permissions the pipeline needs.&lt;/p&gt;
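The GitHub Actions side of this handshake is the workflow that requests the OIDC token and assumes the role. A minimal sketch, assuming a workflow file such as .github/workflows/terraform.yml and a placeholder AWS account ID (111122223333); the Terraform steps are illustrative, not the exact pipeline from this project:

```yaml
# Hypothetical workflow file: .github/workflows/terraform.yml
name: terraform
on: push

# OIDC: the job must be allowed to request an ID token
permissions:
  id-token: write
  contents: read

jobs:
  apply:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      # Exchange the GitHub OIDC token for temporary AWS credentials;
      # no access keys are stored anywhere. 111122223333 is a placeholder.
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::111122223333:role/GitHubActionsTerraformRole
          aws-region: ap-south-1
      - run: terraform init
      - run: terraform apply -auto-approve
```

The key line is permissions: id-token: write; without it, the configure-aws-credentials step cannot request a token and the assume-role call fails.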

&lt;p&gt;resource "aws_key_pair" "deployer" {&lt;br&gt;
  key_name   = "sre-project-key"&lt;br&gt;
  public_key = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA5b4UiDxZlMZt+xFyvfUfpWnx8jSwqmvJeXwoRLddPW &lt;a href="mailto:hemantpatil@Hemants-MacBook-Air.local"&gt;hemantpatil@Hemants-MacBook-Air.local&lt;/a&gt;"&lt;br&gt;
}&lt;/p&gt;

&lt;h1&gt;
  
  
  1. First, tell Terraform to find your Default VPC
&lt;/h1&gt;

&lt;p&gt;data "aws_vpc" "default" {&lt;br&gt;
  default = true&lt;br&gt;
}&lt;/p&gt;

&lt;h1&gt;
  
  
  2. Define the Security Group (The resource that was missing!)
&lt;/h1&gt;

&lt;p&gt;resource "aws_security_group" "allow_ssh" {&lt;br&gt;
  name        = "allow_ssh_access"&lt;br&gt;
  description = "Allow SSH inbound traffic"&lt;br&gt;
  vpc_id      = data.aws_vpc.default.id # This links it to the Default VPC&lt;/p&gt;

&lt;p&gt;ingress {&lt;br&gt;
    description = "SSH from anywhere"&lt;br&gt;
    from_port   = 22&lt;br&gt;
    to_port     = 22&lt;br&gt;
    protocol    = "tcp"&lt;br&gt;
    cidr_blocks = ["0.0.0.0/0"] &lt;br&gt;
  }&lt;/p&gt;

&lt;p&gt;egress {&lt;br&gt;
    from_port   = 0&lt;br&gt;
    to_port     = 0&lt;br&gt;
    protocol    = "-1"&lt;br&gt;
    cidr_blocks = ["0.0.0.0/0"]&lt;br&gt;
  }&lt;/p&gt;

&lt;p&gt;tags = {&lt;br&gt;
    Name = "allow_ssh"&lt;br&gt;
  }&lt;br&gt;
}&lt;br&gt;
 &lt;br&gt;
With the help of this code, I want to create an EC2 instance using a key pair. If we want to get into EC2 instances with the SSH protocol, we need to create a key-pair. Here, key-pair means we need both a public key and a private key.&lt;/p&gt;

&lt;p&gt;When we create these keys in the AWS Console, we download the private key to our machine. However, if you want to create EC2 instances with Terraform, you first generate both keys on your laptop and then send the public key to AWS using the aws_key_pair resource. Remember: the private key always stays on your laptop. In the code, I am sending the public key to AWS so it can authenticate against the private key on my machine.&lt;/p&gt;
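Generating such a key pair on the laptop is one ssh-keygen command. A minimal sketch; the output path, empty passphrase, and comment here are illustrative choices, not the project's exact invocation:

```shell
# Generate an ed25519 key pair without prompts:
#   ./sre-project-key      (private key, stays on the laptop)
#   ./sre-project-key.pub  (public key, pasted into aws_key_pair)
ssh-keygen -t ed25519 -N "" -f ./sre-project-key -C "hemantpatil@Hemants-MacBook-Air.local"
```

The contents of sre-project-key.pub are exactly what goes into the public_key argument of aws_key_pair.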

&lt;p&gt;In this code, I'm also using the Default VPC. I allow traffic from the internet for both ingress and egress. Ingress means network traffic coming into the EC2 instance from outside, and egress is the opposite (traffic going out from the instance). I am allowing everyone, which is why the CIDR block is ["0.0.0.0/0"]. (For a demo this is fine; in production you would restrict SSH to known IP ranges.)&lt;/p&gt;

&lt;p&gt;data "aws_ami" "amazon_linux_2023" {&lt;br&gt;
  most_recent = true&lt;br&gt;
  owners      = ["amazon"]&lt;/p&gt;

&lt;p&gt;filter {&lt;br&gt;
    name   = "name"&lt;br&gt;
    values = ["al2023-ami-2023*-kernel-6.1-x86_64"]&lt;br&gt;
  }&lt;br&gt;
}&lt;/p&gt;

&lt;h1&gt;
  
  
  2. Launch the EC2 Instance
&lt;/h1&gt;

&lt;p&gt;resource "aws_instance" "sre_server" {&lt;br&gt;
  ami           = data.aws_ami.amazon_linux_2023.id&lt;br&gt;
  instance_type = "t3.micro" # Free-tier eligible&lt;/p&gt;

&lt;p&gt;# Attach the SSH Key from Task 2.1&lt;br&gt;
  key_name      = aws_key_pair.deployer.key_name&lt;/p&gt;

&lt;p&gt;# Attach the Security Group from Task 2.2&lt;br&gt;
  vpc_security_group_ids = [aws_security_group.allow_ssh.id]&lt;/p&gt;

&lt;p&gt;# Tagging is essential for SREs to track resources&lt;br&gt;
  tags = {&lt;br&gt;
    Name        = "SRE-Project-Server"&lt;br&gt;
    Environment = "Dev"&lt;br&gt;
    ManagedBy   = "Terraform"&lt;br&gt;
  }&lt;br&gt;
}&lt;/p&gt;

&lt;h1&gt;
  
  
  3. Output the Public IP so you can SSH into it later
&lt;/h1&gt;

&lt;p&gt;output "instance_public_ip" {&lt;br&gt;
  value = aws_instance.sre_server.public_ip&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;In this code, I'm getting a fresh EC2 instance from AWS. I added filter conditions so that only the Amazon Linux 2023 image is selected. After that, I attach the key pair and the security group from the default VPC to my EC2 instance. Finally, I added an output so I can see the public IP address in my terminal as soon as the instance is ready.&lt;/p&gt;
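Once the output prints, connecting is a single command. A sketch assuming the ed25519 private key described earlier and Amazon Linux's default ec2-user; the IP below is a documentation placeholder, not a real instance:

```shell
# Replace 203.0.113.10 with the value of the instance_public_ip output
ssh -i ./sre-project-key ec2-user@203.0.113.10
```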

</description>
      <category>aws</category>
      <category>githubactions</category>
      <category>security</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Automation of creating EC2 instances with the help of the “data” keyword in Terraform</title>
      <dc:creator>Hemant Patil</dc:creator>
      <pubDate>Sat, 14 Mar 2026 11:43:28 +0000</pubDate>
      <link>https://forem.com/hemantpatil/automation-of-creating-an-ec2-instances-with-the-help-of-data-key-word-in-terraform-nlg</link>
      <guid>https://forem.com/hemantpatil/automation-of-creating-an-ec2-instances-with-the-help-of-data-key-word-in-terraform-nlg</guid>
      <description>&lt;p&gt;We all know with the help of terraform we can create infra or resources from different cloud providers ( AWS , Azure , GCP ) , for that we need to authenticate terraform and your cloud provider , for example in terms authenticating both terraform and AWS in order to create resource with the help of code ( hcl language ) we need to use " aws configure " command , after that it will ask set of values that include AceesKey id , Security AccessKey id , region and output type ( mainly json ) and after that you need to confirm it by writing the command " aws sts get-caller-identity " it will show values of your account.&lt;/p&gt;

&lt;p&gt;Suppose we want to create an EC2 instance with the Ubuntu OS. Then we need to focus on a few main things: the instance type (size), the AMI (Amazon Machine Image), and so on.&lt;/p&gt;

&lt;p&gt;Today I will explain how we can create an EC2 instance with hardcoded values and without them.&lt;/p&gt;

&lt;p&gt;When we create the instance with hardcoded values, we need to write every value of the EC2 instance ourselves. For example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_instance" "web_server" {
  ami           = "ami-xxxxxxxxxxxxxxxxx" # the AMI ID, copied by hand from the console
  instance_type = "t3.micro"              # the instance type you need
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;The problem with this method is that we write values manually: here the AMI ID and the instance type are typed in by hand, so it is not automation.&lt;/p&gt;

&lt;p&gt;There is another method with which we can easily create an EC2 instance with fewer (or no) hardcoded values. Here we use the "data" keyword; it fetches information from AWS, so there is no need for hardcoded values.&lt;/p&gt;

&lt;p&gt;for example &lt;/p&gt;

&lt;p&gt;data "aws_ami" "latest_al2023" {&lt;br&gt;
 most_recent = true&lt;br&gt;
 owners   = ["amazon"]&lt;/p&gt;

&lt;p&gt;filter {&lt;br&gt;
  name  = "name"&lt;br&gt;
  values = ["al2023-ami-2023.*-x86_64"]&lt;br&gt;
 }&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;Here we are fetching the most recent AMI information from AWS. We use a filter to describe which image we need: in this case I am asking AWS for the latest Amazon Linux 2023 image for the x86_64 architecture. Later we can use this data source in the resource block to create the EC2 instance.&lt;/p&gt;

&lt;p&gt;for example &lt;/p&gt;

&lt;p&gt;resource "aws_instance" "web_server" {&lt;br&gt;
 ami          =  This value is fetched from keyword "data"&lt;br&gt;
 instance_type     = "t3.micro"&lt;br&gt;
tags = {&lt;br&gt;
  Name    = "SRE-Automated-Web"&lt;br&gt;
  ManagedBy  = "Terraform"&lt;br&gt;
 }&lt;br&gt;
}&lt;br&gt;
Why I like this: By using the data keyword, Terraform "queries" AWS for the most recent image. It makes our infrastructure more reliable and saves us from searching for IDs in the console.&lt;/p&gt;

</description>
      <category>automation</category>
      <category>aws</category>
      <category>devops</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Permission bits</title>
      <dc:creator>Hemant Patil</dc:creator>
      <pubDate>Tue, 10 Mar 2026 01:39:24 +0000</pubDate>
      <link>https://forem.com/hemantpatil/permission-bits-35pi</link>
      <guid>https://forem.com/hemantpatil/permission-bits-35pi</guid>
      <description>&lt;p&gt;In linux every file and directory has set of permission.&lt;/p&gt;

&lt;p&gt;The structure contains 10 chracter of strings, that broken down into 4 parts.&lt;/p&gt;

&lt;p&gt;Character 1: the type. - for a normal file, d for a directory, l for a symbolic link.&lt;/p&gt;

&lt;p&gt;Characters 2 to 4: permissions for the user (owner).&lt;/p&gt;

&lt;p&gt;Characters 5 to 7: permissions for the group.&lt;/p&gt;

&lt;p&gt;Characters 8 to 10: permissions for others.&lt;br&gt;
The basic permissions are read (r), write (w), and execute (x).&lt;/p&gt;

&lt;p&gt;The kernel sees these as binary bits. Binary is hard for humans to read, which is why we use octal numbers to represent the permissions.&lt;/p&gt;

&lt;p&gt;rwx = 111 = 4+2+1 = 7&lt;br&gt;
rw- = 110 = 4+2+0 = 6&lt;br&gt;
r-x = 101 = 4+0+1 = 5&lt;br&gt;
r-- = 100 = 4+0+0 = 4&lt;/p&gt;

&lt;p&gt;For example, if file1 has permission 755, then:&lt;br&gt;
7 (owner): read, write, execute&lt;br&gt;
5 (group): read, execute&lt;br&gt;
5 (others): read, execute&lt;/p&gt;

&lt;p&gt;Permissions are managed with two main tools:&lt;br&gt;
chmod, which changes the permissions of an existing file or directory, and&lt;br&gt;
umask, which controls the default permissions of newly created files.&lt;/p&gt;
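A quick sketch of both tools in action (file1 is just an example name):

```shell
# Create a demo file, then set owner=rwx, group=r-x, others=r-x (755)
touch file1
chmod 755 file1

# Symbolic form: remove execute from group and others (file1 is now 744)
chmod go-x file1

# Print the current mask; with the common default of 022,
# new directories get 755 and new regular files get 644
umask
```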

</description>
      <category>beginners</category>
      <category>linux</category>
      <category>security</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
