<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Rohan Nalawade</title>
    <description>The latest articles on Forem by Rohan Nalawade (@rohanan07).</description>
    <link>https://forem.com/rohanan07</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3677076%2Fd00cc0a4-931e-4c71-b7de-cd919df6e7d3.png</url>
      <title>Forem: Rohan Nalawade</title>
      <link>https://forem.com/rohanan07</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/rohanan07"/>
    <language>en</language>
    <item>
      <title>Kubernetes namespaces: concepts &amp; key commands</title>
      <dc:creator>Rohan Nalawade</dc:creator>
      <pubDate>Sat, 17 Jan 2026 13:22:04 +0000</pubDate>
      <link>https://forem.com/rohanan07/kubernetes-namespaces-concepts-key-commands-19hj</link>
      <guid>https://forem.com/rohanan07/kubernetes-namespaces-concepts-key-commands-19hj</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;br&gt;
As part of my Kubernetes learning journey, today I focused on understanding Namespaces — what they are, why they exist, and how to work with them using basic kubectl commands.&lt;br&gt;
I have written down my current understanding of namespaces and the commands I practiced hands-on.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What are Namespaces in Kubernetes?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A namespace in Kubernetes is a logical grouping of resources within a cluster.&lt;br&gt;
Namespaces help organize resources and make it easier to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Separate environments (dev, staging, prod)&lt;/li&gt;
&lt;li&gt;Avoid naming conflicts&lt;/li&gt;
&lt;li&gt;Apply access control and quotas&lt;/li&gt;
&lt;li&gt;Manage large clusters more effectively&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Namespaces are logical, not physical. They do not create separate clusters or nodes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Important Things about Namespaces&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Kubernetes cluster can have multiple namespaces&lt;/li&gt;
&lt;li&gt;Pods run on nodes, not inside namespaces&lt;/li&gt;
&lt;li&gt;A single node can run Pods from multiple namespaces&lt;/li&gt;
&lt;li&gt;Namespaces do not provide isolation by default&lt;/li&gt;
&lt;li&gt;Resources like Pods, Deployments, and Services are namespace-scoped&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Key namespace commands I learned today&lt;/strong&gt;&lt;br&gt;
Below are the core commands I practiced while learning namespaces.&lt;/p&gt;

&lt;p&gt;Get all namespaces in the cluster&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get namespaces
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command lists all namespaces present in the cluster.&lt;/p&gt;

&lt;p&gt;Get Pods from a specific namespace&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -n &amp;lt;namespace-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Displays all Pods running in the specified namespace.&lt;/p&gt;

&lt;p&gt;Create a namespace&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create ns &amp;lt;namespace-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Creates a new namespace with the given name.&lt;/p&gt;

&lt;p&gt;Create a Pod in the default namespace&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl run &amp;lt;pod-name&amp;gt; --image=&amp;lt;image-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Creates a Pod using the specified image in the default namespace.&lt;/p&gt;

&lt;p&gt;Create a Pod in a specific namespace&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl run &amp;lt;pod-name&amp;gt; --image=&amp;lt;image-name&amp;gt; -n &amp;lt;namespace-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Creates a Pod using the specified image in the specified namespace.&lt;/p&gt;

&lt;p&gt;Delete a Pod from a namespace&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl delete pod &amp;lt;pod-name&amp;gt; -n &amp;lt;namespace-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Deletes the specified Pod from the given namespace.&lt;/p&gt;

&lt;p&gt;Apply a YAML file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f &amp;lt;file-name.yml&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Creates or updates Kubernetes resources defined in the YAML file. This command is commonly used for declarative configuration.&lt;/p&gt;
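As an illustration of a namespace-scoped manifest (the names and the dev namespace here are placeholders), a minimal Pod definition you could pass to kubectl apply -f might look like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  namespace: dev        # omit this field to target the default namespace
spec:
  containers:
    - name: nginx
      image: nginx:latest
```

Applying this file creates the Pod in the dev namespace; the namespace must already exist.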

&lt;p&gt;Delete a namespace&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl delete namespace &amp;lt;namespace-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Deletes the specified namespace and all resources inside it. This is a destructive operation and should be used carefully.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key takeaways&lt;/strong&gt;&lt;br&gt;
Namespaces help organize and manage resources within a Kubernetes cluster, but they do not control where Pods run or provide isolation by themselves.&lt;br&gt;
Understanding namespaces is important before moving on to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deployments&lt;/li&gt;
&lt;li&gt;Services&lt;/li&gt;
&lt;li&gt;RBAC&lt;/li&gt;
&lt;li&gt;Resource quotas&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What’s Next?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Next, I plan to explore:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deployments vs Pods&lt;/li&gt;
&lt;li&gt;How controllers manage Pods&lt;/li&gt;
&lt;li&gt;Real-world namespace usage patterns&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I’ll continue documenting my learning as I go.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>containers</category>
      <category>docker</category>
    </item>
    <item>
      <title>Basics &amp; Architecture of Kubernetes.</title>
      <dc:creator>Rohan Nalawade</dc:creator>
      <pubDate>Fri, 16 Jan 2026 17:04:23 +0000</pubDate>
      <link>https://forem.com/rohanan07/basics-architecture-of-kubernetes-27d2</link>
      <guid>https://forem.com/rohanan07/basics-architecture-of-kubernetes-27d2</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;br&gt;
I’ve recently started my Kubernetes learning journey, and instead of waiting until I “know everything,” I decided to document my learnings from day one. This helps me solidify concepts and might also help others who are just getting started.&lt;/p&gt;

&lt;p&gt;This post covers my current understanding of Kubernetes fundamentals, especially its architecture and core components.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is Kubernetes?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Kubernetes is a container orchestration platform.&lt;br&gt;
In simple terms, it helps you deploy, manage, scale, and heal containerized applications automatically.&lt;br&gt;
Instead of manually running and monitoring containers, Kubernetes does that work for you.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kubernetes Architecture: Nodes&lt;/strong&gt;&lt;br&gt;
A Kubernetes cluster consists of nodes, which are essentially servers (physical or virtual machines).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpaot9wu9pyehqjuy26hx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpaot9wu9pyehqjuy26hx.png" alt=" " width="719" height="563"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are two main types of nodes:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Control Plane&lt;/strong&gt;&lt;br&gt;
The control plane is responsible for managing the cluster.&lt;br&gt;
It does not run application containers directly; instead, it makes decisions and maintains the desired state of the cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Worker Nodes&lt;/strong&gt;&lt;br&gt;
Worker nodes are where the actual application workloads (containers) run.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Control Plane Components&lt;/strong&gt;&lt;br&gt;
The control plane consists of several key components:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;kube-apiserver&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Acts as the entry point to the Kubernetes cluster&lt;/li&gt;
&lt;li&gt;All requests (from users, controllers, scheduler, etc.) go through the API server&lt;/li&gt;
&lt;li&gt;It is the only component that talks directly to etcd&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;etcd&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A distributed key-value store&lt;/li&gt;
&lt;li&gt;Stores all cluster state (Pods, nodes, configs, secrets, etc.)&lt;/li&gt;
&lt;li&gt;Acts as the single source of truth for Kubernetes&lt;/li&gt;
&lt;li&gt;You can think of etcd as the brain’s memory of the cluster.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;kube-scheduler&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Decides which worker node a Pod should run on&lt;/li&gt;
&lt;li&gt;Considers resource availability, constraints, and policies&lt;/li&gt;
&lt;li&gt;It does not create Pods — it only assigns them to nodes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;kube-controller-manager&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Runs multiple controllers&lt;/li&gt;
&lt;li&gt;Each controller continuously compares desired state vs actual state&lt;/li&gt;
&lt;li&gt;If there’s a mismatch, it takes corrective action&lt;/li&gt;
&lt;li&gt;Examples: Node Controller, ReplicaSet Controller, Job Controller&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is what enables Kubernetes’ self-healing nature.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Worker Node Components&lt;/strong&gt;&lt;br&gt;
Each worker node runs the following components:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;kubelet&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An agent running on every worker node&lt;/li&gt;
&lt;li&gt;Communicates with the kube-apiserver&lt;/li&gt;
&lt;li&gt;Ensures that Pods assigned to the node are running as expected&lt;/li&gt;
&lt;li&gt;Interacts with the container runtime to start/stop containers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Container Runtime&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Responsible for actually running containers&lt;/li&gt;
&lt;li&gt;Examples: containerd, CRI-O, Docker (via CRI)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;kube-proxy&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Handles networking for Services&lt;/li&gt;
&lt;li&gt;Maintains network rules (iptables/IPVS)&lt;/li&gt;
&lt;li&gt;Enables stable access to Pods via Services&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Pods&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Pod is the smallest deployable unit in Kubernetes&lt;/li&gt;
&lt;li&gt;A Pod can contain one or more containers&lt;/li&gt;
&lt;li&gt;Containers in the same Pod share the same network namespace and can communicate with each other via localhost&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In most cases, a Pod contains one container, but multi-container Pods are used for sidecar patterns.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How kubectl Fits In&lt;/strong&gt;&lt;br&gt;
As a user or DevOps engineer, you interact with Kubernetes using kubectl, a command-line interface (CLI).&lt;br&gt;
Flow:&lt;br&gt;
User → kubectl → kube-apiserver → cluster components&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;kubectl sends requests to the API server&lt;/li&gt;
&lt;li&gt;The API server validates and stores state in etcd&lt;/li&gt;
&lt;li&gt;Scheduler and controllers watch the API server and act accordingly&lt;/li&gt;
&lt;li&gt;kubelet executes the decisions on worker nodes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Final Thoughts&lt;/strong&gt;&lt;br&gt;
This is my Day 1 understanding of Kubernetes.&lt;br&gt;
I’m intentionally starting with fundamentals before jumping into deployments, services, and YAML files.&lt;br&gt;
If you’re also learning Kubernetes, I’d highly recommend:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Taking time to understand the architecture&lt;/li&gt;
&lt;li&gt;Building a strong mental model of how components interact&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I’ll be sharing more learnings as I go. Feedback and corrections are welcome.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>cloud</category>
      <category>devops</category>
      <category>aws</category>
    </item>
    <item>
      <title>Blue Green deployment strategy.</title>
      <dc:creator>Rohan Nalawade</dc:creator>
      <pubDate>Thu, 08 Jan 2026 16:22:45 +0000</pubDate>
      <link>https://forem.com/rohanan07/blue-green-deployment-strategy-3fd3</link>
      <guid>https://forem.com/rohanan07/blue-green-deployment-strategy-3fd3</guid>
      <description>&lt;p&gt;Blue-Green deployment is a release management strategy that minimizes downtime and reduces risk by running two identical production environments.&lt;br&gt;
It moves away from the traditional "in-place" upgrade model, where you overwrite the live application, to a model based on traffic routing.&lt;br&gt;
Here is the technical breakdown of how it works, the workflow, and the database constraints you need to know.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Architecture&lt;/strong&gt;&lt;br&gt;
The core concept relies on maintaining two separate environments:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Blue (Live): The currently active environment hosting the old version (v1). It receives 100% of user traffic.&lt;/li&gt;
&lt;li&gt;Green (Idle): A clone of the Blue environment (same infrastructure, OS, configs). It is idle or accessible only via a private network.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Sitting in front of these environments is a Load Balancer (or Router/DNS). This component dictates which environment is "Live" by directing traffic to the appropriate backend target group.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Deployment Workflow&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Preparation: You have traffic flowing to Blue (v1).&lt;/li&gt;
&lt;li&gt;Deploy: You deploy the new version (v2) to the Green environment. Since Green is disconnected from public traffic, this has no impact on users.&lt;/li&gt;
&lt;li&gt;Verification: The QA team or automated test suites run smoke tests against the Green environment using a private URL or internal port.&lt;/li&gt;
&lt;li&gt;The Switch (Cutover): Once verified, you update the Load Balancer rules to route traffic from Blue to Green.&lt;/li&gt;
&lt;li&gt;Monitoring: You monitor metrics (latency, error rates) on Green.&lt;/li&gt;
&lt;li&gt;Rollback (If needed): If critical errors appear, you immediately revert the Load Balancer routing back to Blue (which is still running v1).&lt;/li&gt;
&lt;li&gt;Cleanup: If Green is stable, Blue is eventually decommissioned or recycled to become the "Green" environment for the next release.&lt;/li&gt;
&lt;/ul&gt;
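As a sketch of the cutover step on AWS with Terraform (all resource names here are hypothetical), the switch can be a one-line change to the load balancer listener's target group:

```hcl
resource "aws_lb_listener" "web" {
  load_balancer_arn = aws_lb.main.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    # Cutover: this was aws_lb_target_group.blue.arn (v1);
    # point it back at blue for an instant rollback
    target_group_arn = aws_lb_target_group.green.arn
  }
}
```

Because only the routing rule changes, the rollback path is the same one-line edit in reverse.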

&lt;p&gt;&lt;strong&gt;The Database Challenge&lt;/strong&gt;&lt;br&gt;
In stateful applications, Blue and Green generally share the same database. You cannot simply give each environment its own copy, because data written through Blue during the deployment would be lost when traffic switches to Green.&lt;/p&gt;

&lt;p&gt;Because both environments access the same DB simultaneously during the switch, database schema changes must be backward compatible.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The "N-1" Compatibility Rule&lt;/strong&gt;&lt;br&gt;
If you need to rename a column or change a schema:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Expand: Add the new column but keep the old one. Deploy this schema change first.&lt;/li&gt;
&lt;li&gt;Deploy Code: Deploy the application code (v2) that writes to the new column but can still read the old one if necessary.&lt;/li&gt;
&lt;li&gt;Contract: Only after v1 is completely offline do you remove the old column in a separate cleanup migration.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Pros and Cons&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;The Advantages:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Zero Downtime: The switch happens at the load balancer level, often within milliseconds.&lt;/li&gt;
&lt;li&gt;Instant Rollback: Reverting to the previous version is just a routing change, not a redeployment.&lt;/li&gt;
&lt;li&gt;Environment Isolation: You are testing on the actual infrastructure configuration that will serve production traffic.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The Trade-offs:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cost: You effectively double your infrastructure footprint (compute resources) during the deployment phase.&lt;/li&gt;
&lt;li&gt;Complexity: Requires sophisticated load balancing and CI/CD pipelines.&lt;/li&gt;
&lt;li&gt;Data Handling: Requires strict discipline regarding database migrations and state management.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Summary&lt;/strong&gt;&lt;br&gt;
Blue-Green deployment is the gold standard for mission-critical systems where downtime is not an option. By decoupling the deployment of artifacts from the release of traffic, you gain control, speed, and a significantly lower Mean Time To Recovery (MTTR) in case of failure.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>cloudcomputing</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Terraform Provisioners - local-exec, remote-exec &amp; file.</title>
      <dc:creator>Rohan Nalawade</dc:creator>
      <pubDate>Tue, 06 Jan 2026 13:09:04 +0000</pubDate>
      <link>https://forem.com/rohanan07/terraform-provisioners-local-exec-remote-exec-file-1ag4</link>
      <guid>https://forem.com/rohanan07/terraform-provisioners-local-exec-remote-exec-file-1ag4</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;br&gt;
When Terraform provisions infrastructure, it creates the resources defined in the configuration files, such as virtual machines or databases. However, a newly created resource is often in a default, unconfigured state. To make a server functional, it usually requires software installation, configuration file placement, or initial bootstrapping.&lt;/p&gt;

&lt;p&gt;Terraform provisioners handle this specific stage of the deployment lifecycle. They act as a bridge between infrastructure provisioning and configuration management, allowing users to execute scripts or transfer files as part of the resource creation process. While best practices often recommend using image builders (like Packer) or user-data scripts for these tasks, provisioners provide a necessary solution for immediate, post-deployment actions.&lt;/p&gt;

&lt;p&gt;Here is an overview of the three primary provisioners available in Terraform.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. local-exec&lt;/strong&gt;&lt;br&gt;
The local-exec provisioner invokes a local executable or script on the machine running Terraform, not on the resource being created. This process runs on the device where the terraform apply command is executed, whether that is a developer's laptop or a CI/CD build server.&lt;/p&gt;

&lt;p&gt;This provisioner is typically used for tasks that need to happen outside the infrastructure environment, such as updating local configuration files, saving output values to a disk, or triggering external automation tools like Ansible playbooks.&lt;/p&gt;

&lt;p&gt;Example: In this example, the provisioner saves the public IP address of a newly created EC2 instance to a local text file immediately after the instance is provisioned.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_instance" "web_server" {
  ami           = "ami-12345678"
  instance_type = "t2.micro"

  # Executes on the machine running Terraform
  provisioner "local-exec" {
    command = "echo 'Server IP: ${self.public_ip}' &amp;gt;&amp;gt; server_ips.txt"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. remote-exec&lt;/strong&gt;&lt;br&gt;
The remote-exec provisioner executes commands directly on the remote resource being created. Unlike local-exec, this requires a network connection to the resource. For Linux servers, this is typically done via SSH, and for Windows servers, via WinRM.&lt;/p&gt;

&lt;p&gt;This provisioner is essential for bootstrapping a server. It is commonly used to update package repositories, install necessary software packages, or start system services immediately after the operating system boots. Because it requires access to the machine, a connection block providing credentials (user and private key) is mandatory.&lt;/p&gt;

&lt;p&gt;Example: This configuration logs into a new Ubuntu server and runs commands to install and start the Nginx web server.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_instance" "web_server" {
  # ... standard ec2 config ...

  # Define connection details
  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = file("~/.ssh/my-keypair.pem")
    host        = self.public_ip
  }

  # Executes on the remote server
  provisioner "remote-exec" {
    inline = [
      "sudo apt-get update",
      "sudo apt-get install -y nginx",
      "sudo systemctl start nginx"
    ]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. file&lt;/strong&gt;&lt;br&gt;
The file provisioner is used to copy files or directories from the machine running Terraform to the newly created remote resource. It serves as a simple transport mechanism for configuration management.&lt;/p&gt;

&lt;p&gt;This is particularly useful for moving static application files, configuration settings (such as nginx.conf), or scripts that need to reside on the server. Like remote-exec, the file provisioner requires a valid connection block to establish a secure transfer channel.&lt;/p&gt;

&lt;p&gt;Example: This snippet uploads a local configuration file named app.conf to the temporary directory of the remote server.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_instance" "web_server" {
  # ... standard ec2 config ...

  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = file("~/.ssh/my-keypair.pem")
    host        = self.public_ip
  }

  # Copies local file to remote destination
  provisioner "file" {
    source      = "configs/app.conf"      # Local path
    destination = "/tmp/app.conf"         # Remote path
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Terraform provisioners provide essential functionality for the "last mile" of infrastructure deployment. They enable immediate interaction with resources through local scripting, remote command execution, and file transfers.&lt;/p&gt;

&lt;p&gt;While they are powerful, they should be used judiciously. Since Terraform cannot track the state of changes made inside provisioners, complex configurations are often better handled by dedicated configuration management tools or pre-baked machine images. However, for straightforward bootstrapping and setup tasks, provisioners remain an effective and flexible tool in the Terraform ecosystem.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>devops</category>
      <category>cloud</category>
      <category>aws</category>
    </item>
    <item>
      <title>Lifecycle rules in Terraform.</title>
      <dc:creator>Rohan Nalawade</dc:creator>
      <pubDate>Wed, 31 Dec 2025 13:31:41 +0000</pubDate>
      <link>https://forem.com/rohanan07/lifecycle-rules-in-terraform-2h2j</link>
      <guid>https://forem.com/rohanan07/lifecycle-rules-in-terraform-2h2j</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;br&gt;
Lifecycle rules are used to override Terraform's default behavior.&lt;br&gt;
They control how Terraform creates, updates, and destroys a resource.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;There are three lifecycle rules in Terraform:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;create_before_destroy&lt;/li&gt;
&lt;li&gt;prevent_destroy&lt;/li&gt;
&lt;li&gt;ignore_changes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;1. Create before destroy&lt;/strong&gt;&lt;br&gt;
This rule is used to avoid downtime when a change forces a resource to be replaced. By default, Terraform destroys the old resource first and then creates the new one. With create_before_destroy, Terraform creates the new resource first and deletes the old one only after the replacement exists. This only works if the resource supports parallel existence (for example, multiple EC2 instances or ALB versions).&lt;br&gt;
This rule is mainly used for ALBs, EC2 replacements, and ASGs.&lt;/p&gt;
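As a minimal sketch (the AMI and resource names here are illustrative), the rule goes inside a lifecycle block:

```hcl
resource "aws_instance" "web_server" {
  ami           = "ami-12345678"
  instance_type = "t2.micro"

  lifecycle {
    # Terraform creates the replacement instance before
    # destroying the old one, avoiding downtime
    create_before_destroy = true
  }
}
```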

&lt;p&gt;&lt;strong&gt;2. Prevent Destroy&lt;/strong&gt;&lt;br&gt;
This rule prevents accidental deletion of important resources, such as an S3 bucket. When prevent_destroy is applied to a resource, terraform destroy (or any plan that would delete that resource) fails with an error until you remove the rule or set it to false. It is mainly used for critical S3 buckets, production databases, important IAM roles, etc.&lt;/p&gt;
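For example (the bucket name here is illustrative):

```hcl
resource "aws_s3_bucket" "critical_data" {
  bucket = "my-critical-state-bucket"

  lifecycle {
    # terraform destroy, or any plan that would replace
    # this bucket, fails with an error while this is true
    prevent_destroy = true
  }
}
```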

&lt;p&gt;&lt;strong&gt;3. Ignore Changes&lt;/strong&gt;&lt;br&gt;
This rule tells Terraform to ignore changes made to specific attributes of a resource outside of Terraform. For example, suppose you have defined the desired capacity of an ASG in your configuration. If ignore_changes covers that attribute and someone modifies it outside Terraform (for example via the AWS Console), Terraform will ignore that drift and will not attempt to bring the attribute back to the value defined in the Terraform code.&lt;/p&gt;
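Continuing the ASG example, a minimal sketch (required arguments are abbreviated, values are illustrative):

```hcl
resource "aws_autoscaling_group" "app" {
  # ... other required ASG arguments ...
  min_size         = 1
  max_size         = 5
  desired_capacity = 2

  lifecycle {
    # Changes made to desired_capacity outside Terraform
    # (e.g. by autoscaling or via the Console) are not
    # reverted on the next terraform apply
    ignore_changes = [desired_capacity]
  }
}
```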

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Terraform lifecycle rules provide fine-grained control over how resources are created, updated, and destroyed. By using rules such as create_before_destroy, prevent_destroy, and ignore_changes, you can reduce downtime, protect critical infrastructure, and safely handle changes made outside Terraform. When applied thoughtfully, lifecycle rules help make Terraform-managed infrastructure more reliable, predictable, and production-ready.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>infrastructureascode</category>
      <category>devops</category>
      <category>aws</category>
    </item>
    <item>
      <title>Remote backend and State locking using S3 in terraform.</title>
      <dc:creator>Rohan Nalawade</dc:creator>
      <pubDate>Mon, 29 Dec 2025 11:38:44 +0000</pubDate>
      <link>https://forem.com/rohanan07/remote-backend-and-state-locking-using-s3-in-terraform-41bk</link>
      <guid>https://forem.com/rohanan07/remote-backend-and-state-locking-using-s3-in-terraform-41bk</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;:&lt;br&gt;
Terraform uses a state file with the .tfstate extension to provision infrastructure. It compares the actual AWS state with the desired state and creates or updates the infrastructure accordingly. This state file is very important because it contains all the details about the infrastructure configuration, including sensitive values like passwords, IDs, and public IPs, so its safety always matters. By default, the file is created locally on the machine you are working on, which is fine if you are working solo. But if you are working in a team, several problems arise.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Problems with the Terraform state file&lt;/strong&gt;&lt;br&gt;
First, the "emailing the state" problem. If you create a server, only the state file on your laptop knows about it. If a colleague wants to update that server, they don't know it exists because the state file is on your computer. You would have to email the file to them, which is messy and dangerous.&lt;br&gt;
Then there is the state conflict problem. Suppose you and a colleague run terraform apply at the same time: you try to change the server name while they try to delete the server. Terraform has no way to stop this, so the infrastructure ends up in a corrupted state, or the last person to save overwrites the other's work.&lt;br&gt;
Finally, the Terraform state file is written in plain text and can contain sensitive data such as access keys, so securing this file is also a problem.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Solution: Remote Backend&lt;/strong&gt;&lt;br&gt;
A remote backend simply means storing the state file in a remote location, typically in the cloud, instead of locally on your laptop. When you use AWS as your Terraform provider, an S3 bucket is the usual remote backend. This solves two of the problems. The "emailing the state" problem is gone because everyone working on the project can access the file from their own machine without you needing to send it to them. It also improves security.&lt;br&gt;
The only remaining problem is the state conflict: because this single source of truth is accessible to anyone with permissions, two or more colleagues could still modify it at the same time. For this, Terraform uses state locking.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;State Locking&lt;/strong&gt;&lt;br&gt;
State locking is a mechanism that prevents two or more people from modifying the infrastructure at the same time. When you run terraform apply, Terraform locks the state file, ensuring no one else can run an apply concurrently. When the command completes, Terraform releases the lock. This prevents race conditions and state corruption.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Mechanism:&lt;/strong&gt;&lt;br&gt;
We use S3 to store the state file in the remote backend because it is highly durable, and with versioning enabled you can restore an earlier version of the file if something goes wrong. Recent Terraform versions can also use S3's built-in locking mechanism via a lock file, without needing a separate DynamoDB table.&lt;/p&gt;

&lt;p&gt;Code block for the Remote Backend code in terraform:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  backend "s3" {
    bucket       = "my-state-bucket"
    key          = "prod/terraform.tfstate"
    region       = "us-east-1"
    use_lockfile = true 
    encrypt      = true
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Remote backends and state locking are powerful, practical concepts in Terraform. Together they keep the state file safe and shared, prevent file corruption, and keep the infrastructure consistent.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>aws</category>
      <category>infrastructureascode</category>
      <category>devops</category>
    </item>
    <item>
      <title>Variables in Terraform: My learnings.</title>
      <dc:creator>Rohan Nalawade</dc:creator>
      <pubDate>Sun, 28 Dec 2025 11:41:04 +0000</pubDate>
      <link>https://forem.com/rohanan07/variables-in-terraform-my-learnings-5a1a</link>
      <guid>https://forem.com/rohanan07/variables-in-terraform-my-learnings-5a1a</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction:&lt;/strong&gt;&lt;br&gt;
I have been learning Terraform, and today I came across the concept of variables. Variables let you store values that can be reused in multiple places in your Terraform code. There are two types of variables:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Input Variables&lt;/li&gt;
&lt;li&gt;Output Variables&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;1. Input Variables&lt;/strong&gt;&lt;br&gt;
Input variables let you define values in one place and use them in your Terraform code as inputs, instead of hardcoding them. For example, you can store instance types, the region, an instance count, etc. The syntax for an input variable looks like this:&lt;br&gt;
variable "&amp;lt;variable name&amp;gt;" {&lt;br&gt;
      default = &amp;lt;default value&amp;gt;&lt;br&gt;
      type = &amp;lt;data type&amp;gt;&lt;br&gt;
      description = &amp;lt;description&amp;gt;&lt;br&gt;
}&lt;/p&gt;
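
&lt;p&gt;Here is a small, complete example of the syntax above; the variable name and default are my own illustrations:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "instance_type" {
  default     = "t3.micro"
  type        = string
  description = "EC2 instance type to launch"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You can then reference it inside a resource block as var.instance_type, or override the default at run time with terraform apply -var="instance_type=t3.small".&lt;/p&gt;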

&lt;p&gt;&lt;strong&gt;2. Output Variables&lt;/strong&gt;&lt;br&gt;
Output variables are used to expose values that are generated after the terraform apply command runs. You can output values like the public IP of an EC2 instance, its public DNS, the instance ID, etc. The syntax for an output variable looks like this:&lt;br&gt;
output "&amp;lt;variable name&amp;gt;" {&lt;br&gt;
       value = &amp;lt;value&amp;gt;&lt;br&gt;
       description = &amp;lt;description&amp;gt;&lt;br&gt;
}&lt;/p&gt;
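
&lt;p&gt;A concrete sketch, assuming an aws_instance resource named "app" exists in the configuration:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "instance_public_ip" {
  value       = aws_instance.app.public_ip
  description = "Public IP of the EC2 instance"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After terraform apply finishes, the value is printed in the terminal, and terraform output instance_public_ip retrieves it again later.&lt;/p&gt;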

&lt;p&gt;&lt;strong&gt;Data types in Terraform&lt;/strong&gt;&lt;br&gt;
Terraform has two categories of data types: primitive and complex.&lt;/p&gt;

&lt;p&gt;The primitive data types include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;string&lt;/li&gt;
&lt;li&gt;number&lt;/li&gt;
&lt;li&gt;bool&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The complex data types include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;list&lt;/li&gt;
&lt;li&gt;map&lt;/li&gt;
&lt;li&gt;set&lt;/li&gt;
&lt;li&gt;object&lt;/li&gt;
&lt;li&gt;tuple&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The Primitive Data types:&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;1. string&lt;/strong&gt;&lt;br&gt;
string stores text-based values. For example, an input variable named region can hold values like "us-east-1" or "ap-south-1".&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. number&lt;/strong&gt;&lt;br&gt;
number stores numeric values, both whole numbers and fractions. For example, the input variable instance_count can have values like 2 or 3.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. bool&lt;/strong&gt;&lt;br&gt;
bool stores boolean values, which can only be true or false. For example, a variable named is_running can be true, meaning the instance is running, or false, meaning the instance is stopped.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Complex Data types:&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;1. List&lt;/strong&gt;&lt;br&gt;
A list stores multiple values of the same data type. For example, we can store several regions in a single list: ["ap-south-1", "us-east-1", "us-west-1"]. All values in a list must share one data type, a list can hold any number of elements (its length is not fixed), and&lt;br&gt;
a list is always ordered.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. set&lt;/strong&gt;&lt;br&gt;
A set is similar to a list, but it cannot contain duplicate values; every element must be unique, and all elements must share one data type. Unlike a list, a set is unordered. &lt;br&gt;
For example: ["ap-south-1", "us-east-1", "us-west-1"], with the guarantee that no value appears twice.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. map&lt;/strong&gt;&lt;br&gt;
The map data type stores key-value pairs. The keys in a map must be unique, and keys are always strings. Values may repeat, but all values in a map must share one data type.&lt;br&gt;
For example we can use map to store tags&lt;br&gt;
tags = {&lt;br&gt;
    name = "ec2_instance"&lt;br&gt;
    description = "ec2 instance for running the backend"&lt;br&gt;
    env = "prod"&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. tuple&lt;/strong&gt;&lt;br&gt;
A tuple is also similar to a list in that it stores ordered values. The differences are that a tuple has a fixed length and each position can have its own data type, whereas a list has a single element type and no fixed length. &lt;br&gt;
For example, a tuple can mix types: ["192.168.1.2", 8080, true] holds a string, a number, and a bool.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. object&lt;/strong&gt;&lt;br&gt;
An object holds structured data with named attributes. It is similar to a map, but a map's values must all share one data type, whereas each attribute of an object can have a different data type. For example,&lt;br&gt;
user = {&lt;br&gt;
    name = "Rohan"&lt;br&gt;
    age = 20&lt;br&gt;
    email = "&lt;a href="mailto:rohan@gmail.com"&gt;rohan@gmail.com&lt;/a&gt;"&lt;br&gt;
}&lt;/p&gt;
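
&lt;p&gt;To tie the complex types back to the input-variable syntax from earlier, here is how declarations with explicit type constraints might look (the names and defaults are my own examples):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "regions" {
  type    = list(string)
  default = ["ap-south-1", "us-east-1"]
}

variable "tags" {
  type = map(string)
  default = {
    env  = "prod"
    team = "backend"
  }
}

variable "user" {
  type = object({
    name  = string
    age   = number
    email = string
  })
  default = {
    name  = "Rohan"
    age   = 20
    email = "rohan@gmail.com"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;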

&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt;&lt;br&gt;
Terraform variables are a great way of simplifying the codebase and reusing values without hardcoding literal strings whenever you need to reference something.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>aws</category>
      <category>cloud</category>
      <category>devops</category>
    </item>
    <item>
      <title>From ClickOps to DevOps: My First Infrastructure as Code Project with Terraform</title>
      <dc:creator>Rohan Nalawade</dc:creator>
      <pubDate>Sat, 27 Dec 2025 12:37:26 +0000</pubDate>
      <link>https://forem.com/rohanan07/from-clickops-to-devops-my-first-infrastructure-as-code-project-with-terraform-3fcn</link>
      <guid>https://forem.com/rohanan07/from-clickops-to-devops-my-first-infrastructure-as-code-project-with-terraform-3fcn</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;br&gt;
Like many cloud enthusiasts, I started my AWS journey using the Management Console—clicking through wizards, manually selecting subnets, and hoping I didn't forget a configuration step. It works, but it’s prone to human error and hard to replicate.&lt;br&gt;
This week, I decided to level up. I started learning Terraform to embrace Infrastructure as Code (IaC).&lt;br&gt;
In this post, I’ll walk you through my very first hands-on task: provisioning a custom network stack and launching an EC2 instance entirely through code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Architecture&lt;/strong&gt;&lt;br&gt;
Instead of just launching a default instance, I wanted to build the network from scratch to understand how the components connect. Here is what I built:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;VPC: A custom Virtual Private Cloud.&lt;/li&gt;
&lt;li&gt;Subnet: A public subnet for the instance.&lt;/li&gt;
&lt;li&gt;Internet Gateway (IGW): To allow internet access.&lt;/li&gt;
&lt;li&gt;Route Table: Configuring routes to the IGW.&lt;/li&gt;
&lt;li&gt;Security Group: Allowing SSH, HTTP, and HTTPS.&lt;/li&gt;
&lt;li&gt;EC2 Instance: The server itself.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why Terraform?&lt;/strong&gt;&lt;br&gt;
Before diving into the code, here are the immediate benefits I realized while working on this:&lt;br&gt;
Speed: I can destroy and recreate the entire infrastructure in seconds with one command.&lt;br&gt;
No Human Error: No more accidentally clicking the wrong checkbox. The code is the source of truth.&lt;br&gt;
Documentation: The code itself acts as documentation for the infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Code&lt;/strong&gt;&lt;br&gt;
Here is a look at the main.tf file I created.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. The Network Setup&lt;/strong&gt;&lt;br&gt;
First, we define the VPC, the Internet Gateway, and a public subnet.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_vpc" "terra-vpc" {
  cidr_block = "10.0.0.0/16"
  tags = {
    Name = "terra-vpc"
  }
}

resource "aws_internet_gateway" "terra-igw" {
  vpc_id = aws_vpc.terra-vpc.id
}

resource "aws_subnet" "terra-subnet1" {
  vpc_id     = aws_vpc.terra-vpc.id
  cidr_block = "10.0.1.0/24"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;2. Security Groups&lt;/strong&gt;&lt;br&gt;
This was the trickiest part! I learned that enabling traffic requires specific ingress (incoming) and egress (outgoing) rules.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_security_group" "terra-ec2-sg" {
  name   = "terraform-ec2-sg"
  vpc_id = aws_vpc.terra-vpc.id
}

# Allow SSH from anywhere
resource "aws_vpc_security_group_ingress_rule" "allow_ssh" {
  security_group_id = aws_security_group.terra-ec2-sg.id
  cidr_ipv4         = "0.0.0.0/0"
  from_port         = 22
  to_port           = 22
  ip_protocol       = "tcp"
}

# Allow all outbound traffic
resource "aws_vpc_security_group_egress_rule" "allow_all" {
  security_group_id = aws_security_group.terra-ec2-sg.id
  cidr_ipv4         = "0.0.0.0/0"
  ip_protocol       = "-1" # Represents all protocols
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
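
&lt;p&gt;The architecture list above also mentions HTTP and HTTPS. Those ingress rules follow the same pattern as the SSH rule (the rule names here are my own):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_vpc_security_group_ingress_rule" "allow_http" {
  security_group_id = aws_security_group.terra-ec2-sg.id
  cidr_ipv4         = "0.0.0.0/0"
  from_port         = 80
  to_port           = 80
  ip_protocol       = "tcp"
}

resource "aws_vpc_security_group_ingress_rule" "allow_https" {
  security_group_id = aws_security_group.terra-ec2-sg.id
  cidr_ipv4         = "0.0.0.0/0"
  from_port         = 443
  to_port           = 443
  ip_protocol       = "tcp"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;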

&lt;p&gt;&lt;strong&gt;3. The Instance&lt;/strong&gt;&lt;br&gt;
Finally, tying it all together by launching the EC2 instance inside our new security group and subnet.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_instance" "first_terra_instance" {
  ami                    = "ami-02b8269d5e85954ef" # Check your region!
  instance_type          = "t3.micro"
  key_name               = "terra-key-pair"
  vpc_security_group_ids = [aws_security_group.terra-ec2-sg.id]
  subnet_id              = aws_subnet.terra-subnet1.id

  tags = {
    Name = "Terraform-EC2"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
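
&lt;p&gt;One piece from the architecture list that isn't shown above is the Route Table. A minimal sketch of the route to the IGW and its subnet association (resource names are my own):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_route_table" "terra-rt" {
  vpc_id = aws_vpc.terra-vpc.id

  # Send all internet-bound traffic to the Internet Gateway
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.terra-igw.id
  }
}

# Attach the route table to the public subnet
resource "aws_route_table_association" "terra-rt-assoc" {
  subnet_id      = aws_subnet.terra-subnet1.id
  route_table_id = aws_route_table.terra-rt.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;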

&lt;p&gt;&lt;strong&gt;The Workflow: 4 Magic Commands&lt;/strong&gt;&lt;br&gt;
Learning the syntax is one thing, but understanding the lifecycle is another. These are the four commands I used constantly:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- terraform init:&lt;/strong&gt; Initializes the directory and downloads the necessary AWS providers.&lt;br&gt;
&lt;strong&gt;- terraform validate:&lt;/strong&gt; A lifesaver! It checks your code for syntax errors before you even try to run it.&lt;br&gt;
&lt;strong&gt;- terraform plan:&lt;/strong&gt; This is my favorite. It shows a "dry run" of what will be created, changed, or destroyed. It gives you confidence before making changes.&lt;br&gt;
&lt;strong&gt;- terraform apply:&lt;/strong&gt; The command that actually makes the API calls to AWS to build the resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Building this project gave me a much deeper appreciation for modern DevOps practices. It’s empowering to see an empty AWS account populate with resources just by typing terraform apply.&lt;br&gt;
My next step? I plan to look into Terraform Variables to stop hardcoding values and make this script reusable for different environments.&lt;br&gt;
If you are just starting with cloud, I highly recommend picking up Terraform. It changes the way you look at infrastructure!&lt;/p&gt;

&lt;p&gt;Have you worked with Terraform? What was the first resource you automated? Let me know in the comments! 👇&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>aws</category>
      <category>devops</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Differences between Terraform &amp; Ansible, when to use what.</title>
      <dc:creator>Rohan Nalawade</dc:creator>
      <pubDate>Fri, 26 Dec 2025 13:38:38 +0000</pubDate>
      <link>https://forem.com/rohanan07/differences-between-terraform-ansible-when-to-use-what-34lb</link>
      <guid>https://forem.com/rohanan07/differences-between-terraform-ansible-when-to-use-what-34lb</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;br&gt;
When I started learning Cloud Computing, I was confused by the sheer number of tools. I knew Terraform was for "Infrastructure as Code." I knew Ansible was for "Configuration Management."&lt;br&gt;
But then I saw people creating AWS EC2 instances using Ansible... and I saw people running shell scripts using Terraform. I asked myself: If they can both do the same things, why do we need both?&lt;br&gt;
After digging into the documentation and building a few labs, I realized that while there is overlap, they have completely different philosophies. Here is what I learned about the battle between Provisioning and Configuration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Core Difference: Builder vs. Interior Designer&lt;/strong&gt;&lt;br&gt;
The best way to visualize the difference is to imagine building a house.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Terraform is the Builder (Provisioning)&lt;/strong&gt;&lt;br&gt;
Terraform is designed to create the infrastructure from scratch.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;It pours the concrete foundation.&lt;/li&gt;
&lt;li&gt;It builds the walls.&lt;/li&gt;
&lt;li&gt;It installs the plumbing and electricity.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In Cloud terms: It creates your VPC, Subnets, EC2 Instances, and Databases.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Ansible is the Interior Designer (Configuration)&lt;/strong&gt;&lt;br&gt;
Ansible is designed to set up the house once it exists.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;It paints the walls.&lt;/li&gt;
&lt;li&gt;It installs the furniture.&lt;/li&gt;
&lt;li&gt;It makes sure the TV is plugged in.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In Cloud terms: It installs Nginx, updates software patches, creates user accounts, and deploys your application code.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;The "State" Debate: Why not just use Ansible for everything?&lt;/strong&gt;&lt;br&gt;
This was my biggest question. Since Ansible has modules to create EC2 instances, why bother learning Terraform?&lt;br&gt;
The answer lies in one file: terraform.tfstate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Terraform has "Memory" (Stateful)&lt;/strong&gt;&lt;br&gt;
When Terraform creates a server, it writes down the details in a State File. It remembers exactly what it built. If you delete a server from your Terraform code and run it again, Terraform looks at its memory (State file), sees that the server shouldn't exist anymore, and destroys it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ansible is "Forgetful" (Stateless)&lt;/strong&gt;&lt;br&gt;
Ansible doesn't have a memory of what it did last time. It just follows your current instructions list. If you remove the "Create Server" task from your Ansible code, Ansible doesn't delete the server. It just ignores it. This leads to "Configuration Drift"—where you have "ghost" servers running that you forgot about, costing you money.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How they work together&lt;/strong&gt;&lt;br&gt;
In the real world, you rarely pick just one. A standard DevOps pipeline looks like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Terraform builds the empty servers and networking.&lt;/li&gt;
&lt;li&gt;Terraform calls Ansible automatically.&lt;/li&gt;
&lt;li&gt;Ansible connects to those new servers and installs the application.&lt;/li&gt;
&lt;/ul&gt;
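
&lt;p&gt;As a concrete (and much-debated) example of step 2, Terraform can invoke Ansible with a local-exec provisioner once the server is up. A rough sketch, assuming a playbook.yml and an SSH key exist locally; the names are my own:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_instance" "web" {
  ami           = "ami-xxxxxxxx" # pick an AMI for your region
  instance_type = "t3.micro"

  # Runs on the machine where you run terraform apply,
  # after the instance is created
  provisioner "local-exec" {
    command = "ansible-playbook -i '${self.public_ip},' -u ubuntu --private-key ./key.pem playbook.yml"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;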

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
You can use a hammer to drive a screw, but it's going to be messy.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use Terraform to build the house.&lt;/li&gt;
&lt;li&gt;Use Ansible to make it a home.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I have just started learning Terraform, and I plan to learn Ansible next. If you have any suggestions or tips, please comment below.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>ansible</category>
      <category>aws</category>
      <category>infrastructureascode</category>
    </item>
    <item>
      <title>Server-based vs Serverless Compute Services on AWS: My Notes for Beginners.</title>
      <dc:creator>Rohan Nalawade</dc:creator>
      <pubDate>Thu, 25 Dec 2025 12:06:14 +0000</pubDate>
      <link>https://forem.com/rohanan07/server-based-vs-serverless-compute-services-on-aws-my-notes-for-beginners-1c12</link>
      <guid>https://forem.com/rohanan07/server-based-vs-serverless-compute-services-on-aws-my-notes-for-beginners-1c12</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;br&gt;
When I started learning AWS, I thought "Serverless" meant there were literally no servers. I imagined code floating in the clouds like magic. 🪄&lt;br&gt;
As I dug deeper into services like EC2, Lambda, and Fargate, I realized that "Serverless" is just a buzzword for "Someone else manages the servers for you."&lt;br&gt;
But how do you choose? Should you manage it yourself (Server-based) or let AWS handle it (Serverless)? To understand this, I like to use the Pizza Analogy.🍕&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. The "Do-It-Yourself" Approach: Amazon EC2&lt;/strong&gt;&lt;br&gt;
Think of Amazon EC2 (Elastic Compute Cloud) like baking a pizza at home.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You buy the ingredients (OS, CPU, RAM).&lt;/li&gt;
&lt;li&gt;You pre-heat the oven (Provisioning).&lt;/li&gt;
&lt;li&gt;You bake the pizza (Running the app).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Catch: You have to clean the kitchen afterwards. If you aren't using the oven, you still paid for it.&lt;/p&gt;

&lt;p&gt;In Tech Terms: EC2 gives you a Virtual Machine. You have total control. You can install whatever you want, tweak the operating system, and configure the security.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pros: Complete control. Great for long-running applications or legacy software.&lt;/li&gt;
&lt;li&gt;Cons: You pay for the server 24/7, even if no one visits your website. You are responsible for security patches.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. The "Buy a Slice" Approach: AWS Lambda&lt;/strong&gt;&lt;br&gt;
Think of AWS Lambda like walking into a pizza shop and buying one slice.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You don't care what oven they used.&lt;/li&gt;
&lt;li&gt;You don't care who the chef is.&lt;/li&gt;
&lt;li&gt;You just pay for the slice, eat it, and leave.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In Tech Terms: This is true Serverless. You upload your code (a function), and AWS runs it only when needed.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pros: You pay $0 when no one is using it. It scales instantly (from 1 user to 1 million).&lt;/li&gt;
&lt;li&gt;Cons: "Cold Starts" (it takes a split second to wake up). Not good for long tasks (max 15 minutes).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. The Middle Ground: Fargate (Containers)&lt;/strong&gt;&lt;br&gt;
This is where I used to get confused. I kept hearing about ECS and EKS.&lt;/p&gt;

&lt;p&gt;ECS/EKS are just "Managers." They organize your containers (like Docker). But where do those containers run? You have two choices:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;EC2 Mode: You run the containers on servers you manage. (Server-based).&lt;/li&gt;
&lt;li&gt;Fargate Mode: You tell AWS "Here is my container, just run it." (Serverless).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Think of Fargate like ordering pizza for delivery. You get the whole pizza (custom container), but you don't have to worry about the oven or the kitchen. AWS manages the underlying infrastructure, and you just focus on the app.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Which one should you choose?&lt;/strong&gt;&lt;br&gt;
As I am building my own projects, here is my rule of thumb:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Start with Lambda if you are building a simple API or a background task. It’s cheap and easy.&lt;/li&gt;
&lt;li&gt;Use Fargate if you have a Docker container and want it to run without managing servers.&lt;/li&gt;
&lt;li&gt;Use EC2 only if you need full control over the OS or have a very steady workload where you can reserve capacity to save money.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
There is no "winner" here. Real cloud architects use all of them together. You might use EC2 to host a database, Fargate to run your backend API, and Lambda to process image uploads.&lt;br&gt;
I’m still exploring these services, so if you have a favorite use case for Lambda or Fargate, let me know in the comments!&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>aws</category>
      <category>devops</category>
      <category>cloudcomputing</category>
    </item>
    <item>
      <title>SSL/TLS Explained: From the Handshake to the Cloud ☁️</title>
      <dc:creator>Rohan Nalawade</dc:creator>
      <pubDate>Wed, 24 Dec 2025 16:40:09 +0000</pubDate>
      <link>https://forem.com/rohanan07/ssltls-explained-from-the-handshake-to-the-cloud-2e5g</link>
      <guid>https://forem.com/rohanan07/ssltls-explained-from-the-handshake-to-the-cloud-2e5g</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;br&gt;
Have you ever noticed that little padlock icon next to the URL in your browser? We see it every day, but until recently, I had no idea what magic was actually happening behind it.&lt;/p&gt;

&lt;p&gt;I am currently on a journey to learn Cloud Computing. As I was going through tutorials, I kept hitting terms like "SSL," "TLS," and "Handshakes." Honestly, it felt a bit overwhelming at first. To really understand it, I spent some time watching YouTube tutorials and chatting with Gemini to break down the complex technical jargon into simple English.&lt;/p&gt;

&lt;p&gt;Now that it has finally clicked for me, I want to document what I’ve learned. This blog is my attempt to explain SSL and TLS in the simplest way possible—from a learner, for learners.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Alphabet Soup: HTTP vs. HTTPS&lt;/strong&gt;&lt;br&gt;
First, let's clear up the basics.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;HTTP (HyperText Transfer Protocol): This is the standard way browsers and servers talk. The problem? It's plain text. If I send a password over HTTP, anyone snooping on the network (like a hacker in a coffee shop) can read it as easily as a postcard.&lt;/li&gt;
&lt;li&gt;HTTPS (HTTP Secure): This is HTTP with a security layer on top. It encrypts the data so that even if someone steals it, it looks like gibberish.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But what is that "security layer"? That’s where SSL and TLS come in.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SSL vs. TLS: What’s the Difference?&lt;/strong&gt;&lt;br&gt;
You will hear these terms used interchangeably, but there is a technical difference.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SSL (Secure Sockets Layer): The original protocol developed by Netscape in the 90s. It is now deprecated and considered insecure.&lt;/li&gt;
&lt;li&gt;TLS (Transport Layer Security): The modern, secure successor to SSL.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Fun Fact: We mostly use TLS 1.2 or TLS 1.3 today. However, people still say "SSL Certificate" out of habit. It’s like how we say "dial the number" even though we don't use rotary phones anymore.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How It Works: The "Handshake"&lt;/strong&gt;&lt;br&gt;
When you visit a secure website (like Google or your bank), your browser and the server engage in a conversation called the TLS Handshake. This happens in milliseconds before any data is exchanged.&lt;/p&gt;

&lt;p&gt;Here is the simplified version of what happens:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Client Hello: Your browser says, "Hello! I want to talk securely. Here are the encryption methods I support."&lt;/li&gt;
&lt;li&gt;Server Hello: The server replies, "Hello! Let's use this encryption method. Here is my Certificate to prove I am who I say I am."&lt;/li&gt;
&lt;li&gt;Verification: Your browser checks the certificate. Is it valid? Is it expired? Does it actually belong to this website?&lt;/li&gt;
&lt;li&gt;Key Exchange: If the certificate is good, the browser and server use it to generate a Session Key.&lt;/li&gt;
&lt;li&gt;Secure Connection: Boom! They lock the connection. From now on, everything sent is encrypted using that Session Key.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Encryption: Asymmetric vs. Symmetric&lt;/strong&gt;&lt;br&gt;
This is the coolest part of the process. HTTPS uses two types of encryption to balance security and speed.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Asymmetric Encryption (The Handshake)&lt;/strong&gt;&lt;br&gt;
This uses two keys: a Public Key (everyone can see it) and a Private Key (kept secret).&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Imagine a mailbox. Anyone can drop a letter in (encrypt with the Public Key), but only the person with the key can open the mailbox (decrypt with the Private Key).&lt;/li&gt;
&lt;li&gt;We use this only during the handshake, to exchange the session key safely.&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Symmetric Encryption (The Conversation)&lt;/strong&gt;&lt;br&gt;
Once the secure connection is established, we switch to Symmetric Encryption.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;This uses one key that both the browser and server have.&lt;/li&gt;
&lt;li&gt;Why switch? Because asymmetric encryption is slow. Symmetric encryption is incredibly fast, allowing you to stream video or load heavy pages without lag.&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;What is an SSL Certificate?&lt;/strong&gt;&lt;br&gt;
The certificate is like a digital ID card (passport) for a website. It does two things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Encryption: It contains the Public Key needed for the handshake.&lt;/li&gt;
&lt;li&gt;Identity: It proves the server owns the domain.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These certificates are issued by Certificate Authorities (CAs)—trusted organizations that verify website owners. If you try to create your own certificate (Self-Signed), browsers will warn users that the site is "Not Secure" because no trusted third party verified you.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SSL/TLS in the Cloud Era&lt;/strong&gt;&lt;br&gt;
If you are deploying your app to the cloud (like AWS, Vercel, or Google Cloud), you rarely handle certificates manually on your server anymore. The cloud has changed the game with concepts such as SSL Termination and Managed Certificates.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. SSL Termination (The Bouncer)&lt;/strong&gt;&lt;br&gt;
Decrypting data takes CPU power. If you have a popular website with millions of visitors, your application server (the computer running your code) could get overwhelmed just trying to "handshake" with everyone.&lt;/p&gt;

&lt;p&gt;To fix this, we use a Load Balancer.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Load Balancer sits in front of your servers.&lt;/li&gt;
&lt;li&gt;It handles the SSL/TLS "handshake" and decrypts the data.&lt;/li&gt;
&lt;li&gt;It passes the data to your actual application server as plain HTTP.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is called SSL Termination. It’s like having a bouncer at the club door who checks IDs (Security) so the bartender (Your Server) can focus solely on pouring drinks (Serving Content).&lt;/p&gt;
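
&lt;p&gt;In Terraform terms (carrying over from my earlier posts), SSL termination is roughly an HTTPS listener on the load balancer that forwards to an HTTP target group. This is only a sketch; the resource names and referenced resources are my own assumptions:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# HTTPS on the outside: the load balancer holds the certificate
resource "aws_lb_listener" "https" {
  load_balancer_arn = aws_lb.app.arn # assumes an aws_lb named "app"
  port              = 443
  protocol          = "HTTPS"
  certificate_arn   = aws_acm_certificate.site.arn # assumes an ACM certificate named "site"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.app.arn
  }
}

# Plain HTTP on the inside: traffic to the instances is already decrypted
resource "aws_lb_target_group" "app" {
  name     = "app-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.main.id # assumes a VPC named "main"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;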

&lt;p&gt;&lt;strong&gt;2. Managed Certificates (No More Panic)&lt;/strong&gt;&lt;br&gt;
In the old days, you had to buy a certificate, upload it to your server, and set a calendar reminder to renew it in 365 days. If you forgot, your site would go down.&lt;/p&gt;

&lt;p&gt;Modern cloud providers (like AWS Certificate Manager or Vercel) offer Managed Certificates.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;They are usually free.&lt;/li&gt;
&lt;li&gt;They automatically renew themselves before they expire.&lt;/li&gt;
&lt;li&gt;You don't touch any private keys; the cloud provider handles the security for you.&lt;/li&gt;
&lt;/ul&gt;
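
&lt;p&gt;With AWS Certificate Manager, for instance, requesting an auto-renewing certificate is only a few lines of Terraform. A sketch, with a placeholder domain:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_acm_certificate" "site" {
  domain_name       = "example.com" # placeholder domain
  validation_method = "DNS"         # once the DNS record is in place, ACM renews automatically

  lifecycle {
    create_before_destroy = true # request the new cert before dropping the old one
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;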

&lt;p&gt;&lt;strong&gt;3. End-to-End Encryption (Zero Trust)&lt;/strong&gt;&lt;br&gt;
Wait, didn't I just say the Load Balancer sends plain HTTP to the server? Is that safe? Usually, yes, because that traffic happens inside a private, secure cloud network (VPC) that outsiders can't access.&lt;/p&gt;

&lt;p&gt;However, for highly sensitive data (like banking or healthcare), we use End-to-End Encryption.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Load Balancer decrypts the traffic to inspect it.&lt;/li&gt;
&lt;li&gt;It re-encrypts it before sending it to your backend server.&lt;/li&gt;
&lt;li&gt;Your server decrypts it again. This ensures that even if a hacker gets inside your private cloud network, they still can't read the internal traffic.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Understanding SSL/TLS is crucial for any developer. It ensures that the internet remains a safe place for commerce, communication, and privacy.&lt;br&gt;
Whether you are configuring a local server with Let's Encrypt or setting up an enterprise Load Balancer on AWS, the core concept remains the same: Encryption = Trust.&lt;br&gt;
If you are building a website today, HTTPS isn't optional—it's standard. With modern tools making certificates free and auto-renewing, there is no excuse to run a site on plain HTTP anymore.&lt;br&gt;
I hope this cleared up the confusion between the acronyms and gave you a glimpse into how cloud security works!&lt;/p&gt;

&lt;p&gt;I’m still exploring the world of Cloud and DevOps, so if you have any tips or resources that helped you understand security better, please drop them in the comments. Let's learn together!&lt;/p&gt;

</description>
      <category>security</category>
      <category>cloud</category>
      <category>networking</category>
      <category>aws</category>
    </item>
  </channel>
</rss>
