<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Hajarat </title>
    <description>The latest articles on Forem by Hajarat  (@hajixhayjhay).</description>
    <link>https://forem.com/hajixhayjhay</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2981350%2F2d364121-ea27-4c49-94c1-cab84aeec756.png</url>
      <title>Forem: Hajarat </title>
      <link>https://forem.com/hajixhayjhay</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/hajixhayjhay"/>
    <language>en</language>
    <item>
      <title>Mastering Docker: A Complete, Professional Guide to Containers, Networks, Volumes, Dockerfiles, and Docker Compose</title>
      <dc:creator>Hajarat </dc:creator>
      <pubDate>Tue, 18 Nov 2025 12:20:06 +0000</pubDate>
      <link>https://forem.com/hajixhayjhay/mastering-docker-a-complete-professional-guide-to-containers-networks-volumes-dockerfiles-and-14cd</link>
      <guid>https://forem.com/hajixhayjhay/mastering-docker-a-complete-professional-guide-to-containers-networks-volumes-dockerfiles-and-14cd</guid>
      <description>&lt;p&gt;Docker has become one of the most essential skills for DevOps Engineers, Cloud Engineers, Developers, and Platform Teams. It simplifies application packaging, streamlines deployments, supports microservices architectures, and enables environments that are predictable and portable. This blog provides a complete, professional overview of Docker—from core concepts to advanced usage—designed for engineers already working in cloud and DevOps environments.&lt;/p&gt;

&lt;h3&gt;1. Introduction: Why Docker Matters in Modern Infrastructure&lt;/h3&gt;

&lt;p&gt;In today’s technology landscape, businesses demand rapid deployments, consistent environments, and applications that scale effortlessly. Traditional deployment models fail to keep up due to dependency conflicts, OS variations, and infrastructure complexity.&lt;/p&gt;

&lt;p&gt;Docker solves these challenges by enabling containerization—a lightweight, portable, and consistent unit that bundles everything an application needs to run.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Benefits of Docker&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Consistency across environments: "Works on my machine" becomes irrelevant.&lt;/li&gt;
&lt;li&gt;Improved CI/CD workflows with artifact-based deployments.&lt;/li&gt;
&lt;li&gt;Lightweight and fast compared to VMs.&lt;/li&gt;
&lt;li&gt;Scales easily with orchestrators like Kubernetes and ECS.&lt;/li&gt;
&lt;li&gt;Better utilization of resources.&lt;/li&gt;
&lt;li&gt;Enhanced developer productivity.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Docker has become a foundational layer for DevOps and cloud-native development, making it critical for engineers to master container images, networks, volumes, and multi-service orchestration.&lt;/p&gt;

&lt;h3&gt;2. Understanding Docker Architecture&lt;/h3&gt;

&lt;p&gt;Docker’s architecture is built on three essential components:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2.1 Docker Engine&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Docker Engine is the runtime responsible for building, running, and managing containers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2.2 Docker Images&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Read-only templates used to create containers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2.3 Docker Containers&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Running instances of images—lightweight, isolated, and fast.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2.4 Key Components Summary&lt;/strong&gt;&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Component&lt;/th&gt;&lt;th&gt;Description&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Image&lt;/td&gt;&lt;td&gt;Blueprint for the container (OS + application + dependencies).&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Container&lt;/td&gt;&lt;td&gt;Running process based on an image.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Registry&lt;/td&gt;&lt;td&gt;Storage location for images (Docker Hub, ECR, GCR, ACR).&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Dockerfile&lt;/td&gt;&lt;td&gt;Instructions for building a custom image.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Volumes&lt;/td&gt;&lt;td&gt;Persistent storage for containers.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Networks&lt;/td&gt;&lt;td&gt;Communication layer for containers.&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;h3&gt;3. Working with Dockerfiles: Building Custom Images&lt;/h3&gt;

&lt;p&gt;A Dockerfile automates the creation of custom images. It defines how an image is built, what dependencies it contains, and how the application runs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3.1 Sample Dockerfile&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD [ "npm", "start" ]
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;3.2 Key Instructions in a Dockerfile&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;FROM&lt;/code&gt; – base image&lt;/li&gt;
&lt;li&gt;&lt;code&gt;WORKDIR&lt;/code&gt; – working directory inside the container&lt;/li&gt;
&lt;li&gt;&lt;code&gt;COPY&lt;/code&gt; – copy local files into the container&lt;/li&gt;
&lt;li&gt;&lt;code&gt;RUN&lt;/code&gt; – execute commands at build time&lt;/li&gt;
&lt;li&gt;&lt;code&gt;EXPOSE&lt;/code&gt; – document exposed ports&lt;/li&gt;
&lt;li&gt;&lt;code&gt;CMD&lt;/code&gt; / &lt;code&gt;ENTRYPOINT&lt;/code&gt; – define container startup commands&lt;/li&gt;
&lt;/ul&gt;
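
&lt;p&gt;To make this concrete, the sample Dockerfile can be built and run with commands like the following (the my-node-app tag is just an example):&lt;/p&gt;

```shell
# Build an image from the Dockerfile in the current directory
docker build -t my-node-app .

# Run it in the background, mapping host port 3000 to the exposed container port
docker run -d --name my-node-app -p 3000:3000 my-node-app

# Check logs and running status
docker logs my-node-app
docker ps
```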

&lt;p&gt;&lt;strong&gt;3.3 Best Practices&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use lightweight base images (e.g., Alpine).&lt;/li&gt;
&lt;li&gt;Leverage .dockerignore to reduce image bloat.&lt;/li&gt;
&lt;li&gt;Use multi-stage builds to optimize size.&lt;/li&gt;
&lt;li&gt;Run processes as non-root users for security.&lt;/li&gt;
&lt;/ul&gt;
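
&lt;p&gt;A sketch combining two of these practices, multi-stage builds and a non-root user (the node:18-alpine image ships a built-in node user, and the npm run build script is an assumption about the app):&lt;/p&gt;

```dockerfile
# Build stage: install full dependencies and build the app
FROM node:18-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage: carry over only what the app needs and drop root
FROM node:18-alpine
WORKDIR /app
COPY --from=build /app ./
USER node
EXPOSE 3000
CMD [ "npm", "start" ]
```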

&lt;h3&gt;4. Docker Storage: Volumes &amp;amp; Bind Mounts&lt;/h3&gt;

&lt;p&gt;Containers are ephemeral; once stopped or deleted, data inside disappears. Docker provides two key mechanisms to persist or share data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4.1 Bind Mounts&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Bind mounts map a directory from the host machine into the container.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;docker run -v /host/data:/container/data nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;✔ Good for development and real-time code sync.&lt;br&gt;
✘ Not recommended for production due to host dependency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4.2 Docker Volumes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Volumes are managed by Docker and stored under /var/lib/docker/volumes.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;docker volume create app-data
docker run -v app-data:/var/lib/mysql mysql:8.0
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;✔ Ideal for production&lt;br&gt;
✔ Portable and easier to back up&lt;br&gt;
✔ Independent of host directory structure&lt;/p&gt;
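
&lt;p&gt;Volumes are also easy to inspect and back up. The following sketch archives the app-data volume to the current directory using a throwaway Alpine container (assuming the alpine image is available):&lt;/p&gt;

```shell
# Show where Docker stores the volume on disk
docker volume inspect app-data

# Back up the volume by mounting it read-only into a temporary container
docker run --rm -v app-data:/data:ro -v "$(pwd)":/backup alpine \
  tar czf /backup/app-data.tgz -C /data .
```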

&lt;p&gt;&lt;strong&gt;4.3 Choosing the Right Storage&lt;/strong&gt;&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;&lt;tr&gt;&lt;th&gt;Use Case&lt;/th&gt;&lt;th&gt;Bind Mount&lt;/th&gt;&lt;th&gt;Docker Volume&lt;/th&gt;&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Local development&lt;/td&gt;&lt;td&gt;⭐⭐⭐⭐⭐&lt;/td&gt;&lt;td&gt;⭐⭐&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Production apps&lt;/td&gt;&lt;td&gt;⭐&lt;/td&gt;&lt;td&gt;⭐⭐⭐⭐⭐&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Database storage&lt;/td&gt;&lt;td&gt;⭐&lt;/td&gt;&lt;td&gt;⭐⭐⭐⭐⭐&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;CI/CD&lt;/td&gt;&lt;td&gt;⭐⭐⭐&lt;/td&gt;&lt;td&gt;⭐⭐⭐⭐&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;h3&gt;5. Docker Networking Explained&lt;/h3&gt;

&lt;p&gt;Networking is one of Docker’s most powerful features, enabling communication between containers and external systems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5.1 Default Docker Networks&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Docker creates three networks by default:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;bridge&lt;/code&gt; – default network for containers&lt;/li&gt;
&lt;li&gt;&lt;code&gt;host&lt;/code&gt; – shares host networking&lt;/li&gt;
&lt;li&gt;&lt;code&gt;none&lt;/code&gt; – fully isolated&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;5.2 Custom Bridge Networks&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Creating custom networks improves isolation, security, and service discovery.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;docker network create app-network
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Attach containers:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;docker run -d --name mysql --network app-network mysql:8.0
docker run -d --name api --network app-network my-api-image
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;5.3 Benefits of Custom Networks&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Built‑in DNS (containers resolve each other by name)&lt;/li&gt;
&lt;li&gt;Clean separation of environments&lt;/li&gt;
&lt;li&gt;More secure than using default networks&lt;/li&gt;
&lt;/ul&gt;
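
&lt;p&gt;The built-in DNS is easy to verify once two containers share a network, as in the example above; these commands assume the api image includes the getent and ping utilities:&lt;/p&gt;

```shell
# From the api container, resolve and reach the mysql container by name
docker exec api getent hosts mysql
docker exec api ping -c 1 mysql
```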

&lt;h3&gt;6. Multi-Container Applications with Docker Compose&lt;/h3&gt;

&lt;p&gt;Modern applications rarely run as a single container. Docker Compose allows defining multi-container architectures in a simple YAML file.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6.1 Example docker-compose.yml&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;version: "3.8"
services:
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: password
    volumes:
      - db-data:/var/lib/mysql
    networks:
      - app-net

  api:
    build: ./api
    depends_on:
      - db
    networks:
      - app-net

  ui:
    build: ./ui
    ports:
      - "3000:3000"
    depends_on:
      - api
    networks:
      - app-net

networks:
  app-net:

volumes:
  db-data:
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;6.2 Key Features&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Define services, volumes, and networks in one file&lt;/li&gt;
&lt;li&gt;Easy CI/CD integration&lt;/li&gt;
&lt;li&gt;One command to start your full app:&lt;/li&gt;
&lt;/ul&gt;

&lt;pre&gt;&lt;code&gt;docker-compose up -d
&lt;/code&gt;&lt;/pre&gt;
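
&lt;p&gt;A few companion commands cover the rest of the lifecycle (on current Docker releases, docker compose without the hyphen behaves the same way):&lt;/p&gt;

```shell
docker-compose ps           # list service status
docker-compose logs -f api  # follow logs for one service
docker-compose down         # stop and remove containers and networks
```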

&lt;h3&gt;7. Real-World Use Cases of Docker&lt;/h3&gt;

&lt;p&gt;Docker is widely used across industries and engineering teams:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7.1 Development Environments&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Developers use Docker to run isolated language runtimes without installing dependencies on their host machine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7.2 Microservices Architecture&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Each service runs in its own container, allowing independent scaling and deployment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7.3 CI/CD Pipelines&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Docker images are used as immutable artifacts for deployment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7.4 Cloud Deployments&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Platforms like AWS ECS, EKS, Fargate, and Lambda support Docker images.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7.5 Infrastructure Portability&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Move applications between cloud providers effortlessly.&lt;/p&gt;

&lt;h3&gt;8. Why Docker Is Essential for Modern Engineers&lt;/h3&gt;

&lt;p&gt;Docker has shifted the way teams build, ship, and operate applications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Reasons It’s Critical in 2025 and Beyond&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Foundation of Kubernetes workloads.&lt;/li&gt;
&lt;li&gt;Required skill for DevOps and Cloud engineering roles.&lt;/li&gt;
&lt;li&gt;Enables faster iteration, predictable builds, and environment parity.&lt;/li&gt;
&lt;li&gt;Reduces deployment issues and increases team productivity.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Whether you're deploying microservices, working with CI/CD, or building scalable cloud architectures, Docker is at the center of modern compute workflows.&lt;/p&gt;

&lt;h3&gt;9. Final Thoughts&lt;/h3&gt;

&lt;p&gt;Docker has become more than a containerization tool—it’s part of a cultural shift toward cloud-native, scalable, and automated engineering. Mastering Docker, networks, volumes, Dockerfiles, and Docker Compose gives you the confidence and capability to design and deploy modern applications with speed and reliability.&lt;/p&gt;

&lt;p&gt;If you truly want to excel in DevOps, Cloud, Platform Engineering, or Backend Development, Docker is not optional—it is fundamental.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>docker</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Automating EpicBook Deployment with Terraform, Ansible, and Azure DevOps Pipelines</title>
      <dc:creator>Hajarat </dc:creator>
      <pubDate>Mon, 10 Nov 2025 06:47:27 +0000</pubDate>
      <link>https://forem.com/hajixhayjhay/automating-epicbook-deployment-with-terraform-ansible-and-azure-devops-pipelines-29j6</link>
      <guid>https://forem.com/hajixhayjhay/automating-epicbook-deployment-with-terraform-ansible-and-azure-devops-pipelines-29j6</guid>
      <description>&lt;p&gt;Automating EpicBook Deployment with Terraform, Ansible, and Azure DevOps Pipelines&lt;/p&gt;

&lt;p&gt;In modern DevOps workflows, managing infrastructure and application deployments efficiently and securely is critical. In my recent capstone project, I automated the end-to-end deployment of the EpicBook application using Azure DevOps Pipelines, Terraform, and Ansible, applying a dual-repository model to separate infrastructure and application responsibilities.&lt;/p&gt;

&lt;p&gt;Here’s how I structured and executed the project:&lt;/p&gt;

&lt;p&gt;Step 1️⃣ — Create and Connect Repositories&lt;/p&gt;

&lt;p&gt;I used two GitHub repositories to organize my workflow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;epicbook-azure-with-ansible&lt;/code&gt; – contains all Terraform configuration files for provisioning Azure resources.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Epicbook-ansible&lt;/code&gt; – contains the EpicBook application source code and Ansible playbooks to configure VMs and deploy the app.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Instead of importing into Azure Repos, I connected Azure DevOps directly to my GitHub account using a GitHub Service Connection. This allowed Azure Pipelines to automatically pull the latest code and reduced manual steps, creating a seamless CI/CD workflow.&lt;/p&gt;

&lt;p&gt;Step 2️⃣ — Create Azure Resource Manager (ARM) Service Connection&lt;/p&gt;

&lt;p&gt;In Azure DevOps → Project Settings → Service Connections → New Service Connection → Azure Resource Manager, I created a service connection using Service Principal (automatic) by providing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tenant ID&lt;/li&gt;
&lt;li&gt;Subscription ID&lt;/li&gt;
&lt;li&gt;Client ID&lt;/li&gt;
&lt;li&gt;Client Secret&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This service connection enabled Terraform to authenticate securely and provision Azure resources automatically during pipeline execution.&lt;/p&gt;

&lt;p&gt;Step 3️⃣ — Upload SSH Keys and Sensitive Configuration as Secure Files&lt;/p&gt;

&lt;p&gt;To manage access securely, I uploaded sensitive files to Azure DevOps Secure Files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;id_rsa_azure&lt;/code&gt; – private key for VM access&lt;/li&gt;
&lt;li&gt;&lt;code&gt;id_rsa_azure.pub&lt;/code&gt; – public key&lt;/li&gt;
&lt;li&gt;&lt;code&gt;dev.tfvars.txt&lt;/code&gt; – contains sensitive Terraform variables (renamed in the pipeline to dev.tfvars)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;web.yml.txt&lt;/code&gt; – contains Ansible variables for deployment (renamed in the pipeline to web.yml)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;⚠️ Security Note: Never commit files like dev.tfvars or web.yml with secrets to your repository. Uploading them as Secure Files ensures they are encrypted and only available during pipeline runs. This is crucial for protecting credentials, database passwords, and other sensitive information.&lt;/p&gt;
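
&lt;p&gt;Inside the pipeline, the rename-and-restrict step is plain shell. A minimal sketch (the secure-file path is simulated here; in Azure DevOps the DownloadSecureFile task exposes the real path via its secureFilePath output variable):&lt;/p&gt;

```shell
# Simulate the file the DownloadSecureFile task would place on the agent.
echo 'db_password = "example"' > dev.tfvars.txt

# Rename it to the name Terraform expects and lock down permissions.
mv dev.tfvars.txt dev.tfvars
chmod 600 dev.tfvars
ls -l dev.tfvars
```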

&lt;p&gt;Step 4️⃣ — Infra Repository Setup (Terraform)&lt;/p&gt;

&lt;p&gt;In epicbook-azure-with-ansible, I structured the Terraform code into modules:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;network&lt;/code&gt; → resource group, VNet, and subnets&lt;/li&gt;
&lt;li&gt;&lt;code&gt;compute&lt;/code&gt; → frontend and backend Ubuntu VMs&lt;/li&gt;
&lt;li&gt;&lt;code&gt;database&lt;/code&gt; → MySQL database (PaaS)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I defined outputs in outputs.tf to expose:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;frontend_public_ip&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;mysql_fqdn&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These outputs were later used to configure the application via Ansible.&lt;/p&gt;

&lt;p&gt;Step 5️⃣ — Create Infra Pipeline in Azure DevOps&lt;/p&gt;

&lt;p&gt;I created a YAML pipeline for the infra repository:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Installed Terraform&lt;/li&gt;
&lt;li&gt;Downloaded the SSH public key and dev.tfvars.txt from Secure Files&lt;/li&gt;
&lt;li&gt;Renamed dev.tfvars.txt to dev.tfvars within the pipeline&lt;/li&gt;
&lt;li&gt;Ran &lt;code&gt;terraform init&lt;/code&gt;, &lt;code&gt;terraform plan&lt;/code&gt;, and &lt;code&gt;terraform apply&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
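
&lt;p&gt;The Terraform stage of that pipeline maps to commands like these, with the var-file matching the renamed Secure File (saving the plan with -out and applying it is a common pipeline-friendly variant of plan/apply):&lt;/p&gt;

```shell
terraform init
terraform plan -var-file=dev.tfvars -out=tfplan
terraform apply tfplan    # applying a saved plan does not prompt for approval
```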

&lt;p&gt;The pipeline successfully provisioned:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Frontend and backend Ubuntu VMs&lt;/li&gt;
&lt;li&gt;MySQL database&lt;/li&gt;
&lt;li&gt;VNet and subnets&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Terraform outputs (frontend_public_ip, mysql_fqdn) were visible in the pipeline logs.&lt;/p&gt;

&lt;p&gt;Step 6️⃣ — Manual Handoff of Terraform Outputs&lt;/p&gt;

&lt;p&gt;I copied the Terraform outputs from the infra pipeline logs and updated the app repository:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;inventory.ini&lt;/code&gt; → frontend and backend IPs&lt;/li&gt;
&lt;li&gt;&lt;code&gt;group_vars/web.yml&lt;/code&gt; → prepared for database connection (kept secrets in Secure Files, not committed)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These updates ensured Ansible could connect to the correct infrastructure without exposing sensitive values in the repository.&lt;/p&gt;
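
&lt;p&gt;The resulting inventory looked roughly like this (the placeholders stand in for the copied Terraform outputs, and the azureuser login name is illustrative):&lt;/p&gt;

```ini
[frontend]
FRONTEND_PUBLIC_IP ansible_user=azureuser   ; value copied from Terraform output

[backend]
BACKEND_IP ansible_user=azureuser
```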

&lt;p&gt;Step 7️⃣ — App Repository Setup (Ansible)&lt;/p&gt;

&lt;p&gt;The Epicbook-ansible repository contained:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;EpicBook application code for frontend and backend&lt;/li&gt;
&lt;li&gt;Ansible roles and playbooks:
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;common&lt;/code&gt; → installed common packages&lt;/li&gt;
&lt;li&gt;&lt;code&gt;epicbook&lt;/code&gt; → deployed the app and configured the database connection&lt;/li&gt;
&lt;li&gt;&lt;code&gt;nginx&lt;/code&gt; → configured Nginx to serve the application&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The roles were organized for reusability and clean structure, following best practices.&lt;/p&gt;

&lt;p&gt;Step 8️⃣ — Create App Pipeline in Azure DevOps&lt;/p&gt;

&lt;p&gt;For the app repository, the YAML pipeline:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Installed Ansible&lt;/li&gt;
&lt;li&gt;Downloaded the SSH private key and web.yml.txt from Secure Files&lt;/li&gt;
&lt;li&gt;Renamed web.yml.txt to web.yml and moved it to group_vars/web.yml&lt;/li&gt;
&lt;li&gt;Set permissions on the key (&lt;code&gt;chmod 600&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Ran the main Ansible playbook&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The pipeline connected to both VMs and completed all tasks successfully.&lt;/p&gt;
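
&lt;p&gt;Those final steps boil down to a couple of shell commands; the playbook name here (site.yml) is illustrative rather than the repository’s actual filename:&lt;/p&gt;

```shell
# SSH refuses keys that are world-readable, so restrict the private key first.
chmod 600 id_rsa_azure

# Run the main playbook against the inventory, authenticating with the key.
ansible-playbook -i inventory.ini site.yml --private-key id_rsa_azure
```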

&lt;p&gt;Step 9️⃣ — Validate Application Deployment&lt;/p&gt;

&lt;p&gt;Finally, I:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Copied the frontend VM public IP from Terraform outputs&lt;/li&gt;
&lt;li&gt;Opened the EpicBook app in a browser&lt;/li&gt;
&lt;li&gt;Submitted a test post to verify backend and database connectivity&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✅ The full workflow—from infrastructure provisioning to app deployment—was fully automated and functional.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Takeaways&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Separation of Concerns&lt;/strong&gt; – Using two repositories and two pipelines (Infra vs. App) mirrors enterprise best practices.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Secure Handling of Secrets&lt;/strong&gt; – dev.tfvars and web.yml were kept out of the repository and accessed securely via Secure Files, preventing accidental leaks.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;End-to-End CI/CD&lt;/strong&gt; – Terraform provisioning triggers application deployment through Azure Pipelines, demonstrating a production-grade workflow.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Reusable and Modular Code&lt;/strong&gt; – Terraform modules and Ansible roles ensure maintainability and scalability.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://github.com/Hajixhayjhay/epicbook-azure-with-ansible.git" rel="noopener noreferrer"&gt;https://github.com/Hajixhayjhay/epicbook-azure-with-ansible.git&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/Hajixhayjhay/epicbook-ansible.git" rel="noopener noreferrer"&gt;https://github.com/Hajixhayjhay/epicbook-ansible.git&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>cicd</category>
      <category>azure</category>
      <category>automation</category>
    </item>
    <item>
      <title>Strengthening Cloud Security: Authenticating Terraform to Azure Using a Service Principal</title>
      <dc:creator>Hajarat </dc:creator>
      <pubDate>Thu, 09 Oct 2025 21:51:18 +0000</pubDate>
      <link>https://forem.com/hajixhayjhay/strengthening-cloud-security-authenticating-terraform-to-azure-using-a-service-principal-gc5</link>
      <guid>https://forem.com/hajixhayjhay/strengthening-cloud-security-authenticating-terraform-to-azure-using-a-service-principal-gc5</guid>
      <description>&lt;p&gt;Strengthening Cloud Security: Authenticating Terraform to Azure Using a Service Principal&lt;/p&gt;

&lt;p&gt;As someone who has spent most of my hands-on time deploying on AWS, I’ve recently been expanding my expertise into Microsoft Azure. My latest focus was learning how to authenticate Terraform to Azure securely — using a Service Principal instead of personal credentials.&lt;/p&gt;

&lt;p&gt;🔹 Why This Matters&lt;/p&gt;

&lt;p&gt;When managing infrastructure at scale, relying on personal Azure CLI logins is neither secure nor sustainable. Terraform needs a non-human identity — a Service Principal — to interact with Azure in an automated and auditable way.&lt;/p&gt;

&lt;p&gt;⚙️ What I Did&lt;/p&gt;

&lt;p&gt;1️⃣ Created a Service Principal with the Contributor role at subscription scope using:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;az ad sp create-for-rbac --name "sp-terraform-epicbook" --role "Contributor"
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;2️⃣ Exported credentials to environment variables:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;export ARM_CLIENT_ID=""
export ARM_CLIENT_SECRET=""
export ARM_TENANT_ID=""
export ARM_SUBSCRIPTION_ID=""
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;3️⃣ Logged out of Azure CLI and verified Terraform could still deploy:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;az logout
terraform init
terraform apply -auto-approve
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The resource group was successfully created — confirming that Terraform authenticated purely through the Service Principal.&lt;/p&gt;

&lt;p&gt;🔐 Security Practices I Applied&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Adopted least-privilege access for the SP.&lt;/li&gt;
&lt;li&gt;Avoided committing secrets to Git; stored them only in environment variables.&lt;/li&gt;
&lt;li&gt;Learned how to rotate secrets and clean up SP credentials securely.&lt;/li&gt;
&lt;li&gt;Explored how OIDC-based authentication can eliminate long-lived secrets in production pipelines.&lt;/li&gt;
&lt;/ul&gt;
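
&lt;p&gt;For rotation, the Azure CLI can mint a new client secret in one command (APP_ID is a placeholder for the Service Principal’s application ID):&lt;/p&gt;

```shell
# Issue a fresh client secret; the old one should then be retired.
az ad sp credential reset --id "$APP_ID"
```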

&lt;p&gt;🧠 Reflection&lt;/p&gt;

&lt;p&gt;Even though Azure was new territory, the DevOps principles carried over seamlessly — identity management, IaC automation, and secure secret handling are universal.&lt;br&gt;
Understanding how Terraform integrates with Azure via a Service Principal is a key foundation for future CI/CD automation work.&lt;/p&gt;

</description>
      <category>dev</category>
      <category>aws</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Deploying EpicBook with Production-Grade Terraform</title>
      <dc:creator>Hajarat </dc:creator>
      <pubDate>Thu, 09 Oct 2025 21:42:25 +0000</pubDate>
      <link>https://forem.com/hajixhayjhay/deploying-epicbook-with-production-grade-terraform-13aa</link>
      <guid>https://forem.com/hajixhayjhay/deploying-epicbook-with-production-grade-terraform-13aa</guid>
      <description>&lt;p&gt;Deploying EpicBook with Production-Grade Terraform&lt;br&gt;
From Manual Setup to Automated Perfection: Building EpicBook with Production-Grade Terraform&lt;/p&gt;

&lt;p&gt;After working on multiple Terraform-based deployments across AWS and Azure, I wanted to push further — to structure infrastructure like a true production environment.&lt;br&gt;
The goal: Deploy EpicBook (a Node.js + MySQL app) using Terraform modules, workspaces, and remote backends, all in a way that scales cleanly between development and production.&lt;/p&gt;

&lt;p&gt;🎯 Objective&lt;/p&gt;

&lt;p&gt;To build a complete, secure, and repeatable Terraform stack for EpicBook that includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Modularized infrastructure (network, database, compute)&lt;/li&gt;
&lt;li&gt;Dynamic variables and environment-aware configurations&lt;/li&gt;
&lt;li&gt;Remote backend with state locking&lt;/li&gt;
&lt;li&gt;Independent dev and prod environments managed through workspaces&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🧩 1️⃣ Building the Network Module&lt;/p&gt;

&lt;p&gt;I started with a VNet (10.0.0.0/16) and two subnets:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;public-subnet&lt;/code&gt; for the VM&lt;/li&gt;
&lt;li&gt;&lt;code&gt;mysql-subnet&lt;/code&gt; for the private Flexible Server&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Network Security Groups (NSGs) were set up for strict access control:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Public NSG: only port 22 from my IP and 80 from the internet&lt;/li&gt;
&lt;li&gt;Private NSG: only 3306 from the app subnet&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This structure provided a solid foundation for secure and isolated networking.&lt;/p&gt;

&lt;p&gt;🛢️ 2️⃣ Private Database Module&lt;/p&gt;

&lt;p&gt;Next, I deployed Azure Database for MySQL – Flexible Server with private access (VNet integration).&lt;br&gt;
The database credentials and configurations were managed through Terraform variables — no hardcoding — and a Private DNS Zone linked the DB endpoint to the VNet.&lt;/p&gt;

&lt;p&gt;🖥️ 3️⃣ Compute / App Module&lt;/p&gt;

&lt;p&gt;The compute module handled:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creating an Ubuntu VM (B1s)&lt;/li&gt;
&lt;li&gt;Installing Node.js, Nginx, npm, git, and the MySQL client&lt;/li&gt;
&lt;li&gt;Cloning and deploying the EpicBook app&lt;/li&gt;
&lt;li&gt;Configuring Nginx to serve the frontend and proxy /api to the backend service&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This ensured both the SPA and API were accessible from the same endpoint.&lt;/p&gt;

&lt;p&gt;🧮 4️⃣ Workspaces, Backends &amp;amp; Locking&lt;/p&gt;

&lt;p&gt;One of the key production features was separating environments via Terraform Workspaces:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;terraform workspace new dev
terraform workspace new prod
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Each workspace used the same root module, with dynamic naming handled through locals and maps.&lt;/p&gt;
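
&lt;p&gt;A minimal sketch of that pattern (resource names and VM sizes here are illustrative, not the project’s actual values):&lt;/p&gt;

```hcl
locals {
  env = terraform.workspace # "dev" or "prod"

  # Per-environment settings looked up by workspace name
  vm_size = {
    dev  = "Standard_B1s"
    prod = "Standard_B2s"
  }
}

resource "azurerm_resource_group" "main" {
  name     = "epicbook-${local.env}-rg"
  location = "eastus"
}
```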

&lt;p&gt;The remote backend was configured on AWS S3 with state locking enabled via DynamoDB.&lt;br&gt;
This setup prevents conflicts when multiple engineers work on infrastructure simultaneously — a must-have for collaborative DevOps.&lt;/p&gt;

&lt;p&gt;🌐 5️⃣ Deployment and Verification&lt;/p&gt;

&lt;p&gt;After applying both dev and prod workspaces, I confirmed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Separate, isolated resources were created&lt;/li&gt;
&lt;li&gt;Terraform state was properly locked when accessed concurrently&lt;/li&gt;
&lt;li&gt;The EpicBook application loaded successfully in the browser&lt;/li&gt;
&lt;li&gt;The API routes responded correctly via the Nginx reverse proxy&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🧠 Reflections&lt;/p&gt;

&lt;p&gt;This assignment pulled together everything I’ve learned about Terraform — and pushed it to production-level rigor.&lt;br&gt;
The key takeaways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Modular design keeps infrastructure clean and maintainable&lt;/li&gt;
&lt;li&gt;Workspaces + backends make multi-environment management effortless&lt;/li&gt;
&lt;li&gt;State locking ensures team-safe deployments&lt;/li&gt;
&lt;li&gt;Security and automation go hand-in-hand in every IaC project&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;⚡ Next Step&lt;/p&gt;

&lt;p&gt;The natural next move is to integrate this stack into a CI/CD pipeline (GitHub Actions or Azure DevOps) so each environment can deploy automatically — versioned, tested, and monitored.&lt;/p&gt;

&lt;p&gt;In short:&lt;br&gt;
Terraform isn’t just about provisioning resources — it’s about building reliable, reusable, and secure infrastructure at scale.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
    </item>
    <item>
      <title>🚀 Deploying EpicBook on Microsoft Azure — My Full-Stack Cloud Journey</title>
      <dc:creator>Hajarat </dc:creator>
      <pubDate>Tue, 07 Oct 2025 17:31:10 +0000</pubDate>
      <link>https://forem.com/hajixhayjhay/deploying-epicbook-on-microsoft-azure-my-full-stack-cloud-journey-2kof</link>
      <guid>https://forem.com/hajixhayjhay/deploying-epicbook-on-microsoft-azure-my-full-stack-cloud-journey-2kof</guid>
      <description>&lt;h2&gt;
  
  
  🚀 Deploying &lt;em&gt;EpicBook&lt;/em&gt; on Microsoft Azure — My Full-Stack Cloud Journey
&lt;/h2&gt;

&lt;h3&gt;
  
  
  🌍 Introduction
&lt;/h3&gt;

&lt;p&gt;As part of my continuous journey in cloud computing and DevOps, I recently deployed a full-stack web application — &lt;strong&gt;EpicBook&lt;/strong&gt; — on &lt;strong&gt;Microsoft Azure&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Having previously built and deployed applications on AWS, I wanted to explore &lt;strong&gt;Azure’s infrastructure, networking, and database services&lt;/strong&gt; to strengthen my multi-cloud expertise.&lt;/p&gt;

&lt;h3&gt;
  
  
  🏗️ Project Overview
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;EpicBook&lt;/em&gt; is a full-stack web application built with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Frontend:&lt;/strong&gt; React
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backend:&lt;/strong&gt; Node.js &amp;amp; Express
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Database:&lt;/strong&gt; MySQL
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Web Server:&lt;/strong&gt; Nginx
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cloud Platform:&lt;/strong&gt; Microsoft Azure
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The goal was to host the frontend and backend on an &lt;strong&gt;Azure Virtual Machine&lt;/strong&gt;, connect securely to an &lt;strong&gt;Azure Database for MySQL&lt;/strong&gt;, and configure proper networking using &lt;strong&gt;Virtual Networks&lt;/strong&gt; and &lt;strong&gt;Network Security Groups&lt;/strong&gt; (NSGs).&lt;/p&gt;




&lt;h3&gt;
  
  
  ⚙️ Step 1 — Setting Up the Infrastructure
&lt;/h3&gt;

&lt;p&gt;I began by provisioning the foundational Azure resources:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Created a Resource Group&lt;/strong&gt; to organize all resources for the project.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Configured a Virtual Network (VNet)&lt;/strong&gt; with two subnets — one for the VM and one for the database.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Set up Network Security Groups (NSGs)&lt;/strong&gt; to control inbound and outbound traffic, ensuring only necessary ports (e.g., SSH, HTTP, HTTPS) were open.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Provisioned an Ubuntu 22.04 LTS Virtual Machine&lt;/strong&gt; as the compute resource for the application.
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This setup provided a secure, isolated environment for both the application and database.&lt;/p&gt;




&lt;h3&gt;
  
  
  🧩 Step 2 — Installing the Application Stack
&lt;/h3&gt;

&lt;p&gt;After SSHing into the VM, I installed and configured the required tools:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;nginx &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;nodejs npm &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;mysql-client &lt;span class="nt"&gt;-y&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Then, I cloned the EpicBook repository from GitHub and installed all backend dependencies&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
git clone &amp;lt;your-repo-url&amp;gt;
&lt;span class="nb"&gt;cd &lt;/span&gt;theepicbook
npm &lt;span class="nb"&gt;install&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;h3&gt;💾 Step 3 — Configuring the Database Connection&lt;/h3&gt;

&lt;p&gt;I provisioned Azure Database for MySQL – Flexible Server and configured private VNet access for security. Then, I updated the config/config.json file in the project to include the database credentials:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="s2"&gt;"development"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="s2"&gt;"username"&lt;/span&gt;: &lt;span class="s2"&gt;"your_username"&lt;/span&gt;,
  &lt;span class="s2"&gt;"password"&lt;/span&gt;: &lt;span class="s2"&gt;"your_password"&lt;/span&gt;,
  &lt;span class="s2"&gt;"database"&lt;/span&gt;: &lt;span class="s2"&gt;"epicbook"&lt;/span&gt;,
  &lt;span class="s2"&gt;"host"&lt;/span&gt;: &lt;span class="s2"&gt;"epicbook-db.mysql.database.azure.com"&lt;/span&gt;,
  &lt;span class="s2"&gt;"dialect"&lt;/span&gt;: &lt;span class="s2"&gt;"mysql"&lt;/span&gt;,
  &lt;span class="s2"&gt;"dialectOptions"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"ssl"&lt;/span&gt;: &lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="s2"&gt;"rejectUnauthorized"&lt;/span&gt;: &lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;After configuration, I started the backend server and verified the database connection using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
node server.js

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
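&lt;p&gt;Since the MySQL client was installed earlier, connectivity can also be verified directly from the VM before starting the app. A minimal sketch, assuming the placeholder host, username, and database from the config above:&lt;/p&gt;

```shell
#!/usr/bin/env sh
# Sketch: verify the Azure MySQL connection over SSL from the VM.
# Host, user, and database are the placeholder values from config.json.
check_db() {
  host="$1"; user="$2"; db="$3"
  # --ssl-mode=REQUIRED enforces the encrypted connection Azure expects
  mysql --host="$host" --user="$user" --password \
        --ssl-mode=REQUIRED --database="$db" --execute="SELECT 1;"
}

# Usage (not run here):
# check_db epicbook-db.mysql.database.azure.com your_username epicbook
```

&lt;p&gt;A successful run returns a single row containing 1; an SSL or firewall problem surfaces here, before it can show up as an application error.&lt;/p&gt;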


&lt;p&gt;Step 4 — Configuring Nginx as a Reverse Proxy&lt;/p&gt;

&lt;p&gt;To serve the React frontend and route traffic to the Node.js backend, I configured Nginx as a reverse proxy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;nano /etc/nginx/sites-available/epicbook




server &lt;span class="o"&gt;{&lt;/span&gt;
  listen 80&lt;span class="p"&gt;;&lt;/span&gt;
  server_name _&lt;span class="p"&gt;;&lt;/span&gt;

  location / &lt;span class="o"&gt;{&lt;/span&gt;
    root /var/www/epicbook-frontend/build&lt;span class="p"&gt;;&lt;/span&gt;
    index index.html&lt;span class="p"&gt;;&lt;/span&gt;
    try_files &lt;span class="nv"&gt;$uri&lt;/span&gt; /index.html&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;

  location /api/ &lt;span class="o"&gt;{&lt;/span&gt;
    proxy_pass http://localhost:8080/&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then I enabled the site and restarted Nginx:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="nb"&gt;sudo ln&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; /etc/nginx/sites-available/epicbook /etc/nginx/sites-enabled/
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl restart nginx

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;✅ Step 5 — Testing the Deployment&lt;br&gt;
Once everything was configured, I accessed the public IP of the VM in a browser and verified that:&lt;br&gt;
The React frontend loaded successfully.&lt;br&gt;
The backend APIs responded correctly.&lt;br&gt;
The database stored and retrieved data securely via SSL.&lt;br&gt;
The application was now live and fully functional — hosted entirely on Microsoft Azure.&lt;/p&gt;

&lt;p&gt;💡 Key Learnings&lt;br&gt;
This project deepened my understanding of:&lt;br&gt;
Azure networking concepts (VNet, NSG, subnets)&lt;br&gt;
Secure database connectivity via private endpoints&lt;br&gt;
Nginx reverse proxy configuration for full-stack apps&lt;br&gt;
VM provisioning and Linux-based server management&lt;br&gt;
Multi-cloud architecture principles (Azure + AWS)&lt;/p&gt;

&lt;p&gt;🧠 Conclusion&lt;br&gt;
Deploying EpicBook on Azure was an exciting hands-on experience that combined my cloud, DevOps, and full-stack skills.&lt;br&gt;
It reinforced how vital it is to understand not only the application logic but also the infrastructure and networking layers that make scalable cloud deployments possible.&lt;/p&gt;
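&lt;p&gt;The manual checks from Step 5 can also be captured in a small smoke-test script. A sketch, assuming a hypothetical /api/health endpoint (adjust the paths to whatever your backend actually exposes):&lt;/p&gt;

```shell
#!/usr/bin/env sh
# Sketch of a deployment smoke test; the endpoint paths are assumptions.
check_endpoint() {
  # Print only the HTTP status code returned for the given URL
  curl -s -o /dev/null -w '%{http_code}' "$1"
}

# Usage against the VM's public IP (not run here):
# check_endpoint "http://VM_PUBLIC_IP/"            # expect 200 from the React app
# check_endpoint "http://VM_PUBLIC_IP/api/health"  # expect 200 from the backend
```

&lt;p&gt;Anything other than 200 from either check points at the Nginx config, the backend process, or the NSG rules before users ever notice.&lt;/p&gt;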

</description>
      <category>azure</category>
      <category>devops</category>
      <category>node</category>
      <category>react</category>
    </item>
    <item>
      <title>[Boost]</title>
      <dc:creator>Hajarat </dc:creator>
      <pubDate>Sun, 28 Sep 2025 05:15:19 +0000</pubDate>
      <link>https://forem.com/hajixhayjhay/-2f95</link>
      <guid>https://forem.com/hajixhayjhay/-2f95</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/hajixhayjhay" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2981350%2F2d364121-ea27-4c49-94c1-cab84aeec756.png" alt="hajixhayjhay"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/hajixhayjhay/automating-scalable-web-infrastructure-with-awd-cloudformation-3c78" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Automating Scalable Web Infrastructure with AWS CloudFormation&lt;/h2&gt;
      &lt;h3&gt;Hajarat  ・ Sep 28&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#a&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>a</category>
    </item>
    <item>
      <title>Automating Scalable Web Infrastructure with AWS CloudFormation</title>
      <dc:creator>Hajarat </dc:creator>
      <pubDate>Sun, 28 Sep 2025 05:08:23 +0000</pubDate>
      <link>https://forem.com/hajixhayjhay/automating-scalable-web-infrastructure-with-awd-cloudformation-3c78</link>
      <guid>https://forem.com/hajixhayjhay/automating-scalable-web-infrastructure-with-awd-cloudformation-3c78</guid>
      <description>&lt;p&gt;In modern cloud engineering, Infrastructure as Code (IaC) is no longer optional—it’s essential. As someone working in the cloud space, I’ve seen how manual resource provisioning can slow down deployment cycles and increase the risk of human error. This project demonstrates how I used AWS CloudFormation to deploy a dynamic website with a fully automated, scalable, and secure infrastructure.&lt;br&gt;
Project Overview&lt;br&gt;
The goal of this project was to deploy a dynamic website backed by a database, with a setup capable of automatically scaling based on traffic demand. Using CloudFormation, I automated the creation of all essential AWS resources:&lt;br&gt;
VPC &amp;amp; Subnets – for secure and isolated networking&lt;br&gt;
NAT Gateway – enabling private subnets to access the internet&lt;br&gt;
RDS Instance from Snapshot – ensuring continuity of database data&lt;br&gt;
Application Load Balancer (ALB) – distributing traffic across EC2 instances&lt;br&gt;
Auto Scaling Group (ASG) – managing dynamic application scaling&lt;br&gt;
Route 53 – handling DNS resolution for custom domains&lt;br&gt;
By treating infrastructure as code, every deployment is repeatable, versioned, and easy to maintain.&lt;br&gt;
Project Structure&lt;br&gt;
The project is organized into multiple CloudFormation YAML templates, each handling a specific aspect of the infrastructure:&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;&lt;th&gt;Template&lt;/th&gt;&lt;th&gt;Purpose&lt;/th&gt;&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;vpc.yaml&lt;/td&gt;&lt;td&gt;Defines the VPC, subnets, internet gateway, and route tables&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;nat-gateway.yaml&lt;/td&gt;&lt;td&gt;Configures the NAT Gateway and private subnet routing&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;rds-snapshot.yaml&lt;/td&gt;&lt;td&gt;Restores an RDS instance from a database snapshot&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;alb.yaml&lt;/td&gt;&lt;td&gt;Sets up an Application Load Balancer and target groups&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;asg.yaml&lt;/td&gt;&lt;td&gt;Creates an Auto Scaling Group with scaling policies&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;route-53.yaml&lt;/td&gt;&lt;td&gt;Configures Route 53 DNS records for the domain&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Deployment Steps&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Prepare the Templates
Ensure all parameters are correctly defined in the YAML files, including:
Instance types
VPC and subnet IDs
Database credentials
ALB and target group configurations&lt;/li&gt;
&lt;li&gt;Deploy the Stacks
Using the AWS Management Console:
Navigate to CloudFormation → Create Stack → Upload template.
Deploy each stack in sequence:
vpc.yaml
nat-gateway.yaml
rds-snapshot.yaml
alb.yaml
asg.yaml
route-53.yaml
Monitor stack creation; each stack may take several minutes.&lt;/li&gt;
&lt;li&gt;Access and Test
Retrieve the ALB URL from the console.
Open the URL in your browser to test the website.
Monitor the ASG to see automatic scaling in response to traffic.&lt;/li&gt;
&lt;li&gt;Logging &amp;amp; Monitoring&lt;br&gt;
CloudWatch is configured to monitor application health and performance metrics, providing visibility and alerting for potential issues.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Key Components in Detail&lt;br&gt;
VPC and Networking&lt;br&gt;
vpc.yaml creates a secure, isolated network with public and private subnets, route tables, and an Internet Gateway. This foundation ensures network traffic is correctly segmented and secured.&lt;br&gt;
NAT Gateway&lt;br&gt;
The NAT Gateway allows private instances to reach the internet for updates or external API calls while keeping them shielded from direct inbound traffic.&lt;br&gt;
RDS from Snapshot&lt;br&gt;
Using rds-snapshot.yaml, I restored a database from a snapshot. This ensures data persistence and allows the environment to replicate production-like conditions.&lt;br&gt;
Load Balancer &amp;amp; Auto Scaling&lt;br&gt;
The ALB distributes traffic to EC2 instances in the ASG. The Auto Scaling Group automatically adjusts instance counts based on load, ensuring high availability and cost efficiency.&lt;br&gt;
Route 53 DNS&lt;br&gt;
Finally, route-53.yaml allows the website to be accessed via a custom domain, routing users to the ALB efficiently.&lt;/p&gt;

&lt;p&gt;Lessons Learned&lt;br&gt;
CloudFormation enables full automation of complex infrastructure setups.&lt;br&gt;
IaC allows repeatable and predictable deployments—essential for production-grade environments.&lt;br&gt;
Monitoring and logging with CloudWatch are critical for scaling and health management.&lt;/p&gt;

&lt;p&gt;Conclusion&lt;br&gt;
This project reinforced why CloudFormation is a core tool for DevOps and cloud engineers. By automating infrastructure provisioning, I was able to focus on optimizing performance, security, and scalability instead of manual setup.&lt;br&gt;
For engineers looking to advance their AWS skills, mastering CloudFormation is a major step toward professional-grade IaC deployment.&lt;/p&gt;
&lt;/ol&gt;
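&lt;p&gt;The console steps above can equally be driven from the AWS CLI, which makes the deployment order explicit and repeatable. A sketch: the stack names are illustrative, and each template's required parameters would be supplied with --parameter-overrides:&lt;/p&gt;

```shell
#!/usr/bin/env sh
# Sketch: deploy the CloudFormation stacks in dependency order.
# Stack names are illustrative; per-template parameters are omitted.
TEMPLATES="vpc nat-gateway rds-snapshot alb asg route-53"

deploy_all() {
  for t in $TEMPLATES; do
    aws cloudformation deploy \
      --template-file "$t.yaml" \
      --stack-name "myapp-$t" \
      --capabilities CAPABILITY_NAMED_IAM
  done
}

# Usage (not run here): deploy_all
```

&lt;p&gt;Because aws cloudformation deploy waits for each stack to finish before returning, the loop naturally respects the dependency order listed above.&lt;/p&gt;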

</description>
      <category>a</category>
    </item>
    <item>
      <title>🔹 The Importance of AWS in Modern Cloud Computing 🔹</title>
      <dc:creator>Hajarat </dc:creator>
      <pubDate>Sun, 21 Sep 2025 18:10:21 +0000</pubDate>
      <link>https://forem.com/hajixhayjhay/the-importance-of-aws-in-modern-cloud-computing-18b1</link>
      <guid>https://forem.com/hajixhayjhay/the-importance-of-aws-in-modern-cloud-computing-18b1</guid>
      <description>&lt;p&gt;Amazon Web Services (AWS): Powering Modern Cloud Computing&lt;br&gt;
In today’s fast-paced digital world, cloud computing is no longer optional—it’s essential. Among the cloud platforms available, Amazon Web Services (AWS) stands out as the most comprehensive and widely adopted solution, powering startups, enterprises, and global applications alike.&lt;br&gt;
Whether you’re a developer, IT professional, or business leader, understanding AWS is crucial for building scalable, reliable, and cost-efficient applications.&lt;br&gt;
Why AWS Matters&lt;br&gt;
AWS provides a suite of cloud services that allow organizations to:&lt;br&gt;
Scale Easily: Handle traffic spikes or seasonal demand without over-provisioning.&lt;br&gt;
Ensure Reliability: Multi-AZ deployments reduce downtime and increase fault tolerance.&lt;br&gt;
Control Costs: Pay only for the resources you use—no upfront infrastructure investment.&lt;br&gt;
Enhance Security: Built-in security features and compliance certifications protect critical data.&lt;br&gt;
Reach Globally: Deploy applications closer to users worldwide, reducing latency and improving user experience.&lt;br&gt;
Suggested Visual: Diagram showing AWS global regions and availability zones.&lt;br&gt;
Core AWS Services Every Professional Should Know&lt;br&gt;
AWS offers hundreds of services, but these are the most widely used:&lt;br&gt;
Compute: EC2 (virtual servers), Lambda (serverless functions), Elastic Beanstalk (application deployment)&lt;br&gt;
Storage: S3 (object storage), EBS (block storage), Glacier (long-term archival)&lt;br&gt;
Databases: RDS (managed relational databases), DynamoDB (NoSQL), Aurora (high-performance SQL)&lt;br&gt;
Networking &amp;amp; Delivery: VPC (isolated networks), CloudFront (CDN), Route 53 (DNS management)&lt;br&gt;
Monitoring &amp;amp; Management: CloudWatch (metrics and logs), CloudTrail (audit logs)&lt;br&gt;
Security &amp;amp; Identity: IAM (access management), KMS (encryption), AWS Shield (DDoS protection)&lt;br&gt;
Suggested Visual: Flowchart showing how these services interact in a typical web application deployment.&lt;br&gt;
The Importance of High Availability in AWS&lt;br&gt;
High availability (HA) ensures your applications are always accessible, even during failures or traffic spikes. Key HA strategies in AWS include:&lt;br&gt;
Redundancy: Deploying servers and resources across multiple Availability Zones.&lt;br&gt;
Automatic Failover: Using services like RDS Multi-AZ to recover from failures seamlessly.&lt;br&gt;
Scalability: Leveraging Auto Scaling and Load Balancers to handle varying workloads.&lt;br&gt;
Monitoring &amp;amp; Maintenance: Proactive alerts and automated backups using CloudWatch and S3 snapshots.&lt;br&gt;
Suggested Visual: Two-tier architecture diagram (Web Tier + Database Tier) showing HA setup.&lt;br&gt;
Real-World Use Cases&lt;br&gt;
Startups: Rapidly deploy applications without heavy upfront costs.&lt;br&gt;
E-commerce: Handle traffic surges during sales events seamlessly.&lt;br&gt;
Enterprises: Migrate legacy workloads to the cloud for efficiency and reliability.&lt;br&gt;
Data Analytics &amp;amp; AI/ML: Use services like AWS SageMaker, EMR, and Redshift for insights and innovation.&lt;br&gt;
Conclusion&lt;br&gt;
AWS is more than just a cloud provider—it’s a comprehensive ecosystem that empowers organizations to innovate, scale, and operate with confidence. By leveraging AWS services effectively, businesses can deliver resilient, high-performing, and cost-efficient applications to users worldwide.&lt;br&gt;
Whether you are building a startup, supporting enterprise systems, or exploring cloud careers, mastering AWS is a critical skill for modern IT and DevOps professionals.&lt;br&gt;
Call-to-Action for Readers&lt;br&gt;
If you’re exploring AWS, start with EC2, S3, and RDS to understand the core concepts, then expand into serverless, analytics, and machine learning services to unlock the full potential of the cloud.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>[Boost]</title>
      <dc:creator>Hajarat </dc:creator>
      <pubDate>Sun, 21 Sep 2025 18:00:21 +0000</pubDate>
      <link>https://forem.com/hajixhayjhay/-3ef0</link>
      <guid>https://forem.com/hajixhayjhay/-3ef0</guid>
      <description></description>
    </item>
    <item>
      <title>The Software Development Lifecycle (SDLC)</title>
      <dc:creator>Hajarat </dc:creator>
      <pubDate>Sat, 13 Sep 2025 04:37:51 +0000</pubDate>
      <link>https://forem.com/hajixhayjhay/the-software-development-lifecycle-sdlc-33cd</link>
      <guid>https://forem.com/hajixhayjhay/the-software-development-lifecycle-sdlc-33cd</guid>
      <description>&lt;p&gt;The Software Development Life Cycle (SDLC): A Roadmap for Reliable Applications&lt;br&gt;
In the world of software and DevOps, success doesn’t just come from writing good code. It comes from following a structured process that ensures every stage of development is intentional, efficient, and aligned with business goals. That’s where the Software Development Life Cycle (SDLC) comes in.&lt;br&gt;
Why SDLC Matters&lt;br&gt;
The SDLC provides a systematic framework for building software, from the very first idea to long-term maintenance. Instead of leaving development to chance, it helps teams deliver software that is high-quality, secure, and scalable.&lt;br&gt;
The Key Phases of SDLC&lt;br&gt;
While models may vary (waterfall, agile, spiral, etc.), the core phases remain consistent:&lt;br&gt;
Planning &amp;amp; Requirement Gathering&lt;br&gt;
Defining what the software should achieve and aligning it with business goals.&lt;br&gt;
Design&lt;br&gt;
Creating blueprints for architecture, workflows, and user experience.&lt;br&gt;
Development&lt;br&gt;
Writing clean, functional, and scalable code.&lt;br&gt;
Testing&lt;br&gt;
Ensuring performance, security, and reliability through rigorous checks.&lt;br&gt;
Deployment&lt;br&gt;
Delivering the application to end users with minimal downtime.&lt;br&gt;
Maintenance&lt;br&gt;
Continuously monitoring, updating, and improving the software as user needs evolve.&lt;br&gt;
SDLC in the Agile &amp;amp; DevOps Era&lt;br&gt;
Traditionally, these phases followed a strict step-by-step order (the waterfall model). But in modern agile and DevOps practices, the boundaries blur. Teams work iteratively, allowing planning, coding, testing, and deployment to overlap. This speeds up delivery and enables faster adaptation to change.&lt;br&gt;
Continuous Learning in Practice&lt;br&gt;
Working in DevOps, I see firsthand how SDLC provides structure to complex projects. At the same time, continuous learning sharpens how we apply these principles. Recently, I joined Pravin’s free class, which offered practical insights into how SDLC ties directly into real-world DevOps workflows. It was a powerful reminder that no matter your experience level, there’s always something new to learn.&lt;br&gt;
Final Thoughts&lt;br&gt;
The SDLC isn’t just a theory — it’s the backbone of reliable software development. Whether you’re building a small application or scaling enterprise solutions, having this framework ensures better collaboration, reduced risks, and improved outcomes.&lt;br&gt;
✨ Combine SDLC with agile and DevOps, and you’ll have a recipe for faster delivery and long-term success.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Mastering Git: The Backbone of Modern Development</title>
      <dc:creator>Hajarat </dc:creator>
      <pubDate>Thu, 28 Aug 2025 06:35:52 +0000</pubDate>
      <link>https://forem.com/hajixhayjhay/mastering-git-the-backbone-of-modern-development-1f21</link>
      <guid>https://forem.com/hajixhayjhay/mastering-git-the-backbone-of-modern-development-1f21</guid>
      <description>&lt;p&gt;In today’s tech-driven world, version control is non-negotiable. Whether you’re a solo developer, part of a global DevOps team, or managing complex cloud deployments, Git sits at the core of modern software development.&lt;br&gt;
For me, as a Cloud/DevOps Engineer, Git isn’t just a tool — it’s part of my daily workflow. From managing Infrastructure-as-Code (Terraform, CloudFormation) to automating CI/CD pipelines, Git helps me deliver faster, more reliable solutions.&lt;br&gt;
Beyond work, I’m constantly upskilling — recently, I’ve been learning from Pravin’s free classes, ensuring I stay ahead in an industry that never stops evolving.&lt;/p&gt;

&lt;p&gt;What is Git and Why Does It Matter?&lt;br&gt;
Git is a distributed version control system. In simple terms, it:&lt;br&gt;
Tracks changes in your project over time.&lt;br&gt;
Enables multiple developers to work on the same project without conflict.&lt;br&gt;
Lets you roll back to previous &lt;br&gt;
versions if something breaks.&lt;br&gt;
Think of Git as a time machine for your projects — one that also supports teamwork at scale.&lt;/p&gt;

&lt;p&gt;Core Concepts Every Developer Should Know&lt;br&gt;
Repository (Repo) – A folder Git tracks. Can be local or on a remote platform like GitHub.&lt;br&gt;
Staging Area – Prepares changes before committing them to history.&lt;br&gt;
Commit – A snapshot of your project at a specific point in time.&lt;br&gt;
Branch – A parallel workspace for features or fixes.&lt;br&gt;
Merge – Combines changes from one branch into another.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="c"&gt;# Initialize a new Git repository&lt;/span&gt;
git init

&lt;span class="c"&gt;# Clone an existing repository&lt;/span&gt;
git clone &amp;lt;your-repo-url&amp;gt;

&lt;span class="c"&gt;# Check current file status&lt;/span&gt;
git status

&lt;span class="c"&gt;# Stage changes for commit&lt;/span&gt;
git add &amp;lt;file&amp;gt;

&lt;span class="c"&gt;# Commit changes with a message&lt;/span&gt;
git commit &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"Added new feature"&lt;/span&gt;

&lt;span class="c"&gt;# Create and switch to a new branch&lt;/span&gt;
git checkout &lt;span class="nt"&gt;-b&lt;/span&gt; feature-login

&lt;span class="c"&gt;# Merge a feature branch into main&lt;/span&gt;
git checkout main
git merge feature-login

&lt;span class="c"&gt;# Push changes to remote repository&lt;/span&gt;
git push origin main

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Best Practices for Using Git Professionally&lt;br&gt;
Meaningful Commit Messages: &lt;code&gt;Fix login API timeout&lt;/code&gt; is better than &lt;code&gt;Update file&lt;/code&gt;.&lt;br&gt;
Branching Strategy: Use feature branches, release branches, and hotfix branches for structured workflows.&lt;br&gt;
Pull Before Push: Always fetch the latest updates before pushing your own to avoid conflicts.&lt;br&gt;
Automation: Use GitHub Actions to integrate testing, building, and deployment pipelines.&lt;/p&gt;

&lt;p&gt;Advanced Techniques for Real-World Projects&lt;br&gt;
Rebasing for a Clean History&lt;br&gt;
Keep your commit history linear:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
git checkout feature-login
git rebase main

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Handling Merge Conflicts&lt;br&gt;
Conflicts happen — here’s how to fix them quickly:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
git status  &lt;span class="c"&gt;# Identify conflicted files&lt;/span&gt;

&lt;span class="c"&gt;# Edit files to resolve (look for &amp;lt;&amp;lt;&amp;lt;&amp;lt;&amp;lt;&amp;lt;&amp;lt; and &amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt; conflict markers)&lt;/span&gt;
git add &amp;lt;file&amp;gt;
git commit

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
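&lt;p&gt;To see the whole cycle end to end, here is a self-contained demo that manufactures a conflict in a throwaway repository and resolves it. Branch and file names are arbitrary:&lt;/p&gt;

```shell
#!/usr/bin/env sh
# Self-contained demo: create a merge conflict in a temp repo, resolve it.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name Demo

echo "hello" > greeting.txt
git add greeting.txt
git commit -qm "initial"
base=$(git symbolic-ref --short HEAD)   # main or master, depending on git version

git checkout -qb feature-login
echo "hello from feature" > greeting.txt
git commit -qam "feature change"

git checkout -q "$base"
echo "hello from base" > greeting.txt
git commit -qam "base change"

# Both branches changed the same line, so this merge conflicts;
# git exits non-zero, which we tolerate to keep the script going
git merge feature-login || true

# Resolve by keeping the feature branch version, then finish the merge
git checkout --theirs greeting.txt
git add greeting.txt
git commit -qm "merge feature-login, keeping feature version"
cat greeting.txt   # prints: hello from feature
```

&lt;p&gt;During a merge, --theirs keeps the incoming branch's version and --ours keeps the current branch's; editing the conflict markers by hand works just as well.&lt;/p&gt;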

&lt;p&gt;Git in CI/CD Pipelines&lt;br&gt;
In DevOps workflows, Git often acts as the single source of truth:&lt;br&gt;
Code pushed to main triggers automated build → test → deploy pipelines.&lt;br&gt;
Ensures changes are validated before reaching production.&lt;/p&gt;
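&lt;p&gt;As a concrete illustration, a minimal GitHub Actions workflow that runs the build and tests on every push to main might look like this (the job name, Node version, and npm scripts are placeholder assumptions):&lt;/p&gt;

```yaml
# .github/workflows/ci.yml (minimal sketch; job name and scripts are assumptions)
name: CI
on:
  push:
    branches: [main]
jobs:
  build-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm test
```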

&lt;p&gt;How I Use Git in My Workflows&lt;br&gt;
Infrastructure-as-Code: All Terraform and CloudFormation templates are versioned in Git repos.&lt;br&gt;
Continuous Integration/Deployment: GitHub Actions run automated builds/tests before merging to production.&lt;br&gt;
Collaboration: Pull requests and code reviews ensure quality and security.&lt;br&gt;
Git is not just about tracking code — it’s about delivering reliable solutions faster, together.&lt;br&gt;
Visuals to Include (Optional)&lt;br&gt;
[IMAGE] Git Workflow Diagram – Working Directory → Staging → Commit → Remote Repo&lt;br&gt;
[IMAGE] Branching Strategy – Show main with feature/* branches merging back.&lt;br&gt;
[SCREENSHOT] GitHub Pull Request – Example of code review in action.&lt;br&gt;
[SCREENSHOT] GitHub Actions Pipeline – Successful CI/CD run.&lt;br&gt;
Final Thoughts&lt;br&gt;
Git is the backbone of modern development — empowering teams to collaborate, innovate, and deliver at scale. My journey with Git has been shaped by real-world projects and continuous learning, like the free sessions from Pravin, which keep me evolving as a professional.&lt;br&gt;
What’s your favorite Git tip or trick? Share in the comments — let’s learn from each other!&lt;/p&gt;

&lt;p&gt;#Git #DevOps #CloudEngineering #CI/CD #GitHub #ContinuousLearning #Automation #Upskilling&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
