<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Piyush Bagani</title>
    <description>The latest articles on Forem by Piyush Bagani (@piyushbagani15).</description>
    <link>https://forem.com/piyushbagani15</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F490401%2F8a8b081f-1736-4462-865d-c9aee3dcc6c7.png</url>
      <title>Forem: Piyush Bagani</title>
      <link>https://forem.com/piyushbagani15</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/piyushbagani15"/>
    <language>en</language>
    <item>
      <title>An overview of AWS ECS</title>
      <dc:creator>Piyush Bagani</dc:creator>
      <pubDate>Thu, 03 Apr 2025 02:52:56 +0000</pubDate>
      <link>https://forem.com/piyushbagani15/an-overview-of-aws-ecs-22j1</link>
      <guid>https://forem.com/piyushbagani15/an-overview-of-aws-ecs-22j1</guid>
      <description>&lt;h1&gt;
  
  
  Amazon Elastic Container Service (ECS)
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;Amazon Elastic Container Service (ECS) is a fully managed container orchestration service that lets you run, manage, and scale containerized applications. ECS can host containers on AWS Fargate (serverless compute) or on EC2 instances that you manage.&lt;/p&gt;

&lt;h2&gt;
  
  
  Features
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Fully Managed&lt;/strong&gt;: No orchestration control plane to install, upgrade, or operate yourself.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flexible Compute&lt;/strong&gt;: Choose between AWS Fargate (serverless) or EC2 instances.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deep AWS Integration&lt;/strong&gt;: Works with IAM, CloudWatch, ALB, Route 53, and more.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auto Scaling &amp;amp; Load Balancing&lt;/strong&gt;: Adjusts resources dynamically based on demand.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost-Effective&lt;/strong&gt;: Pay for only the resources used.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Architecture Components
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Cluster&lt;/strong&gt; - Logical grouping of ECS instances or Fargate tasks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Task Definition&lt;/strong&gt; - Blueprint for running containers, specifying CPU, memory, and networking.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Task&lt;/strong&gt; - Running instance of a task definition.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Service&lt;/strong&gt; - Maintains the desired number of tasks and ensures high availability.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Container Agent&lt;/strong&gt; - Manages communication between ECS and EC2 instances.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Elastic Load Balancer (ELB)&lt;/strong&gt; - Distributes traffic across running tasks.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Quick Deployment Guide (AWS Fargate)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1: Create an ECS Cluster
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Open AWS &lt;strong&gt;ECS Console&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Clusters &amp;gt; Create Cluster&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Choose &lt;strong&gt;Networking only (AWS Fargate)&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Provide a cluster name and create it.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Step 2: Define a Task
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Task Definitions&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Create new task definition&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Fargate&lt;/strong&gt; as the launch type.&lt;/li&gt;
&lt;li&gt;Configure container settings:

&lt;ul&gt;
&lt;li&gt;Set &lt;strong&gt;container name&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Use a &lt;strong&gt;Docker image&lt;/strong&gt; (e.g., &lt;code&gt;nginx:latest&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Define &lt;strong&gt;port mapping&lt;/strong&gt; (e.g., 80:80).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Create&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;
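
&lt;p&gt;The container settings above map onto a task definition like the following minimal sketch (the family name, container name, and CPU/memory values are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "family": "web-task",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx:latest",
      "portMappings": [{ "containerPort": 80, "protocol": "tcp" }],
      "essential": true
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;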

&lt;h3&gt;
  
  
  Step 3: Deploy as a Service
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Go to &lt;strong&gt;ECS &amp;gt; Services&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Create Service&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Choose &lt;strong&gt;Fargate&lt;/strong&gt; as the launch type.&lt;/li&gt;
&lt;li&gt;Select the cluster and task definition.&lt;/li&gt;
&lt;li&gt;Configure &lt;strong&gt;networking &amp;amp; load balancing&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Deploy Service&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;
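
&lt;p&gt;If you prefer the AWS CLI over the console, the same cluster, task definition, and service can be created with commands along these lines (the cluster name, service name, subnet, and security group identifiers are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecs create-cluster --cluster-name demo-cluster
aws ecs register-task-definition --cli-input-json file://task-definition.json
aws ecs create-service \
  --cluster demo-cluster \
  --service-name web-service \
  --task-definition web-task \
  --desired-count 2 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-xxxx],securityGroups=[sg-xxxx],assignPublicIp=ENABLED}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;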

&lt;h3&gt;
  
  
  Step 4: Access Your Application
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Find the &lt;strong&gt;Public IP/DNS&lt;/strong&gt; from the ECS Console.&lt;/li&gt;
&lt;li&gt;Open it in a browser to see the deployed application.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Monitoring &amp;amp; Scaling
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Logging&lt;/strong&gt;: Use &lt;strong&gt;CloudWatch Logs&lt;/strong&gt; for application logs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auto Scaling&lt;/strong&gt;: Configure ECS &lt;strong&gt;Service Auto Scaling&lt;/strong&gt; based on CPU/memory usage.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Metrics&lt;/strong&gt;: Monitor performance with &lt;strong&gt;CloudWatch Metrics&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
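
&lt;p&gt;To ship container output to CloudWatch Logs, add an &lt;code&gt;awslogs&lt;/code&gt; log configuration to the container definition; a sketch (the log group, region, and prefix are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"logConfiguration": {
  "logDriver": "awslogs",
  "options": {
    "awslogs-group": "/ecs/web-task",
    "awslogs-region": "us-east-1",
    "awslogs-stream-prefix": "web"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;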

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;AWS ECS simplifies containerized application deployment with minimal management. You can build scalable and cost-efficient solutions by leveraging ECS with Fargate or EC2. &lt;/p&gt;

</description>
      <category>aws</category>
      <category>ecs</category>
      <category>containers</category>
      <category>docker</category>
    </item>
    <item>
      <title>Superfile: A Comprehensive Guide to Streamlined File Management in the Terminal</title>
      <dc:creator>Piyush Bagani</dc:creator>
      <pubDate>Wed, 04 Dec 2024 14:09:10 +0000</pubDate>
      <link>https://forem.com/piyushbagani15/superfile-a-comprehensive-guide-to-streamlined-file-management-in-the-terminal-58e8</link>
      <guid>https://forem.com/piyushbagani15/superfile-a-comprehensive-guide-to-streamlined-file-management-in-the-terminal-58e8</guid>
      <description>&lt;p&gt;In the fast-paced world of development, a robust file manager can significantly enhance productivity. Superfile is a feature-rich, terminal-based file manager designed to simplify file operations while leveraging the power of command-line tools.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Choose Superfile?
&lt;/h2&gt;

&lt;p&gt;Superfile stands out due to its:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Multi-Panel Interface:&lt;/strong&gt; Work with multiple directories simultaneously, making navigation intuitive.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Keyboard Shortcuts:&lt;/strong&gt; Execute tasks like copying, moving, and renaming files with speed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;File Previews:&lt;/strong&gt; View metadata and content inline without external commands.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Batch Operations:&lt;/strong&gt; Perform actions on multiple files or folders at once.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integration:&lt;/strong&gt; Seamlessly integrates with editors, compression tools, and custom scripts.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Getting Started with Superfile
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Installation: Download and install Superfile from its &lt;a href="https://superfile.netlify.app/getting-started/installation/" rel="noopener noreferrer"&gt;official site&lt;/a&gt;. Follow platform-specific installation steps to get started.&lt;/li&gt;
&lt;li&gt;Launching Superfile: Run &lt;code&gt;spf&lt;/code&gt; in your terminal to open the multi-panel interface.&lt;/li&gt;
&lt;li&gt;Navigation:

&lt;ul&gt;
&lt;li&gt;Use arrow keys to explore directories.&lt;/li&gt;
&lt;li&gt;Toggle between panels to compare or move files.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Operations:

&lt;ul&gt;
&lt;li&gt;Press &lt;code&gt;c&lt;/code&gt; to copy, &lt;code&gt;m&lt;/code&gt; to move, and &lt;code&gt;d&lt;/code&gt; to delete files.&lt;/li&gt;
&lt;li&gt;Use &lt;code&gt;Enter&lt;/code&gt; to preview file content or metadata.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Advanced Features
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Bulk Management: Select multiple files with &lt;code&gt;Space&lt;/code&gt; and perform operations on them together.&lt;/li&gt;
&lt;li&gt;Compression and Extraction: Easily handle &lt;code&gt;.zip&lt;/code&gt; and &lt;code&gt;.tar&lt;/code&gt; files using built-in commands.&lt;/li&gt;
&lt;li&gt;Custom Workflows: Configure Superfile to integrate with your favorite tools or scripts for specialized tasks.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Real-World Use Cases
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Developers: Manage project directories, preview logs, or edit configuration files directly from the terminal.&lt;/li&gt;
&lt;li&gt;Sysadmins: Handle server files efficiently, minimizing context switching.&lt;/li&gt;
&lt;li&gt;Data Analysts: Organize datasets and quickly navigate through massive directory structures.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Superfile is more than a file manager—it’s a productivity enhancer for anyone who thrives in the terminal. Its sleek design and powerful features make it an indispensable tool for developers, sysadmins, and power users alike.&lt;/p&gt;

&lt;p&gt;Ready to transform your file management workflow? Dive into the Superfile tutorial and unleash its full potential!&lt;/p&gt;

</description>
      <category>terminal</category>
      <category>cli</category>
    </item>
    <item>
      <title>Automate Your Scripts with Systemd Services: Benefits and Step-by-Step Guide</title>
      <dc:creator>Piyush Bagani</dc:creator>
      <pubDate>Sat, 02 Nov 2024 04:29:02 +0000</pubDate>
      <link>https://forem.com/piyushbagani15/automate-your-scripts-with-systemd-services-benefits-and-step-by-step-guide-3nik</link>
      <guid>https://forem.com/piyushbagani15/automate-your-scripts-with-systemd-services-benefits-and-step-by-step-guide-3nik</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Scripts are a core part of many IT and DevOps workflows, performing everything from monitoring tasks to triggering automated responses. However, running these scripts manually or relying on cron jobs can introduce reliability issues. One powerful solution to streamline script execution is to run them as systemd services. This blog post dives into the benefits of using systemd for your scripts and provides a step-by-step guide to setting it up.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Run Scripts as Systemd Services?
&lt;/h2&gt;

&lt;p&gt;Traditionally, scripts are often executed manually or scheduled via cron jobs, but this approach has some limitations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reliability Issues: If a script crashes or fails, cron will not restart it.&lt;/li&gt;
&lt;li&gt;Environment Inconsistencies: Cron jobs and manual runs can vary in terms of environment variables and user permissions, leading to unpredictable results.&lt;/li&gt;
&lt;li&gt;Limited Monitoring: Without a mechanism to check if a script is running successfully, monitoring and debugging can be a challenge.&lt;/li&gt;
&lt;li&gt;Complexity in Control: Managing script processes (like stopping, restarting, or checking status) is not straightforward.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Systemd, a service manager for Linux, provides a robust alternative for managing scripts as services. With systemd, your scripts can run in a controlled, stable environment, making management and troubleshooting significantly easier.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Benefits of Running Scripts as Systemd Services
&lt;/h2&gt;

&lt;p&gt;Using systemd brings several benefits:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Automatic Restarts: Systemd can automatically restart a service if it fails, ensuring high availability.&lt;/li&gt;
&lt;li&gt;Environment Control: You can define environment variables, working directories, and permissions directly in the service file, providing a consistent runtime environment.&lt;/li&gt;
&lt;li&gt;Enhanced Monitoring: Systemd logs outputs to the journal, so you can easily view logs and status updates.&lt;/li&gt;
&lt;li&gt;Streamlined Control: You can start, stop, enable, disable, or check the status of your script with a single command.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Step-by-Step Guide: Configuring Your Script as a Systemd Service
&lt;/h2&gt;

&lt;p&gt;Follow these steps to set up your script as a systemd service:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Create Your Script&lt;/strong&gt;&lt;br&gt;
Ensure that your script is ready, accessible, and executable. Let’s assume it is located at &lt;code&gt;/usr/local/bin/my-script.sh&lt;/code&gt;. It could be anything from a data processor to a system monitor.&lt;/p&gt;
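
&lt;p&gt;As a concrete (purely illustrative) example, &lt;code&gt;my-script.sh&lt;/code&gt; could be a simple long-running heartbeat; systemd works best with a foreground process, so the script loops rather than exiting immediately. Remember to make it executable with &lt;code&gt;sudo chmod +x /usr/local/bin/my-script.sh&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
# Illustrative long-running script: log a heartbeat once a minute.
# stdout is captured by systemd and lands in the journal.
while true; do
  echo "my-script heartbeat: $(date)"
  sleep 60
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;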

&lt;p&gt;&lt;strong&gt;Step 2: Create the Systemd Service File&lt;/strong&gt;&lt;br&gt;
Open a terminal and create a new service file in the &lt;code&gt;/etc/systemd/system/&lt;/code&gt; directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/systemd/system/my-script.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the following configuration to define the service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Unit]
Description=My Custom Script
After=network.target
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Description&lt;/strong&gt;: A brief summary of the service.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;After=network.target&lt;/strong&gt;: Ensures that the network is available before the script starts, which is helpful if the script requires network access.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Configure the Service Parameters&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The [Service] section is where you define how the script runs, its environment, and restart behavior.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Service]
ExecStart=/usr/local/bin/my-script.sh
Restart=on-failure
Environment="API_KEY=12345"
WorkingDirectory=/home/myuser/scripts
User=myuser
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;ExecStart&lt;/strong&gt;: The command that starts your script. Use the absolute path to avoid path-related issues.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Restart&lt;/strong&gt;: Set this to &lt;code&gt;on-failure&lt;/code&gt; to restart the service if it fails. Other options include &lt;code&gt;always&lt;/code&gt; (restarts regardless of the exit code) and &lt;code&gt;no&lt;/code&gt; (never restarts).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Environment&lt;/strong&gt;: Defines any environment variables the script requires. You can add multiple variables here or point to an environment file with &lt;code&gt;EnvironmentFile=/path/to/env&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;WorkingDirectory&lt;/strong&gt;: The directory in which the service runs. This is useful if your script depends on relative paths or requires specific permissions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;User&lt;/strong&gt;: The user the service runs as, which improves security by dropping unnecessary privileges.&lt;/li&gt;
&lt;/ul&gt;
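
&lt;p&gt;For example, instead of hard-coding secrets in the unit file, you could point &lt;code&gt;EnvironmentFile&lt;/code&gt; at a separate root-readable file (the path and variables here are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# /etc/my-script.env
API_KEY=12345
LOG_LEVEL=info

# Then, in the [Service] section of the unit file:
EnvironmentFile=/etc/my-script.env
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;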

&lt;p&gt;&lt;strong&gt;Step 4: Define When to Start the Service&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the [Install] section, specify the desired target. To ensure that the service starts on boot in a multi-user mode, use the multi-user.target.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Install]
WantedBy=multi-user.target

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What Is multi-user.target?&lt;/strong&gt;&lt;br&gt;
The multi-user.target is one of several targets in systemd, each representing a different state or mode of the operating system. Here’s a quick overview of what multi-user.target represents and how it fits in:&lt;/p&gt;

&lt;p&gt;multi-user.target: This target is similar to "runlevel 3" in traditional Linux systems, a non-GUI mode where the system is fully operational and allows multiple users to connect. It’s typically used on servers and systems without a graphical user interface (GUI).&lt;/p&gt;

&lt;p&gt;By setting WantedBy=multi-user.target, you’re telling systemd to start your service during the system’s multi-user mode, which is active on most Linux systems running in a non-GUI environment.&lt;br&gt;
This setup is ideal for server processes, background scripts, and other tasks that need to be available as soon as the system is ready for normal operation but don’t require a graphical environment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Save and Close the File&lt;/strong&gt;&lt;br&gt;
Save the file (in nano, press Ctrl+O, then Enter to save, and Ctrl+X to exit).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6: Reload Daemon, Enable and Start the Service&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl daemon-reload
sudo systemctl enable my-script.service
sudo systemctl start my-script.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 7: Managing and Troubleshooting Your Systemd Service&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once your service is running, systemd makes it easy to manage:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Check Service Status:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl status my-script.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;View Logs:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;journalctl -u my-script
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Stop or Restart the Service:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl stop my-script
sudo systemctl restart my-script

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Running a script as a systemd service is a simple but powerful way to automate tasks with reliability and control. By leveraging systemd’s capabilities, you can ensure scripts run consistently in a controlled environment, automatically restart on failure, and provide clear, accessible logs. Whether you’re managing monitoring scripts, scheduled tasks, or any other automated workflows, systemd offers a robust solution that enhances script reliability and makes managing Linux-based systems much easier.&lt;/p&gt;

&lt;p&gt;Give it a try, and see how it simplifies your workflow!&lt;/p&gt;

&lt;p&gt;Happy scripting!&lt;/p&gt;

</description>
      <category>automation</category>
      <category>linux</category>
      <category>cli</category>
    </item>
    <item>
      <title>Building a Resilient AWS Infrastructure with Terraform</title>
      <dc:creator>Piyush Bagani</dc:creator>
      <pubDate>Tue, 16 Jul 2024 12:31:11 +0000</pubDate>
      <link>https://forem.com/piyushbagani15/building-a-resilient-aws-infrastructure-with-terraform-1e8j</link>
      <guid>https://forem.com/piyushbagani15/building-a-resilient-aws-infrastructure-with-terraform-1e8j</guid>
      <description>&lt;p&gt;Building resilient and scalable infrastructure is critical in today's era, where downtime or poor performance can directly impact customer satisfaction and business revenue. This blog explores the setup of a high availability architecture within Amazon Web Services (AWS) using Terraform, an Infrastructure as Code (IaC) tool. By the end of this guide, you'll understand how to use Terraform to create a fault-tolerant architecture that supports robust, scalable web applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Use Terraform?
&lt;/h2&gt;

&lt;p&gt;Terraform is a powerful tool for building, changing, and versioning infrastructure safely and efficiently. It supports numerous service providers, including AWS, and allows users to define infrastructure through a high-level configuration language. Terraform shines in multi-cloud and complex system setups, making it an ideal choice for managing sophisticated cloud environments.&lt;/p&gt;

&lt;h3&gt;
  
  
  Project Setup Overview
&lt;/h3&gt;

&lt;p&gt;We aim to deploy a VPC in AWS with all the necessary components to support a fault-tolerant, scalable web server. This includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A VPC with separate public and private subnets across multiple Availability Zones.&lt;/li&gt;
&lt;li&gt;NAT Gateways (deployed in the public subnets) to provide internet access to instances in private subnets; for resiliency, one NAT Gateway runs in each Availability Zone.&lt;/li&gt;
&lt;li&gt;An Application Load Balancer to distribute incoming traffic evenly.&lt;/li&gt;
&lt;li&gt;Auto Scaling Groups to handle dynamic scaling based on traffic.&lt;/li&gt;
&lt;li&gt;EC2 instances placed in private subnets for additional security.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The full architecture diagram is shown below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwr4cey2brxb2rnas7825.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwr4cey2brxb2rnas7825.png" alt="Image description" width="611" height="481"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Reference link: &lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/vpc-example-private-subnets-nat.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/vpc/latest/userguide/vpc-example-private-subnets-nat.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: We will not provision the S3 gateway endpoint shown in the reference architecture.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Install Terraform

&lt;ul&gt;
&lt;li&gt;Go to this &lt;a href="https://developer.hashicorp.com/terraform/install" rel="noopener noreferrer"&gt;link&lt;/a&gt; and install Terraform for your operating system.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Access to an AWS account

&lt;ul&gt;
&lt;li&gt;Sign up for a Free Tier AWS account. Most of the resources we create fall under the Free Tier, but note that NAT Gateways are billed hourly, so destroy the infrastructure when you are done.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;So let's get started by provisioning the above-given infrastructure step by step.&lt;/p&gt;

&lt;h3&gt;
  
  
  Structure of the Terraform Configuration
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws-vpc-subnet-architecture/
├── aws_alb.tf
├── aws_asg.tf
├── aws_networking.tf
├── outputs.tf
├── providers.tf
├── setup.sh
├── terraform.tfvars
└── variables.tf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We will discuss the purpose of each file one by one.&lt;/p&gt;
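
&lt;p&gt;The resource files below reference several input variables (&lt;code&gt;var.vpc_cidr&lt;/code&gt;, &lt;code&gt;var.common_tags&lt;/code&gt;, and so on). A minimal &lt;code&gt;variables.tf&lt;/code&gt; and &lt;code&gt;terraform.tfvars&lt;/code&gt; pair might look like this sketch (every value shown is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# variables.tf
variable "vpc_cidr"             { type = string }
variable "vpc_name"             { type = string }
variable "public_subnet_cidrs"  { type = list(string) }
variable "private_subnet_cidrs" { type = list(string) }
variable "availability_zones"   { type = list(string) }
variable "common_tags"          { type = map(string) }
variable "ami_id"               { type = string }
variable "instance_type"        { type = string }

# terraform.tfvars
vpc_cidr             = "10.0.0.0/16"
vpc_name             = "app-vpc"
public_subnet_cidrs  = ["10.0.0.0/24", "10.0.1.0/24"]
private_subnet_cidrs = ["10.0.2.0/24", "10.0.3.0/24"]
availability_zones   = ["us-east-1a", "us-east-1b"]
common_tags          = { Project = "aws-vpc-subnet-architecture" }
ami_id               = "ami-xxxxxxxx"
instance_type        = "t2.micro"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;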

&lt;h3&gt;
  
  
  Provisioning the Networking Components:
&lt;/h3&gt;

&lt;p&gt;Here's the code that provisions all the networking components:&lt;br&gt;
&lt;code&gt;aws_networking.tf&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Creating the VPC
resource "aws_vpc" "main" {
  cidr_block           = var.vpc_cidr # value defined in terraform.tfvars
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = merge(
    var.common_tags,
    {
      Name = var.vpc_name
    }
  )
}

# Creating the Internet Gateway
resource "aws_internet_gateway" "internet_gateway" {
  vpc_id = aws_vpc.main.id

  tags = merge(
    var.common_tags,
    {
      Name = "Main Internet Gateway"
    }
  )
}

# Creating the Public Subnets
resource "aws_subnet" "public_subnet" {
  count                   = length(var.public_subnet_cidrs)
  vpc_id                  = aws_vpc.main.id
  cidr_block              = var.public_subnet_cidrs[count.index]
  availability_zone       = var.availability_zones[count.index]
  map_public_ip_on_launch = true

  tags = merge(
    var.common_tags,
    {
      Name = "Public Subnet ${count.index + 1}"
    }
  )
}

# Creating the Private Subnets
resource "aws_subnet" "private_subnet" {
  count             = length(var.private_subnet_cidrs)
  vpc_id            = aws_vpc.main.id
  cidr_block        = var.private_subnet_cidrs[count.index]
  availability_zone = var.availability_zones[count.index]

  tags = merge(
    var.common_tags,
    {
      Name = "Private Subnet ${count.index + 1}"
    }
  )
}

# Creating the NAT Gateways with Elastic IPs
resource "aws_eip" "nat_eip" {
  count  = length(var.availability_zones)
  domain = "vpc"

  tags = merge(
    var.common_tags,
    {
      Name = "NAT EIP ${count.index + 1}"
    }
  )
}

resource "aws_nat_gateway" "nat_gateway" {
  count         = length(aws_subnet.public_subnet)
  allocation_id = aws_eip.nat_eip[count.index].id
  subnet_id     = aws_subnet.public_subnet[count.index].id

  tags = merge(
    var.common_tags,
    {
      Name = "NAT Gateway ${count.index + 1}"
    }
  )
}

# Creating the Route Table for Public Subnet
resource "aws_route_table" "public_route_table" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.internet_gateway.id
  }

  tags = merge(
    var.common_tags,
    {
      Name = "Public Route Table"
    }
  )
}

resource "aws_route_table_association" "public_route_table_association" {
  count          = length(aws_subnet.public_subnet)
  subnet_id      = aws_subnet.public_subnet[count.index].id
  route_table_id = aws_route_table.public_route_table.id
}

# Creating the Route Table for Private Subnet
resource "aws_route_table" "private_route_table" {
  count  = length(aws_subnet.private_subnet)
  vpc_id = aws_vpc.main.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.nat_gateway[count.index].id
  }

  tags = merge(
    var.common_tags,
    {
      Name = "Private Route Table ${count.index + 1}"
    }
  )
}

resource "aws_route_table_association" "private_route_table_association" {
  count          = length(aws_subnet.private_subnet)
  subnet_id      = aws_subnet.private_subnet[count.index].id
  route_table_id = aws_route_table.private_route_table[count.index].id
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This Terraform script is designed to systematically construct a robust network infrastructure within AWS. &lt;/p&gt;

&lt;p&gt;At its core, the script initiates the creation of a Virtual Private Cloud (VPC), with a custom IP address range (CIDR block). The script further enhances network functionality by setting up an Internet Gateway, which is crucial for enabling communication between the VPC and the Internet, thereby facilitating public Internet access for the resources within public subnets.&lt;/p&gt;

&lt;p&gt;Moreover, the code proceeds to systematically deploy both public and private subnets. Each Public subnet is configured to have NAT Gateways and Load Balancer which is ideal for front-end interfaces and services that need to interact with external clients. Conversely, private subnets are used for backend systems that require enhanced security by isolating them from direct internet access, thus they rely on NAT Gateways for external connections. NAT Gateways, strategically placed in each public subnet and equipped with Elastic IPs, ensure that instances in private subnets can reach the internet for necessary updates and downloads while remaining hidden from direct inbound internet traffic.&lt;/p&gt;

&lt;p&gt;The script also creates route tables with predefined routes to manage the traffic flow: public route tables direct traffic to the internet gateway, allowing resources within public subnets to communicate with the internet, whereas private route tables route internal traffic through the NAT Gateways, safeguarding the private resources.&lt;/p&gt;

&lt;p&gt;Finally, the script sets up associations between subnets and their respective route tables, ensuring that each subnet adheres to the correct routing policies for its intended use, whether for exposure to the public internet or protected internal operations. &lt;/p&gt;

&lt;h3&gt;
  
  
  Provisioning the Auto-Scaling Group and Launch Template:
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;aws_asg.tf&lt;/code&gt;: This file contains the main configuration for infrastructure like ASG and the launch template.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#  Creating Launch Template
resource "aws_launch_template" "app_lt" {
  name          = "app-launch-template"
  image_id      = var.ami_id
  instance_type = var.instance_type
  user_data     = base64encode(file("${path.module}/setup.sh")) # Setup script for web server

  vpc_security_group_ids = [aws_security_group.instance_sg.id]

  tag_specifications {
    resource_type = "instance"
    tags = merge(
      var.common_tags,
      {
        Name = "Instance Template"
      }
    )
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_security_group" "instance_sg" {
  name        = "instance-security-group"
  description = "Security group for instances"
  vpc_id      = aws_vpc.main.id

  ingress {
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    security_groups = [aws_security_group.alb_sg.id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = merge(
    var.common_tags,
    {
      Name = "Instance Security Group"
    }
  )
}

# Creating Auto Scaling Group
resource "aws_autoscaling_group" "app_asg" {

  launch_template {
    id      = aws_launch_template.app_lt.id
    version = "$Latest"
  }

  min_size            = 1
  max_size            = 4
  desired_capacity    = 2
  vpc_zone_identifier = aws_subnet.private_subnet.*.id

  tag {
    key                 = "Name"
    value               = "app-instance-${formatdate("YYYYMMDDHHmmss", timestamp())}"
    propagate_at_launch = true
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;This section of the Terraform script orchestrates the automated deployment and management of EC2 instances within an AWS environment, focusing on scalability, security, and configuration efficiency. It involves setting up a Launch Template, a Security Group, and an Auto Scaling Group. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Launch Template acts as a blueprint for the instances, detailing the Amazon Machine Image (AMI), instance type, and user data, which includes a script for initial setup tasks such as configuring web servers. This template ensures that all instances are uniformly configured as per the defined specifications and is accompanied by a security group that functions as a virtual firewall to regulate inbound and outbound traffic for the instances. It allows inbound HTTP traffic on port 80 from associated load balancers, facilitating access to web services hosted on the instances, while permitting all outbound traffic to ensure seamless external connectivity for updates and API interactions.&lt;/p&gt;

&lt;p&gt;The Auto Scaling Group is a critical component that dynamically adjusts the number of instances based on demand. It utilizes the launch template for creating new instances, ensuring they adhere to the predefined configuration. The group is configured to operate within a range of instance counts, automatically scaling up or down between the minimum and maximum limits based on actual load, thus ensuring cost efficiency and resource availability.&lt;/p&gt;

&lt;p&gt;Moreover, each instance is tagged with a unique timestamp at creation, enhancing manageability within the AWS ecosystem. &lt;/p&gt;
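
&lt;p&gt;The launch template's user data references &lt;code&gt;setup.sh&lt;/code&gt;, which is not shown in the post; assuming an Amazon Linux AMI, it could be as simple as this sketch:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
# Illustrative bootstrap: install Apache and serve a page
# that identifies which instance handled the request.
yum update -y
yum install -y httpd
echo "Served from $(hostname -f)" &amp;gt; /var/www/html/index.html
systemctl enable --now httpd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;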

&lt;h3&gt;
  
  
  Provisioning the Application Load Balancer:
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;aws_alb.tf&lt;/code&gt;: This file contains the configuration that deploys the Application Load Balancer.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#  Creating Application Load Balancer (ALB)
resource "aws_lb" "app_lb" {
  name               = "aws-app-prod-lb"
  internal           = false
  load_balancer_type = "application"
  subnets            = aws_subnet.public_subnet.*.id

  security_groups = [aws_security_group.alb_sg.id]

  tags = merge(
    var.common_tags,
    {
      Name = "Application Load Balancer"
    }
  )
}

#  Creating a Security Group for the Load Balancer
resource "aws_security_group" "alb_sg" {
  name        = "alb-security-group"
  description = "Allow web traffic"
  vpc_id      = aws_vpc.main.id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = merge(
    var.common_tags,
    {
      Name = "ALB Security Group"
    }
  )
}


#  Creating a Target Group for ALB
resource "aws_lb_target_group" "tg" {
  name     = "aws-target-group"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.main.id

  health_check {
    enabled             = true
    interval            = 30
    path                = "/"
    protocol            = "HTTP"
    timeout             = 5
    healthy_threshold   = 3
    unhealthy_threshold = 3
  }

  tags = merge(
    var.common_tags,
    {
      Name = "Target Group"
    }
  )
}

# Attaching Target Group to ALB
resource "aws_lb_listener" "front_end" {
  load_balancer_arn = aws_lb.app_lb.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.tg.arn
  }
}

# Attaching Target Group to Auto Scaling Group
resource "aws_autoscaling_attachment" "asg_attachment" {
  autoscaling_group_name = aws_autoscaling_group.app_asg.id
  lb_target_group_arn    = aws_lb_target_group.tg.arn
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This portion of the Terraform script sets up an Application Load Balancer (ALB), along with its dedicated security group and a target group, to efficiently manage and distribute incoming web traffic across multiple EC2 instances. The ALB is internet-facing, as indicated by the internal flag set to false, allowing it to handle inbound traffic from the internet. It resides in the public subnets and listens over HTTP, forwarding requests to the instances in the private subnets, so the application can serve requests from the internet without exposing the instances directly. &lt;/p&gt;

&lt;p&gt;Additionally, a target group is configured to facilitate health checks and manage traffic distribution among instances, ensuring only healthy instances receive traffic. This improves application availability and user experience by optimizing resource use, reducing response times, and increasing uptime. Integrating the target group with both the ALB and the Auto Scaling Group lets the system adjust dynamically to traffic changes, enhancing robustness and cost-efficiency. This setup creates a scalable, fault-tolerant architecture ideal for high-availability web services.&lt;/p&gt;

&lt;p&gt;A few more important files contribute to successfully provisioning this whole architecture.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;outputs.tf&lt;/code&gt;: This file specifies the outputs of created resources.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Output for VPC
output "vpc_id" {
  value       = aws_vpc.main.id
  description = "The ID of the VPC"
}

# Output for Public Subnets
output "public_subnet_ids" {
  value       = aws_subnet.public_subnet.*.id
  description = "The IDs of the public subnets"
}

# Output for NAT Gateways
output "nat_gateway_ids" {
  value       = aws_nat_gateway.nat_gateway.*.id
  description = "The IDs of the NAT gateways"
}

# Output for Private Subnets
output "private_subnet_ids" {
  value       = aws_subnet.private_subnet.*.id
  description = "The IDs of the private subnets"
}


# Output for Application Load Balancer
output "alb_dns_name" {
  value       = aws_lb.app_lb.dns_name
  description = "The DNS name of the Application Load Balancer"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;providers.tf&lt;/code&gt;: This file specifies the AWS provider and the region where the infrastructure will be provisioned.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.57.0"
    }
  }
}

provider "aws" {
  region = var.aws_region
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;variables.tf&lt;/code&gt;: This file declares the variables used in the Terraform configuration. Some variables have default values as well.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;######################
## Global variables ##  
######################
variable "aws_region" {
  description = "The AWS region to create resources in."
  default     = "ap-south-1"
}

variable "common_tags" {
  default = {
    Project     = "VPC Setup"
    Environment = "Production"
  }
}

variable "vpc_name" {
  type        = string
  description = "The name of the VPC."

}
#####################
## AWS Networking  ##  
#####################

variable "vpc_cidr" {
  description = "CIDR block for the VPC"
}

variable "public_subnet_cidrs" {
  description = "CIDR blocks for public subnets"
  type        = list(string)
  default     = ["10.0.1.0/24", "10.0.2.0/24"]
}

variable "private_subnet_cidrs" {
  description = "CIDR blocks for private subnets"
  type        = list(string)
  default     = ["10.0.3.0/24", "10.0.4.0/24"]
}

variable "availability_zones" {
  description = "Availability zones for subnets"
  type        = list(string)
  default     = ["ap-south-1a", "ap-south-1b"]
}
############################
## AWS Auto-Scaling Group ##  
############################

variable "instance_type" {
  type        = string
  default     = "t2.micro"
  description = "The instance type"
}

variable "ami_id" {
  type        = string
  default     = "ami-0ec0e125bb6c6e8ec"
  description = "The AMI id of AWS Amazon Linux Instance in Mumbai"
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;terraform.tfvars&lt;/code&gt;: This file defines the values of the variables declared in &lt;code&gt;variables.tf&lt;/code&gt;.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vpc_cidr = "10.0.0.0/16"
vpc_name = "aws_prod"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;setup.sh&lt;/code&gt;: This file contains the user data that acts as a start-up script for the instances being launched.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
yum update -y
yum install -y httpd
systemctl start httpd
systemctl enable httpd
echo "Hello from $(hostname -f)" &amp;gt; /var/www/html/index.html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That completes the Terraform scripts. Next, execute &lt;code&gt;terraform init&lt;/code&gt; to initialize Terraform, then &lt;code&gt;terraform plan&lt;/code&gt; to review the infrastructure to be provisioned, and finally &lt;code&gt;terraform apply&lt;/code&gt; to provision it.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform init&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Famr9b2z6fco1rxurnv6q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Famr9b2z6fco1rxurnv6q.png" alt="Output of terraform init" width="800" height="224"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform plan&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1692kbwknd3bmfdigivg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1692kbwknd3bmfdigivg.png" alt="Image description" width="800" height="309"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform apply&lt;/code&gt;: Finally, type &lt;code&gt;yes&lt;/code&gt; when prompted to approve the infrastructure creation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4j8jsmj4fscau2idlryv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4j8jsmj4fscau2idlryv.png" alt="Output of terraform apply." width="800" height="358"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If &lt;code&gt;terraform apply&lt;/code&gt; runs successfully, it displays the outputs we defined in &lt;code&gt;outputs.tf&lt;/code&gt;, as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwxwvdc16cgi48lhoy2z3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwxwvdc16cgi48lhoy2z3.png" alt="Image description" width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now Let's verify the resources on the AWS Cloud Console.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjs7o6qboiwrafvpfblru.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjs7o6qboiwrafvpfblru.png" alt="Image description" width="800" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The above image confirms that the networking components were created properly: the &lt;code&gt;aws_prod&lt;/code&gt; VPC with 4 subnets (2 public and 2 private) spread across different AZs, route tables, NAT gateways with 2 Elastic IPs, and an Internet Gateway have all been provisioned.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fexdsin25aqpoau3v1zw6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fexdsin25aqpoau3v1zw6.png" alt="Image description" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg5nj36hmmc98zvhhylao.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg5nj36hmmc98zvhhylao.png" alt="Image description" width="800" height="151"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyxqi0lpnni6wad98wono.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyxqi0lpnni6wad98wono.png" alt="See the SGs" width="800" height="188"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The above image confirms that the Auto Scaling Group with its launch template and 2 instances has been provisioned. You can also see that the instances, with their dedicated security group, are created in different AZs, providing high availability and fault tolerance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flzcbchv9jhj0yave48gr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flzcbchv9jhj0yave48gr.png" alt="Image description" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq8a8vdrp0i0gojd7dcw1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq8a8vdrp0i0gojd7dcw1.png" alt="Image description" width="800" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The above images confirm that the &lt;code&gt;aws-app-prod-lb&lt;/code&gt; ALB with the &lt;code&gt;aws-target-group&lt;/code&gt; target group has been provisioned. The 2 instances created as part of the ASG are registered as targets in this target group, which performs health checks and distributes traffic among the instances.&lt;/p&gt;

&lt;p&gt;In the above images, you can see the DNS name (an A record) of the ALB. Accessing it in a browser shows the load balancer distributing requests among the instances, as depicted in the images below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwqo5pimqvm2ijl8miwnr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwqo5pimqvm2ijl8miwnr.png" alt="Image description" width="800" height="160"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmm368bhcpmuf6gabm9dr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmm368bhcpmuf6gabm9dr.png" alt="Image description" width="800" height="88"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The above images clearly show that we have successfully deployed and configured the web server securely on private instances, and that it can be accessed over the Internet through the Application Load Balancer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This Terraform setup provides a robust template for deploying a high-availability architecture in AWS. It ensures that your infrastructure is resilient and adaptable to load changes, making it ideal for enterprises aiming to maximize uptime and performance. The entire infrastructure is codified, which simplifies changes and versioning over time.&lt;/p&gt;

&lt;p&gt;By automating infrastructure management with Terraform, organizations can significantly reduce the potential for human errors while enabling faster deployment and scalability. This makes Terraform an indispensable tool in modern cloud environments. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Thank you for reading my blog post, please do share it with your peers.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Keep Learning, Keep Sharing&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reference Links:&lt;/strong&gt; &lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/vpc-example-private-subnets-nat.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/vpc/latest/userguide/vpc-example-private-subnets-nat.html&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>terraform</category>
      <category>infrastructureascode</category>
    </item>
    <item>
      <title>Unleashing the Power of Kubernetes Network Policies</title>
      <dc:creator>Piyush Bagani</dc:creator>
      <pubDate>Sun, 14 Jul 2024 12:48:26 +0000</pubDate>
      <link>https://forem.com/piyushbagani15/unleashing-the-power-of-kubernetes-network-policies-1gp4</link>
      <guid>https://forem.com/piyushbagani15/unleashing-the-power-of-kubernetes-network-policies-1gp4</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;As Kubernetes continues to dominate the container orchestration landscape, understanding and leveraging its networking capabilities becomes essential. One often underutilized feature is Kubernetes Network Policies, which offer a powerful way to manage network traffic and enhance security within your cluster. This blog post will dive deep into Kubernetes Network Policies, their benefits, key concepts, and practical implementation tips.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Are Kubernetes Network Policies?
&lt;/h2&gt;

&lt;p&gt;Network Policies in Kubernetes define how groups of pods are allowed to communicate with each other and with other network endpoints. They provide a declarative way to manage network traffic, ensuring that only permitted connections are allowed.&lt;/p&gt;
&lt;h2&gt;
  
  
  Why Network Policies Matter
&lt;/h2&gt;

&lt;p&gt;Implementing Network Policies offers several key advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enhanced Security: By restricting traffic between pods, you can prevent unauthorized access and potential security breaches.&lt;/li&gt;
&lt;li&gt;Improved Isolation: Network Policies ensure that only authorized services can communicate with each other, enforcing the principle of least privilege.&lt;/li&gt;
&lt;li&gt;Simplified Network Management: Using declarative configurations, you can manage complex network topologies and traffic rules more efficiently.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Key Concepts
&lt;/h2&gt;

&lt;p&gt;Before diving into the implementation, it’s essential to understand the key components of Network Policies:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pod Selector: Defines which pods the network policy applies to based on labels.&lt;/li&gt;
&lt;li&gt;Ingress Rules: Specify which incoming connections are allowed to the selected pods.&lt;/li&gt;
&lt;li&gt;Egress Rules: Control the outgoing connections from the selected pods.&lt;/li&gt;
&lt;li&gt;Namespaces: Use namespaces to apply network policies to different environments or applications within the same cluster.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Creating a Network Policy
&lt;/h2&gt;

&lt;p&gt;Let’s walk through creating a basic Network Policy. Consider a scenario where you want to restrict traffic to pods labeled app: web so that only pods labeled app: backend can communicate with them.&lt;/p&gt;

&lt;p&gt;Define the Policy YAML:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-backend
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: backend
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the Policy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f allow-backend-policy.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This Network Policy allows incoming traffic to pods labeled app: web only from pods labeled app: backend within the same namespace.&lt;/p&gt;

&lt;h2&gt;
  
  
  Best Practices
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Start with Default Deny: Implement a default deny policy for both ingress and egress to block all traffic by default and then explicitly allow necessary communication.&lt;/li&gt;
&lt;li&gt;Use Namespaces Wisely: Leverage namespaces to create isolated environments and apply network policies accordingly.&lt;/li&gt;
&lt;li&gt;Monitor Traffic: Regularly monitor network traffic to ensure policies are effective and adjust as necessary.&lt;/li&gt;
&lt;/ul&gt;
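
&lt;p&gt;The default deny recommendation above can be expressed as a policy with an empty pod selector, which matches every pod in the namespace while allowing no traffic (a common pattern; the policy name here is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: default
spec:
  podSelector: {}   # empty selector matches all pods in the namespace
  policyTypes:
  - Ingress
  - Egress
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;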

&lt;h2&gt;
  
  
  Advanced Example: Combining Ingress and Egress Rules
&lt;/h2&gt;

&lt;p&gt;In a more advanced scenario, you might want to control both incoming and outgoing traffic for a set of pods. Here’s an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: backend
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: database
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Kubernetes Network Policies are a powerful tool for managing and securing network traffic within your cluster. By defining clear and concise policies, you can enhance the security, isolation, and manageability of your applications. Start experimenting with Network Policies today to see the benefits in your Kubernetes environments.&lt;/p&gt;

&lt;p&gt;Let’s connect and discuss how you’re leveraging Network Policies in your Kubernetes setups! Share your experiences, challenges, and innovative use cases in the comments below.&lt;/p&gt;

</description>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Understanding Security Context in Kubernetes</title>
      <dc:creator>Piyush Bagani</dc:creator>
      <pubDate>Sun, 30 Jun 2024 13:31:01 +0000</pubDate>
      <link>https://forem.com/piyushbagani15/understanding-security-context-in-kubernetes-1gkn</link>
      <guid>https://forem.com/piyushbagani15/understanding-security-context-in-kubernetes-1gkn</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Kubernetes, a leader in container orchestration, ensures that applications run efficiently and securely across a cluster of machines. An essential component of Kubernetes' security mechanism is the security context, which configures permissions and access controls for Pods and containers. This blog delves into the specifics of security contexts, helping you understand how to deploy secure applications within your Kubernetes environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Features of Kubernetes Security Contexts
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;RunAsUser: Controls the UID with which the container executes. This prevents the container from running with root privileges, which could pose security risks.&lt;/li&gt;
&lt;li&gt;ReadOnlyRootFilesystem: Ensures the container's root filesystem is mounted as read-only, prohibiting modifications to the root filesystem and mitigating some forms of attack.&lt;/li&gt;
&lt;li&gt;Capabilities: Allows administrators to grant or remove specific Linux capabilities for a container, enabling a principle of least privilege to be enforced.&lt;/li&gt;
&lt;li&gt;SELinuxOptions: Specifies the SELinux context that the container should operate within. SELinux can enforce granular access control policies.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; There are more settings as well, so please check out the official documentation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Best Practices for Configuring Security Contexts
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Non-root Containers: Always try to run containers as non-root users. Even if the container is compromised, this limits the potential for damage.&lt;/li&gt;
&lt;li&gt;Enforce Read-Only Filesystems: Where possible, set ReadOnlyRootFilesystem to true to prevent tampering with system files.&lt;/li&gt;
&lt;li&gt;Restrict Capabilities: Start with minimal necessary capabilities and add more only as needed. This limits the actions a container can perform, reducing the attack surface.&lt;/li&gt;
&lt;li&gt;Configure SELinux Properly: Use SELinux to enforce strict access controls tailored to your operational needs. Understand your applications' requirements to configure these settings accurately.&lt;/li&gt;
&lt;/ul&gt;
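
&lt;p&gt;The SELinux setting mentioned above is configured through seLinuxOptions. Here is a minimal sketch (the pod name and the level value are illustrative; the level assigns an MCS label that constrains which files and processes the container may access):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: selinux-demo
spec:
  securityContext:
    seLinuxOptions:
      level: "s0:c123,c456"   # illustrative MCS level applied to all containers
  containers:
  - name: demo
    image: nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;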

&lt;h2&gt;
  
  
  Pod-Level Security Context Example
&lt;/h2&gt;

&lt;p&gt;Here's an example of a Kubernetes manifest file that specifies security settings at the pod level. The security context applied here affects all containers within the pod.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  securityContext:
    runAsUser: 1000
  containers:
  - name: example-container
    image: nginx
    ports:
    - containerPort: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Explanation:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;runAsUser: This setting ensures that the container runs as a user with UID 1000, which is a non-root user.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Container-Level Security Context Example
&lt;/h2&gt;

&lt;p&gt;In this example, the security context is specified at the container level, meaning it only affects this particular container within the pod.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: secure-container
    image: nginx
    securityContext:
      runAsUser: 1001
      readOnlyRootFilesystem: true
      capabilities:
        drop:
        - ALL
        add:
        - NET_BIND_SERVICE
    ports:
    - containerPort: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Explanation:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;runAsUser: The container runs as a user with UID 1001.&lt;/li&gt;
&lt;li&gt;readOnlyRootFilesystem: This setting makes the root filesystem of the container read-only, preventing any write operations.&lt;/li&gt;
&lt;li&gt;capabilities: This setting customizes the capabilities the container has:

&lt;ul&gt;
&lt;li&gt;drop: ALL removes all capabilities by default.&lt;/li&gt;
&lt;li&gt;add: NET_BIND_SERVICE adds the capability to bind a service to well-known ports (below 1024).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Security contexts are a critical mechanism for managing security within Kubernetes. They enforce security policies and reduce the risk of unauthorized access or privilege escalation within the cluster. Properly understanding and applying security contexts can further improve the security of Kubernetes deployments.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>security</category>
    </item>
    <item>
      <title>Simplifying Persistent Storage in Kubernetes: A Deep Dive into PVs, PVCs, and SCs</title>
      <dc:creator>Piyush Bagani</dc:creator>
      <pubDate>Sat, 22 Jun 2024 13:19:39 +0000</pubDate>
      <link>https://forem.com/piyushbagani15/simplifying-persistent-storage-in-kubernetes-a-deep-dive-into-pvs-pvcs-and-scs-1p3c</link>
      <guid>https://forem.com/piyushbagani15/simplifying-persistent-storage-in-kubernetes-a-deep-dive-into-pvs-pvcs-and-scs-1p3c</guid>
      <description>&lt;p&gt;In the world of Kubernetes, managing persistent storage efficiently stands as a cornerstone for deploying resilient and scalable applications. Kubernetes not only orchestrates containers but also offers robust solutions for handling persistent data across these containers. &lt;/p&gt;

&lt;p&gt;This blog dives into the critical components of Kubernetes storage management: Persistent Volumes (PV), Persistent Volume Claims (PVC), Storage Classes (SC), and Volume Claim Templates. These elements are pivotal in making Kubernetes a powerhouse for maintaining stateful applications amidst the dynamic nature of containerized environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Persistent Volumes (PV)
&lt;/h2&gt;

&lt;p&gt;Persistent Volumes are one of the building blocks of storage in Kubernetes. A PV is a networked storage unit in the cluster that has been provisioned by an administrator or automatically provisioned via Storage Classes. It represents a piece of storage that is physically backed by some underlying mass storage system, like NFS, iSCSI, or a cloud provider-specific storage system.&lt;/p&gt;

&lt;h3&gt;
  
  
  Characteristics of PVs:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Lifecycle Independence: PVs exist independently of pods' lifecycles. This means that the storage persists even after the pods that use them are deleted.&lt;/li&gt;
&lt;li&gt;Storage Abstraction: PVs abstract the details of how storage is provided from how it is consumed, allowing for a separation of concerns between administrators and users.&lt;/li&gt;
&lt;li&gt;Multiple Access Modes: PVs support different access modes like ReadWriteOnce, ReadOnlyMany, and ReadWriteMany, which dictate how the volume can be mounted on a node.&lt;/li&gt;
&lt;/ul&gt;
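
&lt;p&gt;A statically provisioned PV of the kind described above, backed by NFS, might look like the following sketch (the server address, export path, and capacity are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany   # NFS can be mounted read-write by many nodes
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.0.0.5      # illustrative NFS server address
    path: /exports/data   # illustrative export path
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;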

&lt;h2&gt;
  
  
  Persistent Volume Claims (PVC)
&lt;/h2&gt;

&lt;p&gt;Persistent Volume Claims are essentially requests for storage by a pod. PVCs consume PV resources by specifying size and access modes, like a kind of "storage lease" that a user requests to store their data.&lt;/p&gt;

&lt;h3&gt;
  
  
  How PVCs Work:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Binding: When a PVC is created, Kubernetes looks for a PV that matches the PVC’s requirements and binds them together. If no suitable PV exists, the PVC will remain unbound until a suitable one becomes available or is dynamically provisioned.&lt;/li&gt;
&lt;li&gt;Dynamic Provisioning: If a PVC specifies a Storage Class, and no PV matches its requirements, a new PV is dynamically created according to the specifics of the Storage Class.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Storage Classes (SC)
&lt;/h2&gt;

&lt;p&gt;Storage Classes define and classify the types of storage available within a Kubernetes cluster. They enable dynamic volume provisioning by describing the "classes" of storage (different levels of performance, backups, and policies).&lt;/p&gt;

&lt;h3&gt;
  
  
  Features of SCs:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Provisioning: Admins can define as many Storage Classes as needed, each specifying a different quality of service or backup policy.&lt;/li&gt;
&lt;li&gt;Automation: Based on the Storage Class specified in a PVC, Kubernetes automates the volume provisioning, without manual PV creation by the administrator.&lt;/li&gt;
&lt;/ul&gt;
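
&lt;p&gt;As a sketch, the fast-disk class used in the example below could be defined like this (the provisioner and parameters are illustrative; here the AWS EBS CSI driver is assumed, so substitute your cluster's own provisioner):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-disk
provisioner: ebs.csi.aws.com    # assumed provisioner; cluster-specific
parameters:
  type: gp3                     # SSD-backed volume type (provider-specific)
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;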

&lt;h2&gt;
  
  
  Example:
&lt;/h2&gt;

&lt;p&gt;Consider a scenario where a Kubernetes cluster needs to dynamically provide storage for a database application:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-storage
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast-disk
  resources:
    requests:
      storage: 100Gi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This PVC requests a 100 GiB disk with read-write access on a single node. The fast-disk Storage Class is designed to provision high-performance SSD-based storage, tailored for database applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  How It Works:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;PVC Creation: The above PVC is created, requesting specific storage characteristics.&lt;/li&gt;
&lt;li&gt;Dynamic Provisioning: If no existing PV matches the PVC, the Storage Class fast-disk triggers the dynamic creation of a new PV that fits the criteria.&lt;/li&gt;
&lt;li&gt;Binding: The newly created PV is automatically bound to the PVC, ensuring the database application has the necessary storage.&lt;/li&gt;
&lt;/ul&gt;
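
&lt;p&gt;You can observe this flow with kubectl (illustrative commands, assuming the claim above is saved as db-storage.yaml):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Apply the claim and watch it move from Pending to Bound
kubectl apply -f db-storage.yaml
kubectl get pvc db-storage
# The dynamically provisioned PV appears alongside it
kubectl get pv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;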

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Understanding PVs, PVCs, and SCs is crucial for effectively managing storage in Kubernetes. These components offer a flexible, powerful way to handle persistent data, ensuring applications can be highly available and resilient. As Kubernetes continues to evolve, the capabilities and complexity of managing storage will likely increase, offering even more robust solutions for cloud-native environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  In a nutshell
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;PVs act as a bridge between the physical storage and the pods, offering a lifecycle independent of the pods.&lt;/li&gt;
&lt;li&gt;PVCs allow pods to request specific sizes and access modes from the available PVs.&lt;/li&gt;
&lt;li&gt;SCs automate the provisioning of storage based on the desired characteristics, facilitating dynamic storage allocation without manual intervention.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>storage</category>
      <category>persistent</category>
      <category>volume</category>
    </item>
    <item>
      <title>Understanding Jobs and CronJobs in Kubernetes</title>
      <dc:creator>Piyush Bagani</dc:creator>
      <pubDate>Sat, 15 Jun 2024 14:20:01 +0000</pubDate>
      <link>https://forem.com/piyushbagani15/understanding-jobs-and-cronjobs-in-kubernetes-30a3</link>
      <guid>https://forem.com/piyushbagani15/understanding-jobs-and-cronjobs-in-kubernetes-30a3</guid>
      <description>&lt;p&gt;If you’ve ever scheduled an email to go out later or set a reminder to do something at a specific time, you’re already familiar with the concepts behind Jobs and CronJobs in Kubernetes. These tools help manage and automate tasks within your Kubernetes cluster, ensuring things get done when they should. Let's break it down in simple terms.&lt;/p&gt;

&lt;h2&gt;
  
  
  Jobs: The Task Managers of Kubernetes
&lt;/h2&gt;

&lt;p&gt;Imagine you have a list of tasks to complete, like cleaning your room or finishing a report. In Kubernetes, a Job is like a task manager that ensures these chores get done. When you create a Job, you’re telling Kubernetes, “Hey, please run this task for me, and make sure it’s completed successfully.”&lt;/p&gt;

&lt;h2&gt;
  
  
  How it works:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Create the Job: You define what needs to be done in a Job manifest. Think of it as a to-do list.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run the Job: Kubernetes takes this manifest and runs the task.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Check Completion: It makes sure the task finishes. If it fails, Kubernetes will try again until it succeeds (or hits a limit you set).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is perfect for tasks that need to be run once or just a few times, like processing a batch of data or sending a notification email.&lt;/p&gt;

&lt;h2&gt;
  
  
  Diving Deeper into Job Configurations
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Parallelism: Controls how many pods can run concurrently. If the parallelism field is not specified, the default value is 1. This means only one pod will be created at a time.&lt;/li&gt;
&lt;li&gt;Completions: Specifies the number of successful completions needed. If the completions field is not specified, the default value is 1. This means the job is considered complete when one pod successfully completes.&lt;/li&gt;
&lt;li&gt;BackoffLimit: Sets the number of retries before the Job is considered failed.  If the backoffLimit field is not specified, the default value is 6. This means the job will retry up to 6 times before it is marked as failed.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Example Manifest file:
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# job-definition.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: advanced-job
spec:
  parallelism: 3
  completions: 5
  backoffLimit: 4
  template:
    spec:
      containers:
      - name: example
        image: busybox
        command: ["sh", "-c", "echo Job running... &amp;amp;&amp;amp; sleep 30"]
      restartPolicy: OnFailure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This Job creates pods from the busybox image, each printing a message and then sleeping for 30 seconds. Up to 3 pods run in parallel until 5 successful completions are recorded. A failed pod is retried, up to 4 times, before the Job is marked as failed; per &lt;code&gt;restartPolicy: OnFailure&lt;/code&gt;, containers are restarted only when they fail.&lt;/p&gt;
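
&lt;p&gt;Assuming the manifest is saved as job-definition.yaml (as in the comment above), a typical way to run and inspect it would be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f job-definition.yaml
# Watch completions climb toward 5
kubectl get job advanced-job -w
# Inspect the pods the Job created
kubectl get pods -l job-name=advanced-job
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;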

&lt;h2&gt;
  
  
  CronJobs: Your Scheduled Assistants
&lt;/h2&gt;

&lt;p&gt;CronJobs extend Jobs by allowing you to schedule tasks at specific times or intervals. If Jobs are your reliable friends, CronJobs are those friends who show up at the same time every week to help with recurring tasks.&lt;/p&gt;

&lt;h2&gt;
  
  
  How it works:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Define the Schedule: You set up a CronJob with a schedule, using the Cron format (a standard way to specify time intervals).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Automate the Task: At the specified times, Kubernetes will automatically create Jobs to perform the task.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Repeat: It keeps running these tasks on the schedule you set, whether that’s every hour, day, week, or month.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;CronJobs are ideal for repetitive tasks, such as backing up databases, cleaning up logs, or generating reports.&lt;/p&gt;

&lt;p&gt;Note on Cron syntax: the schedule consists of five fields — minute, hour, day of month, month, and day of week.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* * * * * - every minute
0 * * * * - every hour
0 0 * * * - every day at midnight
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Example Manifest file:
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# cronjob-definition.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: example-cronjob
spec:
  schedule: "0 0 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: example
            image: busybox
            command: ["sh", "-c", "echo CronJob running... &amp;amp;&amp;amp; sleep 30"]
          restartPolicy: OnFailure

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This CronJob is configured to run a container named "example" using the "busybox" image every day at midnight. The container executes a shell command that prints a message and sleeps for 30 seconds. If the job fails, it will be restarted.&lt;/p&gt;
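
&lt;p&gt;While testing a schedule, you don't have to wait until midnight: kubectl can trigger a one-off run from the CronJob's template (the name manual-test below is arbitrary):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create a Job immediately from the CronJob's pod template
kubectl create job --from=cronjob/example-cronjob manual-test
# List runs, scheduled and manual
kubectl get jobs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;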

&lt;h2&gt;
  
  
  Real-World Scenarios
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Data Processing:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Job: Run a data processing script to analyze yesterday’s sales figures.&lt;br&gt;
CronJob: Automatically run this script every night at midnight.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;System Maintenance:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Job: Clean up old, temporary files to free up space.&lt;br&gt;
CronJob: Schedule this cleanup to happen every Sunday at 3 AM.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Notifications:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Job: Send a welcome email to a new user.&lt;br&gt;
CronJob: Send a daily summary email to all users at 8 AM.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Jobs and CronJobs in Kubernetes are your reliable helpers, taking care of tasks efficiently and on time. Jobs ensure one-time tasks get done, while CronJobs handle recurring tasks effortlessly. By leveraging these tools, you can automate and manage your workloads effectively, freeing up time to focus on more critical aspects of your projects.&lt;/p&gt;

</description>
      <category>cronjob</category>
      <category>job</category>
      <category>kubernetes</category>
      <category>k8s</category>
    </item>
    <item>
      <title>Understanding /var/run/docker.sock: The Key to Docker's Inner Workings 🐳</title>
      <dc:creator>Piyush Bagani</dc:creator>
      <pubDate>Thu, 30 May 2024 16:40:18 +0000</pubDate>
      <link>https://forem.com/piyushbagani15/understanding-varrundockersock-the-key-to-dockers-inner-workings-nm7</link>
      <guid>https://forem.com/piyushbagani15/understanding-varrundockersock-the-key-to-dockers-inner-workings-nm7</guid>
      <description>&lt;p&gt;If you're diving into Docker, one term you’ll encounter often is /var/run/docker.sock. But what is it, and why is it so important?&lt;/p&gt;

&lt;p&gt;🔍 What is /var/run/docker.sock?&lt;br&gt;
In simple terms, /var/run/docker.sock is a Unix socket file used by Docker to communicate with the Docker daemon (dockerd). This socket file acts as a bridge between your Docker client (like the Docker CLI) and the Docker daemon, enabling you to manage containers, images, networks, and more.&lt;/p&gt;

&lt;p&gt;🔧 How Does It Work?&lt;br&gt;
Communication Channel: Instead of exposing a TCP port on the network, Docker serves its HTTP API over this Unix socket, keeping client-daemon communication on the same host efficient and secure.&lt;br&gt;
API Access: All Docker commands you run via the CLI (docker run, docker ps, etc.) interact with the Docker daemon through this socket. Essentially, it’s the API endpoint for Docker operations.&lt;/p&gt;

&lt;p&gt;🔐 Why Should You Care?&lt;br&gt;
Understanding /var/run/docker.sock is crucial for advanced Docker operations:&lt;br&gt;
Container Management: Tools like Docker Compose and various CI/CD systems use this socket to orchestrate and manage containers.&lt;br&gt;
Security: Be cautious when granting access to this socket. Mounting /var/run/docker.sock inside a container provides that container with root-level access to the host’s Docker daemon, which can pose significant security risks.&lt;/p&gt;

&lt;p&gt;💡 Practical Use Case&lt;br&gt;
Ever wondered how to manage Docker from within a container? By mounting the Docker socket inside your container, you can.&lt;/p&gt;
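
&lt;p&gt;As a quick sketch, mounting the socket into the official docker CLI image lets a container drive the host daemon (keep the security caveat above in mind):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Mount the host's Docker socket into a container
docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock docker sh
# Inside the container, this lists the HOST's containers
docker ps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;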

&lt;p&gt;Check out my blog on How to run docker in docker.&lt;/p&gt;

&lt;p&gt;📈 The Bigger Picture&lt;br&gt;
For developers and DevOps professionals, understanding how Docker operates under the hood, including the role of /var/run/docker.sock, is key to leveraging the full power of containerization. It opens up possibilities for automation, advanced orchestration, and efficient resource management.&lt;/p&gt;

&lt;p&gt;Stay curious, and keep exploring the depths of Docker! 🌊🐳&lt;/p&gt;

&lt;p&gt;Keep Learning, Keep Hustling.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>containers</category>
      <category>devops</category>
    </item>
    <item>
      <title>A Day in the Life of a DevOps Engineer</title>
      <dc:creator>Piyush Bagani</dc:creator>
      <pubDate>Thu, 25 Apr 2024 11:49:06 +0000</pubDate>
      <link>https://forem.com/piyushbagani15/a-day-in-the-life-of-a-devops-engineer-3ph8</link>
      <guid>https://forem.com/piyushbagani15/a-day-in-the-life-of-a-devops-engineer-3ph8</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;DevOps engineers play a pivotal role in blending software development and system operations to enhance both system reliability and deployment efficiency. Their importance has grown significantly in the tech industry, driven by the need for faster software delivery cycles and robust, scalable systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  Morning Activities
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Start of Day
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;System Monitoring&lt;/strong&gt;: Check system health and performance using tools like Prometheus, Grafana, or New Relic.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Daily Stand-up Meeting
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Team Collaboration&lt;/strong&gt;: Engage in Agile stand-up meetings to discuss progress, challenges, and plan the day's tasks.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Updating Scripts/Tooling
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Automation&lt;/strong&gt;: Write or refine automation scripts in Bash or Python to improve system management and deployment processes.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Midday Activities
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Code Deployment
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;CI/CD Pipelines&lt;/strong&gt;: Deploy code updates using CI/CD tools such as Jenkins, GitLab CI, or CircleCI. Manage configurations and dependencies through IaC tools like Terraform or Ansible.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Collaboration and Planning
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Inter-team Meetings&lt;/strong&gt;: Discuss upcoming features, necessary infrastructure changes, or capacity planning with development teams.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Afternoon Activities
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Incident Management
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Troubleshooting&lt;/strong&gt;: Address and resolve production issues, often collaborating with the site reliability engineering (SRE) team.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Performance Optimization
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;System Tuning&lt;/strong&gt;: Analyze performance data to identify bottlenecks and optimize configurations using tools like Chef or Puppet.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Documentation and Reporting
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Record Keeping&lt;/strong&gt;: Update internal documentation with recent changes and prepare reports on deployments, system status, or incident resolutions.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Evening Wrap-up
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Review and Reflect
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;End of Day Review&lt;/strong&gt;: Check the final metrics and review the completed tasks and any outstanding issues.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Knowledge Sharing
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Community Engagement&lt;/strong&gt;: Participate in webinars, contribute to blogs, or prepare talks on recent challenges or new technologies.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Planning for the Next Day
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Task Prioritization&lt;/strong&gt;: Set up monitoring alerts and organize tasks for the following day.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Tools and Technologies Commonly Used
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cloud Platforms&lt;/strong&gt;: AWS, Azure, Google Cloud&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IaC and Configuration Management&lt;/strong&gt;: Terraform, Ansible, Chef, Puppet&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CI/CD Tools&lt;/strong&gt;: Jenkins, GitLab CI, CircleCI&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scripting Languages&lt;/strong&gt;: Python, Bash&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring Tools&lt;/strong&gt;: Prometheus, Grafana, New Relic, ELK Stack&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Challenges Faced by DevOps Engineers
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Rapid Technological Changes&lt;/strong&gt;: Continuously learning and adapting to new tools.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;System Scalability and Reliability&lt;/strong&gt;: Ensuring systems are robust enough to handle growth and peak loads.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security Concerns&lt;/strong&gt;: Balancing rapid deployment needs with stringent security measures.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;DevOps engineers are crucial to modern tech organizations, ensuring fast and reliable software deployment and system management. The role is expected to evolve with technological advancements in AI, machine learning, and increased automation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Call to Action
&lt;/h2&gt;

&lt;h3&gt;
  
  
  For Aspiring DevOps Engineers
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Skill Development&lt;/strong&gt;: Focus on developing technical skills, pursuing relevant certifications, and joining professional communities.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  For Organizations
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Adopting DevOps Practices&lt;/strong&gt;: Integrate DevOps to enhance operational productivity and system efficiency.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>devops</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Mastering the Crontab: A Guide to Automated Tasks in Unix-like Systems</title>
      <dc:creator>Piyush Bagani</dc:creator>
      <pubDate>Sat, 16 Mar 2024 13:49:08 +0000</pubDate>
      <link>https://forem.com/piyushbagani15/mastering-the-crontab-a-guide-to-automated-tasks-in-unix-like-systems-1mn8</link>
      <guid>https://forem.com/piyushbagani15/mastering-the-crontab-a-guide-to-automated-tasks-in-unix-like-systems-1mn8</guid>
      <description>&lt;h2&gt;
  
  
  Introduction:
&lt;/h2&gt;

&lt;p&gt;Have you ever wished you could automate repetitive tasks on your computer, freeing up your time for more important things? Enter crontab, your personal timekeeper in the world of Unix-like systems. In this human-friendly guide, we'll explore crontab together, demystifying its workings and showing you how to leverage its power to simplify your life.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding the Basics:
&lt;/h2&gt;

&lt;p&gt;At its core, crontab operates on the principle of scheduling tasks to run at specific times or intervals. Each user on a Unix-like system can have their own crontab file, which contains a list of commands along with the schedule for when those commands should be executed. These schedules are defined using a syntax that consists of five fields:  &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Minute (0-59)&lt;/li&gt;
&lt;li&gt;Hour (0-23)&lt;/li&gt;
&lt;li&gt;Day of the month (1-31)&lt;/li&gt;
&lt;li&gt;Month (1-12)&lt;/li&gt;
&lt;li&gt;Day of the week (0-7, where both 0 and 7 represent Sunday)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Additionally, there are special characters that can be used in these fields to denote specific intervals or wildcard values. For example, an asterisk (*) represents all possible values for a field, while a hyphen (-) denotes a range, and a comma (,) separates multiple values.&lt;/p&gt;
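
&lt;p&gt;Putting those characters together, a single schedule can combine wildcards, ranges, and lists. For instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Top of every hour from 9 AM to 5 PM, on Monday, Wednesday, and Friday
0 9-17 * * 1,3,5 /path/to/task.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;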

&lt;h2&gt;
  
  
  Creating and Managing Crontab Entries:
&lt;/h2&gt;

&lt;p&gt;To create or edit your crontab file, you can use the &lt;code&gt;crontab -e&lt;/code&gt; command, which opens the default text editor specified in your environment. Each line in the crontab file represents a separate job entry, with the schedule followed by the command to be executed. For example:&lt;/p&gt;

&lt;h6&gt;
  
  
  Send out a weekly report every Monday at 8:00 AM
&lt;/h6&gt;

&lt;p&gt;0 8 * * 1 /path/to/weekly_report.sh&lt;/p&gt;

&lt;h6&gt;
  
  
  Run a Script Every Sunday at Midnight:
&lt;/h6&gt;

&lt;p&gt;0 0 * * 0 /path/to/weekly_script.sh&lt;/p&gt;

&lt;h6&gt;
  
  
  Run a Script Every Weekday at 8:30 AM:
&lt;/h6&gt;

&lt;p&gt;30 8 * * 1-5 /path/to/script.sh&lt;br&gt;
`&lt;/p&gt;

&lt;h2&gt;
  
  
  Best Practices for Success:
&lt;/h2&gt;

&lt;p&gt;To make the most of crontab, it's essential to follow a few best practices:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Keep It Organized: Use comments to document your crontab entries, making it easier to understand and manage your schedule.&lt;/li&gt;
&lt;li&gt;Test Before Deploying: Always test your commands or scripts manually before adding them to crontab to ensure they work as expected.&lt;/li&gt;
&lt;li&gt;Monitor and Log: Redirect the output of your cron jobs to log files to track their execution and troubleshoot any issues.&lt;/li&gt;
&lt;li&gt;Stay Secure: Be cautious when running automated tasks as root, and ensure your commands and scripts are secure to prevent any unintended consequences.&lt;/li&gt;
&lt;/ol&gt;
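
&lt;p&gt;The third tip in practice: redirecting both stdout and stderr to a log file makes failures easy to diagnose (the paths here are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Nightly backup at 2:00 AM, with all output captured for review
0 2 * * * /path/to/backup.sh &gt;&gt; /var/log/backup.log 2&gt;&amp;amp;1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;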

&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;p&gt;With crontab as your trusty timekeeper, you can automate away the mundane tasks that eat up your precious time. By understanding its basics, setting up your schedule, and following best practices, you'll be well on your way to mastering the art of automation. So go ahead, seize control of your time with crontab, and let it work its magic in the background while you focus on what truly matters.&lt;/p&gt;

</description>
      <category>linux</category>
      <category>crontab</category>
    </item>
    <item>
      <title>How to Ace Your Google Cloud DevOps Certification: Insider Tips and Strategies</title>
      <dc:creator>Piyush Bagani</dc:creator>
      <pubDate>Fri, 09 Feb 2024 16:00:19 +0000</pubDate>
      <link>https://forem.com/piyushbagani15/how-to-ace-your-google-cloud-devops-certification-insider-tips-and-strategies-25k3</link>
      <guid>https://forem.com/piyushbagani15/how-to-ace-your-google-cloud-devops-certification-insider-tips-and-strategies-25k3</guid>
      <description>&lt;p&gt;Stepping into the world of cloud technology and DevOps, I embarked on a transformative journey towards achieving the Google Cloud Professional Cloud DevOps Engineer Certification. It was a path paved with curiosity, challenges, and countless moments of learning.&lt;/p&gt;

&lt;p&gt;In this blog, I will share the strategy and resources behind my preparation, from delving deep into the world of Google Cloud to honing my skills in DevOps practices. From the initial spark of interest to the triumphant moment of certification, every step of this adventure was filled with invaluable insights and experiences.&lt;/p&gt;

&lt;h3&gt;
  
  
  Navigating the Exam Structure: What to Expect:
&lt;/h3&gt;

&lt;p&gt;The exam costs $200, and you can expect 50 questions to be answered within 2 hours.&lt;/p&gt;

&lt;p&gt;The Exam Guide Covers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Bootstrapping a Google Cloud organization for DevOps&lt;/li&gt;
&lt;li&gt;Building and implementing CI/CD pipelines for a service&lt;/li&gt;
&lt;li&gt;Applying site reliability engineering practices to a service&lt;/li&gt;
&lt;li&gt;Implementing service monitoring strategies&lt;/li&gt;
&lt;li&gt;Optimizing service performance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can check the Exam guide &lt;a href="https://cloud.google.com/learn/certification/guides/cloud-devops-engineer" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Mastering Google Cloud Tools for DevOps: Key Concepts and Best Practices
&lt;/h3&gt;

&lt;p&gt;This particular exam focuses mainly on the following topics.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Google Kubernetes Engine&lt;/li&gt;
&lt;li&gt;SRE Best practices (Very IMP)&lt;/li&gt;
&lt;li&gt;Binary Authorization&lt;/li&gt;
&lt;li&gt;Monitoring and Logging&lt;/li&gt;
&lt;li&gt;Incident Management&lt;/li&gt;
&lt;li&gt;Artifact registry, Source Repositories&lt;/li&gt;
&lt;li&gt;Deployment and testing strategies&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Resources I followed
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The DevOps and SRE path by Google.&lt;br&gt;
This may take a little time, but it is worth completing. You can find it &lt;a href="https://www.cloudskillsboost.google/paths/20" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Google SRE Book&lt;br&gt;
This is a valuable resource to understand SRE principles deeply. Worth reading. Find it &lt;a href="https://sre.google/sre-book/table-of-contents/" rel="noopener noreferrer"&gt;here.&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Last-Minute Notes.&lt;br&gt;
This is Ammett William's prep sheet, good to read while revising the concepts. Here is the &lt;a href="https://drive.google.com/file/d/1cCCTwulZuSBa4XmEh9bGzEwotaaOz9Wt/view" rel="noopener noreferrer"&gt;link&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Google SRE DevOps &lt;a href="https://www.youtube.com/playlist?list=PLIivdWyY5sqJrKl7D2u-gmis8h9K66qoj" rel="noopener noreferrer"&gt;Playlist&lt;/a&gt;&lt;br&gt;
You won't regret watching this playlist right from Google.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This is a Bonus Resource.&lt;br&gt;
I have created an extensive guide, covering all the concepts. In this document, I have covered all the best practices an SRE follows. You can read it &lt;a href="https://drive.google.com/file/d/1-GosSkOg4fzaRVw-NGeC_jGP1u6PkRKZ/view?usp=sharing" rel="noopener noreferrer"&gt;here.&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Important Tips:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Practice with &lt;a href="https://docs.google.com/forms/d/e/1FAIpQLSdpk564uiDvdnqqyPoVjgpBp0TEtgScSFuDV7YQvRSumwUyoQ/viewform" rel="noopener noreferrer"&gt;sample questions&lt;/a&gt;: Spend some time reviewing sample questions or taking practice exams to familiarize yourself with the format and types of questions you may encounter.&lt;/li&gt;
&lt;li&gt;Also practice questions on examtopics.com, a website that collects questions reported from previous exams.&lt;/li&gt;
&lt;li&gt;Focus on understanding key concepts rather than memorizing details.&lt;/li&gt;
&lt;li&gt;Manage your time effectively during the exam.&lt;/li&gt;
&lt;li&gt;Stay calm, read questions carefully, and trust in your preparation.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I trust that this guidance aids you in your exam readiness and eventual success. Thank you for taking the time to read through it. Wishing you the best of luck in your performance on the exam!&lt;/p&gt;

&lt;p&gt;Thanks for Reading.&lt;br&gt;
Keep Learning, Keep Sharing&lt;/p&gt;

</description>
      <category>googlecloud</category>
      <category>devops</category>
      <category>certification</category>
    </item>
  </channel>
</rss>
