<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: shashankpai</title>
    <description>The latest articles on Forem by shashankpai (@shashankpai).</description>
    <link>https://forem.com/shashankpai</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1036147%2Fe6f0f3df-5db0-4ecd-95cc-e7cdd8688ef0.png</url>
      <title>Forem: shashankpai</title>
      <link>https://forem.com/shashankpai</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/shashankpai"/>
    <language>en</language>
    <item>
      <title>Modern Infrastructure as Code: OpenTofu vs. Crossplane vs. Pulumi</title>
      <dc:creator>shashankpai</dc:creator>
      <pubDate>Mon, 07 Apr 2025 11:38:52 +0000</pubDate>
      <link>https://forem.com/shashankpai/modern-infrastructure-as-code-opentofu-vs-crossplane-vs-pulumi-3gih</link>
      <guid>https://forem.com/shashankpai/modern-infrastructure-as-code-opentofu-vs-crossplane-vs-pulumi-3gih</guid>
      <description>&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;IaC Evolution Timeline&lt;/li&gt;
&lt;li&gt;IaC Comparison At-a-Glance&lt;/li&gt;
&lt;li&gt;OpenTofu: The Open Source Terraform Alternative&lt;/li&gt;
&lt;li&gt;Crossplane: Kubernetes-Native Infrastructure&lt;/li&gt;
&lt;li&gt;Pulumi: Infrastructure as Actual Code&lt;/li&gt;
&lt;li&gt;Choosing the Right Tool&lt;/li&gt;
&lt;li&gt;Quick Start Guide&lt;/li&gt;
&lt;li&gt;Common Pitfalls to Avoid&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In today's rapidly evolving cloud-native landscape, Infrastructure as Code (IaC) has transformed from a novel concept to an essential practice. As organizations embrace DevOps and platform engineering principles, selecting the right IaC tool becomes increasingly critical for operational success.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Who is this guide for?&lt;/strong&gt; Platform engineers, DevOps practitioners, and technical decision-makers who need to select an appropriate IaC tool for their organization's needs.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This guide provides a focused comparison of three leading IaC tools—OpenTofu, Crossplane, and Pulumi—examining their architectures, capabilities, and ideal use cases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🤔 Have you already started implementing Infrastructure as Code in your organization? What challenges are you facing?&lt;/strong&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  IaC Evolution Timeline
&lt;/h2&gt;

&lt;p&gt;Before diving into specific tools, let's understand how infrastructure management has evolved:&lt;/p&gt;

&lt;h3&gt;
  
  
  DevOps → SRE → Platform Engineering
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;DevOps&lt;/strong&gt; established fundamental principles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cross-functional collaboration&lt;/li&gt;
&lt;li&gt;Infrastructure defined as code&lt;/li&gt;
&lt;li&gt;Automation of repetitive tasks&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/devops/what-is-devops/" rel="noopener noreferrer"&gt;Rapid feedback loops&lt;/a&gt; for continuous improvement&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Site Reliability Engineering (SRE)&lt;/strong&gt; applied DevOps with emphasis on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;System reliability and &lt;a href="https://sre.google/sre-book/service-level-objectives/" rel="noopener noreferrer"&gt;service level objectives&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Error budgets and acceptable failure rates&lt;/li&gt;
&lt;li&gt;Customer experience focus&lt;/li&gt;
&lt;li&gt;Incident response and &lt;a href="https://sre.google/sre-book/postmortem-culture/" rel="noopener noreferrer"&gt;postmortem analysis&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Platform Engineering&lt;/strong&gt; extended SRE concepts by focusing on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Internal developer customers&lt;/li&gt;
&lt;li&gt;Self-service capabilities&lt;/li&gt;
&lt;li&gt;Integrated toolsets and abstractions&lt;/li&gt;
&lt;li&gt;Developer experience optimization&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;


&lt;h2&gt;
  
  
  IaC Comparison At-a-Glance
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Factor&lt;/th&gt;
&lt;th&gt;OpenTofu&lt;/th&gt;
&lt;th&gt;Crossplane&lt;/th&gt;
&lt;th&gt;Pulumi&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Language&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;HashiCorp Configuration Language (HCL)&lt;/td&gt;
&lt;td&gt;YAML (with Composition Functions)&lt;/td&gt;
&lt;td&gt;TypeScript, Python, Go, .NET, etc.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Learning Curve&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Moderate&lt;/td&gt;
&lt;td&gt;Steep (requires Kubernetes)&lt;/td&gt;
&lt;td&gt;Low-Moderate&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;State Management&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;State file (local or remote)&lt;/td&gt;
&lt;td&gt;Kubernetes reconciliation&lt;/td&gt;
&lt;td&gt;Service-based state&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Best For&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Terraform users wanting open-source&lt;/td&gt;
&lt;td&gt;Kubernetes-native environments&lt;/td&gt;
&lt;td&gt;Teams with programming expertise&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Community&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Growing (Linux Foundation)&lt;/td&gt;
&lt;td&gt;Strong (CNCF)&lt;/td&gt;
&lt;td&gt;Commercial with open-source core&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Governance&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Open&lt;/td&gt;
&lt;td&gt;Open&lt;/td&gt;
&lt;td&gt;Commercial&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;


&lt;h2&gt;
  
  
  OpenTofu: The Open Source Terraform Alternative
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://opentofu.org/" rel="noopener noreferrer"&gt;OpenTofu&lt;/a&gt; emerged as a community-driven fork of HashiCorp Terraform following licensing changes. Now under the Linux Foundation with plans to join the CNCF, OpenTofu preserves the familiar HCL syntax while ensuring an open governance model.&lt;/p&gt;

&lt;h3&gt;
  
  
  Project Structure Example
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;my-opentofu-project/
├── main.tf           # Primary configuration file
├── variables.tf      # Input variable declarations
├── outputs.tf        # Output value declarations
├── modules/          # Reusable modules
└── environments/     # Environment-specific configurations
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Code Example: AWS VPC Configuration
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Simple VPC example&lt;/span&gt;
&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_vpc"&lt;/span&gt; &lt;span class="s2"&gt;"main"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;cidr_block&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vpc_cidr&lt;/span&gt;
  &lt;span class="nx"&gt;enable_dns_support&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;enable_dns_hostnames&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

  &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;Name&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"${var.environment}-vpc"&lt;/span&gt;
    &lt;span class="nx"&gt;Environment&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;environment&lt;/span&gt;
    &lt;span class="nx"&gt;Project&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;project_name&lt;/span&gt;
    &lt;span class="nx"&gt;ManagedBy&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"OpenTofu"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# Create two public subnets&lt;/span&gt;
&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_subnet"&lt;/span&gt; &lt;span class="s2"&gt;"public"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;count&lt;/span&gt;                   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;
  &lt;span class="nx"&gt;vpc_id&lt;/span&gt;                  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;main&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;cidr_block&lt;/span&gt;              &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;public_subnet_cidrs&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;count&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;index&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="nx"&gt;availability_zone&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"${var.region}${count.index == 0 ? "&lt;/span&gt;&lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="s2"&gt;" : "&lt;/span&gt;&lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="s2"&gt;"}"&lt;/span&gt;
  &lt;span class="nx"&gt;map_public_ip_on_launch&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

  &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;Name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"${var.environment}-public-subnet-${count.index + 1}"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
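
&lt;p&gt;The configuration above references several input variables that would live in &lt;code&gt;variables.tf&lt;/code&gt;. A minimal sketch of those declarations (the defaults shown are illustrative, not part of the original example):&lt;/p&gt;

```hcl
# Hypothetical variable declarations backing the VPC example above.
variable "vpc_cidr" {
  type        = string
  description = "CIDR block for the VPC"
  default     = "10.0.0.0/16"
}

variable "public_subnet_cidrs" {
  type        = list(string)
  description = "CIDR blocks for the two public subnets"
  default     = ["10.0.1.0/24", "10.0.2.0/24"]
}

variable "region" {
  type    = string
  default = "us-west-2"
}

variable "environment" {
  type    = string
  default = "dev"
}

variable "project_name" {
  type    = string
  default = "demo"
}
```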



&lt;h3&gt;
  
  
  Key Architectural Concepts
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Declarative Resource Definition&lt;/strong&gt;: Define the desired state, and OpenTofu determines how to reach it&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Provider Model&lt;/strong&gt;: Plugins enable interaction with various cloud APIs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;State Management&lt;/strong&gt;: State file maps real-world resources to your configuration&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Plan-Apply Workflow&lt;/strong&gt;: Preview changes before applying them&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  When to Choose OpenTofu
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;When you need compatibility with existing Terraform code&lt;/li&gt;
&lt;li&gt;When you prefer a declarative, configuration-based approach&lt;/li&gt;
&lt;li&gt;When you want an open-source tool with broad provider support&lt;/li&gt;
&lt;li&gt;When state management is acceptable for your use cases&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🤔 &lt;strong&gt;What aspects of OpenTofu's approach most appeal to your team's workflow?&lt;/strong&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  Crossplane: Kubernetes-Native Infrastructure
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://crossplane.io/" rel="noopener noreferrer"&gt;Crossplane&lt;/a&gt; extends Kubernetes by enabling it to provision and manage infrastructure resources across cloud providers. As a CNCF incubating project, it brings cloud resources into the Kubernetes resource model.&lt;/p&gt;

&lt;h3&gt;
  
  
  Project Structure Example
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;my-crossplane-project/
├── apis/                      # Custom resource definitions
├── package/                   # Package configuration
├── examples/                  # Example usage
└── infrastructure/            # Infrastructure definition
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Code Example: PostgreSQL Database in AWS
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Define a PostgreSQL resource claim&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;database.example.org/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PostgreSQLInstance&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;production-db&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;parameters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;storageGB&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;50&lt;/span&gt;
    &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;13"&lt;/span&gt;
    &lt;span class="na"&gt;tier&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;medium"&lt;/span&gt;
  &lt;span class="na"&gt;writeConnectionSecretToRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;production-db-conn&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
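
&lt;p&gt;A claim like the one above only works because a platform team has already defined a matching API on the cluster. A sketch of the CompositeResourceDefinition (XRD) that could expose the &lt;code&gt;PostgreSQLInstance&lt;/code&gt; claim follows; the group and field names mirror the claim and are illustrative:&lt;/p&gt;

```yaml
# Hypothetical XRD that would make the PostgreSQLInstance claim available.
apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
  name: xpostgresqlinstances.database.example.org
spec:
  group: database.example.org
  names:
    kind: XPostgreSQLInstance
    plural: xpostgresqlinstances
  claimNames:
    kind: PostgreSQLInstance
    plural: postgresqlinstances
  versions:
    - name: v1alpha1
      served: true
      referenceable: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                parameters:
                  type: object
                  properties:
                    storageGB:
                      type: integer
                    version:
                      type: string
                    tier:
                      type: string
```

&lt;p&gt;A Composition then maps these abstract parameters onto concrete managed resources (for example, an AWS RDS instance), keeping cloud-specific detail out of the developer-facing claim.&lt;/p&gt;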



&lt;h3&gt;
  
  
  Key Architectural Concepts
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Kubernetes Extension&lt;/strong&gt;: Extends K8s API with custom resources for infrastructure&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Composition-Based&lt;/strong&gt;: Complex infrastructure composed from individual managed resources&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Claim-Based Model&lt;/strong&gt;: Developers "claim" infrastructure without needing to understand the underlying implementation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Continuous Reconciliation&lt;/strong&gt;: Controllers ensure actual state matches desired state&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  When to Choose Crossplane
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;When you're already using Kubernetes for application deployment&lt;/li&gt;
&lt;li&gt;When you want to avoid state file management&lt;/li&gt;
&lt;li&gt;When you need a continuous reconciliation model&lt;/li&gt;
&lt;li&gt;When you want to provide self-service infrastructure to developers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🤔 &lt;strong&gt;If you're using Kubernetes, how would continuous reconciliation benefit your infrastructure management?&lt;/strong&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  Pulumi: Infrastructure as Actual Code
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.pulumi.com/" rel="noopener noreferrer"&gt;Pulumi&lt;/a&gt; allows defining infrastructure using general-purpose programming languages like Python, TypeScript, and Go. This approach brings the full power of software development to infrastructure management.&lt;/p&gt;

&lt;h3&gt;
  
  
  Project Structure Example
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;my-pulumi-project/
├── __main__.py          # Main infrastructure code
├── Pulumi.yaml          # Project configuration
├── infrastructure/      # Infrastructure components
└── tests/               # Infrastructure tests
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Code Example: AWS VPC in Python
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;pulumi&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;pulumi_aws&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;aws&lt;/span&gt;

&lt;span class="c1"&gt;# Create a VPC and subnets
&lt;/span&gt;&lt;span class="n"&gt;vpc&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;aws&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ec2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Vpc&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;my-vpc&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;cidr_block&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;10.0.0.0/16&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;enable_dns_hostnames&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;enable_dns_support&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;tags&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;my-vpc&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Create a public subnet
&lt;/span&gt;&lt;span class="n"&gt;public_subnet&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;aws&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ec2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Subnet&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;public-subnet&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;vpc_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;cidr_block&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;10.0.1.0/24&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;availability_zone&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;us-west-2a&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;map_public_ip_on_launch&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;tags&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;public-subnet&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Export the VPC ID
&lt;/span&gt;&lt;span class="n"&gt;pulumi&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;export&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;vpc_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Key Architectural Concepts
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;General-Purpose Languages&lt;/strong&gt;: Define infrastructure using familiar programming languages&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource Model&lt;/strong&gt;: Resources correspond to cloud provider offerings&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stack Management&lt;/strong&gt;: Different environments managed as separate stacks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Testing Framework&lt;/strong&gt;: Test infrastructure using familiar language testing frameworks&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  When to Choose Pulumi
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;When your team has strong programming skills&lt;/li&gt;
&lt;li&gt;When you need to leverage programming language features (loops, conditionals)&lt;/li&gt;
&lt;li&gt;When you want to use the same language for infrastructure and applications&lt;/li&gt;
&lt;li&gt;When you need comprehensive testing of your infrastructure&lt;/li&gt;
&lt;/ul&gt;
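
&lt;p&gt;The loops-and-conditionals point is where Pulumi differs most from configuration languages. Even without touching a cloud API, plain Python can compute resource inputs that are awkward to express in HCL. For example, deriving per-AZ subnet CIDRs with the standard library (the CIDR and zone names here are illustrative):&lt;/p&gt;

```python
import ipaddress

# Carve one /24 subnet per availability zone out of a VPC CIDR --
# ordinary Python that a Pulumi program can use to feed resource arguments.
vpc_cidr = ipaddress.ip_network("10.0.0.0/16")
azs = ["us-west-2a", "us-west-2b"]

subnets = {
    az: str(block)
    for az, block in zip(azs, vpc_cidr.subnets(new_prefix=24))
}
print(subnets)  # {'us-west-2a': '10.0.0.0/24', 'us-west-2b': '10.0.1.0/24'}
```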

&lt;p&gt;&lt;strong&gt;🤔 Would using a familiar programming language improve your team's productivity with infrastructure code?&lt;/strong&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  Choosing the Right Tool
&lt;/h2&gt;

&lt;p&gt;Use this decision flow to help select the most appropriate tool for your situation:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;If you need...&lt;/th&gt;
&lt;th&gt;Then consider...&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Compatibility with existing Terraform code&lt;/td&gt;
&lt;td&gt;OpenTofu&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Kubernetes-native infrastructure management&lt;/td&gt;
&lt;td&gt;Crossplane&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Infrastructure defined in a programming language&lt;/td&gt;
&lt;td&gt;Pulumi&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Open-source solution with broad community support&lt;/td&gt;
&lt;td&gt;OpenTofu or Crossplane&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Avoiding state file management&lt;/td&gt;
&lt;td&gt;Crossplane&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Advanced testing capabilities&lt;/td&gt;
&lt;td&gt;Pulumi&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Decision Flowchart
&lt;/h3&gt;

&lt;p&gt;Consider these questions to determine which tool is right for you:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Are you already invested in Kubernetes?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Yes → Crossplane is likely a good fit&lt;/li&gt;
&lt;li&gt;No → Continue to question 2&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Do you have existing Terraform configurations?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Yes → OpenTofu provides the smoothest transition&lt;/li&gt;
&lt;li&gt;No → Continue to question 3&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Does your team prefer programming languages over configuration languages?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Yes → Pulumi is probably your best choice&lt;/li&gt;
&lt;li&gt;No → OpenTofu offers a simpler learning curve&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
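
&lt;p&gt;The three questions above reduce to a small decision function. This is only a sketch of the flow, not a substitute for evaluating your own constraints:&lt;/p&gt;

```python
def recommend_iac_tool(uses_kubernetes: bool,
                       has_terraform_code: bool,
                       prefers_programming: bool) -> str:
    """Encode the three-question decision flow above."""
    if uses_kubernetes:
        return "Crossplane"
    if has_terraform_code:
        return "OpenTofu"
    return "Pulumi" if prefers_programming else "OpenTofu"

print(recommend_iac_tool(False, True, False))  # OpenTofu
```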


&lt;h2&gt;
  
  
  Quick Start Guide
&lt;/h2&gt;

&lt;h3&gt;
  
  
  OpenTofu Quick Start
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Installation:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew &lt;span class="nb"&gt;install &lt;/span&gt;opentofu/tap/opentofu
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;First Project:&lt;/strong&gt; &lt;a href="https://opentofu.org/docs/intro" rel="noopener noreferrer"&gt;OpenTofu Getting Started Tutorial&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Community Resources:&lt;/strong&gt; &lt;a href="https://community.opentofu.org/" rel="noopener noreferrer"&gt;OpenTofu Community Forum&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Crossplane Quick Start
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Installation:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create namespace crossplane-system
helm &lt;span class="nb"&gt;install &lt;/span&gt;crossplane &lt;span class="nt"&gt;--namespace&lt;/span&gt; crossplane-system crossplane-stable/crossplane
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;First Project:&lt;/strong&gt; &lt;a href="https://docs.crossplane.io/latest/getting-started/" rel="noopener noreferrer"&gt;Crossplane Getting Started Guide&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Pulumi Quick Start
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Installation:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://get.pulumi.com | sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;First Project:&lt;/strong&gt; &lt;a href="https://www.pulumi.com/docs/get-started/" rel="noopener noreferrer"&gt;Pulumi Getting Started Guide&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Common Pitfalls to Avoid
&lt;/h2&gt;

&lt;h3&gt;
  
  
  OpenTofu Pitfalls
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;State File Corruption&lt;/strong&gt;: Always use remote state storage with proper locking.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Provider Version Conflicts&lt;/strong&gt;: Pin provider versions explicitly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Module Management&lt;/strong&gt;: Structure modules for reusability without excessive nesting.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Blast Radius&lt;/strong&gt;: Use workspace separation for critical environments.&lt;/li&gt;
&lt;/ul&gt;
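
&lt;p&gt;The first two pitfalls are usually addressed in a single settings block. A hedged sketch, in which the bucket, table name, and version constraints are placeholders:&lt;/p&gt;

```hcl
terraform {
  required_version = ">= 1.6"

  # Pin provider versions explicitly to avoid surprise upgrades.
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  # Remote state with locking; bucket and table names are placeholders.
  backend "s3" {
    bucket         = "my-tofu-state"
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "tofu-state-lock"
    encrypt        = true
  }
}
```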

&lt;h3&gt;
  
  
  Crossplane Pitfalls
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;RBAC Complexity&lt;/strong&gt;: Carefully plan your permissions model from the start.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Composition Learning Curve&lt;/strong&gt;: Start with simple compositions before tackling complex ones.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource Drift&lt;/strong&gt;: Understand how reconciliation handles manual changes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;API Versioning&lt;/strong&gt;: Plan for provider API version changes.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Pulumi Pitfalls
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Language-Specific Issues&lt;/strong&gt;: Consider team expertise when choosing a programming language.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;State Management&lt;/strong&gt;: Handle secrets properly in your state storage.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Import Existing Resources&lt;/strong&gt;: Have a strategy for importing existing infrastructure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dependency Management&lt;/strong&gt;: Address language-specific package management challenges.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Infrastructure provisioning has come a long way, with IaC tools playing a crucial role in automation and scalability. OpenTofu, Crossplane, and Pulumi each offer unique approaches, making them suitable for different use cases.&lt;/p&gt;

&lt;p&gt;The best tool for your organization depends on your team's skills, existing investments, and specific requirements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;OpenTofu&lt;/strong&gt;: Best for teams with Terraform experience seeking an open-source alternative&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Crossplane&lt;/strong&gt;: Ideal for Kubernetes-centric organizations wanting unified management&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pulumi&lt;/strong&gt;: Perfect for teams wanting to leverage programming language capabilities&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As you evaluate these tools, consider starting with small, non-critical infrastructure components to gain experience before wider adoption.&lt;/p&gt;

&lt;p&gt;What IaC tool are you using? Let us know your thoughts in the comments!&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Further Reading:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://opentofu.org/docs" rel="noopener noreferrer"&gt;OpenTofu Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.crossplane.io/v1.19/learn/" rel="noopener noreferrer"&gt;Crossplane Architecture&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.pulumi.com/docs/iac/concepts/how-pulumi-works/" rel="noopener noreferrer"&gt;Pulumi Programming Model&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://landscape.cncf.io/card-mode?category=provisioning" rel="noopener noreferrer"&gt;CNCF Landscape: Provisioning Tools&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>devops</category>
      <category>crossplane</category>
      <category>infrastructureascode</category>
      <category>terraform</category>
    </item>
    <item>
      <title>From Zero to Hosted: Building a Static Website Platform with Pulumi and MinIO</title>
      <dc:creator>shashankpai</dc:creator>
      <pubDate>Wed, 02 Apr 2025 15:56:20 +0000</pubDate>
      <link>https://forem.com/shashankpai/from-zero-to-hosted-building-a-static-website-platform-with-pulumi-and-minio-1n65</link>
      <guid>https://forem.com/shashankpai/from-zero-to-hosted-building-a-static-website-platform-with-pulumi-and-minio-1n65</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Have you ever wanted to create your own cloud infrastructure at home? Whether you're looking to learn more about cloud technologies, test deployments before pushing to production, or simply set up a personal storage solution, a home lab environment can be incredibly valuable. In this guide, I'll walk you through setting up a &lt;strong&gt;HomeLab MiniCloud&lt;/strong&gt; using &lt;strong&gt;Pulumi&lt;/strong&gt; for infrastructure as code (IaC) and &lt;strong&gt;Docker&lt;/strong&gt; with &lt;strong&gt;MinIO&lt;/strong&gt; as our object storage service.&lt;/p&gt;

&lt;h2&gt;
  
  
  What You'll Learn
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv4f3t3d6ilq9zpk0uw3v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv4f3t3d6ilq9zpk0uw3v.png" alt="Image description" width="800" height="546"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How to set up Pulumi for infrastructure as code&lt;/li&gt;
&lt;li&gt;Deploying MinIO (an S3-compatible object storage) using Docker&lt;/li&gt;
&lt;li&gt;Configuring NGINX as a reverse proxy with SSL&lt;/li&gt;
&lt;li&gt;Hosting a static website from your MinIO instance&lt;/li&gt;
&lt;li&gt;Troubleshooting common issues&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before we dive in, let's make sure you have all the necessary dependencies installed on your system. We'll need Docker for containerization, Python for our Pulumi code, and a few other utilities:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  docker.io &lt;span class="se"&gt;\&lt;/span&gt;
  python3.10 &lt;span class="se"&gt;\&lt;/span&gt;
  python3.10-venv &lt;span class="se"&gt;\&lt;/span&gt;
  curl &lt;span class="se"&gt;\&lt;/span&gt;
  unzip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  What is Pulumi?
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.pulumi.com/" rel="noopener noreferrer"&gt;Pulumi&lt;/a&gt; is an open-source infrastructure as code (IaC) tool that allows you to define and manage cloud infrastructure using familiar programming languages rather than domain-specific languages. In our case, we'll use Python to define our infrastructure, which means we can leverage Python's full feature set, including loops, conditionals, and functions, making our infrastructure code more flexible and maintainable.&lt;/p&gt;

&lt;h3&gt;
  
  
  Installing Pulumi
&lt;/h3&gt;

&lt;p&gt;Let's install Pulumi using their official installation script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://get.pulumi.com | sh
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;PATH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$PATH&lt;/span&gt;:&lt;span class="nv"&gt;$HOME&lt;/span&gt;/.pulumi/bin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To make the Pulumi command available in your shell permanently, add it to your shell configuration file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s1"&gt;'export PATH=$HOME/.pulumi/bin:$PATH'&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; ~/.bashrc
&lt;span class="nb"&gt;source&lt;/span&gt; ~/.bashrc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Setting Up Our MiniCloud Infrastructure
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Creating the Project Directory
&lt;/h3&gt;

&lt;p&gt;Let's start by creating a directory for our project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; ~/Home-Lab/Pulumi/homelab-minicloud
&lt;span class="nb"&gt;cd&lt;/span&gt; ~/Home-Lab/Pulumi/homelab-minicloud
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. Initializing a Pulumi Project
&lt;/h3&gt;

&lt;p&gt;Now, let's initialize a new Pulumi project with Python as our language of choice:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pulumi new python &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command creates a new Pulumi project with the necessary scaffolding to get started.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Installing Dependencies
&lt;/h3&gt;

&lt;p&gt;Navigate to the infrastructure directory and install the required Python packages:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;infra
pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Better still, I recommend creating a virtual environment first so the project's dependencies stay isolated from your system Python, then installing inside it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python3 &lt;span class="nt"&gt;-m&lt;/span&gt; venv venv
&lt;span class="nb"&gt;source &lt;/span&gt;venv/bin/activate
pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  4. Installing the Pulumi Docker Provider
&lt;/h3&gt;

&lt;p&gt;Since we'll be deploying Docker containers, we need to install the Pulumi Docker provider:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;pulumi_docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  5. Defining Our MinIO Infrastructure
&lt;/h3&gt;

&lt;p&gt;Now comes the exciting part! Let's define our MinIO infrastructure by editing the &lt;code&gt;__main__.py&lt;/code&gt; file in the &lt;code&gt;infra&lt;/code&gt; directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;A Python Pulumi program&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;

&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;pulumi&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;pulumi_docker&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;docker&lt;/span&gt;

&lt;span class="c1"&gt;# MinIO credentials
&lt;/span&gt;&lt;span class="n"&gt;minio_access_key&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;minioadmin&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;minio_secret_key&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;minioadmin&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="c1"&gt;# Create a shared Docker network
&lt;/span&gt;&lt;span class="n"&gt;network&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;docker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Network&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;homelab-network&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;homelab-network&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Pull MinIO image
&lt;/span&gt;&lt;span class="n"&gt;minio_image&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;docker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;RemoteImage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;minio-image&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;minio/minio:latest&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;keep_locally&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Run MinIO container
&lt;/span&gt;&lt;span class="n"&gt;minio_container&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;docker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Container&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;minio-container&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;minio_image&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;repo_digest&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;minio&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;ports&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="n"&gt;docker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;ContainerPortArgs&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;internal&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;9000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;external&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;9000&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="n"&gt;docker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;ContainerPortArgs&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;internal&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;9001&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;external&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;9001&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;envs&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;MINIO_ROOT_USER=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;minio_access_key&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;MINIO_ROOT_PASSWORD=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;minio_secret_key&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;command&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;server&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/data&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;--console-address&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;:9001&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;volumes&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="n"&gt;docker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;ContainerVolumeArgs&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;host_path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/home/arjun//minio/data&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;container_path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/data&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;networks_advanced&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;docker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;ContainerNetworksAdvancedArgs&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;network&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;)],&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;pulumi&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;export&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;minio_container_name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;minio_container&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's break down this code:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We define MinIO credentials (in a production environment, you'd want to use Pulumi's secrets management)&lt;/li&gt;
&lt;li&gt;We create a Docker network for our containers to communicate&lt;/li&gt;
&lt;li&gt;We pull the latest MinIO image&lt;/li&gt;
&lt;li&gt;We define and run the MinIO container with:

&lt;ul&gt;
&lt;li&gt;Port mappings (9000 for the API, 9001 for the web console)&lt;/li&gt;
&lt;li&gt;Environment variables for credentials&lt;/li&gt;
&lt;li&gt;Volume mapping to persist data&lt;/li&gt;
&lt;li&gt;Network configuration&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
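&lt;p&gt;On point 1: Pulumi's secrets management can hold those credentials encrypted in the stack config instead of hard-coding them. A sketch of that approach; the config key names here are my own choice, not part of the original setup:&lt;/p&gt;

```shell
# Store credentials in the stack config; --secret encrypts the value at rest.
# (Key names are arbitrary; pick whatever fits your project.)
pulumi config set minio_access_key minioadmin
pulumi config set --secret minio_secret_key 'use-a-strong-password-here'
```

&lt;p&gt;Inside &lt;code&gt;__main__.py&lt;/code&gt; you would then read them with &lt;code&gt;pulumi.Config().require("minio_access_key")&lt;/code&gt; and &lt;code&gt;pulumi.Config().require_secret("minio_secret_key")&lt;/code&gt;, so the secret value is never stored in plain text in the stack state.&lt;/p&gt;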

&lt;h3&gt;
  
  
  6. Deploying MinIO
&lt;/h3&gt;

&lt;p&gt;Now let's deploy our infrastructure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;infra
pulumi up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command will:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Preview the changes Pulumi will make&lt;/li&gt;
&lt;li&gt;Ask for confirmation&lt;/li&gt;
&lt;li&gt;Deploy the infrastructure according to our code&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After deployment, you can verify that MinIO is running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker ps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see output similar to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CONTAINER ID   IMAGE       COMMAND     STATUS    PORTS           NAMES
xxxxxxxxxxxx   minio/minio "..."       Up       0.0.0.0:9000-&amp;gt;9000/tcp   minio
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  7. Accessing MinIO
&lt;/h3&gt;

&lt;p&gt;Once deployed, you can access the MinIO console at &lt;a href="http://localhost:9001" rel="noopener noreferrer"&gt;http://localhost:9001&lt;/a&gt; with the following credentials:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Username:&lt;/strong&gt; &lt;code&gt;minioadmin&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Password:&lt;/strong&gt; &lt;code&gt;minioadmin&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Enhancing Our Setup with NGINX and SSL
&lt;/h2&gt;

&lt;p&gt;Now that we have MinIO running, let's enhance our setup by adding NGINX as a reverse proxy and implementing SSL for secure connections.&lt;/p&gt;

&lt;h3&gt;
  
  
  Generating a Self-Signed Certificate with SAN
&lt;/h3&gt;

&lt;p&gt;For development purposes, we'll create a self-signed SSL certificate with Subject Alternative Name (SAN) support. First, create an OpenSSL configuration file named &lt;code&gt;cert.conf&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ini"&gt;&lt;code&gt;&lt;span class="nn"&gt;[req]&lt;/span&gt;
&lt;span class="py"&gt;default_bits&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;2048&lt;/span&gt;
&lt;span class="py"&gt;distinguished_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;req_distinguished_name&lt;/span&gt;
&lt;span class="py"&gt;req_extensions&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;req_ext&lt;/span&gt;
&lt;span class="py"&gt;x509_extensions&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;v3_ca&lt;/span&gt;
&lt;span class="py"&gt;prompt&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;no&lt;/span&gt;

&lt;span class="nn"&gt;[req_distinguished_name]&lt;/span&gt;
&lt;span class="py"&gt;C&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;IN&lt;/span&gt;
&lt;span class="py"&gt;ST&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;Karnataka&lt;/span&gt;
&lt;span class="py"&gt;L&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;Bangalore&lt;/span&gt;
&lt;span class="py"&gt;O&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;HomeLab&lt;/span&gt;
&lt;span class="py"&gt;OU&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;IT&lt;/span&gt;
&lt;span class="py"&gt;CN&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;minio.local&lt;/span&gt;

&lt;span class="nn"&gt;[req_ext]&lt;/span&gt;
&lt;span class="py"&gt;subjectAltName&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;@alt_names&lt;/span&gt;

&lt;span class="nn"&gt;[v3_ca]&lt;/span&gt;
&lt;span class="py"&gt;subjectAltName&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;@alt_names&lt;/span&gt;
&lt;span class="py"&gt;basicConstraints&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;critical, CA:true&lt;/span&gt;

&lt;span class="nn"&gt;[alt_names]&lt;/span&gt;
&lt;span class="py"&gt;DNS.1&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;minio.local&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, generate the certificate and key:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openssl req &lt;span class="nt"&gt;-x509&lt;/span&gt; &lt;span class="nt"&gt;-nodes&lt;/span&gt; &lt;span class="nt"&gt;-days&lt;/span&gt; 365 &lt;span class="se"&gt;\ &lt;/span&gt; 
  &lt;span class="nt"&gt;-newkey&lt;/span&gt; rsa:2048 &lt;span class="se"&gt;\ &lt;/span&gt; 
  &lt;span class="nt"&gt;-keyout&lt;/span&gt; minio.key &lt;span class="se"&gt;\ &lt;/span&gt; 
  &lt;span class="nt"&gt;-out&lt;/span&gt; minio.crt &lt;span class="se"&gt;\ &lt;/span&gt; 
  &lt;span class="nt"&gt;-config&lt;/span&gt; cert.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates two files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;minio.crt&lt;/code&gt; - The certificate&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;minio.key&lt;/code&gt; - The private key&lt;/li&gt;
&lt;/ul&gt;
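&lt;p&gt;Browsers and S3 clients validate the SAN entries rather than the CN, so it's worth confirming the SAN actually made it into the certificate. A sketch assuming OpenSSL 1.1.1 or newer (it generates a throwaway certificate under &lt;code&gt;/tmp&lt;/code&gt; so it runs anywhere; substitute your real &lt;code&gt;minio.crt&lt;/code&gt; in the last command to check the actual certificate):&lt;/p&gt;

```shell
# Self-contained sanity check: create a throwaway cert with a SAN, then
# inspect it. -addext and -ext require OpenSSL 1.1.1+.
openssl req -x509 -nodes -days 1 -newkey rsa:2048 \
  -keyout /tmp/check.key -out /tmp/check.crt \
  -subj "/CN=minio.local" \
  -addext "subjectAltName=DNS:minio.local"

# The output must list DNS:minio.local, or clients will reject the
# certificate even after you trust it.
openssl x509 -in /tmp/check.crt -noout -ext subjectAltName
```

&lt;p&gt;Run the final &lt;code&gt;openssl x509&lt;/code&gt; line against &lt;code&gt;minio.crt&lt;/code&gt; after generating it with &lt;code&gt;cert.conf&lt;/code&gt; above and confirm the same &lt;code&gt;DNS:minio.local&lt;/code&gt; entry appears.&lt;/p&gt;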

&lt;h3&gt;
  
  
  Setting Up NGINX as a Reverse Proxy
&lt;/h3&gt;

&lt;p&gt;We have two options for configuring NGINX: subpath routing or subdomain split. Let's look at both approaches.&lt;/p&gt;

&lt;h4&gt;
  
  
  Option 1: Subpath Routing
&lt;/h4&gt;

&lt;p&gt;With this approach, we'll serve both the MinIO console and our static site from the same domain but different paths:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight nginx"&gt;&lt;code&gt;&lt;span class="k"&gt;server&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kn"&gt;listen&lt;/span&gt; &lt;span class="mi"&gt;443&lt;/span&gt; &lt;span class="s"&gt;ssl&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;server_name&lt;/span&gt; &lt;span class="s"&gt;minio.local&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="kn"&gt;ssl_certificate&lt;/span&gt;     &lt;span class="n"&gt;/etc/nginx/ssl/minio.crt&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;ssl_certificate_key&lt;/span&gt; &lt;span class="n"&gt;/etc/nginx/ssl/minio.key&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="c1"&gt;# MinIO Console&lt;/span&gt;
    &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="n"&gt;/&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_pass&lt;/span&gt; &lt;span class="s"&gt;http://minio:9001/&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;Host&lt;/span&gt; &lt;span class="nv"&gt;$host&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;X-Real-IP&lt;/span&gt; &lt;span class="nv"&gt;$remote_addr&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;X-Forwarded-For&lt;/span&gt; &lt;span class="nv"&gt;$proxy_add_x_forwarded_for&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;X-Forwarded-Proto&lt;/span&gt; &lt;span class="nv"&gt;$scheme&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="c1"&gt;# Static Site&lt;/span&gt;
    &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="n"&gt;/static/&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_pass&lt;/span&gt; &lt;span class="s"&gt;http://minio:9000/static-site/&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_ssl_verify&lt;/span&gt; &lt;span class="no"&gt;off&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;Host&lt;/span&gt; &lt;span class="nv"&gt;$host&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;X-Real-IP&lt;/span&gt; &lt;span class="nv"&gt;$remote_addr&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;X-Forwarded-For&lt;/span&gt; &lt;span class="nv"&gt;$proxy_add_x_forwarded_for&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;X-Forwarded-Proto&lt;/span&gt; &lt;span class="nv"&gt;$scheme&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this configuration:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The MinIO console will be accessible at &lt;code&gt;https://minio.local/&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;The static site will be accessible at &lt;code&gt;https://minio.local/static/&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Option 2: Subdomain Split
&lt;/h4&gt;

&lt;p&gt;Alternatively, we can use different subdomains for the MinIO console and the static site:&lt;/p&gt;

&lt;p&gt;First, update your &lt;code&gt;/etc/hosts&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;127.0.0.1 minio.local static.local
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, configure NGINX:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight nginx"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Static site on static.local&lt;/span&gt;
&lt;span class="k"&gt;server&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kn"&gt;listen&lt;/span&gt; &lt;span class="mi"&gt;443&lt;/span&gt; &lt;span class="s"&gt;ssl&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;server_name&lt;/span&gt; &lt;span class="s"&gt;static.local&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="kn"&gt;ssl_certificate&lt;/span&gt;     &lt;span class="n"&gt;/etc/nginx/ssl/minio.crt&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;ssl_certificate_key&lt;/span&gt; &lt;span class="n"&gt;/etc/nginx/ssl/minio.key&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="n"&gt;/&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_pass&lt;/span&gt; &lt;span class="s"&gt;http://minio:9000/static-site/&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_ssl_verify&lt;/span&gt; &lt;span class="no"&gt;off&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# MinIO Console on minio.local&lt;/span&gt;
&lt;span class="k"&gt;server&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kn"&gt;listen&lt;/span&gt; &lt;span class="mi"&gt;443&lt;/span&gt; &lt;span class="s"&gt;ssl&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;server_name&lt;/span&gt; &lt;span class="s"&gt;minio.local&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="kn"&gt;ssl_certificate&lt;/span&gt;     &lt;span class="n"&gt;/etc/nginx/ssl/minio.crt&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;ssl_certificate_key&lt;/span&gt; &lt;span class="n"&gt;/etc/nginx/ssl/minio.key&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="n"&gt;/&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_pass&lt;/span&gt; &lt;span class="s"&gt;http://minio:9001/&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this configuration:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The MinIO console will be accessible at &lt;code&gt;https://minio.local/&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;The static site will be accessible at &lt;code&gt;https://static.local/&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After configuring NGINX, check the configuration and reload:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;nginx &lt;span class="nt"&gt;-t&lt;/span&gt;  &lt;span class="c"&gt;# Check config&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl reload nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Deploying a Static Website to MinIO
&lt;/h2&gt;

&lt;p&gt;Now that we have MinIO and NGINX set up, let's deploy a static website to our MinIO bucket and make it publicly accessible.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setting Up the MinIO Client (mc)
&lt;/h3&gt;

&lt;p&gt;The MinIO Client (mc) is a command-line tool for working with MinIO and other S3-compatible storage services. Let's create a script to upload our static site to MinIO:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;

&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt;

&lt;span class="c"&gt;# Connect to our MinIO instance (using --insecure for self-signed certificates)&lt;/span&gt;
mc &lt;span class="nb"&gt;alias set local &lt;/span&gt;https://minio.local minioadmin minioadmin &lt;span class="nt"&gt;--insecure&lt;/span&gt;

&lt;span class="c"&gt;# Create a bucket for our static site (if it doesn't exist)&lt;/span&gt;
mc mb &lt;span class="nb"&gt;local&lt;/span&gt;/static-site &lt;span class="nt"&gt;--insecure&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;true&lt;/span&gt;

&lt;span class="c"&gt;# Set the bucket to allow anonymous (public) downloads&lt;/span&gt;
mc anonymous &lt;span class="nb"&gt;set &lt;/span&gt;download &lt;span class="nb"&gt;local&lt;/span&gt;/static-site &lt;span class="nt"&gt;--insecure&lt;/span&gt;

&lt;span class="c"&gt;# Upload our static site files&lt;/span&gt;
mc &lt;span class="nb"&gt;cp&lt;/span&gt; &lt;span class="nt"&gt;--recursive&lt;/span&gt; ../static-site/ &lt;span class="nb"&gt;local&lt;/span&gt;/static-site &lt;span class="nt"&gt;--insecure&lt;/span&gt;

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"✅ Static site uploaded to MinIO!"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Handling Self-Signed Certificate Issues
&lt;/h3&gt;

&lt;p&gt;Since we're using a self-signed certificate, you might encounter certificate verification errors. Here are two ways to handle them:&lt;/p&gt;

&lt;h4&gt;
  
  
  Option 1: Using the &lt;code&gt;--insecure&lt;/code&gt; Flag
&lt;/h4&gt;

&lt;p&gt;As shown in the script above, you can use the &lt;code&gt;--insecure&lt;/code&gt; flag to skip certificate verification. This is not recommended for production but is fine for a development environment.&lt;/p&gt;

&lt;h4&gt;
  
  
  Option 2: Trusting the Self-Signed Certificate
&lt;/h4&gt;

&lt;p&gt;A better approach is to add the certificate to your system's trusted certificates:&lt;/p&gt;

&lt;p&gt;For Ubuntu/Debian:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo cp &lt;/span&gt;minio.crt /usr/local/share/ca-certificates/minio.crt
&lt;span class="nb"&gt;sudo &lt;/span&gt;update-ca-certificates
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For RedHat-based systems:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo cp &lt;/span&gt;minio.crt /etc/pki/ca-trust/source/anchors/minio.crt
&lt;span class="nb"&gt;sudo &lt;/span&gt;update-ca-trust extract
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After adding the certificate, restart your terminal and try the script without the &lt;code&gt;--insecure&lt;/code&gt; flag.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configuring MinIO for Website Hosting
&lt;/h3&gt;

&lt;p&gt;MinIO has built-in support for static website hosting. Let's configure it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;mc &lt;span class="nb"&gt;alias set local &lt;/span&gt;https://minio.local
mc anonymous &lt;span class="nb"&gt;set &lt;/span&gt;download &lt;span class="nb"&gt;local&lt;/span&gt;/static-site
mc website &lt;span class="nb"&gt;set local&lt;/span&gt;/static-site &lt;span class="nt"&gt;--index&lt;/span&gt; index.html &lt;span class="nt"&gt;--error&lt;/span&gt; index.html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this configuration, MinIO will serve &lt;code&gt;index.html&lt;/code&gt; for directory requests and as the error page.&lt;/p&gt;
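&lt;p&gt;Conceptually, the index/error behavior maps request paths onto object keys. The sketch below is only an illustration of that routing rule, not MinIO's actual implementation:&lt;/p&gt;

```python
# Illustrative sketch of the index/error fallback configured above.
# Not MinIO's real code; it just models the routing rule.
def resolve_key(path, objects, index="index.html", error="index.html"):
    """Map a request path to the object key a static-website host serves."""
    key = path.lstrip("/")
    if key == "" or key.endswith("/"):
        key += index                  # directory request -> serve the index
    if key in objects:
        return key
    return error                      # missing object -> serve the error page

objects = {"index.html", "about/index.html", "css/site.css"}
print(resolve_key("/", objects))             # index.html
print(resolve_key("/about/", objects))       # about/index.html
print(resolve_key("/missing.png", objects))  # index.html (error page)
```

&lt;p&gt;Since we set both &lt;code&gt;--index&lt;/code&gt; and &lt;code&gt;--error&lt;/code&gt; to &lt;code&gt;index.html&lt;/code&gt;, a single-page site gets its entry point back for every unknown path, which is exactly what client-side routing wants.&lt;/p&gt;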

&lt;h2&gt;
  
  
  Troubleshooting Common Issues
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Issue: ModuleNotFoundError: No module named 'pulumi_docker'
&lt;/h3&gt;

&lt;p&gt;If you encounter this error, follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Ensure you are inside the &lt;code&gt;infra&lt;/code&gt; directory&lt;/li&gt;
&lt;li&gt;Activate the virtual environment:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   &lt;span class="nb"&gt;source &lt;/span&gt;venv/bin/activate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;Reinstall the Pulumi Docker provider:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   pip &lt;span class="nb"&gt;install &lt;/span&gt;pulumi_docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="4"&gt;
&lt;li&gt;Verify the installation:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   pip list | &lt;span class="nb"&gt;grep &lt;/span&gt;pulumi_docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="5"&gt;
&lt;li&gt;Try running &lt;code&gt;pulumi up&lt;/code&gt; again&lt;/li&gt;
&lt;/ol&gt;
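&lt;p&gt;The first three checks can be bundled into a small stdlib-only diagnostic (a sketch; run it with the same interpreter you use for Pulumi):&lt;/p&gt;

```python
import importlib.util
import sys

def env_status(module="pulumi_docker"):
    """Check whether a virtualenv is active and whether a module is importable."""
    in_venv = sys.prefix != sys.base_prefix           # True after `source venv/bin/activate`
    installed = importlib.util.find_spec(module) is not None
    return in_venv, installed

in_venv, installed = env_status()
if not in_venv:
    print("activate the virtual environment first: source venv/bin/activate")
elif not installed:
    print("missing module: run pip install pulumi_docker")
else:
    print("environment looks ready for pulumi up")
```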

&lt;h3&gt;
  
  
  Issue: Pulumi asks for a passphrase
&lt;/h3&gt;

&lt;p&gt;Pulumi encrypts secrets. If prompted for a passphrase, you can set an environment variable to avoid re-entering it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;PULUMI_CONFIG_PASSPHRASE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"your-passphrase"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
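&lt;p&gt;If you drive Pulumi from a script or CI job, the same variable can be injected per invocation rather than exported in your shell. A minimal sketch (the passphrase value is a placeholder):&lt;/p&gt;

```python
import os
import subprocess

def pulumi_env(passphrase: str) -> dict:
    """Build an environment for a non-interactive pulumi invocation."""
    env = os.environ.copy()
    env["PULUMI_CONFIG_PASSPHRASE"] = passphrase
    return env

# Example (not executed here): run `pulumi up` without a passphrase prompt.
# subprocess.run(["pulumi", "up", "--yes"], env=pulumi_env("your-passphrase"), check=True)
```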



&lt;h2&gt;
  
  
  Cleaning Up Resources
&lt;/h2&gt;

&lt;p&gt;When you're done with your HomeLab MiniCloud, you can clean up all resources using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pulumi destroy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will remove all resources created by Pulumi.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Congratulations! You've successfully set up a HomeLab MiniCloud environment with MinIO for object storage, protected it with SSL, and configured it to host a static website. This setup provides a great foundation for learning cloud concepts, testing deployments, or simply running your own personal cloud services.&lt;/p&gt;

&lt;p&gt;Some next steps you might consider:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add more services to your MiniCloud (like a database or a web application)&lt;/li&gt;
&lt;li&gt;Implement proper authentication for production use&lt;/li&gt;
&lt;li&gt;Set up automated backups of your MinIO data&lt;/li&gt;
&lt;li&gt;Explore other Pulumi providers to expand your infrastructure&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Building a home lab is an excellent way to gain hands-on experience with cloud technologies without incurring significant costs. As you grow more comfortable with these tools, you'll find it easier to design, deploy, and manage cloud infrastructure in professional environments as well.&lt;/p&gt;

&lt;p&gt;Happy cloud building!&lt;/p&gt;

</description>
      <category>pulumichallenge</category>
      <category>nginx</category>
      <category>webdev</category>
      <category>devops</category>
    </item>
    <item>
      <title>A Beginner’s Guide to Distributed Tracing with OpenTelemetry and Jaeger</title>
      <dc:creator>shashankpai</dc:creator>
      <pubDate>Thu, 27 Mar 2025 08:08:04 +0000</pubDate>
      <link>https://forem.com/shashankpai/a-beginners-guide-to-distributed-tracing-with-opentelemetry-and-jaeger-fn1</link>
      <guid>https://forem.com/shashankpai/a-beginners-guide-to-distributed-tracing-with-opentelemetry-and-jaeger-fn1</guid>
      <description>&lt;p&gt;Distributed systems are complex, with requests flowing through multiple services. Debugging issues in such systems can be challenging. Distributed tracing, using tools like OpenTelemetry and Jaeger, provides visibility into these interactions, helping us identify performance bottlenecks and troubleshoot effectively.&lt;/p&gt;

&lt;p&gt;In this blog, we'll explore how to instrument a simple Python application with OpenTelemetry and send trace data to Jaeger for visualization.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Why Distributed Tracing?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Distributed tracing captures the lifecycle of a request across multiple services. Key benefits include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;End-to-End Visibility:&lt;/strong&gt; Understand how a request flows through your system.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance Insights:&lt;/strong&gt; Identify slow components and bottlenecks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Error Tracking:&lt;/strong&gt; Pinpoint where failures occur in your application stack.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Overview of the Setup&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;We'll create a simple Python application and configure OpenTelemetry to send trace data to Jaeger. Here's what we’ll cover:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Setting Up Jaeger&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Instrumenting a Python App with OpenTelemetry&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Visualizing Traces in Jaeger&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Dockerizing the Application&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Running the Demo&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Code Explanation&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;1. Setting Up Jaeger&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Jaeger is an open-source tool for tracing and monitoring distributed systems. We’ll use the &lt;strong&gt;all-in-one&lt;/strong&gt; Docker image for simplicity.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Command to Run Jaeger&lt;/strong&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; jaeger &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;COLLECTOR_ZIPKIN_HTTP_PORT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;9411 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-p&lt;/span&gt; 5775:5775 &lt;span class="nt"&gt;-p&lt;/span&gt; 6831:6831/udp &lt;span class="nt"&gt;-p&lt;/span&gt; 6832:6832/udp &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-p&lt;/span&gt; 5778:5778 &lt;span class="nt"&gt;-p&lt;/span&gt; 16686:16686 &lt;span class="nt"&gt;-p&lt;/span&gt; 14250:14250 &lt;span class="nt"&gt;-p&lt;/span&gt; 14268:14268 &lt;span class="se"&gt;\&lt;/span&gt;
  jaegertracing/all-in-one:1.31
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;16686&lt;/code&gt;&lt;/strong&gt;: Jaeger UI (View traces).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;14250&lt;/code&gt;&lt;/strong&gt;: gRPC endpoint (receives traces from OpenTelemetry).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;9411&lt;/code&gt;&lt;/strong&gt;: Zipkin-compatible endpoint.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Why Jaeger?&lt;/strong&gt; It provides a comprehensive solution for distributed tracing with a simple UI for visualizing trace data.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;2. Instrumenting a Python App with OpenTelemetry&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;OpenTelemetry is a set of APIs and tools for collecting telemetry data like traces and metrics. We'll use it to instrument a Python app.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Install OpenTelemetry Libraries&lt;/strong&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;flask opentelemetry-api opentelemetry-sdk &lt;span class="se"&gt;\&lt;/span&gt;
    opentelemetry-exporter-jaeger opentelemetry-instrumentation-flask
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Code: A Simple Flask Application&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;File: &lt;code&gt;app.py&lt;/code&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;flask&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Flask&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;jsonify&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;opentelemetry&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;trace&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;opentelemetry.exporter.jaeger.proto.grpc&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;JaegerExporter&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;opentelemetry.sdk.resources&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Resource&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;opentelemetry.sdk.trace&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;TracerProvider&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;opentelemetry.sdk.trace.export&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;BatchSpanProcessor&lt;/span&gt;

&lt;span class="c1"&gt;# Initialize Flask app
&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Flask&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;__name__&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Configure OpenTelemetry
&lt;/span&gt;&lt;span class="n"&gt;trace&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_tracer_provider&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="nc"&gt;TracerProvider&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;resource&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;Resource&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;service.name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;python_app&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}))&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;tracer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;trace&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_tracer_provider&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;get_tracer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;__name__&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Export traces to Jaeger
&lt;/span&gt;&lt;span class="n"&gt;jaeger_exporter&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;JaegerExporter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;collector_endpoint&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http://localhost:14250&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;insecure&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;span_processor&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;BatchSpanProcessor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;jaeger_exporter&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;trace&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_tracer_provider&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;add_span_processor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;span_processor&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nd"&gt;@app.route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;home&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;tracer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;start_as_current_span&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;home-span&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;jsonify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Welcome to the home page!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nd"&gt;@app.route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/process&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;process&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;tracer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;start_as_current_span&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;process-span&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;jsonify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Processing done!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;__main__&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;host&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;0.0.0.0&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;5000&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Trace Provider:&lt;/strong&gt; Defines the application as &lt;code&gt;python_app&lt;/code&gt; for Jaeger.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Jaeger Exporter:&lt;/strong&gt; Sends traces to Jaeger’s gRPC endpoint.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Spans:&lt;/strong&gt; Represent units of work, like handling a request.&lt;/li&gt;
&lt;/ul&gt;
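&lt;p&gt;To build intuition for what the exporter ships off, here is a deliberately simplified, stdlib-only model of nested spans. This is &lt;em&gt;not&lt;/em&gt; the OpenTelemetry API (real spans also carry trace IDs, timestamps, and attributes), and the child span name below is made up for illustration:&lt;/p&gt;

```python
import contextlib
import time

finished = []            # stands in for the exporter's buffer
_stack = []              # current span ancestry

@contextlib.contextmanager
def span(name):
    """Record a named unit of work and its parent, like a trace span."""
    parent = _stack[-1] if _stack else None
    _stack.append(name)
    start = time.perf_counter()
    try:
        yield
    finally:
        _stack.pop()
        finished.append({"name": name, "parent": parent,
                         "duration_s": time.perf_counter() - start})

with span("home-span"):          # outer request span, as in the Flask route
    with span("render-json"):    # hypothetical child operation
        pass

print(finished)
```

&lt;p&gt;Note that the inner span finishes first, and each record keeps a pointer to its parent; that parent/child chain is exactly what Jaeger renders as a trace tree.&lt;/p&gt;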




&lt;h2&gt;
  
  
  &lt;strong&gt;3. Visualizing Traces in Jaeger&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;After instrumenting the application:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Access the Jaeger UI: &lt;a href="http://localhost:16686" rel="noopener noreferrer"&gt;http://localhost:16686&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Select the service &lt;code&gt;python_app&lt;/code&gt; from the dropdown.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Find Traces&lt;/strong&gt; to view recent traces.&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;4. Dockerizing the Application&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;To simplify deployment, we’ll use Docker Compose to run both Jaeger and the Flask app.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Docker Compose File&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;File: &lt;code&gt;docker-compose.yml&lt;/code&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;3.7'&lt;/span&gt;

&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;jaeger&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;jaegertracing/all-in-one:1.31&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;5775:5775/udp"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;6831:6831/udp"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;6832:6832/udp"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;5778:5778"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;16686:16686"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;14250:14250"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;14268:14268"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;9411:9411"&lt;/span&gt;

  &lt;span class="na"&gt;python_app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;5000:5000"&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;OTEL_EXPORTER_JAEGER_ENDPOINT=http://jaeger:14250&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Dockerfile for Flask App&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;File: &lt;code&gt;Dockerfile&lt;/code&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; python:3.9-slim&lt;/span&gt;

&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; requirements.txt requirements.txt&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;

&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["python", "app.py"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;File: &lt;code&gt;requirements.txt&lt;/code&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;flask
opentelemetry-api
opentelemetry-sdk
opentelemetry-exporter-jaeger
opentelemetry-instrumentation-flask
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  &lt;strong&gt;5. Running the Demo&lt;/strong&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Build and Start the Stack:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   docker-compose up &lt;span class="nt"&gt;--build&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Access the App:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Visit &lt;a href="http://localhost:5000" rel="noopener noreferrer"&gt;http://localhost:5000&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Try &lt;code&gt;/&lt;/code&gt; and &lt;code&gt;/process&lt;/code&gt; routes to generate traces.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;View Traces:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open &lt;a href="http://localhost:16686" rel="noopener noreferrer"&gt;http://localhost:16686&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;&lt;code&gt;python_app&lt;/code&gt;&lt;/strong&gt; and analyze the traces.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Code Explanation&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Spans:&lt;/strong&gt; Represent individual operations in your app (e.g., &lt;code&gt;home-span&lt;/code&gt;, &lt;code&gt;process-span&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Trace Attributes:&lt;/strong&gt; Metadata added to spans for better debugging.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Jaeger Exporter:&lt;/strong&gt; Sends trace data to the Jaeger instance.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In this tutorial, we learned to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Set up Jaeger for distributed tracing.&lt;/li&gt;
&lt;li&gt;Instrument a Python app using OpenTelemetry.&lt;/li&gt;
&lt;li&gt;Use Docker Compose to simplify the setup.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;With this foundation, you can extend the project by adding custom spans, simulating failures, or tracing across microservices. Distributed tracing is a vital skill for debugging and optimizing modern applications.&lt;/p&gt;

</description>
      <category>observability</category>
      <category>docker</category>
      <category>opentelemetry</category>
      <category>devops</category>
    </item>
    <item>
      <title>Building a Simple CDN with NGINX and Docker: A Step-by-Step Guide</title>
      <dc:creator>shashankpai</dc:creator>
      <pubDate>Thu, 27 Mar 2025 08:02:33 +0000</pubDate>
      <link>https://forem.com/shashankpai/building-a-simple-cdn-with-nginx-and-docker-a-step-by-step-guide-lkg</link>
      <guid>https://forem.com/shashankpai/building-a-simple-cdn-with-nginx-and-docker-a-step-by-step-guide-lkg</guid>
      <description>&lt;h3&gt;
  
  
  &lt;strong&gt;Introduction: Why Use a CDN?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;A Content Delivery Network (CDN) is a distributed network of servers that delivers content to users based on their geographic location. CDNs improve website performance, reduce latency, and ensure better availability by caching content closer to end users. In this blog, we'll create a simple CDN setup using &lt;strong&gt;NGINX&lt;/strong&gt; and &lt;strong&gt;Docker&lt;/strong&gt;, simulating an edge and origin server to demonstrate how content caching works.&lt;/p&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;Prerequisites:&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Basic knowledge of Docker and NGINX.&lt;/li&gt;
&lt;li&gt;Docker and Docker Compose installed on your local machine.&lt;/li&gt;
&lt;li&gt;Administrative access to edit the &lt;code&gt;/etc/hosts&lt;/code&gt; file (or &lt;code&gt;C:\Windows\System32\drivers\etc\hosts&lt;/code&gt; on Windows).&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;Step 1: Setup Overview&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8xbfo7a06s8v81dlndu8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8xbfo7a06s8v81dlndu8.png" alt="A setup of what we will try to simulate " width="800" height="464"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We'll simulate the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Origin Server:&lt;/strong&gt; Stores the original content.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Edge Server:&lt;/strong&gt; Caches content from the origin and serves it to clients.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DNS Simulation:&lt;/strong&gt; Redirect traffic to the edge server using a custom domain (&lt;code&gt;cdn.local&lt;/code&gt;) by configuring your local hosts file.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Both servers will run as Docker containers, with the edge server fetching content from the origin server.&lt;/p&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;Step 2: Creating the Origin Server&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The origin server hosts the original content that the CDN will cache.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Create a Dockerfile for the origin server:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="c"&gt;# Dockerfile.origin&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; nginx:latest&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; ./origin/nginx.conf /etc/nginx/conf.d/default.conf&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; ./origin-content /usr/share/nginx/html&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. Prepare the NGINX configuration for the origin server:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight nginx"&gt;&lt;code&gt;&lt;span class="c1"&gt;# origin/nginx.conf&lt;/span&gt;
&lt;span class="k"&gt;server&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kn"&gt;listen&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;server_name&lt;/span&gt; &lt;span class="s"&gt;localhost&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="n"&gt;/&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kn"&gt;root&lt;/span&gt; &lt;span class="n"&gt;/usr/share/nginx/html&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;index&lt;/span&gt; &lt;span class="s"&gt;index.html&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;root&lt;/code&gt; directive specifies the directory containing the website files.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;index&lt;/code&gt; directive sets the default file (&lt;code&gt;index.html&lt;/code&gt;) to serve.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. Prepare the content directory:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create a folder named &lt;code&gt;origin-content&lt;/code&gt; and add a simple HTML file (&lt;code&gt;index.html&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="c"&gt;&amp;lt;!-- origin-content/index.html --&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;h1&amp;gt;&lt;/span&gt;Welcome to the NGINX CDN Origin Server!&lt;span class="nt"&gt;&amp;lt;/h1&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;4. Build and run the origin container:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker build &lt;span class="nt"&gt;-t&lt;/span&gt; origin-server &lt;span class="nt"&gt;-f&lt;/span&gt; Dockerfile.origin &lt;span class="nb"&gt;.&lt;/span&gt;
docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; origin &lt;span class="nt"&gt;-p&lt;/span&gt; 8080:80 origin-server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can access the origin content at:&lt;br&gt;&lt;br&gt;
&lt;code&gt;http://localhost:8080&lt;/code&gt;&lt;/p&gt;


&lt;h3&gt;
  
  
  &lt;strong&gt;Step 3: Creating the Edge Server&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The edge server will cache content from the origin server and serve it to clients.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Create an NGINX configuration for the edge server:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight nginx"&gt;&lt;code&gt;&lt;span class="c1"&gt;# edge/nginx.conf&lt;/span&gt;
&lt;span class="k"&gt;proxy_cache_path&lt;/span&gt; &lt;span class="n"&gt;/var/cache/nginx&lt;/span&gt; &lt;span class="s"&gt;levels=1:2&lt;/span&gt; &lt;span class="s"&gt;keys_zone=my_cache:10m&lt;/span&gt; &lt;span class="s"&gt;max_size=1g&lt;/span&gt; &lt;span class="s"&gt;inactive=60m&lt;/span&gt; &lt;span class="s"&gt;use_temp_path=off&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;server&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kn"&gt;listen&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;server_name&lt;/span&gt; &lt;span class="s"&gt;cdn.local&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="n"&gt;/&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_pass&lt;/span&gt; &lt;span class="s"&gt;http://origin-server&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_cache&lt;/span&gt; &lt;span class="s"&gt;my_cache&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_cache_valid&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt; &lt;span class="mi"&gt;60m&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;add_header&lt;/span&gt; &lt;span class="s"&gt;X-Proxy-Cache&lt;/span&gt; &lt;span class="nv"&gt;$upstream_cache_status&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explanation of Key Directives:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;proxy_cache_path&lt;/code&gt;: Defines the cache storage location and size.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;proxy_pass&lt;/code&gt;: Forwards requests to the origin server.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;proxy_cache&lt;/code&gt;: Activates caching for the defined &lt;code&gt;my_cache&lt;/code&gt; zone.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;X-Proxy-Cache&lt;/code&gt;: Adds a custom header to indicate cache status (&lt;code&gt;HIT&lt;/code&gt;, &lt;code&gt;MISS&lt;/code&gt;, or &lt;code&gt;EXPIRED&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;
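&lt;p&gt;The &lt;code&gt;HIT&lt;/code&gt;/&lt;code&gt;MISS&lt;/code&gt;/&lt;code&gt;EXPIRED&lt;/code&gt; states reported via &lt;code&gt;X-Proxy-Cache&lt;/code&gt; can be modeled with a tiny TTL cache in Python (a sketch of the idea, not NGINX's implementation):&lt;/p&gt;

```python
import time

class TtlCache:
    """Minimal model of proxy_cache_valid: entries expire after ttl_s seconds."""
    def __init__(self, ttl_s):
        self.ttl_s = ttl_s
        self.store = {}   # key -> (value, stored_at)

    def fetch(self, key, origin, now=None):
        now = time.monotonic() if now is None else now
        if key in self.store:
            value, stored_at = self.store[key]
            if now - stored_at < self.ttl_s:
                return value, "HIT"       # fresh copy served from cache
            status = "EXPIRED"            # stale copy: revalidate upstream
        else:
            status = "MISS"               # first request for this key
        value = origin(key)               # go upstream, like proxy_pass
        self.store[key] = (value, now)
        return value, status

cache = TtlCache(ttl_s=3600)                  # mirrors proxy_cache_valid 200 60m
origin = lambda key: "content for " + key     # stand-in for the origin server
print(cache.fetch("/", origin, now=0.0))      # first request
print(cache.fetch("/", origin, now=10.0))     # within the TTL
print(cache.fetch("/", origin, now=7200.0))   # after the TTL
```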

&lt;p&gt;&lt;strong&gt;2. Create a Dockerfile for the edge server:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="c"&gt;# Dockerfile.edge&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; nginx:latest&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; ./edge/nginx.conf /etc/nginx/conf.d/default.conf&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Build and run the edge container:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker build &lt;span class="nt"&gt;-t&lt;/span&gt; edge-server &lt;span class="nt"&gt;-f&lt;/span&gt; Dockerfile.edge &lt;span class="nb"&gt;.&lt;/span&gt;
docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; edge &lt;span class="nt"&gt;-p&lt;/span&gt; 8081:80 &lt;span class="nt"&gt;--link&lt;/span&gt; origin:origin-server edge-server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can access the edge server at:&lt;br&gt;&lt;br&gt;
&lt;code&gt;http://localhost:8081&lt;/code&gt;&lt;/p&gt;


&lt;h3&gt;
  
  
  &lt;strong&gt;Step 4: Configuring DNS Simulation with &lt;code&gt;/etc/hosts&lt;/code&gt;&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;To simulate DNS resolution for our custom domain (&lt;code&gt;cdn.local&lt;/code&gt;), we’ll modify the &lt;code&gt;/etc/hosts&lt;/code&gt; file to map &lt;code&gt;cdn.local&lt;/code&gt; to the edge server.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Edit the &lt;code&gt;/etc/hosts&lt;/code&gt; file:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Open the file with administrative privileges:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;On Linux or macOS:&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  &lt;span class="nb"&gt;sudo &lt;/span&gt;nano /etc/hosts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;On Windows:&lt;/strong&gt;
Open &lt;code&gt;C:\Windows\System32\drivers\etc\hosts&lt;/code&gt; with a text editor as an administrator.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Add the following entry:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;127.0.0.1   cdn.local
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
This maps &lt;code&gt;cdn.local&lt;/code&gt; to &lt;code&gt;127.0.0.1&lt;/code&gt;, which is your local machine. When you access &lt;code&gt;http://cdn.local&lt;/code&gt;, the request will be directed to the edge server running on &lt;code&gt;localhost&lt;/code&gt;.&lt;/p&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;Step 5: Testing the CDN Setup&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Initial Request (Cache MISS):&lt;/strong&gt;
When you first access &lt;code&gt;http://cdn.local:8081&lt;/code&gt;, the edge server fetches the content from the origin. You should see the response header:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   X-Proxy-Cache: MISS
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;
&lt;strong&gt;Subsequent Requests (Cache HIT):&lt;/strong&gt;
Reload the page, and this time the response will be served from the edge server’s cache:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   X-Proxy-Cache: HIT
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
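&lt;p&gt;You can check the cache status from the command line with &lt;code&gt;curl -sI http://cdn.local:8081&lt;/code&gt;. A small helper to pull just the &lt;code&gt;X-Proxy-Cache&lt;/code&gt; value out of a response header block might look like this (the hostname and port assume the setup above):&lt;/p&gt;

```shell
# Extract the X-Proxy-Cache header value from an HTTP response header block.
cache_status() {
  awk -F': ' '{ sub(/\r$/, "") } tolower($1) == "x-proxy-cache" { print $2 }'
}

# Usage (requires the edge container from the earlier steps to be running):
#   curl -sI http://cdn.local:8081 | cache_status
```

&lt;p&gt;Running it twice in a row should print &lt;code&gt;MISS&lt;/code&gt; first and then &lt;code&gt;HIT&lt;/code&gt;.&lt;/p&gt;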






&lt;h3&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In this exercise, we created a basic CDN using NGINX and Docker. We set up an origin server hosting the original content and an edge server that caches and serves content. By configuring DNS simulation with the &lt;code&gt;/etc/hosts&lt;/code&gt; file, we routed traffic to the edge server using a custom domain (&lt;code&gt;cdn.local&lt;/code&gt;). This setup demonstrates how CDNs improve performance by caching content closer to the end user.&lt;/p&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;Future Enhancements:&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Implement SSL/TLS:&lt;/strong&gt; Secure your CDN with HTTPS.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Load Balancing:&lt;/strong&gt; Add multiple edge servers for redundancy.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring and Logging:&lt;/strong&gt; Use tools like Prometheus and Grafana to monitor CDN performance.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>nginx</category>
      <category>docker</category>
      <category>webdev</category>
      <category>devops</category>
    </item>
    <item>
      <title>Automatic Image Update to Git using Flux and GitHub Actions</title>
      <dc:creator>shashankpai</dc:creator>
      <pubDate>Sun, 14 Apr 2024 18:30:00 +0000</pubDate>
      <link>https://forem.com/infracloud/automatic-image-update-to-git-using-flux-and-github-actions-3c4f</link>
      <guid>https://forem.com/infracloud/automatic-image-update-to-git-using-flux-and-github-actions-3c4f</guid>
      <description>&lt;p&gt;Have you ever had to manually update your container images, only to forget to do it or make a mistake? Automatic image updates can help you avoid these problems and ensure that your applications are always running the latest and most secure images. Manual image updates can be time-consuming and error-prone, especially if you have a large number of containerized applications. &lt;/p&gt;

&lt;p&gt;In this blog post, we'll explore the advantages of automatic image updates and explain how to implement them in your environment using &lt;a href="https://dev.to/blogs/github-actions-demystified/"&gt;GitHub Actions&lt;/a&gt;. Moreover, we'll cover the process of temporarily pausing these updates to ensure application stability during incidents or any unforeseen issues.&lt;/p&gt;

&lt;p&gt;So, let’s deep dive into automatic image updates to Git using Flux and GitHub Actions!&lt;/p&gt;

&lt;h2&gt;
  
  
  GitOps: A modern approach to software delivery
&lt;/h2&gt;

&lt;p&gt;In the world of application management, GitOps fills a crucial need by simplifying and enhancing how we handle applications. But why do we need it? Managing applications can be complex and prone to errors. To address this, GitOps establishes Git as the central hub for handling configuration and infrastructure changes, allowing for streamlined, efficient, and reliable management. It speeds up deployment, ensures consistent and error-free setups, and reduces operational costs.&lt;/p&gt;

&lt;p&gt;Moreover, it's important to note that GitOps isn't just about managing applications; it also intersects with the concept of auto-image updates. These updates involve automatically refreshing the underlying components of your application by regularly updating to the latest images pushed to the container registry, ensuring your application stays in sync with the most up-to-date software and dependencies.&lt;/p&gt;

&lt;p&gt;GitOps streamlines software deployment by automating build, test, and deployment processes triggered by Git commits. This enhances reliability through a review and approval system, reducing errors and downtime. Collaboration is promoted through Git's features for change tracking and code reviews, with seamless integration of image updates. &lt;/p&gt;

&lt;h3&gt;
  
  
  Current pain points of dealing with image updates without GitOps
&lt;/h3&gt;

&lt;p&gt;Here are several challenges you may face when dealing with image updates in the absence of GitOps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Manual updates to manifest&lt;/strong&gt;: When a new image is released, the manifest must be manually updated to reference the new image tag. This can be error-prone and time-consuming, especially for large and complex applications.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Delayed deployments&lt;/strong&gt;: If the manifest is not updated promptly, the deployment of the new image will be delayed. This can impact the time to market for new features or bug fixes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Inconsistent deployments&lt;/strong&gt;: If the manifest is not updated consistently across all environments, it can lead to inconsistent deployments. This can make it difficult to troubleshoot problems and ensure that applications are running as expected in all environments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Manual build triggers&lt;/strong&gt;: When a new image is released, a new build must be manually triggered. This can be inefficient and time-consuming, especially for large and complex applications.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;GitOps using auto-image updates can help to solve all of these pain points by providing a declarative and automated approach to application deployment and management. With GitOps, the manifest is automatically updated when a new image is released, and a new build is automatically triggered. This ensures that deployments are consistent and timely across all environments.&lt;/p&gt;

&lt;h3&gt;
  
  
  Benefits of automatic image updates using GitOps
&lt;/h3&gt;

&lt;p&gt;Here are several advantages of using GitOps for image updates:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Saves time and effort&lt;/strong&gt;: GitOps streamlines image updates by automating the process of updating images, tags, and manifests. This eliminates the need for manual intervention, reducing the risk of errors and ensuring a quick and reliable deployment of the latest application version across all environments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Promotes consistency across environments&lt;/strong&gt;: Ensuring consistent application versions across various environments becomes seamless with automated image updates. This practice simplifies application management, reducing the risk of inconsistencies between environments. Consequently, maintenance and troubleshooting become more straightforward.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fosters collaboration and transparency&lt;/strong&gt;: Encouraging teamwork and transparency, GitOps facilitates automatic image updates for development teams. Leveraging Git commits for all changes simplifies change tracking, code reviews, and maintains visibility in the software delivery process. This promotes efficient collaboration among developers and cultivates a culture of continuous improvement.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Improves application delivery&lt;/strong&gt;: Leveraging GitOps for automatic image updates speeds up application delivery and boosts reliability. Streamlining infrastructure and application management enables developers to focus more efficiently on delivering customer value.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Approaches to automating image updates
&lt;/h2&gt;

&lt;p&gt;In this section, we will explore two primary methods for automating image updates to keep your applications up-to-date and secure. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Scripted approach&lt;/li&gt;
&lt;li&gt;Automated approach using GitOps tools&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Scripted approach
&lt;/h3&gt;

&lt;p&gt;A script can be configured to run on a schedule using a cron job or when certain events occur, such as when a new commit is pushed to the Git repository.&lt;/p&gt;
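&lt;p&gt;As a rough illustration, the core of such a script is usually just a tag rewrite in the manifest followed by a commit. A minimal sketch, in which the image name, manifest path, and helper name are all hypothetical:&lt;/p&gt;

```shell
# Hypothetical helper: rewrite the image tag in a manifest in place.
# A cron entry such as "*/15 * * * * /opt/bump-image.sh" could drive it.
bump_tag() {
  manifest="$1"
  new_tag="$2"   # e.g. fetched from the registry API by the caller
  sed -i.bak "s|\(image: docker.io/example/app:\).*|\1${new_tag}|" "$manifest"
}

# After bumping, the script would commit and push, e.g.:
#   git commit -am "bump image tag"; git push
```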

&lt;p&gt;Here are a few pros and cons of the scripted approach:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros of scripted approach&lt;/strong&gt;:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Flexible and customizable method&lt;/li&gt;
&lt;li&gt;Can be tailored to meet specific needs&lt;/li&gt;
&lt;li&gt;No need to rely on third-party tools&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons of scripted approach&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Complex and time-consuming to implement&lt;/li&gt;
&lt;li&gt;Requires expertise in scripting &lt;/li&gt;
&lt;li&gt;Maintaining and troubleshooting the scripts can be challenging&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Using GitOps tools for automating image updates
&lt;/h3&gt;

&lt;p&gt;In the previous section, we saw that scripting can be complex and time-consuming. That's where dedicated tools come in handy. In this section, we will explore how GitOps tools make automating image updates easier. We will specifically look at the two most popular tools, &lt;a href="https://argo-cd.readthedocs.io/en/stable/operator-manual/architecture/"&gt;Argo CD&lt;/a&gt; and Flux, to understand how they help with automatic image updates.&lt;/p&gt;

&lt;h4&gt;
  
  
  Argo CD image updater
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F32m5pvlnv5yw0h5ancyj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F32m5pvlnv5yw0h5ancyj.png" alt="Image description" width="800" height="518"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt;(Image update process using Argo CD Image Updater)&lt;/center&gt;

&lt;p&gt;The Argo CD image updater is a tool that can help you update your Kubernetes workloads to the latest versions of their container images. It does this by setting appropriate application parameters for your &lt;a href="https://dev.to/argo-cd-consulting-support/"&gt;Argo CD&lt;/a&gt; applications.&lt;/p&gt;

&lt;p&gt;To use the image updater, you annotate your Argo CD application resources with a &lt;a href="https://argocd-image-updater.readthedocs.io/en/stable/basics/update/"&gt;list of images to be considered for update&lt;/a&gt;, along with a version constraint to restrict the maximum allowed new version for each image. The image updater will then periodically check for new versions of the images and update them if necessary.&lt;/p&gt;

&lt;p&gt;The image updater can be used to update images in a variety of ways, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Automatic updates&lt;/strong&gt;: The image updater can be configured to automatically &lt;a href="https://argocd-image-updater.readthedocs.io/en/stable/basics/update-methods/"&gt;update images to their latest allowed version on a schedule&lt;/a&gt;. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Manual updates&lt;/strong&gt;: The image updater can also be triggered manually to update images. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Event-driven updates&lt;/strong&gt;: The image updater can also be configured to be triggered by events, &lt;a href="https://argocd-image-updater.readthedocs.io/en/stable/basics/update-methods/"&gt;such as when a new image tag is pushed to a registry&lt;/a&gt;. &lt;/li&gt;
&lt;/ul&gt;
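&lt;p&gt;Concretely, the image list is declared as an annotation on the Argo CD &lt;code&gt;Application&lt;/code&gt; resource. A sketch written from the shell, where the application name, image name, and version constraint are placeholders:&lt;/p&gt;

```shell
# Generate a placeholder Application carrying an image-updater annotation.
printf '%s\n' \
  'apiVersion: argoproj.io/v1alpha1' \
  'kind: Application' \
  'metadata:' \
  '  name: my-app' \
  '  annotations:' \
  '    argocd-image-updater.argoproj.io/image-list: app=docker.io/example/app:~1.0' \
  > app-annotation.yaml
```

&lt;p&gt;Here &lt;code&gt;~1.0&lt;/code&gt; is a version constraint restricting updates to patch releases of the 1.0 line.&lt;/p&gt;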

&lt;h4&gt;
  
  
  Image automation and reflector controllers in Flux
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy5gfc6mcarb9fx2yejsj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy5gfc6mcarb9fx2yejsj.png" alt="Image description" width="800" height="207"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt;(Image update process using Image Automation and Image Reflector Controller)&lt;/center&gt;

&lt;p&gt;Image automation in Flux can be used to automate the following tasks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Updating image tags in Git&lt;/strong&gt;: Image automation can automatically update the image tags in your Git repository to the latest stable versions of your images. This can save you time and reduce the risk of errors.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Creating and managing image promotion pipelines&lt;/strong&gt;: Image automation can be used to create and manage image promotion pipelines. This allows you to automate the process of rolling out new image versions to production in a safe and controlled manner.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Implementing image policies&lt;/strong&gt;: Image automation can be used to implement image policies, such as requiring all images to be tagged with a specific version number or only using images from a specific registry.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Image automation controllers&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;In Flux CD, the image automation controllers, which include the &lt;code&gt;Image Reflector Controller&lt;/code&gt; and &lt;code&gt;Image Automation Controller&lt;/code&gt;, are responsible for keeping the image metadata in Kubernetes in sync with the latest image metadata from the registry. The image reflector controller achieves this by regularly scanning the registry for changes to image metadata. When a change is detected, the controllers update the Kubernetes resources that reference the affected images accordingly.&lt;/p&gt;

&lt;p&gt;Moreover, the image automation controllers are capable of triggering image updates based on changes to the image metadata. For instance, you can configure automation to automatically update the image tags in your Kubernetes deployments to the latest stable versions of your images.&lt;/p&gt;

&lt;p&gt;For more details about the implementation and design of the image automation controllers in Flux CD, refer to the official documentation on &lt;a href="https://fluxcd.io/flux/components/image/"&gt;image reflector and automation Controllers&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Pros and cons of using GitOps approach for automating image updates
&lt;/h4&gt;

&lt;p&gt;Let us explore the pros and cons of &lt;a href="https://dev.to/gitops-consulting/"&gt;employing GitOps&lt;/a&gt; for automating image updates, gaining insights into its efficiency and potential challenges.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros of GitOps approach&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Reducing human error&lt;/strong&gt;: Both Argo CD and Flux CD offerings for image update aim to reduce the risk of human error by automating the process of updating container images. This helps in maintaining consistency and accuracy.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Workflow efficiency&lt;/strong&gt;: Automation in image updates improves workflow efficiency by saving time and allowing teams to focus on more critical tasks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automatic updates&lt;/strong&gt;: GitOps tools, including both Argo CD and Flux CD, provide a mechanism for automatic updates, which enhances the security, reliability, and efficiency of Kubernetes workloads.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flexibility&lt;/strong&gt;: Both tools offer flexibility in how image updates can be handled, supporting automatic updates, manual updates, and event-driven updates, allowing users to choose the best approach for their specific requirements.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons of GitOps approach&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Complex setup&lt;/strong&gt;: Setting up and configuring GitOps tools for image automation, especially for users unfamiliar with the underlying technologies like Kubernetes or Flux, can be complex.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Troubleshooting complexity&lt;/strong&gt;: The complexity of interactions with various components (e.g., Kubernetes, registries, Git repositories) can make troubleshooting challenging if issues arise during the image update process.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dependency on GitOps tool&lt;/strong&gt;: Both Argo CD and Flux CD are limited to updating container images for applications managed by their respective tools. This dependency might restrict users who prefer or have other GitOps tools in their workflows.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Image pull secrets location&lt;/strong&gt;: GitOps tools often require image pull secrets to be present in the same cluster where the tool is running, which might pose challenges for scenarios where secrets need to be fetched from other clusters.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Limited PR support&lt;/strong&gt;: Neither Argo CD nor Flux CD, as described, fully supports the creation of pull requests (PRs). External CI tools may be required for more comprehensive integration with version control systems.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Production readiness&lt;/strong&gt;: When considering GitOps tools for image automation in production, it's crucial to note that Flux CD is deemed "production-ready," while Argo CD's image updater lacks the same designation. This underscores the necessity of carefully evaluating the maturity and stability of GitOps tools to align with specific production environment requirements.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How do the image automation and image reflector controllers in Flux work
&lt;/h2&gt;

&lt;p&gt;Let's understand how Flux employs two controllers, the image automation controller and the image reflector controller, to achieve image update automation.&lt;/p&gt;

&lt;p&gt;Together, these controllers update a Git repository whenever new container image tags become available. &lt;/p&gt;

&lt;h3&gt;
  
  
  Image reflector controller
&lt;/h3&gt;

&lt;p&gt;It works with two custom resources: &lt;a href="https://fluxcd.io/flux/components/image/imagerepositories/"&gt;Image Repository&lt;/a&gt; and &lt;a href="https://fluxcd.io/flux/components/image/imagepolicies/"&gt;Image Policy&lt;/a&gt;. Let us understand how they operate together.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Image repository&lt;/strong&gt;: This component scans the container image repository and retrieves image metadata, including tags and versions. You can create an &lt;code&gt;Image Repository&lt;/code&gt; resource and configure the scanning interval.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Image policy&lt;/strong&gt;: This custom resource informs Flux which semantic versioning range to apply when filtering image tags.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Image automation controller
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://fluxcd.io/flux/components/image/imageupdateautomations/"&gt;image automation controller&lt;/a&gt; includes a custom resource called &lt;code&gt;Image Update Automation&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;Image Update Automation&lt;/code&gt; resource clones the Git repository, updates YAML files based on the latest images reported by the image reflector controller, and commits the changes to the specified Git repository. &lt;br&gt;
After cloning, the image automation controller identifies and updates the deployment YAML manifest, using comments to mark the fields to update. The automation process checks the image policy named in the comment and updates the field value accordingly.&lt;/p&gt;
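&lt;p&gt;The marker is a small JSON comment placed next to the image field. A sketch of how such a line looks in a deployment manifest, where the policy reference &lt;code&gt;flux-system:helloenv-staging&lt;/code&gt; and the tag value are illustrative:&lt;/p&gt;

```shell
# Write a deployment fragment carrying a Flux image-policy marker comment.
printf '%s\n' \
  'spec:' \
  '  containers:' \
  '    - name: helloenv' \
  '      image: docker.io/shapai/helloenv:main-ab12cd-1700000000 # {"$imagepolicy": "flux-system:helloenv-staging"}' \
  > deployment-snippet.yaml
```

&lt;p&gt;The automation controller rewrites only the fields carrying such a marker, leaving the rest of the manifest untouched.&lt;/p&gt;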

&lt;p&gt;Once updated, the image update automation controller commits and pushes changes to the specified branch. The source controller pulls the updated manifest, and the &lt;a href="https://github.com/fluxcd/kustomize-controller"&gt;Kustomization controller&lt;/a&gt; applies the changes. This is how the Flux image automation controller streamlines the process of automating image version updates.&lt;/p&gt;
&lt;h2&gt;
  
  
  Automating container image updates with Flux image automation and GitHub Actions
&lt;/h2&gt;

&lt;p&gt;We will explore a live demonstration of a Flux GitOps configuration that seamlessly manages a workflow from staging to production, showcasing the automatic deployment process when making changes to source code. Additionally, we'll highlight its integration with GitHub Actions to enhance the efficiency of the entire deployment pipeline.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj6me7lsinhyz6cvz90ty.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj6me7lsinhyz6cvz90ty.png" alt="Image description" width="800" height="487"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In staging, developers make changes to application code, triggering a new build. The new image tag is then propagated to the Kubernetes manifest and finally deployed to the staging environment without requiring any approval in between.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb5bcgqbnwthcfu3zq0qc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb5bcgqbnwthcfu3zq0qc.png" alt="Image description" width="800" height="487"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the case of production, a new PR is raised with an updated Kubernetes manifest, requiring approval for merge in the config repository. Only after approval is given does it get deployed to the production environment.&lt;/p&gt;
&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;Ensure you have the following set up:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Flux CLI&lt;/strong&gt; - which can be downloaded from the &lt;a href="https://fluxcd.io/docs/cmd/"&gt;official docs&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kubernetes cluster&lt;/strong&gt; - for this demo, we will use minikube.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub account&lt;/strong&gt; - with GitHub Actions enabled.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;
  
  
  Environment setup details
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Hello Env Flask application&lt;/strong&gt;: A simple application displaying the environment name and release number.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Two environments&lt;/strong&gt;: Staging and production, housed in separate namespaces.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Continuous deployment&lt;/strong&gt;: Source code changes trigger automated build and deployment to the staging environment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Release tagging&lt;/strong&gt;: Tagging a release initiates automatic build processes and creates a pull request for production deployment.&lt;/li&gt;
&lt;/ol&gt;
&lt;h4&gt;
  
  
  Git repositories
&lt;/h4&gt;

&lt;p&gt;Let's take a look at two Git repositories we will be using.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Application repository&lt;/strong&gt; - &lt;a href="https://github.com/infracloudio/flux-helloenv-app"&gt;https://github.com/infracloudio/flux-helloenv-app&lt;/a&gt; - application code and Kustomize configuration.

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Kustomization for environment separation&lt;/strong&gt;: Utilizing Kustomize to segregate staging and production environments. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Structured repository&lt;/strong&gt;: The repository combines source code and Kubernetes manifests for a unified demo setup. There are many possible ways to &lt;a href="https://fluxcd.io/flux/guides/repository-structure/"&gt;structure your git repositories.&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CI and automation&lt;/strong&gt;: Leveraging GitHub Actions for &lt;a href="https://dev.to/ci-cd-consulting/"&gt;continuous integration&lt;/a&gt; and automation.&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Management Repository&lt;/strong&gt;-  &lt;a href="https://github.com/infracloudio/flux-gitops-helloenv"&gt;https://github.com/infracloudio/flux-gitops-helloenv&lt;/a&gt; - GitOps manifests.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;
  
  
  Flux bootstrap configuration
&lt;/h3&gt;

&lt;p&gt;Beginning with an empty cluster, our initial task is to &lt;a href="https://fluxcd.io/flux/cmd/flux_bootstrap/"&gt;bootstrap Flux itself&lt;/a&gt;. Flux serves as the foundation upon which we'll bootstrap all other components.&lt;/p&gt;
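&lt;p&gt;For a personal GitHub repository, the bootstrap invocation looks roughly like the script below. The owner, repository name, and cluster path are placeholders, and a &lt;code&gt;GITHUB_TOKEN&lt;/code&gt; environment variable must be exported before running it:&lt;/p&gt;

```shell
# Sketch: write a bootstrap script; owner, repo, and path are placeholders.
printf '%s\n' \
  '#!/usr/bin/env sh' \
  'flux bootstrap github \' \
  '  --owner="$GITHUB_USER" \' \
  '  --repository=flux-gitops-helloenv \' \
  '  --branch=main \' \
  '  --path=./clusters/my-cluster \' \
  '  --personal' \
  > bootstrap.sh
chmod +x bootstrap.sh
```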

&lt;p&gt;In the following sections, we'll take a detailed look at each file within the apps directory, one by one.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;├── apps
│   ├── git-repo.yaml
│   ├── image-auto-prod.yaml
│   ├── image-auto-staging.yaml
│   ├── image-policy-prod.yaml
│   ├── image-policy-staging.yaml
│   ├── image-repo.yaml
│   ├── kustomization-prod.yaml
│   └── kustomization-staging.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  git-repo.yaml
&lt;/h4&gt;

&lt;p&gt;This instructs Flux on how to interact with the Git repository where your application's source code resides. It contains the following configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="na"&gt;ref&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;main&lt;/span&gt;
  &lt;span class="na"&gt;secretRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ssh-credentials&lt;/span&gt;
  &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ssh://git@github.com/infracloudio/flux-helloenv-app&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;ref:&lt;/code&gt; Specifies the Git branch to watch for changes, in this case, &lt;code&gt;main&lt;/code&gt;.&lt;br&gt;&lt;br&gt;
&lt;code&gt;secretRef:&lt;/code&gt; Refers to a Kubernetes Secret named &lt;code&gt;ssh-credentials&lt;/code&gt;, which contains the SSH keys for secure Git access.&lt;br&gt;&lt;br&gt;
The &lt;code&gt;ssh-credentials&lt;/code&gt; Secret needs to be created beforehand.  &lt;/p&gt;

&lt;p&gt;&lt;code&gt;url:&lt;/code&gt; Indicates the URL of the Git repository. Change this to &lt;code&gt;url: ssh://git@github.com/&amp;lt;your_github_username&amp;gt;/flux-helloenv-app&lt;/code&gt; once you fork it.  &lt;/p&gt;
&lt;h4&gt;
  
  
  image-repo.yaml
&lt;/h4&gt;

&lt;p&gt;This file is responsible for scanning the Docker image registry and fetching image tags based on the defined policy. Here's the configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker.io/shapai/helloenv&lt;/span&gt;
&lt;span class="na"&gt;interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1m0s&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;image:&lt;/code&gt; Specifies the Docker image repository (e.g. &lt;code&gt;docker.io/shapai/helloenv&lt;/code&gt;) to scan for image tags. Change this to your own Docker image repository, i.e. the same registry your forked CI file builds and pushes images to.&lt;br&gt;&lt;br&gt;
&lt;code&gt;interval:&lt;/code&gt; Sets the interval at which Flux will scan the image repository (every 1 minute in this case) and fetch image tags according to the defined policy.  &lt;/p&gt;
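&lt;p&gt;The fragment above is the &lt;code&gt;spec&lt;/code&gt; of an &lt;code&gt;ImageRepository&lt;/code&gt; custom resource. Fleshed out, it looks roughly like this; the API version and &lt;code&gt;flux-system&lt;/code&gt; namespace shown here are typical defaults rather than values taken from the demo repository:&lt;/p&gt;

```shell
# Generate a full ImageRepository manifest around the spec shown above.
printf '%s\n' \
  'apiVersion: image.toolkit.fluxcd.io/v1beta2' \
  'kind: ImageRepository' \
  'metadata:' \
  '  name: helloenv' \
  '  namespace: flux-system' \
  'spec:' \
  '  image: docker.io/shapai/helloenv' \
  '  interval: 1m0s' \
  > image-repo-full.yaml
```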

&lt;p&gt;Next, we will go through the image policy files &lt;code&gt;image-policy-staging.yaml&lt;/code&gt; and &lt;code&gt;image-policy-prod.yaml&lt;/code&gt;.  &lt;/p&gt;
&lt;h4&gt;
  
  
  image-policy-staging.yaml
&lt;/h4&gt;

&lt;p&gt;This file defines the image tagging policy for the staging environment. Here's the configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="na"&gt;filterTags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;extract&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$ts&lt;/span&gt;
    &lt;span class="na"&gt;pattern&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;^main-[a-f0-9]+-(?P&amp;lt;ts&amp;gt;[0-9]+)&lt;/span&gt;
  &lt;span class="na"&gt;imageRepositoryRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;helloenv&lt;/span&gt;
  &lt;span class="na"&gt;policy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;numerical&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;order&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;asc&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;filterTags:&lt;/code&gt; This section specifies how to filter image tags. It extracts the timestamp (the &lt;code&gt;ts&lt;/code&gt; capture group) from tags that match the pattern; the numerical policy then sorts the matching tags by that timestamp in ascending order, so Flux picks the most recently built image for the staging environment.  &lt;/p&gt;
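&lt;p&gt;To see what this policy does, here is a small shell sketch (the sample tags are made up): the pattern keeps only staging-style tags, and numeric ordering on the trailing timestamp picks the newest build.&lt;/p&gt;

```shell
# Made-up tags: two staging builds plus a SemVer tag that the pattern rejects.
tags='main-ab12cd34-1696668188
main-ff00aa11-1696668200
1.0.2'

# Keep tags matching the staging pattern, then sort numerically by the
# trailing timestamp field (ascending), as the numerical policy does.
sorted=$(printf '%s\n' "$tags" | grep -E '^main-[a-f0-9]+-[0-9]+$' | sort -t- -k3 -n)
latest=$(printf '%s\n' "$sorted" | tail -n1)
echo "$latest"
```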

&lt;h4&gt;
  
  
  image-policy-prod.yaml
&lt;/h4&gt;

&lt;p&gt;This file defines the image tagging policy for the production environment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="na"&gt;imageRepositoryRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;helloenv&lt;/span&gt;
  &lt;span class="na"&gt;policy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;semver&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;range&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;&amp;gt;=1.0.0'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;imageRepositoryRef:&lt;/code&gt; Refers to the image repository named helloenv.&lt;br&gt;&lt;br&gt;
&lt;code&gt;policy:&lt;/code&gt; Defines a Semantic Versioning (SemVer) policy that specifies a range for acceptable image tags (in this case, any version greater than or equal to 1.0.0).&lt;br&gt;&lt;br&gt;
These image policies are crucial in ensuring that Flux deploys the correct images to the respective environments (staging and production) based on the defined criteria.  &lt;/p&gt;
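&lt;p&gt;As a quick illustration of how such a range behaves (using GNU sort's version ordering as a rough stand-in for SemVer precedence on simple MAJOR.MINOR.PATCH tags):&lt;/p&gt;

```shell
# All three versions satisfy the range '>=1.0.0' minus the comparison
# characters; SemVer precedence then picks the highest one.
latest=$(printf '1.0.0\n1.0.1\n1.0.2\n' | sort -V | tail -n1)
echo "$latest"
```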
&lt;h4&gt;
  
  
  kustomization-prod.yaml and kustomization-staging.yaml
&lt;/h4&gt;

&lt;p&gt;These YAML files define Kustomization resources for managing Kubernetes resources in both staging and production environments within the flux-system namespace. They are configured to synchronize with a specified Git repository, allowing for automated deployment and management of Kubernetes resources.&lt;/p&gt;

&lt;p&gt;Now, let's go through the &lt;strong&gt;ImageUpdateAutomation&lt;/strong&gt; files.&lt;/p&gt;
&lt;h4&gt;
  
  
  helloenv-staging.yaml
&lt;/h4&gt;

&lt;p&gt;This YAML file configures an ImageUpdateAutomation object to update the image tags in the &lt;code&gt;./demo/kustomize/staging&lt;/code&gt; directory of the flux-helloenv-app Git repository. It will scan the repo every 1 minute (specified by the interval field).   &lt;/p&gt;

&lt;p&gt;The &lt;code&gt;git&lt;/code&gt; section specifies the branch to check out and the commit message template. The &lt;code&gt;sourceRef&lt;/code&gt; section specifies the Git repository containing the Kubernetes manifests to update.  &lt;/p&gt;

&lt;p&gt;The &lt;code&gt;update&lt;/code&gt; section specifies the path to the Kubernetes manifests to update and the strategy to use.&lt;/p&gt;

&lt;p&gt;When this &lt;em&gt;ImageUpdateAutomation&lt;/em&gt; object is deployed, Flux will periodically check for new image updates. If it finds any new updates, it will update the image tags in the Kubernetes manifests and commit the changes to the Git repository.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;git&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;checkout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;ref&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;branch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;main&lt;/span&gt;
    &lt;span class="na"&gt;commit&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;author&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;email&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluxbot@users.noreply.github.com&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluxbot&lt;/span&gt;
      &lt;span class="na"&gt;messageTemplate&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;{{range&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;.Updated.Images}}{{println&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;.}}{{end}}'&lt;/span&gt;
  &lt;span class="na"&gt;interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1m0s&lt;/span&gt;
  &lt;span class="na"&gt;sourceRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;GitRepository&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;flux-helloenv-app&lt;/span&gt;
  &lt;span class="na"&gt;update&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./demo/kustomize/staging&lt;/span&gt;
    &lt;span class="na"&gt;strategy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Setters&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
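&lt;p&gt;The &lt;code&gt;Setters&lt;/code&gt; strategy only rewrites fields that carry an inline setter marker naming the ImagePolicy. A sketch of what such a marker looks like in the staging kustomization (illustrative; the exact file contents in the repo may differ):&lt;/p&gt;

```yaml
# Illustrative setter marker: Flux's Setters strategy updates only fields
# annotated with a {"$imagepolicy": "namespace:policy[:tag]"} comment.
images:
  - name: docker.io/shapai/helloenv
    newTag: main-b470560d-1696668188 # {"$imagepolicy": "flux-system:helloenv-staging:tag"}
```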



&lt;h4&gt;
  
  
  helloenv-prod.yaml
&lt;/h4&gt;

&lt;p&gt;This YAML file is very similar to the previous one but adds a &lt;code&gt;push&lt;/code&gt; configuration: after updating the image tags, Flux will commit and push the changes to the flux-image-update branch. This is deliberate for the prod setup, since we do not push directly to the main branch for prod manifests.  &lt;/p&gt;

&lt;p&gt;This is useful in our case, as it lets us create a PR from the flux-image-update branch to the main branch, which then goes through manual approval. The PR creation is handled by the CI workflow in GitHub Actions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;image.toolkit.fluxcd.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ImageUpdateAutomation&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;helloenv-prod&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;flux-system&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;git&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;checkout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;ref&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;branch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;main&lt;/span&gt;
    &lt;span class="na"&gt;commit&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;author&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;email&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluxbot@users.noreply.github.com&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluxbot&lt;/span&gt;
      &lt;span class="na"&gt;messageTemplate&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;{{range&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;.Updated.Images}}{{println&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;.}}{{end}}'&lt;/span&gt;
    &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;branch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;flux-image-update&lt;/span&gt;
  &lt;span class="na"&gt;interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1m0s&lt;/span&gt;
  &lt;span class="na"&gt;sourceRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;GitRepository&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;flux-helloenv-app&lt;/span&gt;
  &lt;span class="na"&gt;update&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./demo/kustomize/prod&lt;/span&gt;
    &lt;span class="na"&gt;strategy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Setters&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Setting up Flux
&lt;/h3&gt;

&lt;p&gt;Now that we've seen all the configurations, we can proceed to the bootstrap command.&lt;/p&gt;
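&lt;p&gt;Before running it, export your GitHub credentials; &lt;code&gt;flux bootstrap github&lt;/code&gt; reads a personal access token from the &lt;code&gt;GITHUB_TOKEN&lt;/code&gt; environment variable. The values below are placeholders:&lt;/p&gt;

```shell
# Placeholders: substitute your own username and a PAT with repo scope.
export GITHUB_USER=your_github_username
export GITHUB_TOKEN=your_personal_access_token
```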

&lt;p&gt;Note the &lt;code&gt;--owner&lt;/code&gt; and &lt;code&gt;--repository&lt;/code&gt; switches here: we are explicitly looking for the &lt;code&gt;${GITHUB_USER}/flux-gitops-helloenv&lt;/code&gt; repo. Make sure you fork both repos under your user before following along.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;flux bootstrap github &lt;span class="se"&gt;\ &lt;/span&gt;
  &lt;span class="nt"&gt;--components-extra&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;image-reflector-controller,image-automation-controller &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--owner&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$GITHUB_USER&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--repository&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;flux-gitops-helloenv &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;./clusters/my-cluster/ &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--branch&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;main &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--read-write-key&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--personal&lt;/span&gt; &lt;span class="nt"&gt;--private&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;false&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Understanding &lt;code&gt;flux bootstrap&lt;/code&gt; command
&lt;/h4&gt;

&lt;p&gt;Let's understand what the above command does.&lt;/p&gt;

&lt;p&gt;To enable the Flux image automation feature, the extra components can be specified with the &lt;a href="https://fluxcd.io/flux/installation/configuration/optional-components/"&gt;&lt;code&gt;--components-extra&lt;/code&gt; flag&lt;/a&gt;. We are enabling the image reflector and automation controllers.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;--path=./clusters/my-cluster/&lt;/code&gt; specifies the path within the repository where the configuration files will be stored.  &lt;/p&gt;

&lt;p&gt;&lt;code&gt;my-cluster&lt;/code&gt; folder in the repository contains two files &lt;code&gt;infrastructure.yaml&lt;/code&gt; and &lt;code&gt;apps.yaml&lt;/code&gt;.  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;infrastructure.yaml&lt;/code&gt; defines a Kustomization resource called ingress-nginx. This Kustomization lives in the flux-system namespace, has no dependencies, and points at the kustomize files under &lt;code&gt;infrastructure/ingress-nginx&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;apps.yaml&lt;/code&gt;, which defines the &lt;code&gt;flux-helloenv-app&lt;/code&gt; application itself. Within this file, you'll find references to the apps directory. This directory is a central location containing Kustomizations and configuration settings for setting up the &lt;code&gt;flux-helloenv-app&lt;/code&gt; application in both the prod and staging environments. Additionally, it includes configurations for image automation for &lt;code&gt;helloenv&lt;/code&gt; app in staging and prod env.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
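&lt;p&gt;Based on that description, &lt;code&gt;infrastructure.yaml&lt;/code&gt; is roughly the following Kustomization. The interval, prune, and sourceRef values are assumptions for illustration, not copied from the repo:&lt;/p&gt;

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: ingress-nginx
  namespace: flux-system
spec:
  interval: 10m0s          # assumed reconcile interval
  path: ./infrastructure/ingress-nginx
  prune: true              # assumed; removes resources deleted from Git
  sourceRef:
    kind: GitRepository
    name: flux-system      # the repo created by flux bootstrap
```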

&lt;p&gt;The process of bootstrapping everything may take some time. To monitor the progress and ensure everything is proceeding as expected, we can utilize the &lt;code&gt;--watch&lt;/code&gt; switch.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;flux get kustomizations &lt;span class="nt"&gt;--watch&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify that all Flux pods are in the Running state with &lt;code&gt;kubectl get pods&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; flux-system get pods

NAME                                           READY   STATUS    RESTARTS   AGE
image-automation-controller-6c4fb698d4-zrp78   1/1     Running   0          29s
image-reflector-controller-5dfa39212d-hnnvj    1/1     Running   0          29s
kustomize-controller-424f5ab2a2-u2hwb          1/1     Running   0          29s
source-controller-2wc41z892-axkr1              1/1     Running   0          29s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since our flux-helloenv-app repository is public, the application will be deployed as part of the bootstrap step. You can check both the staging and prod environments with the following commands.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl rollout status &lt;span class="nt"&gt;-n&lt;/span&gt; helloenv-staging deployments
watch kubectl get pods &lt;span class="nt"&gt;-n&lt;/span&gt; helloenv-staging

kubectl rollout status &lt;span class="nt"&gt;-n&lt;/span&gt; helloenv-prod deployments
watch kubectl get pods &lt;span class="nt"&gt;-n&lt;/span&gt; helloenv-prod
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Setup Git authentication for Flux
&lt;/h4&gt;

&lt;p&gt;After bootstrapping Flux, we will grant it write access to our GitHub repositories. This allows Flux to update image tags in manifests, create pull requests, and so on.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://fluxcd.io/flux/cmd/flux_create_secret_git/"&gt;&lt;code&gt;flux create secret git&lt;/code&gt;&lt;/a&gt; command creates an SSH key pair for the specified host and puts it into a named Kubernetes secret in Flux's management namespace (by default flux-system). The command also outputs the public key, which should be added to the forked repo's "Deploy keys" in GitHub.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;GITHUB_USER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;your_github_username&amp;gt;
flux create secret git ssh-credentials &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ssh://git@github.com/&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;GITHUB_USER&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;/flux-helloenv-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you need to retrieve the public key later, you can extract it from the secret as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get secret ssh-credentials &lt;span class="nt"&gt;-n&lt;/span&gt; flux-system &lt;span class="nt"&gt;-ojson&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.data."identity.pub"'&lt;/span&gt; | &lt;span class="nb"&gt;base64&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use the public key as a Deploy key in your fork of the flux-helloenv-app repo. Browse to the following URL, replacing &lt;code&gt;&amp;lt;your_github_username&amp;gt;&lt;/code&gt; with your GitHub username: &lt;code&gt;https://github.com/&amp;lt;your_github_username&amp;gt;/flux-helloenv-app/settings/keys&lt;/code&gt;.&lt;br&gt;&lt;br&gt;
The page will appear as follows.     &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwq6lhb5xeob63ylt0vzl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwq6lhb5xeob63ylt0vzl.png" alt="Image description" width="800" height="397"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click "Add deploy key" and paste the key data (starts with &lt;code&gt;ssh-&amp;lt;alg&amp;gt;...&lt;/code&gt; ) into the contents. The name is arbitrary, we use flux-helloenv-app-secret here.&lt;/p&gt;
&lt;h3&gt;
  
  
  Accessing the application
&lt;/h3&gt;

&lt;p&gt;The default image version can be checked by accessing our application with the &lt;code&gt;curl&lt;/code&gt; command. If you are using minikube, make sure you enable the ingress addon so that ingress works.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;minikube addons &lt;span class="nb"&gt;enable &lt;/span&gt;ingress
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you are running on a local cluster, you will have to add the minikube IP to your &lt;code&gt;/etc/hosts&lt;/code&gt; file; &lt;code&gt;minikube ip&lt;/code&gt; prints the IP of your cluster.  &lt;/p&gt;

&lt;p&gt;For example, the entry in &lt;code&gt;/etc/hosts&lt;/code&gt; file will look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;192.168.49.2 helloenv.prod.com helloenv.stage.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can check the ingresses available for stage and prod.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get ingress --all-namespaces

NAMESPACE          NAME       CLASS    HOSTS                ADDRESS   PORTS   AGE
helloenv-prod      helloenv   &amp;lt;none&amp;gt;   helloenv.prod.com              80      9m22s
helloenv-staging   helloenv   &amp;lt;none&amp;gt;   helloenv.stage.com             80      9m21s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, you can access the application with the following commands.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl helloenv.prod.com

curl helloenv.stage.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The result of these &lt;code&gt;curl&lt;/code&gt; commands will show the current default image versions that the deployment is using.&lt;/p&gt;

&lt;h3&gt;
  
  
  Releasing to stage
&lt;/h3&gt;

&lt;p&gt;Now, let's make some changes to the app and release them to stage. We can make a minor change to the &lt;a href="https://github.com/infracloudio/flux-helloenv-app/blob/main/app/app.py"&gt;app.py&lt;/a&gt; file in the application code repository &lt;code&gt;flux-helloenv-app&lt;/code&gt;, for example changing the message, and push it to the main branch (in a real setup, this would of course reach main through a PR).  &lt;/p&gt;

&lt;p&gt;This change triggers the &lt;a href="https://github.com/infracloudio/flux-helloenv-app/blob/main/.github/workflows/ci.yaml"&gt;CI&lt;/a&gt; job configured on the application code repository. This job checks whether the push is a Git tag or a branch commit and creates the image tag accordingly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
  &lt;span class="s"&gt;if [[ ${{ github.event.ref }} =~ ^refs/tags/[0-9]+\.[0-9]+\.[0-9]+$ ]]; then&lt;/span&gt;
    &lt;span class="s"&gt;echo "IMAGE_ID=${{ github.ref_name }}" &amp;gt;&amp;gt; "$GITHUB_OUTPUT"&lt;/span&gt;
  &lt;span class="s"&gt;else&lt;/span&gt;
    &lt;span class="s"&gt;ts=$(date +%s)&lt;/span&gt;
    &lt;span class="s"&gt;branch=${GITHUB_REF##*/}&lt;/span&gt;
    &lt;span class="s"&gt;echo "IMAGE_ID=${branch}-${GITHUB_SHA::8}-${ts}" &amp;gt;&amp;gt; "$GITHUB_OUTPUT"&lt;/span&gt;
  &lt;span class="s"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This snippet shows how the workflow distinguishes the prod condition (a SemVer Git tag) from the stage condition (a branch commit), and how the image tag is set in each case.  &lt;/p&gt;

&lt;p&gt;Since we pushed to a branch, the stage path applies, and the image is tagged as &lt;code&gt;${branch}-${GITHUB_SHA::8}-${ts}&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;A useful tag format is &lt;code&gt;&amp;lt;branch&amp;gt;-&amp;lt;sha1&amp;gt;-&amp;lt;timestamp&amp;gt;&lt;/code&gt;.  &lt;/p&gt;

&lt;p&gt;Including the branch information with an image makes it easier to trace the source code's branch and commit associated with that image. Additionally, having the branch information and &lt;a href="https://en.wikipedia.org/wiki/Unix_time"&gt;unix time&lt;/a&gt; allows you to filter for images originating from a &lt;a href="https://fluxcd.io/flux/guides/sortable-image-tags/"&gt;specific branch when needed&lt;/a&gt;.  &lt;/p&gt;

&lt;p&gt;Once this CI stage pushes the image to the image repository, the image policy and image update automation kick in and update the staging manifests to the latest built image based on its timestamp.  &lt;/p&gt;
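&lt;p&gt;Run outside of GitHub Actions, the tagging logic above can be sketched like this. The ref, SHA, and timestamp are illustrative values, and the glob in the &lt;code&gt;case&lt;/code&gt; is a loose stand-in for the workflow's SemVer regex:&lt;/p&gt;

```shell
# Illustrative stand-ins for the GitHub Actions environment.
GITHUB_REF='refs/heads/main'
GITHUB_SHA='b470560d0123456789abcdef0123456789abcdef'
ts=1696668188   # the real workflow uses ts=$(date +%s)

case "$GITHUB_REF" in
  refs/tags/*)                              # tag push: prod image, tag is the version
    IMAGE_ID="${GITHUB_REF##*/}" ;;
  *)                                        # branch push: staging image
    branch="${GITHUB_REF##*/}"
    sha8=$(printf '%s' "$GITHUB_SHA" | cut -c1-8)
    IMAGE_ID="${branch}-${sha8}-${ts}" ;;
esac
echo "$IMAGE_ID"
```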

&lt;h4&gt;
  
  
  Check status of stage environment
&lt;/h4&gt;

&lt;p&gt;We can check the status of image automation and image policy for staging.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;flux get image all

NAME                        LAST SCAN               SUSPENDED   READY   MESSAGE                       
imagerepository/helloenv    2023-10-17T12:19:26Z    False       True    successful scan: found 9 tags   

NAME                            LATEST IMAGE                                        READY   MESSAGE                                                                                
imagepolicy/helloenv-prod       docker.io/shapai/helloenv:1.0.2                     True    Latest image tag &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="s1"&gt;'docker.io/shapai/helloenv'&lt;/span&gt; resolved to 1.0.2                    
imagepolicy/helloenv-staging    docker.io/shapai/helloenv:main-b470560d-1696668188  True    Latest image tag &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="s1"&gt;'docker.io/shapai/helloenv'&lt;/span&gt; resolved to Build-b470560d-1696668188

NAME                                    LAST RUN                SUSPENDED   READY   MESSAGE                                                      
imageupdateautomation/helloenv-prod     2023-10-17T12:18:41Z    False       True    no updates made                                                 
imageupdateautomation/helloenv-staging  2023-10-17T12:18:39Z    False       True    no updates made&lt;span class="p"&gt;;&lt;/span&gt; last commit f798e64 at 2023-10-07T09:25:32Z    
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's curl the stage domain and see the output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;curl helloenv.stage.com

This is staging environment with &lt;span class="o"&gt;(&lt;/span&gt;Version: main-b470560d-1696668188&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output shows the latest version, which you can verify against the commit ID and the timestamp with which the image was created.&lt;/p&gt;
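&lt;p&gt;Because the tag embeds the commit SHA and a Unix timestamp, you can decode it back into its parts, for example:&lt;/p&gt;

```shell
tag='main-b470560d-1696668188'
branch=$(printf '%s' "$tag" | cut -d- -f1)
sha=$(printf '%s' "$tag" | cut -d- -f2)
ts=$(printf '%s' "$tag" | cut -d- -f3)
echo "branch=$branch sha=$sha built-at=$ts"
# GNU date can turn the timestamp into a readable time:
#   date -u -d "@$ts"
```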

&lt;h3&gt;
  
  
  Releasing to production
&lt;/h3&gt;

&lt;p&gt;Now, assuming we have tested the release in stage and got the go-ahead, we are ready to release to production by tagging the release.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git tag &lt;span class="nt"&gt;-a&lt;/span&gt; 1.0.2 &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"prod modified"&lt;/span&gt;
git push &lt;span class="nt"&gt;--tags&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will, in turn, trigger the CI, which builds and pushes an image tagged with the Git tag we pushed.&lt;br&gt;
Once that image lands in the registry, the image automation in Flux will commit and push the manifest changes to the flux-image-update branch.   &lt;/p&gt;

&lt;p&gt;A workflow in the &lt;a href="https://github.com/infracloudio/flux-helloenv-app/blob/main/.github/workflows/auto-pr.yaml"&gt;flux-helloenv-app repo&lt;/a&gt; will then create a PR. This PR needs manual review and approval since it targets the prod environment. Once it is approved, the latest tag lands in the &lt;a href="https://github.com/infracloudio/flux-helloenv-app/blob/main/demo/kustomize/prod/kustomization.yaml"&gt;prod kustomization&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyjd4xxh5thx4o0hmdv73.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyjd4xxh5thx4o0hmdv73.png" alt="Image description" width="800" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A curl to the prod DNS name should now report the latest tag as the version.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;curl helloenv.prod.com

This is production environment &lt;span class="o"&gt;(&lt;/span&gt;Version: 1.0.2&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Incident Management
&lt;/h2&gt;

&lt;p&gt;During an incident, you may want to halt Flux from updating images in your Git repository. You can accomplish this by suspending image automation either in-cluster or by editing the ImageUpdateAutomation manifest in Git.&lt;/p&gt;

&lt;h3&gt;
  
  
  In-cluster suspension
&lt;/h3&gt;

&lt;p&gt;You can suspend image automation directly in a cluster using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;flux &lt;span class="nb"&gt;suspend &lt;/span&gt;image update helloenv-prod
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Alternatively, you can suspend image automation by editing the ImageUpdateAutomation manifest in Git. Here's an example of the manifest:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ImageUpdateAutomation&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;helloenv-prod&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;flux-system&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;suspend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Resuming automation
&lt;/h3&gt;

&lt;p&gt;Once the incident is resolved, you can resume the automation using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;flux resume image update helloenv-prod
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Pausing automation for a specific image
&lt;/h3&gt;

&lt;p&gt;If you want to pause automation for a particular image only, you can suspend and resume image scanning for that specific image. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;flux &lt;span class="nb"&gt;suspend &lt;/span&gt;image repository helloenv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Reverting image updates
&lt;/h3&gt;

&lt;p&gt;Assuming you've configured Flux to update an application to its latest stable version and an incident occurs, you can instruct Flux to revert to a previous image version.  &lt;/p&gt;

&lt;h4&gt;
  
  
  Reverting via command
&lt;/h4&gt;

&lt;p&gt;For instance, to revert from version 1.0.1 to 1.0.0, you can use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;flux create image policy helloenv-prod &lt;span class="nt"&gt;--image-ref&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;helloenv &lt;span class="nt"&gt;--select-semver&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1.0.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Reverting via Git manifest
&lt;/h4&gt;

&lt;p&gt;You can also make this change by editing the ImagePolicy manifest in Git. Here's an example of the manifest:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ImagePolicy&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;helloenv-prod&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;flux-system&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;policy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;semver&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;range&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1.0.0&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Updating the image policy
&lt;/h3&gt;

&lt;p&gt;When a new version, e.g., 1.0.2, becomes available, you can update the policy again to consider only versions greater than 1.0.1. This can be achieved using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;flux create image policy helloenv-prod &lt;span class="nt"&gt;--image-ref&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;helloenv &lt;span class="nt"&gt;--select-semver&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&amp;gt;1.0.1"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This change prompts Flux to update the helloenv deployment manifest in Git and roll out the specified image version in-cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Flux's image automation and GitOps are powerful solutions for managing container image updates. By combining image automation and image reflector controllers, organizations can automate image version updates in their Git repositories. This not only simplifies the process but also ensures consistency and reliability in application deployment.&lt;/p&gt;

&lt;p&gt;The practical example illustrated how GitOps with Flux can streamline the workflow from staging to production, providing a structured approach to managing deployments as code changes occur. This approach enhances efficiency and reliability in the development pipeline, making it a valuable asset in modern &lt;a href="https://dev.to/blogs/successful-devops-assessment/"&gt;DevOps practices&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Thank you for taking the time to read our post. We hope you found it both informative and engaging. We highly value your feedback and would love to hear your thoughts on this topic. Let's kickstart a meaningful conversation on &lt;a href="https://in.linkedin.com/in/shashank-pai-405928a9"&gt;LinkedIn&lt;/a&gt; to exchange ideas and insights.&lt;/p&gt;

&lt;p&gt;If you're seeking assistance in crafting a robust DevOps strategy or considering outsourcing your DevOps operations to seasoned experts, we invite you to discover why numerous startups and enterprises regard us as one of the &lt;a href="https://dev.to/devops-consulting-services/"&gt;top-tier DevOps consulting and services companies&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://fluxcd.io/flux/concepts/"&gt;Flux Core Concepts&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://fluxcd.io/flux/guides/image-update/"&gt;Flux Automate image updates to Git&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://fluxcd.io/flux/guides/sortable-image-tags/"&gt;How to make sortable image tags to use with automation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kodekloud.com/courses/gitops-with-fluxcd/"&gt;GitOps with Flux CD - KodeKloud&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>gitops</category>
      <category>flux</category>
      <category>githubactions</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Unlocking Linux Host Performance Insights with Prometheus Monitoring (Part-1)</title>
      <dc:creator>shashankpai</dc:creator>
      <pubDate>Fri, 14 Jul 2023 07:19:26 +0000</pubDate>
      <link>https://forem.com/shashankpai/unlocking-linux-host-performance-insights-with-prometheus-monitoring-ka0</link>
      <guid>https://forem.com/shashankpai/unlocking-linux-host-performance-insights-with-prometheus-monitoring-ka0</guid>
      <description>&lt;h2&gt;
  
  
  Table Of Contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;Importance of monitoring Linux host metrics&lt;/li&gt;
&lt;li&gt;How can Prometheus help?&lt;/li&gt;
&lt;li&gt;What is Node Exporter and what is its role?&lt;/li&gt;
&lt;li&gt;Setting up a Prometheus server to monitor Linux hosts&lt;/li&gt;
&lt;li&gt;Install and Configure Node Exporter on the Server&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;Welcome to part 1 of our 2-part blog series on Linux host monitoring with Prometheus. This series is designed for DevOps professionals, monitoring enthusiasts, and Prometheus Certified Associate aspirants.&lt;/p&gt;

&lt;p&gt;In this series, we'll explore the importance of monitoring Linux host metrics and how Prometheus can help. We'll cover topics such as exporting metrics through Node Exporter and how Prometheus scrapes metrics.&lt;/p&gt;

&lt;p&gt;Stay tuned for part 2, where we'll dive into practical implementation, configuration, and creating dashboards with Grafana for monitoring Linux hosts with Prometheus.&lt;/p&gt;

&lt;p&gt;Get ready to elevate your DevOps and monitoring skills as we unlock the power of Linux host monitoring with Prometheus. Let's begin this journey together.&lt;/p&gt;

&lt;h1&gt;
  
  
  Importance of monitoring Linux host metrics
&lt;/h1&gt;

&lt;p&gt;Monitoring Linux host metrics is crucial for optimizing performance, planning capacity, troubleshooting issues, ensuring security, and proactively maintaining systems. By tracking metrics like CPU, memory, disk, and network usage, you can identify problems, make data-driven decisions, and ensure the smooth operation of your Linux environment.&lt;/p&gt;

&lt;h1&gt;
  
  
  How can Prometheus help?
&lt;/h1&gt;

&lt;p&gt;Prometheus is a powerful monitoring tool that simplifies the process of monitoring Linux host metrics. It collects data from various sources using lightweight exporters like &lt;strong&gt;Node Exporter&lt;/strong&gt; and stores it in a time-series database. With PromQL, you can query and analyze the collected metrics, enabling you to retrieve specific Linux host metrics and gain insights into system performance. &lt;/p&gt;
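&lt;p&gt;As a quick illustration (not part of this article's lab setup), a PromQL query like the one below computes the per-core rate of non-idle CPU time over the last five minutes, using the real &lt;code&gt;node_cpu_seconds_total&lt;/code&gt; counter exposed by Node Exporter:&lt;/p&gt;

```
rate(node_cpu_seconds_total{mode!="idle"}[5m])
```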

&lt;h1&gt;
  
  
  What is Node Exporter and what is its role?
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--z1kklKQs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/58vul3p9tktczjj60gvx.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--z1kklKQs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/58vul3p9tktczjj60gvx.gif" alt="Prometheus Server scraping metrics from Node Exporter service running on linux hosts" width="800" height="434"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Prometheus Server scraping metrics from Node Exporter service running on Linux hosts&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Linux doesn't provide native Prometheus metrics out of the box. However, Node Exporter comes to the rescue by bridging this gap. Node Exporter is an exporter specifically designed for Linux systems, serving as a reliable source for collecting Linux host metrics.&lt;/p&gt;

&lt;p&gt;Node Exporter runs as a service on the Linux host, exposing a wide range of metrics related to system resources, network, disk, CPU, memory, and more. It periodically collects these metrics from the Linux kernel and other system sources, making them available for scraping by Prometheus.&lt;/p&gt;
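&lt;p&gt;For illustration, a scrape of Node Exporter's &lt;code&gt;/metrics&lt;/code&gt; endpoint returns plain text in the Prometheus exposition format. The metric names below are real Node Exporter metrics; the sample values are made up:&lt;/p&gt;

```
# HELP node_memory_MemAvailable_bytes Memory information field MemAvailable_bytes.
# TYPE node_memory_MemAvailable_bytes gauge
node_memory_MemAvailable_bytes 2.34881024e+09
# HELP node_cpu_seconds_total Seconds the CPUs spent in each mode.
# TYPE node_cpu_seconds_total counter
node_cpu_seconds_total{cpu="0",mode="idle"} 151261.42
node_cpu_seconds_total{cpu="0",mode="user"} 1327.85
```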

&lt;p&gt;By installing and configuring Node Exporter on Linux hosts, you can unlock a wealth of Linux-specific metrics that would otherwise be inaccessible to Prometheus. This includes detailed information about CPU utilization, memory usage, disk I/O, network traffic, and other crucial indicators of system performance.&lt;/p&gt;

&lt;p&gt;Node Exporter acts as a bridge between the Linux operating system and Prometheus, ensuring that Linux host metrics can be easily monitored and analyzed using Prometheus' powerful features. It provides a standardized and reliable way to expose Linux-specific metrics, enabling seamless integration with the Prometheus ecosystem.&lt;/p&gt;

&lt;p&gt;With Node Exporter, system administrators and DevOps teams can gain valuable insights into the performance and health of their Linux hosts, allowing them to proactively optimize resource allocation, troubleshoot issues, and ensure the smooth operation of their Linux-based infrastructure.&lt;/p&gt;
&lt;h1&gt;
  
  
  Setting up a Prometheus server to monitor Linux hosts
&lt;/h1&gt;
&lt;h3&gt;
  
  
  Download and install Prometheus
&lt;/h3&gt;
&lt;h5&gt;
  
  
  1. Create the &lt;code&gt;prometheus&lt;/code&gt; user
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo useradd -M -r -s /bin/false prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  2. Create the Prometheus directories
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mkdir /etc/prometheus /var/lib/prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  3. Download the latest Prometheus binary archive
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir -p /tmp/prometheus &amp;amp;&amp;amp; cd /tmp/prometheus

curl -s https://api.github.com/repos/prometheus/prometheus/releases/latest | grep browser_download_url | grep linux-amd64 | cut -d '"' -f 4 | wget -qi -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  4. Untar the downloaded archive
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tar xvf prometheus*.tar.gz
prometheus-2.45.0.linux-amd64/
prometheus-2.45.0.linux-amd64/LICENSE
prometheus-2.45.0.linux-amd64/prometheus.yml
prometheus-2.45.0.linux-amd64/console_libraries/
prometheus-2.45.0.linux-amd64/console_libraries/prom.lib
prometheus-2.45.0.linux-amd64/console_libraries/menu.lib
prometheus-2.45.0.linux-amd64/consoles/
prometheus-2.45.0.linux-amd64/consoles/node-overview.html
prometheus-2.45.0.linux-amd64/consoles/node-cpu.html
prometheus-2.45.0.linux-amd64/consoles/index.html.example
prometheus-2.45.0.linux-amd64/consoles/node.html
prometheus-2.45.0.linux-amd64/consoles/node-disk.html
prometheus-2.45.0.linux-amd64/consoles/prometheus-overview.html
prometheus-2.45.0.linux-amd64/consoles/prometheus.html
prometheus-2.45.0.linux-amd64/promtool
prometheus-2.45.0.linux-amd64/prometheus
prometheus-2.45.0.linux-amd64/NOTICE
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  5. Move the binaries to the /usr/local/bin/ directory and set their ownership to the &lt;code&gt;prometheus&lt;/code&gt; user
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mv prometheus promtool /usr/local/bin/
sudo chown prometheus:prometheus /usr/local/bin/{prometheus,promtool}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  6. Copy the console files and default config from the extracted archive to the appropriate locations, and set ownership on these files and directories to the &lt;code&gt;prometheus&lt;/code&gt; user
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo cp -r prometheus-2.45.0.linux-amd64/{consoles,console_libraries} /etc/prometheus/

sudo cp prometheus-2.45.0.linux-amd64/prometheus.yml /etc/prometheus/prometheus.yml

sudo chown -R prometheus:prometheus /etc/prometheus
sudo chown prometheus:prometheus /var/lib/prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h5&gt;
  
  
  7. Run Prometheus in the foreground to make sure everything is set up correctly so far:
&lt;/h5&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;prometheus --config.file=/etc/prometheus/prometheus.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vl2wExhm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zm7pmpq6h8opcjupjpwa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vl2wExhm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zm7pmpq6h8opcjupjpwa.png" alt="Image description" width="800" height="260"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_rix9eqt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y9fvj2lp8jp8m7bi42sn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_rix9eqt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y9fvj2lp8jp8m7bi42sn.png" alt="Image description" width="800" height="170"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h5&gt;
  
  
  8. Configure Prometheus as a systemd Service
&lt;/h5&gt;

&lt;p&gt;Create a systemd unit file for Prometheus:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo vim /etc/systemd/system/prometheus.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Unit] Description=Prometheus Time Series Collection and Processing Server Wants=network-online.target After=network-online.target

[Service] User=prometheus Group=prometheus Type=simple ExecStart=/usr/local/bin/prometheus \ --config.file /etc/prometheus/prometheus.yml \ --storage.tsdb.path /var/lib/prometheus/ \ --web.console.templates=/etc/prometheus/consoles \ --web.console.libraries=/etc/prometheus/console_libraries 

[Install] WantedBy=multi-user.target
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  9. Make sure systemd picks up the changes we made:
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl daemon-reload
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  10. Start the Prometheus service:
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl start prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  11. Enable the Prometheus service so it will automatically start at boot:
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl enable prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  12. Verify the Prometheus service is healthy:
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl status prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We should see its state is active (running).&lt;/p&gt;

&lt;h5&gt;
  
  
  13. Make an HTTP request to Prometheus to verify it is able to respond:
&lt;/h5&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl localhost:9090
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HvBbQpZj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x8706tvu63vw38t5sokv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HvBbQpZj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x8706tvu63vw38t5sokv.png" alt="Image description" width="800" height="73"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  14. In a new browser tab, access Prometheus by navigating to http://&amp;lt;PROMETHEUS_SERVER_PUBLIC_IP&amp;gt;:9090 (replacing &amp;lt;PROMETHEUS_SERVER_PUBLIC_IP&amp;gt; with the IP listed on the lab page). We should then see the Prometheus expression browser.
&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GLFbsvey--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kr6grcr124409bo48dkb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GLFbsvey--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kr6grcr124409bo48dkb.png" alt="Image description" width="800" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Install and Configure Node Exporter on the Server
&lt;/h1&gt;

&lt;h5&gt;
  
  
  1. Create a user and group that will be used to run Node Exporter
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo useradd -M -r -s /bin/false node_exporter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  2. Get the node exporter binary
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget https://github.com/prometheus/node_exporter/releases/download/v1.6.0/node_exporter-1.6.0.linux-amd64.tar.gz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  3. Extract the Node Exporter binary:
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tar -xvzf node_exporter-1.6.0.linux-amd64.tar.gz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  4. Copy the Node Exporter binary to the appropriate location
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo cp node_exporter-1.6.0.linux-amd64/node_exporter /usr/local/bin/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  5. Set ownership on the Node Exporter binary
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
sudo chown node_exporter:node_exporter /usr/local/bin/node_exporter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  6. Create a systemd unit file for Node Exporter:
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo vim /etc/systemd/system/node_exporter.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  7. Define the Node Exporter service in the unit file
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Unit]
Description=Prometheus Node Exporter
Wants=network-online.target
After=network-online.target

[Service]
User=node_exporter
Group=node_exporter
Type=simple
ExecStart=/usr/local/bin/node_exporter

[Install]
WantedBy=multi-user.target
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  8. Make sure systemd picks up the changes we made:
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl daemon-reload
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  9. Enable the node_exporter service so it will automatically start at boot:
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl enable node_exporter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  10. Test that your Node Exporter is working by making a request to it from localhost:
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl localhost:9100/metrics
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  11. Configure Prometheus to Scrape Metrics from the LimeDrop Web Server
&lt;/h5&gt;

&lt;p&gt;Edit the Prometheus config file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo vim /etc/prometheus/prometheus.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Locate the scrape_configs section and add the following beneath it (ensuring it's indented to align with the existing job_name section):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;...

  - job_name: 'LimeDrop Web Server'
    static_configs:
    - targets: ['10.0.1.102:9100']

...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_6-VnXnL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7kkleogeud3n8io4qa9u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_6-VnXnL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7kkleogeud3n8io4qa9u.png" alt="Image description" width="800" height="465"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  12. Restart Prometheus to load the new config:
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl restart prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  13. Navigate to the Prometheus expression browser in your web browser using the public IP address of your Prometheus server:
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;PROMETHEUS_SERVER_PUBLIC_IP&amp;gt;:9090
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  14. In the expression field (the box at the top of the page), paste in the following query to verify you are able to get some metric data from the LimeDrop web server:
&lt;/h5&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;node_filesystem_avail_bytes{job="Acme Web Server"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--P--C6m6Y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0z010gsqd5p07a7j33qa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--P--C6m6Y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0z010gsqd5p07a7j33qa.png" alt="Image description" width="800" height="540"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1bUgltAp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lmitx3x1esxpquo8q84p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1bUgltAp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lmitx3x1esxpquo8q84p.png" alt="Image description" width="800" height="466"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the second part of our blog series on Linux host monitoring with Prometheus, we will delve into more advanced concepts of Node Exporter and explore how to create insightful dashboards and queries.&lt;/p&gt;

</description>
      <category>prometheus</category>
      <category>linux</category>
      <category>monitoring</category>
      <category>devops</category>
    </item>
    <item>
      <title>Demystifying Prometheus and PromQL: A Beginner's Guide to Monitoring and Querying Metrics</title>
      <dc:creator>shashankpai</dc:creator>
      <pubDate>Fri, 07 Jul 2023 09:02:03 +0000</pubDate>
      <link>https://forem.com/shashankpai/demystifying-prometheus-and-promql-a-beginners-guide-to-monitoring-and-querying-metrics-2bi3</link>
      <guid>https://forem.com/shashankpai/demystifying-prometheus-and-promql-a-beginners-guide-to-monitoring-and-querying-metrics-2bi3</guid>
      <description>&lt;p&gt;&lt;strong&gt;1. Introduction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Monitoring and alerting play a critical role in ensuring the reliability and performance of software systems. In today's complex and distributed environments, it is essential to collect, analyze, and visualize metrics to identify and resolve issues proactively. Prometheus, an open-source monitoring and alerting toolkit, offers powerful features to meet these needs effectively.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Importance of Monitoring and Alerting:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Monitoring and alerting provide several benefits for software systems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Proactive issue detection and resolution.&lt;/li&gt;
&lt;li&gt;Performance optimization and resource allocation.&lt;/li&gt;
&lt;li&gt;Capacity planning and scalability.&lt;/li&gt;
&lt;li&gt;Troubleshooting and root cause analysis.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. What is Prometheus?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lJVsh0Z4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zm6ttcskp1r0jz8yjflb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lJVsh0Z4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zm6ttcskp1r0jz8yjflb.png" alt="Image description" width="800" height="240"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3.1 Origin and History:&lt;/strong&gt;&lt;br&gt;
Prometheus was initially developed at SoundCloud in 2012 to monitor their dynamic, containerized infrastructure. It was later donated to the CNCF in 2016, gaining popularity and an active community.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3.2 Key Features and Benefits:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Time-series Data Model:&lt;/strong&gt; Prometheus stores metrics with labels and timestamps, enabling efficient storage and analysis over time.&lt;br&gt;
&lt;strong&gt;2. PromQL:&lt;/strong&gt; A flexible querying language for extracting insights and performing operations on metrics.&lt;br&gt;
&lt;strong&gt;3. Service Discovery and Dynamic Configuration:&lt;/strong&gt; Built-in support for monitoring dynamic environments like Kubernetes.&lt;br&gt;
&lt;strong&gt;4. Alerting and Notifications:&lt;/strong&gt; Define alert rules and receive notifications via various channels.&lt;br&gt;
&lt;strong&gt;5. Data Export and Integration:&lt;/strong&gt; Export metrics to external systems and integrate with tools like Grafana.&lt;br&gt;
&lt;strong&gt;6. Scalability and Performance:&lt;/strong&gt; Designed to handle large-scale deployments with real-time monitoring capabilities.&lt;br&gt;
&lt;strong&gt;7. Active Community and Ecosystem:&lt;/strong&gt; Supported by a vibrant community, ensuring ongoing development and availability of extensions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Prometheus Architecture&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JGutBwtf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/djqqfn9z82be5k9hh5o4.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JGutBwtf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/djqqfn9z82be5k9hh5o4.gif" alt="Image description" width="800" height="434"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4.1 Components of Prometheus:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prometheus Server:&lt;/strong&gt; The central component responsible for data ingestion, storage, querying, and alerting. It scrapes metrics from configured targets and stores them in a time-series database. The server exposes an HTTP API for querying and retrieving metrics.&lt;br&gt;
&lt;strong&gt;Exporters:&lt;/strong&gt; Specialized components or libraries that expose metrics from various systems in a format that Prometheus can scrape. Exporters allow Prometheus to collect metrics from different sources such as web servers, databases, operating systems, and cloud platforms.&lt;br&gt;
&lt;strong&gt;Pushgateway:&lt;/strong&gt; A tool that allows pushing metrics from short-lived or batch jobs into Prometheus. This is useful for cases where scraping metrics periodically from these jobs is not feasible, such as cron jobs or ephemeral tasks.&lt;br&gt;
&lt;strong&gt;Alertmanager:&lt;/strong&gt; Handles alerts generated by Prometheus based on predefined rules. It allows operators to define alert rules and configure notification channels. Alertmanager coordinates the sending of alert notifications through email, Slack, PagerDuty, or other supported channels.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4.2 Pull-based Model Explained:&lt;/strong&gt;&lt;br&gt;
Prometheus follows a pull-based model for data collection. The Prometheus Server periodically scrapes metrics from configured targets by making HTTP requests to their endpoints. It retrieves the metrics in the Prometheus exposition format, which includes metric names, labels, values, and timestamps. The server then stores the scraped data in its time-series database for querying and analysis.&lt;br&gt;
The pull-based model provides flexibility and resilience. Prometheus determines the frequency of scraping for each target, allowing customization based on the importance and stability of the metrics. Additionally, it handles cases where targets may have different scrape intervals or temporary unavailability without losing data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4.3 Role of Exporters and Service Discovery:&lt;/strong&gt;&lt;br&gt;
Exporters play a crucial role in the Prometheus ecosystem. They act as adapters between Prometheus and various systems, providing the capability to expose metrics in a format Prometheus can understand. Exporters can be official integrations, community-contributed projects, or custom-built solutions specific to the system being monitored. They enable Prometheus to collect metrics from a wide range of sources without requiring modifications to the source systems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4.4 Service Discovery&lt;/strong&gt;&lt;br&gt;
Service discovery is another essential aspect of Prometheus architecture. It simplifies the process of dynamically identifying and monitoring targets in a dynamic environment like Kubernetes. Prometheus supports multiple service discovery mechanisms, such as DNS-based service discovery, Kubernetes service discovery, file-based discovery, and more. These mechanisms automate the process of target discovery and ensure that Prometheus can adapt to changes in the environment, allowing for seamless monitoring of dynamic infrastructures.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Why Prometheus is worth considering&lt;/strong&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dimensional data model&lt;/li&gt;
&lt;li&gt;Powerful query language&lt;/li&gt;
&lt;li&gt;Simple architecture and efficient server&lt;/li&gt;
&lt;li&gt;Service discovery integration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;5.1 Data model&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The data model in Prometheus revolves around time series, which are sequences of data points representing the values of a specific metric over time. Each data point consists of a timestamp and a corresponding value. Time series in Prometheus are uniquely identified by metric names and a set of key-value pairs called labels.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is a time series?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--08yevH8B--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7x39x4fzmwthf7obyvec.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--08yevH8B--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7x39x4fzmwthf7obyvec.png" alt="Image description" width="800" height="313"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Identifier:&lt;/strong&gt;&lt;br&gt;
The identifier for a time series is formed by the metric name and its associated labels. It is the combination of these two elements that distinguishes one time series from another. For example, if we have a metric named cpu_usage_percent and two labels, host and region, the identifier for a specific time series would be the metric name along with the label values. These identifiers enable Prometheus to differentiate and query specific subsets of time series.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Timestamp:&lt;/strong&gt;&lt;br&gt;
The timestamp in a time series represents the point in time at which a data point was recorded. In Prometheus, timestamps are typically represented as integers, often using Unix timestamps, which are the number of seconds or milliseconds since January 1, 1970. The timestamp indicates when a specific value in the time series was measured, allowing for the analysis of time-based patterns and trends.&lt;/p&gt;
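&lt;p&gt;To make this concrete, here is a small Python sketch that converts Unix timestamps (the sample values used later in this article) into readable UTC datetimes:&lt;/p&gt;

```python
from datetime import datetime, timezone

# Sample Prometheus-style Unix timestamps (seconds since 1970-01-01 UTC)
timestamps = [1626937200, 1626938100, 1626939000]

# Convert each timestamp to a human-readable UTC datetime
readable = [datetime.fromtimestamp(ts, tz=timezone.utc) for ts in timestamps]
for ts, dt in zip(timestamps, readable):
    print(ts, "->", dt.isoformat())

# Consecutive samples are 900 seconds (15 minutes) apart,
# which would correspond to the scrape interval
interval = timestamps[1] - timestamps[0]
```

&lt;p&gt;Working in UTC avoids timezone ambiguity when comparing samples collected from hosts in different regions.&lt;/p&gt;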

&lt;p&gt;&lt;strong&gt;Values:&lt;/strong&gt;&lt;br&gt;
The values in a time series correspond to the recorded measurements or observations of the metric at specific timestamps. In Prometheus, these values are typically floating-point numbers or decimal values, representing the metric's magnitude or measurement quantity. For example, in the case of cpu_usage_percent, the values could be decimals indicating the percentage of CPU utilization.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---JwcQ40B--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ei4ror0e4z176c2qlaph.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---JwcQ40B--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ei4ror0e4z176c2qlaph.png" alt="Image description" width="800" height="288"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let's consider an example time series for the metric cpu_usage_percent with labels host="server1" and region="us-east". Here's how it might look:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;cpu_usage_percent{host="server1", region="us-east"}&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;In this example, the PromQL query selects the time series for the cpu_usage_percent metric with the label values &lt;code&gt;host="server1"&lt;/code&gt; and &lt;code&gt;region="us-east"&lt;/code&gt;. It retrieves the CPU usage data specifically for &lt;code&gt;"server1"&lt;/code&gt; in the &lt;code&gt;"us-east"&lt;/code&gt; region.&lt;br&gt;
&lt;/p&gt;




&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Output:
Timestamp: 1626937200
Value: 75.6


Timestamp: 1626938100
Value: 81.2


Timestamp: 1626939000
Value: 78.9
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, the output shows multiple data points representing the cpu_usage_percent metric for the time series with labels host="server1" and region="us-east". Each data point consists of a timestamp (e.g., 1626937200) and the corresponding value (e.g., 75.6). The timestamps represent specific points in time when the CPU usage percentage values were recorded for the given label combination.&lt;/p&gt;
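&lt;p&gt;As a quick sanity check, the sample data points above can be aggregated by hand. This Python sketch computes the mean of the values, which mirrors what PromQL's &lt;code&gt;avg_over_time()&lt;/code&gt; would return over a range covering these samples:&lt;/p&gt;

```python
# The three sample data points shown above for
# cpu_usage_percent{host="server1", region="us-east"}: (timestamp, value)
samples = [
    (1626937200, 75.6),
    (1626938100, 81.2),
    (1626939000, 78.9),
]

# Mean of the sampled values -- a by-hand avg_over_time()
values = [value for _, value in samples]
avg = sum(values) / len(values)
print(f"average CPU usage: {avg:.2f}%")
```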

&lt;p&gt;&lt;strong&gt;5.2 Querying&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PromQL&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;PromQL is a functional query language designed for time series data in Prometheus. It excels at performing computations and transformations on time series, making it great for analyzing monitoring data. Unlike SQL-style languages, PromQL focuses on time series computations rather than structured tabular data. With PromQL, you can aggregate, filter, and perform mathematical calculations on time series, enabling you to derive meaningful insights and perform advanced analysis on your monitoring data efficiently. Its intuitive syntax and functional approach make it a preferred choice for working with time series data in Prometheus.&lt;/p&gt;

&lt;p&gt;Let's try out a few PromQL queries.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;List all partitions in my infrastructure with more than 100 GB capacity that are not mounted on root&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;node_filesystem_size_bytes{mountpoint!="/"} / 1e9 &amp;gt; 100
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;node_filesystem_size_bytes:&lt;/strong&gt; This metric represents the size of the filesystem in bytes.&lt;br&gt;
&lt;strong&gt;{mountpoint!="/"}:&lt;/strong&gt; This selector filters out the root filesystem, as indicated by the mountpoint label not equal to "/".&lt;br&gt;
&lt;strong&gt;/ 1e9:&lt;/strong&gt; This division converts the size from bytes to gigabytes.&lt;br&gt;
&lt;strong&gt;&amp;gt; 100:&lt;/strong&gt; This condition filters the time series based on a capacity threshold of 100GB.&lt;br&gt;
The query selects all the time series that satisfy the condition of having a filesystem size greater than 100GB (&amp;gt; 100) and are not mounted on the root (mountpoint!="/") in your infrastructure.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Output

node_filesystem_size_bytes{mountpoint="/home"}: 150
node_filesystem_size_bytes{mountpoint="/data"}: 250
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
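&lt;p&gt;The filtering logic of this query can be sketched in plain Python. The mountpoints and sizes below are made up for illustration; only the last two survive the filter:&lt;/p&gt;

```python
# Hypothetical filesystem sizes (mountpoint -> bytes), standing in for the
# node_filesystem_size_bytes metric; the numbers are made up for illustration.
filesystems = {
    "/": 50e9,       # root, excluded by mountpoint!="/"
    "/boot": 1e9,    # below the 100 GB threshold
    "/home": 150e9,
    "/data": 250e9,
}

# Replicate the query: drop the root mount, convert bytes to GB
# with / 1e9, and keep only sizes above 100 GB.
matches = {
    mountpoint: size_bytes / 1e9
    for mountpoint, size_bytes in filesystems.items()
    if mountpoint != "/" and size_bytes / 1e9 > 100
}
print(matches)  # {'/home': 150.0, '/data': 250.0}
```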



&lt;p&gt;&lt;strong&gt;Second example:&lt;/strong&gt; to calculate the ratio of request errors across all service instances, you can use the following PromQL query:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
sum(rate(http_requests_total{status_code=~"5.."}[5m])) / sum(rate(http_requests_total[5m]))

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;http_requests_total:&lt;/strong&gt; This metric represents the total number of HTTP requests made to the service.&lt;br&gt;
&lt;strong&gt;{status_code=~"5.."}:&lt;/strong&gt; This selector filters the metric to include only requests with status codes starting with "5", indicating server errors.&lt;br&gt;
&lt;strong&gt;rate(http_requests_total{status_code=~"5.."}[5m]):&lt;/strong&gt; This calculates the per-second rate of HTTP requests with status codes indicating server errors over the past 5 minutes.&lt;br&gt;
&lt;strong&gt;rate(http_requests_total[5m]):&lt;/strong&gt; This calculates the per-second rate of all HTTP requests over the past 5 minutes.&lt;/p&gt;

&lt;p&gt;The query divides the rate of HTTP requests with status codes indicating server errors by the rate of all HTTP requests to calculate the ratio of request errors across all service instances.&lt;br&gt;
The result of this query will be a decimal value representing the ratio of request errors. For example, a value of 0.05 indicates that 5% of the requests across all service instances resulted in server errors.&lt;/p&gt;
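&lt;p&gt;The arithmetic behind this query can be sketched with made-up numbers. All counter values and instance names below are hypothetical; the sketch shows how per-instance counter increases become a fleet-wide error ratio:&lt;/p&gt;

```python
# Hypothetical counter increases over a 5-minute (300 s) window, per instance.
# These stand in for the increases that rate(...[5m]) measures.
window = 300  # seconds
error_increase = {"inst-a": 12, "inst-b": 3}      # 5xx requests per instance
total_increase = {"inst-a": 200, "inst-b": 100}   # all requests per instance

# rate() is the per-second increase; sum() adds the per-instance rates,
# and the division yields the error ratio across all instances.
error_rate = sum(error_increase.values()) / window
total_rate = sum(total_increase.values()) / window
ratio = error_rate / total_rate
print(f"error ratio: {ratio:.1%}")  # error ratio: 5.0%
```

&lt;p&gt;Because both numerator and denominator are divided by the same window, the ratio is the same as dividing the raw increases (15 / 300 = 0.05), which is why the PromQL expression reads so naturally.&lt;/p&gt;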

</description>
      <category>beginners</category>
      <category>monitoring</category>
      <category>prometheus</category>
      <category>promql</category>
    </item>
  </channel>
</rss>
