<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Christian Ameachi</title>
    <description>The latest articles on Forem by Christian Ameachi (@chris-amaechi).</description>
    <link>https://forem.com/chris-amaechi</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1272535%2Fb53a7193-5ea6-4285-90f8-9c80a76a51a8.png</url>
      <title>Forem: Christian Ameachi</title>
      <link>https://forem.com/chris-amaechi</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/chris-amaechi"/>
    <language>en</language>
    <item>
      <title>[Boost]</title>
      <dc:creator>Christian Ameachi</dc:creator>
      <pubDate>Sun, 22 Feb 2026 11:51:59 +0000</pubDate>
      <link>https://forem.com/chris-amaechi/-1n8e</link>
      <guid>https://forem.com/chris-amaechi/-1n8e</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/chris-amaechi" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1272535%2Fb53a7193-5ea6-4285-90f8-9c80a76a51a8.png" alt="chris-amaechi"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/chris-amaechi/building-a-secure-and-observable-e-commerce-microservices-platform-on-aws-eks-2o18" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Building a Secure and Observable E-commerce Microservices Platform on AWS EKS&lt;/h2&gt;
      &lt;h3&gt;Christian Ameachi ・ Feb 22&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
    </item>
    <item>
      <title>Building a Secure and Observable E-commerce Microservices Platform on AWS EKS</title>
      <dc:creator>Christian Ameachi</dc:creator>
      <pubDate>Sun, 22 Feb 2026 11:50:16 +0000</pubDate>
      <link>https://forem.com/chris-amaechi/building-a-secure-and-observable-e-commerce-microservices-platform-on-aws-eks-2o18</link>
      <guid>https://forem.com/chris-amaechi/building-a-secure-and-observable-e-commerce-microservices-platform-on-aws-eks-2o18</guid>
      <description>&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;Building a production-ready microservices architecture involves more than just writing code. It requires a robust delivery pipeline, automated infrastructure, and deep observability. In my latest project, &lt;strong&gt;ShopMicro-Production&lt;/strong&gt;, I set out to build a fully automated e-commerce engine deployed on Amazon EKS.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Stack
&lt;/h3&gt;

&lt;p&gt;The platform follows a classic microservices pattern:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Frontend&lt;/strong&gt;: A sleek React/Vite interface served via Nginx.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backend&lt;/strong&gt;: A Node.js/Express API handling the core logic.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ML Service&lt;/strong&gt;: A Python/Flask recommendation engine for intelligent product suggestions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Layers&lt;/strong&gt;: PostgreSQL for persistent storage and Redis for high-performance caching.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Infrastructure as Code (IaC)
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkjqa8g11q9uvzpmbabwq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkjqa8g11q9uvzpmbabwq.png" alt="Iac-Terraform" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One of the core principles of this project was "Everything as Code." I used &lt;strong&gt;Terraform&lt;/strong&gt; to provision the entire AWS EKS cluster, including managed node groups and all necessary IAM roles. &lt;/p&gt;

&lt;p&gt;Early on, I also experimented with &lt;strong&gt;Ansible&lt;/strong&gt; to bootstrap self-managed Kubernetes nodes on EC2, which provided a deep understanding of control plane orchestration before moving to the managed EKS experience.&lt;/p&gt;

&lt;h3&gt;
  
  
  The CI/CD Engine
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1cbwhmea7alr91qycqbw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1cbwhmea7alr91qycqbw.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The automation is powered by GitHub Actions with four distinct pipelines:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;App CI&lt;/strong&gt;: Automatically runs linting and unit tests on every PR, then builds and pushes Docker images to GHCR.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;App CD&lt;/strong&gt;: Sequentially deploys services to EKS, ensuring dependencies like Redis and Postgres are ready before the apps start.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IaC CI&lt;/strong&gt;: Validates Terraform code using &lt;code&gt;tflint&lt;/code&gt; and ensures compliance with &lt;strong&gt;OPA (Open Policy Agent)&lt;/strong&gt; policies.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Drift Detection&lt;/strong&gt;: A daily automated check to ensure no manual changes have deviated from our Terraform source of truth.&lt;/li&gt;
&lt;/ol&gt;
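
&lt;p&gt;To give a flavor of the drift-detection pipeline, here is a minimal GitHub Actions sketch (the workflow name, schedule, and &lt;code&gt;./terraform&lt;/code&gt; directory are assumptions for illustration, not the project's exact configuration):&lt;/p&gt;

```yaml
# Hypothetical sketch: daily Terraform drift check.
# A non-empty plan (detailed exit code 2) signals that live
# infrastructure has drifted from the code.
name: drift-detection
on:
  schedule:
    - cron: "0 6 * * *"   # once a day
jobs:
  drift:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init -input=false
        working-directory: ./terraform
      - run: terraform plan -detailed-exitcode -input=false
        working-directory: ./terraform
```

&lt;p&gt;With &lt;code&gt;-detailed-exitcode&lt;/code&gt;, any detected change fails the job, which surfaces manual drift in the Actions dashboard.&lt;/p&gt;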

&lt;h3&gt;
  
  
  Zero-Downtime Reliability
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftjiufoibw38qkxg9zhp2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftjiufoibw38qkxg9zhp2.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To ensure the system stays healthy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;HPA (Horizontal Pod Autoscaler)&lt;/strong&gt;: Automatically scales the backend and ML services based on CPU/Memory thresholds.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rollback Proof&lt;/strong&gt;: Implemented a "fail-safe" procedure where failed deployments can be reverted instantly using &lt;code&gt;kubectl rollout undo&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Persistence&lt;/strong&gt;: Fixed complex volume binding issues on EKS by implementing the AWS EBS CSI driver and custom &lt;code&gt;PGDATA&lt;/code&gt; pathing.&lt;/li&gt;
&lt;/ul&gt;
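
&lt;p&gt;An HPA of the kind described can be declared in a few lines of YAML. This is a generic sketch: the deployment name, replica bounds, and CPU threshold below are illustrative, not the project's actual values:&lt;/p&gt;

```yaml
# Hypothetical CPU-based HorizontalPodAutoscaler for the backend
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: backend-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: backend
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

&lt;p&gt;A failed rollout of the same deployment could then be reverted with &lt;code&gt;kubectl rollout undo deployment/backend&lt;/code&gt; (deployment name assumed).&lt;/p&gt;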

&lt;h3&gt;
  
  
  Observability: The Full Stack
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fftc34cfif55si59h2vsn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fftc34cfif55si59h2vsn.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can't fix what you can't see. I implemented the full LGTM-style stack (Loki, Grafana, Tempo, plus Metrics via Prometheus):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Metrics&lt;/strong&gt;: Prometheus scraping service endpoints.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Logs&lt;/strong&gt;: Loki aggregating distributed container logs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Traces&lt;/strong&gt;: Tempo providing end-to-end request tracing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Visualization&lt;/strong&gt;: Custom Grafana dashboards for single-pane-of-glass monitoring.&lt;/li&gt;
&lt;/ul&gt;
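
&lt;p&gt;On the metrics side, a Prometheus scrape job for a service like the backend might be configured as follows (the job name and pod label are assumptions, not the project's actual config):&lt;/p&gt;

```yaml
# Hypothetical Prometheus scrape job using Kubernetes
# service discovery to find backend pods in the cluster.
scrape_configs:
  - job_name: shopmicro-backend
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_label_app]
        regex: backend
        action: keep
```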

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;This project was a deep dive into the realities of cloud-native engineering. From handling stateful persistence in Kubernetes to enforcing policy-as-code, it taught me that the best systems are the ones that are both automated and transparent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Check out the repo here&lt;/strong&gt;: &lt;a href="https://github.com/Amae69/ShopMicro-Production.git" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>kubernetes</category>
      <category>microservices</category>
    </item>
    <item>
      <title>[Boost]</title>
      <dc:creator>Christian Ameachi</dc:creator>
      <pubDate>Fri, 09 Jan 2026 20:48:51 +0000</pubDate>
      <link>https://forem.com/chris-amaechi/-195e</link>
      <guid>https://forem.com/chris-amaechi/-195e</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/chris-amaechi" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1272535%2Fb53a7193-5ea6-4285-90f8-9c80a76a51a8.png" alt="chris-amaechi"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/chris-amaechi/from-linux-primitives-to-docker-swarm-a-deep-dive-into-container-networking-2376" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;From Linux Primitives to Docker Swarm: A Deep Dive into Container Networking 🚀&lt;/h2&gt;
      &lt;h3&gt;Christian Ameachi ・ Jan 9&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
    </item>
    <item>
      <title>From Linux Primitives to Docker Swarm: A Deep Dive into Container Networking 🚀</title>
      <dc:creator>Christian Ameachi</dc:creator>
      <pubDate>Fri, 09 Jan 2026 20:47:36 +0000</pubDate>
      <link>https://forem.com/chris-amaechi/from-linux-primitives-to-docker-swarm-a-deep-dive-into-container-networking-2376</link>
      <guid>https://forem.com/chris-amaechi/from-linux-primitives-to-docker-swarm-a-deep-dive-into-container-networking-2376</guid>
      <description>&lt;p&gt;Have you ever wondered what actually happens under the hood when you run &lt;code&gt;docker run&lt;/code&gt;? How do containers "talk" to each other while staying isolated?&lt;/p&gt;

&lt;p&gt;Lately, I've been taking a "from scratch" approach to understanding container networking. I didn't start with Docker. I started with the Linux Kernel.&lt;/p&gt;

&lt;p&gt;Here is the story of my journey through the layers of modern infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  🏗️ Phase 1: The Hard Way (Linux Primitives)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgm8pi9jb83alzeb5nju5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgm8pi9jb83alzeb5nju5.png" alt="linux primitive" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before touching a single Dockerfile, I built a microservices environment using raw Linux commands.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Ingredients:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Network Namespaces:&lt;/strong&gt; To create isolated network stacks for each service.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Veth Pairs:&lt;/strong&gt; Think of these as virtual "ethernet cables" connecting namespaces.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Linux Bridges:&lt;/strong&gt; Acting as virtual switches to manage traffic between services.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Iptables &amp;amp; Routing:&lt;/strong&gt; Manually configuring NAT and firewall rules to secure the frontend, backend, and database tiers.&lt;/li&gt;
&lt;/ul&gt;
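
&lt;p&gt;The ingredients above can be wired together in a handful of &lt;code&gt;iproute2&lt;/code&gt; commands. This is a generic sketch (run as root; the names and addresses are illustrative, not the exact ones from the project):&lt;/p&gt;

```shell
# Create an isolated network stack for a "web" service
ip netns add web
# Virtual cable: one end stays on the host, one goes into the namespace
ip link add veth-host type veth peer name veth-web
ip link set veth-web netns web
# Virtual switch on the host
ip link add br0 type bridge
ip link set veth-host master br0
ip link set br0 up
ip link set veth-host up
# Address and bring-up inside the namespace
ip netns exec web ip addr add 10.0.0.2/24 dev veth-web
ip netns exec web ip link set veth-web up
```

&lt;p&gt;Add NAT and filter rules with &lt;code&gt;iptables&lt;/code&gt; on top of this, and you have rebuilt the core of Docker's default bridge network by hand.&lt;/p&gt;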

&lt;p&gt;&lt;strong&gt;The Lesson:&lt;/strong&gt; Building the network manually teaches you exactly how much "magic" Docker does for us. The hand-rolled setup is brittle and complex, but with no extra layers in the data path it is extremely lean and fast.&lt;/p&gt;

&lt;h2&gt;
  
  
  🐳 Phase 2: Optimization with Docker
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcp140jmu5qfe6xopve1d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcp140jmu5qfe6xopve1d.png" alt="Docker" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, I migrated the stack to Docker. But I didn't just wrap the apps; I optimized them for production-grade reliability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Highlights:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Multi-Stage Builds:&lt;/strong&gt; Using &lt;code&gt;python:3.11-slim&lt;/code&gt; to keep image sizes small (under 150MB).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Embedded Healthchecks:&lt;/strong&gt; Ensuring the API Gateway only routes traffic to "ready" services.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource Constraints:&lt;/strong&gt; Pinning CPU and Memory (e.g., 0.5 vCPU, 128MB RAM) to prevent noisy neighbor scenarios.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Internal DNS:&lt;/strong&gt; Leveraging Docker's embedded DNS for seamless service discovery.&lt;/li&gt;
&lt;/ul&gt;
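
&lt;p&gt;Healthchecks and resource constraints of this kind are typically declared in Compose. The service name, endpoint, and limits below are illustrative, mirroring the constraints described above rather than the project's exact file:&lt;/p&gt;

```yaml
# Hypothetical Compose service with an embedded healthcheck
# and pinned resources to avoid noisy-neighbor effects.
services:
  api:
    image: shopmicro/api:latest
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 10s
      timeout: 3s
      retries: 3
    deploy:
      resources:
        limits:
          cpus: "0.5"
          memory: 128M
```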

&lt;h2&gt;
  
  
  🌐 Phase 3: The Distributed Horizon (Multi-Host Networking)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flcp2xpvf8z9p3jp02pw4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flcp2xpvf8z9p3jp02pw4.png" alt="multi-host" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The final challenge was scaling across two separate Linux hosts. This is where things got really interesting.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Strategy:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;VXLAN Overlay:&lt;/strong&gt; I manually established a VXLAN tunnel to bridge two different VM network segments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker Swarm:&lt;/strong&gt; I initialized a Swarm cluster and deployed our stack using the overlay network driver.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;The Result:&lt;/strong&gt; A resilient, self-healing system. Even when my Manager node faced storage issues, Swarm's orchestration automatically failed over the services to the healthy Worker node. &lt;strong&gt;That is the power of distributed systems.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  📊 Performance Benchmark: Linux vs. Docker
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmv4ruqkciyj1duk53jcy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmv4ruqkciyj1duk53jcy.png" alt="performance-benchmark" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I ran load tests using Apache Bench (AB) and the results were eye-opening:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Linux Namespaces:&lt;/strong&gt; ~54 Requests Per Second (RPS)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker Compose:&lt;/strong&gt; ~29 Requests Per Second (RPS)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While the raw Linux setup served roughly &lt;strong&gt;1.9x the throughput&lt;/strong&gt; (equivalently, Docker's throughput was ~46% lower), Docker's portability, ease of scaling, and management tools make it the clear winner for modern software engineering.&lt;/p&gt;
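
&lt;p&gt;One way to sanity-check the percentages from the two RPS figures:&lt;/p&gt;

```python
# Compare the measured throughput of the two setups (54 vs 29 RPS).
linux_rps = 54
docker_rps = 29

# Docker's throughput relative to raw Linux: about 46% lower.
docker_drop = (linux_rps - docker_rps) / linux_rps
print(round(docker_drop * 100))  # 46

# Equivalently, the raw setup served about 1.86x the requests.
speedup = linux_rps / docker_rps
print(round(speedup, 2))  # 1.86
```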

&lt;p&gt;&lt;strong&gt;💡 Key Takeaway&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Understanding the "low-level" doesn't mean you should always build from scratch—it means you know exactly what to do when your high-level tools break.&lt;/p&gt;

&lt;p&gt;If you're a DevOps or Backend Engineer, I highly recommend spending a day building a bridge and a namespace from scratch. It will change how you look at your &lt;code&gt;docker-compose.yml&lt;/code&gt; forever.&lt;/p&gt;

&lt;p&gt;I've documented the entire process, including setup scripts and architecture diagrams, in my GitHub repo.&lt;/p&gt;

&lt;p&gt;Check out the full project here: &lt;a href="https://github.com/Amae69/container-networking-project" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>docker</category>
      <category>linux</category>
      <category>networking</category>
    </item>
    <item>
      <title>Very simple and useful cli</title>
      <dc:creator>Christian Ameachi</dc:creator>
      <pubDate>Sun, 23 Nov 2025 08:41:58 +0000</pubDate>
      <link>https://forem.com/chris-amaechi/very-simple-and-useful-cli-1p8o</link>
      <guid>https://forem.com/chris-amaechi/very-simple-and-useful-cli-1p8o</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/chris-amaechi" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1272535%2Fb53a7193-5ea6-4285-90f8-9c80a76a51a8.png" alt="chris-amaechi"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/chris-amaechi/building-a-simple-ticket-tracker-cli-in-go-39fp" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Building a Simple Ticket Tracker CLI in Go&lt;/h2&gt;
      &lt;h3&gt;Christian Ameachi ・ Nov 22&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#cli&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#go&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#productivity&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#tooling&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
    </item>
    <item>
      <title>Building a Simple Ticket Tracker CLI in Go</title>
      <dc:creator>Christian Ameachi</dc:creator>
      <pubDate>Sat, 22 Nov 2025 21:49:28 +0000</pubDate>
      <link>https://forem.com/chris-amaechi/building-a-simple-ticket-tracker-cli-in-go-39fp</link>
      <guid>https://forem.com/chris-amaechi/building-a-simple-ticket-tracker-cli-in-go-39fp</guid>
      <description>&lt;h3&gt;
  
  
  A lightweight, command-line alternative to complex ticketing systems.
&lt;/h3&gt;

&lt;p&gt;Using Go and &lt;code&gt;cobra-cli&lt;/code&gt;, I built a simple command-line interface for managing support tickets. Tickets are stored locally in a CSV file.&lt;/p&gt;




&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;As developers, we often find ourselves juggling multiple tasks, bugs, and feature requests. While tools like Jira, Trello, or GitHub Issues are powerful, sometimes you just need something &lt;strong&gt;simple&lt;/strong&gt;, &lt;strong&gt;fast&lt;/strong&gt;, and &lt;strong&gt;local&lt;/strong&gt; to track your daily work without leaving the terminal.&lt;/p&gt;

&lt;p&gt;That's why I built &lt;strong&gt;Ticket CLI&lt;/strong&gt;—a simple command-line tool written in Go to track daily tickets and store them in a CSV file. No servers, no databases, just a binary and a text file.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Tech Stack
&lt;/h2&gt;

&lt;p&gt;For this project, I chose:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://go.dev/" rel="noopener noreferrer"&gt;Go&lt;/a&gt;&lt;/strong&gt;: For its speed, simplicity, and ability to compile into a single binary.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://github.com/spf13/cobra" rel="noopener noreferrer"&gt;Cobra&lt;/a&gt;&lt;/strong&gt;: The industry standard for building modern CLI applications in Go. It handles flag parsing, subcommands, and help text generation effortlessly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Standard Library (&lt;code&gt;encoding/csv&lt;/code&gt;)&lt;/strong&gt;: To keep dependencies low, I used Go's built-in CSV support for data persistence.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How It Works
&lt;/h2&gt;

&lt;p&gt;The project follows a standard Go CLI structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ticket-cli/
├── cmd/            # Cobra commands (add, list, delete)
├── internal/       # Business logic
│   └── storage/    # CSV handling
└── main.go         # Entry point
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  1. The Command Structure
&lt;/h3&gt;

&lt;p&gt;Using Cobra, I defined commands like &lt;code&gt;add&lt;/code&gt;, &lt;code&gt;list&lt;/code&gt;, and &lt;code&gt;delete&lt;/code&gt;. Here's a snippet of how the &lt;code&gt;add&lt;/code&gt; command handles flags to create a new ticket:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="c"&gt;// cmd/add.go&lt;/span&gt;
&lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;addCmd&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;cobra&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Command&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;Use&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;   &lt;span class="s"&gt;"add"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;Short&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"Add a new ticket"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;Run&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cmd&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;cobra&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Command&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;args&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="c"&gt;// default values logic...&lt;/span&gt;

        &lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;storage&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Ticket&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;ID&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;          &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Sprintf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"%d"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Now&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;UnixNano&lt;/span&gt;&lt;span class="p"&gt;()),&lt;/span&gt;
            &lt;span class="n"&gt;Title&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;       &lt;span class="n"&gt;flagTitle&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;Customer&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;    &lt;span class="n"&gt;flagCustomer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;Priority&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;    &lt;span class="n"&gt;flagPriority&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;Status&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;      &lt;span class="n"&gt;flagStatus&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;Description&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;flagDescription&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;storage&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;AppendTicket&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Error saving ticket:"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Ticket saved with ID:"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ID&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. Data Persistence (The "Database")
&lt;/h3&gt;

&lt;p&gt;Instead of setting up SQLite or a JSON store, I opted for CSV. It's human-readable and easy to debug. The &lt;code&gt;internal/storage&lt;/code&gt; package handles reading and writing to &lt;code&gt;tickets.csv&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="c"&gt;// internal/storage/storage.go&lt;/span&gt;
&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;AppendTicket&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="n"&gt;Ticket&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c"&gt;// ... (file opening logic)&lt;/span&gt;
    &lt;span class="n"&gt;w&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;csv&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NewWriter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;defer&lt;/span&gt; &lt;span class="n"&gt;w&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Flush&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="n"&gt;rec&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ID&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Title&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Customer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Priority&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Status&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Description&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;w&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;rec&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Installation &amp;amp; Usage
&lt;/h2&gt;

&lt;p&gt;You can clone the repo and build it yourself:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/yourusername/ticket-cli
&lt;span class="nb"&gt;cd &lt;/span&gt;ticket-cli
go mod tidy
go build &lt;span class="nt"&gt;-o&lt;/span&gt; ticket-cli &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Adding a Ticket
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./ticket-cli add &lt;span class="nt"&gt;--title&lt;/span&gt; &lt;span class="s2"&gt;"Fix login bug"&lt;/span&gt; &lt;span class="nt"&gt;--priority&lt;/span&gt; high &lt;span class="nt"&gt;--customer&lt;/span&gt; &lt;span class="s2"&gt;"Acme Corp"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Listing Tickets
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./ticket-cli list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Filter by date
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./ticket-cli list &lt;span class="nt"&gt;--date&lt;/span&gt; 2025-11-15
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  CSV Storage
&lt;/h3&gt;

&lt;p&gt;A CSV file is created automatically in the project directory and is updated each time a new ticket is added via &lt;code&gt;./ticket-cli add&lt;/code&gt; with the appropriate flags.&lt;/p&gt;

&lt;h3&gt;
  
  
  Columns:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ID, Date, Title, Customer, Priority, Status, Description
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
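
&lt;p&gt;A stored row then looks something like this (the values are made up for illustration, reusing the ticket from the usage example above; the ID comes from &lt;code&gt;time.Now().UnixNano()&lt;/code&gt;):&lt;/p&gt;

```
1732312168123456789,2025-11-22,Fix login bug,Acme Corp,high,open,Login fails for SSO users
```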



&lt;h2&gt;
  
  
  Future Improvements
&lt;/h2&gt;

&lt;p&gt;This is just an MVP. Some ideas for the future include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;JSON/SQLite Storage&lt;/strong&gt;: For more complex querying.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;TUI (Text User Interface)&lt;/strong&gt;: Using &lt;code&gt;bubbletea&lt;/code&gt; for an interactive dashboard.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cloud Sync&lt;/strong&gt;: Syncing tickets to a Gist or S3 bucket.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Building CLI tools in Go is a rewarding experience. Cobra makes the interface professional, and Go's standard library handles the rest. If you're looking for a weekend project, try building your own developer tools!&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Check out the code on &lt;a href="https://github.com/Amae69/support_ticket-cli" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>cli</category>
      <category>go</category>
      <category>productivity</category>
      <category>tooling</category>
    </item>
    <item>
      <title>Experience Continuous Integration with Jenkins | Ansible | Artifactory | SonarQube | PHP</title>
      <dc:creator>Christian Ameachi</dc:creator>
      <pubDate>Sun, 25 Feb 2024 00:07:34 +0000</pubDate>
      <link>https://forem.com/chris-amaechi/experience-continuous-integration-with-jenkins-ansible-artifactory-sonarqube-php-3eo2</link>
      <guid>https://forem.com/chris-amaechi/experience-continuous-integration-with-jenkins-ansible-artifactory-sonarqube-php-3eo2</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;Experience Continuous Integration with Jenkins | Ansible | Artifactory | SonarQube | PHP&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;IMPORTANT NOTICE&lt;/strong&gt; - This project has some initial theoretical concepts that must be well understood before moving on to the practical part. Please read it carefully, as many times as needed, to completely digest the most important and fundamental DevOps concepts. To successfully implement this project, it is crucial to grasp the importance of the entire CI/CD process, the role of each tool, and the success metrics - so we encourage you to thoroughly study the following theory until you feel comfortable explaining all the concepts (e.g., to a new junior colleague or during a job interview).&lt;/p&gt;

&lt;p&gt;In previous projects, you have been deploying a tooling website directly into the &lt;code&gt;/var/www/html&lt;/code&gt; directory on dev servers. Even though that worked fine and we were able to access the website, it is not the best way to do it. Real-world web application code written in &lt;a href="https://en.wikipedia.org/wiki/Java_(programming_language)" rel="noopener noreferrer"&gt;&lt;strong&gt;Java&lt;/strong&gt;&lt;/a&gt;, &lt;a href="https://en.wikipedia.org/wiki/.NET_Framework" rel="noopener noreferrer"&gt;&lt;strong&gt;.NET&lt;/strong&gt;&lt;/a&gt; or other &lt;strong&gt;compiled&lt;/strong&gt; programming languages requires a build stage to create an executable file. The executable file (e.g., a &lt;code&gt;jar&lt;/code&gt; file in the case of &lt;code&gt;Java&lt;/code&gt;) bundles the code together with the library dependencies the application needs to run successfully. &lt;/p&gt;

&lt;p&gt;Programs written in languages such as &lt;strong&gt;&lt;a href="https://en.wikipedia.org/wiki/PHP" rel="noopener noreferrer"&gt;PHP&lt;/a&gt;&lt;/strong&gt;, &lt;a href="https://en.wikipedia.org/wiki/JavaScript" rel="noopener noreferrer"&gt;&lt;strong&gt;JavaScript&lt;/strong&gt;&lt;/a&gt; or &lt;a href="https://en.wikipedia.org/wiki/Python_(programming_language)" rel="noopener noreferrer"&gt;&lt;strong&gt;Python&lt;/strong&gt;&lt;/a&gt; run directly, without being built into an executable file - these languages are called &lt;strong&gt;interpreted&lt;/strong&gt;. That is why we could easily deploy the entire code from git into &lt;code&gt;/var/www/html&lt;/code&gt; and the webserver could immediately render the pages in a browser. However, it is not optimal to download code directly from &lt;code&gt;Git&lt;/code&gt; onto our servers. There is a smarter way to package the entire application code and track release versions: we can package the code and all its dependencies into an archive such as &lt;code&gt;.tar.gz&lt;/code&gt; or &lt;code&gt;.zip&lt;/code&gt;, so that it can be easily unpacked on the respective environment's servers. &lt;/p&gt;
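As a concrete sketch of that packaging step, the following shell commands build a versioned release archive. The application name, version, and paths here are illustrative assumptions, not taken from the project:

```shell
# Illustrative packaging of an interpreted (PHP) app into a versioned artifact.
# APP_NAME, VERSION and the source directory are assumptions for this demo.
APP_NAME=todo-app
VERSION=1.0.3
SRC_DIR=$(mktemp -d)                      # stand-in for the application source
echo "hello from ${APP_NAME}" > "$SRC_DIR/index.php"

mkdir -p build
# Archive the code and its dependencies, excluding VCS metadata, so the same
# artifact can be unpacked identically on any environment's servers.
tar -czf "build/${APP_NAME}-${VERSION}.tar.gz" --exclude='.git' -C "$SRC_DIR" .
ls "build/${APP_NAME}-${VERSION}.tar.gz"
```

Versioning the archive name is what makes releases traceable: a server can report exactly which artifact it is running, and rollbacks become a matter of unpacking the previous archive.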

&lt;p&gt;For a better understanding of the difference between &lt;strong&gt;compiled&lt;/strong&gt; vs &lt;strong&gt;interpreted&lt;/strong&gt; programming languages - read &lt;a href="https://www.freecodecamp.org/news/compiled-versus-interpreted-languages/" rel="noopener noreferrer"&gt;this short article&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In this project, you will understand and get hands-on experience with the entire CI/CD concept from an application perspective. To gain real expertise around this idea, it is best to see it in action across different programming languages and from the platform perspective too. From the application perspective, we will be focusing on &lt;strong&gt;&lt;a href="https://en.wikipedia.org/wiki/PHP" rel="noopener noreferrer"&gt;PHP&lt;/a&gt;&lt;/strong&gt; here; there are more projects ahead that are based on &lt;strong&gt;&lt;a href="https://en.wikipedia.org/wiki/Java_(programming_language)" rel="noopener noreferrer"&gt;Java&lt;/a&gt;&lt;/strong&gt;, &lt;a href="https://en.wikipedia.org/wiki/Node.js" rel="noopener noreferrer"&gt;&lt;strong&gt;Node.js&lt;/strong&gt;&lt;/a&gt;, &lt;strong&gt;&lt;a href="https://en.wikipedia.org/wiki/.NET_Framework" rel="noopener noreferrer"&gt;.NET&lt;/a&gt;&lt;/strong&gt; and &lt;a href="https://en.wikipedia.org/wiki/Python_(programming_language)" rel="noopener noreferrer"&gt;&lt;strong&gt;Python&lt;/strong&gt;&lt;/a&gt;. By the time you start working on the &lt;a href="https://www.terraform.io" rel="noopener noreferrer"&gt;Terraform&lt;/a&gt;, &lt;a href="https://www.docker.com" rel="noopener noreferrer"&gt;Docker&lt;/a&gt; and &lt;a href="https://kubernetes.io" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt; projects, you will get to see the platform perspective of CI/CD in action.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;What is Continuous Integration?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In software engineering, Continuous Integration (CI) is the practice of merging all developers' working copies to a shared mainline (e.g., a Git repository or some other &lt;a href="https://en.wikipedia.org/wiki/Comparison_of_version-control_software" rel="noopener noreferrer"&gt;version control system&lt;/a&gt;) several times per day. Frequent merges reduce the chances of conflicts in code and allow teams to run tests more often, avoiding massive rework if something goes wrong. This principle can be formulated as &lt;strong&gt;&lt;em&gt;Commit early, push often&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The general idea behind multiple commits is to avoid what is commonly known as &lt;em&gt;Merge Hell&lt;/em&gt; or &lt;em&gt;Integration Hell&lt;/em&gt;. When a new developer joins a project, they must create a copy of the main codebase by starting a new feature branch from the &lt;em&gt;mainline&lt;/em&gt; to develop their own features (in some organizations or teams, this could be called a &lt;code&gt;develop&lt;/code&gt;, &lt;code&gt;main&lt;/code&gt; or &lt;code&gt;master&lt;/code&gt; branch). If there are tens of developers working on the same project, they will all have their own branches created from the &lt;em&gt;mainline&lt;/em&gt; at different points in time. Once a developer branches off, their copy starts drifting away from the mainline with every new merge of other developers' code. If this lingers on for a very long time without reconciling the code, it will cause serious merge conflicts - the &lt;em&gt;Merge Hell&lt;/em&gt; mentioned above. Imagine such a &lt;em&gt;hell&lt;/em&gt; from tens of developers, or worse, hundreds. So the best thing to do is to continuously commit &amp;amp; push your code to the &lt;code&gt;mainline&lt;/code&gt; - as many as tens of times per day. With this practice, you can avoid &lt;em&gt;Merge Hell&lt;/em&gt; or &lt;em&gt;Integration Hell&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The CI concept is not only about committing your code. There is a general workflow - let us step through it...&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Run tests locally&lt;/strong&gt;: Before developers commit their code to a central repository, it is recommended to test the code locally. The &lt;a href="https://en.wikipedia.org/wiki/Test-driven_development" rel="noopener noreferrer"&gt;Test-Driven Development (TDD)&lt;/a&gt; approach is commonly used in combination with CI. Developers write tests for their code, called &lt;strong&gt;unit tests&lt;/strong&gt;, and before they commit their work, they run these tests locally. This practice helps a team avoid having one developer's work-in-progress code break other developers' copies of the codebase.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Compile code in CI&lt;/strong&gt;: &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After testing code locally, developers commit and push their work to a central repository. Rather than building the code into an executable locally, a dedicated CI server picks up the code and runs the build there. In this project we will use Jenkins, already familiar to you, as our CI server. Builds happen either periodically - by polling the repository on some configured schedule - or after every commit. Having a CI server where builds run is a good practice for a team, as everyone has visibility into each commit and its corresponding builds.&lt;/p&gt;
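To make the build-on-commit idea concrete, a minimal declarative Jenkins pipeline might look like the sketch below. The repository URL, polling schedule, and stage commands are illustrative assumptions, not this project's actual pipeline:

```groovy
// Minimal declarative Jenkinsfile sketch; URL, schedule and stage
// commands are illustrative assumptions, not this project's pipeline.
pipeline {
    agent any

    triggers {
        // Poll the repository every ~5 minutes; a per-commit webhook
        // is the common alternative to polling.
        pollSCM('H/5 * * * *')
    }

    stages {
        stage('Checkout') {
            steps {
                git branch: 'main', url: 'https://github.com/example/todo-app.git'
            }
        }
        stage('Build') {
            steps {
                sh 'composer install --no-interaction'
            }
        }
        stage('Test') {
            steps {
                sh './vendor/bin/phpunit'
            }
        }
    }
}
```

Because every commit (or poll interval) runs the same pipeline on the same server, the whole team sees the same pass/fail signal instead of "it builds on my machine".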

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Run further tests in CI&lt;/strong&gt;: &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Even though tests have been run locally by developers, it is important to run the unit tests on the CI server as well. But rather than focusing solely on unit tests, there are other kinds of tests and &lt;a href="https://en.wikipedia.org/wiki/Program_analysis" rel="noopener noreferrer"&gt;code analysis&lt;/a&gt; that can be run on a CI server. These are extremely critical to determining the overall quality of the code being developed, how it interacts with other developers' work, and how vulnerable it is to attacks. A CI server can use different tools for &lt;strong&gt;Static Code Analysis&lt;/strong&gt;, &lt;strong&gt;Code Coverage Analysis&lt;/strong&gt;, &lt;strong&gt;Code Smells Analysis&lt;/strong&gt;, and &lt;strong&gt;Compliance Analysis&lt;/strong&gt;. In addition, it can run other types of tests such as &lt;strong&gt;Integration&lt;/strong&gt; and &lt;strong&gt;Penetration&lt;/strong&gt; tests. Other tasks performed by a CI server include producing code documentation from the source code and facilitating manual quality assurance (QA) testing processes.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Deploy an artifact from CI&lt;/strong&gt;: &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At this stage, the difference between CI and CD is spelt out. As you now know, CI is &lt;strong&gt;Continuous Integration&lt;/strong&gt;, which is everything we have been discussing so far. CD, on the other hand, is &lt;strong&gt;Continuous Delivery&lt;/strong&gt;, which ensures that software checked into the mainline is &lt;strong&gt;always ready&lt;/strong&gt; to be deployed to users. The deployment here is manually triggered after certain QA tasks have passed successfully. There is another CD known as &lt;strong&gt;Continuous Deployment&lt;/strong&gt;, which is also about deploying the software to the users, but rather than being manual, it makes the entire process &lt;strong&gt;fully automated&lt;/strong&gt;. Thus, Continuous Deployment is just one step further in automation than Continuous Delivery.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Continuous Integration in The Real World&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;To emphasize a typical CI Pipeline further, let us explore the diagram below a little deeper.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frbkqzmsyv6456q6y3lrg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frbkqzmsyv6456q6y3lrg.png" alt="CI-Pipeline-Regular" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Version Control&lt;/strong&gt;: This is the stage where developers' code gets committed and pushed after they have tested their work locally.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build&lt;/strong&gt;: Depending on the type of language or technology used, we may need to build the codes into &lt;a href="https://en.wikipedia.org/wiki/Executable" rel="noopener noreferrer"&gt;binary executable files&lt;/a&gt; (in case of &lt;strong&gt;compiled&lt;/strong&gt; languages) or just package the codes together with all necessary dependencies into a deployable package (in case of &lt;strong&gt;interpreted&lt;/strong&gt; languages).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Unit Test&lt;/strong&gt;: The unit tests written by the developers are run here. Depending on how the CI job is configured, the entire pipeline may fail if part of the tests fails, and developers will have to fix the failure before starting the pipeline again. A &lt;strong&gt;&lt;em&gt;Job&lt;/em&gt;&lt;/strong&gt;, by the way, is a phase in the pipeline. &lt;strong&gt;Unit Test&lt;/strong&gt; is a phase, therefore it can be considered a job on its own.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deploy&lt;/strong&gt;: Once the tests are passed, the next phase is to deploy the compiled or packaged code into an artifact repository. This is where all the various versions of code including the latest will be stored. The CI tool will have to pick up the code from this location to proceed with the remaining parts of the pipeline.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auto Test&lt;/strong&gt;: Apart from unit testing, there are many other kinds of tests required to analyse the quality of the code and determine how vulnerable the software will be to external or internal attacks. These tests must be automated, and there can be multiple environments created to fulfil different test requirements. For example, a server dedicated to &lt;strong&gt;Integration Testing&lt;/strong&gt; will have the code deployed there to conduct integration tests. Once that passes, there can be further sub-layers in the testing phase to which the code is deployed for additional tests, such as &lt;strong&gt;User Acceptance Testing (UAT)&lt;/strong&gt; and &lt;strong&gt;Penetration Testing&lt;/strong&gt;. These servers are named according to what they are designed to do in those environments: a &lt;strong&gt;UAT&lt;/strong&gt; server is generally used for &lt;strong&gt;UAT&lt;/strong&gt;, a &lt;em&gt;SIT&lt;/em&gt; server is for &lt;strong&gt;Systems Integration Testing&lt;/strong&gt;, a &lt;em&gt;PEN&lt;/em&gt; server is for &lt;strong&gt;Penetration Testing&lt;/strong&gt;, and they can be named according to whatever naming convention the team uses. An environment does not necessarily have to reside on one single server. In most cases it will be a stack, as you have defined in your Ansible inventory: all the servers in &lt;em&gt;inventory/dev&lt;/em&gt; are considered the Dev environment. The same goes for &lt;em&gt;inventory/stage&lt;/em&gt; (Staging environment), &lt;em&gt;inventory/preprod&lt;/em&gt; (Pre-production environment), &lt;em&gt;inventory/prod&lt;/em&gt; (Production environment), etc. So it all comes down to the naming convention agreed and used company- or team-wide.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deploy to production&lt;/strong&gt;: Once all the tests have been conducted and the release manager - or whoever has the authority to authorize the release - is happy, they give the green light to hit the deploy button and ship the release to the production environment. This is an ideal Continuous Delivery pipeline. If the entire pipeline were automated and no human were required to manually make the &lt;strong&gt;Go&lt;/strong&gt; decision, this would be considered Continuous Deployment: every time there is a code commit and push, the pipeline is triggered, and the loop continues over and over again.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Measure and Validate&lt;/strong&gt;: This is where live users are interacting with the application and feedback is being collected for further improvements and bug fixes. There are many metrics that must be determined and observed here. We will quickly go through 13 metrics that MUST be considered.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Common Best Practices of CI/CD
&lt;/h3&gt;

&lt;p&gt;Before we move on to observability metrics, let us list the principles that define a reliable and robust CI/CD pipeline:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Maintain a code repository&lt;/li&gt;
&lt;li&gt;Automate build process&lt;/li&gt;
&lt;li&gt;Make builds self-tested&lt;/li&gt;
&lt;li&gt;Everyone commits to the baseline every day&lt;/li&gt;
&lt;li&gt;Every commit to baseline should be built&lt;/li&gt;
&lt;li&gt;Every bug-fix commit should come with a test case&lt;/li&gt;
&lt;li&gt;Keep the build fast&lt;/li&gt;
&lt;li&gt;Test in a clone of production environment&lt;/li&gt;
&lt;li&gt;Make it easy to get the latest deliverables&lt;/li&gt;
&lt;li&gt;Everyone can see the results of the latest build&lt;/li&gt;
&lt;li&gt;Automate deployment (if you are confident enough in your CI/CD pipeline and willing to go for a fully automated Continuous Deployment)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Why are we doing everything we are doing? - 13 DevOps Success Metrics&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;After all, DevOps is all about continuous delivery or deployment, and being able to ship out quality code as fast as possible. This is a very ambitious thing to desire; therefore, we must be careful not to break things as we are moving very fast. By tracking these metrics, we can determine our delivery speed and bottlenecks before breaking things. Ultimately, the goals of DevOps are enhanced Velocity, Quality, and Performance. But how do we track these parameters? Let us have a look at the 13 metrics to watch out for.&lt;/p&gt;

&lt;p&gt;1 &lt;strong&gt;Deployment frequency&lt;/strong&gt;: Tracking how often you do deployments is a good DevOps metric. Ultimately, the goal is to do smaller deployments more often. Reducing the size of deployments makes them easier to test and release. I would suggest counting production and non-production deployments separately. How often you deploy to QA or pre-production environments is also important: you need to deploy early and often in QA to ensure enough time for testing.&lt;/p&gt;

&lt;p&gt;2 &lt;strong&gt;Lead time&lt;/strong&gt;: If the goal is to ship code quickly, this is a key DevOps metric. I would define lead time as the amount of time between starting on a work item and deploying it. This tells you, if you started a new work item today, how long it would take on average until it gets to production.&lt;/p&gt;

&lt;p&gt;3 &lt;strong&gt;Customer tickets&lt;/strong&gt;: The best, and worst, indicators of application problems are customer support tickets and feedback. The last thing you want is your users reporting bugs or having problems with your software. Because of this, customer tickets also serve as a good indicator of application quality and performance problems.&lt;/p&gt;

&lt;p&gt;4 &lt;strong&gt;Percentage of passed automated tests&lt;/strong&gt;: To increase velocity, it is highly recommended that the development team make extensive use of unit and functional testing. Since DevOps relies heavily on automation, tracking how well automated tests work is a good DevOps metric. It is good to know how often code changes break tests.&lt;/p&gt;

&lt;p&gt;5 &lt;strong&gt;Defect escape rate&lt;/strong&gt;: Do you know how many software defects are being found in production versus QA? If you want to ship code fast, you need to have confidence that you can find software defects before they get to production. Defect escape rate is a great DevOps metric to track how often those defects make it to production.&lt;/p&gt;
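As a toy illustration of the metric above (the numbers are made up), the defect escape rate is simply the share of all defects that reached production rather than being caught in QA:

```go
package main

import "fmt"

// defectEscapeRate returns the fraction of defects that escaped to
// production out of all defects found (QA + production). A sketch of
// the metric's arithmetic, not a tool from the article.
func defectEscapeRate(prodDefects, qaDefects int) float64 {
	total := prodDefects + qaDefects
	if total == 0 {
		return 0 // no defects found at all
	}
	return float64(prodDefects) / float64(total)
}

func main() {
	// Example: 3 defects reached production, 27 were caught in QA.
	fmt.Printf("escape rate: %.0f%%\n", defectEscapeRate(3, 27)*100)
}
```

A falling escape rate over successive releases is the signal you want: it means QA and automated tests are catching a growing share of defects before users see them.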

&lt;p&gt;6 &lt;strong&gt;Availability&lt;/strong&gt;: The last thing we ever want is for our application to be down. Depending on the type of application and how we deploy it, we may have a little downtime as part of scheduled maintenance. It is highly recommended to track this metric and all unplanned outages. Most software companies build status pages to track this. Such as this &lt;a href="https://www.google.co.uk/appsstatus#hl=en-GB&amp;amp;v=status" rel="noopener noreferrer"&gt;Google Products Status Page&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;7 &lt;strong&gt;Service level agreements&lt;/strong&gt;: Most companies have some service level agreement (SLA) that they promise to the customers. It is also important to track compliance with SLAs. Even if there are no formally stated SLAs, there probably are application non-functional requirements or expectations to be met.&lt;/p&gt;

&lt;p&gt;8 &lt;strong&gt;Failed deployments&lt;/strong&gt;: We all hope this never happens, but how often do our deployments cause an outage or major issues for the users? Reversing a failed deployment is something we never want to do, but it is something you should always plan for. If you have issues with failed deployments, be sure to track this metric over time. This could also be seen as tracking &lt;strong&gt;&lt;em&gt;Mean Time To Failure&lt;/em&gt;&lt;/strong&gt; (MTTF).&lt;/p&gt;

&lt;p&gt;9 &lt;strong&gt;Error rates&lt;/strong&gt;: Tracking error rates within the application is super important. Not only do they serve as an indicator of quality problems, but also of ongoing performance and uptime issues. In software development, errors are also known as &lt;em&gt;exceptions&lt;/em&gt;, and proper exception handling is critical. If exceptions are not handled nicely, we can find out by monitoring the rate of errors.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Bugs – Identify new exceptions being thrown in the code after a deployment&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Production issues – Capture issues with database connections, query timeouts, and other related issues&lt;/p&gt;

&lt;p&gt;Presenting error rate metrics like this simply gives greater insights into where to focus attention.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fijrwyj4itdopv4upcaj6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fijrwyj4itdopv4upcaj6.png" alt="Error-Rate" width="800" height="139"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;10 &lt;strong&gt;Application usage &amp;amp; traffic&lt;/strong&gt;: After a deployment, we want to see if the number of transactions or users accessing our system looks normal. If we suddenly have no traffic or a giant spike in traffic, something could be wrong. An attacker may be routing traffic elsewhere, or initiating a &lt;a href="https://us.norton.com/internetsecurity-emerging-threats-what-is-a-ddos-attack-30sectech-by-norton.html" rel="noopener noreferrer"&gt;DDoS attack&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;11 &lt;strong&gt;Application performance&lt;/strong&gt;: Before we even perform a deployment, we should configure monitoring tools like &lt;a href="https://stackify.com/retrace/" rel="noopener noreferrer"&gt;Retrace&lt;/a&gt;, &lt;a href="https://www.datadoghq.com" rel="noopener noreferrer"&gt;DataDog&lt;/a&gt;, &lt;a href="https://newrelic.com" rel="noopener noreferrer"&gt;New Relic&lt;/a&gt;, or &lt;a href="https://en.wikipedia.org/wiki/AppDynamics" rel="noopener noreferrer"&gt;AppDynamics&lt;/a&gt; to look for performance problems, hidden errors, and other issues. During and after the deployment, we should also look for any changes in overall application performance and establish some benchmarks to know when things deviate from the norm.&lt;/p&gt;

&lt;p&gt;It might be common after a deployment to see major changes in the usage of specific SQL queries, web service or HTTP calls, and other application dependencies. These monitoring tools can provide valuable visualizations, like the one below, that make it easy to spot problems.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flb1chnzm7cbki8imuyfb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flb1chnzm7cbki8imuyfb.png" alt="Application-Performance" width="800" height="188"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;12 &lt;strong&gt;Mean time to detection (MTTD)&lt;/strong&gt;: When problems happen, it is important that we identify them quickly. The last thing we want is to have a major partial or complete system outage and not know about it. Having robust application monitoring and good observability tools in place will help us detect issues quickly. Once they are detected, we also must fix them quickly!&lt;/p&gt;

&lt;p&gt;13 &lt;strong&gt;Mean time to recovery (MTTR)&lt;/strong&gt;: This metric helps us track how long it takes to recover from failures. A key metric for the business is keeping failures to a minimum and being able to recover from them quickly. It is typically measured in hours and may refer to business hours, not calendar hours.&lt;/p&gt;

&lt;p&gt;These are the major metrics that any DevOps team should track and monitor to understand how well the CI/CD process is established and how it helps to deliver a quality application to the users.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Simulating a typical CI/CD Pipeline for a PHP Based application&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;As part of the ongoing infrastructure development with Ansible started in &lt;em&gt;Project 11&lt;/em&gt;, you will be tasked to create a pipeline that simulates continuous integration and delivery. The target end-to-end CI/CD pipeline is represented in the diagram below. It is important to know that both the &lt;strong&gt;Tooling&lt;/strong&gt; and &lt;strong&gt;TODO&lt;/strong&gt; web applications are based on an interpreted (&lt;a href="https://en.wikipedia.org/wiki/Scripting_language" rel="noopener noreferrer"&gt;scripting&lt;/a&gt;) language (PHP). This means the code can be deployed directly onto a server and will work without being compiled to machine language. &lt;/p&gt;

&lt;p&gt;The problem with that approach is that it would be difficult to package and version the software for different releases. So in this project we will take a different approach for releases: rather than downloading directly from git, we will fetch packaged release artifacts using the Ansible &lt;a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/uri_module.html" rel="noopener noreferrer"&gt;&lt;code&gt;uri&lt;/code&gt; module&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdw362wd48cep03x292j7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdw362wd48cep03x292j7.png" alt="CI_CD-Pipeline-For-PHP-ToDo-Application" width="800" height="242"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Set Up
&lt;/h2&gt;

&lt;p&gt;This project is partly a continuation of your Ansible work, so simply add and subtract based on the new setup in this project. It will require a lot of servers to simulate all the different environments, from &lt;code&gt;dev/ci&lt;/code&gt; all the way to &lt;code&gt;production&lt;/code&gt; - quite a lot of servers altogether (but you do not have to create them all at once; only create the servers required for the environment you are currently working on. For example, when doing deployments for development, do not create servers for integration, pentest, or production yet). &lt;/p&gt;

&lt;p&gt;Try to utilize your AWS free tier as much as you can; you can also register a new account if you have exhausted the current one. Alternatively, you can use &lt;a href="https://cloud.google.com" rel="noopener noreferrer"&gt;Google Cloud (GCP)&lt;/a&gt; to rent virtual machines from this cloud service provider - you can get $300 credit &lt;a href="https://clozon.com/try-google-cloud-services-and-get-300-credit-with-a-12-month-free-trial/" rel="noopener noreferrer"&gt;here&lt;/a&gt; or &lt;a href="https://www.startups.com/products/benefits/googlecloud" rel="noopener noreferrer"&gt;here&lt;/a&gt; (NOTE: please read the instructions carefully to get your credits).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: This is still NOT a cloud-focused project. The AWS cloud end-to-end project begins with &lt;a href="https://expert-pbl.darey.io/en/latest/project15.html" rel="noopener noreferrer"&gt;project-15&lt;/a&gt;. Therefore, do not worry about advanced AWS or GCP configuration. All we need here is virtual machines that can be accessed over &lt;strong&gt;SSH&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;To minimize the cost of cloud servers, you do not have to create all the servers at once; simply spin up a minimal server setup and add more as you progress through the project implementation and reach a need for them.&lt;/p&gt;

&lt;p&gt;To get started, we will focus on these environments initially.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CI&lt;/li&gt;
&lt;li&gt;Dev &lt;/li&gt;
&lt;li&gt;Pentest&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both &lt;code&gt;SIT&lt;/code&gt; (System Integration Testing) and &lt;code&gt;UAT&lt;/code&gt; (User Acceptance Testing) do not require a lot of extra installation or configuration: they are basically the webservers holding our applications. But &lt;code&gt;Pentest&lt;/code&gt; (Penetration Testing) is where we will conduct security-related tests, so some other tools and specific configurations will be needed. In some cases, it will also be used for &lt;code&gt;Performance and Load&lt;/code&gt; testing; otherwise, that can be a separate environment of its own. It all depends on decisions made by the company and the team running the show.&lt;/p&gt;

&lt;p&gt;What we want to achieve is to have Nginx serve as a &lt;a href="https://en.wikipedia.org/wiki/Reverse_proxy" rel="noopener noreferrer"&gt;reverse proxy&lt;/a&gt; for our sites and tools. Each environment's setup is represented in the table and diagrams below.&lt;/p&gt;
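For one environment, such a reverse-proxy server block might look roughly like this sketch; the domain name and upstream address are placeholder assumptions, not values from the project:

```nginx
# Illustrative Nginx reverse-proxy config; server_name and the upstream
# address are placeholders, not this project's actual values.
server {
    listen 80;
    server_name todo.dev.example.io;

    location / {
        proxy_pass http://10.0.1.10:80;   # the environment's web server
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

One such block per subdomain lets a single Nginx host front every environment while each backend stays reachable only through the proxy.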

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fioevugulhn2w2qtg0niw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fioevugulhn2w2qtg0niw.png" alt="Environment-setup" width="800" height="214"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  CI-Environment
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe1ik74910vt4qzrxr8c8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe1ik74910vt4qzrxr8c8.png" alt="Project-14-CI-Environment" width="641" height="629"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Other Environments from Lower To Higher&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs8rzt3fvstarpzpcbpwb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs8rzt3fvstarpzpcbpwb.png" alt="Project-14-Pentest-Environment" width="631" height="571"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  DNS requirements
&lt;/h3&gt;

&lt;p&gt;Make DNS entries to create a subdomain for each environment. Assuming your main domain is &lt;code&gt;darey.io&lt;/code&gt;, you should have a subdomain list like this:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
  &lt;tr&gt;
    &lt;th&gt;Server&lt;/th&gt;
    &lt;th&gt;Domain&lt;/th&gt;
  &lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
  &lt;tr&gt;
    &lt;td&gt;Jenkins&lt;/td&gt;
    &lt;td&gt;&lt;a href="https://ci.infradev.darey.io/" rel="noopener noreferrer"&gt;https://ci.infradev.darey.io&lt;/a&gt;&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;Sonarqube&lt;/td&gt;
    &lt;td&gt;&lt;a href="https://sonar.infradev.darey.io/" rel="noopener noreferrer"&gt;https://sonar.infradev.darey.io&lt;/a&gt;&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;Artifactory&lt;/td&gt;
    &lt;td&gt;&lt;a href="https://artifacts.infradev.darey.io/" rel="noopener noreferrer"&gt;https://artifacts.infradev.darey.io&lt;/a&gt;&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;Production Tooling&lt;/td&gt;
    &lt;td&gt;&lt;a href="https://tooling.darey.io/" rel="noopener noreferrer"&gt;https://tooling.darey.io&lt;/a&gt;&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;Pre-Prod Tooling &lt;/td&gt;
    &lt;td&gt;&lt;a href="https://tooling.preprod.darey.io/" rel="noopener noreferrer"&gt;https://tooling.preprod.darey.io&lt;/a&gt;&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;Pentest Tooling&lt;/td&gt;
    &lt;td&gt;&lt;a href="https://tooling.pentest.darey.io/" rel="noopener noreferrer"&gt;https://tooling.pentest.darey.io&lt;/a&gt;&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;UAT Tooling&lt;/td&gt;
    &lt;td&gt;&lt;a href="https://tooling.uat.darey.io/" rel="noopener noreferrer"&gt;https://tooling.uat.darey.io&lt;/a&gt;&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;SIT Tooling&lt;/td&gt;
    &lt;td&gt;&lt;a href="https://tooling.sit.darey.io/" rel="noopener noreferrer"&gt;https://tooling.sit.darey.io&lt;/a&gt;&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;Dev Tooling&lt;/td&gt;
    &lt;td&gt;&lt;a href="https://tooling.dev.darey.io/" rel="noopener noreferrer"&gt;https://tooling.dev.darey.io&lt;/a&gt;&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;Production TODO-WebApp&lt;/td&gt;
    &lt;td&gt;&lt;a href="https://todo.darey.io/" rel="noopener noreferrer"&gt;https://todo.darey.io&lt;/a&gt;&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;&lt;span&gt;Pre-Prod TODO-WebApp&lt;/span&gt;&lt;/td&gt;
    &lt;td&gt;&lt;a href="https://todo.preprod.darey.io/" rel="noopener noreferrer"&gt;https://todo.preprod.darey.io&lt;/a&gt;&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;&lt;span&gt;Pentest TODO-WebApp&lt;/span&gt;&lt;/td&gt;
    &lt;td&gt;&lt;a href="https://todo.pentest.darey.io/" rel="noopener noreferrer"&gt;https://todo.pentest.darey.io&lt;/a&gt;&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;UAT TODO-WebApp&lt;/td&gt;
    &lt;td&gt;&lt;a href="https://todo.uat.darey.io/" rel="noopener noreferrer"&gt;https://todo.uat.darey.io&lt;/a&gt;&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;SIT TODO-WebApp&lt;/td&gt;
    &lt;td&gt;&lt;a href="https://todo.sit.darey.io/" rel="noopener noreferrer"&gt;https://todo.sit.darey.io&lt;/a&gt;&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;Dev TODO-WebApp&lt;/td&gt;
    &lt;td&gt;&lt;a href="https://todo.dev.darey.io/" rel="noopener noreferrer"&gt;https://todo.dev.darey.io&lt;/a&gt;&lt;/td&gt;
  &lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
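&lt;p&gt;If your domain is hosted on AWS Route 53, each subdomain can be created as a record set. Below is a hedged sketch of a change-batch document you could pass to &lt;code&gt;aws route53 change-resource-record-sets&lt;/code&gt;; the record name and target value are placeholders for your own zone.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Comment": "Create subdomain for the dev tooling site (illustrative)",
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "tooling.dev.darey.io",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [
          { "Value": "&amp;lt;Nginx-Public-DNS-Name&amp;gt;" }
        ]
      }
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;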

&lt;h3&gt;
  
  
  Ansible Inventory should look like this
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;├── ci
├── dev
├── pentest
├── pre-prod
├── prod
├── sit
└── uat
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;ci&lt;/code&gt; inventory file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[jenkins]
&amp;lt;Jenkins-Private-IP-Address&amp;gt;

[nginx]
&amp;lt;Nginx-Private-IP-Address&amp;gt;

[sonarqube]
&amp;lt;SonarQube-Private-IP-Address&amp;gt;

[artifact_repository]
&amp;lt;Artifact_repository-Private-IP-Address&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;dev&lt;/code&gt; inventory file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[tooling]
&amp;lt;Tooling-Web-Server-Private-IP-Address&amp;gt;

[todo]
&amp;lt;Todo-Web-Server-Private-IP-Address&amp;gt;

[nginx]
&amp;lt;Nginx-Private-IP-Address&amp;gt;

[db:vars]
ansible_user=ec2-user
ansible_python_interpreter=/usr/bin/python

[db]
&amp;lt;DB-Server-Private-IP-Address&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;pentest&lt;/code&gt; inventory file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[pentest:children]
pentest-todo
pentest-tooling

[pentest-todo]
&amp;lt;Pentest-for-Todo-Private-IP-Address&amp;gt;

[pentest-tooling]
&amp;lt;Pentest-for-Tooling-Private-IP-Address&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Observations:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You will notice that the pentest inventory file introduces a new concept, &lt;code&gt;pentest:children&lt;/code&gt;. This is because we want a group called &lt;code&gt;pentest&lt;/code&gt; that covers Ansible execution against both &lt;code&gt;pentest-todo&lt;/code&gt; and &lt;code&gt;pentest-tooling&lt;/code&gt; simultaneously, while retaining the flexibility to run specific Ansible tasks against an individual group.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;db&lt;/code&gt; group has a slightly different configuration: it uses a RedHat/CentOS Linux distro, while the others are based on Ubuntu (where the user is &lt;code&gt;ubuntu&lt;/code&gt;). Therefore, the user required for connectivity and the path to the Python interpreter are different. If your entire environment is based on Ubuntu, you may not need this kind of setup. It is totally up to you how you do this; whatever works for you is absolutely fine in this scenario.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This leads us to introduce another Ansible concept called &lt;code&gt;group_vars&lt;/code&gt;. With group vars, we can declare and set variables for each group of servers created in the inventory file.&lt;/p&gt;

&lt;p&gt;For example, if there are variables that need to be common to both &lt;code&gt;pentest-todo&lt;/code&gt; and &lt;code&gt;pentest-tooling&lt;/code&gt;, rather than setting them in many places, we can simply use the &lt;code&gt;group_vars&lt;/code&gt; for &lt;code&gt;pentest&lt;/code&gt;. Since the group has been created in the inventory file as &lt;code&gt;pentest:children&lt;/code&gt;, Ansible recognizes this and applies the variable to both children.&lt;/p&gt;
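&lt;p&gt;Concretely, the &lt;code&gt;group_vars&lt;/code&gt; directory can sit next to the inventory files, with one file per group. The layout and the sample variable below are illustrative; use whatever variables your roles actually need.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;├── inventory
│   ├── ci
│   ├── dev
│   └── pentest
└── group_vars
    └── pentest.yml

# group_vars/pentest.yml (sample - applies to both pentest-todo and pentest-tooling)
ansible_user: ubuntu
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;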

&lt;h3&gt;
  
  
  &lt;strong&gt;Ansible Roles for CI Environment&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Add two more roles to Ansible:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;a href="https://www.sonarqube.org" rel="noopener noreferrer"&gt;SonarQube&lt;/a&gt; (Scroll down to the Sonarqube section to see instructions on how to set up and configure SonarQube manually)&lt;/li&gt;
&lt;li&gt;&lt;a href="https://jfrog.com/artifactory/" rel="noopener noreferrer"&gt;Artifactory&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Why do we need SonarQube?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;SonarQube is an open-source platform developed by SonarSource for continuous inspection of code quality. It performs automatic reviews with static analysis of code to detect bugs, &lt;a href="https://en.wikipedia.org/wiki/Code_smell" rel="noopener noreferrer"&gt;code smells&lt;/a&gt;, and security vulnerabilities. &lt;a href="https://youtu.be/vE39Fg8pvZg" rel="noopener noreferrer"&gt;Watch a short description here&lt;/a&gt;. There is a lot more hands-on work ahead with SonarQube and Jenkins, so the purpose of SonarQube will become clearer to you very soon.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Why do we need Artifactory?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Artifactory is a product by &lt;a href="https://jfrog.com" rel="noopener noreferrer"&gt;JFrog&lt;/a&gt; that serves as a binary repository manager. A binary repository is a natural extension of the source code repository, in that it stores the outcome of your build process. It can be used for certain other automation, but we will use it strictly to manage our build artifacts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://youtu.be/upJS4R6SbgM" rel="noopener noreferrer"&gt;Watch a short description here&lt;/a&gt; &lt;em&gt;Focus more on the first 10.08 mins&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Configuring Ansible For Jenkins Deployment&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In previous projects, you have been launching Ansible commands manually from a CLI. Now, with Jenkins, we will start running Ansible from Jenkins UI.&lt;/p&gt;

&lt;p&gt;To do this,&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Navigate to Jenkins URL&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install &amp;amp; Open Blue Ocean Jenkins Plugin&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a new pipeline &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffn20d90iexukv1l4ortw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffn20d90iexukv1l4ortw.png" alt="Jenkins-Create-Pipeline" width="800" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select GitHub&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2es40iz0mrfehi6ptded.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2es40iz0mrfehi6ptded.png" alt="Jenkins-Select-Github" width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Connect Jenkins with GitHub&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/79bzke2hjiehukqjzlqq.png" class="article-body-image-wrapper" rel="noopener noreferrer"&gt;&lt;img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/79bzke2hjiehukqjzlqq.png" alt="Jenkins-Create-Access-Token-To-Github"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Login to GitHub &amp;amp; Generate an Access Token&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fespjdt5lc8zfbnub3gtt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fespjdt5lc8zfbnub3gtt.png" alt="Jenkins-Github-Access-Token" width="800" height="551"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjvb4g9xyk4z5nmmf1qyx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjvb4g9xyk4z5nmmf1qyx.png" alt="Jenkins-Github-Generate-Token" width="800" height="477"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Copy Access Token&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhtqgslwjt640cgzamesw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhtqgslwjt640cgzamesw.png" alt="Jenkins-Copy-Token" width="800" height="295"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Paste the token and connect&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsekaivhfu2cmjdq1wzwr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsekaivhfu2cmjdq1wzwr.png" alt="JEnkins-Paste-Token-And-Connect" width="800" height="415"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a new Pipeline&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8lzf679ua4w89m53gjp1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8lzf679ua4w89m53gjp1.png" alt="Create-Pipeline" width="800" height="377"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At this point you may not have a &lt;a href="https://www.jenkins.io/doc/book/pipeline/jenkinsfile/" rel="noopener noreferrer"&gt;Jenkinsfile&lt;/a&gt; in the Ansible repository, so Blue Ocean will attempt to give you some guidance to create one. But we do not need that. We will rather create one ourselves. So, click on Administration to exit the Blue Ocean console.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fltemgiwbcvekru1ud0jg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fltemgiwbcvekru1ud0jg.png" alt="Jenkins-Exit-Blue-Ocean" width="800" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is our newly created pipeline. It takes the name of your GitHub repository. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx0zswkziwjdcy1k9e9t3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx0zswkziwjdcy1k9e9t3.png" alt="Jenkins-Ansible-Pipeline" width="800" height="327"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Let us create our &lt;code&gt;Jenkinsfile&lt;/code&gt;&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Inside the Ansible project, create a new directory &lt;code&gt;deploy&lt;/code&gt; and start a new file &lt;code&gt;Jenkinsfile&lt;/code&gt; inside the directory.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp3ga649lk8r31zyktvxj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp3ga649lk8r31zyktvxj.png" alt="Ansible-Folder-Structure" width="434" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Add the code snippet below to start building the &lt;code&gt;Jenkinsfile&lt;/code&gt; gradually. This pipeline currently has just one stage, called &lt;code&gt;Build&lt;/code&gt;, and the only thing it does is use the shell step (&lt;code&gt;sh&lt;/code&gt;) to echo &lt;code&gt;Building Stage&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pipeline {
    agent any


  stages {
    stage('Build') {
      steps {
        script {
          sh 'echo "Building Stage"'
        }
      }
    }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now go back into the Ansible pipeline in Jenkins and select "Configure".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fddo8r6abish7s63q6c2x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fddo8r6abish7s63q6c2x.png" alt="Jenkins-Select-Configure" width="800" height="519"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Scroll down to the &lt;code&gt;Build Configuration&lt;/code&gt; section and specify the location of the &lt;strong&gt;Jenkinsfile&lt;/strong&gt; as &lt;code&gt;deploy/Jenkinsfile&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6is1s4mof3ny2t3vkug0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6is1s4mof3ny2t3vkug0.png" alt="Jenkinsfile-Location" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Back to the pipeline again, this time click "Build now"&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fanhz2bsb08hbikia5jpv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fanhz2bsb08hbikia5jpv.png" alt="Jenkins-Build-Now" width="800" height="513"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This will trigger a build and you will be able to see the effect of our basic &lt;code&gt;Jenkinsfile&lt;/code&gt; configuration by going through the console output of the build.&lt;/p&gt;

&lt;p&gt;To really appreciate and feel the difference of the Blue Ocean UI, it is recommended to trigger the build again from the Blue Ocean interface.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click on Blue Ocean&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8wxtvaxxi9b4mhoeh7le.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8wxtvaxxi9b4mhoeh7le.png" alt="Jenkins-Click-Blue-Ocean" width="800" height="334"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Select your project&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on the play button against the branch&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff1wzwy80rioi1cmqkay9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff1wzwy80rioi1cmqkay9.png" alt="Jenkins-Ansible-Blue-Ocean-Start-Pipeline" width="800" height="204"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notice that this pipeline is a multibranch one. This means, if there were more than one branch in GitHub, Jenkins would have scanned the repository to discover them all and we would have been able to trigger a build for each branch. &lt;/p&gt;

&lt;p&gt;Let us see this in action.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a new git branch and name it &lt;code&gt;feature/jenkinspipeline-stages&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Currently we only have the &lt;code&gt;Build&lt;/code&gt; stage. Let us add another stage called &lt;code&gt;Test&lt;/code&gt;. Paste the code snippet below and push the new changes to GitHub.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   pipeline {
    agent any

  stages {
    stage('Build') {
      steps {
        script {
          sh 'echo "Building Stage"'
        }
      }
    }

    stage('Test') {
      steps {
        script {
          sh 'echo "Testing Stage"'
        }
      }
    }
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;To make your new branch show up in Jenkins, we need to tell Jenkins to scan the repository.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click on the "Administration" button&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjwt2o9svws02xmvxiv0d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjwt2o9svws02xmvxiv0d.png" alt="Jenkins-Ansible-Administration-Button" width="800" height="256"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to the Ansible project and click on "Scan repository now"&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/.%2FImages%2FImages%252014%2F" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/.%2FImages%2FImages%252014%2F" alt="Jenkins-Scan-Repository-Now" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Refresh the page and both branches will start building automatically. You can go into Blue Ocean and see both branches there too.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/.%2FImages%2FImages%252014%2F" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/.%2FImages%2FImages%252014%2F" alt="Jenkins-Discover-New-Branch" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In Blue Ocean, you can now see how the &lt;code&gt;Jenkinsfile&lt;/code&gt; has introduced a new stage in the pipeline run for the new branch.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7r9jb0v8rk1nc0fsp7rx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7r9jb0v8rk1nc0fsp7rx.png" alt="Jenkins-Test-Stage-Blue-Ocean" width="800" height="285"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;A QUICK TASK FOR YOU!&lt;/strong&gt;
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1. Create a pull request to merge the latest code into the `main branch`
2. After merging the `PR`, go back into your terminal and switch into the `main` branch.
3. Pull the latest change.
4. Create a new branch, add more stages into the Jenkins file to simulate below phases. (Just add an `echo` command like we have in `build` and `test` stages)
   1. Package 
   2. Deploy 
   3. Clean up
5. Verify in Blue Ocean that all the stages are working, then merge your feature branch to the main branch
6. Eventually, your main branch should have a successful pipeline like this in blue ocean
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxwjaxvc0sir1r9vel55y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxwjaxvc0sir1r9vel55y.png" alt="Jenkins-Complete-Initial-Pipeline" width="800" height="230"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Running Ansible Playbook from Jenkins&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Now that you have a broad overview of a typical Jenkins pipeline, let us get the actual Ansible deployment to work by:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Installing Ansible on Jenkins&lt;/li&gt;
&lt;li&gt;Installing Ansible plugin in Jenkins UI&lt;/li&gt;
&lt;li&gt;Creating &lt;code&gt;Jenkinsfile&lt;/code&gt; from scratch. (Delete all you currently have in there and start all over to get Ansible to run successfully)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can &lt;a href="https://youtu.be/PRpEbFZi7nI" rel="noopener noreferrer"&gt;watch a 10 minutes video here&lt;/a&gt; to guide you through the entire setup&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Ensure that Ansible runs against the Dev environment successfully.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Possible errors to watch out for&lt;/strong&gt;: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Ensure that the git module in the &lt;code&gt;Jenkinsfile&lt;/code&gt; checks out SCM from the &lt;code&gt;main&lt;/code&gt; branch instead of &lt;code&gt;master&lt;/code&gt; (GitHub discontinued the use of &lt;code&gt;master&lt;/code&gt; following the Black Lives Matter movement; you can read more &lt;a href="https://www.cnet.com/news/microsofts-github-is-removing-coding-terms-like-master-and-slave" rel="noopener noreferrer"&gt;here&lt;/a&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Jenkins needs to export the &lt;code&gt;ANSIBLE_CONFIG&lt;/code&gt; environment variable. You can put the &lt;code&gt;ansible.cfg&lt;/code&gt; file alongside the &lt;code&gt;Jenkinsfile&lt;/code&gt; in the &lt;code&gt;deploy&lt;/code&gt; directory. This way, anyone can easily identify that everything in there relates to deployment. Then, using the Pipeline Syntax tool in Jenkins, generate the syntax for setting environment variables.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
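<p>For orientation, a minimal <code>deploy/ansible.cfg</code> could look like the sketch below. The settings shown are illustrative, not a definitive configuration; the <code>roles_path</code> value is a placeholder that the pipeline will need to rewrite to match the active workspace.</p>

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# deploy/ansible.cfg (sketch - values are placeholders)
[defaults]
# Avoid interactive SSH host key prompts during automated runs
host_key_checking = False
# Rewritten dynamically per run so Ansible can locate the roles
roles_path = &amp;lt;workspace-path&amp;gt;/roles
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;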

&lt;p&gt;&lt;a href="https://wiki.jenkins.io/display/JENKINS/Building+a+software+project" rel="noopener noreferrer"&gt;https://wiki.jenkins.io/display/JENKINS/Building+a+software+project&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F83yc4uo1y6zy5380c1h5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F83yc4uo1y6zy5380c1h5.png" alt="Jenkins-Workspace-Env-Var" width="800" height="327"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Possible issues to watch out for when you implement this&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Remember that &lt;code&gt;ansible.cfg&lt;/code&gt; must be exported to environment variable so that Ansible knows where to find &lt;code&gt;Roles&lt;/code&gt;. But because you will possibly run Jenkins from different git branches, the location of Ansible roles will change. Therefore, you must handle this dynamically. You can use Linux &lt;a href="https://www.gnu.org/software/sed/manual/sed.html" rel="noopener noreferrer"&gt;&lt;strong&gt;Stream Editor&lt;/strong&gt; &lt;code&gt;sed&lt;/code&gt;&lt;/a&gt; to update the section &lt;code&gt;roles_path&lt;/code&gt; each time there is an execution. You may not have this issue if you run only from the &lt;strong&gt;main&lt;/strong&gt; branch.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you push new changes to &lt;code&gt;Git&lt;/code&gt; to fix a Jenkins failure, you might observe that your change sometimes has no effect, even though it is the actual fix required. This can happen because Jenkins did not download the latest code from GitHub. Ensure that the &lt;code&gt;Jenkinsfile&lt;/code&gt; starts with a &lt;strong&gt;clean up&lt;/strong&gt; step that deletes the previous workspace before each new run. Sometimes you might need to log in to the Jenkins Linux server and inspect the files in the workspace to confirm that what you are expecting is actually there. Otherwise, you can spend hours trying to figure out why Jenkins is still failing when you have already pushed the fix.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Another possible reason for a Jenkins failure is that the &lt;code&gt;Jenkinsfile&lt;/code&gt; checks out the &lt;code&gt;main&lt;/code&gt; git branch while you are running the pipeline from another branch. So, always verify by logging onto the Jenkins box, checking the workspace, and running the &lt;code&gt;git branch&lt;/code&gt; command to confirm that the branch you are expecting is there.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
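&lt;p&gt;The &lt;code&gt;sed&lt;/code&gt; tip in the first point can be sketched as below. This is a minimal demo against a throwaway &lt;code&gt;ansible.cfg&lt;/code&gt;; the &lt;code&gt;deploy/ansible.cfg&lt;/code&gt; location and the roles directory are assumptions, so substitute your repository's actual layout (in Jenkins, the &lt;code&gt;WORKSPACE&lt;/code&gt; variable is provided for you).&lt;/p&gt;

```shell
# Demo of the sed fix against a throwaway ansible.cfg. Paths are placeholders,
# not the project's real layout; in Jenkins, WORKSPACE is provided for you.
WORKSPACE=$(mktemp -d)
mkdir -p "${WORKSPACE}/deploy"
printf '[defaults]\nroles_path = /home/ubuntu/ansible-config/roles\n' | tee "${WORKSPACE}/deploy/ansible.cfg"

# Point roles_path at the roles directory of the branch just checked out
sed -i "s|^roles_path.*|roles_path = ${WORKSPACE}/roles|" "${WORKSPACE}/deploy/ansible.cfg"

# Tell Ansible which config file to use for this run
export ANSIBLE_CONFIG="${WORKSPACE}/deploy/ansible.cfg"
grep '^roles_path' "${WORKSPACE}/deploy/ansible.cfg"
```

&lt;p&gt;Running the same &lt;code&gt;sed&lt;/code&gt; at the start of the pipeline means every branch's build resolves roles from its own checkout.&lt;/p&gt;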

&lt;p&gt;If everything goes well for you, the &lt;code&gt;Dev&lt;/code&gt; environment now has an up-to-date configuration. But what if we need to deploy to other environments?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Are we going to manually update the &lt;code&gt;Jenkinsfile&lt;/code&gt; to point the inventory to those environments, such as &lt;code&gt;sit&lt;/code&gt;, &lt;code&gt;uat&lt;/code&gt;, &lt;code&gt;pentest&lt;/code&gt;, etc.?&lt;/li&gt;
&lt;li&gt;Or do we need a dedicated git branch for each environment, with the &lt;code&gt;inventory&lt;/code&gt; part hard-coded there?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Think about those for a minute and try to work out which one sounds like the better solution.&lt;/p&gt;

&lt;p&gt;Manually updating the &lt;code&gt;Jenkinsfile&lt;/code&gt; is definitely not an option, and that should be obvious to you at this point, because we try to automate things as much as possible.&lt;/p&gt;

&lt;p&gt;Well, we will not be doing either of the highlighted options. Instead, we will parameterise the deployment so that the appropriate values are applied at the point of execution.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Parameterizing &lt;code&gt;Jenkinsfile&lt;/code&gt; For Ansible Deployment&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;To deploy to other environments, we will need to use parameters.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Update &lt;code&gt;sit&lt;/code&gt; inventory with new servers
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[tooling]
&amp;lt;SIT-Tooling-Web-Server-Private-IP-Address&amp;gt;

[todo]
&amp;lt;SIT-Todo-Web-Server-Private-IP-Address&amp;gt;

[nginx]
&amp;lt;SIT-Nginx-Private-IP-Address&amp;gt;

[db:vars]
ansible_user=ec2-user
ansible_python_interpreter=/usr/bin/python

[db]
&amp;lt;SIT-DB-Server-Private-IP-Address&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;Update &lt;code&gt;Jenkinsfile&lt;/code&gt; to introduce parameterization. Below is just one parameter. It has a default value in case no value is specified at execution, and a description so that everyone is aware of its purpose.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pipeline {
    agent any

    parameters {
      string(name: 'inventory', defaultValue: 'dev',  description: 'This is the inventory file for the environment to deploy configuration')
    }
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;In the Ansible execution section, remove the hardcoded &lt;code&gt;inventory/dev&lt;/code&gt; and replace it with &lt;code&gt;${inventory}&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;
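&lt;p&gt;Under the hood, the parameterised stage boils down to substituting the parameter into the inventory path. Below is a hedged sketch of the resulting command; the playbook path &lt;code&gt;playbooks/site.yml&lt;/code&gt; is an assumption, so substitute your repository's entry playbook.&lt;/p&gt;

```shell
# The Jenkins 'inventory' parameter is exposed to sh steps as an environment
# variable; default to 'dev' here to mirror defaultValue in the Jenkinsfile.
inventory="${inventory:-dev}"

# Hypothetical playbook path -- substitute your repository's entry playbook.
cmd="ansible-playbook -i inventory/${inventory} playbooks/site.yml"
echo "${cmd}"
```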

&lt;p&gt;From now on, each time you hit Execute, it will expect an input.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fts6h0r6ukac9e6pogv64.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fts6h0r6ukac9e6pogv64.png" alt="Jenkins-Parameter" width="800" height="372"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notice that the default value loads up, but we can now specify which environment we want to deploy the configuration to. Simply type &lt;code&gt;sit&lt;/code&gt; and hit &lt;strong&gt;Run&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvmkkovswr2x6nso67ok7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvmkkovswr2x6nso67ok7.png" alt="Jenkins-Parameter-Sit" width="800" height="393"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;Add another parameter. This time, introduce &lt;code&gt;tagging&lt;/code&gt; in Ansible. Tags let you limit the Ansible execution to a specific role or playbook. Therefore, add an Ansible tag to run against &lt;code&gt;webserver&lt;/code&gt; only. Test this locally first to get the experience; once you understand it, update the &lt;code&gt;Jenkinsfile&lt;/code&gt; and run it from Jenkins.&lt;/li&gt;
&lt;/ol&gt;
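&lt;p&gt;For the local test, tags are attached to roles or tasks in YAML and then selected at run time with &lt;code&gt;--tags&lt;/code&gt;. A sketch (the playbook path and tag name are assumptions for illustration):&lt;/p&gt;

```shell
# A role (or task) would carry a tag in the playbook, e.g.:
#   roles:
#     - { role: webserver, tags: [ webserver ] }
# Passing --tags then restricts the run to the tagged tasks only.
tag_cmd="ansible-playbook -i inventory/dev playbooks/site.yml --tags webserver"
echo "${tag_cmd}"
```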

&lt;h3&gt;
  
  
  &lt;strong&gt;CI/CD Pipeline for TODO application&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;We already have the &lt;code&gt;tooling&lt;/code&gt; website as part of the deployment through Ansible. Here we will introduce another PHP application to add to the list of software products we are managing in our infrastructure. The good thing about this particular application is that it has unit tests, making it an ideal application for demonstrating an end-to-end CI/CD pipeline.&lt;/p&gt;

&lt;p&gt;Our goal here is to deploy the application onto servers directly from &lt;code&gt;Artifactory&lt;/code&gt; rather than from &lt;code&gt;git&lt;/code&gt;. If you have not updated Ansible with an Artifactory role, simply use this guide to create an Ansible role for Artifactory (ignore the Nginx part). &lt;a href="https://www.howtoforge.com/tutorial/ubuntu-jfrog/" rel="noopener noreferrer"&gt;Configure Artifactory on Ubuntu 20.04&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Phase 1 - Prepare Jenkins&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Fork the repository below into your GitHub account&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://github.com/darey-devops/php-todo.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol start="2"&gt;
&lt;li&gt;On your Jenkins server, install PHP, its dependencies and the &lt;a href="https://getcomposer.org" rel="noopener noreferrer"&gt;Composer tool&lt;/a&gt; (feel free to do this manually at first, then update your Ansible accordingly later)&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install -y zip libapache2-mod-php phploc php-{xml,bcmath,bz2,intl,gd,mbstring,mysql,zip}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Install Jenkins plugins

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://plugins.jenkins.io/plot/" rel="noopener noreferrer"&gt;Plot plugin&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.jfrog.com/confluence/display/JFROG/Jenkins+Artifactory+Plug-in" rel="noopener noreferrer"&gt;Artifactory plugin&lt;/a&gt; &lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;We will use the &lt;code&gt;plot&lt;/code&gt; plugin to display test reports and code coverage information.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;Artifactory&lt;/code&gt; plugin will be used to easily upload code artifacts into an Artifactory server.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="4"&gt;
&lt;li&gt;In the Jenkins UI, configure Artifactory&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn71jpri150o4m3y4cgu1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn71jpri150o4m3y4cgu1.png" alt="Jenkins-Configure-System1" width="800" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Configure the server ID, URL and credentials, then run Test Connection.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft51jb5avxeamaevtkkm2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft51jb5avxeamaevtkkm2.png" alt="Jenkins-Configure-System2" width="800" height="345"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Phase 2 - Integrate Artifactory repository with Jenkins&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Create a dummy &lt;code&gt;Jenkinsfile&lt;/code&gt; in the repository&lt;/li&gt;
&lt;li&gt;Using Blue Ocean, create a multibranch Jenkins pipeline&lt;/li&gt;
&lt;li&gt;On the database server, create database and user&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE DATABASE homestead;
CREATE USER 'homestead'@'%' IDENTIFIED BY 'sePret^i';
GRANT ALL PRIVILEGES ON * . * TO 'homestead'@'%';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol start="4"&gt;
&lt;li&gt;Update the database connectivity requirements in the file &lt;code&gt;.env.sample&lt;/code&gt; &lt;/li&gt;
&lt;li&gt;Update &lt;code&gt;Jenkinsfile&lt;/code&gt; with proper pipeline configuration&lt;/li&gt;
&lt;/ol&gt;
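&lt;p&gt;For illustration, the database entries in &lt;code&gt;.env.sample&lt;/code&gt; would end up looking something like the fragment below. These are standard Laravel-style keys; the host IP is a placeholder for your database server's private IP, and the credentials match the ones created above.&lt;/p&gt;

```ini
DB_HOST=172.31.0.10
DB_PORT=3306
DB_DATABASE=homestead
DB_USERNAME=homestead
DB_PASSWORD=sePret^i
DB_CONNECTION=mysql
```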

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pipeline {
    agent any

    stages {
        stage("Initial cleanup") {
            steps {
                dir("${WORKSPACE}") {
                    deleteDir()
                }
            }
        }

        stage('Checkout SCM') {
            steps {
                git branch: 'main', url: 'https://github.com/darey-devops/php-todo.git'
            }
        }

        stage('Prepare Dependencies') {
            steps {
                sh 'mv .env.sample .env'
                sh 'composer install'
                sh 'php artisan migrate'
                sh 'php artisan db:seed'
                sh 'php artisan key:generate'
            }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Notice the &lt;strong&gt;Prepare Dependencies&lt;/strong&gt; section.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;PHP requires a &lt;code&gt;.env&lt;/code&gt; file, so we rename &lt;code&gt;.env.sample&lt;/code&gt; to &lt;code&gt;.env&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Composer is used by PHP to install all the dependent libraries used by the application&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;php artisan&lt;/code&gt; uses the &lt;code&gt;.env&lt;/code&gt; file to set up the required database objects - (after this step runs successfully, log in to the database, run &lt;code&gt;show tables&lt;/code&gt; and you will see the tables created for you)&lt;/li&gt;
&lt;/ul&gt;
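&lt;p&gt;A quick way to double-check that the migrate and seed steps did their job is a one-liner against the database. The command below is only printed, not executed, since it needs a live database; the host IP is a placeholder and the credentials are the hypothetical ones created earlier.&lt;/p&gt;

```shell
# Hypothetical check -- printed rather than executed, since it needs a live DB.
DB_HOST="172.31.0.10"   # placeholder for the database server's private IP
check="mysql -h ${DB_HOST} -u homestead -p -e 'SHOW TABLES FROM homestead;'"
echo "${check}"
```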

&lt;ol start="6"&gt;
&lt;li&gt;Update the &lt;code&gt;Jenkinsfile&lt;/code&gt; to include Unit tests step&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;stage('Execute Unit Tests') {
    steps {
        sh './vendor/bin/phpunit'
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Phase 3 - Code Quality Analysis&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This is one of the areas where developers, architects and many other stakeholders are most interested as far as product development is concerned. As a DevOps engineer, you also have a role to play, especially when it comes to setting up the tools.&lt;/p&gt;

&lt;p&gt;For &lt;strong&gt;PHP&lt;/strong&gt;, the most commonly used tool for code quality analysis is &lt;a href="https://phpqa.io/projects/phploc.html" rel="noopener noreferrer"&gt;&lt;strong&gt;phploc&lt;/strong&gt;&lt;/a&gt;. &lt;a href="https://matthiasnoback.nl/2019/09/using-phploc-for-quick-code-quality-estimation-part-1/" rel="noopener noreferrer"&gt;Read this article for more&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The data produced by &lt;strong&gt;phploc&lt;/strong&gt; can be plotted onto graphs in Jenkins.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Add the code analysis step in &lt;code&gt;Jenkinsfile&lt;/code&gt;. The output of the data will be saved in the &lt;code&gt;build/logs/phploc.csv&lt;/code&gt; file.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;stage('Code Analysis') {
    steps {
        sh 'phploc app/ --log-csv build/logs/phploc.csv'
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Plot the data using &lt;code&gt;plot&lt;/code&gt; Jenkins plugin.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This plugin provides generic plotting (or graphing) capabilities in Jenkins. It plots the variation of one or more single values across builds in one or more charts. Plots for a particular job (or project) are configured on the job configuration screen, where each field has additional help information. Each plot can have one or more lines (called data series). After each build completes, the latest values of each data series are pulled from the CSV file generated by &lt;code&gt;phploc&lt;/code&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;stage('Plot Code Coverage Report') {
  steps {
        plot csvFileName: 'plot-396c4a6b-b573-41e5-85d8-73613b2ffffb.csv', csvSeries: [[displayTableFlag: false, exclusionValues: 'Lines of Code (LOC),Comment Lines of Code (CLOC),Non-Comment Lines of Code (NCLOC),Logical Lines of Code (LLOC)                          ', file: 'build/logs/phploc.csv', inclusionFlag: 'INCLUDE_BY_STRING', url: '']], group: 'phploc', numBuilds: '100', style: 'line', title: 'A - Lines of code', yaxis: 'Lines of Code'
        plot csvFileName: 'plot-396c4a6b-b573-41e5-85d8-73613b2ffffb.csv', csvSeries: [[displayTableFlag: false, exclusionValues: 'Directories,Files,Namespaces', file: 'build/logs/phploc.csv', inclusionFlag: 'INCLUDE_BY_STRING', url: '']], group: 'phploc', numBuilds: '100', style: 'line', title: 'B - Structures Containers', yaxis: 'Count'
        plot csvFileName: 'plot-396c4a6b-b573-41e5-85d8-73613b2ffffb.csv', csvSeries: [[displayTableFlag: false, exclusionValues: 'Average Class Length (LLOC),Average Method Length (LLOC),Average Function Length (LLOC)', file: 'build/logs/phploc.csv', inclusionFlag: 'INCLUDE_BY_STRING', url: '']], group: 'phploc', numBuilds: '100', style: 'line', title: 'C - Average Length', yaxis: 'Average Lines of Code'
        plot csvFileName: 'plot-396c4a6b-b573-41e5-85d8-73613b2ffffb.csv', csvSeries: [[displayTableFlag: false, exclusionValues: 'Cyclomatic Complexity / Lines of Code,Cyclomatic Complexity / Number of Methods ', file: 'build/logs/phploc.csv', inclusionFlag: 'INCLUDE_BY_STRING', url: '']], group: 'phploc', numBuilds: '100', style: 'line', title: 'D - Relative Cyclomatic Complexity', yaxis: 'Cyclomatic Complexity by Structure'      
        plot csvFileName: 'plot-396c4a6b-b573-41e5-85d8-73613b2ffffb.csv', csvSeries: [[displayTableFlag: false, exclusionValues: 'Classes,Abstract Classes,Concrete Classes', file: 'build/logs/phploc.csv', inclusionFlag: 'INCLUDE_BY_STRING', url: '']], group: 'phploc', numBuilds: '100', style: 'line', title: 'E - Types of Classes', yaxis: 'Count'
        plot csvFileName: 'plot-396c4a6b-b573-41e5-85d8-73613b2ffffb.csv', csvSeries: [[displayTableFlag: false, exclusionValues: 'Methods,Non-Static Methods,Static Methods,Public Methods,Non-Public Methods', file: 'build/logs/phploc.csv', inclusionFlag: 'INCLUDE_BY_STRING', url: '']], group: 'phploc', numBuilds: '100', style: 'line', title: 'F - Types of Methods', yaxis: 'Count'
        plot csvFileName: 'plot-396c4a6b-b573-41e5-85d8-73613b2ffffb.csv', csvSeries: [[displayTableFlag: false, exclusionValues: 'Constants,Global Constants,Class Constants', file: 'build/logs/phploc.csv', inclusionFlag: 'INCLUDE_BY_STRING', url: '']], group: 'phploc', numBuilds: '100', style: 'line', title: 'G - Types of Constants', yaxis: 'Count'
        plot csvFileName: 'plot-396c4a6b-b573-41e5-85d8-73613b2ffffb.csv', csvSeries: [[displayTableFlag: false, exclusionValues: 'Test Classes,Test Methods', file: 'build/logs/phploc.csv', inclusionFlag: 'INCLUDE_BY_STRING', url: '']], group: 'phploc', numBuilds: '100', style: 'line', title: 'I - Testing', yaxis: 'Count'
        plot csvFileName: 'plot-396c4a6b-b573-41e5-85d8-73613b2ffffb.csv', csvSeries: [[displayTableFlag: false, exclusionValues: 'Logical Lines of Code (LLOC),Classes Length (LLOC),Functions Length (LLOC),LLOC outside functions or classes ', file: 'build/logs/phploc.csv', inclusionFlag: 'INCLUDE_BY_STRING', url: '']], group: 'phploc', numBuilds: '100', style: 'line', title: 'AB - Code Structure by Logical Lines of Code', yaxis: 'Logical Lines of Code'
        plot csvFileName: 'plot-396c4a6b-b573-41e5-85d8-73613b2ffffb.csv', csvSeries: [[displayTableFlag: false, exclusionValues: 'Functions,Named Functions,Anonymous Functions', file: 'build/logs/phploc.csv', inclusionFlag: 'INCLUDE_BY_STRING', url: '']], group: 'phploc', numBuilds: '100', style: 'line', title: 'H - Types of Functions', yaxis: 'Count'
        plot csvFileName: 'plot-396c4a6b-b573-41e5-85d8-73613b2ffffb.csv', csvSeries: [[displayTableFlag: false, exclusionValues: 'Interfaces,Traits,Classes,Methods,Functions,Constants', file: 'build/logs/phploc.csv', inclusionFlag: 'INCLUDE_BY_STRING', url: '']], group: 'phploc', numBuilds: '100', style: 'line', title: 'BB - Structure Objects', yaxis: 'Count'


  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You should now see a &lt;code&gt;Plot&lt;/code&gt; menu item on the left menu. Click on it to see the charts. (The analytics may not mean much to you as it is meant to be read by developers. So, you need not worry much about it - this is just to give you an idea of the real-world implementation).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m3x2oc1ndotz0xhjzglw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m3x2oc1ndotz0xhjzglw.png" alt="Jenkins-PHPloc-Plot"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Bundle the application code into an artifact (archived package) and upload it to Artifactory&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;stage ('Package Artifact') {
    steps {
        sh 'zip -qr php-todo.zip ${WORKSPACE}/*'
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol start="4"&gt;
&lt;li&gt;Publish the resulting artifact to Artifactory&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;stage ('Upload Artifact to Artifactory') {
          steps {
            script {
                 def server = Artifactory.server 'artifactory-server'

                 def uploadSpec = """{
                    "files": [
                      {
                       "pattern": "php-todo.zip",
                       "target": "/php-todo",
                       "props": "type=zip;status=ready"
                   }
                ]
             }""" 

             server.upload spec: uploadSpec
           }
        }

    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol start="5"&gt;
&lt;li&gt;Deploy the application to the &lt;code&gt;dev&lt;/code&gt; environment by launching the Ansible pipeline&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;stage ('Deploy to Dev Environment') {
    steps {
        build job: 'ansible-project/main', parameters: [[$class: 'StringParameterValue', name: 'env', value: 'dev']], propagate: false, wait: true
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;build job&lt;/code&gt; used in this step tells Jenkins to start another job. In this case it is the &lt;code&gt;ansible-project&lt;/code&gt; job, and we are targeting the &lt;code&gt;main&lt;/code&gt; branch. Hence, we have &lt;code&gt;ansible-project/main&lt;/code&gt;. Since the Ansible project requires parameters to be passed in, we have included this by specifying the &lt;code&gt;parameters&lt;/code&gt; section. The name of the parameter is &lt;code&gt;env&lt;/code&gt; and its value is &lt;code&gt;dev&lt;/code&gt;. Meaning, deploy to the Development environment.&lt;/p&gt;

&lt;p&gt;But how are we certain that the code being deployed has the quality that meets corporate and customer requirements? Even though we have implemented &lt;strong&gt;Unit Tests&lt;/strong&gt; and &lt;strong&gt;Code Coverage&lt;/strong&gt; Analysis with &lt;code&gt;phpunit&lt;/code&gt; and &lt;code&gt;phploc&lt;/code&gt;, we still need to implement &lt;strong&gt;Quality Gate&lt;/strong&gt; to ensure that ONLY code with the required code coverage, and other quality standards make it through to the environments.&lt;/p&gt;

&lt;p&gt;To achieve this, we need to configure &lt;strong&gt;SonarQube&lt;/strong&gt; - An open-source platform developed by &lt;strong&gt;SonarSource&lt;/strong&gt; for continuous inspection of code quality to perform automatic reviews with static analysis of code to detect bugs, code smells, and security vulnerabilities.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;SonarQube Installation&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Before we start getting hands on with &lt;strong&gt;SonarQube&lt;/strong&gt; configuration, it is incredibly important to understand a few concepts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://en.wikipedia.org/wiki/Software_quality" rel="noopener noreferrer"&gt;Software Quality&lt;/a&gt; - The degree to which a software component, system or process meets specified requirements based on user needs and expectations.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.sonarqube.org/latest/user-guide/quality-gates/" rel="noopener noreferrer"&gt;Software Quality Gates&lt;/a&gt; - Quality gates are basically acceptance criteria which are usually presented as a set of predefined quality criteria that a software development project must meet in order to proceed from one stage of its lifecycle to the next one.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;SonarQube is a tool that can be used to create quality gates for software projects, and the ultimate goal is to be able to ship only quality software code. &lt;/p&gt;

&lt;p&gt;While a DevOps CI/CD pipeline helps with fast software delivery, it is equally important to ensure the quality of that delivery. Hence, we need SonarQube to set up quality gates. In this project we will use the predefined Quality Gates (also known as &lt;a href="https://docs.sonarqube.org/latest/instance-administration/quality-profiles/" rel="noopener noreferrer"&gt;The Sonar Way&lt;/a&gt;). Software testers and developers would normally work with project leads and architects to create custom quality gates.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Install SonarQube on Ubuntu 20.04 With PostgreSQL as Backend Database&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Here is a manual approach to the installation. Ensure that you can automate the same with Ansible.&lt;/p&gt;

&lt;p&gt;Below is a step-by-step guide on how to install &lt;strong&gt;SonarQube 7.9.3&lt;/strong&gt;. Java must be installed beforehand, since the tool is Java-based. MySQL support for SonarQube is deprecated, therefore we will be using PostgreSQL.&lt;/p&gt;

&lt;p&gt;We will make some Linux kernel configuration changes to ensure optimal performance of the tool - we will increase &lt;code&gt;vm.max_map_count&lt;/code&gt;, the file descriptor limit and &lt;code&gt;ulimit&lt;/code&gt; values.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Tune Linux Kernel&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This can be achieved by making session changes, which do not persist beyond the current terminal session.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo sysctl -w vm.max_map_count=262144
sudo sysctl -w fs.file-max=65536
ulimit -n 65536
ulimit -u 4096
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To make the changes permanent, edit the file &lt;code&gt;/etc/security/limits.conf&lt;/code&gt; and append the lines below&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sonarqube   -   nofile   65536
sonarqube   -   nproc    4096
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
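&lt;p&gt;The &lt;code&gt;sysctl -w&lt;/code&gt; calls above are also session-only. To persist those kernel settings across reboots, you can drop the same keys into a &lt;code&gt;sysctl.d&lt;/code&gt; file. The demo below writes to a temp file; on the server the target would be something like &lt;code&gt;/etc/sysctl.d/99-sonarqube.conf&lt;/code&gt; (the file name is an assumption), written as root and applied with &lt;code&gt;sudo sysctl --system&lt;/code&gt;.&lt;/p&gt;

```shell
# Persist the kernel settings: same keys as the sysctl -w commands above.
# Demo writes to a temp file; on the real host the target would be
# /etc/sysctl.d/99-sonarqube.conf (as root), applied with 'sudo sysctl --system'.
conf="$(mktemp)"
printf 'vm.max_map_count=262144\nfs.file-max=65536\n' | tee "${conf}"
```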

&lt;p&gt;Before installing, let us update and upgrade system packages:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get update
sudo apt-get upgrade
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Install &lt;a href="https://www.gnu.org/software/wget/" rel="noopener noreferrer"&gt;wget&lt;/a&gt; and &lt;a href="https://linux.die.net/man/1/unzip" rel="noopener noreferrer"&gt;unzip&lt;/a&gt; packages&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get install wget unzip -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Install &lt;a href="https://openjdk.java.net" rel="noopener noreferrer"&gt;OpenJDK&lt;/a&gt; and &lt;a href="https://docs.oracle.com/goldengate/1212/gg-winux/GDRAD/java.htm" rel="noopener noreferrer"&gt;Java Runtime Environment (JRE) 11&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get install openjdk-11-jdk -y
sudo apt-get install openjdk-11-jre -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To set the default JDK, or to switch between installed Java versions, enter the command below:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo update-alternatives --config java
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If you have multiple versions of Java installed, you should see a list like below:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  Selection    Path                                            Priority   Status
  0            /usr/lib/jvm/java-11-openjdk-amd64/bin/java      1111      auto mode
  1            /usr/lib/jvm/java-11-openjdk-amd64/bin/java      1111      manual mode
  2            /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java   1081      manual mode
* 3            /usr/lib/jvm/java-8-oracle/jre/bin/java          1081      manual mode
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Type "1" to switch &lt;strong&gt;OpenJDK 11&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Verify the Java version in use:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;java -version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Output:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openjdk version "11.0.7" 2020-04-14
OpenJDK Runtime Environment (build 11.0.7+10-post-Ubuntu-3ubuntu1)
OpenJDK 64-Bit Server VM (build 11.0.7+10-post-Ubuntu-3ubuntu1, mixed mode, sharing)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Install and Setup PostgreSQL 10 Database for SonarQube&lt;/strong&gt;
&lt;/h3&gt;



&lt;p&gt;The command below will add PostgreSQL repo to the repo list:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt/ `lsb_release -cs`-pgdg main" &amp;gt;&amp;gt; /etc/apt/sources.list.d/pgdg.list'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Download and add the PostgreSQL repository signing key&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget -q https://www.postgresql.org/media/keys/ACCC4CF8.asc -O - | sudo apt-key add -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Install PostgreSQL Database Server&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get -y install postgresql postgresql-contrib
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Start PostgreSQL Database Server&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl start postgresql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Enable it to start automatically at boot time&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl enable postgresql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Change the password for default &lt;code&gt;postgres&lt;/code&gt; user (Pass in the password you intend to use, and remember to save it somewhere)&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo passwd postgres
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Switch to the &lt;code&gt;postgres&lt;/code&gt; user&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;su - postgres
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Create a new user by typing&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;createuser sonar
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Switch to the PostgreSQL shell&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;psql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Set a password for the newly created user for SonarQube database&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ALTER USER sonar WITH ENCRYPTED password 'sonar';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Create a new database for SonarQube by running:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE DATABASE sonarqube OWNER sonar;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Grant all privileges to the &lt;code&gt;sonar&lt;/code&gt; user on the &lt;code&gt;sonarqube&lt;/code&gt; database.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;grant all privileges on DATABASE sonarqube to sonar;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Exit from the psql shell:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;\q
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Switch back to the &lt;code&gt;sudo&lt;/code&gt; user by running the exit command.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;exit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Install SonarQube on Ubuntu 20.04 LTS&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Navigate to the &lt;code&gt;/tmp&lt;/code&gt; directory to temporarily download the installation files&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd /tmp &amp;amp;&amp;amp; sudo wget https://binaries.sonarsource.com/Distribution/sonarqube/sonarqube-7.9.3.zip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Unzip the archive to the &lt;code&gt;/opt&lt;/code&gt; directory&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo unzip sonarqube-7.9.3.zip -d /opt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Rename the extracted directory to &lt;code&gt;/opt/sonarqube&lt;/code&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mv /opt/sonarqube-7.9.3 /opt/sonarqube
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Configure SonarQube&lt;/strong&gt;
&lt;/h3&gt;



&lt;p&gt;We cannot run SonarQube as the root user; if you start it as root, it will stop automatically. The ideal approach is to create a separate group and user to run SonarQube.&lt;/p&gt;

&lt;p&gt;Create a group &lt;code&gt;sonar&lt;/code&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo groupadd sonar
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now add a user with control over the &lt;code&gt;/opt/sonarqube&lt;/code&gt; directory&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo useradd -c "user to run SonarQube" -d /opt/sonarqube -g sonar sonar
sudo chown sonar:sonar /opt/sonarqube -R
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Open SonarQube configuration file using your favourite text editor (e.g., nano or vim)&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo vim /opt/sonarqube/conf/sonar.properties
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Find the following lines:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#sonar.jdbc.username=
#sonar.jdbc.password=
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Uncomment them and provide the values of PostgreSQL Database username and password:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#--------------------------------------------------------------------------------------------------
# DATABASE
#
# IMPORTANT:
# - The embedded H2 database is used by default. It is recommended for tests but not for
#   production use. Supported databases are Oracle, PostgreSQL and Microsoft SQLServer.
# - Changes to database connection URL (sonar.jdbc.url) can affect SonarSource licensed products.

# User credentials.
# Permissions to create tables, indices and triggers must be granted to JDBC user.
# The schema must be created first.

sonar.jdbc.username=sonar
sonar.jdbc.password=sonar
sonar.jdbc.url=jdbc:postgresql://localhost:5432/sonarqube
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
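&lt;p&gt;If you are scripting the setup, the same edit can be made non-interactively. The sketch below runs against a throwaway copy of the two commented-out keys (so it is safe to try anywhere); on the server you would point &lt;code&gt;sed&lt;/code&gt; at &lt;code&gt;/opt/sonarqube/conf/sonar.properties&lt;/code&gt; with &lt;code&gt;sudo&lt;/code&gt;.&lt;/p&gt;

```shell
# Uncomment and set the JDBC credentials without opening an editor.
# Demonstrated on a temp file; substitute /opt/sonarqube/conf/sonar.properties on the server.
conf=$(mktemp)
printf '%s\n' '#sonar.jdbc.username=' '#sonar.jdbc.password=' > "$conf"
sed -i \
  -e 's|^#sonar.jdbc.username=.*|sonar.jdbc.username=sonar|' \
  -e 's|^#sonar.jdbc.password=.*|sonar.jdbc.password=sonar|' \
  "$conf"
cat "$conf"
```

Remember that `sonar.jdbc.url` still needs to be set as shown above.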

&lt;p&gt;Edit the sonar script file and set &lt;code&gt;RUN_AS_USER&lt;/code&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /opt/sonarqube/bin/linux-x86-64/sonar.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# If specified, the Wrapper will be run as the specified user.
# IMPORTANT - Make sure that the user has the required privileges to write
#  the PID file and wrapper.log files.  Failure to be able to write the log
#  file will cause the Wrapper to exit without any way to write out an error
#  message.
# NOTE - This will set the user which is used to run the Wrapper as well as
#  the JVM and is not useful in situations where a privileged resource or
#  port needs to be allocated prior to the user being changed.
RUN_AS_USER=sonar
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, to start SonarQube, do the following: &lt;/p&gt;

&lt;p&gt;Switch to &lt;code&gt;sonar&lt;/code&gt; user&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo su sonar
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Move to the script directory&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd /opt/sonarqube/bin/linux-x86-64/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Run the script to start SonarQube&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./sonar.sh start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Expected output:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Starting SonarQube...

Started SonarQube
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Check SonarQube running status:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./sonar.sh status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Sample output:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./sonar.sh status

SonarQube is running (176483).
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To check SonarQube logs, inspect the &lt;code&gt;/opt/sonarqube/logs/sonar.log&lt;/code&gt; file.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tail /opt/sonarqube/logs/sonar.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Output&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;INFO  app[][o.s.a.ProcessLauncherImpl] Launch process[[key='ce', ipcIndex=3, logFilenamePrefix=ce]] from [/opt/sonarqube]: /usr/lib/jvm/java-11-openjdk-amd64/bin/java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djava.io.tmpdir=/opt/sonarqube/temp --add-opens=java.base/java.util=ALL-UNNAMED -Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -Dhttp.nonProxyHosts=localhost|127.*|[::1] -cp ./lib/common/*:/opt/sonarqube/lib/jdbc/h2/h2-1.3.176.jar org.sonar.ce.app.CeServer /opt/sonarqube/temp/sq-process15059956114837198848properties

INFO  app[][o.s.a.SchedulerImpl] Process[ce] is up

INFO  app[][o.s.a.SchedulerImpl] SonarQube is up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
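&lt;p&gt;The last line above is the marker worth watching for. Here is a small helper (a sketch, shown against a sample log written to a temp file; on the server you would point it at &lt;code&gt;/opt/sonarqube/logs/sonar.log&lt;/code&gt;):&lt;/p&gt;

```shell
# Scan the startup log for the success marker emitted by the scheduler.
# A temp file with sample lines stands in for the real sonar.log here.
log=$(mktemp)
cat > "$log" <<'EOF'
INFO  app[][o.s.a.SchedulerImpl] Process[ce] is up
INFO  app[][o.s.a.SchedulerImpl] SonarQube is up
EOF
if grep -q 'SonarQube is up' "$log"; then
  echo "startup confirmed"
fi
```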

&lt;p&gt;You can see that SonarQube is up and running.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Configure SonarQube to run as a &lt;code&gt;systemd&lt;/code&gt; service&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Stop the currently running SonarQube service&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd /opt/sonarqube/bin/linux-x86-64/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Run the script to stop SonarQube&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./sonar.sh stop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Create a &lt;code&gt;systemd&lt;/code&gt; service file so that SonarQube runs at system startup.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo nano /etc/systemd/system/sonar.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Add the configuration below for &lt;code&gt;systemd&lt;/code&gt; to determine how to start, stop, check status, or restart the SonarQube service.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Unit]
Description=SonarQube service
After=syslog.target network.target

[Service]
Type=forking

ExecStart=/opt/sonarqube/bin/linux-x86-64/sonar.sh start
ExecStop=/opt/sonarqube/bin/linux-x86-64/sonar.sh stop

User=sonar
Group=sonar
Restart=always

LimitNOFILE=65536
LimitNPROC=4096

[Install]
WantedBy=multi-user.target
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Save the file and control the service with &lt;code&gt;systemctl&lt;/code&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl start sonar
sudo systemctl enable sonar
sudo systemctl status sonar
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
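&lt;p&gt;Beyond &lt;code&gt;systemctl status&lt;/code&gt;, SonarQube also reports readiness over HTTP at &lt;code&gt;/api/system/status&lt;/code&gt;; the response JSON carries &lt;code&gt;"status":"UP"&lt;/code&gt; once startup completes. Below is a sketch of that check, run here against a sample response string (on the server, replace the sample with the output of &lt;code&gt;curl -s http://localhost:9000/api/system/status&lt;/code&gt;):&lt;/p&gt;

```shell
# Check the system status JSON for the UP marker.
# The sample string stands in for the live curl output.
sample='{"id":"ABC123","version":"7.9.3","status":"UP"}'
if printf '%s' "$sample" | grep -q '"status":"UP"'; then
  echo "SonarQube is UP"
fi
```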
&lt;h3&gt;
  
  
  &lt;strong&gt;Access SonarQube&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;To access SonarQube in a browser, enter the server's IP address followed by port &lt;code&gt;9000&lt;/code&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://server_IP:9000 OR http://localhost:9000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Log in to SonarQube with the default administrator username and password - &lt;code&gt;admin&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ie6pcfkvtzi92m3lew9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ie6pcfkvtzi92m3lew9.png" alt="sonarqube-web-interface" width="800" height="260"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that SonarQube is up and running, it is time to set up our quality gate in Jenkins.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Configure SonarQube and Jenkins For Quality Gate&lt;/strong&gt;
&lt;/h3&gt;



&lt;ul&gt;
&lt;li&gt;In Jenkins, install &lt;a href="https://docs.sonarqube.org/latest/analysis/scan/sonarscanner-for-jenkins/" rel="noopener noreferrer"&gt;SonarScanner plugin&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;Navigate to configure system in Jenkins and add the SonarQube server as shown below:
&lt;code&gt;Manage Jenkins &amp;gt; Configure System&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdnzla5zjbiz3a7geyeba.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdnzla5zjbiz3a7geyeba.png" alt="Jenkins-Sonar-Server" width="800" height="255"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Generate an authentication token in SonarQube:
&lt;code&gt;User &amp;gt; My Account &amp;gt; Security &amp;gt; Generate Tokens&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ciurecw60siy52ktvks.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ciurecw60siy52ktvks.png" alt="Sonarqube-Token" width="800" height="294"&gt;&lt;/a&gt;&lt;/p&gt;
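&lt;p&gt;A generated token can be sanity-checked against SonarQube's authentication API: &lt;code&gt;GET /api/authentication/validate&lt;/code&gt;, called with the token as the HTTP basic-auth username, returns &lt;code&gt;{"valid":true}&lt;/code&gt; for a working token. The sketch below works on a sample response; on the server you would obtain the real one with &lt;code&gt;curl -s -u YOUR_TOKEN: http://localhost:9000/api/authentication/validate&lt;/code&gt;.&lt;/p&gt;

```shell
# Decide whether a token was accepted based on the validate endpoint's JSON.
# The sample string stands in for the live curl output.
response='{"valid":true}'
case "$response" in
  *'"valid":true'*) echo "token accepted" ;;
  *)                echo "token rejected" ;;
esac
```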

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Configure the Quality Gate Jenkins webhook in SonarQube - the URL should point to your Jenkins server: &lt;code&gt;http://{JENKINS_HOST}/sonarqube-webhook/&lt;/code&gt;&lt;br&gt;
&lt;code&gt;Administration &amp;gt; Configuration &amp;gt; Webhooks &amp;gt; Create&lt;/code&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcojh23lfrlwa1rj9yvua.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcojh23lfrlwa1rj9yvua.png" alt="Sonar-Jenkins-Webhook" width="800" height="241"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Set up the SonarQube scanner from Jenkins - Global Tool Configuration&lt;br&gt;
&lt;code&gt;Manage Jenkins &amp;gt; Global Tool Configuration&lt;/code&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffuag5ly5c8ai48lzn2py.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffuag5ly5c8ai48lzn2py.png" alt="Jenkins-SonarScanner" width="800" height="282"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Update Jenkins Pipeline to include SonarQube scanning and Quality Gate
&lt;/h3&gt;

&lt;p&gt;Below is the snippet for a &lt;strong&gt;Quality Gate&lt;/strong&gt; stage in &lt;code&gt;Jenkinsfile&lt;/code&gt;. &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    stage('SonarQube Quality Gate') {
        environment {
            scannerHome = tool 'SonarQubeScanner'
        }
        steps {
            withSonarQubeEnv('sonarqube') {
                sh "${scannerHome}/bin/sonar-scanner"
            }
        }
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; The above stage will fail because we have not yet updated &lt;code&gt;sonar-scanner.properties&lt;/code&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Configure &lt;code&gt;sonar-scanner.properties&lt;/code&gt; - From the step above, Jenkins will install the scanner tool on the Linux server. You will need to go into the &lt;code&gt;tools&lt;/code&gt; directory on the server to configure the &lt;code&gt;properties&lt;/code&gt; file that SonarQube requires to function during pipeline execution.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd /var/lib/jenkins/tools/hudson.plugins.sonar.SonarRunnerInstallation/SonarQubeScanner/conf/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Open &lt;code&gt;sonar-scanner.properties&lt;/code&gt; file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo vi sonar-scanner.properties
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add configuration related to &lt;code&gt;php-todo&lt;/code&gt; project&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sonar.host.url=http://&amp;lt;SonarQube-Server-IP-address&amp;gt;:9000
sonar.projectKey=php-todo
#----- Default source code encoding
sonar.sourceEncoding=UTF-8
sonar.php.exclusions=**/vendor/**
sonar.php.coverage.reportPaths=build/logs/clover.xml
sonar.php.tests.reportPath=build/logs/junit.xml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
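&lt;p&gt;Because a missing key only surfaces later as a failed pipeline run, a quick sanity check on the file can save a build cycle. This is a sketch against a throwaway copy; on the Jenkins server you would run it inside the &lt;code&gt;conf&lt;/code&gt; directory shown above.&lt;/p&gt;

```shell
# Confirm the keys the scanner needs are present in sonar-scanner.properties.
# A temp file with sample content stands in for the real file here.
props=$(mktemp)
cat > "$props" <<'EOF'
sonar.host.url=http://localhost:9000
sonar.projectKey=php-todo
sonar.sourceEncoding=UTF-8
EOF
missing=0
for key in sonar.host.url sonar.projectKey; do
  grep -q "^${key}=" "$props" || { echo "missing: $key"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "required keys present"
```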



&lt;p&gt;&lt;strong&gt;HINT&lt;/strong&gt;: To know exactly what to put inside the &lt;code&gt;sonar-scanner.properties&lt;/code&gt; file, SonarQube has a configuration page where you can get some directions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs4rsg0e7ia68dt7bgf8x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs4rsg0e7ia68dt7bgf8x.png" alt="sonar-scanner-properties" width="800" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A brief explanation of what is going on in the stage - set the environment variable for &lt;code&gt;scannerHome&lt;/code&gt; using the same name used when you configured the SonarQube Scanner from &lt;strong&gt;Jenkins Global Tool Configuration&lt;/strong&gt;. If you remember, the name was &lt;code&gt;SonarQubeScanner&lt;/code&gt;. Then, within the &lt;code&gt;steps&lt;/code&gt;, use shell to run the scanner from the &lt;code&gt;bin&lt;/code&gt; directory. &lt;/p&gt;

&lt;p&gt;To further examine the configuration of the scanner tool on the Jenkins server - navigate into the &lt;code&gt;tools&lt;/code&gt; directory&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd /var/lib/jenkins/tools/hudson.plugins.sonar.SonarRunnerInstallation/SonarQubeScanner/bin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;List the content to see the scanner tool &lt;code&gt;sonar-scanner&lt;/code&gt;. That is what we are calling in the pipeline script.&lt;/p&gt;

&lt;p&gt;Output of &lt;code&gt;ls -latr&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ubuntu@ip-172-31-16-176:/var/lib/jenkins/tools/hudson.plugins.sonar.SonarRunnerInstallation/SonarQubeScanner/bin$ ls -latr
total 24
-rwxr-xr-x 1 jenkins jenkins 2550 Oct  2 12:42 sonar-scanner.bat
-rwxr-xr-x 1 jenkins jenkins  586 Oct  2 12:42 sonar-scanner-debug.bat
-rwxr-xr-x 1 jenkins jenkins  662 Oct  2 12:42 sonar-scanner-debug
-rwxr-xr-x 1 jenkins jenkins 1823 Oct  2 12:42 sonar-scanner
drwxr-xr-x 2 jenkins jenkins 4096 Dec 26 18:42 .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So far you have been given code snippets on each of the stages within the &lt;code&gt;Jenkinsfile&lt;/code&gt;. But, you should also be able to generate Jenkins configuration code yourself.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To generate Jenkins code, navigate to the dashboard for the &lt;code&gt;php-todo&lt;/code&gt; pipeline and click on the &lt;strong&gt;Pipeline Syntax&lt;/strong&gt; menu item
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Dashboard &amp;gt; php-todo &amp;gt; Pipeline Syntax 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1zk5knvbykt235z2awye.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1zk5knvbykt235z2awye.png" alt="Jenkins-Pipeline-Syntax" width="800" height="373"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click on Steps and select &lt;code&gt;withSonarQubeEnv&lt;/code&gt; - This appears in the list because of the previous SonarQube configurations you have done in Jenkins. Otherwise, it would not be there.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7v0kn05a87jl34n83tu5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7v0kn05a87jl34n83tu5.png" alt="Jenkins-SonarQube-Pipeline-Syntax" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Within the generated block, you will use the &lt;code&gt;sh&lt;/code&gt; command to run shell on the server. For more advanced usage in other projects, you can bookmark this &lt;a href="https://docs.sonarqube.org/latest/analysis/scan/sonarscanner-for-jenkins/" rel="noopener noreferrer"&gt;SonarQube documentation page&lt;/a&gt; in your browser.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;End-to-End Pipeline Overview&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Indeed, this has been one of the longest projects since Project 1, and if everything has worked out for you so far, you should have a view like the one below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqu0m6yzpa9i0q03vsqkg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqu0m6yzpa9i0q03vsqkg.png" alt="Jenkins-End-To-End" width="800" height="269"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But we are not completely done yet!&lt;/p&gt;

&lt;p&gt;The quality gate we just included has no effect. Why? Well, because if you go to the SonarQube UI, you will realise that we just pushed poor-quality code into the development environment. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to &lt;code&gt;php-todo&lt;/code&gt; project in SonarQube&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fghq9ic7mde8641qo44bd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fghq9ic7mde8641qo44bd.png" alt="Sonarqube-Anaysis" width="800" height="302"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are bugs, and there is 0.0% code coverage. (&lt;em&gt;code coverage is a percentage of unit tests added by developers to test functions and objects in the code&lt;/em&gt;)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If you click on &lt;code&gt;php-todo&lt;/code&gt; project for further analysis, you will see that there is 6 hours' worth of technical debt, code smells and security issues in the code.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwmavie52a1fqgmu4tpra.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwmavie52a1fqgmu4tpra.png" alt="SonarQube-Analysis2" width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the development environment, this is acceptable as developers will need to keep iterating over their code towards perfection. But as a DevOps engineer working on the pipeline, we must ensure that the quality gate step causes the pipeline to fail if the conditions for quality are not met.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Conditionally deploy to higher environments&lt;/strong&gt;
&lt;/h3&gt;




&lt;p&gt;In the real world, developers will work on a feature branch in a repository (e.g., GitHub or GitLab). There are other branches that will be used differently to control how software releases are done. You will see branches such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Develop&lt;/li&gt;
&lt;li&gt;Master or Main&lt;/li&gt;
&lt;li&gt;Feature/*&lt;/li&gt;
&lt;li&gt;Release/*&lt;/li&gt;
&lt;li&gt;Hotfix/*&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;etc. (The &lt;code&gt;*&lt;/code&gt; is a placeholder for a version number, Jira ticket name or some description. It can be something like &lt;code&gt;Release-1.0.0&lt;/code&gt;.)&lt;/p&gt;

&lt;p&gt;There is a very wide discussion around release strategies and git branching strategies, which in recent years have come to be considered under what is known as &lt;a href="https://www.atlassian.com/git/tutorials/comparing-workflows/gitflow-workflow" rel="noopener noreferrer"&gt;GitFlow&lt;/a&gt;. (Have a read and keep it as a bookmark - it is a possible candidate for an interview discussion, so take it seriously!)&lt;/p&gt;

&lt;p&gt;Assume a basic &lt;code&gt;gitflow&lt;/code&gt; implementation that allows only the &lt;code&gt;develop&lt;/code&gt; branch to deploy code to an integration environment like &lt;code&gt;sit&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;Let us update our &lt;code&gt;Jenkinsfile&lt;/code&gt; to implement this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First, we will include a &lt;code&gt;when&lt;/code&gt; condition to run the Quality Gate only when the running branch is &lt;code&gt;develop&lt;/code&gt;, &lt;code&gt;hotfix&lt;/code&gt;, &lt;code&gt;release&lt;/code&gt;, &lt;code&gt;main&lt;/code&gt;, or &lt;code&gt;master&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;when { branch pattern: "^develop*|^hotfix*|^release*|^main*", comparator: "REGEXP"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
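&lt;p&gt;The branch pattern can be exercised locally with &lt;code&gt;grep -E&lt;/code&gt; before committing it. Note that in a regular expression &lt;code&gt;develop*&lt;/code&gt; means "develo" followed by zero or more "p" characters - it still matches &lt;code&gt;develop&lt;/code&gt;, but &lt;code&gt;^develop.*&lt;/code&gt; states the intent more precisely. The sketch below also appends an extra &lt;code&gt;^master*&lt;/code&gt; alternative to cover the &lt;code&gt;master&lt;/code&gt; branch mentioned in the text, which the pattern above omits.&lt;/p&gt;

```shell
# Dry-run the Jenkins when-condition regex against a few branch names.
# The pattern is extended with ^master* relative to the snippet in the text.
pattern='^develop*|^hotfix*|^release*|^main*|^master*'
matches() { printf '%s' "$1" | grep -Eq "$pattern"; }
matches develop      && echo "develop: gated"
matches hotfix/login && echo "hotfix/login: gated"
matches feature/x    || echo "feature/x: skipped"
```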



&lt;ul&gt;
&lt;li&gt;Then we add a timeout step that waits for SonarQube to complete its analysis, letting the pipeline finish successfully only when the code quality is acceptable.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    timeout(time: 1, unit: 'MINUTES') {
        waitForQualityGate abortPipeline: true
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The complete stage will now look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    stage('SonarQube Quality Gate') {
      when { branch pattern: "^develop*|^hotfix*|^release*|^main*", comparator: "REGEXP"}
        environment {
            scannerHome = tool 'SonarQubeScanner'
        }
        steps {
            withSonarQubeEnv('sonarqube') {
                sh "${scannerHome}/bin/sonar-scanner -Dproject.settings=sonar-project.properties"
            }
            timeout(time: 1, unit: 'MINUTES') {
                waitForQualityGate abortPipeline: true
            }
        }
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To test, create different branches and push them to GitHub. You will realise that only branches other than &lt;code&gt;develop&lt;/code&gt;, &lt;code&gt;hotfix&lt;/code&gt;, &lt;code&gt;release&lt;/code&gt;, &lt;code&gt;main&lt;/code&gt;, or &lt;code&gt;master&lt;/code&gt; will be able to deploy the code, because those branches are gated on code quality.&lt;/p&gt;

&lt;p&gt;If everything goes well, you should be able to see something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs2g71ehzomejmofdo5h0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs2g71ehzomejmofdo5h0.png" alt="Jenkins-Skipped-Deployment" width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notice that with the current state of the code, it cannot be deployed to integration environments due to its quality. In the real world, DevOps engineers will push this back to developers to work on the code further, based on the SonarQube quality report. Once the code quality is good, the pipeline will pass and proceed to ship the code to a higher environment.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Complete the following tasks to finish Project 14&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Introduce Jenkins agents/slaves - Add 2 more servers to be used as Jenkins slaves. Configure Jenkins to run its pipeline jobs randomly on any available slave node.&lt;/li&gt;
&lt;li&gt;Configure webhook between Jenkins and GitHub to automatically run the pipeline when there is a code push. &lt;/li&gt;
&lt;li&gt;Deploy the application to all the environments&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Optional&lt;/strong&gt; - Experience pentesting in the pentest environment by configuring &lt;a href="https://www.wireshark.org" rel="noopener noreferrer"&gt;Wireshark&lt;/a&gt; there and exploring, for information's sake only. &lt;a href="https://youtu.be/Yo8zGbCbqd0" rel="noopener noreferrer"&gt;Watch a Wireshark tutorial here&lt;/a&gt; 

&lt;ul&gt;
&lt;li&gt;Ansible Role for Wireshark:&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/ymajik/ansible-role-wireshark" rel="noopener noreferrer"&gt;https://github.com/ymajik/ansible-role-wireshark&lt;/a&gt; (Ubuntu) &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/wtanaka/ansible-role-wireshark" rel="noopener noreferrer"&gt;https://github.com/wtanaka/ansible-role-wireshark&lt;/a&gt; (RedHat) &lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Congratulations! &lt;/p&gt;

&lt;p&gt;You have just experienced one of the most interesting and complex projects in your Project Based Learning journey so far. The vast experience and knowledge you have acquired here will set the stage for the next 6 projects to come. You should be ready to start applying for DevOps jobs after completing Project 20.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnnpj8p2e2hkw618rlzmv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnnpj8p2e2hkw618rlzmv.png" alt="awesome14" width="369" height="461"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Instructions On How To Submit Your Work For Review And Feedback
&lt;/h4&gt;

&lt;p&gt;To submit your work for review and feedback - follow &lt;a href="https://starter-pbl.darey.io/en/latest/submission.html" rel="noopener noreferrer"&gt;&lt;strong&gt;this instruction&lt;/strong&gt;&lt;/a&gt;.&lt;br&gt;
In addition to your GitHub projects (both, Ansible and PHP application) also prepare and submit the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Make a short video showing how your pipelines executed &lt;/li&gt;
&lt;li&gt;In the video, showcase your SonarQube UI&lt;/li&gt;
&lt;/ol&gt;

</description>
    </item>
    <item>
      <title>Tooling Website deployment automation with Continuous Integration. Introduction to Jenkins</title>
      <dc:creator>Christian Ameachi</dc:creator>
      <pubDate>Sat, 24 Feb 2024 22:16:51 +0000</pubDate>
      <link>https://forem.com/chris-amaechi/tooling-website-deployment-automation-with-continuous-integration-introduction-to-jenkins-3im2</link>
      <guid>https://forem.com/chris-amaechi/tooling-website-deployment-automation-with-continuous-integration-introduction-to-jenkins-3im2</guid>
      <description>&lt;h4&gt;
  
  
  Tooling Website deployment automation with Continuous Integration. Introduction to Jenkins
&lt;/h4&gt;

&lt;p&gt;In the previous &lt;a href="https://professional-pbl.darey.io/en/latest/project8.html" rel="noopener noreferrer"&gt;Project 8&lt;/a&gt; we introduced the &lt;code&gt;horizontal scalability&lt;/code&gt; concept, which allows us to add new Web Servers to our Tooling Website, and you successfully deployed a setup with 2 Web Servers and a Load Balancer to distribute traffic between them. If it is just two or three servers, it is not a big deal to configure them manually. But imagine having to repeat the same task over and over again while adding dozens or even hundreds of servers.&lt;/p&gt;

&lt;p&gt;DevOps is about Agility, and speedy release of software and web solutions. One of the ways to guarantee fast and repeatable deployments is &lt;strong&gt;Automation&lt;/strong&gt; of routine tasks.&lt;/p&gt;

&lt;p&gt;In this project we are going to start automating part of our routine tasks with a free and open-source automation server - &lt;a href="https://en.wikipedia.org/wiki/Jenkins_(software)" rel="noopener noreferrer"&gt;Jenkins&lt;/a&gt;. It is one of the most popular &lt;a href="https://en.wikipedia.org/wiki/CI/CD" rel="noopener noreferrer"&gt;CI/CD&lt;/a&gt; tools. It was created by former Sun Microsystems developer Kohsuke Kawaguchi, and the project was originally named "Hudson". &lt;/p&gt;

&lt;p&gt;According to CircleCI, &lt;strong&gt;Continuous integration (CI)&lt;/strong&gt; is a software development strategy that increases the speed of development while ensuring the quality of the code that teams deploy. Developers continually commit code in small increments (at least daily, or even several times a day), which is then automatically built and tested before it is merged with the shared repository.&lt;/p&gt;

&lt;p&gt;In our project we are going to utilize Jenkins CI capabilities to make sure that every change made to the source code in GitHub &lt;code&gt;https://github.com/&amp;lt;yourname&amp;gt;/tooling&lt;/code&gt; is automatically deployed to the Tooling Website.&lt;/p&gt;

&lt;h4&gt;
  
  
  Side Self Study
&lt;/h4&gt;

&lt;p&gt;Read about &lt;a href="https://circleci.com/continuous-integration/" rel="noopener noreferrer"&gt;Continuous Integration, Continuous Delivery and Continuous Deployment&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Task
&lt;/h4&gt;

&lt;p&gt;Enhance the architecture prepared in Project 8 by adding a Jenkins server and configuring a job to automatically deploy source code changes from Git to the NFS server.&lt;/p&gt;

&lt;p&gt;Here is how your updated architecture will look upon completion of this project:&lt;/p&gt;


&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx3dvvc2yjxa86alpe2d9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx3dvvc2yjxa86alpe2d9.png" alt="add_jenkins" width="800" height="483"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Instructions On How To Submit Your Work For Review And Feedback
&lt;/h4&gt;

&lt;p&gt;To submit your work for review and feedback - follow &lt;a href="https://starter-pbl.darey.io/en/latest/submission.html" rel="noopener noreferrer"&gt;&lt;strong&gt;this instruction&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 1 - Install Jenkins server
&lt;/h4&gt;




&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Create an AWS EC2 server based on Ubuntu Server 20.04 LTS and name it "Jenkins"&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install &lt;a href="https://en.wikipedia.org/wiki/Java_Development_Kit" rel="noopener noreferrer"&gt;JDK&lt;/a&gt; (since Jenkins is a Java-based application)&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update
sudo apt install openjdk-11-jre
# Confirm the Java installation
java -version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Install Jenkins&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Refer to official &lt;a href="https://www.jenkins.io/doc/book/installing/linux/#debianubuntu" rel="noopener noreferrer"&gt;Jenkins documentation&lt;/a&gt; for more details.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -fsSL https://pkg.jenkins.io/debian/jenkins.io-2023.key | sudo tee \
  /usr/share/keyrings/jenkins-keyring.asc &amp;gt; /dev/null
echo deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] \
  https://pkg.jenkins.io/debian binary/ | sudo tee \
  /etc/apt/sources.list.d/jenkins.list &amp;gt; /dev/null
sudo apt-get update
sudo apt-get install -y jenkins
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make sure Jenkins is up and running&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl status jenkins
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
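&lt;p&gt;Optionally, you can also make sure Jenkins will survive a reboot and confirm the web UI answers locally - a minimal sketch using standard systemd and curl commands (the exact output will vary with your setup):&lt;/p&gt;

```shell
# Start Jenkins automatically on every boot
sudo systemctl enable jenkins

# Confirm the web UI answers on TCP port 8080 (prints only the HTTP status line)
curl -sI http://localhost:8080 | head -n 1
```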



&lt;ol&gt;
&lt;li&gt;By default the Jenkins server uses TCP port 8080 - open it by creating a new Inbound Rule in your EC2 Security Group&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd4f6jg0eyyvv6e7m2ff6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd4f6jg0eyyvv6e7m2ff6.png" alt="open_port8080" width="800" height="136"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Perform initial Jenkins setup.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;From your browser access &lt;code&gt;http://&amp;lt;Jenkins-Server-Public-IP-Address-or-Public-DNS-Name&amp;gt;:8080&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You will be prompted to provide a default admin password&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi5hubfou9jxo2ixiu1n0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi5hubfou9jxo2ixiu1n0.png" alt="unlock_jenkins" width="800" height="332"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Retrieve it from your server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo cat /var/lib/jenkins/secrets/initialAdminPassword
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then you will be asked which plugins to install - choose the suggested plugins.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqa50rg7q9240fhz2l0wa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqa50rg7q9240fhz2l0wa.png" alt="jenkins_plugins" width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once plugin installation is done, create an admin user and you will get your Jenkins server address.&lt;/p&gt;

&lt;p&gt;The installation is complete!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5xkd9lwsxzngni677l0z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5xkd9lwsxzngni677l0z.png" alt="Jenkins_Ready" width="534" height="278"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Step 2 - Configure Jenkins to retrieve source codes from GitHub using Webhooks&lt;/strong&gt;
&lt;/h4&gt;




&lt;p&gt;In this part, you will learn how to configure a simple Jenkins job/project (these two terms can be used interchangeably). This job will be triggered by GitHub &lt;a href="https://en.wikipedia.org/wiki/Webhook" rel="noopener noreferrer"&gt;webhooks&lt;/a&gt; and will execute a 'build' task to retrieve code from GitHub and store it locally on the Jenkins server.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Enable webhooks in your GitHub repository settings&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftg3migbdpf2qpd1nzgjl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftg3migbdpf2qpd1nzgjl.png" alt="webhook_github" width="800" height="472"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to Jenkins web console, click "New Item" and create a "Freestyle project"&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F63xdumc9tk7vtoyyr4nb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F63xdumc9tk7vtoyyr4nb.png" alt="create_freestyle" width="800" height="462"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To connect your GitHub repository, you will need to provide its URL; you can copy it from the repository itself.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5mvi8pyf7l5hh90vz7xc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5mvi8pyf7l5hh90vz7xc.png" alt="github_url" width="800" height="469"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the configuration of your Jenkins freestyle project choose Git repository, provide the link to your Tooling GitHub repository and credentials (user/password) so Jenkins can access files in the repository.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1w627oocu347ltitvrvu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1w627oocu347ltitvrvu.png" alt="github_add_jenkins" width="800" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Save the configuration and let us try to run the build. For now we can only do it manually.&lt;br&gt;
Click the "Build Now" button. If you have configured everything correctly, the build will be successful and you will see it under &lt;code&gt;#1&lt;/code&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Falkkwjiz5fm6ny85i82e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Falkkwjiz5fm6ny85i82e.png" alt="jenkins_run1" width="757" height="624"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can open the build and check the "Console Output" to confirm it ran successfully.&lt;/p&gt;

&lt;p&gt;If so - congratulations! You have just made your very first Jenkins build!&lt;/p&gt;

&lt;p&gt;But this build does not produce anything and it runs only when we trigger it manually. Let us fix it.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click "Configure" on your job/project and add these two configurations&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Configure triggering the job from GitHub webhook:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1elgyfujo58e1usufe0y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1elgyfujo58e1usufe0y.png" alt="jenkins_trigger" width="800" height="185"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Configure "Post-build Actions" to archive all the files - the files that result from a build are called "artifacts".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw7b63a1tb2upae99d2un.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw7b63a1tb2upae99d2un.png" alt="archive_artifacts" width="800" height="289"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, go ahead and make some change in any file in your GitHub repository (e.g. &lt;code&gt;README.MD&lt;/code&gt; file) and push the changes to the master branch.&lt;/p&gt;

&lt;p&gt;You will see that a new build has been launched automatically (by webhook) and you can see its results - artifacts, saved on Jenkins server.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl3vbgdvf4u79d0wrr0qj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl3vbgdvf4u79d0wrr0qj.png" alt="build_success_archive" width="800" height="687"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You have now configured an automated Jenkins job that receives files from GitHub by webhook trigger (this method is considered 'push' because the changes are 'pushed' and the file transfer is initiated by GitHub). There are also other methods: trigger one job (downstream) from another (upstream), poll GitHub periodically, and others.&lt;/p&gt;
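&lt;p&gt;For reference, the periodic-polling alternative is configured under "Build Triggers" with a cron-like schedule in Jenkins syntax (where &lt;code&gt;H&lt;/code&gt; spreads the load by hashing the job name). A hypothetical example, not required for this project:&lt;/p&gt;

```plaintext
# "Poll SCM" schedule: check GitHub for new commits roughly every 5 minutes
H/5 * * * *
```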

&lt;p&gt;By default, the artifacts are stored on Jenkins server locally&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ls /var/lib/jenkins/jobs/tooling_github/builds/&amp;lt;build_number&amp;gt;/archive/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
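&lt;p&gt;Besides browsing the filesystem, Jenkins also exposes archived artifacts over its HTTP API. A sketch, assuming the &lt;code&gt;tooling_github&lt;/code&gt; job name from above and a hypothetical admin user and API token:&lt;/p&gt;

```shell
# Download all archived artifacts of the last successful build as one zip.
# JENKINS_URL, admin and API_TOKEN are placeholders for your own values.
JENKINS_URL=http://localhost:8080
curl -u admin:API_TOKEN -o artifacts.zip \
  "$JENKINS_URL/job/tooling_github/lastSuccessfulBuild/artifact/*zip*/archive.zip"
```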



&lt;h3&gt;
  
  
  &lt;strong&gt;Step 3 - Configure Jenkins to copy files to NFS server via SSH&lt;/strong&gt;
&lt;/h3&gt;




&lt;p&gt;Now that we have our artifacts saved locally on the Jenkins server, the next step is to copy them to the &lt;code&gt;/mnt/apps&lt;/code&gt; directory on our NFS server.&lt;/p&gt;

&lt;p&gt;Jenkins is a highly extendable application with over 1,400 plugins available. We will need a plugin called &lt;a href="https://plugins.jenkins.io/publish-over-ssh/" rel="noopener noreferrer"&gt;"Publish Over SSH"&lt;/a&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install "Publish Over SSH" plugin.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;On main dashboard select "Manage Jenkins" and choose "Manage Plugins" menu item.&lt;/p&gt;

&lt;p&gt;On "Available" tab search for "Publish Over SSH" plugin and install it&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fohnrloccve0g4xxyjoxt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fohnrloccve0g4xxyjoxt.png" alt="plugin_ssh_install" width="800" height="295"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Configure the job/project to copy artifacts over to NFS server.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;On main dashboard select "Manage Jenkins" and choose "Configure System" menu item.&lt;/p&gt;

&lt;p&gt;Scroll down to Publish over SSH plugin configuration section and configure it to be able to connect to your NFS server:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Provide a private key (content of .pem file that you use to connect to NFS server via SSH/Putty)&lt;/li&gt;
&lt;li&gt;Arbitrary name&lt;/li&gt;
&lt;li&gt;Hostname - can be &lt;code&gt;private IP address&lt;/code&gt; of your NFS server&lt;/li&gt;
&lt;li&gt;Username - &lt;code&gt;ec2-user&lt;/code&gt; (since NFS server is based on EC2 with RHEL 8)&lt;/li&gt;
&lt;li&gt;Remote directory - &lt;code&gt;/mnt/apps&lt;/code&gt; since our Web Servers use it as a mount point to retrieve files from the NFS server&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Test the configuration and make sure the connection returns &lt;code&gt;Success&lt;/code&gt;. Remember that TCP port 22 on the NFS server must be open to receive SSH connections.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6yplb7j7oqq6mrwulfas.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6yplb7j7oqq6mrwulfas.png" alt="publish_ssh_config" width="800" height="410"&gt;&lt;/a&gt;&lt;/p&gt;
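&lt;p&gt;If the test does not return &lt;code&gt;Success&lt;/code&gt;, it can help to verify the same connection manually from the Jenkins server. A sketch, where the key path and the NFS private IP are placeholders for your own values:&lt;/p&gt;

```shell
# Run on the Jenkins server: reuse the same key and user you gave the plugin.
# /path/to/key.pem and 172.31.0.10 are hypothetical placeholders.
ssh -i /path/to/key.pem ec2-user@172.31.0.10 "ls -ld /mnt/apps"
```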

&lt;p&gt;Save the configuration, open your Jenkins job/project configuration page and add one more "Post-build Action" &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz6s3jclxeodz1z5q8m2y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz6s3jclxeodz1z5q8m2y.png" alt="send_build" width="367" height="496"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Configure it to send all files produced by the build into our previously defined remote directory. In our case we want to copy all files and directories, so we use &lt;code&gt;**&lt;/code&gt;.&lt;br&gt;
If you want to apply a particular pattern to define which files to send, &lt;a href="http://ant.apache.org/manual/dirtasks.html#patterns" rel="noopener noreferrer"&gt;use this syntax&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe58xfcnqg5g1t95m7yu1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe58xfcnqg5g1t95m7yu1.png" alt="send_build1" width="800" height="485"&gt;&lt;/a&gt;&lt;/p&gt;
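&lt;p&gt;For illustration, a few hypothetical Ant-style patterns that could go into the "Source files" field instead of &lt;code&gt;**&lt;/code&gt;:&lt;/p&gt;

```plaintext
**            every file and directory (what we use here)
**/*.php      only .php files, in any subdirectory
html/**       everything under the html/ directory
```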

&lt;p&gt;Save this configuration and go ahead, change something in &lt;code&gt;README.MD&lt;/code&gt; file in your GitHub Tooling repository.&lt;/p&gt;

&lt;p&gt;Webhook will trigger a new job and in the "Console Output" of the job you will find something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SSH: Transferred 25 file(s)
Finished: SUCCESS
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To make sure that the files in &lt;code&gt;/mnt/apps&lt;/code&gt; have been updated, connect via SSH/Putty to your NFS server and check the README.MD file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat /mnt/apps/README.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you see the changes you previously made on GitHub, the job works as expected.&lt;/p&gt;

&lt;h4&gt;
  
  
  Congratulations!
&lt;/h4&gt;

&lt;p&gt;You have just implemented your first Continuous Integration solution using Jenkins CI. Watch out for advanced CI configurations in upcoming projects.&lt;/p&gt;

</description>
      <category>devops</category>
    </item>
  </channel>
</rss>
