<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Aviator</title>
    <description>The latest articles on Forem by Aviator (@aviator_co).</description>
    <link>https://forem.com/aviator_co</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F5872%2F09be46e1-8f29-4b3f-bffa-bfa21adfe102.png</url>
      <title>Forem: Aviator</title>
      <link>https://forem.com/aviator_co</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/aviator_co"/>
    <language>en</language>
    <item>
      <title>🚀 5 Javascript concepts every web developer should know! 👨‍💻</title>
      <dc:creator>Sumit Saurabh</dc:creator>
      <pubDate>Mon, 02 Dec 2024 17:43:44 +0000</pubDate>
      <link>https://forem.com/aviator_co/5-javascript-concepts-every-web-developer-should-know-65g</link>
      <guid>https://forem.com/aviator_co/5-javascript-concepts-every-web-developer-should-know-65g</guid>
      <description>&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;p&gt;This article walks through 5 JavaScript concepts that can take you from a JavaScript newbie to a pro.&lt;/p&gt;

&lt;p&gt;Hey Folks! 👋&lt;/p&gt;

&lt;p&gt;This article covers the top 5 JavaScript concepts for web developers. There is something for everyone here, so whether you've just started your journey or are a seasoned developer, you'll find something of use.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb21f9k834ho7el7w3sfz.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb21f9k834ho7el7w3sfz.gif" alt="Gif of a cat reading a book" width="500" height="281"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;h2&gt;
  
  
  Join our exclusive developer community! 🎉
&lt;/h2&gt;

&lt;p&gt;I'm starting a new community for new developers.&lt;/p&gt;

&lt;p&gt;My aim is to help early-career developers level up in their careers, make meaningful connections, and grow.&lt;/p&gt;

&lt;p&gt;I intend to make it one of the best developer communities out there, and early members will get exclusive perks!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Wanna join? Here's how to:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click on &lt;a href="https://discord.gg/z92fr9UR" rel="noopener noreferrer"&gt;this invite link&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Write a brief introduction of yourself and post in the &lt;code&gt;#general-chat&lt;/code&gt; channel&lt;/li&gt;
&lt;/ul&gt;



&lt;h2&gt;
  
  
  1. Call stack:
&lt;/h2&gt;

&lt;p&gt;The call stack is a data structure that keeps track of where the program currently is in its execution. Every time a function is called, an entry (called a 'stack frame') is added to the call stack.&lt;/p&gt;

&lt;p&gt;When the function completes execution, its corresponding stack frame is removed (or 'popped') from the call stack, and control returns to the previous function in the stack.&lt;/p&gt;

&lt;p&gt;Key Points:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;LIFO (Last In, First Out): The call stack operates on this principle, where the last function pushed is the first one popped.&lt;/li&gt;
&lt;li&gt;Every active function call resides here, maintaining execution order.&lt;/li&gt;
&lt;li&gt;Stack Overflow: If too many function calls are pushed without being popped (e.g., infinite recursion), the stack exceeds its size limit, causing a stack overflow.&lt;/li&gt;
&lt;li&gt;Call Stack Trace: Debugging tools use the stack to show the order of function calls leading to an error.&lt;/li&gt;
&lt;/ul&gt;
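The points above can be seen in a short sketch (plain, Node-runnable JavaScript; the function names are my own, not from the article):

```javascript
// Each call to countDown pushes a new stack frame; the frame is
// popped when the call returns.
function countDown(n) {
  if (n === 0) return "done";
  return countDown(n - 1); // a new frame for every recursive call
}

// A recursion with no base case keeps pushing frames until the
// engine's stack limit is hit and a RangeError is thrown.
function blowTheStack() {
  return blowTheStack();
}

let overflowed = false;
try {
  blowTheStack();
} catch (e) {
  overflowed = e instanceof RangeError; // "Maximum call stack size exceeded"
}
```

Reading the error's stack trace top-down is exactly reading the call stack at the moment the error was thrown.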
&lt;h2&gt;
  
  
  2. Execution Context:
&lt;/h2&gt;

&lt;p&gt;Execution context defines the environment within which JavaScript code is executed. It consists of everything needed to execute the code, such as variable definitions, the 'this' keyword, and function references.&lt;/p&gt;

&lt;p&gt;There are three primary types of execution contexts in JavaScript:&lt;/p&gt;

&lt;p&gt;i) Global Execution Context: Created by default when the script starts. It contains global variables and functions.&lt;br&gt;
ii) Function Execution Context: Created whenever a function is called. Each function call gets its unique context.&lt;br&gt;
iii) Eval Execution Context: Created when code is executed inside an eval() function.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Call stack and execution context work together. Call stack manages the order of execution, while the execution context manages the details of execution.&lt;/li&gt;
&lt;li&gt;When a function is invoked, a new execution context is created and pushed onto the stack.&lt;/li&gt;
&lt;li&gt;The code runs in the current execution context.&lt;/li&gt;
&lt;li&gt;When the function finishes, its context is removed from the stack.&lt;/li&gt;
&lt;/ul&gt;
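A minimal sketch of the first two context types (variable and function names are illustrative only):

```javascript
// `audience` lives in the global execution context, created when
// the script starts.
const audience = "world";

// Every call to makeGreeting creates a fresh function execution
// context, so `greeting` is a new local variable each time.
function makeGreeting(name) {
  const greeting = "Hello, " + name + "!";
  return greeting;
}

const a = makeGreeting("Alice"); // context created, pushed, popped
const b = makeGreeting(audience); // an independent second context
```

Each call sees only its own `name` and `greeting`; once the call returns, that context is gone along with its locals.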
&lt;h2&gt;
  
  
  3. typeof, loose and strict equality:
&lt;/h2&gt;

&lt;p&gt;While most developers know what these are, pro developers also know their limitations and when to use which.&lt;/p&gt;
&lt;h3&gt;
  
  
  typeof:
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;typeof&lt;/code&gt; operator returns a string indicating the type of its operand. It’s the easiest way to know the data type of a variable or an expression.&lt;/p&gt;

&lt;p&gt;Key Points about &lt;code&gt;typeof&lt;/code&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The output is always a string, such as "number", "string", "boolean", "object", "function", or "undefined".&lt;/li&gt;
&lt;li&gt;&lt;code&gt;typeof null&lt;/code&gt; returns "object". This is a long-standing quirk of the language; null is not actually an object.&lt;/li&gt;
&lt;li&gt;You can pass any value or expression to typeof.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;typeof&lt;/span&gt; &lt;span class="mi"&gt;42&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// "number"&lt;/span&gt;
&lt;span class="k"&gt;typeof&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Hello&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// "string"&lt;/span&gt;
&lt;span class="k"&gt;typeof&lt;/span&gt; &lt;span class="p"&gt;{};&lt;/span&gt; &lt;span class="c1"&gt;// "object"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
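A few more results worth memorizing, including the quirks mentioned above (a small extension of the snippet, assembled by me rather than taken from the article):

```javascript
// The surprising corners of typeof, collected in one place.
const results = [
  typeof null,           // "object"    (historical quirk, not a real object)
  typeof undefined,      // "undefined"
  typeof function () {}, // "function"  (the one non-primitive with its own tag)
  typeof [1, 2, 3],      // "object"    (arrays are objects; prefer Array.isArray)
];
```

Because arrays report "object", use `Array.isArray(value)` when you specifically need to detect an array.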

&lt;h3&gt;
  
  
  loose equality:
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;==&lt;/code&gt; operator checks if two values are loosely equal, meaning it compares their values after performing type coercion (converting both operands to the same type).&lt;/p&gt;

&lt;p&gt;Key Points about &lt;code&gt;==&lt;/code&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Converts one or both operands to the same type before comparing.&lt;/li&gt;
&lt;li&gt;The implicit type conversion can lead to unexpected results.&lt;/li&gt;
&lt;li&gt;Use &lt;code&gt;===&lt;/code&gt; instead for clarity and reliability.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="mi"&gt;42&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;42&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// true (string is coerced to a number)&lt;/span&gt;
&lt;span class="kc"&gt;null&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="kc"&gt;undefined&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// true (special case of loose equality)&lt;/span&gt;
&lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// true (type coercion happens)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  Strict equality:
&lt;/h4&gt;

&lt;p&gt;The &lt;code&gt;===&lt;/code&gt; operator checks if two values are strictly equal, meaning it evaluates both value and type without performing type coercion.&lt;/p&gt;

&lt;p&gt;Key Points about &lt;code&gt;===&lt;/code&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Both value and type must match exactly.&lt;/li&gt;
&lt;li&gt;Ensures clarity and prevents unexpected behavior.&lt;/li&gt;
&lt;li&gt;Works consistently for every type; note that for arrays and objects it compares references, not contents.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="mi"&gt;42&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;42&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// false (different types)&lt;/span&gt;
&lt;span class="kc"&gt;null&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="kc"&gt;undefined&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// false (different values and types)&lt;/span&gt;
&lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// false (different types)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  When to Use Each Operator:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Use typeof when inspecting or debugging variable types.&lt;/li&gt;
&lt;li&gt;Use &lt;code&gt;===&lt;/code&gt; for comparisons to ensure both value and type match.&lt;/li&gt;
&lt;li&gt;Avoid &lt;code&gt;==&lt;/code&gt; unless you’re confident type coercion will lead to the intended outcome (e.g., when comparing null and undefined).&lt;/li&gt;
&lt;/ul&gt;
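The null/undefined exception mentioned above is the one idiom where `==` earns its keep. A sketch (the helper name is hypothetical):

```javascript
// `value == null` is true exactly when value is null or undefined,
// and false for every other value, including 0, "" and false.
function isMissing(value) {
  return value == null;
}
```

This is shorter than writing `value === null || value === undefined` and is widely accepted even in codebases that otherwise ban `==`.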
&lt;h2&gt;
  
  
  4. Pure and Impure functions:
&lt;/h2&gt;

&lt;p&gt;A pure function is a function that has no side effects and always returns the same output for a given input (i.e., it has deterministic output).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// pure function&lt;/span&gt;
&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;a&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// Only depends on input arguments&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// impure function&lt;/span&gt;
&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;counter&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;increment&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;counter&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// Modifies external state (impure)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Benefits of Pure Functions
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Easier to Test: Since the output only depends on the input, you don’t need to mock external dependencies.
Example: Testing add(2, 3) always results in 5.&lt;/li&gt;
&lt;li&gt;Predictability: No unintended consequences from hidden state or side effects.&lt;/li&gt;
&lt;li&gt;Improved Debugging: Pure functions are isolated, making it easier to trace issues.&lt;/li&gt;
&lt;li&gt;Reusability: Pure functions are self-contained and can be reused confidently in different contexts.&lt;/li&gt;
&lt;li&gt;Facilitates Functional Programming: Encourages immutability and declarative programming paradigms.&lt;/li&gt;
&lt;/ul&gt;
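To make the benefit concrete: the impure counter from the snippet above can be made pure by returning a new value instead of mutating shared state (this refactor is my sketch, not from the article):

```javascript
// Pure version of increment: no external state, deterministic output.
function increment(counter) {
  return counter + 1; // the same input always yields the same output
}

const start = 0;
const next = increment(start); // start itself is untouched
```

Testing `increment` needs no setup or mocks, and callers can never be surprised by a hidden counter changing underneath them.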

&lt;h2&gt;
  
  
  5. Event propagation:
&lt;/h2&gt;

&lt;p&gt;Event propagation describes how an event travels through the DOM (Document Object Model) tree.&lt;/p&gt;

&lt;p&gt;When an event fires, it first travels down from the root to the target element (the capturing phase) and then bubbles back up through the target's ancestors (the bubbling phase); it can be captured and handled at any level along the way.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0o94ki9vcq3agt794t7m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0o94ki9vcq3agt794t7m.png" alt="Example of bubbling up of events" width="800" height="911"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Event Handling Efficiency: Use event delegation to manage events on many child elements with a single parent listener.&lt;/li&gt;
&lt;li&gt;Debugging Propagation Issues: Understanding propagation helps resolve issues where unintended elements respond to events.&lt;/li&gt;
&lt;li&gt;Fine Control: Choose capturing or bubbling phases to handle events at specific points in the DOM hierarchy.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;event.stopPropagation()&lt;/code&gt;: Stops further propagation in both capturing and bubbling phases.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;event.stopImmediatePropagation()&lt;/code&gt;: Stops propagation and prevents other listeners on the same element from being called.&lt;/li&gt;
&lt;/ul&gt;
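The order in which listeners fire can be illustrated with a toy model. This is NOT the real DOM API, just a Node-runnable sketch that reproduces the capture-then-bubble order for a three-element chain:

```javascript
// A hand-rolled model of event propagation through html > div > button.
const chain = ["html", "div", "button"]; // root first, target last
const visited = [];

function dispatch(target) {
  // Capturing phase: walk from the root down to the target.
  for (const node of chain) {
    visited.push(node + ":capture");
    if (node === target) break;
  }
  // Bubbling phase: walk from the target back up to the root.
  for (const node of chain.slice(0, chain.indexOf(target) + 1).reverse()) {
    visited.push(node + ":bubble");
  }
}

dispatch("button");
```

In a browser, the same order is what decides whether `addEventListener(type, handler, { capture: true })` fires before or after a listener on an ancestor, and a real `event.stopPropagation()` would simply cut one of these walks short.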

&lt;p&gt;With that, I'll conclude this article. Don't forget to check out my other articles, and remember to join the community.&lt;/p&gt;

&lt;p&gt;Also, this isn't a definitive list, and I'm sure I've missed many concepts that ought to be here; please add them in the comments down below.&lt;/p&gt;

&lt;p&gt;Thanks for reading and see you in the next one! &lt;/p&gt;

&lt;p&gt;Bye! 👋&lt;/p&gt;

&lt;p&gt;~ SS&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>javascript</category>
      <category>beginners</category>
      <category>programming</category>
    </item>
    <item>
      <title>How to improve DORA metrics as a release engineer</title>
      <dc:creator>Ibrahim Salami</dc:creator>
      <pubDate>Tue, 01 Oct 2024 11:03:23 +0000</pubDate>
      <link>https://forem.com/aviator_co/how-to-improve-dora-metrics-as-a-release-engineer-4and</link>
      <guid>https://forem.com/aviator_co/how-to-improve-dora-metrics-as-a-release-engineer-4and</guid>
      <description>&lt;p&gt;Ensuring efficient, reliable, high-quality software releases is crucial in software development. This is where release engineering comes into play. This blog will explore release engineering, its importance, and how release engineers can significantly influence key DevOps Research and Assessment (DORA) metrics.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What is Release Engineering?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Release engineering is a specialized discipline within software development focused on the processes and practices that ensure software is built, packaged, and delivered efficiently and reliably. It involves coordinating various aspects of software creation, from source code management to deployment.&lt;/p&gt;

&lt;p&gt;A release engineer ensures that software releases are smooth and efficient, maintaining high standards of quality and reliability. They manage the build and deployment pipelines, automate repetitive tasks, and work closely with development, operations, and QA teams.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Key Components of Release Engineering&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Version Control&lt;/strong&gt;: Managing code changes using systems like Git and implementing branching strategies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Build Automation:&lt;/strong&gt; Utilizing tools like Maven, Gradle, or Make to automate the build process alongside CI tools like Jenkins or GitHub Actions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Artifact Management&lt;/strong&gt;: Storing build artifacts in repositories such as JFrog Artifactory, Nexus, or AWS S3.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Testing:&lt;/strong&gt; Implementing automated testing strategies, including unit, integration, and end-to-end tests.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deployment Automation:&lt;/strong&gt; Using CD tools like Spinnaker or ArgoCD to automate deployments, managed with IaC tools like Terraform or Ansible.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Configuration Management:&lt;/strong&gt; Handling environment-specific configurations with tools like HashiCorp Consul or AWS Parameter Store.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Monitoring and Logging:&lt;/strong&gt; Employing tools like Prometheus, Grafana, or the ELK Stack to monitor performance and centralize logging.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Importance of Release Engineering&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Release engineering is crucial for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Ensuring efficient and reliable software releases – Streamlined processes reduce downtime and ensure consistent releases.&lt;/li&gt;
&lt;li&gt;  Reducing human error through automation – Automation minimizes the risk of errors, ensuring more predictable outcomes.&lt;/li&gt;
&lt;li&gt;  Enhancing collaboration – Bridging gaps between development, operations, and QA teams improves overall workflow.&lt;/li&gt;
&lt;li&gt;  Quick rollback and recovery mechanisms – Effective release engineering ensures that issues can be swiftly addressed and systems restored.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;DORA (DevOps Research and Assessment) metrics are essential performance indicators used to gauge the effectiveness of software delivery and operational practices. They provide insights into the performance and health of DevOps processes.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Importance of DORA Metrics&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;DORA metrics are essential because they help organizations understand their software delivery performance, identify areas for improvement, and drive continuous improvement. They offer a data-driven approach to enhancing efficiency and reliability.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Key DORA Metrics&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Deployment Frequency&lt;/strong&gt;: Deployment frequency measures how often new code is deployed to production. Higher frequency indicates a more agile and responsive development process.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Lead Time for Changes&lt;/strong&gt;: Lead time for changes measures the duration from when a code change is committed until it is deployed to production. Shorter lead times indicate a more efficient development pipeline.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Change Failure Rate&lt;/strong&gt;: Change failure rate indicates the percentage of deployments that lead to a failure in the production environment. Lower rates indicate more reliable releases.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Mean Time to Recovery (MTTR)&lt;/strong&gt;: MTTR calculates the duration required to restore service following a failure. A lower MTTR signifies a more resilient and responsive system.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Real-world Implementation&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;We will use a PowerShell script to calculate the four critical metrics from Azure DevOps pipelines. The computed results will be stored in a Log Analytics workspace. We will use Grafana as the data visualization tool to build the dashboard.&lt;/p&gt;

&lt;p&gt;Below is the sample dashboard we can see after adding Azure data sources in Grafana. Snippets from the PowerShell scripts used to compute each metric are also below. &lt;/p&gt;

&lt;p&gt;The complete code can be found at:&lt;br&gt;&lt;br&gt;
&lt;a href="https://github.com/rajputrishabh/DORA-Metrics" rel="noopener noreferrer"&gt;https://github.com/rajputrishabh/DORA-Metrics&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXeJpFTjdbie8N9lJ5oLVZSBF4P1POTJPMkJcV_W00il3-rsMxYUa8I4G9lyCmeGq_j5coxFbVn-4_6TIuNQQHsPsWAG-EteRkToZ3Ti-NKjalfFJ5_OFDOAU5QRblkG9CF5RUF7o0jyplw4wGn0s3lvzF0%3Fkey%3DjF0H3Qaei_wG5Mri8T2v0Q" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXeJpFTjdbie8N9lJ5oLVZSBF4P1POTJPMkJcV_W00il3-rsMxYUa8I4G9lyCmeGq_j5coxFbVn-4_6TIuNQQHsPsWAG-EteRkToZ3Ti-NKjalfFJ5_OFDOAU5QRblkG9CF5RUF7o0jyplw4wGn0s3lvzF0%3Fkey%3DjF0H3Qaei_wG5Mri8T2v0Q"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Calculating Mean Time to Recovery&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;To calculate MTTR, sum the time taken to recover from all incidents over a given period and divide by the number of incidents.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;MTTR = Total downtime / Number of incidents&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#calculate MTTR per day
  if($maintainencetime -eq 0){
    $maintainencetime=1
  }
  if($failureCount -gt 0 -and $noofdays -gt 0){
    $MeanTimetoRestore=($maintainencetime/$failureCount)
  }
  $dailyDeployment=1
  $hourlyrestoration=(1/24)
  $weeklyDeployment=(1/7)
 
  #calculate Maturity
  $rating=""

  if($MeanTimeToRestore -eq 0){
  $rating=" NA"
  }
  elseif($MeanTimeToRestore -lt $hourlyrestoration){
    $rating="Elite"
  }
  elseif($MeanTimeToRestore -lt $dailyDeployment){
    $rating="High"
  }
  elseif($MeanTimeToRestore -lt $weeklyDeployment){
    $rating ="Medium"
  }
  elseif($MeanTimeToRestore -ge $weeklyDeployment){
  $rating="Low"
  } 
  if($failureCount -gt 0 -and $noofdays -gt 0){
    Write-Output "Mean Time to Restore of $($pipelinename) for $($stgname) for release id $($relid) over last $($noofdays) days, is $($displaymetric) $($displayunit), with DORA rating of '$rating'"
  }
  else{
    Write-Output "Mean Time to Restore of $($pipelinename) for $($stgname) for release id $($relid) over last $($noofdays) days, is $($displaymetric) $($displayunit), with DORA rating of '$rating'"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Calculating Deployment Frequency&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Count the number of deployments to production over a specific period to calculate deployment frequency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deployment Frequency = Number of deployments / Time period&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#calculate DF per day
  $deploymentsperday=0
  if($releasetotal -gt 0 -and $noofdays -gt 0){
  $deploymentsperday=$timedifference/$releasetotal
  }

  $dailyDeployment=1
  $weeklyDeployment=(1/7)
  $monthlyDeployment=(1/30)
  $everysixmonthDeployment=(1/(6*30))
  $yearlyDeployment=(1/365)
 
  #calculate Maturity
  $rating=""
  if($deploymentsperday -eq 0){
    $rating=" NA"
  }
  elseif($deploymentsperday -lt $dailyDeployment){
    $rating="Elite"
  }
  elseif($deploymentsperday -ge $dailyDeployment -and $deploymentsperday -gt $weeklyDeployment){
    $rating="High"
  }
  elseif($deploymentsperday -ge $weeklyDeployment -and $deploymentsperday -gt $monthlyDeployment){
      $rating ="Medium"
  }
  elseif($deploymentsperday -ge $monthlyDeployment -and $deploymentsperday -ge $everysixmonthDeployment){
      $rating="Low"
  }
  if($releasetotal -gt 0 -and $noofdays -gt 0){
    Write-Output "Deployment frequency of $($pipelinename) for $($stgname) for release id $($relid) over last $($noofdays) days, is $($displaymetric) $($displayunit), with DORA rating of '$rating'"
  }
  else{
    Write-Output "Deployment frequency of $($pipelinename) for $($stgname) for release id $($relid) over last $($noofdays) days, is $($displaymetric) $($displayunit), with DORA rating of '$rating'"
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Calculating Change Failure Rate&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;To calculate the change failure rate, divide the number of failed deployments by the total number of deployments and multiply by 100 to get a percentage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Change Failure Rate (%) = (Failed deployments / Total deployments) * 100&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The PowerShell script to calculate CFR is in the &lt;a href="https://github.com/rajputrishabh/DORA-Metrics/blob/main/DoraMetrcis-release-ChangeFailureRate.ps1" rel="noopener noreferrer"&gt;repository linked above&lt;/a&gt;.&lt;/p&gt;
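The formula itself is a one-liner; here is a sketch in JavaScript (the linked repository implements the same idea in PowerShell, against Azure DevOps data):

```javascript
// Change Failure Rate = (failed deployments / total deployments) * 100.
function changeFailureRate(failedDeployments, totalDeployments) {
  if (totalDeployments === 0) return 0; // no deployments, nothing to fail
  return (failedDeployments / totalDeployments) * 100;
}

// e.g. 3 failures across 60 deployments is a 5% change failure rate
const cfr = changeFailureRate(3, 60);
```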

&lt;h3&gt;
  
  
  &lt;strong&gt;Calculating Lead Times for Changes&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;To calculate the lead time for changes, measure the time from code commit to deployment for each change and calculate the average.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lead Time for Changes = Sum of (Deployment time – Commit time) / Number of changes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The PowerShell script to calculate LTC can be found in the &lt;a href="https://github.com/rajputrishabh/DORA-Metrics/blob/main/DoraMetrics-release-LeadTimetoChange.ps1" rel="noopener noreferrer"&gt;repository linked above&lt;/a&gt;.&lt;/p&gt;
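As with CFR, the computation is a simple average. A JavaScript sketch with hypothetical timestamps (the repository's PowerShell version reads real commit and deployment times from Azure DevOps):

```javascript
// Lead Time for Changes = sum of (deploy time - commit time) / number of changes.
// Timestamps are milliseconds since the epoch.
function leadTimeForChanges(changes) {
  if (changes.length === 0) return 0;
  const total = changes.reduce(
    (sum, c) => sum + (c.deployedAt - c.committedAt),
    0
  );
  return total / changes.length;
}

const HOUR = 60 * 60 * 1000;
// One change took 2 hours, another 4 hours: average lead time is 3 hours.
const avg = leadTimeForChanges([
  { committedAt: 0, deployedAt: 2 * HOUR },
  { committedAt: 0, deployedAt: 4 * HOUR },
]);
```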

&lt;h2&gt;
  
  
  &lt;strong&gt;How Release Engineers Can Influence DORA Metrics&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Release engineers play a pivotal role in shaping and improving key DORA metrics, which are crucial for assessing the efficiency and reliability of software delivery. Below, we delve into practical strategies with real-world examples from companies like Etsy, Google, Netflix, and Amazon to illustrate how release engineers can positively impact Deployment Frequency, Change Failure Rate, Lead Time for Changes, and Mean Time to Recovery.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Improving Deployment Frequency&lt;/strong&gt; – Etsy
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Example&lt;/strong&gt;: Implementing CI/CD Pipelines at Etsy&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Strategy&lt;/strong&gt;: To enhance deployment frequency, Etsy adopted continuous integration and continuous deployment (CI/CD) practices and several tools, such as &lt;a href="https://github.com/etsy/TryLib" rel="noopener noreferrer"&gt;Try&lt;/a&gt; and &lt;a href="https://github.com/etsy/deployinator" rel="noopener noreferrer"&gt;Deployinator&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Implementation&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Automation&lt;/strong&gt;: They automated their build, test, and deployment processes using Jenkins and custom scripts, enabling multiple daily deployments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Feature Toggles&lt;/strong&gt;: Introduced feature toggles to safely deploy incomplete features without affecting end users.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Outcome&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Etsy achieved the capability to deploy code changes to production around 50 times a day, significantly increasing their deployment frequency.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;a href="https://www.simform.com/blog/etsy-devops-case-study/" rel="noopener noreferrer"&gt;https://www.simform.com/blog/etsy-devops-case-study/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://codeascraft.com/" rel="noopener noreferrer"&gt;https://codeascraft.com/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Reducing Change Failure Rate&lt;/strong&gt; – Google
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Example&lt;/strong&gt;: Comprehensive Testing at Google&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Strategy&lt;/strong&gt;: Google emphasizes comprehensive automated testing to reduce the change failure rate.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Implementation&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Testing&lt;/strong&gt;: Google integrated unit tests, integration tests, and end-to-end tests into its CI pipeline. It uses tools like GoogleTest and Selenium for various levels of testing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Code Reviews&lt;/strong&gt;: Established a rigorous code review process where peers review each change before it is merged, ensuring high code quality.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Outcome&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;By catching issues early in the development process, Google reduced the number of failed deployments, lowering their change failure rate.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;a href="https://testing.googleblog.com/" rel="noopener noreferrer"&gt;https://testing.googleblog.com/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Shortening Lead Time for Changes&lt;/strong&gt; – Netflix
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Example&lt;/strong&gt;: Streamlined Build Process at Netflix&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Strategy&lt;/strong&gt;: Netflix optimized its build and deployment processes to shorten the lead time for changes.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Implementation&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Optimized Pipelines&lt;/strong&gt;: Netflix used Spinnaker, an open-source multi-cloud continuous delivery platform, to streamline their deployment pipelines.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Microservices Architecture&lt;/strong&gt;: Adopted a microservices architecture, which allowed smaller, more manageable changes to be deployed independently.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Outcome&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Netflix reduced its lead time for changes from days to minutes, allowing for rapid iteration and deployment.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;a href="https://thenewstack.io/netflix-built-spinnaker-high-velocity-continuous-delivery-platform/" rel="noopener noreferrer"&gt;https://thenewstack.io/netflix-built-spinnaker-high-velocity-continuous-delivery-platform/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://blog.spinnaker.io/tagged/netflix" rel="noopener noreferrer"&gt;https://blog.spinnaker.io/tagged/netflix&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://netflixtechblog.com/" rel="noopener noreferrer"&gt;https://netflixtechblog.com/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Reducing Mean Time to Recovery (MTTR)&lt;/strong&gt; – Amazon
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Example&lt;/strong&gt;: Robust Monitoring and Quick Rollback at Amazon&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Strategy&lt;/strong&gt;: Amazon focuses on robust monitoring and quick rollback mechanisms to minimize MTTR.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Implementation&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Monitoring&lt;/strong&gt;: Extensive monitoring was implemented using AWS CloudWatch, enabling proactive detection of issues.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rollback Mechanisms&lt;/strong&gt;: Developed automated rollback procedures using AWS Lambda functions and CloudFormation scripts to revert to a previous stable state quickly.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Outcome&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Amazon reduced their MTTR significantly, ensuring quick recovery from incidents and maintaining high service availability.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;a href="https://aws.amazon.com/documentation/" rel="noopener noreferrer"&gt;https://aws.amazon.com/documentation/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Deployment Frequency and Lead Time for Changes evaluate the speed of delivery, whereas Change Failure Rate and Time to Restore Service evaluate stability. By tracking and continuously improving these metrics, teams can achieve significantly better business results. Based on these metrics, DORA categorizes teams into Elite, High, Medium, and Low performers, finding that Elite teams are twice as likely to achieve or surpass their organizational performance goals.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXdCBhJhKW4iOTrjVNHQYy_gufWeBEsGaDqPLKQVVs0QcZzLCk6GMkQY_vtKM-zfmsublMNSUqczqKpBVnL1rk0duhRvqlS77NBb_3Ser60BJJNAO6JVqgx_YYUOvyjRjDa6cloahNLtXQk2u4Bpn4MBBobg%3Fkey%3DjF0H3Qaei_wG5Mri8T2v0Q" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXdCBhJhKW4iOTrjVNHQYy_gufWeBEsGaDqPLKQVVs0QcZzLCk6GMkQY_vtKM-zfmsublMNSUqczqKpBVnL1rk0duhRvqlS77NBb_3Ser60BJJNAO6JVqgx_YYUOvyjRjDa6cloahNLtXQk2u4Bpn4MBBobg%3Fkey%3DjF0H3Qaei_wG5Mri8T2v0Q"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Pitfalls of DORA Metrics for Release Engineers&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;While DORA metrics provide valuable insights into software delivery performance and operational practices, they come with challenges and potential pitfalls. Understanding these can help release engineers avoid common mistakes and make more informed decisions.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Overemphasis on Metrics Over Quality&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Pitfall&lt;/strong&gt;: Focusing solely on improving DORA metrics can lead to overlooking the overall quality of the software. Teams might rush changes to increase deployment frequency or reduce lead time, compromising the product’s robustness and security. This is a classic case of Goodhart’s Law: “when a measure becomes a target, it ceases to be a good measure.”&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;: Balance the focus on metrics with a commitment to maintaining high-quality standards. Implement thorough testing and code review processes to ensure quality is not sacrificed for speed.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Misinterpreting Metrics&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Pitfall&lt;/strong&gt;: DORA metrics can be misinterpreted without context. For example, a high deployment frequency might seem positive but could instead reflect frequent hotfixes for recurring issues, pointing to underlying problems rather than improvements.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;: Analyze metrics within the context of overall performance and other relevant data. Use complementary metrics and qualitative insights to view the team’s effectiveness comprehensively.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Neglecting Team Morale&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Pitfall&lt;/strong&gt;: Intense focus on improving DORA metrics can result in burnout and decreased morale among team members. Pushing for more frequent deployments or faster lead times without considering workload can negatively impact the team’s well-being.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;: Foster a healthy work environment by setting realistic goals and ensuring adequate support and resources for the team. Encourage open communication about workloads and stress levels.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Lack of Actionable Insights&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Pitfall&lt;/strong&gt;: Collecting and reporting DORA metrics without deriving actionable insights can lead to data without purpose. Teams might track metrics but fail to implement changes based on the findings.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;: Review and analyze DORA metrics regularly to identify trends and areas for improvement. Using the insights obtained from the metrics, develop and execute action plans.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Insufficient Tooling and Automation&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Pitfall&lt;/strong&gt;: Inadequate tooling and automation can hinder efforts to improve DORA metrics. Manual processes and outdated tools can slow down deployments and increase lead times.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;: Invest in modern CI/CD tools, automated testing frameworks, and infrastructure as code solutions. Continuously evaluate and update the toolchain to ensure it supports efficient workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Release engineering is a cornerstone of modern software development, ensuring that software is released efficiently, reliably, and with high quality. Release engineers can significantly enhance their software delivery performance by understanding and effectively utilizing DORA metrics. However, it’s essential to be mindful of the potential pitfalls and to balance metric improvement with maintaining overall quality and team morale. Best practices and utilizing appropriate tools can help release engineers drive meaningful improvements and achieve better outcomes.&lt;/p&gt;

&lt;p&gt;To effectively influence these metrics, release engineers should focus on:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Automation&lt;/strong&gt;: Automate build, test, and deployment processes using robust CI/CD pipelines to increase deployment frequency and reduce lead times.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Comprehensive Testing&lt;/strong&gt;: Implement comprehensive automated testing to catch issues early and lower the change failure rate.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Efficient Rollback Mechanisms&lt;/strong&gt;: Establish quick rollback strategies and robust monitoring to minimize MTTR.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Continuous Improvement&lt;/strong&gt;: Regularly review and iterate on processes based on DORA metrics to foster continuous improvement and ensure high-quality software delivery.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Frequently Asked Questions&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Q1: What are DORA metrics?&lt;/p&gt;

&lt;p&gt;DORA (DevOps Research and Assessment) metrics are essential performance indicators for evaluating the effectiveness of software delivery and operational practices. The four main DORA metrics are Deployment Frequency (DF), Lead Time for Changes, Change Failure Rate, and Mean Time to Recovery (MTTR).&lt;/p&gt;

&lt;p&gt;Q2: Why are DORA metrics important?&lt;/p&gt;

&lt;p&gt;DORA metrics provide valuable insights into the performance and health of software delivery processes. They help identify bottlenecks, measure improvements, and drive continuous improvement in DevOps practices, leading to more efficient and reliable software delivery.&lt;/p&gt;

&lt;p&gt;Q3: How often should I review and analyze DORA metrics?&lt;/p&gt;

&lt;p&gt;Regularly review DORA metrics, ideally on a weekly or bi-weekly basis, to continuously monitor performance and identify areas for improvement. Use these reviews to inform decisions and drive ongoing enhancements in the software delivery process.&lt;/p&gt;

&lt;p&gt;Q4: What tools can help improve DORA metrics?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  CI/CD Tools: Jenkins, GitHub Actions, GitLab CI, CircleCI&lt;/li&gt;
&lt;li&gt;  Build Automation Tools: Maven, Gradle, Make, Ant&lt;/li&gt;
&lt;li&gt;  Artifact Management: JFrog Artifactory, Nexus, AWS S3&lt;/li&gt;
&lt;li&gt;  Configuration Management: HashiCorp Consul, Spring Cloud Config, AWS Parameter Store&lt;/li&gt;
&lt;li&gt;  Monitoring and Logging: Prometheus, Grafana, New Relic, ELK Stack&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Q5: How can I measure the current state of my DORA metrics?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Deployment Frequency: Measure the number of deployments within a defined timeframe.&lt;/li&gt;
&lt;li&gt;  Lead Time for Changes: Measure the time from code commit to production deployment.&lt;/li&gt;
&lt;li&gt;  Change Failure Rate: Divide the number of failed deployments by the total deployments.&lt;/li&gt;
&lt;li&gt;  Mean Time to Recovery: Track and average the time from incident detection to resolution.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Q6: What is release engineering?&lt;/p&gt;

&lt;p&gt;Release engineering is a discipline within software development focused on the processes and practices for building, packaging, and delivering software efficiently and reliably. It involves coordinating various aspects of software creation, from source code management to deployment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.aviator.co/releases" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.aviator.co%2Fblog%2Fwp-content%2Fuploads%2F2024%2F08%2Fblog-cta-9Release_CTA.svg" alt="aviator releases"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>sre</category>
      <category>productivity</category>
    </item>
    <item>
      <title>How to configure IAM using Terraform</title>
      <dc:creator>Ibrahim Salami</dc:creator>
      <pubDate>Fri, 06 Sep 2024 15:36:29 +0000</pubDate>
      <link>https://forem.com/aviator_co/how-to-configure-iam-using-terraform-4hif</link>
      <guid>https://forem.com/aviator_co/how-to-configure-iam-using-terraform-4hif</guid>
      <description>&lt;p&gt;Organizations or individuals typically manage IAM using consoles and hesitate to use Infrastructure-as-code (IaC) as it is complex and sensitive to define IAM policies due to security risks. With frequent dynamic changes, you do not get immediate feedback. And more expertise is needed to configure and manage IAM rules with IaC. However, configuring IAM though IaC also have several benefits. &lt;/p&gt;

&lt;p&gt;In this blog we’ll explore those benefits, discuss strategies for IAM management via Terraform, explain why implementing Zero Trust policies within IAM is crucial for security, and how to enforce IAM best practices, Policy-as-code, and IAM governance.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Why manage IAM through Infrastructure-as-code?&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Automation and consistency&lt;/strong&gt; – IaC brings automation, consistency, repeatability, and versioning to IAM policy and role management.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Audit trails&lt;/strong&gt; – It allows you to maintain a comprehensive audit trail of changes to IAM configurations. This helps with compliance requirements and allows you to easily track who made changes, when they were made, and why. &lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Least Privileges&lt;/strong&gt; – Terraform’s expressive language allows for defining complex IAM policies with fine-grained control over permissions. Teams can more easily provision their own access in a controlled manner through pull requests, which then undergo a review process before being applied, fostering a self-service infrastructure model.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Zero Trust policies within IAM&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Identity and Access Management (IAM) is a critical component of Zero-Trust security, which assumes that no user or service is trusted by default. Zero trust in IAM means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Never Trust, Always Verify&lt;/strong&gt; – There is no automatic trust. Always verify everyone who is trying to access the resource.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Least Privilege Access&lt;/strong&gt; – Limit access to resources to the minimum necessary for a specific task, reducing the blast radius. This prevents granting unnecessary permissions to users or roles, which can lead to breaking changes.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;MFA&lt;/strong&gt; – Multi-factor authentication adds an extra security layer to IAM users and roles.&lt;/li&gt;
&lt;/ul&gt;
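
&lt;p&gt;As an illustration of the MFA point, a minimal sketch of how an MFA requirement can be encoded as a condition in a Terraform-managed policy is shown below; the resource and policy names are hypothetical, not part of this tutorial’s setup:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical policy: deny all actions when the caller has not authenticated with MFA
resource "aws_iam_policy" "require_mfa" {
  name   = "require_mfa"
  policy = jsonencode({
    Version = "2012-10-17",
    Statement = [{
      Effect   = "Deny",
      Action   = "*",
      Resource = "*",
      Condition = {
        BoolIfExists = { "aws:MultiFactorAuthPresent" = "false" }
      }
    }]
  })
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;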

&lt;h2&gt;
  
  
  &lt;strong&gt;Setting Up AWS IAM Policies with Terraform&lt;/strong&gt;  
&lt;/h2&gt;

&lt;p&gt;Setting up AWS IAM policies with Terraform involves defining your IAM resources in Terraform configuration files, applying best practices for security and organization, and using Terraform’s capabilities to manage these resources as code. Below, we’ll outline a basic approach to setting up IAM policies in AWS using Terraform, including an example configuration.&lt;/p&gt;

&lt;p&gt;The Terraform configs covered in this blog post are also compatible with OpenTofu.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Prerequisites&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Before you start, ensure you have the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;a href="https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli" rel="noopener noreferrer"&gt;Terraform&lt;/a&gt; is installed on your machine.&lt;/li&gt;
&lt;li&gt;  An AWS account and AWS CLI configured with access credentials.&lt;/li&gt;
&lt;li&gt;  Basic knowledge of IAM concepts (e.g., policies, roles, users) and Terraform syntax.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 1: Initialize Terraform Project&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Create a new directory for your Terraform project and initialize it with a &lt;strong&gt;main.tf&lt;/strong&gt; file. Then, run &lt;em&gt;terraform init&lt;/em&gt; in the project directory to prepare your directory for Terraform operations.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 2: Define the AWS Provider&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In &lt;strong&gt;main.tf&lt;/strong&gt;, start by defining the AWS provider. This specifies which version of the AWS provider to use and configures the region and other provider settings.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider “aws” {
  region = “us-east-1”
  #AWS account access keys credentials
  access_key = “A***************”
  secret_key = “U******************”
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Step 3: Define IAM Policy&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Define an IAM policy using the &lt;strong&gt;aws_iam_policy&lt;/strong&gt; resource. You need to provide a name and a policy document. The policy document can be defined inline using the &lt;em&gt;&amp;lt;&amp;lt;EOF … EOF&lt;/em&gt; syntax, or it can be loaded from a file using the &lt;em&gt;file()&lt;/em&gt; function.&lt;/p&gt;

&lt;p&gt;Example of an inline policy definition:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create IAM policy to allow S3 read access
resource “aws_iam_policy” “s3_read_policy” {
  name        = “s3_read_policy”
  description = “Allows read access to files in the specified S3 bucket”
  policy      = &amp;lt;&amp;lt;EOF
{
  “Version”: “2012-10-17”,
  “Statement”: [
    {
      “Effect”: “Allow”,
      “Action”: [
        “s3:GetObject”,
        “s3:ListBucket”
      ],
      “Resource”: [
        “arn:aws:s3:::your-bucket-name/*”,
        “arn:aws:s3:::your-bucket-name”
      ]
    }
  ]
}
EOF
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can see the &lt;strong&gt;iam_policy&lt;/strong&gt; creation by running the &lt;em&gt;&lt;code&gt;terraform plan&lt;/code&gt;&lt;/em&gt; command.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwbijcz1rm6klz9i2roxv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwbijcz1rm6klz9i2roxv.png" alt="terraform plan" width="800" height="96"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;terraform plan command&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq7bwo286jihc9el475ui.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq7bwo286jihc9el475ui.png" alt="s3 read access" width="800" height="901"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;output from terraform plan&lt;/p&gt;
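
&lt;p&gt;The file-based variant mentioned above can be sketched as follows; the &lt;strong&gt;s3_read_policy.json&lt;/strong&gt; filename is an assumption, and the file would contain the same JSON policy document shown in the heredoc:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Alternative sketch: load the policy document from a JSON file (assumed filename)
resource "aws_iam_policy" "s3_read_policy_from_file" {
  name        = "s3_read_policy"
  description = "Allows read access to files in the specified S3 bucket"
  policy      = file("${path.module}/s3_read_policy.json")
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;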

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 4: Create IAM user&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Define an IAM user&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create IAM user
resource "aws_iam_user" "iam_user" {
  name = "iam_user"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can see the &lt;strong&gt;iam_user&lt;/strong&gt; creation by running the &lt;em&gt;&lt;code&gt;terraform plan&lt;/code&gt;&lt;/em&gt; command.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fydxzbewen7s4m5n4y944.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fydxzbewen7s4m5n4y944.png" alt="iam user output" width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;output for terraform plan&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 5: Create an IAM Role and Attach the Policy&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Define an IAM role and set assume role policy to allow the IAM user to assume the role&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create an IAM role
resource "aws_iam_role" "iam_role" {
  name = "iam-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [{
      Action = "sts:AssumeRole",
      Principal = {
        AWS = "arn:aws:iam::AWS_ACCOUNT_ID:user/${aws_iam_user.iam_user.name}" # Replace AWS_ACCOUNT_ID with your AWS account ID
      },
      Effect = "Allow",
      Sid    = ""
    }]
  })
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can see the &lt;strong&gt;iam_role&lt;/strong&gt; creation by running the &lt;em&gt;&lt;code&gt;terraform plan&lt;/code&gt;&lt;/em&gt; command.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmmw4riraxsdmqddsxi9u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmmw4riraxsdmqddsxi9u.png" alt="iam role output" width="800" height="816"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;output for terraform plan&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 6: Attach IAM policy to IAM role&lt;/strong&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Attach IAM policy to IAM role
resource "aws_iam_policy_attachment" "s3_read_attach" {
  name       = "s3_read_attach"
  roles      = [aws_iam_role.iam_role.name]
  policy_arn = aws_iam_policy.s3_read_policy.arn
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can see the &lt;strong&gt;iam_policy_attachment&lt;/strong&gt; creation by running the &lt;em&gt;terraform plan&lt;/em&gt; command.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhontdyigzf7cv0pykwme.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhontdyigzf7cv0pykwme.png" alt="iam s3 policy" width="800" height="289"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;iam s3 policy – terraform plan output&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 7: Apply Configuration&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The Terraform configuration has been defined. Apply it using the command &lt;em&gt;&lt;code&gt;terraform apply&lt;/code&gt;&lt;/em&gt; in your project directory. This command will prompt you to review the proposed changes and confirm them. Upon confirmation, Terraform will create the resources in AWS according to your configuration.&lt;/p&gt;

&lt;p&gt;After running the final &lt;em&gt;&lt;code&gt;terraform apply&lt;/code&gt;&lt;/em&gt; command, we can see the &lt;strong&gt;iam-role, iam_user,&lt;/strong&gt; and &lt;strong&gt;s3_read_policy&lt;/strong&gt; resources. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fchhbioj5ur28v4zngosf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fchhbioj5ur28v4zngosf.png" alt="aws user" width="800" height="157"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk8hgx4saf8cg006qgnyc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk8hgx4saf8cg006qgnyc.png" alt="aws iam role" width="800" height="142"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwv7yd2mm5vsoe5bcpuse.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwv7yd2mm5vsoe5bcpuse.png" alt="aws s3 policy" width="800" height="157"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Utilizing Terraform’s templatefile function for dynamic policy generation&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The &lt;em&gt;templatefile()&lt;/em&gt; function in Terraform allows you to dynamically generate configuration files using templates. You can use this function to generate IAM policy documents dynamically, which can be helpful in cases where policies need to be customized based on dynamic inputs.&lt;/p&gt;

&lt;p&gt;Here’s an example of how you can use the &lt;em&gt;templatefile()&lt;/em&gt; function to dynamically generate an IAM policy document for S3 read access:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider “aws” {
  region = “us-east-1”
  #AWS account access keys credentials
  access_key = “A***************”
  secret_key = “U******************”
}

# Define IAM policy template. This works prior to terraform 0.12
data “template_file” “s3_read_policy_template” {
  template = &amp;lt;&amp;lt;EOF
{
  “Version”: “2012-10-17”,
  “Statement”: [
    {
      “Effect”: “Allow”,
      “Action”: [
        “s3:GetObject”,
        “s3:ListBucket”
      ],
      “Resource”: [
        “${bucket_arn}/*”,
        “${bucket_arn}”
      ]
    }
  ]
}
EOF
  vars = {
    bucket_arn = “arn:aws:s3:::your-bucket-name”
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On Terraform 0.12 and later, you can instead store the above template snippet in a separate &lt;strong&gt;s3_read_policy.tmpl&lt;/strong&gt; file and reference it as shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Content of s3_read_policy.tmpl file
 {
  “Version”: “2012-10-17”,
  “Statement”: [
    {
      “Effect”: “Allow”,
      “Action”: [
        “s3:GetObject”,
        “s3:ListBucket”
      ],
      “Resource”: [
        “${bucket_arn}/*”,
        “${bucket_arn}”
      ]
    }
  ]
}

# Create IAM policy using the template
resource “aws_iam_policy” “s3_read_policy” {
  provider = aws.account_a
  name     = “s3_read_policy”
  policy   = templatefile( “${path.module}/template_file.tpl”,
{
  bucket_arn = “arn:aws:s3:::BUCKET_NAME” #provide the s3 bucket name
} )
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can see the &lt;strong&gt;s3_read_policy&lt;/strong&gt; creation by running the &lt;em&gt;terraform plan&lt;/em&gt; command.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq7bwo286jihc9el475ui.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq7bwo286jihc9el475ui.png" alt="iam s3 read policy" width="800" height="901"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;iam s3 read policy – terraform plan output&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create IAM user
resource "aws_iam_user" "iam_user" {
  name = "iam_user"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can see the &lt;strong&gt;iam_user&lt;/strong&gt; creation by running the &lt;em&gt;terraform plan&lt;/em&gt; command.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fydxzbewen7s4m5n4y944.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fydxzbewen7s4m5n4y944.png" alt="iam user output" width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;output for terraform plan&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create an IAM role
resource "aws_iam_role" "reader_role" {
  name = "reader_role"
  assume_role_policy = jsonencode({
    Version   = "2012-10-17",
    Statement = [{
      Action    = "sts:AssumeRole",
      Principal = {
        AWS = "arn:aws:iam::AWS_ACCOUNT_ID:user/${aws_iam_user.iam_user.name}" # Replace AWS_ACCOUNT_ID with your AWS account ID
      },
      Effect    = "Allow",
      Sid       = "AssumeRole"
    }]
  })
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can see the &lt;strong&gt;iam_role&lt;/strong&gt; creation by running the &lt;em&gt;terraform plan&lt;/em&gt; command.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmmw4riraxsdmqddsxi9u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmmw4riraxsdmqddsxi9u.png" alt="iam role output" width="800" height="816"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;output for terraform plan&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Attach IAM policy to IAM role
resource "aws_iam_policy_attachment" "s3_read_attach" {
  name       = "s3_read_attach"
  roles      = [aws_iam_role.reader_role.name]
  policy_arn = aws_iam_policy.s3_read_policy.arn
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can see the &lt;strong&gt;iam_policy_attachment&lt;/strong&gt; creation by running the &lt;em&gt;terraform plan&lt;/em&gt; command.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6xhqkkh4z0pin2i5rq4z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6xhqkkh4z0pin2i5rq4z.png" alt="s3 read policy" width="800" height="275"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;iam s3 policy – terraform plan output&lt;/p&gt;

&lt;p&gt;This configuration renders the content of the &lt;strong&gt;s3_read_policy.tmpl&lt;/strong&gt; file with the &lt;em&gt;templatefile()&lt;/em&gt; function and uses the result to create the IAM policy. You can adjust the file path, policy name, and bucket ARN per your use case.&lt;/p&gt;

&lt;p&gt;This approach allows you to generate IAM policies dynamically based on inputs or variables, providing flexibility in your Terraform configurations. Adjust the template and variables as needed for your specific use case.&lt;/p&gt;

&lt;p&gt;After running the final &lt;em&gt;terraform apply&lt;/em&gt; command, we can see the &lt;strong&gt;iam_user, reader_role,&lt;/strong&gt; and attached &lt;strong&gt;s3_read_policy&lt;/strong&gt; resources have been created in AWS, as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fchhbioj5ur28v4zngosf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fchhbioj5ur28v4zngosf.png" alt="aws user" width="800" height="157"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk8hgx4saf8cg006qgnyc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk8hgx4saf8cg006qgnyc.png" alt="aws iam role" width="800" height="142"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fohdz2uctbqwfbhscnso0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fohdz2uctbqwfbhscnso0.png" alt="aws reader role" width="800" height="354"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Cross-Account Management&lt;/strong&gt; 
&lt;/h2&gt;

&lt;p&gt;Cross-account access is particularly beneficial when organizations maintain multiple AWS accounts for different purposes, such as development, staging, and production.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Benefits of cross-account management&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  It simplifies managing user access across these accounts, which would otherwise become complex and cumbersome.&lt;/li&gt;
&lt;li&gt;  It helps to enforce least privilege principles by allowing administrators to define granular access controls using IAM roles. This ensures users can only access the resources required for their roles or responsibilities.&lt;/li&gt;
&lt;li&gt;  Cross-account access facilitates centralized user management and auditing.&lt;/li&gt;
&lt;li&gt;  Administrators can create and manage users centrally in an AWS management account, reducing the administrative overhead of managing user identities across multiple accounts. &lt;/li&gt;
&lt;li&gt;  Auditing and tracking user access become more straightforward as all access requests and actions are logged centrally in the AWS management account.&lt;/li&gt;
&lt;li&gt;  Cross-account access is crucial for ensuring streamlined operations in large organizations where a security team manages multiple AWS accounts centrally.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Unlocking Cross-Account Access in AWS with Terraform&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Let’s use Terraform to create an IAM user in AWS Account A and establish cross-account access with the “AssumeRole” action.&lt;/p&gt;

&lt;p&gt;In this example, we’ll create an IAM user in AWS Account A and configure a cross-account role in AWS Account B that allows the IAM user in AWS Account A to assume it and allow the CrossAccountUser in AWS Account A to read files from buckets in AWS Account B. We’ll need to define an IAM policy granting the necessary permissions and attach that policy to the cross-account role in AWS Account B.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Here’s how you can achieve this:&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  Set alias for Account A
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider “aws” {
  region = “us-east-1”
#AWS account A access key credentials
 access_key = “A***************”
 secret_key = “U******************”
  alias  = “account_a”
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;  Create an IAM user in AWS Account A
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource “aws_iam_user” “cross_account_user” {
  provider = aws.account_a
  name = “CrossAccountUser”
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;  Setup AWS Account B
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider “aws” {
  region = “us-east-1”
 #AWS account B access key credentials
 access_key = “A***************”
 secret_key = “U******************”
  alias  = “account_b”
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;  Define an IAM role in AWS Account B
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource “aws_iam_role” “cross_account_role” {
  provider = aws.account_b
  name = “CrossAccountRole”
  assume_role_policy = jsonencode({
    Version   = “2012-10-17”,
    Statement = [{
      Effect    = “Allow”,
      Principal = {
        AWS = “arn:aws:iam::Account_A_ID:user/CrossAccountUser”  # Replace [Account A ID] with AWS Account A’s AWS account ID
      },
      Action    = “sts:AssumeRole”,
    }]
  })
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
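&lt;p&gt;For reference, the &lt;em&gt;jsonencode&lt;/em&gt; call above renders an ordinary JSON trust policy document. Here is a minimal Python sketch of the same structure; the account ID shown is a documentation placeholder, not a real account:&lt;/p&gt;

```python
import json

def trust_policy(account_id, user_name):
    """Render the same JSON trust policy that Terraform's jsonencode produces."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{account_id}:user/{user_name}"},
            "Action": "sts:AssumeRole",
        }]
    }, indent=2)

# Example with a placeholder account ID (111122223333 is not a real account)
print(trust_policy("111122223333", "CrossAccountUser"))
```

&lt;p&gt;Rendering the document this way makes it easy to sanity-check the trust relationship before applying the Terraform.&lt;/p&gt;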



&lt;ul&gt;
&lt;li&gt;  Define IAM policy to allow reading files from S3 buckets in Account B
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Replace "bucket-name" with the name of your bucket in AWS Account B
resource “aws_iam_policy” “s3_read_policy” {
  provider = aws.account_b
  name = “S3ReadPolicy”
  policy = &amp;lt;&amp;lt;EOF
{
  “Version”: “2012-10-17”,
  “Statement”: [{
    “Effect”: “Allow”,
    “Action”: [
      “s3:GetObject”,
      “s3:ListBucket”
    ],
    “Resource”: [
      “arn:aws:s3:::bucket-name/*”,  
      “arn:aws:s3:::bucket-name”
    ]
  }]
}
EOF
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;  Attach IAM policy to the IAM role in AWS Account B
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource “aws_iam_role_policy_attachment” “s3_read_attach_policy” {
  provider = aws.account_b
  role       = aws_iam_role.cross_account_role.name
  policy_arn = aws_iam_policy.s3_read_policy.arn
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make sure to replace Account_A_ID with the Account A AWS account ID and “bucket-name” with the name of the &lt;strong&gt;s3 bucket&lt;/strong&gt; in AWS Account B that you want to grant access to. Also, ensure that the bucket policy in AWS Account B allows access from the role &lt;strong&gt;CrossAccountRole&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;With this setup, the &lt;strong&gt;CrossAccountUser&lt;/strong&gt; in Account A can assume the &lt;strong&gt;CrossAccountRole&lt;/strong&gt; in Account B and access files from the specified &lt;strong&gt;s3&lt;/strong&gt; bucket in Account B.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Enforcing IAM Best Practices with Policy-as-Code&lt;/strong&gt; 
&lt;/h2&gt;

&lt;p&gt;Enforcing IAM Best Practices with Policy-as-Code ensures that security policies are consistently applied across an organization’s cloud infrastructure. By codifying IAM policies, teams can automate enforcing security controls, reducing the risk of misconfigurations and unauthorized access.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.checkov.io/" rel="noopener noreferrer"&gt;checkov&lt;/a&gt; is one of the Policy-as-Code tools available in cloud security. It is an open-source static code analysis tool developed by Bridgecrew.&lt;/p&gt;

&lt;p&gt;It scans infrastructure as code (IaC) templates like Terraform and CloudFormation to detect security and compliance issues early. By analyzing configurations against predefined policies and industry standards, Checkov helps identify misconfigurations, vulnerabilities, and compliance violations. It focuses on cloud security, particularly in AWS, Azure, and GCP environments, and integrates seamlessly into CI/CD pipelines for proactive issue remediation before deployment.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Utilizing Checkov to detect IAM configuration issues early and prevent overly permissive policies&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Regarding IAM configuration issues, Checkov plays a crucial role in detecting overly permissive policies early in the development process.&lt;/p&gt;

&lt;p&gt;Here’s how Checkov helps in detecting IAM configuration issues, mainly focusing on preventing overly permissive policies:&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Static Analysis with Checkov: Configuring Checkov for IAM policy scans.&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Let us set up Checkov to scan the following Terraform configuration for potential security risks and misconfigurations.&lt;/p&gt;

&lt;p&gt;Example Terraform code we’ll be analyzing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  AWS Provider Configuration – Set the AWS &lt;strong&gt;region to us-east-1&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider “aws” {
  region = “us-east-1”
  access_key = “A*********************”
  secret_key =   “U****************************”
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;  IAM Policy for S3 Read Access – Create an IAM policy named &lt;strong&gt;s3_read_policy&lt;/strong&gt; that allows read access (s3:GetObject, s3:ListBucket) to a specified S3 bucket.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource “aws_iam_policy” “s3_read_policy” {
  name        = “s3_read_policy”
  description = “Allows read access to files in the specified S3 bucket”
  policy      = &amp;lt;&amp;lt;EOF
{
  “Version”: “2012-10-17”,
  “Statement”: [
    {
      “Effect”: “Allow”,
      “Action”: [
        “s3:GetObject”,
        “s3:ListBucket”
      ],
      “Resource”: [
        “arn:aws:s3:::your-bucket-name/*”,
        “arn:aws:s3:::your-bucket-name”
      ]
    }
  ]
}
EOF
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;  IAM User Creation – Define an IAM user named &lt;strong&gt;iam_user&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource “aws_iam_user” “iam_user” {
  name = “iam_user”
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;  IAM Role and Assume Role Policy – Set up an IAM role named &lt;strong&gt;iam-role&lt;/strong&gt; with an assume role policy that allows the &lt;strong&gt;iam_user&lt;/strong&gt; to assume this role.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource “aws_iam_role” “iam_role” {
  name = “iam-role”
  assume_role_policy = jsonencode({
    Version = “2012-10-17”,
    Statement = [{
      Action = “sts:AssumeRole”,
      Principal = {
        AWS = “arn:aws:iam::AWS_ACCOUNT_ID:user/${aws_iam_user.iam_user.name}”
      },
      Effect = “Allow”,
      Sid    = “AssumeRole”
    }]
  })
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;  Policy Attachment – Attaches the &lt;strong&gt;s3_read_policy&lt;/strong&gt; to the &lt;strong&gt;iam_role&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource “aws_iam_policy_attachment” “s3_read_attach” {
  roles       = [aws_iam_role.iam_role.name]
  policy_arn = aws_iam_policy.s3_read_policy.arn
  name     = “Attaching s3 policy to iam role”
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Configuring Checkov for IAM Policy Scans&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Install Checkov&lt;/strong&gt; – First, ensure that checkov is installed in your environment. If not, install it via pip by running&lt;br&gt;
&lt;br&gt;
 &lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install checkov
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Run a Checkov Scan&lt;/strong&gt; – Checkov will produce a detailed report of any issues, including security vulnerabilities, best-practice violations, and compliance gaps. In our example, Checkov can identify potential risks in &lt;strong&gt;IAM policies&lt;/strong&gt;, such as overly broad permissions, and suggest mitigations.&lt;/p&gt;

&lt;p&gt;For a file – You can scan a single file using the &lt;em&gt;&lt;code&gt;checkov --file file_name.tf&lt;/code&gt;&lt;/em&gt; command.&lt;/p&gt;

&lt;p&gt;For a directory – To scan all files within a directory, change into the directory containing your Terraform files and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;checkov –d .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fceyxldm8ywahquhwns6h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fceyxldm8ywahquhwns6h.png" alt="checkov" width="800" height="467"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;checkov output&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8cv8nxexo40mlyi8l7nj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8cv8nxexo40mlyi8l7nj.png" alt="checkov output 2" width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frd8z17qicxxvavx8h37v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frd8z17qicxxvavx8h37v.png" alt="checkov output 3 " width="800" height="170"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Refining the Scan&lt;/strong&gt; – If you want to focus specifically on IAM-related checks, you can use Checkov’s --check flag to include or exclude certain checks by their IDs, tailoring the scan to your needs. For example, to ensure IAM policies that allow full “*-*” administrative privileges are not created, you can run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;checkov -d . --check CKV_AWS_62
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output for Terraform code example scan&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz4prxvlp73xdlbn6h4gn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz4prxvlp73xdlbn6h4gn.png" alt="checkov example scan" width="800" height="228"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Results&lt;/strong&gt; – By running Checkov as described, we can identify potential security issues such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Excessive permissions in the IAM policy.&lt;/li&gt;
&lt;li&gt;  IAM policies attached directly to users instead of roles.&lt;/li&gt;
&lt;li&gt;  Missing or overly permissive assume role policies.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Addressing these issues involves modifying the Terraform code to adhere to best practices, such as implementing least privilege, using roles for cross-account access, and ensuring policies are scoped appropriately.&lt;/p&gt;
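&lt;p&gt;To illustrate the idea behind such findings, here is a rough Python sketch, not Checkov itself and with a made-up helper name, that flags wildcard actions or resources in a policy document:&lt;/p&gt;

```python
import json

def find_wildcards(policy_json):
    """Return a list of findings for overly permissive Allow statements."""
    findings = []
    policy = json.loads(policy_json)
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        resources = stmt.get("Resource", [])
        if isinstance(resources, str):
            resources = [resources]
        if "*" in actions:
            findings.append("wildcard action '*' grants full admin privileges")
        if "*" in resources:
            findings.append("wildcard resource '*' applies to every resource")
    return findings

# A deliberately over-permissive policy: both checks should fire
admin_policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}],
})
print(find_wildcards(admin_policy))
```

&lt;p&gt;Real scanners such as Checkov apply hundreds of checks like this one against the parsed Terraform graph rather than a single JSON document.&lt;/p&gt;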

&lt;p&gt;&lt;strong&gt;Custom Policies&lt;/strong&gt; – Checkov also lets you define custom checks, which is helpful when your organization has specific security or compliance requirements that the built-in checks do not cover.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Integrating Policy Scans in CI/CD: Automating IAM policy compliance checks before deployment.&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuqt3dxal4yr09iiztwvn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuqt3dxal4yr09iiztwvn.png" alt="policy compliance with CI/CD" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;policy compliance with CI/CD&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Wrapping with IAM governance &amp;amp; best practices&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Focusing on Identity and Access Management (IAM) governance and best practices is essential for ensuring the security and compliance of cloud environments. This approach helps systematically manage digital identities, their authentication, authorization, roles, and privileges within or across system and enterprise boundaries.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Integrating IAM Governance&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;IAM governance should be an integral part of any organization’s security strategy. It involves several key components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Organizations should strive for centralized management of user identities and their access across all systems and platforms. This simplifies the enforcement of access policies and compliance with regulatory requirements.&lt;/li&gt;
&lt;li&gt;  Assigning permissions based on roles tightly aligned with organizational structures and job functions streamlines access management and enforces the principle of least privilege.&lt;/li&gt;
&lt;li&gt;  Conducting regular audits of IAM policies and practices helps identify and remediate unused or excessive permissions and ensures compliance with relevant standards and regulations.&lt;/li&gt;
&lt;li&gt;  Implementing robust processes for the entire lifecycle of user identities – from creation through management, to deletion – ensures that access rights are always up to date and reduces the risk of orphaned accounts.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;IAM Best Practices&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;To enhance IAM governance, organizations should adhere to a set of best practices:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt; Ensuring that users have only the minimum levels of access required to perform their functions minimizes potential damage from errors or malicious intent.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use multi-factor authentication (MFA) and strong password policies to enhance security. For critical resources, consider additional authentication factors and stringent authorization checks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Separate roles and responsibilities to prevent conflicts of interest or fraud. This is crucial in preventing any single point of compromise.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Automate the process of granting and revoking access to minimize the risk of oversight.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Leverage managed policies for easier administration and reuse of standard permission sets.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
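&lt;p&gt;As one small example of automating such reviews, a sketch with made-up data rather than a live AWS API call, access-key creation dates from a credential report can be checked against a rotation threshold:&lt;/p&gt;

```python
from datetime import datetime, timezone

ROTATION_LIMIT_DAYS = 90

def stale_keys(report, now):
    """Return user names whose access key age exceeds the rotation limit."""
    stale = []
    for user, created in report.items():
        age_days = (now - created).days
        if age_days > ROTATION_LIMIT_DAYS:
            stale.append(user)
    return stale

# Hypothetical credential data, not fetched from AWS
now = datetime(2024, 6, 1, tzinfo=timezone.utc)
report = {
    "alice": datetime(2024, 5, 1, tzinfo=timezone.utc),   # 31 days old
    "bob": datetime(2023, 11, 1, tzinfo=timezone.utc),    # over 200 days old
}
print(stale_keys(report, now))  # → ['bob']
```

&lt;p&gt;In practice the input would come from the IAM credential report, and the output would feed an alert or an automated key rotation.&lt;/p&gt;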

&lt;p&gt;&lt;strong&gt;Methods to perform compliance audits on IAM configurations&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Open Policy Agent (OPA) is an open-source, general-purpose policy engine that unifies policy enforcement across the cloud-native stack. It can be incorporated into IaC workflows. OPA enables you to craft policies that govern and secure your cloud environments without embedding policy logic within your applications and enhances their security, compliance, and governance. &lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;OPA Policy Example for Terraform&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;To create an Open Policy Agent (OPA) policy relevant to the Terraform example above (which involves an IAM policy for S3 read access, plus IAM user and IAM role creation), we’ll focus on enforcing a rule that IAM policies must specify a particular &lt;strong&gt;S3 bucket&lt;/strong&gt; rather than allow broad access.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;OPA Policy for Specific S3 Bucket Access in IAM Policies&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;OPA policies are written in a high-level declarative language called Rego. This policy aims to ensure that any IAM policy granting access to s3 buckets explicitly specifies the bucket name, rather than allowing access to all buckets.&lt;/p&gt;

&lt;p&gt;Define a Rego policy file, e.g., &lt;strong&gt;iam_policy.rego&lt;/strong&gt;, that includes the rule to check IAM policy statements for specific S3 bucket access.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package terraform.analysis
default allow = false
# Rule to check for specific bucket access in IAM policies
allow {
    some i
    input.resource.aws_iam_policy[i].policy
    policy := json.unmarshal(input.resource.aws_iam_policy[i].policy)
    policy.Statement[_].Effect == “Allow”
    action_allowed(policy.Statement[_].Action)
    not wildcard_bucket_access(policy.Statement[_].Resource)
}

# Helper to check if actions related to S3 read are allowed
action_allowed(actions) {
    allowed_actions := [“s3:GetObject”, “s3:ListBucket”]
    allowed_actions[_] == actions[_]
}

# Helper to check for wildcard bucket access
wildcard_bucket_access(resources) {
    resources[_] == “arn:aws:s3:::*”
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This Rego policy does the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Checks IAM policies: It looks for &lt;strong&gt;aws_iam_policy&lt;/strong&gt; resources in the Terraform plan.&lt;/li&gt;
&lt;li&gt;  Parses the policy JSON: It unmarshals the JSON policy document to inspect the policy statements.&lt;/li&gt;
&lt;li&gt;  Evaluates policy statements: It checks whether any “Allow” statements permit s3:GetObject or s3:ListBucket actions.&lt;/li&gt;
&lt;li&gt;  Ensures specific bucket access: It ensures that resources do not include a wildcard (arn:aws:s3:::*), indicating that the policy specifies particular buckets.&lt;/li&gt;
&lt;/ul&gt;
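&lt;p&gt;The same logic can be prototyped outside OPA. The following Python sketch mirrors the Rego rule above; it is an illustration only, assuming the same policy JSON shape:&lt;/p&gt;

```python
import json

ALLOWED_ACTIONS = {"s3:GetObject", "s3:ListBucket"}

def policy_allowed(policy_json):
    """True only if an Allow statement grants S3 read actions without a wildcard bucket ARN."""
    policy = json.loads(policy_json)
    for stmt in policy.get("Statement", []):
        actions = stmt["Action"] if isinstance(stmt["Action"], list) else [stmt["Action"]]
        resources = stmt["Resource"] if isinstance(stmt["Resource"], list) else [stmt["Resource"]]
        if stmt.get("Effect") == "Allow" and ALLOWED_ACTIONS.intersection(actions):
            if "arn:aws:s3:::*" not in resources:
                return True
    return False

# A policy scoped to one named bucket passes the rule
scoped = json.dumps({"Statement": [{
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:ListBucket"],
    "Resource": ["arn:aws:s3:::my-bucket/*", "arn:aws:s3:::my-bucket"],
}]})
print(policy_allowed(scoped))  # → True
```

&lt;p&gt;The advantage of expressing the rule in Rego rather than application code is that OPA can evaluate it uniformly across Terraform plans, admission controllers, and other policy decision points.&lt;/p&gt;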

&lt;h3&gt;
  
  
  &lt;strong&gt;Using the OPA Policy&lt;/strong&gt;:
&lt;/h3&gt;

&lt;p&gt;To use this policy, you would typically evaluate it against your Terraform plan output in JSON format, using the opa eval command. First, generate the Terraform plan:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan -out=tfplan.binary
terraform show -json tfplan.binary &amp;gt; tfplan.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, evaluate your policy with OPA:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;opa eval --format pretty --data iam_policy.rego --input tfplan.json “data.terraform.analysis.allow”
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This OPA policy scrutinizes your Terraform plan, specifically checking whether IAM policies for S3 access are narrowly scoped to specific buckets. Enforcing such policies ensures that your cloud environment adheres to security best practices, significantly mitigating potential risks.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;We’ve explored the nuances of managing AWS IAM through Terraform: its significance in bolstering cloud security, treating IAM configurations as infrastructure-as-code, and the critical role of Zero-Trust policies within IAM. Along the way, we set up IAM policies, created users and roles, and managed cross-account access and trust relationships.&lt;/p&gt;

&lt;p&gt;The exploration into enforcing IAM best practices through policy-as-code with tools like checkov underscored the transformative impact of static code analysis in preempting configuration errors and security risks.&lt;/p&gt;

&lt;p&gt;Finally, we touched upon IAM governance and compliance, underscoring methods like Rego policy definitions with OPA for performing compliance audits on IAM configurations. This ensures alignment with security best practices and regulatory standards, cementing IAM’s role in securing cloud environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Commonly Asked Questions&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;1) &lt;strong&gt;What are the best practices for managing AWS IAM policies with Terraform?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Use Least Privilege Principle – Grant only the permissions necessary for a user, group, or role to perform their intended tasks.&lt;/li&gt;
&lt;li&gt;  Separation of Concerns – Organize IAM policies logically by separating them based on roles, responsibilities, or permissions.&lt;/li&gt;
&lt;li&gt;  Enable Policy Testing – Implement automated tests to validate IAM policies for correctness and compliance with organizational policies and regulatory requirements.&lt;/li&gt;
&lt;li&gt;  Rotate IAM Credentials Regularly – Reduce risk and enhance security by automating the rotation of IAM access keys and credentials using AWS Secrets Manager or AWS IAM Access Analyzer.&lt;/li&gt;
&lt;li&gt;  Use Infrastructure-as-Code – Maintain IAM policies and their changes as code to ensure consistency.&lt;/li&gt;
&lt;li&gt;  Monitor and Audit IAM Changes – Implement and review logging and monitoring of IAM actions and changes using AWS CloudTrail and AWS Config.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;2) &lt;strong&gt;Can Terraform manage dynamic IAM policies for temporary access?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Yes, Terraform can manage dynamic IAM policies for temporary access using AWS IAM roles with session policies and the AWS Security Token Service (STS).&lt;/p&gt;
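&lt;p&gt;As a rough illustration of that approach, with hypothetical bucket and prefix names, a scoped-down session policy document can be generated per request and passed as the &lt;em&gt;Policy&lt;/em&gt; parameter of an sts:AssumeRole call, so the temporary credentials never exceed this scope:&lt;/p&gt;

```python
import json

def session_policy(bucket, prefix):
    # Inline session policy for temporary, per-request access; the effective
    # permissions are the intersection of this document and the role's policy.
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": [f"arn:aws:s3:::{bucket}/{prefix}/*"],
        }]
    })

print(session_policy("example-bucket", "reports/2024"))
```

&lt;p&gt;Terraform provisions the role and its trust policy; the session policy is supplied at assume-role time by the calling application.&lt;/p&gt;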

&lt;p&gt;3) &lt;strong&gt;How do I create and manage AWS IAM users and their access keys with Terraform?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Yes. Terraform’s AWS provider lets you create and manage AWS IAM users and their access keys with resources such as &lt;strong&gt;aws_iam_user&lt;/strong&gt; and &lt;strong&gt;aws_iam_access_key&lt;/strong&gt;, and attach IAM policies to them.&lt;/p&gt;

&lt;p&gt;4) &lt;strong&gt;What are instance profiles, and how do they relate to IAM roles in AWS?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Instance profiles associate IAM roles with EC2 instances, allowing the instances to inherit the role’s permissions. When an IAM role is attached to an EC2 instance, the corresponding instance profile is attached. This mechanism enables EC2 instances and other services to securely access AWS resources without requiring long-term credentials like access keys or passwords.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.aviator.co/merge-queue" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx5wqgwrk9nbggrwv2wmu.png" width="800" height="97"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>terraform</category>
      <category>iam</category>
    </item>
    <item>
      <title>Rethinking code reviews with stacked PRs</title>
      <dc:creator>Ibrahim Salami</dc:creator>
      <pubDate>Fri, 06 Sep 2024 15:33:14 +0000</pubDate>
      <link>https://forem.com/aviator_co/rethinking-code-reviews-with-stacked-prs-70f</link>
      <guid>https://forem.com/aviator_co/rethinking-code-reviews-with-stacked-prs-70f</guid>
      <description>&lt;p&gt;The peer code review process is an essential part of software development. It helps maintain software quality and promotes adherence to standards, project requirements, style guides, and facilitates learning and knowledge transfer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aviator.co/mergequeue/quick-setup" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn14hi1ge9h0ymkp0tgsr.png" width="800" height="97"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Code review effectiveness
&lt;/h3&gt;

&lt;p&gt;While effectiveness is high when reviewing sufficiently small code changes, it drops exponentially as the size of the change increases. Large code reviews are exhausting because of the sustained mental focus they demand, and the longer a review drags on, the less effective it becomes:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuaa9t6nlt8pvynp6ek8s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuaa9t6nlt8pvynp6ek8s.png" width="800" height="476"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So why can’t we just restrict the size of pull requests (PRs)? Because many changes start small: a two-line change can suddenly grow into a 500-line refactor with multiple rounds of back-and-forth with reviewers. Some engineering teams also maintain long-running feature branches as they continue working, making them hard to review.&lt;/p&gt;

&lt;p&gt;So, how do we strike the right balance? Simple. Use stacked PRs.&lt;/p&gt;

&lt;h3&gt;
  
  
  What are stacked PRs?
&lt;/h3&gt;

&lt;p&gt;Stacked pull requests break work into smaller, iterative changes stacked on top of each other instead of bundling one large, monolithic change into a single pull request. Each PR in the stack focuses on one logical change only, making the review process more manageable and less time-consuming.&lt;/p&gt;

&lt;p&gt;We also wrote a post last year explaining how this helps represent &lt;a href="https://www.aviator.co/blog/stacked-prs-code-changes-as-narrative/" rel="noopener noreferrer"&gt;code changes as a narrative&lt;/a&gt; instead of breaking things down by files or features.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why stacked PRs?
&lt;/h3&gt;

&lt;p&gt;Other than building a culture of more effective code reviews, there are a few other benefits of stacked PRs:&lt;/p&gt;

&lt;h4&gt;
  
  
  Early code review feedback
&lt;/h4&gt;

&lt;p&gt;Imagine that you are implementing a large feature. Instead of creating the entire feature and then requesting a code review, consider carving out the initial framework and promptly putting it up for feedback. This could potentially save you countless hours by getting early feedback on your design.&lt;/p&gt;

&lt;h4&gt;
  
  
  Faster CI feedback cycle
&lt;/h4&gt;

&lt;p&gt;Stacked PRs support the &lt;a href="https://en.wikipedia.org/wiki/Shift-left_testing" rel="noopener noreferrer"&gt;shift-left&lt;/a&gt; practice because changes are continuously integrated and tested, allowing issues to be detected and fixed early. The changes are merged in small pieces, catching any issues early, rather than merging one giant change and hoping it does not bring down prod!&lt;/p&gt;

&lt;h4&gt;
  
  
  Knowledge sharing
&lt;/h4&gt;

&lt;p&gt;Code reviews are also wonderful for posterity. Your code changes narrate the thought process behind implementing a feature, so breaking the changes down creates more effective knowledge transfer. It’s easier for team members to understand the changes, which promotes better knowledge sharing for the future.&lt;/p&gt;

&lt;h4&gt;
  
  
  Staying unblocked
&lt;/h4&gt;

&lt;p&gt;Waiting for code to be reviewed and approved can be a frustrating process. With stacked PRs, developers can work on multiple parts of a feature without waiting for reviewers to approve previous PRs.&lt;/p&gt;

&lt;h3&gt;
  
  
  What’s the catch?
&lt;/h3&gt;

&lt;p&gt;So, why don’t more developers use stacked PRs for code reviews?&lt;/p&gt;

&lt;p&gt;Although this stacked PR workflow addresses both the desired practices of keeping code reviews manageable and developers productive, unfortunately, it is not supported very well natively by either git or GitHub. As a result, &lt;a href="https://docs.google.com/spreadsheets/d/1riYPbdprf6E3QP1wX1BeASn2g8FKBgbJlrnKmwfU3YE/edit?usp=sharing" rel="noopener noreferrer"&gt;several tools&lt;/a&gt; have been developed across the open-source community to enable engineers to incorporate this stacking technique into the existing git and GitHub platforms. But stacking the PRs is only part of the story.&lt;/p&gt;

&lt;h4&gt;
  
  
  Updating
&lt;/h4&gt;

&lt;p&gt;As code review feedback comes in and we make changes to one part of the stack, we now have to rebase and resolve conflicts on all subsequent branches.&lt;/p&gt;

&lt;p&gt;Let’s take an example. Imagine that you are working on a feature that requires a schema change, a backend change, and a frontend change. You can send the small schema change for review first, and while that’s being reviewed, start working on the backend and frontend. Using stacked PRs, these three changes can be reviewed by three different reviewers.&lt;/p&gt;

&lt;p&gt;In this case, you may have a stack that looks like this, where &lt;code&gt;demo/schema&lt;/code&gt;, &lt;code&gt;demo/backend&lt;/code&gt; and &lt;code&gt;demo/frontend&lt;/code&gt; represent the three branches stacked on top of each other.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu05qj92fau0qnaem5fsw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu05qj92fau0qnaem5fsw.png" width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So far this makes sense, but what if you got some code review comments on the schema change that requires creating a new commit? Suddenly your commit history looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6tgv1njfn0jhd1jfqtxo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6tgv1njfn0jhd1jfqtxo.png" width="800" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now you have to manually rebase all subsequent branches and resolve conflicts at every stage. If you have ten stacked branches, you may have to resolve conflicts ten times.&lt;/p&gt;
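&lt;p&gt;To make the cascade concrete, here is a minimal, self-contained sketch using a throwaway repo. The branch and file names mirror the article’s example but are otherwise illustrative, and conflict resolution is omitted since the files don’t overlap:&lt;/p&gt;

```shell
set -e
# Build a toy three-branch stack in a temporary repo.
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.email demo@example.com
git config user.name Demo
echo base > base.txt
git add base.txt
git commit -qm "base"
git checkout -qb demo/schema
echo schema > schema.sql
git add schema.sql
git commit -qm "A1: schema"
git checkout -qb demo/backend
echo backend > api.go
git add api.go
git commit -qm "B1: backend"
git checkout -qb demo/frontend
echo frontend > app.js
git add app.js
git commit -qm "C1: frontend"

# Review feedback lands as a new commit A2 on the bottom branch...
git checkout -q demo/schema
echo fix >> schema.sql
git commit -qam "A2: address review"

# ...and every branch above it must now be rebased by hand, in order:
git checkout -q demo/backend
git rebase -q demo/schema
git checkout -q demo/frontend
git rebase -q demo/backend
```

&lt;p&gt;With ten stacked branches, that last section becomes ten checkout-and-rebase pairs, each one a potential conflict stop.&lt;/p&gt;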

&lt;h4&gt;
  
  
  Merging
&lt;/h4&gt;

&lt;p&gt;But that’s not all: merging a PR in the stack can be a real nightmare. You have three options to merge a PR: &lt;code&gt;squash&lt;/code&gt;, &lt;code&gt;merge&lt;/code&gt; and &lt;code&gt;rebase&lt;/code&gt;. Let’s look at what happens behind the scenes with each one.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  In the case of a &lt;code&gt;squash&lt;/code&gt; commit, Git takes the changes from all the existing commits of the PR and rewrites them into a single commit. No history is kept of where those changes came from.&lt;/li&gt;
&lt;li&gt;  A &lt;code&gt;merge&lt;/code&gt; commit is a special type of Git commit that joins two or more lines of history. It combines the changes much like a &lt;code&gt;squash&lt;/code&gt; commit, but it also records its parents. In a typical scenario, a merge commit has two parents: the last commit on the base branch (where the PR is merged) and the top commit of the feature branch being merged. Although this approach gives more context to the commit history, it also creates &lt;a href="https://idiv-biodiversity.github.io/git-knowledge-base/linear-vs-nonlinear.html" rel="noopener noreferrer"&gt;non-linear git-history&lt;/a&gt;, which can be undesirable.&lt;/li&gt;
&lt;li&gt;  Finally, in the case of a &lt;code&gt;rebase&lt;/code&gt; and merge, Git rewrites the commits onto the base branch. So, like the &lt;code&gt;squash&lt;/code&gt; option, it loses any association with the original commits.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Typically, if you use the &lt;code&gt;merge&lt;/code&gt; commit strategy while stacking PRs, your life will be a bit simpler, but most teams discourage that strategy to keep the git history clean. That means you are likely using either a &lt;code&gt;squash&lt;/code&gt; or a &lt;code&gt;rebase&lt;/code&gt; merge, and that creates merge conflicts for all subsequent unmerged stacked branches.&lt;/p&gt;

&lt;p&gt;In the example above, let’s say we squash merge the first branch &lt;code&gt;demo/schema&lt;/code&gt; into mainline. It will create a new commit &lt;code&gt;D1&lt;/code&gt; that contains changes of &lt;code&gt;A1&lt;/code&gt; and &lt;code&gt;A2&lt;/code&gt;. Since Git does not know where &lt;code&gt;D1&lt;/code&gt; came from, and &lt;code&gt;demo/backend&lt;/code&gt; is still based on &lt;code&gt;A2&lt;/code&gt;, trying to rebase &lt;code&gt;demo/backend&lt;/code&gt; on top of the mainline will create merge conflicts.&lt;/p&gt;

&lt;p&gt;Likewise, rebasing &lt;code&gt;demo/frontend&lt;/code&gt; after rebasing &lt;code&gt;demo/backend&lt;/code&gt; will also cause the same issues. So if you had ten stacked branches and you squash merged one of them, you would have to resolve these conflicts nine times.&lt;/p&gt;
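&lt;p&gt;For completeness, the usual escape hatch with plain git is &lt;code&gt;git rebase --onto&lt;/code&gt;, which replays only the branch’s own commits onto the new mainline instead of re-applying the already-squashed ones. Here is a minimal, self-contained sketch in a throwaway repo (branch and file names are illustrative):&lt;/p&gt;

```shell
set -e
# Toy repo: a schema branch stacked under a backend branch.
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.email demo@example.com
git config user.name Demo
echo base > base.txt
git add base.txt
git commit -qm "base"
git checkout -qB main            # trunk branch for this sketch
git checkout -qb demo/schema
echo schema > schema.sql
git add schema.sql
git commit -qm "A1"
echo fix >> schema.sql
git commit -qam "A2"
git checkout -qb demo/backend
echo backend > api.go
git add api.go
git commit -qm "B1"

# Squash-merge the schema PR: main gains a single commit D1 whose
# link to A1/A2 is invisible to git.
git checkout -q main
git merge -q --squash demo/schema
git commit -qm "D1: schema (squashed)"

# A plain `git rebase main` would try to replay A1 and A2 and conflict.
# --onto replays only the commits after demo/schema (here, just B1):
git rebase -q --onto main demo/schema demo/backend
```

&lt;p&gt;This works, but you have to get the three arguments right for every branch in the stack, every time the stack changes, which is exactly the tedium the tooling below automates.&lt;/p&gt;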

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk9crbfwteg1c16359ays.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk9crbfwteg1c16359ays.png" width="800" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We are still just scratching the surface, there are &lt;a href="https://docs.aviator.co/aviator-cli/how-to-guides" rel="noopener noreferrer"&gt;many other use cases&lt;/a&gt; such as reordering commits, splitting, folding, and renaming branches, that can create huge overhead to manage when dealing with stacked PRs.&lt;/p&gt;

&lt;p&gt;That’s why we built stacked PRs management as part of Aviator.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Aviator CLI is different
&lt;/h3&gt;

&lt;p&gt;Think of Aviator as an augmentation layer that sits on top of your existing tooling. Aviator connects with GitHub, Slack, Chrome, and Git CLI to provide an enhanced developer experience.&lt;/p&gt;

&lt;p&gt;Aviator CLI works seamlessly with everything else! The CLI isn’t just a layer on top of Git, but also understands the context of stacks across GitHub. Let’s consider an example.&lt;/p&gt;

&lt;h4&gt;
  
  
  Creating a stack
&lt;/h4&gt;

&lt;p&gt;Creating a stack is fairly straightforward. Except in this case, we use &lt;code&gt;av&lt;/code&gt; CLI to create the branches to ensure that the stack is tracked. For instance, to create your schema branch and corresponding PR, follow the steps below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;av stack branch demo/schema
# make schema changes
git commit -a -m "[demo] schema changes"
av pr create
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since Aviator is also connected to your GitHub, it makes it easy for you to visualize the stack.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwl1gcm49q3lrqs7vdoj5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwl1gcm49q3lrqs7vdoj5.png" width="800" height="489"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Or if you want to visualize it from the terminal, you can still do that with the CLI commands:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fve27c776oldh5u0e263w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fve27c776oldh5u0e263w.png" width="800" height="159"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Updating the stack
&lt;/h4&gt;

&lt;p&gt;Using the stack now becomes a cakewalk. You can add new commits to any branch, and simply run &lt;code&gt;av stack sync&lt;/code&gt; from anywhere in the stack to synchronize all branches. Aviator automatically rebases all the branches for you, and if there’s a real merge conflict, you just have to resolve it once.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F187w2fpauey6rzq7seqn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F187w2fpauey6rzq7seqn.png" width="800" height="347"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Merging the stack
&lt;/h4&gt;

&lt;p&gt;This is where Aviator’s tools easily stand out from existing tooling. At Aviator, we have built one of the most advanced merge queues to manage auto-merging thousands of changes at scale, with seamless integration between the CLI and stacked PRs. To merge a partial or full stack of PRs, you can assign them to the Aviator MergeQueue from the CLI with &lt;code&gt;av pr queue&lt;/code&gt; or by posting a comment in GitHub: &lt;code&gt;/aviator stack merge&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Aviator automatically handles validating, updating, and auto-merging all queued stacks in order.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd5t2b4hv80ycbykxocih.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd5t2b4hv80ycbykxocih.png" width="800" height="572"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the PRs are merged, you can run &lt;code&gt;av stack sync --trunk&lt;/code&gt; to update all remaining PRs and clean out the merged ones.&lt;/p&gt;

&lt;h3&gt;
  
  
  Shift-Left is the future
&lt;/h3&gt;

&lt;p&gt;Stacked PRs might initially seem like more work due to the need to break down changes into smaller parts. However, the increase in code review efficiency, faster feedback loops, and enhanced learning opportunities will surely outweigh this overhead. As we continue embracing the shift-left principles, stacked PRs will become increasingly useful.&lt;/p&gt;

&lt;p&gt;The Aviator CLI provides a great way to manage stacked PRs with a lot less tedium. The CLI is &lt;a href="https://github.com/aviator-co/av" rel="noopener noreferrer"&gt;open-source&lt;/a&gt; and completely free. We would love for you to try it out and share your feedback on our &lt;a href="https://github.com/aviator-co/av/discussions" rel="noopener noreferrer"&gt;discussion board&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;At Aviator, we are building developer productivity tools from first principles to empower developers to build faster and better.&lt;/p&gt;

</description>
      <category>codereview</category>
      <category>prs</category>
      <category>devops</category>
    </item>
    <item>
      <title>Scanning AWS S3 Buckets for Security Vulnerabilities</title>
      <dc:creator>Ibrahim Salami</dc:creator>
      <pubDate>Tue, 27 Aug 2024 18:55:23 +0000</pubDate>
      <link>https://forem.com/aviator_co/scanning-aws-s3-buckets-for-security-vulnerabilities-3ie5</link>
      <guid>https://forem.com/aviator_co/scanning-aws-s3-buckets-for-security-vulnerabilities-3ie5</guid>
      <description>&lt;p&gt;All cloud providers offer some variations of file bucket services. These file bucket services allow users to store and retrieve data in the cloud, offering scalability, durability, and accessibility through web portals and APIs. For instance, AWS offers &lt;a href="https://aws.amazon.com/s3/" rel="noopener noreferrer"&gt;Amazon Simple Storage Service (S3)&lt;/a&gt;, GCP offers &lt;a href="https://cloud.google.com/storage" rel="noopener noreferrer"&gt;Google Cloud Storage&lt;/a&gt;, and DigitalOcean provides &lt;a href="https://www.digitalocean.com/products/spaces" rel="noopener noreferrer"&gt;Spaces&lt;/a&gt;. However, if unsecured, these file buckets pose a major security risk, potentially leading to data breaches, data leakages, malware distribution, and data tampering. For example, the United Kingdom Council’s data on &lt;a href="https://www.theregister.com/2023/05/22/capita_security_pensions_aws_bucket_city_councils/" rel="noopener noreferrer"&gt;member’s benefits&lt;/a&gt; was exposed by an unsecured AWS bucket. In another incident in 2021, an unsecured bucket belonging to a &lt;a href="https://www.healthcareinfosecurity.com/report-unsecured-aws-bucket-leaked-cancer-website-user-data-a-19024" rel="noopener noreferrer"&gt;non-profit cancer organization&lt;/a&gt; exposed sensitive images and data for tens of thousands of individuals.&lt;/p&gt;

&lt;p&gt;Thankfully, &lt;a href="https://github.com/sa7mon/S3Scanner" rel="noopener noreferrer"&gt;S3Scanner&lt;/a&gt; can help. S3Scanner is a free and easy-to-use tool that can help you identify and fix unsecured file buckets in all major cloud providers: Amazon S3, Google Cloud Storage, and Spaces:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzm3tu9glx57nfp3gph5s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzm3tu9glx57nfp3gph5s.png" alt="s3 storage bucket architecture" width="800" height="537"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this article, you’ll learn all about S3Scanner and how it can help identify unsecured file buckets on multiple cloud providers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Common Security Risks in Amazon S3 Buckets&lt;a href="https://github.com/jainankit/demorepo/new/master#common-security-risks-in-amazon-s3-buckets" rel="noopener noreferrer"&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Amazon S3 buckets offer a simple and scalable solution for storing your data in the cloud. However, just like any other online storage platform, there are security risks you need to be aware of.&lt;/p&gt;

&lt;p&gt;Following are some of the most common security risks associated with Amazon S3 buckets:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Unintentional public access:&lt;/strong&gt; Misconfiguration, such as overly permissive permissions (&lt;em&gt;ie&lt;/em&gt; granting public read access), can cause insecure bucket policies and permissions, which can result in unauthorized users being able to access and perform actions on your S3 bucket.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Insecure bucket policies and permissions:&lt;/strong&gt; S3 buckets use identity and access management (IAM) to control access to data. This allows you to define permissions for individual users and groups using bucket policies. If your bucket policies are not properly configured, they can give unauthorized users access to your data (&lt;em&gt;eg&lt;/em&gt; policies using wildcards). Poorly configured IAM settings can also result in compliance violations due to unauthorized data access or modification, which impacts regulatory requirements and can expose the organization to legal consequences.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Data exposure and leakage:&lt;/strong&gt; Even if your S3 bucket isn’t public, data can still be exposed. For instance, data can be exposed if you accidentally share the URL of an object with someone else or if there are overly permissive permissions for that bucket. Additionally, data exposure can occur if you download data from your S3 bucket to an insecure location.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Lack of encryption:&lt;/strong&gt; The &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingEncryption.html" rel="noopener noreferrer"&gt;lack of encryption&lt;/a&gt; for data stored in S3 buckets is another significant security risk. Without encryption, intercepted data during transit or compromised storage devices may expose sensitive information.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Managing AWS access control and encryption options can be difficult. For instance, AWS has numerous tools, ranging from intricate access controls to robust encryption options, that help to protect your data and accounts from unauthorized access. Navigating this wide range of tools can be daunting, especially for individuals who don’t have a background in security. A single policy misconfiguration or permission can leave sensitive data exposed to unintended audiences.&lt;/p&gt;

&lt;p&gt;This is where S3Scanner could be useful.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is S3Scanner&lt;a href="https://github.com/jainankit/demorepo/new/master#what-is-s3scanner" rel="noopener noreferrer"&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;S3Scanner is an &lt;a href="https://github.com/sa7mon/S3Scanner" rel="noopener noreferrer"&gt;open source tool&lt;/a&gt; designed for scanning and identifying security vulnerabilities in Amazon S3 buckets.&lt;/p&gt;

&lt;p&gt;S3Scanner supports many popular platforms including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  AWS (the subject platform of this article)&lt;/li&gt;
&lt;li&gt;  GCP&lt;/li&gt;
&lt;li&gt;  Digital Ocean&lt;/li&gt;
&lt;li&gt;  Linode&lt;/li&gt;
&lt;li&gt;  Scaleway&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can also use S3Scanner with custom providers such as your own bespoke bucket solution. This makes it a versatile solution for various organizations.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Please note that for non-AWS services, S3Scanner currently only supports scanning for anonymous user permissions.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The following command shows S3Scanner’s basic usage: scan the buckets listed in a file called &lt;code&gt;names.txt&lt;/code&gt; and enumerate their objects.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ s3scanner -bucket-file names.txt -enumerate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The following are some of &lt;a href="https://github.com/sa7mon/S3Scanner#features" rel="noopener noreferrer"&gt;S3Scanner’s key features&lt;/a&gt;:&lt;/p&gt;

&lt;h3&gt;
  
  
  Multithreaded Scanning&lt;a href="https://github.com/jainankit/demorepo/new/master#multithreaded-scanning" rel="noopener noreferrer"&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;S3Scanner uses &lt;a href="https://en.wikipedia.org/wiki/Multithreading_(computer_architecture)" rel="noopener noreferrer"&gt;multithreading&lt;/a&gt; capabilities to concurrently assess multiple S3 buckets, optimizing the speed of vulnerability detection. To specify the number of threads to use, you can use the &lt;code&gt;-threads&lt;/code&gt; flag and then provide the number of threads you want to use.&lt;/p&gt;

&lt;p&gt;For instance, if you want to use ten threads, you’ll use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;s3scanner -bucket my_bucket -threads 10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Config File&lt;a href="https://github.com/jainankit/demorepo/new/master#config-file" rel="noopener noreferrer"&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;If you’re using flags that require config options like custom providers, you’ll need to create a &lt;a href="https://github.com/sa7mon/S3Scanner?tab=readme-ov-file#config-file" rel="noopener noreferrer"&gt;config file&lt;/a&gt;. To do so, create a file named &lt;code&gt;config.yml&lt;/code&gt; and put it in one of the following locations where S3Scanner will look for it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(current directory)
/etc/s3scanner/
$HOME/.s3scanner/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Built-In and Custom Storage Provider Support&lt;a href="https://github.com/jainankit/demorepo/new/master#built-in-and-custom-storage-provider-support" rel="noopener noreferrer"&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;As previously stated, S3Scanner seamlessly integrates with various providers. You can use the &lt;code&gt;-provider&lt;/code&gt; option to specify the object storage provider when checking buckets.&lt;/p&gt;

&lt;p&gt;For instance, if you use GCP, you’d use the following command: &lt;code&gt;s3scanner -bucket my_bucket -provider gcp&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To use a custom provider when working with a currently unsupported provider or a local network storage provider, set the provider value to &lt;code&gt;custom&lt;/code&gt;, like this: &lt;code&gt;s3scanner -bucket my_bucket -provider custom&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Please note that when you’re working with a custom provider, you also need to set up config file keys under &lt;code&gt;providers.custom&lt;/code&gt;, as listed in the config file. Some examples include &lt;code&gt;address_style&lt;/code&gt;, &lt;code&gt;endpoint_format&lt;/code&gt;, and &lt;code&gt;insecure&lt;/code&gt;. Here’s an example of a custom provider config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# providers.custom required by `-provider custom`
#   address_style - Addressing style used by endpoints.
#     type: string
#     values: "path" or "vhost"
#   endpoint_format - Format of endpoint URLs. Should contain '$REGION' as placeholder for region name
#     type: string
#   insecure - Ignore SSL errors
#     type: boolean
# regions must contain at least one option
providers:
  custom: 
    address_style: "path"
    endpoint_format: "https://$REGION.vultrobjects.com"
    insecure: false
    regions:
      - "ewr1"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Comprehensive Permission Analysis&lt;a href="https://github.com/jainankit/demorepo/new/master#comprehensive-permission-analysis" rel="noopener noreferrer"&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;S3Scanner provides access scans by examining bucket permissions. It identifies misconfigurations in access controls, bucket policies, and permissions associated with each S3 bucket.&lt;/p&gt;

&lt;h3&gt;
  
  
  PostgreSQL Database Integration&lt;a href="https://github.com/jainankit/demorepo/new/master#postgresql-database-integration" rel="noopener noreferrer"&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;S3Scanner can save scan results directly to a &lt;a href="https://www.postgresql.org/" rel="noopener noreferrer"&gt;PostgreSQL&lt;/a&gt; database. This helps maintain a structured and easily accessible repository of vulnerabilities. Storing results in a database also enhances your ability to track historical data and trends.&lt;/p&gt;

&lt;p&gt;To save all scan results to a PostgreSQL database, you can use the &lt;code&gt;-db&lt;/code&gt; flag, like this: &lt;code&gt;s3scanner -bucket my_bucket -db&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This option requires the &lt;code&gt;db.uri&lt;/code&gt; config file key in the &lt;code&gt;config&lt;/code&gt; file. This is what your &lt;code&gt;config&lt;/code&gt; file should look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Required by -db
db:
  uri: "postgresql://user:password@db.host.name:5432/schema_name"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  RabbitMQ Connection for Automation&lt;a href="https://github.com/jainankit/demorepo/new/master#rabbitmq-connection-for-automation" rel="noopener noreferrer"&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;You can also integrate with &lt;a href="https://www.rabbitmq.com/" rel="noopener noreferrer"&gt;RabbitMQ&lt;/a&gt;, which is an open source message broker for automation purposes. This allows you to set up automated workflows triggered by scan results or schedule them for regular execution. Automated responses can include alerts, notifications, or further actions based on the identified vulnerabilities, ensuring proactive and continuous security.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;-mq&lt;/code&gt; flag connects to a RabbitMQ server and consumes messages containing the bucket names to scan: &lt;code&gt;s3scanner -mq&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;-mq&lt;/code&gt; flag requires &lt;code&gt;mq.queue_name&lt;/code&gt; and &lt;code&gt;mq.uri&lt;/code&gt; keys to be set up in the &lt;code&gt;config&lt;/code&gt; file.&lt;/p&gt;
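&lt;p&gt;Putting those keys in &lt;code&gt;config.yml&lt;/code&gt; would look something like this; the queue name and URI below are placeholder values for illustration, not defaults from the S3Scanner docs:&lt;/p&gt;

```yaml
# Required by -mq (values are illustrative placeholders)
mq:
  queue_name: "s3scanner-buckets"
  uri: "amqp://user:password@localhost:5672"
```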

&lt;h3&gt;
  
  
  Customizable Reporting&lt;a href="https://github.com/jainankit/demorepo/new/master#customizable-reporting" rel="noopener noreferrer"&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;With S3Scanner, you can generate reports tailored to your specific requirements. This flexibility ensures that you can communicate findings effectively and present information in a format that aligns with your organization’s reporting standards.&lt;/p&gt;

&lt;p&gt;For instance, you can use the &lt;code&gt;-json&lt;/code&gt; flag to output the scan results in JSON format: &lt;code&gt;s3scanner -bucket my-bucket -json&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Once the output is in JSON, you can pipe it to &lt;a href="https://jqlang.github.io/jq/" rel="noopener noreferrer"&gt;&lt;code&gt;jq&lt;/code&gt;&lt;/a&gt;, a command-line JSON processor, or other tools that accept JSON, and format the fields as needed.&lt;/p&gt;
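&lt;p&gt;For example, a record can be piped to &lt;code&gt;jq&lt;/code&gt; to extract a single field. The sample record below is hand-written for illustration rather than captured from a real scan, so its field names are assumptions, not S3Scanner’s exact output schema:&lt;/p&gt;

```shell
# Hypothetical JSON record standing in for one line of `s3scanner -json` output;
# `jq -r` extracts just the bucket name as raw text (prints: my-bucket).
echo '{"bucket": {"name": "my-bucket", "region": "us-east-1"}}' | jq -r '.bucket.name'
```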

&lt;h2&gt;
  
  
  How S3Scanner Works&lt;a href="https://github.com/jainankit/demorepo/new/master#how-s3scanner-works" rel="noopener noreferrer"&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;To use S3Scanner, you need to install it on your system. The tool is available on &lt;a href="https://github.com/sa7mon/S3Scanner" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;, and the installation instructions vary based on your platform. Currently supported platforms include Windows, macOS, several Linux distributions, and Docker.&lt;/p&gt;

&lt;p&gt;The installation steps for the various platforms and version numbers are shown below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Platform: Homebrew (macOS)

&lt;ul&gt;
&lt;li&gt;Version: v3.0.4&lt;/li&gt;
&lt;li&gt;Steps: &lt;code&gt;brew install s3scanner&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Platform: Kali Linux

&lt;ul&gt;
&lt;li&gt;Version: 3.0.0&lt;/li&gt;
&lt;li&gt;Steps: &lt;code&gt;apt install s3scanner&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Platform: Parrot OS

&lt;ul&gt;
&lt;li&gt;Version: –&lt;/li&gt;
&lt;li&gt;Steps: &lt;code&gt;apt install s3scanner&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Platform: BlackArch

&lt;ul&gt;
&lt;li&gt;Version: 464.fd24ab1&lt;/li&gt;
&lt;li&gt;Steps: &lt;code&gt;pacman -S s3scanner&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Platform: Docker

&lt;ul&gt;
&lt;li&gt;Version: v3.0.4&lt;/li&gt;
&lt;li&gt;Steps: &lt;code&gt;docker run ghcr.io/sa7mon/s3scanner&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Platform: Winget (Windows)

&lt;ul&gt;
&lt;li&gt;Version: v3.0.4&lt;/li&gt;
&lt;li&gt;Steps: &lt;code&gt;winget install s3scanner&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Platform: Go

&lt;ul&gt;
&lt;li&gt;Version: v3.0.4&lt;/li&gt;
&lt;li&gt;Steps: &lt;code&gt;go install -v github.com/sa7mon/s3scanner@latest&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Platform: Other (build from source)

&lt;ul&gt;
&lt;li&gt;Version: v3.0.4&lt;/li&gt;
&lt;li&gt;Steps: &lt;code&gt;git clone git@github.com:sa7mon/S3Scanner.git &amp;amp;&amp;amp; cd S3Scanner &amp;amp;&amp;amp; go build -o s3scanner .&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For instance, on a Windows system, you would use &lt;a href="https://github.com/microsoft/winget-cli" rel="noopener noreferrer"&gt;winget&lt;/a&gt; and run the following command: &lt;code&gt;winget install s3scanner&lt;/code&gt;. Your output would look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Found S3Scanner [sa7mon.S3Scanner] Version 3.0.4
This application is licensed to you by its owner.
Microsoft is not responsible for, nor does it grant any licenses to, third-party packages.
Downloading https://github.com/sa7mon/S3Scanner/releases/download/v3.0.4/S3Scanner_Windows_x86_64.zip
  ██████████████████████████████  6.52 MB / 6.52 MB
Successfully verified installer hash
Extracting archive...
Successfully extracted archive
Starting package install...
Command line alias added: "S3Scanner"
Successfully installed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The last line shows that S3Scanner was successfully installed.&lt;/p&gt;

&lt;p&gt;If you want to avoid installing S3Scanner via the above methods, you can also use the &lt;a href="https://pypi.org/" rel="noopener noreferrer"&gt;Python Package Index (PyPI)&lt;/a&gt;. To do so, search for S3Scanner on PyPI:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftmt5vy05xdr82a107w5x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftmt5vy05xdr82a107w5x.png" alt="s3scanner on pip" width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And select the first option that appears (&lt;em&gt;ie&lt;/em&gt; S3Scanner):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffsgjmqvcc15d537aszmm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffsgjmqvcc15d537aszmm.png" alt="s3scanner pip" width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create and navigate to a directory of your choosing (&lt;em&gt;eg&lt;/em&gt; &lt;code&gt;s3scanner_directory&lt;/code&gt;) and run the command &lt;code&gt;pip install S3Scanner&lt;/code&gt; to install it.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Please note that you need to have Python and &lt;a href="https://pypi.org/project/pip/" rel="noopener noreferrer"&gt;pip&lt;/a&gt; installed on your computer to be able to run the &lt;code&gt;pip&lt;/code&gt; command.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Your output looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Collecting S3Scanner
  Downloading S3Scanner-2.0.2-py3-none-any.whl (15 kB)
Requirement already satisfied: boto3&amp;gt;=1.20 in c:\python\python39\lib\site-packages (from S3Scanner) (1.34.2)
Requirement already satisfied: botocore&amp;lt;1.35.0,&amp;gt;=1.34.2 in c:\python\python39\lib\site-packages (from boto3&amp;gt;=1.20-&amp;gt;S3Scanner) (1.34.2)
Requirement already satisfied: jmespath&amp;lt;2.0.0,&amp;gt;=0.7.1 in c:\python\python39\lib\site-packages (from boto3&amp;gt;=1.20-&amp;gt;S3Scanner) (1.0.1)
Requirement already satisfied: s3transfer&amp;lt;0.10.0,&amp;gt;=0.9.0 in c:\python\python39\lib\site-packages (from boto3&amp;gt;=1.20-&amp;gt;S3Scanner) (0.9.0)
Requirement already satisfied: python-dateutil&amp;lt;3.0.0,&amp;gt;=2.1 in c:\python\python39\lib\site-packages (from botocore&amp;lt;1.35.0,&amp;gt;=1.34.2-&amp;gt;boto3&amp;gt;=1.20-&amp;gt;S3Scanner) (2.8.2)
Requirement already satisfied: urllib3&amp;lt;1.27,&amp;gt;=1.25.4 in c:\python\python39\lib\site-packages (from botocore&amp;lt;1.35.0,&amp;gt;=1.34.2-&amp;gt;boto3&amp;gt;=1.20-&amp;gt;S3Scanner) (1.26.18)
Requirement already satisfied: six&amp;gt;=1.5 in c:\python\python39\lib\site-packages (from python-dateutil&amp;lt;3.0.0,&amp;gt;=2.1-&amp;gt;botocore&amp;lt;1.35.0,&amp;gt;=1.34.2-&amp;gt;boto3&amp;gt;=1.20-&amp;gt;S3Scanner) (1.16.0)
Installing collected packages: S3Scanner
Successfully installed S3Scanner-2.0.2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This confirms that S3Scanner was installed successfully.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configure Scanning Parameters
&lt;/h3&gt;

&lt;p&gt;Before running any scans, you need to make sure everything is working and configure your scanning parameters.&lt;/p&gt;

&lt;p&gt;Run one of the following commands to make sure S3Scanner is configured correctly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;s3scanner -h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;s3scanner --help
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should receive some information about the various options you can use when scanning buckets:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;usage: s3scanner [-h] [--version] [--threads n] [--endpoint-url ENDPOINT_URL]
                 [--endpoint-address-style {path,vhost}] [--insecure]
                 {scan,dump} ...

s3scanner: Audit unsecured S3 buckets
           by Dan Salmon - github.com/sa7mon, @bltjetpack

optional arguments:
  -h, --help            show this help message and exit
  --version             Display the current version of this tool
  --threads n, -t n     Number of threads to use. Default: 4
  --endpoint-url ENDPOINT_URL, -u ENDPOINT_URL
                        URL of S3-compliant API. Default: https://s3.amazonaws.com
  --endpoint-address-style {path,vhost}, -s {path,vhost}
                        Address style to use for the endpoint. Default: path
  --insecure, -i        Do not verify SSL

mode:
  {scan,dump}           (Must choose one)
    scan                Scan bucket permissions
    dump                Dump the contents of buckets
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you have the &lt;a href="https://aws.amazon.com/cli/" rel="noopener noreferrer"&gt;AWS Command Line Interface (AWS CLI)&lt;/a&gt; installed and have AWS credentials specified in the &lt;code&gt;.aws&lt;/code&gt; folder, S3Scanner will pick up these credentials when scanning. Otherwise, &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html" rel="noopener noreferrer"&gt;install the AWS CLI&lt;/a&gt; and configure credentials so that S3Scanner can scan the buckets in your environment:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4tirjxlkaxholr7b7wf6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4tirjxlkaxholr7b7wf6.png" alt="aws config" width="800" height="144"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Run Scans and Interpret Results
&lt;/h3&gt;

&lt;p&gt;To run a scan, you run &lt;code&gt;s3scanner&lt;/code&gt; with a mode, either &lt;code&gt;scan&lt;/code&gt; or &lt;code&gt;dump&lt;/code&gt;, and the name of the bucket. For example, to scan the permissions on a bucket called &lt;code&gt;my-bucket&lt;/code&gt;, you would run &lt;code&gt;s3scanner scan --bucket my-bucket&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This gives you a similar output to the following (the columns are delimited by the pipe character, &lt;strong&gt;|&lt;/strong&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;my-bucket | bucket_exists | AuthUsers: [], AllUsers: []
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first portion of the output gives you the name of the bucket and tells you whether that bucket exists. The last portion shows the permissions granted to authenticated users (anyone with an AWS account) as well as to all users.&lt;/p&gt;
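&lt;p&gt;If you want to audit many buckets in one run, the &lt;code&gt;scan&lt;/code&gt; mode also accepts a file of bucket names, one per line. The following is a hedged sketch: &lt;code&gt;names.txt&lt;/code&gt; and the bucket names are placeholders, and the final command requires S3Scanner from the install step above:&lt;/p&gt;

```shell
# Build a newline-delimited list of bucket names to audit (placeholder names).
printf 'my-bucket\nbackups-prod\nstatic-assets\n' > names.txt

# Then pass the file to scan mode instead of a single --bucket flag:
#   s3scanner scan --buckets-file names.txt
```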

&lt;p&gt;Run a scan command for a bucket that is in your AWS environment, such as &lt;code&gt;ans3scanner-bucket&lt;/code&gt;, like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsqwtsk742akhqanet3be.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsqwtsk742akhqanet3be.png" alt="aws buckets" width="800" height="196"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You should get the following output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ans3scanner-bucket | bucket_exists | AuthUsers: [Read], AllUsers: []
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This output shows that authenticated users have been granted &lt;code&gt;[Read]&lt;/code&gt; rights on the bucket.&lt;/p&gt;

&lt;h4&gt;
  
  
  Scan Your GCP Buckets
&lt;/h4&gt;

&lt;p&gt;To test your GCP buckets, create a bucket in your GCP account and make sure it doesn’t have public access:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foybu22mzwpmnb2t4fhhq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foybu22mzwpmnb2t4fhhq.png" alt="gcp buckets" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Inside the bucket, add a text file:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5e2oal7ixtkvu0wc0p67.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5e2oal7ixtkvu0wc0p67.png" alt="google cloud storage" width="800" height="209"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To scan the bucket, run &lt;code&gt;s3scanner -bucket s3scanner-demo -provider gcp&lt;/code&gt;. You have to provide the &lt;code&gt;-provider gcp&lt;/code&gt; flag to tell S3Scanner that you want to scan a GCP bucket; if you don’t provide this flag, S3Scanner defaults to AWS.&lt;/p&gt;

&lt;p&gt;Your output shows that a bucket exists:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;level=info msg="exists    | s3scanner-demo | default | AuthUsers: [] | AllUsers: []"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, change the GCP bucket access to “public” and grant all users &lt;a href="https://cloud.google.com/storage/docs/access-control/making-data-public" rel="noopener noreferrer"&gt;access&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2wx5s5xp6zfj6ypvtlf8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2wx5s5xp6zfj6ypvtlf8.png" alt="gcp" width="800" height="232"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then, scan the GCP bucket. Your output will show that the bucket is available to all users:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;level=info msg="exists    | s3scanner-demo | default | AuthUsers: [] | AllUsers: [READ, READ_ACP]"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Best Practices for Remediation
&lt;/h3&gt;

&lt;p&gt;After you review the results of your scan, make sure to prioritize the identified issues based on their severity. Some common remediations are as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Adjust bucket permissions:&lt;/strong&gt; You can restrict access to buckets by adjusting &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-policy-language-overview.html" rel="noopener noreferrer"&gt;permissions and policies&lt;/a&gt; to adhere to the principle of least privilege. Make sure to remove unnecessary public access and ensure that only authorized entities have the required permissions.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Regularly audit and monitor your S3 bucket configurations:&lt;/strong&gt; Establish a routine for auditing and monitoring your S3 bucket configurations. You can also set up alerts for any changes to permissions or policies, enabling timely detection and response to potential security incidents. Additionally, you can utilize tools and services such as &lt;a href="https://aws.amazon.com/config/" rel="noopener noreferrer"&gt;AWS Config&lt;/a&gt;, which helps you assess, audit, and evaluate the configuration of your resources. Moreover, &lt;a href="https://aws.amazon.com/premiumsupport/technology/trusted-advisor/" rel="noopener noreferrer"&gt;AWS Trusted Advisor&lt;/a&gt; helps inspect your environment and provides recommendations to improve security, performance, and cost.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Encrypt data:&lt;/strong&gt; Securing data through encryption involves measures for data both in transit and at rest. For data in transit, secure communication channels like &lt;a href="https://en.wikipedia.org/wiki/HTTPS" rel="noopener noreferrer"&gt;HTTPS&lt;/a&gt; ensure that information remains encrypted between clients and servers. On the server side, AWS S3 offers several &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingEncryption.html" rel="noopener noreferrer"&gt;options for encrypting data at rest&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
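&lt;p&gt;For the first remediation, a common starting point is S3’s public access block. The following is a hedged sketch, not from the tool’s documentation: &lt;code&gt;my-bucket&lt;/code&gt; is a placeholder, and the commented commands require the AWS CLI with credentials that own the bucket:&lt;/p&gt;

```shell
# Placeholder bucket name flagged by your scan.
BUCKET="my-bucket"

# Block every form of public access on the bucket:
#   aws s3api put-public-access-block --bucket "$BUCKET" \
#     --public-access-block-configuration \
#     "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"

# Verify the new configuration took effect:
#   aws s3api get-public-access-block --bucket "$BUCKET"

# Finally, re-scan to confirm AuthUsers/AllUsers no longer have access:
#   s3scanner scan --bucket "$BUCKET"
```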

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, you learned about some of the common security risks associated with Amazon S3 buckets and how &lt;a href="https://github.com/sa7mon/S3Scanner" rel="noopener noreferrer"&gt;S3Scanner&lt;/a&gt; can help.&lt;/p&gt;

&lt;p&gt;S3Scanner is a valuable tool for anyone leveraging cloud storage through buckets because it helps you scan for vulnerabilities in your environment. With multithreaded scanning, comprehensive permission analysis, custom storage provider support, PostgreSQL database integration, and customizable reporting, S3Scanner is definitely worth exploring.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.aviator.co/merge-queue" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx5wqgwrk9nbggrwv2wmu.png" width="800" height="97"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>vulnerabilities</category>
      <category>s3</category>
      <category>security</category>
    </item>
    <item>
      <title>The irrational fear of deployments</title>
      <dc:creator>Ibrahim Salami</dc:creator>
      <pubDate>Fri, 09 Aug 2024 21:38:41 +0000</pubDate>
      <link>https://forem.com/aviator_co/the-irrational-fear-of-deployments-5e8m</link>
      <guid>https://forem.com/aviator_co/the-irrational-fear-of-deployments-5e8m</guid>
      <description>&lt;p&gt;A &lt;a href="https://en.wikipedia.org/wiki/2024_CrowdStrike_incident" rel="noopener noreferrer"&gt;recent outage&lt;/a&gt; involving CrowdStrike impacted 8.5 million Windows operating systems, leading to disruptions in various global services, including airlines and hospitals. Multiple analyses have examined the root cause of this incident itself.&lt;/p&gt;

&lt;p&gt;However, as a software engineer, I think we are missing the aspect of human emotions related to deployments, specifically the fear of breaking production. That’s what we will try to dive into in this article. We will cover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Understanding the function of release engineering. &lt;/li&gt;
&lt;li&gt;  What software engineers care about and what they don’t.&lt;/li&gt;
&lt;li&gt;  Impact of continuous delivery (CD). &lt;/li&gt;
&lt;li&gt;  A look at manual deployments. &lt;/li&gt;
&lt;li&gt;  Problems with manual deployment and the solution to these problems.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Release Engineering&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Before delving into the fear of deployments from a software engineer’s perspective, let’s first understand the role of a release engineer.&lt;/p&gt;

&lt;p&gt;Release engineering has evolved considerably in recent years, thanks to modern CI/CD tools and the standardization of Kubernetes. Despite these advancements, the primary responsibilities remain the same:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Consistent and repeatable deployments:&lt;/strong&gt; Standardizing release processes reduces the risk of bad deployments to production. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Reducing service disruptions&lt;/strong&gt;: Standardized processes also ensure teams are equipped to tackle harmful production environment incidents—for example, a rollback strategy for scenarios where a release causes problems.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Monitor and Optimize Performance:&lt;/strong&gt; Look for performance improvements for faster and reliable deployments. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Collaborate with engineering:&lt;/strong&gt; Work closely with developers, QA, and DevOps teams to ensure all new and existing services have a well-defined deployment process.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What Software Engineers Care About&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Unlike release engineers, software engineers on a product team may only care about certain aspects of deployments:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Quick code merges:&lt;/strong&gt; Merging quickly allows them to validate their work and move on to new tasks or unblock dependent tasks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Production incidents&lt;/strong&gt;: Although engineers may not care about all production incidents, they definitely care about their code changes causing any production outages.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Deployment schedule&lt;/strong&gt;: Engineers also like to track when their changes go live or have gone live, so that they can have access to real-time feedback on their changes.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What Software Engineers Don’t Care About&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Although there are things we care about, there are also those we don’t:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Deployment methodology&lt;/strong&gt;: Although we know the need for an efficient and reliable deployment process, we don’t care how it is performed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Effect of other changes&lt;/strong&gt;: Unless things go wrong, we don’t worry about unrelated changes from other developers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Deployment management&lt;/strong&gt;: An engineer is indifferent to who manages deployment in a software team. For instance, we would only care about managing deployment if tasked with doing so. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Impact of Continuous Deployments (CD)&lt;/strong&gt;
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;So what does the fear have to do with Continuous deployments?&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A lot.&lt;/p&gt;

&lt;p&gt;Studies have shown &lt;a href="https://dora.dev/capabilities/continuous-delivery" rel="noopener noreferrer"&gt;several benefits&lt;/a&gt; of continuous deployment, and unsurprisingly, many of them are &lt;a href="https://en.wikipedia.org/wiki/Psychological_safety" rel="noopener noreferrer"&gt;psychological&lt;/a&gt; in nature. Continuous deployment removes the human in the loop, and therefore requires strong trust in the test infrastructure.&lt;/p&gt;

&lt;p&gt;In other words, automated tests not only ensure the reliability of production but also provide &lt;a href="https://en.wikipedia.org/wiki/Psychological_safety" rel="noopener noreferrer"&gt;psychological safety&lt;/a&gt;, sometimes irrationally, reducing the fear of deployments. As a developer, I’m more comfortable making changes in a CD process than when I’m asked to verify the changes manually.&lt;/p&gt;

&lt;p&gt;However, despite the popularity of these CD strategies, a lot of companies still trigger deployments manually (have a human-in-the-loop), indicating a cautious approach to CD implementations. This behavior suggests that teams prefer to retain supervision on the release process and intervene where necessary.&lt;/p&gt;

&lt;p&gt;This is important to understand from a psychological safety perspective. Manual deployments imply that someone is overseeing the process and handling issues when things go wrong. While this provides a sense of security, it can also induce fear in the person deploying and is prone to human error.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Manual deployments&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Despite the drawbacks, most teams manage deployments manually. A typical manual deployment may include a few steps:&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Supervision&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Someone babysits the entire deployment process before a release goes out. This person is tasked with intervening when and if there are signs of trouble. Teams maintain an on-call person who manages their deployments and handles problems when they arise.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Dedicated Release Teams&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Some teams have a dedicated release engineering team, which ensures releases go smoothly. Since this means a high degree of specialization, the deployment process could be more efficient and reliable.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Spreadsheets&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Some companies maintain a spreadsheet to validate any changes made. This allows companies to systematically review and approve these changes, ensuring they meet predefined quality standards.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Manual QA&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In addition to spreadsheets, manual QA is another layer companies add. Manual QA tests new releases in staging environments before they’re deployed to production. However, a testing environment isn’t foolproof, so some real-life scenarios won’t be accounted for. &lt;/p&gt;

&lt;h2&gt;
  
  
  Where Do Things Go Wrong With Manual Deployments?
&lt;/h2&gt;

&lt;p&gt;Many things can go wrong for any software development team relying solely on manual deployments: &lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Dependence on a small group&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This can create bottlenecks, which lead to release delays and, in some instances, human error. A team can also run into trouble when a key person leaves or can’t deliver on the required tasks. &lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;No risk-mitigation strategy&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;There is no strategy for following through on an unfavorable production incident. When an incident happens, the release team has to scramble to find the relevant stakeholders who can help resolve it and make decisions.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Prone to human error&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Typographical errors in commands or scripts, or forgetting to run the pre-deployment or post-deployment steps, can break a release.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;High effort&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Since deployments require babysitting, they become a time-consuming effort, which causes the frequency of deployments to drop significantly. For instance, if it takes an hour to monitor an entire deployment, the release team may decide to skip deployments on days with only minor changes to save that time.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Communication Breakdown&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Product teams are often unclear on the state of releases and when their changes are getting into production.&lt;/p&gt;

&lt;p&gt;Looking at these challenges, it’s easy to understand why engineers dread deployments. The risk of deployment failures, the high stakes, and the pressure to keep downtime low also contribute to this fear. &lt;/p&gt;

&lt;p&gt;These failures can be minimized by increasing test automation. Still, since these tests are carried out in a test environment, you should not expect an automated test to catch every possible error. Failures are to be expected but at a reduced rate.&lt;/p&gt;

&lt;h2&gt;
  
  
  What can we do about it?
&lt;/h2&gt;

&lt;p&gt;Simply set up Continuous Deployments? Easier said than done. Despite the drawbacks, manual deployments are still okay if managed well. The goals should be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  provide guardrails to avoid production incidents&lt;/li&gt;
&lt;li&gt;  reduce human errors&lt;/li&gt;
&lt;li&gt;  enable anyone to trigger deploys&lt;/li&gt;
&lt;li&gt;  ensure deployments happen frequently&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Guardrails – Canary and Rollbacks&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Canary and Rollback strategies can help reduce the impact of an outage and in many cases avert the crisis automatically.&lt;/p&gt;

&lt;p&gt;A canary release exposes your new release to a small portion of production environment traffic. This gives teams insight into issues that might not have come up during testing. &lt;/p&gt;

&lt;p&gt;On the other hand, a rollback strategy helps engineers revert a release to its previous stable version when problems arise after a deployment to the production environment. &lt;/p&gt;
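&lt;p&gt;For teams running on Kubernetes, the rollback step can be sketched as follows. This is a hedged example, not from the original article: the Deployment name is illustrative, and the commented commands assume &lt;code&gt;kubectl&lt;/code&gt; access to the cluster:&lt;/p&gt;

```shell
# Illustrative name of the Deployment that shipped the bad release.
DEPLOYMENT="web"

# Revert to the previous revision:
#   kubectl rollout undo deployment/"$DEPLOYMENT"

# Block until the rollback finishes rolling out:
#   kubectl rollout status deployment/"$DEPLOYMENT"

# Inspect earlier revisions if you need to go further back:
#   kubectl rollout history deployment/"$DEPLOYMENT"
```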

&lt;h3&gt;
  
  
  &lt;strong&gt;Reduce human errors – Standardization&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Define standard deployment methodologies that result in efficiency, consistency, reliability, and high software quality. In their &lt;a href="https://cloud.google.com/devops/state-of-devops" rel="noopener noreferrer"&gt;state of DevOps report&lt;/a&gt;, &lt;a href="https://dora.dev/" rel="noopener noreferrer"&gt;DORA&lt;/a&gt; shows that reliability predicts better operational performance. Furthermore, having a standardized process allows repeatability in release processes, which can be automated. Automating this process helps a team keep production costs lower. &lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Democratize deployment process&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Democratizing the deployment process removes the reliance on specific individuals. If we empower any software engineer to deploy, it slowly reduces the fear: if anyone can deploy, it should not be too hard. Share your Legos!&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Frequent deployments&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;To reduce deployment anxiety, we need to deploy more frequently, not less. The DORA report also highlights that smaller batch deployments are less likely to cause issues and help lower the psychological barrier for developers.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Improve developer experience&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Clarifying what is being deployed enhances the developer experience. Make it easy for developers to know when deployments occur and what changes are included. This transparency helps developers track when their changes go live and simplifies incident investigations.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Defined risk-mitigation strategies&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;There should be defined steps to follow for rollbacks and hotfixes, as this helps eliminate any indecision during production incidents. For instance, there should be separate build and deploy steps for teams to follow for easy rollbacks.&lt;/p&gt;

&lt;p&gt;Similarly, standardizing how to deal with hotfixes and cherry-picks can make it simple to operate when the stakes are high.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Feature flags&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Feature flags are like kill-switches that can turn off a new feature that caused an incident in production. This can enable engineers to resolve production incidents quickly.&lt;/p&gt;
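&lt;p&gt;As a minimal sketch of the kill-switch idea, a new code path can be gated behind a flag that an incident responder can flip without redeploying. The flag and path names below are illustrative, and real systems typically use a dedicated flag service rather than an environment variable:&lt;/p&gt;

```shell
# Gate the new code path behind a flag (illustrative names). Setting
# ENABLE_NEW_BILLING=on enables the feature; leaving it unset, or flipping
# it to "off" during an incident, falls back to the stable path.
FLAG="${ENABLE_NEW_BILLING:-off}"

if [ "$FLAG" = "on" ]; then
  echo "serving: new billing flow"
else
  echo "serving: stable billing flow"
fi
```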

&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Software teams must treat release engineering as a priority from the outset of product development to avoid costly mistakes. And we should not let incidents like the CrowdStrike outage cripple our development practices. Addressing the fear of deployment and preventing production incidents involves several key strategies:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Invest in the standardization of deployment processes&lt;/li&gt;
&lt;li&gt;  Set up well-defined risk-mitigating strategies, such as canary releases, strategic rollouts, rollbacks, and hotfixes. &lt;/li&gt;
&lt;li&gt;  Simplify the developer experience by democratizing deployments, and encourage everyone to participate.&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;a href="https://www.aviator.co/releases" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4e4r63b88mdpu1fcc696.png" width="800" height="97"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>cicd</category>
      <category>dx</category>
    </item>
    <item>
      <title>Comparing Flux CD, Argo CD, and Spinnaker</title>
      <dc:creator>Ibrahim Salami</dc:creator>
      <pubDate>Fri, 26 Jul 2024 19:02:53 +0000</pubDate>
      <link>https://forem.com/aviator_co/comparing-flux-cd-argo-cd-and-spinnaker-5f4n</link>
      <guid>https://forem.com/aviator_co/comparing-flux-cd-argo-cd-and-spinnaker-5f4n</guid>
      <description>&lt;p&gt;Continuous delivery (CD) tools play a crucial role in modern software development workflows, enabling teams to automate the process of deploying applications. Among the available CD tools, Flux CD, Argo CD, and Spinnaker stand out for their unique features and capabilities. This article provides an in-depth comparison of these three tools. In it, we’ll explore their architectures, key features, integration capabilities, and ideal use cases, and we’ll go into each tool’s basic implementation.&lt;/p&gt;

&lt;p&gt;Comparing Flux CD, Argo CD, and Spinnaker is essential for organizations seeking the right CD tool to fit their specific requirements. By understanding the architectural differences, key features, and integration capabilities of each tool, teams can make informed decisions and optimize their deployment workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Brief introduction to Flux CD, Argo CD, and Spinnaker
&lt;/h2&gt;

&lt;p&gt;Flux CD, Argo CD, and Spinnaker are prominent players in the field of CD tools — each offers a unique approach to application deployment and management.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Flux CD:&lt;/strong&gt; Flux CD, or Flux, is an open-source tool that follows the GitOps methodology, where the desired state of the system is controlled in Git repositories. It continuously monitors these repositories for changes and automatically applies them to the Kubernetes cluster.&lt;br&gt;
&lt;strong&gt;- Argo CD:&lt;/strong&gt; Argo CD is another open-source tool designed for Kubernetes-native continuous deployment. It utilizes declarative YAML manifests in a Git repository to define the desired application state and synchronizes that with the actual state in the Kubernetes cluster.&lt;br&gt;
&lt;strong&gt;- Spinnaker:&lt;/strong&gt; Spinnaker is a more comprehensive CD platform that provides support for multicloud deployments. It offers advanced features such as automated canary analysis and pipeline orchestration, making it suitable for complex deployment scenarios.&lt;/p&gt;
&lt;h2&gt;
  
  
  Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Flux CD&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Flux is constructed with &lt;a href="https://fluxcd.io/flux/components" rel="noopener noreferrer"&gt;GitOps Toolkit components&lt;/a&gt;. In the Flux ecosystem, those components are Flux Controllers, composable APIs, and reusable Go packages. They’re used for developing CD workflows on Kubernetes using GitOps principles.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fle4lgspxwpu6qs92tz13.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fle4lgspxwpu6qs92tz13.png" alt="Image description" width="800" height="525"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Key components of Flux CD include the source controller, which provides a set of Kubernetes custom resources that let cluster administrators and automated operators manage Git and Helm repository tasks through a dedicated controller.&lt;/p&gt;

&lt;p&gt;You have the option of using the toolkit for expanding Flux capabilities and creating custom systems tailored for continuous delivery. A recommended starting point for this is &lt;a href="https://fluxcd.io/flux/gitops-toolkit/source-watcher" rel="noopener noreferrer"&gt;the source-watcher guide&lt;/a&gt;.&lt;/p&gt;
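&lt;p&gt;As a hedged illustration of the GitOps flow these components implement, the source controller can be pointed at a Git repository and a Kustomization can apply a path from it. The repository URL, path, and resource names below are placeholders, not part of the original article:&lt;/p&gt;

```yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: app-repo
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example/app-config   # placeholder repository
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: app
  namespace: flux-system
spec:
  interval: 5m
  sourceRef:
    kind: GitRepository
    name: app-repo
  path: ./deploy        # placeholder path inside the repository
  prune: true           # delete cluster objects removed from Git
```

&lt;p&gt;Flux reconciles these resources on the declared intervals, so a commit to the repository is picked up and applied to the cluster automatically.&lt;/p&gt;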

&lt;p&gt;&lt;strong&gt;Argo CD&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Argo CD operates as a Kubernetes controller, continually monitoring active applications and comparing their existing operational state with the intended target state defined in a Git repository. Applications that do not match the desired state are flagged as out of sync. After that, Argo CD provides reporting and visualization of these disparities, offering options for automatic or manual synchronization to bring the operational state in line with the desired target state.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpo81topj0w12s9cztyv2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpo81topj0w12s9cztyv2.png" alt="Image description" width="743" height="708"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Any modifications made to the desired target state in the Git repository are automatically applied and reflected in the specified target environments (usually a Kubernetes cluster). All the changes made are also displayed in the Argo CD UI.&lt;/p&gt;

&lt;p&gt;This architecture ensures automated application deployment and lifecycle management, aligning with the GitOps pattern of using Git repositories as the source of truth for defining application states. Argo CD supports several ways of specifying Kubernetes manifests, including plain directories of YAML/JSON manifests, kustomize applications, Helm charts, and Jsonnet files.&lt;/p&gt;
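
&lt;p&gt;As a sketch of what this looks like in practice, here is a minimal declarative Application manifest; the application name, repository URL, and path are illustrative placeholders rather than values from this article:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical Argo CD Application pointing at a Git repo of manifests
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook            # illustrative name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps.git
    targetRevision: HEAD
    path: guestbook          # plain directory of YAML manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:               # keep the live state in sync automatically
      prune: true
      selfHeal: true
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Applying a manifest like this with kubectl has the same effect as creating the application through the UI or CLI: Argo CD begins watching the repository and reconciling the cluster toward it.&lt;/p&gt;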

&lt;p&gt;Argo CD provides a CLI for automation and integration with CI pipelines, webhook integration with version control systems, and so on.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Spinnaker&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Spinnaker employs a microservices architecture comprising several components that interact to facilitate the deployment process. Core components of Spinnaker include the Deck UI for user interaction, the Gate API for authentication and authorization, and various cloud-specific Clouddriver services for interacting with cloud providers.&lt;/p&gt;

&lt;p&gt;The diagram below illustrates the interdependencies among microservices. The green rectangles denote “external” elements, such as the Deck UI, a single-page JavaScript application operating within your web browser. The gold rectangles signify Halyard components, which are utilized solely during the configuration of Spinnaker.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmikb6p4ecob0qatfsfij.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmikb6p4ecob0qatfsfij.png" alt="Image description" width="775" height="724"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Key features
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Flux CD&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- GitOps-based continuous delivery:&lt;/strong&gt; Flux CD leverages Git repositories as the source of truth for defining the desired state of the system.&lt;br&gt;
&lt;strong&gt;- Automated deployments: *&lt;em&gt;Flux CD automates the deployment process based on changes detected in Git repositories.&lt;br&gt;
*&lt;/em&gt;- Git repository synchronization:&lt;/strong&gt; Flux CD synchronizes Kubernetes resources with Git repositories, ensuring consistency between environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Argo CD&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Declarative GitOps application deployment:&lt;/strong&gt; Argo CD enables declarative application deployments using YAML manifests stored in Git repositories.&lt;br&gt;
&lt;strong&gt;- Rollback and version control:&lt;/strong&gt; Argo CD supports rollback functionality and maintains version control for application configurations.&lt;br&gt;
&lt;strong&gt;- SSO integration:&lt;/strong&gt; Argo CD provides integration with single sign-on (SSO) systems for authentication and access control.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Spinnaker&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Multi-cloud support:&lt;/strong&gt; Spinnaker offers native support for multiple cloud providers, allowing easy deployment across heterogeneous environments.&lt;br&gt;
&lt;strong&gt;- Automated canary analysis:&lt;/strong&gt; Spinnaker facilitates automated canary analysis for evaluating new versions of applications before pushing them to production.&lt;br&gt;
&lt;strong&gt;- Pipeline orchestration:&lt;/strong&gt; Spinnaker provides robust pipeline orchestration capabilities, enabling complex deployment workflows.&lt;/p&gt;
&lt;h2&gt;
  
  
  Integration and extensibility
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Flux CD&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Integration with Kubernetes and Helm:&lt;/strong&gt; Flux CD integrates easily with Kubernetes and Helm for managing containerized applications.&lt;br&gt;
&lt;strong&gt;- Extensibility through custom controllers:&lt;/strong&gt; Flux CD allows extending the Kubernetes API with custom resource definitions and validation webhooks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Argo CD&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Kubernetes native integration:&lt;/strong&gt; Argo CD is tightly integrated with Kubernetes, leveraging custom resource definitions (CRDs) for managing application deployments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Spinnaker&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Integration with major cloud providers:&lt;/strong&gt; Spinnaker provides out-of-the-box integration with major cloud providers such as AWS, Google Cloud Platform (GCP), and Microsoft Azure.&lt;br&gt;
&lt;strong&gt;- Extensibility through custom stages and plugins:&lt;/strong&gt; It supports extensibility through custom stages and plugins, allowing users to integrate with additional services and tools.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use cases and best practices
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Flux CD&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Flux CD is suitable for small- to medium-scale Kubernetes deployments. It’s ideal for teams practicing GitOps methodologies, where the entire deployment process is managed through version-controlled Git repositories. Its composable, toolkit-based design also makes it more flexible than Argo CD.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Argo CD&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Argo CD is good for DevOps teams looking for Kubernetes-native continuous deployment solutions. It’s recommended for CI/CD pipelines requiring declarative application definitions stored in Git repositories.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Spinnaker&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Spinnaker is recommended for enterprises with complex, multi-cloud deployment requirements because of its robust multi-cloud support. It’s ideal for organizations needing advanced CD workflows, including canary deployments and automated analysis. It’s more flexible than Flux CD and Argo CD but harder to get started with.&lt;/p&gt;
&lt;h2&gt;
  
  
  Examples of how to use Flux CD, Argo CD, and Spinnaker
&lt;/h2&gt;

&lt;p&gt;This section will cover the basics of how to set up and use Flux CD, Argo CD, and Spinnaker — it’s meant to give you an idea of what you’re getting into before you implement a CD tool in a real project. To follow the steps, you should have a cluster running.&lt;/p&gt;
&lt;h2&gt;
  
  
  How to use Flux CD
&lt;/h2&gt;

&lt;p&gt;Using Flux CD involves setting up a Git repository to store your Kubernetes manifests and configuring Flux CD to synchronize these manifests with your Kubernetes cluster. Here’s a step-by-step guide:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Install Flux CD&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You need to install the Flux CLI to run Flux commands. With Bash on macOS and Linux, you can use the following command (other installation methods are covered in the &lt;a href="https://fluxcd.io/flux/installation/#install-the-flux-cli" rel="noopener noreferrer"&gt;CLI install documentation&lt;/a&gt;):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;curl -s https://fluxcd.io/install.sh | sudo bash&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You can check whether it installed properly with the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;flux check --pre # use sudo if you get an error like "connection refused"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Configure GitHub credentials&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Flux needs your GitHub credentials in order to log in and perform some actions on your repository. Export your GitHub personal access token and username:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export GITHUB_TOKEN=
export GITHUB_USER=
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Install Flux CD onto your cluster&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;flux bootstrap github&lt;/code&gt; command deploys the Flux controllers on a Kubernetes cluster and configures them to synchronize the cluster’s state with a Git repository. It also uploads the Flux manifests to the Git repository and configures Flux CD to update itself automatically based on changes in the Git repository.&lt;/p&gt;

&lt;p&gt;To do this, run the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo $GITHUB_TOKEN | flux bootstrap github \
  --owner=$GITHUB_USER \
  --repository= \
  --branch=main \
  --path=./flux-clusters \
  --personal \
  --private=false
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;The bootstrap command above does the following:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creates a Git repository (in my case, flux-test-app) on your GitHub account.&lt;/li&gt;
&lt;li&gt;Adds Flux component manifests to the repository.&lt;/li&gt;
&lt;li&gt;Deploys Flux components to your Kubernetes cluster. You can run &lt;code&gt;kubectl get all -n flux-system&lt;/code&gt; to check out the components.&lt;/li&gt;
&lt;li&gt;Configures the Flux components to track the path /flux-clusters in the repository.&lt;/li&gt;
&lt;li&gt;Creates a public repository, because of the --private=false flag.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Your output will look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhj8x1ev2jhbmzao7t203.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhj8x1ev2jhbmzao7t203.png" alt="Image description" width="800" height="474"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Add Podinfo repository to Flux CD (or any repository you want)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, clone the repository you created (in my case, flux-test-app) to your local machine:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/$GITHUB_USER/flux-test-app
cd flux-test-app
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Now run the following to create a &lt;a href="https://fluxcd.io/flux/components/source/gitrepositories" rel="noopener noreferrer"&gt;GitRepository&lt;/a&gt; manifest pointing to the &lt;a href="http://github.com/stefanprodan/podinfo" rel="noopener noreferrer"&gt;github.com/stefanprodan/podinfo&lt;/a&gt; master branch. Podinfo is a web application written in Go.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;flux create source git podinfo \
  --url=https://github.com/stefanprodan/podinfo \
  --branch=master \
  --interval=2m \
  --export &amp;gt; ./flux-clusters/podinfo-source.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;In the command above:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A GitRepository named podinfo is created.&lt;/li&gt;
&lt;li&gt;The source-controller checks the Git repository every two minutes, as indicated by the --interval flag.&lt;/li&gt;
&lt;li&gt;It clones the master branch of the &lt;a href="https://github.com/stefanprodan/podinfo" rel="noopener noreferrer"&gt;https://github.com/stefanprodan/podinfo&lt;/a&gt; repository.&lt;/li&gt;
&lt;li&gt;When the current GitRepository revision differs from the latest fetched revision, a new Artifact is archived.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After the command is run, you should have the corresponding file &lt;code&gt;podinfo-source.yaml&lt;/code&gt;.&lt;/p&gt;
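
&lt;p&gt;The generated manifest should look roughly like the following sketch (the exact apiVersion depends on your Flux release):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# podinfo-source.yaml -- GitRepository produced by the flux create command above
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: podinfo
  namespace: flux-system
spec:
  interval: 2m0s          # poll the repository every two minutes
  ref:
    branch: master
  url: https://github.com/stefanprodan/podinfo
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;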

&lt;p&gt;&lt;strong&gt;Step 5: Deploy the podinfo application using GitOps&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Configure Flux CD to build and apply the &lt;a href="https://github.com/stefanprodan/podinfo/tree/master/kustomize" rel="noopener noreferrer"&gt;kustomize&lt;/a&gt; directory located in the podinfo repository. This directory contains the Kubernetes deployment files.&lt;/p&gt;

&lt;p&gt;Use the following flux create command to create a &lt;a href="https://fluxcd.io/flux/components/kustomize/kustomizations/" rel="noopener noreferrer"&gt;Kustomization&lt;/a&gt; that applies the podinfo deployment:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;flux create kustomization podinfo \
  --target-namespace=default \
  --source=podinfo \
  --path="./kustomize" \
  --prune=true \
  --wait=true \
  --interval=10m \
  --retry-interval=2m \
  --export &amp;gt; ./flux-clusters/podinfo-kustomization.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;In the command above:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Flux GitRepository named podinfo is created that clones the master branch and makes the repository content available as an Artifact inside the cluster.&lt;/li&gt;
&lt;li&gt;A Flux Kustomization named podinfo is created that watches the GitRepository for Artifact changes.&lt;/li&gt;
&lt;li&gt;The Kustomization builds the YAML manifests located at the path specified by --path="./kustomize", validates the objects against the Kubernetes API, and applies them on the cluster.&lt;/li&gt;
&lt;li&gt;The --interval=10m flag sets the Kustomization to run a server-side dry-run every ten minutes to detect and correct drift inside the cluster.&lt;/li&gt;
&lt;li&gt;The --retry-interval=2m flag specifies the interval (two minutes) at which to retry a failed reconciliation.&lt;/li&gt;
&lt;li&gt;When the Git revision changes, the manifests are reconciled automatically. Because --prune=true is enabled, objects that were previously applied but are missing from the current revision are deleted from the cluster.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After the command is run, you should have the corresponding file &lt;code&gt;podinfo-kustomization.yaml&lt;/code&gt;.&lt;/p&gt;
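
&lt;p&gt;That file should look roughly like this sketch (again, the apiVersion may differ by Flux release):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# podinfo-kustomization.yaml -- Kustomization produced by the command above
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: podinfo
  namespace: flux-system
spec:
  interval: 10m0s          # dry-run every ten minutes to detect drift
  retryInterval: 2m0s      # retry failed reconciliations every two minutes
  targetNamespace: default
  sourceRef:
    kind: GitRepository
    name: podinfo
  path: ./kustomize
  prune: true              # delete objects removed from the repository
  wait: true
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;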

&lt;p&gt;Now commit and push the manifests to the repository:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git add -A &amp;amp;&amp;amp; git commit -m "Add podinfo manifests"
git push
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;After about ten minutes, your application should be running on your cluster. You can check with the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo kubectl -n default get deployments,services&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Output:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fct11uejvkin2g0n9k19s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fct11uejvkin2g0n9k19s.png" alt="Image description" width="800" height="116"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How to use Argo CD
&lt;/h2&gt;

&lt;p&gt;To use Argo CD, you typically install Argo CD onto your Kubernetes cluster, configure Argo CD to watch your application manifests in a Git repository, and then let Argo CD synchronize the desired state of your applications with the actual state running in your cluster.&lt;/p&gt;

&lt;p&gt;Here’s a basic guide to get started:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Install Argo CD onto your Kubernetes cluster&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can install Argo CD using Kubernetes manifests. Below is an example of how you can install Argo CD using kubectl:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Also install the &lt;a href="https://argo-cd.readthedocs.io/en/stable/cli_installation" rel="noopener noreferrer"&gt;Argo CD CLI&lt;/a&gt; to run the argocd commands in later steps.&lt;/p&gt;

&lt;p&gt;Now change the argocd-server service type to LoadBalancer with the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Access the Argo CD UI&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once it’s installed, you can access the Argo CD UI via a port forward or by exposing the service externally. Here’s how to port forward:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl port-forward svc/argocd-server -n argocd 8080:443&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You can then access the Argo CD UI by navigating to &lt;a href="https://localhost:8080" rel="noopener noreferrer"&gt;https://localhost:8080&lt;/a&gt; in your web browser (the API server serves TLS, so your browser may warn about a self-signed certificate).&lt;/p&gt;

&lt;p&gt;The initial password for the admin (login username) account is automatically generated and saved as plain text in the password field within a secret named argocd-initial-admin-secret in your Argo CD installation namespace. To easily obtain this password, you can run the following argocd admin command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;argocd admin initial-password -n argocd&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Using the username admin and the password from above, log in to Argo CD’s host:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;argocd login https://localhost:8080/&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Create an app on Kubernetes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, you need to set the current namespace from default to argocd by running the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl config set-context --current --namespace=argocd&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Next, deploy a sample application to the Kubernetes cluster using YAML manifests. This manifest is on &lt;a href="https://github.com/khabdrick/argocd-example-apps.git" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; so you can check out the content. &lt;/p&gt;

&lt;p&gt;Create the example application with the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;argocd app create guestbook --repo https://github.com/khabdrick/argocd-example-apps.git --path . --dest-server https://kubernetes.default.svc --dest-namespace default&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;If you’re using a different repository, update the --repo URL and the --path value in the command as appropriate.&lt;/p&gt;

&lt;p&gt;In the Argo CD UI, you will see that your app has been deployed and synchronized successfully.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb9ia75xv57p3s63e03kg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb9ia75xv57p3s63e03kg.png" alt="Image description" width="800" height="582"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Argo CD will now start monitoring the Git repository for changes and automatically synchronize the application to the desired state specified in the manifests. By default, Argo CD refreshes about every three minutes, then synchronizes and applies any changes found in the repository.&lt;/p&gt;

&lt;p&gt;This is a basic guide to get started with Argo CD. Depending on your specific use case and requirements, you may need to explore more advanced features and configurations. &lt;/p&gt;

&lt;h2&gt;
  
  
  How to use Spinnaker
&lt;/h2&gt;

&lt;p&gt;To install Spinnaker, you need &lt;a href="https://spinnaker.io/docs/reference/halyard/" rel="noopener noreferrer"&gt;Halyard&lt;/a&gt;. Halyard is a tool used to configure and manage Spinnaker deployments. This section outlines the process of setting up Spinnaker with a MySQL database on Kubernetes. We’ll start by running Halyard in a Docker container.&lt;/p&gt;

&lt;p&gt;Note: For this section, I will use a Kubernetes cluster from &lt;a href="https://docs.docker.com/desktop/" rel="noopener noreferrer"&gt;Docker Desktop&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Setting up a MySQL database&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To begin, deploy a MySQL database using Kubernetes and the MariaDB Docker image.&lt;/p&gt;

&lt;p&gt;(Try to use a more secure password.)&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl run mysql --image=mariadb:10.2 --env="MYSQL_ROOT_PASSWORD"="123" --env="MYSQL_DATABASE"="front50"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This command creates a MySQL instance named mysql, setting the root password and creating a database named front50. This will be used to configure &lt;a href="https://spinnaker.io/docs/setup/productionize/persistence/front50-sql" rel="noopener noreferrer"&gt;Front50&lt;/a&gt;. Front50 serves as the persistent storage and retrieval mechanism for Spinnaker’s pipeline configurations, application details, and other metadata.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Configuring Halyard&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Next, we configure Halyard by creating a container that runs Halyard:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run --name halyard --rm \
  -v ~/.kube:/home/spinnaker/.kube \
  -it us-docker.pkg.dev/spinnaker-community/docker/halyard:stable
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;p&gt;In another terminal window, enter the Halyard container:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker exec -it halyard bash&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Once inside the Halyard container, configure the Spinnaker version:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;hal config version
hal config version edit --version 
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Enable Kubernetes as a &lt;a href="https://spinnaker.io/docs/setup/install/providers/#:~:text=In%20Spinnaker%2C%20providers%20are%20integrations,your%20applications%20via%20those%20accounts." rel="noopener noreferrer"&gt;provider&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;hal config provider kubernetes enable&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Add a Kubernetes account; docker-desktop in the command below is the context of the cluster running on Docker Desktop:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;hal config provider kubernetes account add my-account --context docker-desktop&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now associate your Kubernetes account (my-account) with Halyard:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;hal config deploy edit --type distributed --account-name my-account&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Configure storage using Redis. This will be changed later, since Halyard doesn’t allow setting MySQL directly:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;hal config storage edit --type redis&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now enable artifacts. The Artifacts feature in Spinnaker allows the system to manage and deploy artifacts (such as Docker images, JAR files, and Debian packages) as part of your deployment pipelines: &lt;/p&gt;

&lt;p&gt;&lt;code&gt;hal config features edit --artifacts true&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Configuring Spinnaker to use MySQL&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Next, you have to configure Spinnaker to use the MySQL database. Create the /home/spinnaker/.hal/default/profiles/front50-local.yml file and insert the following configurations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sql:
  enabled: true
  connectionPools:
    default:
      default: true
      jdbcUrl: jdbc:mysql://MYSQL_IP_ADDRESS:3306/front50
      user: root
      password: 123
  migration:
    user: root
    password: 123
    jdbcUrl: jdbc:mysql://MYSQL_IP_ADDRESS:3306/front50
spinnaker:
  redis:
    enabled: false
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace MYSQL_IP_ADDRESS with the appropriate IP address, and make sure the other credentials match what you used to deploy MySQL earlier.&lt;/p&gt;

&lt;p&gt;You can get the MySQL IP by running the following command (outside the Halyard container):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl get pods -o wide&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Apply the deployment (in the Halyard container). This command applies the changes made to the Spinnaker configuration and deploys or updates Spinnaker in the target environment:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;hal deploy apply&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now you can check whether all the pods are running:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl get pods -n spinnaker&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;We need the deck and gate pods to be running so we can access the Spinnaker UI. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmgut9xnbt27v3vlz4oxx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmgut9xnbt27v3vlz4oxx.png" alt="Image description" width="800" height="227"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we can port-forward the deck and gate pods so that we can access them in the browser. Do this with the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl -n spinnaker port-forward &amp;lt;spin-deck-pod-name&amp;gt; 9000&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;In another terminal, forward the gate pod:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl -n spinnaker port-forward &amp;lt;spin-gate-pod-name&amp;gt; 8084&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now you can access the Spinnaker UI at &lt;a href="http://localhost:9000/" rel="noopener noreferrer"&gt;http://localhost:9000/&lt;/a&gt; and start developing your pipelines.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fym1j00beoxht09aqtdak.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fym1j00beoxht09aqtdak.png" alt="Image description" width="800" height="241"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Flux CD, Argo CD, and Spinnaker offer distinct advantages and cater to different use cases within the realm of continuous delivery. By evaluating their architectures, features, and integrations, you can make informed decisions about the best way to automate your deployment and delivery processes.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Adopting OpenTofu as an Alternative to Terraform</title>
      <dc:creator>Ankit Jain</dc:creator>
      <pubDate>Mon, 18 Mar 2024 22:20:21 +0000</pubDate>
      <link>https://forem.com/aviator_co/adopting-opentofu-as-an-alternative-to-terraform-gb6</link>
      <guid>https://forem.com/aviator_co/adopting-opentofu-as-an-alternative-to-terraform-gb6</guid>
      <description>&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/Infrastructure_as_code" rel="noopener noreferrer"&gt;Infrastructure as code (IaC)&lt;/a&gt; is a concept for deploying infrastructure like a software application. Ongoing advancements in infrastructure virtualization have made this possible. The rise of cloud computing transformed physical servers, networks, and hardware infrastructure into virtualized services. Add assets like databases, domain name service (DNS) providers, and authentication systems, and you get infrastructure that can be deployed like software.&lt;/p&gt;

&lt;p&gt;With a few lines of configuration script, you can deploy operating systems, networks, and storage to a remote data center with a single command. The system can be kept in sync with updates to the configuration, which makes changing or redeploying the infrastructure a breeze.&lt;/p&gt;

&lt;p&gt;IaC should not be confused with configuration management. Tools in this category, such as Chef, Puppet, or Ansible, typically deploy and configure software components on existing infrastructure. &lt;a href="https://www.hashicorp.com/products/terraform" rel="noopener noreferrer"&gt;Terraform&lt;/a&gt;, on the other hand, is a provisioning tool that specializes in setting up infrastructure components like operating systems, networks, databases, users, and permissions on bare server hardware. Terraform can play hand-in-hand with configuration management tools by providing the infrastructure necessary for deploying software components.  &lt;/p&gt;

&lt;p&gt;When Terraform's licensing changed, an open-source fork called &lt;a href="https://opentofu.org/" rel="noopener noreferrer"&gt;OpenTofu&lt;/a&gt; was spun off. In this article, you'll learn about OpenTofu's features and how OpenTofu compares to Terraform. &lt;/p&gt;

&lt;p&gt;The goal of this article is to help you make an informed decision about using OpenTofu to manage your infrastructure requirements as code.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is OpenTofu?
&lt;/h2&gt;

&lt;p&gt;OpenTofu provides a simple but powerful configuration language to describe the infrastructure that you want to deploy. You specify the number and size of virtual servers to deploy to, the operating system to use, the virtual network overlays, the DNS settings, and anything else your infrastructure needs.&lt;/p&gt;

&lt;p&gt;Then you run a tool that reads configuration scripts written in a &lt;a href="https://opentofu.org/docs/language/" rel="noopener noreferrer"&gt;specialized configuration language&lt;/a&gt; and interacts with infrastructure resources called providers to build and deploy the required parts of the infrastructure.&lt;/p&gt;

&lt;p&gt;If anything needs to be changed, you update the script and run the tool again. The tool then updates the actual state of your infrastructure to match the description.&lt;/p&gt;
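
&lt;p&gt;As a rough sketch of what such a configuration script looks like, here is a minimal OpenTofu (HCL) example; the provider, region, and AMI ID are illustrative placeholders, not recommendations:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# main.tf -- hypothetical minimal configuration for a single virtual server
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = "us-east-1"    # placeholder region
}

resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0"  # placeholder AMI ID
  instance_type = "t3.micro"
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;With this file saved, running tofu init, tofu plan, and tofu apply would download the provider, show the execution plan, and create the instance.&lt;/p&gt;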

&lt;h3&gt;
  
  
  OpenTofu and Terraform
&lt;/h3&gt;

&lt;p&gt;You may have come across the name Terraform in the context of IaC. Terraform and OpenTofu are closely related: OpenTofu is a fork of Terraform 1.5.&lt;/p&gt;

&lt;p&gt;Years ago, a company called &lt;a href="https://www.hashicorp.com/" rel="noopener noreferrer"&gt;HashiCorp&lt;/a&gt; created Terraform, among other DevOps tools like &lt;a href="https://www.vagrantup.com/" rel="noopener noreferrer"&gt;Vagrant&lt;/a&gt;, &lt;a href="https://www.packer.io/" rel="noopener noreferrer"&gt;Packer&lt;/a&gt;, &lt;a href="https://www.nomadproject.io/" rel="noopener noreferrer"&gt;Nomad&lt;/a&gt;, &lt;a href="https://www.vaultproject.io/" rel="noopener noreferrer"&gt;Vault&lt;/a&gt;, and &lt;a href="https://www.consul.io/" rel="noopener noreferrer"&gt;Consul&lt;/a&gt;. All these tools aim at automating software orchestration and infrastructure management.&lt;/p&gt;

&lt;p&gt;Terraform's approach to provisioning infrastructure consists of a three-step workflow: write, plan, and apply. In the write step, you define the resources needed to run the infrastructure. In the plan step, Terraform compares the written infrastructure definition against the existing infrastructure and creates an execution plan for creating, updating, or destroying resources as needed. Finally, in the apply step, Terraform applies the planned changes to the target infrastructure.&lt;/p&gt;
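
&lt;p&gt;OpenTofu exposes the same write-plan-apply workflow through its CLI. A typical session, assuming a configuration already exists in the current directory, looks roughly like this:&lt;/p&gt;

```shell
tofu init   # download the providers referenced in the configuration
tofu plan   # compare the written definition against the existing infrastructure
tofu apply  # apply the planned changes to the target infrastructure
```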

&lt;p&gt;To support as many infrastructure resources as possible, Terraform uses a plugin system through which resource providers, such as DNS services, container orchestration systems, database services, or logging systems, can be included.&lt;/p&gt;

&lt;h4&gt;
  
  
  Terraform's License Change
&lt;/h4&gt;

&lt;p&gt;Terraform had become a popular IaC tool when HashiCorp, its maker, decided to change the license for their tools from the &lt;a href="https://www.mozilla.org/en-US/MPL/2.0/" rel="noopener noreferrer"&gt;Mozilla Public License (MPL) 2.0&lt;/a&gt; to the &lt;a href="https://www.hashicorp.com/bsl" rel="noopener noreferrer"&gt;Business Source License (BSL)&lt;/a&gt;. This move aimed to protect HashiCorp from competitors who could set up hosted Terraform offers just like HashiCorp had and reap the benefits without contributing back.&lt;/p&gt;

&lt;p&gt;HashiCorp is not the first company to make this move. Earlier, companies like &lt;a href="https://www.mongodb.com/" rel="noopener noreferrer"&gt;MongoDB&lt;/a&gt;, &lt;a href="https://redis.com/" rel="noopener noreferrer"&gt;Redis&lt;/a&gt;, and &lt;a href="https://www.cockroachlabs.com/" rel="noopener noreferrer"&gt;Cockroach Labs&lt;/a&gt; also decided to restrict the ability to resell their (otherwise open source) code for similar reasons.&lt;/p&gt;

&lt;p&gt;Nevertheless, HashiCorp's announcement raised concerns among Terraform users about the possible legal implications of the license switch. It didn't take long before the latest MPL-licensed Terraform version was forked. The fork was originally named OpenTF but was later renamed OpenTofu due to trademark concerns.&lt;/p&gt;

&lt;p&gt;Because Terraform 1.6.x and beyond will be under the new BSL, the OpenTofu project can no longer incorporate changes made to Terraform. While OpenTofu aims to keep feature parity with Terraform, and both projects come with a backward compatibility promise &lt;a href="https://developer.hashicorp.com/terraform/language/v1-compatibility-promises" rel="noopener noreferrer"&gt;(1)&lt;/a&gt; and &lt;a href="https://opentofu.org/docs/language/v1-compatibility-promises" rel="noopener noreferrer"&gt;(2)&lt;/a&gt;, the two projects may slowly diverge.&lt;/p&gt;

&lt;h4&gt;
  
  
  Changes Introduced after the Fork
&lt;/h4&gt;

&lt;p&gt;After the fork, the &lt;a href="https://github.com/opentofu/opentofu/blob/v1.6/CHANGELOG.md#160-unreleased" rel="noopener noreferrer"&gt;changelog for OpenTofu 1.6.0&lt;/a&gt; lists several significant changes, including the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Conditional GNU Privacy Guard (GPG) validation bypass for the default registry&lt;/li&gt;
&lt;li&gt;Changes to the &lt;code&gt;cloud&lt;/code&gt; and &lt;code&gt;remote&lt;/code&gt; backends and the &lt;code&gt;login&lt;/code&gt; and &lt;code&gt;logout&lt;/code&gt; commands that no longer default to &lt;code&gt;app.terraform.io&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;A new &lt;code&gt;tofu test&lt;/code&gt; command that significantly changes how tests are written and executed&lt;/li&gt;
&lt;li&gt;Several enhancements and bug fixes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Additionally, the OpenTofu community continues to add changes to its &lt;a href="https://github.com/opentofu/opentofu/milestones" rel="noopener noreferrer"&gt;roadmap&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Is OpenTofu a Viable Alternative to Terraform?
&lt;/h2&gt;

&lt;p&gt;If you're in search of an IaC tool, you'll need to decide between Terraform and OpenTofu. Existing Terraform users have three options to consider: stay with Terraform, switch to OpenTofu now, or stick with Terraform 1.5.x for a while and watch future developments closely.&lt;/p&gt;

&lt;p&gt;For both audiences, there are numerous reasons for choosing the free and open source (FOSS) alternative.&lt;/p&gt;

&lt;h3&gt;
  
  
  Compatibility
&lt;/h3&gt;

&lt;p&gt;As of this writing, OpenTofu is fully compatible with Terraform 1.5.x, but the list of features will increasingly diverge in the future. The &lt;a href="https://opentofu.org/faq" rel="noopener noreferrer"&gt;OpenTofu FAQ&lt;/a&gt; has a clear statement about a community-driven approach shaping OpenTofu's future: "The community will decide what features OpenTofu will have."&lt;/p&gt;

&lt;p&gt;However, OpenTofu will remain backward-compatible so that existing Terraform configurations (made with Terraform up to 1.5.x) continue to work with future versions of OpenTofu (per the &lt;a href="https://opentofu.org/manifesto/" rel="noopener noreferrer"&gt;OpenTofu Manifesto&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;Additionally, the OpenTofu core team confidently &lt;a href="https://opentofu.org/faq/#decisions" rel="noopener noreferrer"&gt;predicts&lt;/a&gt; that "the large number of developers pledging their resources to help develop OpenTofu will accelerate the development of features and enable faster releases than Terraform managed previously." In other words, features that Terraform users have been anticipating for some time may come to life faster with OpenTofu than with Terraform.&lt;/p&gt;

&lt;h3&gt;
  
  
  A Free and Open License
&lt;/h3&gt;

&lt;p&gt;The compatibility question may fade in the future when OpenTofu and Terraform are established as two independent projects and interested parties have a decent list of features to compare before settling on one of the tools. Switching between the two will become less frequent.&lt;/p&gt;

&lt;p&gt;What will remain is the question of the license under which each of the projects is offered. Terraform's BSL is a source-available license that does not guarantee that users can use the code in their preferred manner. Notably, users may not include Terraform in offerings to third parties that compete with HashiCorp's offerings. (See the "Additional Use Grant" in the &lt;a href="https://www.hashicorp.com/bsl" rel="noopener noreferrer"&gt;HashiCorp BSL&lt;/a&gt;.)&lt;/p&gt;

&lt;p&gt;In contrast, OpenTofu is licensed under the MPL 2.0, a true FOSS license that grants users the right to use the code as they wish. Having a true FOSS license means that development is driven by the community that was built around OpenTofu, which is large and thriving. The &lt;a href="https://opentofu.org/supporters" rel="noopener noreferrer"&gt;list of supporters&lt;/a&gt; includes more than 150 companies and more than 780 individuals.&lt;/p&gt;

&lt;p&gt;Additionally, OpenTofu has been adopted by the Linux Foundation, a step that guarantees vendor-neutral governance of the project. The Linux Foundation maintains several projects that are used worldwide, including &lt;a href="https://www.linux.org/" rel="noopener noreferrer"&gt;Linux&lt;/a&gt;, &lt;a href="https://kubernetes.io/" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt;, and &lt;a href="https://nodejs.org/en/" rel="noopener noreferrer"&gt;Node.js&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  When to Choose OpenTofu over Terraform
&lt;/h2&gt;

&lt;p&gt;Reasons for choosing OpenTofu vary based on the user.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Companies&lt;/strong&gt; may have the largest incentive to choose OpenTofu. Use cases for an IaC tool are constantly at risk of conflicting with the BSL. Moreover, the Terraform project changed its license unexpectedly, and it may do so again. With a true open source license and under the umbrella of a nonprofit foundation, OpenTofu provides a far more predictable future.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Digital agencies and consultants&lt;/strong&gt; should strive to offer their clients a solution whose legal basis is predictably stable and immune to the opaque decisions of a single company. OpenTofu is not only that. It's also backed by a &lt;a href="https://opentofu.org/supporters/" rel="noopener noreferrer"&gt;large list of supporters&lt;/a&gt; who joined forces to enhance and extend OpenTofu based on the wishes of the community.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Individual users&lt;/strong&gt; may feel unaffected by Terraform's switch to the BSL. Even so, there is little reason not to use OpenTofu instead, which comes under a more open license. While the BSL might have little effect on individual, noncommercial users, nobody knows what possible future changes to the Terraform license will bring. Choosing an open source project now, while switching costs are low, is the sensible choice.&lt;/p&gt;

&lt;h2&gt;
  
  
  Can Terraform Still Be a Good Choice?
&lt;/h2&gt;

&lt;p&gt;With all the obvious advantages of truly open-source projects, it would seem that no one would choose to use Terraform. Don't judge too quickly, though. It's not all black and white. &lt;/p&gt;

&lt;p&gt;For instance, Terraform's availability as a cloud service, paired with HashiCorp's enterprise-level support, could be a combination that's difficult for competitors to match.&lt;br&gt;
Moreover, some customers might conclude that their Terraform use cases are not negatively affected by Terraform's switch from an open-source license to a source-available license.&lt;/p&gt;

&lt;p&gt;Granted, these advantages are rather specific to particular customers and use cases. OpenTofu might catch up on these aspects quickly—never underestimate the power of open source.&lt;/p&gt;

&lt;h2&gt;
  
  
  OpenTofu Continues to Evolve
&lt;/h2&gt;

&lt;p&gt;As mentioned previously, OpenTofu and Terraform are likely to take different paths in the future. With hundreds of supporters committed to shaping the future of OpenTofu, the open source path is likely the one that leads to a more feature-rich and mature product than Terraform.&lt;/p&gt;

&lt;p&gt;For example, in August 2023, &lt;a href="https://twitter.com/OpenTofuOrg/status/1696597790661677207" rel="noopener noreferrer"&gt;OpenTofu announced&lt;/a&gt; an experimental implementation of end-to-end encryption for state files, a feature that Terraform users have been &lt;a href="https://github.com/hashicorp/terraform/issues/516" rel="noopener noreferrer"&gt;waiting for since 2014&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Chances are that there will be more of these initiatives to implement long-awaited features soon.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Future of IaC Tools Is Open Source
&lt;/h2&gt;

&lt;p&gt;The power of open source should not be underestimated. For any company, consultant, or developer whose business model might be threatened by the sudden license change of Terraform, the OpenTofu project provides a solid open source alternative.&lt;/p&gt;

&lt;p&gt;OpenTofu is compatible with Terraform and aims to stay that way. It's a Linux Foundation project free of business tactics and legal insecurity. And it's backed by a large and active community.&lt;/p&gt;

&lt;p&gt;While the BSL forbids certain kinds of uses of Terraform, the MPL 2.0 grants users complete freedom in using OpenTofu. The Linux Foundation has a track record of widely successful open source projects, and OpenTofu is a worthy addition.&lt;/p&gt;

&lt;p&gt;Existing Terraform users may envision a few risks when making the switch to OpenTofu, but those risks are more than mitigated by the stability that an open source, foundation-led project provides. The best time for making the switch is now, while the two projects are still similar enough to enable a smooth transition.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>opentofu</category>
      <category>iac</category>
    </item>
    <item>
      <title>Pre and post-merge tests using a merge queue</title>
      <dc:creator>Ankit Jain</dc:creator>
      <pubDate>Thu, 14 Mar 2024 22:43:06 +0000</pubDate>
      <link>https://forem.com/aviator_co/pre-and-post-merge-tests-using-a-merge-queue-478g</link>
      <guid>https://forem.com/aviator_co/pre-and-post-merge-tests-using-a-merge-queue-478g</guid>
      <description>&lt;p&gt;One of the key ingredients making developers productive is faster feedback loops. A fast feedback loop allows developers to identify and address issues promptly, leading to higher-quality code and faster release cycles.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pre-merge and Post-merge tests
&lt;/h2&gt;

&lt;p&gt;To maintain the good health of your system, a few types of tests may be required. Running these tests can take anywhere from under a second to several hours.&lt;/p&gt;

&lt;p&gt;In the traditional waterfall model, testing is often a phase that occurs after development. In agile development and CI/CD workflows, however, the emphasis is on catching issues early. But running all the tests can be extremely time-consuming and provides slow feedback to the developers.&lt;/p&gt;

&lt;p&gt;Instead, consider dividing tests into “pre-merge” and “post-merge” buckets. For instance, explore how LinkedIn &lt;a href="https://engineering.linkedin.com/blog/2020/continuous-integration" rel="noopener noreferrer"&gt;manages pre-merge and post-merge workflows&lt;/a&gt;.&lt;/p&gt;
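
&lt;p&gt;As a rough sketch of this split, a CI configuration can trigger the fast suite on pull requests and the slower suite on pushes to mainline. The example below uses GitHub Actions syntax; the job steps are placeholders for your own commands.&lt;/p&gt;

```yaml
# Illustrative split: fast pre-merge checks vs. slower post-merge suites.
name: tests

on:
  pull_request:        # pre-merge: fast feedback on every PR
  push:
    branches: [main]   # post-merge: slower suites after landing on mainline

jobs:
  pre-merge:
    if: github.event_name == 'pull_request'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make lint test-unit       # placeholder commands

  post-merge:
    if: github.event_name == 'push'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test-regression      # placeholder command
```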

&lt;h2&gt;
  
  
  Splitting the tests
&lt;/h2&gt;

&lt;p&gt;There are a couple of considerations for splitting the tests effectively:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The pre-merge tests should be faster to execute, ensuring a fast feedback cycle for the developers&lt;/li&gt;
&lt;li&gt;On the other hand, the post-merge tests should generally be more stable. If these tests fail often, we end up in a constant state of a failed mainline. But we should still expect these tests to fail occasionally. If a test doesn’t ever fail, it’s not worth running!&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So let’s apply these criteria to some types of tests. The answers may vary depending on your setup but should be generally agreeable.&lt;/p&gt;

&lt;h3&gt;
  
  
  Code linting
&lt;/h3&gt;

&lt;p&gt;Pre-merge: Linting could catch a bug early in the developer lifecycle, and is both cheap and fast to run.&lt;/p&gt;

&lt;h3&gt;
  
  
  Unit tests
&lt;/h3&gt;

&lt;p&gt;Pre-merge: They help catch bugs at the smallest level, ensuring that each piece of the puzzle works independently.&lt;/p&gt;

&lt;h3&gt;
  
  
  Integration testing
&lt;/h3&gt;

&lt;p&gt;Pre-merge: Integration tests verify the interactions between various modules, ensuring a cohesive and functional application.&lt;/p&gt;

&lt;h3&gt;
  
  
  Automated regression testing
&lt;/h3&gt;

&lt;p&gt;Post-merge: These tests identify previously working functionality that may have regressed. We expect them to fail less often, but they are still essential to maintain code health.&lt;/p&gt;

&lt;h3&gt;
  
  
  Performance testing
&lt;/h3&gt;

&lt;p&gt;Post-merge: As changes are deployed over time, the performance of the software may be impacted; this is where performance testing is important. We do not expect every change to impact performance, and these tests should not fail often.&lt;/p&gt;

&lt;h3&gt;
  
  
  User Acceptance Testing (UAT)
&lt;/h3&gt;

&lt;p&gt;Post-merge: Any type of manual QA or UAT should be handled post-merge as we expect these to be extremely slow. In most cases, if we identify a failure, it is typically resolved in future releases.&lt;/p&gt;

&lt;h2&gt;
  
  
  Managing the split
&lt;/h2&gt;

&lt;p&gt;Now that we have the tests split, we can still catch most of the issues in the development cycle, and provide fast feedback to the developer. But what happens when the post-merge tests fail?&lt;/p&gt;

&lt;p&gt;In most cases, you coordinate with the release team to roll back the change that caused the failure or roll forward a fix manually. This manual process can be annoying and can also cause poor developer experience.&lt;/p&gt;

&lt;p&gt;This is where MergeQueue could play an interesting role.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding the MergeQueue
&lt;/h2&gt;

&lt;p&gt;Before delving into the intricacies of managing pre and post-merge tests with the queue, let’s take a moment to understand the role of MergeQueue in the CI/CD pipeline. MergeQueue acts as a gatekeeper, managing code merges and orchestrating the deployment process. If you are unfamiliar with merge queues in general, &lt;a href="https://www.aviator.co/blog/what-is-a-merge-queue/" rel="noopener noreferrer"&gt;here’s a good primer&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Most modern merge queues offer some variant of an &lt;a href="https://docs.aviator.co/mergequeue/concepts/parallel-mode" rel="noopener noreferrer"&gt;optimistic parallel mode&lt;/a&gt;. In such a mode, as changes are submitted to the queue, it creates an optimistic batch that contains all queued changes, including the most recently submitted one. This batch is then validated against all the tests to ensure that it does not break the mainline before merging the PRs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Fast forwarding merge
&lt;/h3&gt;

&lt;p&gt;A small variation of this optimistic parallel mode is called a fast-forwarding merge. The main difference in the case of fast-forwarding is that you are fast-forwarding the mainline (e.g. master) to these validated commit SHAs instead of creating new commit SHAs when the PRs are actually merged post-validation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fduziuylvox11qd95tw0q.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fduziuylvox11qd95tw0q.gif" alt="Fast-forward merge example" width="800" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Post-merge is actually post-queue
&lt;/h3&gt;

&lt;p&gt;Using the image above, you can think of the queued PRs as part of a “staging” branch. So as new commits are being added, they get “merged” into this staging branch before they land on mainline. Through that lens, we can run the “post-merge” tests in the queue instead of running them after the changes are merged into mainline.&lt;/p&gt;

&lt;p&gt;From the developer feedback viewpoint, the experience of “handing over” the PR to the queue is the same as merging the PR. But now, your rollbacks can be automated: since the queue validates the changes before fast-forwarding the mainline, any commit that fails validation is force-pushed out of the staging branch.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqd108pvo3rv26aztnskt.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqd108pvo3rv26aztnskt.gif" alt="Fast-forward merge failure" width="800" height="414"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Developer experience
&lt;/h2&gt;

&lt;p&gt;As a developer, the workflow would be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open a PR, run the pre-merge tests, and request a code review&lt;/li&gt;
&lt;li&gt;Once the changes are approved and the tests pass, the developer enqueues the PR instead of merging it&lt;/li&gt;
&lt;li&gt;MergeQueue creates a new branch with a squash commit of the changes and runs the CI on it. Developers can still access this staging branch if they want the latest combined code, knowing that it may not yet have all the tests validated.&lt;/li&gt;
&lt;li&gt;If all the tests pass, the mainline is fast-forwarded to this commit SHA, and the original PR is flagged as merged.&lt;/li&gt;
&lt;li&gt;If any of the required tests fail, the mainline is not impacted; the developer is notified about the failed tests, and the PR remains open for them to fix the issue and submit again.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Behind the scenes
&lt;/h2&gt;

&lt;p&gt;The queue runs all the post-merge tests, which may be slow but usually pass. The mainline is always green because the changes are completely validated before they reach mainline. You would want to configure &lt;a href="https://docs.aviator.co/mergequeue/how-to-guides/customize-required-checks#override-checks-for-parallel-mode" rel="noopener noreferrer"&gt;separate test execution and validation&lt;/a&gt; for the PRs created by developers and the branches created by the queue. Aviator MergeQueue provides a &lt;a href="https://docs.aviator.co/mergequeue/concepts/optimizing-ci-execution" rel="noopener noreferrer"&gt;simple branching structure&lt;/a&gt; to split the test execution in all the common CI platforms for efficient CI usage.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In one of our older posts, we talked about death by a thousand papercuts caused by a &lt;a href="https://www.aviator.co/blog/engineering-efficiency-calculator/" rel="noopener noreferrer"&gt;broken mainline&lt;/a&gt;, but maintaining a faster feedback cycle is also critical for an improved developer experience.&lt;/p&gt;

&lt;p&gt;Using MergeQueue to split the tests (pre-queue and post-queue) is a great way to balance both sides of the problem while improving developer productivity.&lt;/p&gt;

</description>
      <category>cicd</category>
      <category>automation</category>
      <category>mergequeue</category>
    </item>
    <item>
      <title>SonarQube vs Fortify</title>
      <dc:creator>Brian Neville-O'Neill</dc:creator>
      <pubDate>Wed, 06 Dec 2023 17:31:50 +0000</pubDate>
      <link>https://forem.com/aviator_co/sonarqube-vs-fortify-27fo</link>
      <guid>https://forem.com/aviator_co/sonarqube-vs-fortify-27fo</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.aviator.co%2Fblog%2Fwp-content%2Fuploads%2F2023%2F12%2Fsonarqube-vs-fortify-1024x576.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.aviator.co%2Fblog%2Fwp-content%2Fuploads%2F2023%2F12%2Fsonarqube-vs-fortify-1024x576.jpeg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;SonarSource SonarQube and OpenText Fortify are popular software security and code analysis tools. In this article, we will focus on the following: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SonarQube and Fortify’s features, capabilities, and functionalities&lt;/li&gt;
&lt;li&gt;A comparison between SonarQube and Fortify&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  SonarQube
&lt;/h2&gt;

&lt;p&gt;SonarQube is a platform for continuous code inspection and static code analysis. You can use it early in your software development cycle to identify and address code issues, helping you improve code quality and reduce build failure rates. &lt;/p&gt;

&lt;p&gt;SonarQube has a low barrier to entry thanks to its user-friendly interface, community support, and easy setup. &lt;/p&gt;

&lt;h2&gt;
  
  
  SonarQube features
&lt;/h2&gt;

&lt;p&gt;Let’s take a deep dive into the features of SonarQube:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Code Coverage and Testing:&lt;/strong&gt; SonarQube integrates with many popular testing frameworks and tools to identify which parts of your code haven’t been tested, highlighting areas that need test cases. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Code Quality Analysis:&lt;/strong&gt; SonarQube analyzes code against predefined standards and alerts you when your code doesn’t meet them or violates some of its rules. It checks for code quality issues like code smells, bugs, and vulnerabilities.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Code Complexity Analysis:&lt;/strong&gt; SonarQube analyzes your code and flags the parts that might be hard to maintain or understand. This insight helps you make complex code more readable. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CI/CD Integration and Reporting:&lt;/strong&gt; SonarQube integrates with different Continuous Integration and Continuous Delivery (CI/CD) tools, and you can easily add them to your development pipeline. It provides you with centralized reporting that allows you to make data-driven decisions that can improve your software development process.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
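
&lt;p&gt;As a concrete illustration of that integration, the scanner is typically driven by a small properties file. The values below are placeholders; adjust the project key, source directory, and server URL for your setup.&lt;/p&gt;

```properties
# Hypothetical minimal sonar-project.properties for the SonarQube scanner.
sonar.projectKey=my-project
sonar.projectName=My Project
sonar.sources=src
sonar.host.url=http://localhost:9000
```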

&lt;h2&gt;
  
  
  SonarQube benefits
&lt;/h2&gt;

&lt;p&gt;SonarQube offers several strengths:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Great support for many programming languages&lt;/li&gt;
&lt;li&gt;Interactive community support&lt;/li&gt;
&lt;li&gt;A detailed set of rules for code quality checks and issue detection&lt;/li&gt;
&lt;li&gt;It is user-friendly and easy to set up&lt;/li&gt;
&lt;li&gt;You can integrate it with popular CI/CD tools&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  SonarQube limitations
&lt;/h2&gt;

&lt;p&gt;Despite its benefits, SonarQube has certain limitations you should be aware of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Limited support for some programming languages&lt;/li&gt;
&lt;li&gt;It lacks advanced code security features&lt;/li&gt;
&lt;li&gt;False positives in security vulnerability detection&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Fortify
&lt;/h2&gt;

&lt;p&gt;Fortify helps you identify and remediate security vulnerabilities in your software development process. It offers a comprehensive approach by integrating software composition analysis (SCA), dynamic application security testing (DAST), and static application security testing (SAST). &lt;/p&gt;

&lt;p&gt;Using these features, you can detect vulnerabilities early and fix them before deploying your application. It supports programming languages including Apex, Java, and others.&lt;/p&gt;

&lt;h2&gt;
  
  
  Fortify features
&lt;/h2&gt;

&lt;p&gt;Let’s dive into the features of Fortify:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Advanced Security Testing:&lt;/strong&gt; Fortify provides advanced security testing that helps you better understand issues and potential threats and address critical bottlenecks. It can pick up problems you might miss with other tools. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Static Code Analysis:&lt;/strong&gt; Fortify analyzes code structure and logic to identify coding flaws in your source code. It checks your code against predefined rules and notifies you of issues, allowing you to fix them before deployment. In addition, Fortify lets you define your own rules and policies based on your software development requirements. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Integration with Build Systems:&lt;/strong&gt; Fortify integrates with build systems and CI/CD pipelines, allowing you to incorporate security testing into existing workflows and make it an essential part of your software development process. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Fortify benefits
&lt;/h2&gt;

&lt;p&gt;There are several benefits of Fortify, and they are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It allows customizable rules and standards for static code analysis&lt;/li&gt;
&lt;li&gt;It has comprehensive security code testing capabilities&lt;/li&gt;
&lt;li&gt;It uses advanced vulnerability testing techniques and methods&lt;/li&gt;
&lt;li&gt;Easy integration with development environments and CI/CD tools&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Fortify limitations
&lt;/h2&gt;

&lt;p&gt;Here are several limitations of Fortify: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It takes a lot of work to set up and has a steep learning curve.&lt;/li&gt;
&lt;li&gt;Compared to SonarQube, it supports fewer languages. &lt;/li&gt;
&lt;li&gt;It is expensive for enterprise-level usage. &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Comparison: SonarQube vs Fortify
&lt;/h2&gt;

&lt;p&gt;There are some differences between the two tools, and you must know their strengths and weaknesses to make a well-informed decision. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;SonarQube beats Fortify on code quality analysis, where it has the best-suited features. When you use SonarQube in your development builds, you get code coverage measurement, predefined rules-based analysis, complexity analysis, and code duplication detection. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fortify beats SonarQube on security vulnerabilities because it is specifically designed to deal with security issues in your code. It offers in-depth reporting, customizable rules, and data flow analysis. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In terms of integration with CI/CD tools and development workflows, SonarQube and Fortify both offer a seamless experience. They provide detailed reporting on code and security vulnerabilities to aid your development process. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Regarding operating costs, SonarQube is less expensive than Fortify for enterprise purposes. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Your choice of tool for your development process should depend on your project's needs, requirements, and available budget. In this article, we looked at the features, benefits, and limitations of both tools. By comparing them, you can decide which one best meets your needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://www.aviator.co/" rel="noopener noreferrer"&gt;Aviator&lt;/a&gt;: Automate your cumbersome processes
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://aviator.co/" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.aviator.co%2Fwp-content%2Fuploads%2F2022%2F08%2Fblog-cta-1024x727.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Aviator automates tedious developer workflows by managing git Pull Requests (PRs) and continuous integration (CI) test runs to help your team avoid broken builds, streamline cumbersome merge processes, manage cross-PR dependencies, and handle flaky tests, all while maintaining security compliance.&lt;/p&gt;

&lt;p&gt;There are 4 key components to Aviator:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;MergeQueue&lt;/strong&gt;  – an automated queue that manages the merging workflow for your GitHub repository to help protect important branches from broken builds. The Aviator bot uses GitHub Labels to identify Pull Requests (PRs) that are ready to be merged, validates CI checks, processes semantic conflicts, and merges the PRs automatically.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ChangeSets&lt;/strong&gt;  – workflows to synchronize validating and merging multiple PRs within the same repository or multiple repositories. Useful when your team often sees groups of related PRs that need to be merged together, or otherwise treated as a single broader unit of change.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;TestDeck&lt;/strong&gt;  – a tool to automatically detect, take action on, and process results from flaky tests in your CI infrastructure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stacked PRs CLI&lt;/strong&gt;  – a command line tool that helps developers manage cross-PR dependencies. This tool also automates syncing and merging of stacked PRs. Useful when your team wants to promote a culture of smaller, incremental PRs instead of large changes, or when your workflows involve keeping multiple, dependent PRs in sync.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="http://aviator.co/" rel="noopener noreferrer"&gt;Try it for free.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://www.aviator.co/blog/sonarqube-vs-fortify/" rel="noopener noreferrer"&gt;SonarQube vs Fortify&lt;/a&gt; first appeared on &lt;a href="https://www.aviator.co/blog" rel="noopener noreferrer"&gt;Aviator Blog&lt;/a&gt;.&lt;/p&gt;


</description>
      <category>codeanalysis</category>
      <category>fortify</category>
      <category>sonarqube</category>
    </item>
    <item>
      <title>What is a monorepo and why use one?</title>
      <dc:creator>Brian Neville-O'Neill</dc:creator>
      <pubDate>Wed, 29 Nov 2023 18:39:26 +0000</pubDate>
      <link>https://forem.com/aviator_co/what-is-a-monorepo-and-why-use-one-dec</link>
      <guid>https://forem.com/aviator_co/what-is-a-monorepo-and-why-use-one-dec</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.aviator.co%2Fblog%2Fwp-content%2Fuploads%2F2023%2F11%2Fmonorepo-1024x574.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.aviator.co%2Fblog%2Fwp-content%2Fuploads%2F2023%2F11%2Fmonorepo-1024x574.jpg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Managing a sprawling codebase across multiple repositories can be a logistical nightmare. Developers often find themselves juggling various versions, wrestling with incompatible dependencies, and navigating a maze of pull requests and merges.&lt;/p&gt;

&lt;p&gt;This chaos not only hampers productivity but also increases the risk of errors and inconsistencies. Are you tired of this disarray and looking for a streamlined way to manage your projects?&lt;/p&gt;

&lt;p&gt;The answer lies in adopting a monorepo (aka a monolithic repository). One of the most compelling benefits of a monorepo is its ability to simplify version control.&lt;/p&gt;

&lt;p&gt;In a traditional multirepo setup, each project or component has its own repository, often leading to versioning conflicts and making it difficult to keep track of changes across projects. With a monorepo, all your code lives in one place, making it easier to manage versions and maintain a coherent history.&lt;/p&gt;

&lt;p&gt;This centralized approach ensures that everyone on the team is working with the same codebase, reducing the likelihood of versioning issues and making rollbacks more straightforward.&lt;/p&gt;

&lt;p&gt;In this comprehensive guide, you’ll gain insights into what a monorepo is and how it differs from traditional multirepo strategies. You’ll also learn about the advantages of using a monorepo, particularly for larger teams dealing with complex projects.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a monorepo?
&lt;/h2&gt;

&lt;p&gt;A monorepo is a software development strategy where the code for multiple projects is stored in a single version control system (VCS) repository:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FwDFlFLr.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FwDFlFLr.jpeg" title="Monorepo" alt="Monorepo courtesy of Nuno Bispo"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This differs from the more traditional approach where each project or module has its own separate repository (aka a multirepo):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FhFUnoVF.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FhFUnoVF.jpeg" title="Monorepo" alt="Polyrepo courtesy of Nuno Bispo"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The projects within a monorepo can be interconnected libraries, services, applications, or even documentation.&lt;/p&gt;

&lt;p&gt;The central idea of a monorepo is to consolidate the codebase, ensuring more streamlined version control, code reuse, and improved collaboration. For larger teams, this means better code visibility, simplified dependency management, and the possibility of atomic changes across multiple projects.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why you should use monorepos
&lt;/h2&gt;

&lt;p&gt;One of the main advantages of using a monorepo is unified versioning. In a traditional multirepo setup, each project has its own version history, making it challenging to understand how changes in one project affect others. With a monorepo, all projects share a single version history, making it easier to understand their interdependencies.&lt;/p&gt;

&lt;p&gt;For example, if Project A depends on a feature in Project B, both can be updated simultaneously in a single commit, making it easier to track changes and dependencies.&lt;/p&gt;
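&lt;p&gt;As a rough sketch of what that looks like in practice (the directory and file names here are hypothetical), a single atomic commit can stage and record changes from both projects at once:&lt;/p&gt;

```shell
# Sketch: one atomic commit spanning two projects in a monorepo.
# "project-a" and "project-b" are illustrative names.
cd "$(mktemp -d)"
git init -q -b main .
git config user.email dev@example.com
git config user.name Dev
mkdir -p project-a project-b
echo "new feature" > project-b/feature.txt
echo "uses the new feature" > project-a/consumer.txt
# Stage changes from both projects and commit them together.
git add project-a project-b
git commit -q -m "Add feature to project-b and consume it in project-a"
# The single commit records both changes side by side.
git show --name-only --pretty=format:%s HEAD
```

&lt;p&gt;Because both changes land in one commit, reverting the feature also reverts its consumer; there is no window where project-a references a feature that project-b doesn’t have.&lt;/p&gt;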

&lt;p&gt;Following are a few more advantages of using a monorepo:&lt;/p&gt;

&lt;h3&gt;
  
  
  Reusable code across projects
&lt;/h3&gt;

&lt;p&gt;While it’s true that package managers can help sync dependencies across multiple repositories, having all code in a single repository makes it even easier to share and reuse code. There’s no need to publish internal packages just to share common utilities or components.&lt;/p&gt;

&lt;p&gt;This is particularly beneficial for large teams where multiple projects often have overlapping requirements. Code reusability in a monorepo ensures that developers can easily leverage existing code, reducing duplication and accelerating development cycles.&lt;/p&gt;

&lt;h3&gt;
  
  
  Easier refactoring ensures consistency
&lt;/h3&gt;

&lt;p&gt;In a monorepo, refactoring becomes a less daunting task. Changes can be made once and propagated across all dependent projects in a single commit.&lt;/p&gt;

&lt;p&gt;This ensures that improvements or fixes are consistently applied, reducing the risk of one project lagging behind in terms of code quality or features.&lt;/p&gt;

&lt;h3&gt;
  
  
  Enhanced collaboration through visibility
&lt;/h3&gt;

&lt;p&gt;Monorepos offer improved visibility, allowing teams to better communicate and collaborate. In a large team, this is especially beneficial. Developers can see the entire codebase, understand the context of their work better, and make cross-project changes effortlessly.&lt;/p&gt;

&lt;p&gt;This holistic view eliminates the need for special permissions to access different repositories, making it easier for team members to assist each other and encourage code reuse.&lt;/p&gt;

&lt;h3&gt;
  
  
  Streamlined dependency management
&lt;/h3&gt;

&lt;p&gt;Managing dependencies in a large team can be cumbersome with multiple repositories. A monorepo ensures that there’s a single version of each dependency, reducing conflicts and making updates more predictable.&lt;/p&gt;

&lt;p&gt;This centralized approach to dependency management eliminates the “it works on my machine” type of problem, as every team member works with the same set of standardized tools and configurations.&lt;/p&gt;

&lt;h3&gt;
  
  
  Atomic changes for better version control
&lt;/h3&gt;

&lt;p&gt;In large teams, coordinating releases and updates can be a complex task. Monorepos enable atomic changes, allowing related modifications across multiple projects to be committed at once. This ensures that features or fixes affecting multiple projects are released cohesively, making version control more straightforward and reliable.&lt;/p&gt;

&lt;h3&gt;
  
  
  Optimized CI/CD pipelines
&lt;/h3&gt;

&lt;p&gt;One of the benefits of monorepos is that continuous integration and continuous deployment (CI/CD) pipelines are more streamlined. There’s no need to sync multiple repositories or ensure cross-repo compatibility.&lt;/p&gt;

&lt;p&gt;The unified nature of a monorepo allows build and test tools to be standardized, ensuring that everyone is testing and deploying based on the same criteria.&lt;/p&gt;

&lt;p&gt;This is particularly advantageous for large teams, where maintaining consistency in CI/CD practices is crucial for efficient and reliable software delivery.&lt;/p&gt;

&lt;p&gt;By understanding these benefits in the context of large teams, it becomes clear why monorepos are becoming increasingly popular. They offer a unified, streamlined, and efficient approach to software development that is especially advantageous in complex, multiproject environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Monorepo challenges and tools that can help
&lt;/h2&gt;

&lt;p&gt;Monorepos have surged in popularity, especially among tech giants, due to their myriad advantages. But they’re not a one-size-fits-all solution and come with their own set of challenges. Let’s explore some of these challenges and the tools that can help.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scaling issues
&lt;/h3&gt;

&lt;p&gt;As the codebase within a monorepo grows, so does the build time. Every time a change is made, the CI system might try to rebuild and retest the entire codebase, making the process slow and cumbersome.&lt;/p&gt;

&lt;p&gt;To help with these scaling issues, build tools like &lt;a href="https://bazel.build/" rel="noopener noreferrer"&gt;Bazel&lt;/a&gt;, &lt;a href="https://www.pantsbuild.org/" rel="noopener noreferrer"&gt;Pants&lt;/a&gt;, and &lt;a href="https://buck2.build/" rel="noopener noreferrer"&gt;Buck2&lt;/a&gt; are specifically designed to optimize the build process through a technique known as incremental builds. Incremental builds minimize the strain on system resources, allowing for more efficient use of hardware, whether you’re working on a local machine or in a cloud-based development environment.&lt;/p&gt;

&lt;p&gt;Unlike traditional build systems that recompile the entire codebase every time a change is made, these tools are smart enough to identify which parts of the codebase are affected by recent changes.&lt;/p&gt;

&lt;p&gt;These tools are built to seamlessly integrate into your existing development workflow. Once configured, they can automatically detect changes in the codebase and trigger the appropriate incremental builds. This automation is particularly beneficial in a CI/CD environment, where rapid and frequent builds are the norm.&lt;/p&gt;

&lt;p&gt;While these tools offer powerful capabilities, they do come with an initial learning curve. Each tool has its own set of configurations, syntax, and best practices that you need to familiarize yourself with. However, the investment in learning is often justified by the significant gains in build speed and efficiency.&lt;/p&gt;

&lt;p&gt;Another advantage of using these specialized build tools is their flexibility. They allow for a high degree of customization, enabling you to tailor the build process to meet the specific needs of your project or team. This is especially useful in large teams or complex projects where generic build configurations may not be sufficient.&lt;/p&gt;
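&lt;p&gt;To make incremental builds concrete, here is a minimal sketch of a Bazel &lt;code&gt;BUILD&lt;/code&gt; file for one package in a monorepo (target and path names are purely illustrative). Because each target declares its dependencies explicitly, a change to &lt;code&gt;auth.py&lt;/code&gt; triggers rebuilds and retests only for targets that depend on it:&lt;/p&gt;

```python
# Hypothetical BUILD file (Bazel's Starlark syntax); names are
# illustrative, not from any real project.
py_library(
    name = "auth",
    srcs = ["auth.py"],
    deps = ["//common/crypto:crypto"],  # dependency on another package
)

py_test(
    name = "auth_test",
    srcs = ["auth_test.py"],
    deps = [":auth"],  # only this target reruns when auth.py changes
)
```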

&lt;h3&gt;
  
  
  High complexity
&lt;/h3&gt;

&lt;p&gt;For newcomers or even seasoned team members, navigating a huge codebase can be daunting. Understanding the interdependencies, finding the right modules, or even simply knowing where to start can be overwhelming.&lt;/p&gt;

&lt;p&gt;Code navigation tools such as &lt;a href="https://sourcegraph.com/search" rel="noopener noreferrer"&gt;Sourcegraph&lt;/a&gt; and integrated features within platforms like &lt;a href="https://github.com/" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; serve as invaluable aids for developers navigating extensive codebases. These tools go beyond basic text search to offer a range of advanced functionalities designed to make code exploration more efficient and insightful.&lt;/p&gt;

&lt;p&gt;One of the primary features of these tools is advanced code search, which allows developers to perform complex queries to find specific code snippets, functions, or even documentation within a large codebase. This is particularly useful when you’re trying to understand how a particular piece of code interacts with other components or when you’re debugging.&lt;/p&gt;

&lt;p&gt;Another powerful feature is cross-referencing, which enables developers to easily find where a particular function or variable is used across different files or projects. This is incredibly helpful for understanding the impact of potential changes or for tracking down the root cause of a bug. It eliminates the need to manually search through multiple files, saving both time and effort.&lt;/p&gt;

&lt;p&gt;These tools also offer intelligent code mapping, which provides a visual representation of how different parts of the code are interconnected. This can be especially useful for new team members who are trying to get a grasp of a complex project or for any developer who wants to understand the architecture and dependencies within the codebase.&lt;/p&gt;

&lt;h3&gt;
  
  
  Potential for conflicts
&lt;/h3&gt;

&lt;p&gt;With many developers working simultaneously on the same repository, the chances of conflicting changes or merge conflicts increase. This can hamper the development speed and lead to errors if not resolved correctly.&lt;/p&gt;

&lt;p&gt;Version control systems (VCSs) like &lt;a href="https://git-scm.com/" rel="noopener noreferrer"&gt;Git&lt;/a&gt; offer robust mechanisms for handling merge conflicts. Features like pull requests on platforms such as GitHub or &lt;a href="https://bitbucket.org/" rel="noopener noreferrer"&gt;Bitbucket&lt;/a&gt; allow for code review, helping spot and resolve conflicts before they’re merged into the main branch.&lt;/p&gt;

&lt;p&gt;Additionally, automated testing tools like &lt;a href="https://www.jenkins.io/" rel="noopener noreferrer"&gt;Jenkins&lt;/a&gt;, &lt;a href="https://www.travis-ci.com/" rel="noopener noreferrer"&gt;Travis CI&lt;/a&gt;, or &lt;a href="https://circleci.com/" rel="noopener noreferrer"&gt;CircleCI&lt;/a&gt; can automatically run tests on branches before they’re merged. This ensures that any breaking changes or conflicts get flagged early.&lt;/p&gt;

&lt;p&gt;As you can see, while monorepos have their disadvantages, there’s a range of tools designed to mitigate these challenges.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building a monorepo culture
&lt;/h2&gt;

&lt;p&gt;The decision to use a monorepo goes beyond just tools and technical considerations; it requires a cultural shift in how developers work and collaborate. This culture is foundational to effectively managing and scaling a monorepo environment, ensuring that the benefits outweigh the challenges.&lt;/p&gt;

&lt;p&gt;Take a look at a few different aspects of building a monorepo culture.&lt;/p&gt;

&lt;h3&gt;
  
  
  Shared responsibility
&lt;/h3&gt;

&lt;p&gt;In a monorepo setting, boundaries between projects or components become blurred. Instead of viewing projects as isolated entities, team members should see the entire repository as their domain. That’s why it’s important to encourage collaboration across teams. Cross-team code reviews, pair programming, and team rotations can break silos and foster a holistic view of the codebase.&lt;/p&gt;

&lt;p&gt;Additionally, you should regularly organize internal workshops, tech talks, or code walkthroughs. This can help team members familiarize themselves with different parts of the codebase and understand its intricacies.&lt;/p&gt;

&lt;p&gt;For instance, Google fosters an environment in which &lt;a href="https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/41469.pdf" rel="noopener noreferrer"&gt;developers have the freedom to access and contribute&lt;/a&gt; to any section of the codebase. This approach to code ownership has led to standardized coding practices, enhanced collaboration among team members, and a simplified process of reusing code.&lt;/p&gt;

&lt;h3&gt;
  
  
  Early merging to catch integration issues
&lt;/h3&gt;

&lt;p&gt;Consistently merging code changes is a proactive approach to software development that helps catch integration issues at an early stage. By integrating changes frequently, you can identify conflicts or bugs sooner rather than later, making them easier to resolve.&lt;/p&gt;

&lt;p&gt;This practice minimizes the risk of encountering larger, more complicated issues in the future, which could require significant time and effort to fix. For example, if two developers are working on features that affect the same piece of code, early merging will reveal any incompatibilities between their changes, allowing for quicker adjustments.&lt;/p&gt;

&lt;p&gt;To manage these merges in a more organized fashion, implementing branching strategies like feature branching or trunk-based development is highly recommended.&lt;/p&gt;

&lt;p&gt;In feature branching, each new feature or bug fix is developed in its own branch. This allows developers to work on different features simultaneously without affecting the main codebase. Once the feature is complete and tested, it can be merged back into the main branch.&lt;/p&gt;

&lt;p&gt;Feature branching is particularly useful for teams that have multiple developers working on different aspects of a project, as it allows for parallel development without the risk of one feature negatively impacting another.&lt;/p&gt;
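&lt;p&gt;The feature-branch flow described above can be sketched in a few Git commands (the branch and file names are hypothetical):&lt;/p&gt;

```shell
# Sketch of a feature-branch workflow; names are illustrative.
cd "$(mktemp -d)"
git init -q -b main .
git config user.email dev@example.com
git config user.name Dev
git commit -q --allow-empty -m "initial commit"
# Develop the feature on its own branch.
git switch -q -c feature/search
echo "search implementation" > search.txt
git add search.txt
git commit -q -m "Add search feature"
# Once complete and tested, merge it back into main.
git switch -q main
git merge -q --no-ff -m "Merge feature/search" feature/search
git log --oneline
```

&lt;p&gt;The &lt;code&gt;--no-ff&lt;/code&gt; flag keeps an explicit merge commit, so the feature’s history remains visible as a unit in the main branch.&lt;/p&gt;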

&lt;p&gt;In comparison, trunk-based development encourages developers to merge their changes directly into the trunk or main codebase as quickly as possible, often multiple times a day. This approach is beneficial for catching integration issues early and ensures that the codebase remains in a consistently deployable state. It’s especially effective for large teams where rapid integration is crucial for maintaining a smooth development workflow.&lt;/p&gt;

&lt;p&gt;Take Facebook’s example, where the codebase is designed to empower engineers to “&lt;a href="https://en.wikipedia.org/wiki/Move_fast_and_break_things" rel="noopener noreferrer"&gt;move fast and break things&lt;/a&gt;,” signifying a culture that values swift innovation along with ongoing refinement and iteration.&lt;/p&gt;

&lt;h3&gt;
  
  
  Thorough documentation
&lt;/h3&gt;

&lt;p&gt;A monorepo’s vastness makes it challenging to navigate and understand. Comprehensive documentation acts as a map, guiding developers through the code.&lt;/p&gt;

&lt;p&gt;Make sure you establish clear standards for documenting code. This might include things like comments, READMEs, and architecture diagrams.&lt;/p&gt;

&lt;p&gt;Additionally, use tools like &lt;a href="https://www.doxygen.nl/" rel="noopener noreferrer"&gt;Doxygen&lt;/a&gt;, &lt;a href="https://www.oracle.com/technical-resources/articles/java/javadoc-tool.html" rel="noopener noreferrer"&gt;Javadoc&lt;/a&gt;, or &lt;a href="https://docs.readthedocs.io/en/stable/intro/getting-started-with-sphinx.html" rel="noopener noreferrer"&gt;Sphinx&lt;/a&gt; to automatically generate documentation from source code comments.&lt;/p&gt;

&lt;h3&gt;
  
  
  Continual refinement for a healthy codebase
&lt;/h3&gt;

&lt;p&gt;As your codebase grows and evolves, it’s essential to periodically revisit and fine-tune existing code. This practice ensures that your code stays clean, efficient, and in line with current best practices. For instance, an algorithm that was efficient a year ago may now have a more optimized version, or a library you’re using might have received updates that you can take advantage of.&lt;/p&gt;

&lt;p&gt;To systematically address this, consider dedicating specific sprints or time periods exclusively to code refactoring and reducing technical debt. For example, you could allocate the last week of every development cycle to revisit sections of the code that have been flagged for optimization or refactoring. This focused effort ensures that your codebase doesn’t accumulate quick fixes or workarounds that can make it harder to maintain and scale over time.&lt;/p&gt;

&lt;p&gt;In addition, encourage a culture of detailed code reviews that go beyond just assessing functionality. These reviews should also scrutinize the quality of the code, examining factors like readability, efficiency, and adherence to coding standards. Peer feedback during these reviews can be invaluable for identifying areas that may require refactoring. For example, a team member might notice that a particular function is overly complex and suggest breaking it down into smaller, more manageable functions, thereby improving both readability and maintainability.&lt;/p&gt;

&lt;p&gt;By continually refining your code, dedicating time to tackle technical debt, and fostering a culture of thorough code reviews, you can maintain a high-quality, efficient codebase that is easier to work with and less prone to issues in the long run.&lt;/p&gt;

&lt;h2&gt;
  
  
  Monorepo culture at tech giants
&lt;/h2&gt;

&lt;p&gt;Monorepo culture has been adopted by many tech giants and renowned companies due to the myriad advantages it offers. Take a quick look at how Google, Facebook, and Microsoft have adopted a monorepo culture:&lt;/p&gt;

&lt;h3&gt;
  
  
  Google
&lt;/h3&gt;

&lt;p&gt;Google is often credited with popularizing the monorepo approach through its massive monolithic codebase known as Piper, which contains billions of lines of code and thousands of projects.&lt;/p&gt;

&lt;p&gt;At Google, a culture of shared ownership encourages developers to access and contribute to any part of the codebase. This collaborative approach has led to consistent coding standards, enhanced collaboration, and easier code reuse.&lt;/p&gt;

&lt;p&gt;In conjunction with this, Google created Bazel, a build tool designed to work with large codebases like theirs. Bazel supports incremental builds, ensuring only affected components are rebuilt, significantly speeding up the build process.&lt;/p&gt;

&lt;h3&gt;
  
  
  Facebook
&lt;/h3&gt;

&lt;p&gt;Facebook also employs a monorepo for its vast collection of projects, including the main Facebook app, Instagram, and WhatsApp.&lt;/p&gt;

&lt;p&gt;Facebook’s codebase encourages engineers to “move fast and break things,” meaning they actively engage in rapid innovation while also continuously refining and iterating.&lt;/p&gt;

&lt;p&gt;In conjunction, Facebook uses Buck, a build system tailored for their monorepo. It ensures efficient and reproducible builds, which is vital given the scale and pace of their development.&lt;/p&gt;

&lt;h3&gt;
  
  
  Microsoft
&lt;/h3&gt;

&lt;p&gt;Microsoft famously transitioned the Windows codebase to a monorepo using Git, creating the largest Git repo on the planet. With the move, Microsoft aimed to increase developer productivity, improve code sharing, and streamline the engineering system.&lt;/p&gt;

&lt;p&gt;To manage the massive repository, Microsoft developed the &lt;a href="https://github.com/microsoft/VFSForGit" rel="noopener noreferrer"&gt;Virtual File System for Git (VFS for Git)&lt;/a&gt;. It allows the Git client to operate at a scale previously thought impossible by virtualizing the filesystem beneath the repo and making it appear as though all the files are present when, in reality, they are not.&lt;/p&gt;

&lt;p&gt;These companies not only showcase the technical adaptability of monorepos but also emphasize the cultural shift essential for such a model’s success.&lt;/p&gt;

&lt;h2&gt;
  
  
  The benefits of monorepos
&lt;/h2&gt;

&lt;p&gt;Deciding between monorepos and multirepos isn’t solely a technical decision—it encapsulates a team’s collaboration dynamics, accountability distribution, and holistic view toward software creation. When complemented with the right tools and a strong culture emphasizing shared ownership and ongoing refinement, monorepos can create a vibrant, streamlined, and unified framework for software initiatives, particularly for larger teams.&lt;/p&gt;

&lt;p&gt;Beyond technical merits, monorepos foster an enhanced collaborative environment. They dissolve barriers between developers, promoting shared responsibility, comprehensive code reviews, and a unified development environment.&lt;/p&gt;

&lt;p&gt;Together, these features make monorepos a compelling choice for teams seeking both technical efficiency and collaborative synergy.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://www.aviator.co/" rel="noopener noreferrer"&gt;Aviator&lt;/a&gt;: Automate your cumbersome processes
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://aviator.co/" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.aviator.co%2Fwp-content%2Fuploads%2F2022%2F08%2Fblog-cta-1024x727.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Aviator automates tedious developer workflows by managing Git pull requests (PRs) and continuous integration (CI) runs to help your team avoid broken builds, streamline cumbersome merge processes, manage cross-PR dependencies, and handle flaky tests, all while maintaining security compliance.&lt;/p&gt;

&lt;p&gt;There are four key components to Aviator:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;MergeQueue&lt;/strong&gt;  – an automated queue that manages the merging workflow for your GitHub repository to help protect important branches from broken builds. The Aviator bot uses GitHub Labels to identify Pull Requests (PRs) that are ready to be merged, validates CI checks, processes semantic conflicts, and merges the PRs automatically.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ChangeSets&lt;/strong&gt;  – workflows to synchronize validating and merging multiple PRs within the same repository or multiple repositories. Useful when your team often sees groups of related PRs that need to be merged together, or otherwise treated as a single broader unit of change.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;TestDeck&lt;/strong&gt;  – a tool to automatically detect, take action on, and process results from flaky tests in your CI infrastructure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stacked PRs CLI&lt;/strong&gt;  – a command line tool that helps developers manage cross-PR dependencies. This tool also automates syncing and merging of stacked PRs. Useful when your team wants to promote a culture of smaller, incremental PRs instead of large changes, or when your workflows involve keeping multiple, dependent PRs in sync.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="http://aviator.co/" rel="noopener noreferrer"&gt;Try it for free.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://www.aviator.co/blog/what-is-a-monorepo-and-why-use-one/" rel="noopener noreferrer"&gt;What is a monorepo and why use one?&lt;/a&gt; first appeared on &lt;a href="https://www.aviator.co/blog" rel="noopener noreferrer"&gt;Aviator Blog&lt;/a&gt;.&lt;/p&gt;


</description>
      <category>monorepo</category>
    </item>
    <item>
      <title>Building a CI/CD pipeline for a Google App Engine site using CircleCI</title>
      <dc:creator>Brian Neville-O'Neill</dc:creator>
      <pubDate>Mon, 27 Nov 2023 18:21:14 +0000</pubDate>
      <link>https://forem.com/aviator_co/building-a-cicd-pipeline-for-a-google-app-engine-site-using-circleci-21c8</link>
      <guid>https://forem.com/aviator_co/building-a-cicd-pipeline-for-a-google-app-engine-site-using-circleci-21c8</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.aviator.co%2Fblog%2Fwp-content%2Fuploads%2F2023%2F11%2FApp-Engine.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.aviator.co%2Fblog%2Fwp-content%2Fuploads%2F2023%2F11%2FApp-Engine.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this article, we will build a CI/CD pipeline for a Google App Engine Site using CircleCI.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Python installed on your system&lt;/li&gt;
&lt;li&gt;Google Cloud CLI installed&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What we are building
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A documentation site connected to GCP (Google Cloud Platform)&lt;/li&gt;
&lt;li&gt;A CircleCI pipeline that automates its build and deployment&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What is CircleCI?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://circleci.com/" rel="noopener noreferrer"&gt;CircleCI&lt;/a&gt; is a popular choice for software engineers, particularly DevOps engineers when working on &lt;a href="https://www.aviator.co/blog/automating-integration-tests/" rel="noopener noreferrer"&gt;automation&lt;/a&gt; and overall CI/CD integrations. The CI/CD platform helps software teams automate the process of building, testing, and deploying code. As a cloud-based platform, CircleCI allows you to seamlessly integrate with any version control system you choose, such as GitHub, Bitbucket, or GitLab. However, we will be working with GitHub on this article.&lt;/p&gt;

&lt;p&gt;One cool thing about CircleCI is that it lets developers define pipelines that automate the process of building, testing, and deploying code. Pipelines are composed of &lt;a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/process/phases?view=azure-devops&amp;amp;tabs=yaml" rel="noopener noreferrer"&gt;jobs&lt;/a&gt;, which are individual steps in the CI/CD process. Jobs can be configured to run on various platforms, including Linux, macOS, and Windows.&lt;/p&gt;
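&lt;p&gt;As a quick illustration of what a pipeline definition looks like, here is a hypothetical minimal &lt;code&gt;.circleci/config.yml&lt;/code&gt; with a single job (the Docker image and commands are placeholders, not from this article’s project):&lt;/p&gt;

```yaml
# Hypothetical minimal CircleCI pipeline with one job.
version: 2.1
jobs:
  build-and-test:
    docker:
      - image: cimg/python:3.11   # executor: a CircleCI Python image
    steps:
      - checkout                  # pull the code from your VCS
      - run: pip install -r requirements.txt
      - run: python -m pytest     # fail the job if tests fail
workflows:
  main:
    jobs:
      - build-and-test
```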

&lt;h2&gt;
  
  
  Why CircleCI?
&lt;/h2&gt;

&lt;p&gt;CircleCI is a friendly tool for teams of all sizes, ranging from small startups to large enterprises, which is why it is so often a top choice for CI/CD integrations. It is a powerful tool that can help teams improve both the quality and speed of their software development process.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Improved code quality:&lt;/strong&gt; CircleCI can help enhance code quality by automating the testing process. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reduced deployment time:&lt;/strong&gt; CircleCI can help reduce the time it takes to deploy code by automating the process of building and deploying. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Increased confidence in releases:&lt;/strong&gt; CircleCI can help increase confidence in releases by ensuring that code is thoroughly tested before deployment. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Improved team communication:&lt;/strong&gt; CircleCI can help to improve team communication by providing a central location for monitoring the progress of builds and tests.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Relevant CircleCI features
&lt;/h2&gt;

&lt;p&gt;Some of the core features of CircleCI include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Parallelism:&lt;/strong&gt; Jobs can be run in parallel to improve the speed of the CI/CD process. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Caching:&lt;/strong&gt; CircleCI can cache build artifacts and test results to improve the speed of subsequent builds. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Notifications:&lt;/strong&gt; CircleCI can notify team members when builds fail or pass. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring:&lt;/strong&gt; CircleCI provides a dashboard that allows teams to monitor the progress of their builds and tests.&lt;/li&gt;
&lt;/ul&gt;
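For instance, parallelism is just a single key on a job in `.circleci/config.yml`. A minimal sketch (the image tag and test command are illustrative assumptions, and real test suites would typically also use the CLI's test-splitting to divide work across containers):

```yaml
jobs:
  test:
    docker:
      - image: cimg/python:3.12   # hypothetical convenience-image tag
    parallelism: 4                # run this job's steps across 4 containers
    steps:
      - checkout
      - run: pytest               # placeholder test command
```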

&lt;h2&gt;
  
  
  Getting started
&lt;/h2&gt;

&lt;p&gt;To get started, we first need to create a free GitHub repo (I assume you already know how to do that). The next step is to clone the empty repo. After this, let’s create and activate a Python virtual environment with Pipenv (install it first with &lt;code&gt;pip install pipenv&lt;/code&gt; if you don’t have it) by running the following command in your terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pipenv shell
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is what it should look like after a successful installation:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_762DC8729D6070F874B9A6C8613F9867EBE8E5E337A9107440BC3B666C3AB306_1700597968958_Screenshot%2B2023-11-21%2B211806.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_762DC8729D6070F874B9A6C8613F9867EBE8E5E337A9107440BC3B666C3AB306_1700597968958_Screenshot%2B2023-11-21%2B211806.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, install Sphinx by running the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install sphinx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://www.sphinx-doc.org/en/master/" rel="noopener noreferrer"&gt;Sphinx&lt;/a&gt; is a popular documentation generator written in Python that is widely used for creating high-quality documentation for Python projects. It is known for its ease of use, comprehensive features, and extensive support for various output formats.&lt;/p&gt;

&lt;p&gt;The next step is to scaffold the project with Sphinx’s quickstart. As shown in the &lt;a href="https://www.sphinx-doc.org/en/master/usage/quickstart.html" rel="noopener noreferrer"&gt;get started&lt;/a&gt; section of Sphinx’s official site, run this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sphinx-quickstart
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you run the command, you get asked a series of questions, exactly the ones in the image below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_762DC8729D6070F874B9A6C8613F9867EBE8E5E337A9107440BC3B666C3AB306_1700599745595_Screenshot%2B2023-11-21%2B214746.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_762DC8729D6070F874B9A6C8613F9867EBE8E5E337A9107440BC3B666C3AB306_1700599745595_Screenshot%2B2023-11-21%2B214746.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Respond to these questions until the whole process is complete.&lt;/p&gt;

&lt;p&gt;This whole process creates a build and source directory, along with the Makefile that Sphinx uses for building. We can now build the HTML output by running this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;make html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After running the command, your project should look like this in your code editor:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_762DC8729D6070F874B9A6C8613F9867EBE8E5E337A9107440BC3B666C3AB306_1700600224195_Screenshot%2B2023-11-21%2B215541.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_762DC8729D6070F874B9A6C8613F9867EBE8E5E337A9107440BC3B666C3AB306_1700600224195_Screenshot%2B2023-11-21%2B215541.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Within the &lt;code&gt;build&lt;/code&gt; directory, we have our website files, which look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;📦build
 ┣ 📂doctrees
 ┃ ┣ 📜environment.pickle
 ┃ ┗ 📜index.doctree
 ┗ 📂html
 ┃ ┣ 📂_sources
 ┃ ┃ ┗ 📜index.rst.txt
 ┃ ┣ 📂_static
 ┃ ┃ ┣ 📜alabaster.css
 ┃ ┃ ┣ 📜basic.css
 ┃ ┃ ┣ 📜custom.css
 ┃ ┃ ┣ 📜doctools.js
 ┃ ┃ ┣ 📜documentation_options.js
 ┃ ┃ ┣ 📜file.png
 ┃ ┃ ┣ 📜language_data.js
 ┃ ┃ ┣ 📜minus.png
 ┃ ┃ ┣ 📜plus.png
 ┃ ┃ ┣ 📜pygments.css
 ┃ ┃ ┣ 📜searchtools.js
 ┃ ┃ ┗ 📜sphinx_highlight.js
 ┃ ┣ 📜.buildinfo
 ┃ ┣ 📜genindex.html
 ┃ ┣ 📜index.html
 ┃ ┣ 📜objects.inv
 ┃ ┣ 📜search.html
 ┃ ┗ 📜searchindex.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When we serve the project on &lt;code&gt;localhost:8000&lt;/code&gt;, this is what it looks like in the browser:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_762DC8729D6070F874B9A6C8613F9867EBE8E5E337A9107440BC3B666C3AB306_1700602624194_Screenshot%2B2023-11-21%2B223633.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_762DC8729D6070F874B9A6C8613F9867EBE8E5E337A9107440BC3B666C3AB306_1700602624194_Screenshot%2B2023-11-21%2B223633.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Congratulations, we have our documentation site live!&lt;/p&gt;
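The article does not show the serve command itself; one common way to preview the generated `build/html` directory locally is Python's built-in server, e.g. `python -m http.server 8000 --directory build/html` (an assumption, not necessarily what the author used). A self-contained sketch of the same idea, using a stand-in directory so it runs anywhere:

```python
# Sketch: preview a static site directory with Python's built-in http.server.
# We serve a temporary stand-in directory instead of build/html so the
# snippet is self-contained.
import http.server
import socketserver
import tempfile
import threading
import urllib.request
from functools import partial
from pathlib import Path

# Stand-in for the build/html directory that `make html` produced.
site = Path(tempfile.mkdtemp())
(site / "index.html").write_text("<h1>Docs</h1>")

# Bind the request handler to our site directory.
Handler = partial(http.server.SimpleHTTPRequestHandler, directory=str(site))

with socketserver.TCPServer(("127.0.0.1", 0), Handler) as httpd:
    port = httpd.server_address[1]
    threading.Thread(target=httpd.serve_forever, daemon=True).start()
    body = urllib.request.urlopen(f"http://127.0.0.1:{port}/index.html").read()
    httpd.shutdown()

print(body.decode())
```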

&lt;h2&gt;
  
  
  Creating a GCP project
&lt;/h2&gt;

&lt;p&gt;In this section, we will create a brand new &lt;a href="https://cloud.google.com/gcp?utm_source=google&amp;amp;utm_medium=cpc&amp;amp;utm_campaign=emea-ng-all-en-bkws-all-all-trial-e-gcp-1011340&amp;amp;utm_content=text-ad-none-any-DEV_c-CRE_501794636587-ADGP_Hybrid+%7C+BKWS+-+EXA+%7C+Txt+~+GCP+~+General%23v2-KWID_43700061569959221-aud-1641092902540:kwd-26415313501-userloc_1010294&amp;amp;utm_term=KW_google+cloud+platform-NET_g-PLAC_&amp;amp;&amp;amp;gad_source=1&amp;amp;gclid=CjwKCAiAx_GqBhBQEiwAlDNAZmnpZ1smTjDIwh0PFBZ6hT-NNobPRD5uYG-SDpNd84A6eiw8ZiDMeRoCDkAQAvD_BwE&amp;amp;gclsrc=aw.ds&amp;amp;hl=en" rel="noopener noreferrer"&gt;GCP&lt;/a&gt; project so we can configure everything from a clean slate. The next thing is to create our App Engine &lt;code&gt;app.yaml&lt;/code&gt;. Google provides a &lt;a href="https://cloud.google.com/appengine/docs/legacy/standard/python/getting-started/hosting-a-static-website" rel="noopener noreferrer"&gt;walkthrough&lt;/a&gt; on how to host a static website using GAE (note that it targets the legacy &lt;code&gt;python27&lt;/code&gt; runtime). Here, we can copy this YAML file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;runtime: python27
api_version: 1
threadsafe: true

handlers:
- url: /
  static_files: www/index.html
  upload: www/index.html

- url: /(.*)
  static_files: www/\1
  upload: www/(.*)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
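The second handler uses a regex capture: `/(.*)` captures everything after the leading slash, and the `\1` backreference in Google's sample substitutes it into `www/\1`. A quick sketch of that substitution in plain Python (an illustration of the mapping, not App Engine code):

```python
import re

def resolve(url: str) -> str:
    """Mimic the app.yaml static_files mapping (illustrative only)."""
    if url == "/":
        # First handler: the site root maps to the index page.
        return "www/index.html"
    # Second handler: capture the path after "/" and substitute it via \1.
    return re.sub(r"^/(.*)$", r"www/\1", url)

print(resolve("/"))             # www/index.html
print(resolve("/search.html"))  # www/search.html
```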



&lt;p&gt;Create an &lt;code&gt;app.yaml&lt;/code&gt; file in your editor and paste this code. We then have to edit the YAML file to point it to the proper location where the website files live (in our case, &lt;code&gt;build/html&lt;/code&gt; rather than &lt;code&gt;www&lt;/code&gt;). To point your gcloud command-line install to this project, use this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud init --project=&amp;lt;"project ID"&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, you will be prompted to log in to Google Cloud, like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_762DC8729D6070F874B9A6C8613F9867EBE8E5E337A9107440BC3B666C3AB306_1700650957553_Screenshot%2B2023-11-22%2B120153.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_762DC8729D6070F874B9A6C8613F9867EBE8E5E337A9107440BC3B666C3AB306_1700650957553_Screenshot%2B2023-11-22%2B120153.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Follow the link provided to get your authorization code.&lt;/p&gt;

&lt;p&gt;On the GCP dashboard, navigate to “App Engine” and run the command shown there on your terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;glcloud app deploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will get a prompt asking you to choose the region where you would like your app to be deployed:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_762DC8729D6070F874B9A6C8613F9867EBE8E5E337A9107440BC3B666C3AB306_1700654477879_Screenshot%2B2023-11-22%2B130059.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_762DC8729D6070F874B9A6C8613F9867EBE8E5E337A9107440BC3B666C3AB306_1700654477879_Screenshot%2B2023-11-22%2B130059.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose one, and your app will deploy successfully. It’s time to push our code to GitHub! Alternatively, you can clone my GitHub repo &lt;a href="https://github.com/ChisomUma/Google-app-engine-CircleCI" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Link GitHub repo to CircleCI
&lt;/h2&gt;

&lt;p&gt;The first thing you need to do is create a &lt;a href="https://circleci.com/" rel="noopener noreferrer"&gt;CircleCI&lt;/a&gt; account and link your Github to it. The process is pretty straightforward. Our dashboard should look like this after creating and connecting our project to CircleCI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_762DC8729D6070F874B9A6C8613F9867EBE8E5E337A9107440BC3B666C3AB306_1700668023418_Screenshot%2B2023-11-22%2B164548.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_762DC8729D6070F874B9A6C8613F9867EBE8E5E337A9107440BC3B666C3AB306_1700668023418_Screenshot%2B2023-11-22%2B164548.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, in our code editor, we will create a folder named &lt;code&gt;.circleci&lt;/code&gt; with a &lt;code&gt;config.yml&lt;/code&gt; file inside it. The config works like this: first, it defines a workflow; the workflow says that each time we push to the main branch, this set of jobs should run. We will also define that job, which contains the logic for building our documentation site and deploying it to Google App Engine.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;workflows:
  version: 2
  build_and_deploy:
    jobs:
      - build_and_deploy:
          filters:
            branches:
              only:
                - main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;CircleCI will only run this workflow when we push to the main branch. Now, to define the job:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jobs:
  build_and_deploy:
    docker:
      - image: busybox
    steps:
      - run:
          name: hello world
          command: |
            echo "Hello world"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
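Putting the two snippets together, the complete `.circleci/config.yml` might look like this (a sketch with the same placeholder job; a real pipeline would replace the echo step with commands that build the docs and run `gcloud app deploy`):

```yaml
version: 2

jobs:
  build_and_deploy:
    docker:
      - image: busybox
    steps:
      - run:
          name: hello world
          command: |
            echo "Hello world"

workflows:
  version: 2
  build_and_deploy:
    jobs:
      - build_and_deploy:
          filters:
            branches:
              only:
                - main
```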



&lt;p&gt;You can validate the config before pushing. To do this, first install the CircleCI CLI and run this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;circleci config validate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Response:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_762DC8729D6070F874B9A6C8613F9867EBE8E5E337A9107440BC3B666C3AB306_1700671583544_Screenshot%2B2023-11-22%2B174555.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_762DC8729D6070F874B9A6C8613F9867EBE8E5E337A9107440BC3B666C3AB306_1700671583544_Screenshot%2B2023-11-22%2B174555.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the CircleCI dashboard, you can see the tests, processes, and workflows whenever we push to the main branch on GitHub:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_762DC8729D6070F874B9A6C8613F9867EBE8E5E337A9107440BC3B666C3AB306_1700672537776_Screenshot%2B2023-11-22%2B180013.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_762DC8729D6070F874B9A6C8613F9867EBE8E5E337A9107440BC3B666C3AB306_1700672537776_Screenshot%2B2023-11-22%2B180013.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Awesome! We have successfully created a CI/CD pipeline. Now, whenever we make changes to our code base or documentation, we can simply push to main; CircleCI will pick up the change (as demonstrated in the image above), run the job or workflow, and deploy a few minutes later.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This article provided a step-by-step guide on building a CI/CD pipeline for a Google App Engine site using CircleCI. We covered setting up a Python environment, using Sphinx for documentation, and integrating the project with Google Cloud Platform.&lt;/p&gt;

&lt;p&gt;The process demonstrated the benefits of automating deployments via CircleCI, including enhanced code quality, reduced deployment time, and improved team communication. This guide highlights the efficiency and effectiveness of CircleCI in streamlining development processes, making it an invaluable tool for modern software development teams.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://www.aviator.co/" rel="noopener noreferrer"&gt;Aviator&lt;/a&gt;: Automate your cumbersome processes
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://aviator.co/" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.aviator.co%2Fwp-content%2Fuploads%2F2022%2F08%2Fblog-cta-1024x727.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Aviator automates tedious developer workflows by managing git Pull Requests (PRs) and continuous integration test (CI) runs to help your team avoid broken builds, streamline cumbersome merge processes, manage cross-PR dependencies, and handle flaky tests while maintaining their security compliance.&lt;/p&gt;

&lt;p&gt;There are 4 key components to Aviator:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;MergeQueue&lt;/strong&gt;  – an automated queue that manages the merging workflow for your GitHub repository to help protect important branches from broken builds. The Aviator bot uses GitHub Labels to identify Pull Requests (PRs) that are ready to be merged, validates CI checks, processes semantic conflicts, and merges the PRs automatically.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ChangeSets&lt;/strong&gt;  – workflows to synchronize validating and merging multiple PRs within the same repository or multiple repositories. Useful when your team often sees groups of related PRs that need to be merged together, or otherwise treated as a single broader unit of change.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;TestDeck&lt;/strong&gt;  – a tool to automatically detect, take action on, and process results from flaky tests in your CI infrastructure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stacked PRs CLI&lt;/strong&gt;  – a command line tool that helps developers manage cross-PR dependencies. This tool also automates syncing and merging of stacked PRs. Useful when your team wants to promote a culture of smaller, incremental PRs instead of large changes, or when your workflows involve keeping multiple, dependent PRs in sync.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="http://aviator.co/" rel="noopener noreferrer"&gt;Try it for free.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://www.aviator.co/blog/ci-cd-google-app-engine/" rel="noopener noreferrer"&gt;Building a CI/CD pipeline for a Google App Engine site using CircleCI&lt;/a&gt; first appeared on &lt;a href="https://www.aviator.co/blog" rel="noopener noreferrer"&gt;Aviator Blog&lt;/a&gt;.&lt;/p&gt;


</description>
      <category>cicd</category>
    </item>
  </channel>
</rss>
