<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Brian Neville-O'Neill</title>
    <description>The latest articles on Forem by Brian Neville-O'Neill (@bnevilleoneill).</description>
    <link>https://forem.com/bnevilleoneill</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F51699%2F718fcb9d-4517-4006-83bd-3d9de8e13a99.png</url>
      <title>Forem: Brian Neville-O'Neill</title>
      <link>https://forem.com/bnevilleoneill</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/bnevilleoneill"/>
    <language>en</language>
    <item>
      <title>SonarQube vs Fortify</title>
      <dc:creator>Brian Neville-O'Neill</dc:creator>
      <pubDate>Wed, 06 Dec 2023 17:31:50 +0000</pubDate>
      <link>https://forem.com/aviator_co/sonarqube-vs-fortify-27fo</link>
      <guid>https://forem.com/aviator_co/sonarqube-vs-fortify-27fo</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.aviator.co%2Fblog%2Fwp-content%2Fuploads%2F2023%2F12%2Fsonarqube-vs-fortify-1024x576.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.aviator.co%2Fblog%2Fwp-content%2Fuploads%2F2023%2F12%2Fsonarqube-vs-fortify-1024x576.jpeg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;SonarSource SonarQube and OpenText Fortify are popular software security and code analysis tools. In this article, we will focus on the following: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SonarQube and Fortify’s features, capabilities, and functionalities.&lt;/li&gt;
&lt;li&gt;A comparison between SonarQube and Fortify.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  SonarQube
&lt;/h2&gt;

&lt;p&gt;SonarQube is a platform for continuous code inspection and static code analysis. You can use it early in your software development cycle to identify and address code issues, helping you improve code quality and reduce build failure rates. &lt;/p&gt;

&lt;p&gt;SonarQube has a low barrier to entry thanks to its user-friendly interface, community support, and easy setup. &lt;/p&gt;

&lt;h2&gt;
  
  
  SonarQube features
&lt;/h2&gt;

&lt;p&gt;Let’s take a deep dive into the features of SonarQube:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Code Coverage and Testing:&lt;/strong&gt; SonarQube integrates with many popular testing frameworks and tools to identify which parts of your code haven’t been tested, highlighting areas that need test cases. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Code Quality Analysis:&lt;/strong&gt; SonarQube analyzes code against predefined standards and alerts you when your code violates them. It checks for code quality issues such as code smells, bugs, and vulnerabilities.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Code Complexity Analysis:&lt;/strong&gt; SonarQube analyzes your code and flags the parts that might be hard to maintain or understand. This insight helps you make complex code more readable and easier to understand. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CI/CD Integration and Reporting:&lt;/strong&gt; SonarQube integrates with different Continuous Integration and Continuous Delivery (CI/CD) tools, so you can easily add it to your development pipeline. It provides centralized reporting that lets you make data-driven decisions to improve your software development process.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
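&lt;p&gt;As an illustration of how a pipeline might act on that centralized reporting, here is a minimal Python sketch that parses the JSON returned by SonarQube’s &lt;code&gt;/api/qualitygates/project_status&lt;/code&gt; Web API endpoint and reports whether the quality gate passed (the response shape shown is an assumption; verify it against your server’s API documentation):&lt;/p&gt;

```python
import json

def quality_gate_passed(payload: str) -> bool:
    """Return True when a SonarQube quality-gate response reports OK.

    Expects JSON shaped like the /api/qualitygates/project_status
    Web API response (shape assumed here; check your server version).
    """
    return json.loads(payload)["projectStatus"]["status"] == "OK"

# Hypothetical response a CI step might fetch for its project key:
sample = '{"projectStatus": {"status": "OK", "conditions": []}}'
print(quality_gate_passed(sample))  # True
```

&lt;p&gt;A CI job could fetch this endpoint after analysis and fail the build whenever the function returns &lt;code&gt;False&lt;/code&gt;, enforcing the gate inside the pipeline itself.&lt;/p&gt;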

&lt;h2&gt;
  
  
  SonarQube benefits
&lt;/h2&gt;

&lt;p&gt;SonarQube offers several strengths:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Great support for many programming languages&lt;/li&gt;
&lt;li&gt;Interactive community support&lt;/li&gt;
&lt;li&gt;A detailed set of rules for code quality checks and issue detection&lt;/li&gt;
&lt;li&gt;A user-friendly interface and easy setup&lt;/li&gt;
&lt;li&gt;Integration with popular CI/CD tools&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  SonarQube limitations
&lt;/h2&gt;

&lt;p&gt;Despite these benefits, there are certain limitations you should be aware of when you use SonarQube in your development process:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Limited support for some programming languages&lt;/li&gt;
&lt;li&gt;A lack of advanced code security features&lt;/li&gt;
&lt;li&gt;False positives when reporting security vulnerabilities&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Fortify
&lt;/h2&gt;

&lt;p&gt;Fortify helps you identify and remedy security vulnerabilities in your software development process. It provides a comprehensive approach by integrating software composition analysis (SCA), dynamic application security testing (DAST), and static application security testing (SAST). &lt;/p&gt;

&lt;p&gt;Using these features, you can detect vulnerabilities early and fix them before deploying your application. Fortify supports many programming languages, including Apex and Java.&lt;/p&gt;

&lt;h2&gt;
  
  
  Fortify features
&lt;/h2&gt;

&lt;p&gt;Let’s dive into the features of Fortify:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Advanced Security Testing:&lt;/strong&gt; Fortify provides advanced code security testing that helps you better understand issues and potential threats and address critical bottlenecks. Using Fortify means picking up problems you might miss with other tools. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Static Code Analysis:&lt;/strong&gt; Fortify analyzes code structure and logic to identify coding flaws in your source code. It checks your code against predefined rules and notifies you of issues so you can fix them before deploying. In addition, Fortify lets you define your own rules and policies based on your software development requirements. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Integration with Build Systems:&lt;/strong&gt; Fortify integrates with build systems and CI/CD pipelines, letting you make security testing an essential part of your software development process by incorporating it into existing workflows. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
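&lt;p&gt;To make the kind of flaw static analysis catches concrete, here is a small Python sketch (our own illustration, not Fortify output) showing a SQL injection bug of the sort a SAST tool would flag, next to the parameterized fix such a tool would typically recommend:&lt;/p&gt;

```python
import sqlite3

def find_user_unsafe(conn, name):
    # Classic SAST finding: untrusted input concatenated into SQL
    return conn.execute(f"SELECT id FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(conn, name):
    # The usual fix: a parameterized query
    return conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "' OR '1'='1"                 # attacker-controlled input
print(find_user_unsafe(conn, payload))  # leaks every row: [(1,), (2,)]
print(find_user_safe(conn, payload))    # matches nothing: []
```

&lt;p&gt;Static analyzers spot the first pattern by tracing untrusted data flow into a query string, which is exactly the data flow analysis Fortify is known for.&lt;/p&gt;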

&lt;h2&gt;
  
  
  Fortify benefits
&lt;/h2&gt;

&lt;p&gt;Fortify offers several benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It allows customizable rules and standards for static code analysis&lt;/li&gt;
&lt;li&gt;It has comprehensive security code testing capabilities&lt;/li&gt;
&lt;li&gt;It uses advanced vulnerability testing techniques and methods&lt;/li&gt;
&lt;li&gt;Easy integration with development environments and CI/CD tools&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Fortify limitations
&lt;/h2&gt;

&lt;p&gt;Here are several limitations of Fortify: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It takes significant effort to set up and has a steep learning curve.&lt;/li&gt;
&lt;li&gt;It supports fewer programming languages than SonarQube. &lt;/li&gt;
&lt;li&gt;It is expensive for enterprise-level usage. &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Comparison: SonarQube vs Fortify
&lt;/h2&gt;

&lt;p&gt;The two tools differ in important ways, and you should know their strengths and weaknesses to make an informed decision. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;SonarQube beats Fortify on code quality analysis. When you use SonarQube in your builds, you get code coverage measurement, predefined rules-based analysis, complexity analysis, and code duplication detection. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fortify beats SonarQube on security vulnerabilities because it is purpose-built for them. Fortify offers in-depth reporting, customizable rules, and data flow analysis, and it is specifically designed to deal with security issues in your code. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In terms of integration with CI/CD tools and development workflows, both SonarQube and Fortify fit seamlessly into developer pipelines. They provide detailed reporting on code and security issues to aid your development process. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Regarding operating costs, SonarQube is less expensive than Fortify for enterprise purposes. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Your choice of tool should depend on your project’s needs, requirements, and budget. In this article, we looked at the features, benefits, and limitations of both tools. By comparing them, you can decide which one best meets your needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://www.aviator.co/" rel="noopener noreferrer"&gt;Aviator&lt;/a&gt;: Automate your cumbersome processes
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://aviator.co/" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.aviator.co%2Fwp-content%2Fuploads%2F2022%2F08%2Fblog-cta-1024x727.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Aviator automates tedious developer workflows by managing git Pull Requests (PRs) and continuous integration test (CI) runs to help your team avoid broken builds, streamline cumbersome merge processes, manage cross-PR dependencies, and handle flaky tests while maintaining their security compliance.&lt;/p&gt;

&lt;p&gt;There are 4 key components to Aviator:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;MergeQueue&lt;/strong&gt;  – an automated queue that manages the merging workflow for your GitHub repository to help protect important branches from broken builds. The Aviator bot uses GitHub Labels to identify Pull Requests (PRs) that are ready to be merged, validates CI checks, processes semantic conflicts, and merges the PRs automatically.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ChangeSets&lt;/strong&gt;  – workflows to synchronize validating and merging multiple PRs within the same repository or multiple repositories. Useful when your team often sees groups of related PRs that need to be merged together, or otherwise treated as a single broader unit of change.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;TestDeck&lt;/strong&gt;  – a tool to automatically detect, take action on, and process results from flaky tests in your CI infrastructure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stacked PRs CLI&lt;/strong&gt;  – a command line tool that helps developers manage cross-PR dependencies. This tool also automates syncing and merging of stacked PRs. Useful when your team wants to promote a culture of smaller, incremental PRs instead of large changes, or when your workflows involve keeping multiple, dependent PRs in sync.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="http://aviator.co/" rel="noopener noreferrer"&gt;Try it for free.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://www.aviator.co/blog/sonarqube-vs-fortify/" rel="noopener noreferrer"&gt;SonarQube vs Fortify&lt;/a&gt; first appeared on &lt;a href="https://www.aviator.co/blog" rel="noopener noreferrer"&gt;Aviator Blog&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>codeanalysis</category>
      <category>fortify</category>
      <category>sonarqube</category>
    </item>
    <item>
      <title>What is a monorepo and why use one?</title>
      <dc:creator>Brian Neville-O'Neill</dc:creator>
      <pubDate>Wed, 29 Nov 2023 18:39:26 +0000</pubDate>
      <link>https://forem.com/aviator_co/what-is-a-monorepo-and-why-use-one-dec</link>
      <guid>https://forem.com/aviator_co/what-is-a-monorepo-and-why-use-one-dec</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.aviator.co%2Fblog%2Fwp-content%2Fuploads%2F2023%2F11%2Fmonorepo-1024x574.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.aviator.co%2Fblog%2Fwp-content%2Fuploads%2F2023%2F11%2Fmonorepo-1024x574.jpg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Managing a sprawling codebase across multiple repositories can be a logistical nightmare. Developers often find themselves juggling various versions, wrestling with incompatible dependencies, and navigating a maze of pull requests and merges.&lt;/p&gt;

&lt;p&gt;This chaos not only hampers productivity but also increases the risk of errors and inconsistencies. Are you tired of this disarray and looking for a streamlined way to manage your projects?&lt;/p&gt;

&lt;p&gt;The answer lies in adopting a monorepo (aka a monolithic repository). One of the most compelling benefits of a monorepo is its ability to simplify version control.&lt;/p&gt;

&lt;p&gt;In a traditional multirepo setup, each project or component has its own repository, often leading to versioning conflicts and making it difficult to keep track of changes across projects. With a monorepo, all your code lives in one place, making it easier to manage versions and maintain a coherent history.&lt;/p&gt;

&lt;p&gt;This centralized approach ensures that everyone on the team is working with the same codebase, reducing the likelihood of versioning issues and making rollbacks more straightforward.&lt;/p&gt;

&lt;p&gt;In this comprehensive guide, you’ll gain insights into what a monorepo is and how it differs from traditional multirepo strategies. You’ll also learn about the advantages of using a monorepo, particularly for larger teams dealing with complex projects.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a monorepo?
&lt;/h2&gt;

&lt;p&gt;A monorepo is a software development strategy where the code for multiple projects is stored in a single version control system (VCS) repository:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FwDFlFLr.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FwDFlFLr.jpeg" title="Monorepo" alt="Monorepo courtesy of Nuno Bispo"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This differs from the more traditional approach where each project or module has its own separate repository (aka a multirepo):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FhFUnoVF.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FhFUnoVF.jpeg" title="Monorepo" alt="Polyrepo courtesy of Nuno Bispo"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The projects within a monorepo can be interconnected libraries, services, applications, or even documentation.&lt;/p&gt;

&lt;p&gt;The central idea of a monorepo is to consolidate the codebase, ensuring more streamlined version control, code reuse, and improved collaboration. For larger teams, this means better code visibility, simplified dependency management, and the possibility of atomic changes across multiple projects.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why you should use monorepos
&lt;/h2&gt;

&lt;p&gt;One of the main advantages of using a monorepo is unified versioning. In a traditional multirepo setup, each project has its own version history, making it challenging to understand how changes in one project affect others. With a monorepo, all projects share a single version history, making it easier to understand their interdependencies.&lt;/p&gt;

&lt;p&gt;For example, if Project A depends on a feature in Project B, both can be updated simultaneously in a single commit, making it easier to track changes and dependencies.&lt;/p&gt;

&lt;p&gt;Following are a few more advantages of using a monorepo:&lt;/p&gt;

&lt;h3&gt;
  
  
  Reusable code across projects
&lt;/h3&gt;

&lt;p&gt;While it’s true that package managers can help sync dependencies across multiple repositories, having all code in a single repository makes it even easier to share and reuse code. There’s no need to publish internal packages just to share common utilities or components.&lt;/p&gt;

&lt;p&gt;This is particularly beneficial for large teams where multiple projects often have overlapping requirements. Code reusability in a monorepo ensures that developers can easily leverage existing code, reducing duplication and accelerating development cycles.&lt;/p&gt;

&lt;h3&gt;
  
  
  Easier refactoring ensures consistency
&lt;/h3&gt;

&lt;p&gt;In a monorepo, refactoring becomes a less daunting task. Changes can be made once and propagated across all dependent projects in a single commit.&lt;/p&gt;

&lt;p&gt;This ensures that improvements or fixes are consistently applied, reducing the risk of one project lagging behind in terms of code quality or features.&lt;/p&gt;

&lt;h3&gt;
  
  
  Enhanced collaboration through visibility
&lt;/h3&gt;

&lt;p&gt;Monorepos offer improved visibility, allowing teams to better communicate and collaborate. In a large team, this is especially beneficial. Developers can see the entire codebase, understand the context of their work better, and make cross-project changes effortlessly.&lt;/p&gt;

&lt;p&gt;This holistic view eliminates the need for special permissions to access different repositories, making it easier for team members to assist each other and encourage code reuse.&lt;/p&gt;

&lt;h3&gt;
  
  
  Streamlined dependency management
&lt;/h3&gt;

&lt;p&gt;Managing dependencies in a large team can be cumbersome with multiple repositories. A monorepo ensures that there’s a single version of each dependency, reducing conflicts and making updates more predictable.&lt;/p&gt;

&lt;p&gt;This centralized approach to dependency management eliminates the “it works on my machine” type of problem, as every team member works with the same set of standardized tools and configurations.&lt;/p&gt;

&lt;h3&gt;
  
  
  Atomic changes for better version control
&lt;/h3&gt;

&lt;p&gt;In large teams, coordinating releases and updates can be a complex task. Monorepos enable atomic changes, allowing related modifications across multiple projects to be committed at once. This ensures that features or fixes affecting multiple projects are released cohesively, making version control more straightforward and reliable.&lt;/p&gt;

&lt;h3&gt;
  
  
  Optimized CI/CD pipelines
&lt;/h3&gt;

&lt;p&gt;One of the benefits of monorepos is that continuous integration and continuous deployment (CI/CD) pipelines are more streamlined. There’s no need to sync multiple repositories or ensure cross-repo compatibility.&lt;/p&gt;

&lt;p&gt;The unified nature of a monorepo allows build and test tools to be standardized, ensuring that everyone is testing and deploying based on the same criteria.&lt;/p&gt;

&lt;p&gt;This is particularly advantageous for large teams, where maintaining consistency in CI/CD practices is crucial for efficient and reliable software delivery.&lt;/p&gt;

&lt;p&gt;By understanding these benefits in the context of large teams, it becomes clear why monorepos are becoming increasingly popular. They offer a unified, streamlined, and efficient approach to software development that is especially advantageous in complex, multiproject environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Monorepo challenges and tools that can help
&lt;/h2&gt;

&lt;p&gt;Monorepos have surged in popularity, especially among large tech giants, due to their myriad advantages. But they’re not a one-size-fits-all solution and come with their own set of challenges. Explore some of these challenges and learn about tools that can help.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scaling issues
&lt;/h3&gt;

&lt;p&gt;As the codebase within a monorepo grows, so does the build time. Every time a change is made, the CI system might try to rebuild and retest the entire codebase, making the process slow and cumbersome.&lt;/p&gt;

&lt;p&gt;To help with these scaling issues, build tools like &lt;a href="https://bazel.build/" rel="noopener noreferrer"&gt;Bazel&lt;/a&gt;, &lt;a href="https://www.pantsbuild.org/" rel="noopener noreferrer"&gt;Pants&lt;/a&gt;, and &lt;a href="https://buck2.build/" rel="noopener noreferrer"&gt;Buck2&lt;/a&gt; are specifically designed to optimize the build process through a technique known as incremental builds. Incremental builds minimize the strain on system resources, allowing for more efficient use of hardware, whether you’re working on a local machine or in a cloud-based development environment.&lt;/p&gt;

&lt;p&gt;Unlike traditional build systems that recompile the entire codebase every time a change is made, these tools are smart enough to identify which parts of the codebase are affected by recent changes.&lt;/p&gt;

&lt;p&gt;These tools are built to seamlessly integrate into your existing development workflow. Once configured, they can automatically detect changes in the codebase and trigger the appropriate incremental builds. This automation is particularly beneficial in a CI/CD environment, where rapid and frequent builds are the norm.&lt;/p&gt;
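&lt;p&gt;The core idea behind incremental builds can be sketched in a few lines of Python (the target names and dependency graph here are made up for illustration; real tools like Bazel perform this analysis over their own build graphs):&lt;/p&gt;

```python
from collections import defaultdict

# Hypothetical build graph: each target lists what it depends on.
deps = {
    "//app:server": ["//lib:auth", "//lib:db"],
    "//app:cli":    ["//lib:auth"],
    "//lib:auth":   [],
    "//lib:db":     [],
}

def affected(changed: set[str]) -> set[str]:
    """Targets needing a rebuild after `changed` targets were edited."""
    # Invert the graph: who depends on whom?
    rdeps = defaultdict(set)
    for target, ds in deps.items():
        for d in ds:
            rdeps[d].add(target)
    # Walk reverse edges outward from every changed target.
    out, stack = set(changed), list(changed)
    while stack:
        for t in rdeps[stack.pop()]:
            if t not in out:
                out.add(t)
                stack.append(t)
    return out

print(sorted(affected({"//lib:db"})))  # ['//app:server', '//lib:db']
```

&lt;p&gt;Editing &lt;code&gt;//lib:db&lt;/code&gt; triggers a rebuild of only itself and &lt;code&gt;//app:server&lt;/code&gt;; &lt;code&gt;//app:cli&lt;/code&gt; is untouched, which is exactly the saving that makes monorepo-scale builds tractable.&lt;/p&gt;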

&lt;p&gt;While these tools offer powerful capabilities, they do come with an initial learning curve. Each tool has its own set of configurations, syntax, and best practices that you need to familiarize yourself with. However, the investment in learning is often justified by the significant gains in build speed and efficiency.&lt;/p&gt;

&lt;p&gt;Another advantage of using these specialized build tools is their flexibility. They allow for a high degree of customization, enabling you to tailor the build process to meet the specific needs of your project or team. This is especially useful in large teams or complex projects where generic build configurations may not be sufficient.&lt;/p&gt;

&lt;h3&gt;
  
  
  High complexity
&lt;/h3&gt;

&lt;p&gt;For newcomers or even seasoned team members, navigating a huge codebase can be daunting. Understanding the interdependencies, finding the right modules, or even simply knowing where to start can be overwhelming.&lt;/p&gt;

&lt;p&gt;Code navigation tools such as &lt;a href="https://sourcegraph.com/search" rel="noopener noreferrer"&gt;Sourcegraph&lt;/a&gt; and integrated features within platforms like &lt;a href="https://github.com/" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; serve as invaluable aids for developers navigating extensive codebases. These tools go beyond basic text search to offer a range of advanced functionalities designed to make code exploration more efficient and insightful.&lt;/p&gt;

&lt;p&gt;One of the primary features of these tools is advanced code search, which allows developers to perform complex queries to find specific code snippets, functions, or even documentation within a large codebase. This is particularly useful when you’re trying to understand how a particular piece of code interacts with other components or when you’re debugging.&lt;/p&gt;

&lt;p&gt;Another powerful feature is cross-referencing, which enables developers to easily find where a particular function or variable is used across different files or projects. This is incredibly helpful for understanding the impact of potential changes or for tracking down the root cause of a bug. It eliminates the need to manually search through multiple files, saving both time and effort.&lt;/p&gt;

&lt;p&gt;These tools also offer intelligent code mapping, which provides a visual representation of how different parts of the code are interconnected. This can be especially useful for new team members who are trying to get a grasp of a complex project or for any developer who wants to understand the architecture and dependencies within the codebase.&lt;/p&gt;

&lt;h3&gt;
  
  
  Potential for conflicts
&lt;/h3&gt;

&lt;p&gt;With many developers working simultaneously on the same repository, the chances of conflicting changes or merge conflicts increase. This can hamper the development speed and lead to errors if not resolved correctly.&lt;/p&gt;

&lt;p&gt;Version control systems like &lt;a href="https://git-scm.com/" rel="noopener noreferrer"&gt;Git&lt;/a&gt; offer robust mechanisms to handle merge conflicts. Features like pull requests on platforms like GitHub or &lt;a href="https://bitbucket.org/" rel="noopener noreferrer"&gt;Bitbucket&lt;/a&gt; allow for code review, helping teams spot and resolve conflicts before they’re merged into the main branch.&lt;/p&gt;

&lt;p&gt;Additionally, automated testing tools like &lt;a href="https://www.jenkins.io/" rel="noopener noreferrer"&gt;Jenkins&lt;/a&gt;, &lt;a href="https://www.travis-ci.com/" rel="noopener noreferrer"&gt;Travis CI&lt;/a&gt;, or &lt;a href="https://circleci.com/" rel="noopener noreferrer"&gt;CircleCI&lt;/a&gt; can automatically run tests on branches before they’re merged. This ensures that any breaking changes or conflicts get flagged early.&lt;/p&gt;
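&lt;p&gt;A simple early-warning check along these lines compares the files each in-flight branch touches and flags any overlap before merge time. A minimal Python sketch (the file sets are hypothetical; in practice they would come from &lt;code&gt;git diff --name-only&lt;/code&gt; against the main branch):&lt;/p&gt;

```python
# Files touched by two in-flight branches (hypothetical sets).
branch_a = {"lib/auth.py", "app/server.py"}
branch_b = {"lib/auth.py", "docs/README.md"}

# Any file edited on both branches is a merge-conflict risk.
overlap = branch_a & branch_b
if overlap:
    print(f"Potential conflicts in: {sorted(overlap)}")
```

&lt;p&gt;Running a check like this in CI surfaces risky pairs of branches early, so developers can coordinate before the conflict ever reaches the main branch.&lt;/p&gt;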

&lt;p&gt;As you can see, while monorepos have their disadvantages, there’s a range of tools designed to mitigate these challenges.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building a monorepo culture
&lt;/h2&gt;

&lt;p&gt;The decision to use a monorepo goes beyond just tools and technical considerations; it requires a cultural shift in how developers work and collaborate. This culture is foundational to effectively managing and scaling a monorepo environment, ensuring that the benefits outweigh the challenges.&lt;/p&gt;

&lt;p&gt;Take a look at a few different aspects of building a monorepo culture.&lt;/p&gt;

&lt;h3&gt;
  
  
  Shared responsibility
&lt;/h3&gt;

&lt;p&gt;In a monorepo setting, boundaries between projects or components become blurred. Instead of viewing projects as isolated entities, team members should see the entire repository as their domain. That’s why it’s important to encourage collaboration across teams. Cross-team code reviews, pair programming, and team rotations can break silos and foster a holistic view of the codebase.&lt;/p&gt;

&lt;p&gt;Additionally, you should regularly organize internal workshops, tech talks, or code walkthroughs. This can help team members familiarize themselves with different parts of the codebase and understand its intricacies.&lt;/p&gt;

&lt;p&gt;For instance, Google fosters an environment in which &lt;a href="https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/41469.pdf" rel="noopener noreferrer"&gt;developers have the freedom to access and contribute&lt;/a&gt; to any section of the codebase. This approach to code ownership has led to standardized coding practices, enhanced collaboration among team members, and a simplified process of reusing code.&lt;/p&gt;

&lt;h3&gt;
  
  
  Early merging to catch integration issues
&lt;/h3&gt;

&lt;p&gt;Consistently merging code changes is a proactive approach to software development that helps catch integration issues at an early stage. By integrating changes frequently, you can identify conflicts or bugs sooner rather than later, making them easier to resolve.&lt;/p&gt;

&lt;p&gt;This practice minimizes the risk of encountering larger, more complicated issues in the future, which could require significant time and effort to fix. For example, if two developers are working on features that affect the same piece of code, early merging will reveal any incompatibilities between their changes, allowing for quicker adjustments.&lt;/p&gt;

&lt;p&gt;To manage these merges in a more organized fashion, implementing branching strategies like feature branching or trunk-based development is highly recommended.&lt;/p&gt;

&lt;p&gt;In feature branching, each new feature or bug fix is developed in its own branch. This allows developers to work on different features simultaneously without affecting the main codebase. Once the feature is complete and tested, it can be merged back into the main branch.&lt;/p&gt;

&lt;p&gt;Feature branching is particularly useful for teams that have multiple developers working on different aspects of a project, as it allows for parallel development without the risk of one feature negatively impacting another.&lt;/p&gt;

&lt;p&gt;In comparison, trunk-based development encourages developers to merge their changes directly into the trunk or main codebase as quickly as possible, often multiple times a day. This approach is beneficial for catching integration issues early and ensures that the codebase remains in a consistently deployable state. It’s especially effective for large teams where rapid integration is crucial for maintaining a smooth development workflow.&lt;/p&gt;

&lt;p&gt;Take Facebook’s example, where the codebase is designed to empower engineers to “&lt;a href="https://en.wikipedia.org/wiki/Move_fast_and_break_things" rel="noopener noreferrer"&gt;move fast and break things&lt;/a&gt;,” signifying a culture that values swift innovation along with ongoing refinement and iteration.&lt;/p&gt;

&lt;h3&gt;
  
  
  Thorough documentation
&lt;/h3&gt;

&lt;p&gt;A monorepo’s vastness makes it challenging to navigate and understand. Comprehensive documentation acts as a map, guiding developers through the code.&lt;/p&gt;

&lt;p&gt;Make sure you establish clear standards for documenting code. This might include things like comments, READMEs, and architecture diagrams.&lt;/p&gt;

&lt;p&gt;Additionally, use tools like &lt;a href="https://www.doxygen.nl/" rel="noopener noreferrer"&gt;Doxygen&lt;/a&gt;, &lt;a href="https://www.oracle.com/technical-resources/articles/java/javadoc-tool.html" rel="noopener noreferrer"&gt;Javadoc&lt;/a&gt;, or &lt;a href="https://docs.readthedocs.io/en/stable/intro/getting-started-with-sphinx.html" rel="noopener noreferrer"&gt;Sphinx&lt;/a&gt; to automatically generate documentation from source code comments.&lt;/p&gt;
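&lt;p&gt;For example, Sphinx can generate reference pages directly from docstrings. Here is a small, hypothetical Python function documented in the reST field-list style that Sphinx’s autodoc understands:&lt;/p&gt;

```python
def moving_average(values, window):
    """Return the simple moving average of ``values``.

    :param values: sequence of numbers to average
    :param window: size of the sliding window
    :returns: list of averages, one per full window
    """
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

print(moving_average([1, 2, 3, 4], 2))  # [1.5, 2.5, 3.5]
```

&lt;p&gt;Because the documentation lives next to the code, it stays in sync as the monorepo evolves, and generators can rebuild the reference site on every commit.&lt;/p&gt;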

&lt;h3&gt;
  
  
  Continual refinement for a healthy codebase
&lt;/h3&gt;

&lt;p&gt;As your codebase grows and evolves, it’s essential to periodically revisit and fine-tune existing code. This practice ensures that your code stays clean, efficient, and in line with current best practices. For instance, an algorithm that was efficient a year ago may now have a more optimized version, or a library you’re using might have received updates that you can take advantage of.&lt;/p&gt;

&lt;p&gt;To systematically address this, consider dedicating specific sprints or time periods exclusively to code refactoring and reducing technical debt. For example, you could allocate the last week of every development cycle to revisit sections of the code that have been flagged for optimization or refactoring. This focused effort ensures that your codebase doesn’t accumulate quick fixes or workarounds that can make it harder to maintain and scale over time.&lt;/p&gt;

&lt;p&gt;In addition, encourage a culture of detailed code reviews that go beyond just assessing functionality. These reviews should also scrutinize the quality of the code, examining factors like readability, efficiency, and adherence to coding standards. Peer feedback during these reviews can be invaluable for identifying areas that may require refactoring. For example, a team member might notice that a particular function is overly complex and suggest breaking it down into smaller, more manageable functions, thereby improving both readability and maintainability.&lt;/p&gt;

&lt;p&gt;By continually refining your code, dedicating time to tackle technical debt, and fostering a culture of thorough code reviews, you can maintain a high-quality, efficient codebase that is easier to work with and less prone to issues in the long run.&lt;/p&gt;

&lt;h2&gt;
  
  
  Monorepo culture at tech giants
&lt;/h2&gt;

&lt;p&gt;Monorepo culture has been adopted by many tech giants and renowned companies due to the myriad advantages it offers. Take a quick look at how Google, Facebook, and Microsoft have adopted a monorepo culture:&lt;/p&gt;

&lt;h3&gt;
  
  
  Google
&lt;/h3&gt;

&lt;p&gt;Google is often credited with popularizing the monorepo approach through its massive monolithic codebase known as Piper, which contains billions of lines of code and thousands of projects.&lt;/p&gt;

&lt;p&gt;At Google, a culture of shared ownership encourages developers to access and contribute to any part of the codebase. This collaborative approach has led to consistent coding standards, enhanced collaboration, and easier code reuse.&lt;/p&gt;

&lt;p&gt;In conjunction with this, Google created Bazel, a build tool designed to work with large codebases like theirs. Bazel supports incremental builds, ensuring only affected components are rebuilt, significantly speeding up the build process.&lt;/p&gt;
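
&lt;p&gt;To give a flavor of how Bazel knows what to rebuild, dependencies are declared explicitly in &lt;code&gt;BUILD&lt;/code&gt; files; the target and file names below are purely illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# BUILD (illustrative)
cc_library(
    name = "math_utils",
    srcs = ["math_utils.cc"],
    hdrs = ["math_utils.h"],
)

cc_binary(
    name = "app",
    srcs = ["main.cc"],
    deps = [":math_utils"],  # editing math_utils triggers a rebuild of app only
)
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;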

&lt;h3&gt;
  
  
  Facebook
&lt;/h3&gt;

&lt;p&gt;Facebook also employs a monorepo for its vast collection of projects, including the main Facebook app, Instagram, and WhatsApp.&lt;/p&gt;

&lt;p&gt;Facebook’s engineering culture encourages engineers to “move fast and break things”: they innovate rapidly while continuously refining and iterating on the shared codebase.&lt;/p&gt;

&lt;p&gt;In conjunction, Facebook uses Buck, a build system tailored for their monorepo. It ensures efficient and reproducible builds, which is vital given the scale and pace of their development.&lt;/p&gt;

&lt;h3&gt;
  
  
  Microsoft
&lt;/h3&gt;

&lt;p&gt;Microsoft famously transitioned the Windows codebase to a monorepo using Git, creating the largest Git repository on the planet. With the move, Microsoft aimed to increase developer productivity, improve code sharing, and streamline the engineering system.&lt;/p&gt;

&lt;p&gt;To manage the massive repository, Microsoft developed the &lt;a href="https://github.com/microsoft/VFSForGit" rel="noopener noreferrer"&gt;Virtual File System for Git (VFS for Git)&lt;/a&gt;. It allows the Git client to operate at a scale previously thought impossible by virtualizing the filesystem beneath the repo and making it appear as though all the files are present when, in reality, they are not.&lt;/p&gt;

&lt;p&gt;These companies not only showcase the technical adaptability of monorepos but also emphasize the cultural shift essential for such a model’s success.&lt;/p&gt;

&lt;h2&gt;
  
  
  The benefits of monorepos
&lt;/h2&gt;

&lt;p&gt;Deciding between monorepos and multirepos isn’t solely a technical decision—it encapsulates a team’s collaboration dynamics, accountability distribution, and holistic view toward software creation. When complemented with the right tools and a strong culture emphasizing shared ownership and ongoing refinement, monorepos can create a vibrant, streamlined, and unified framework for software initiatives, particularly for larger teams.&lt;/p&gt;

&lt;p&gt;Beyond technical merits, monorepos foster an enhanced collaborative environment. They dissolve barriers between developers, promoting shared responsibility, comprehensive code reviews, and a unified development environment.&lt;/p&gt;

&lt;p&gt;Together, these features make monorepos a compelling choice for teams seeking both technical efficiency and collaborative synergy.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://www.aviator.co/" rel="noopener noreferrer"&gt;Aviator&lt;/a&gt;: Automate your cumbersome processes
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://aviator.co/" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.aviator.co%2Fwp-content%2Fuploads%2F2022%2F08%2Fblog-cta-1024x727.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Aviator automates tedious developer workflows by managing git Pull Requests (PRs) and continuous integration test (CI) runs to help your team avoid broken builds, streamline cumbersome merge processes, manage cross-PR dependencies, and handle flaky tests while maintaining their security compliance.&lt;/p&gt;

&lt;p&gt;There are 4 key components to Aviator:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;MergeQueue&lt;/strong&gt;  – an automated queue that manages the merging workflow for your GitHub repository to help protect important branches from broken builds. The Aviator bot uses GitHub Labels to identify Pull Requests (PRs) that are ready to be merged, validates CI checks, processes semantic conflicts, and merges the PRs automatically.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ChangeSets&lt;/strong&gt;  – workflows to synchronize validating and merging multiple PRs within the same repository or multiple repositories. Useful when your team often sees groups of related PRs that need to be merged together, or otherwise treated as a single broader unit of change.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;TestDeck&lt;/strong&gt;  – a tool to automatically detect, take action on, and process results from flaky tests in your CI infrastructure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stacked PRs CLI&lt;/strong&gt;  – a command line tool that helps developers manage cross-PR dependencies. This tool also automates syncing and merging of stacked PRs. Useful when your team wants to promote a culture of smaller, incremental PRs instead of large changes, or when your workflows involve keeping multiple, dependent PRs in sync.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="http://aviator.co/" rel="noopener noreferrer"&gt;Try it for free.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://www.aviator.co/blog/what-is-a-monorepo-and-why-use-one/" rel="noopener noreferrer"&gt;What is a monorepo and why use one?&lt;/a&gt; first appeared on &lt;a href="https://www.aviator.co/blog" rel="noopener noreferrer"&gt;Aviator Blog&lt;/a&gt;.&lt;/p&gt;


</description>
      <category>monorepo</category>
    </item>
    <item>
      <title>Building a CI/CD pipeline for a Google App Engine site using CircleCI</title>
      <dc:creator>Brian Neville-O'Neill</dc:creator>
      <pubDate>Mon, 27 Nov 2023 18:21:14 +0000</pubDate>
      <link>https://forem.com/aviator_co/building-a-cicd-pipeline-for-a-google-app-engine-site-using-circleci-21c8</link>
      <guid>https://forem.com/aviator_co/building-a-cicd-pipeline-for-a-google-app-engine-site-using-circleci-21c8</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.aviator.co%2Fblog%2Fwp-content%2Fuploads%2F2023%2F11%2FApp-Engine.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.aviator.co%2Fblog%2Fwp-content%2Fuploads%2F2023%2F11%2FApp-Engine.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this article, we will build a CI/CD pipeline for a Google App Engine Site using CircleCI.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Python installed on your system&lt;/li&gt;
&lt;li&gt;Google Cloud CLI installed&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What we are building
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A documentation site connected to GCP (Google Cloud Platform)&lt;/li&gt;
&lt;li&gt;An automated build-and-deploy pipeline using CircleCI&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What is CircleCI?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://circleci.com/" rel="noopener noreferrer"&gt;CircleCI&lt;/a&gt; is a popular choice for software engineers, particularly DevOps engineers when working on &lt;a href="https://www.aviator.co/blog/automating-integration-tests/" rel="noopener noreferrer"&gt;automation&lt;/a&gt; and overall CI/CD integrations. The CI/CD platform helps software teams automate the process of building, testing, and deploying code. As a cloud-based platform, CircleCI allows you to seamlessly integrate with any version control system you choose, such as GitHub, Bitbucket, or GitLab. However, we will be working with GitHub on this article.&lt;/p&gt;

&lt;p&gt;One cool thing about CircleCI is that it lets developers define pipelines that automate the process of building, testing, and deploying code. Pipelines are composed of &lt;a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/process/phases?view=azure-devops&amp;amp;tabs=yaml" rel="noopener noreferrer"&gt;jobs&lt;/a&gt;, which are individual steps in the CI/CD process. Jobs can be configured to run on various platforms, including Linux, macOS, and Windows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why CircleCI?
&lt;/h2&gt;

&lt;p&gt;CircleCI is a friendly tool for teams of all sizes, from small startups to large enterprises, which is why it so often tops the shortlist during CI/CD evaluations. It is a powerful tool that can help teams improve both the quality and the speed of their software development process.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Improved code quality:&lt;/strong&gt; CircleCI can help enhance code quality by automating the testing process. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reduced deployment time:&lt;/strong&gt; CircleCI can help reduce the time it takes to deploy code by automating the process of building and deploying. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Increased confidence in releases:&lt;/strong&gt; CircleCI can help increase confidence in releases by ensuring that code is thoroughly tested before deployment. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Improved team communication:&lt;/strong&gt; CircleCI can help to improve team communication by providing a central location for monitoring the progress of builds and tests.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Relevant CircleCI features
&lt;/h2&gt;

&lt;p&gt;Some of the core features of CircleCI include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Parallelism:&lt;/strong&gt; Jobs can be run in parallel to improve the speed of the CI/CD process. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Caching:&lt;/strong&gt; CircleCI can cache build artifacts and test results to improve the speed of subsequent builds. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Notifications:&lt;/strong&gt; CircleCI can notify team members when builds fail or pass. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring:&lt;/strong&gt; CircleCI provides a dashboard that allows teams to monitor the progress of their builds and tests.&lt;/li&gt;
&lt;/ul&gt;
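
&lt;p&gt;As a taste of what these features look like in practice, caching is declared directly in the pipeline configuration with CircleCI’s &lt;code&gt;save_cache&lt;/code&gt; and &lt;code&gt;restore_cache&lt;/code&gt; steps. The cache key and paths below are illustrative sketches, not values from this project:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;steps:
  - checkout
  - restore_cache:
      keys:
        - deps-v1-{{ checksum "Pipfile.lock" }}
  - run: pipenv install
  - save_cache:
      key: deps-v1-{{ checksum "Pipfile.lock" }}
      paths:
        - ~/.local/share/virtualenvs
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;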

&lt;h2&gt;
  
  
  Getting started
&lt;/h2&gt;

&lt;p&gt;To get started, we first need to create a free GitHub repo (I assume you already know how to do that). The next step is to clone the empty repo. After this, let’s create and activate a Python virtual environment by running the following command in your terminal (this assumes pipenv is installed):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pipenv shell
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is what it should look like once the virtual environment is active:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_762DC8729D6070F874B9A6C8613F9867EBE8E5E337A9107440BC3B666C3AB306_1700597968958_Screenshot%2B2023-11-21%2B211806.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_762DC8729D6070F874B9A6C8613F9867EBE8E5E337A9107440BC3B666C3AB306_1700597968958_Screenshot%2B2023-11-21%2B211806.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, run the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install sphinx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://www.sphinx-doc.org/en/master/" rel="noopener noreferrer"&gt;Sphinx&lt;/a&gt; is a popular documentation generator written in Python that is widely used for creating high-quality documentation for Python projects. It is known for its ease of use, comprehensive features, and extensive support for various output formats.&lt;/p&gt;

&lt;p&gt;The next step is to scaffold the project with Sphinx’s quickstart tool. To learn more, head over to the &lt;a href="https://www.sphinx-doc.org/en/master/usage/quickstart.html" rel="noopener noreferrer"&gt;get started&lt;/a&gt; section of Sphinx’s official site; then run this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sphinx-quickstart
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you run the command, you get asked a series of questions, exactly the ones in the image below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_762DC8729D6070F874B9A6C8613F9867EBE8E5E337A9107440BC3B666C3AB306_1700599745595_Screenshot%2B2023-11-21%2B214746.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_762DC8729D6070F874B9A6C8613F9867EBE8E5E337A9107440BC3B666C3AB306_1700599745595_Screenshot%2B2023-11-21%2B214746.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Respond to these questions until the whole process is complete.&lt;/p&gt;

&lt;p&gt;The quickstart process creates a &lt;code&gt;build&lt;/code&gt; and a &lt;code&gt;source&lt;/code&gt; directory, along with Sphinx make files. To build the HTML version of the site, run this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;make html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After running the command, your project should look like this in your code editor:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_762DC8729D6070F874B9A6C8613F9867EBE8E5E337A9107440BC3B666C3AB306_1700600224195_Screenshot%2B2023-11-21%2B215541.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_762DC8729D6070F874B9A6C8613F9867EBE8E5E337A9107440BC3B666C3AB306_1700600224195_Screenshot%2B2023-11-21%2B215541.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Within the &lt;code&gt;build&lt;/code&gt; directory, we have our website files, which look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;📦build
 ┣ 📂doctrees
 ┃ ┣ 📜environment.pickle
 ┃ ┗ 📜index.doctree
 ┗ 📂html
 ┃ ┣ 📂_sources
 ┃ ┃ ┗ 📜index.rst.txt
 ┃ ┣ 📂_static
 ┃ ┃ ┣ 📜alabaster.css
 ┃ ┃ ┣ 📜basic.css
 ┃ ┃ ┣ 📜custom.css
 ┃ ┃ ┣ 📜doctools.js
 ┃ ┃ ┣ 📜documentation_options.js
 ┃ ┃ ┣ 📜file.png
 ┃ ┃ ┣ 📜language_data.js
 ┃ ┃ ┣ 📜minus.png
 ┃ ┃ ┣ 📜plus.png
 ┃ ┃ ┣ 📜pygments.css
 ┃ ┃ ┣ 📜searchtools.js
 ┃ ┃ ┗ 📜sphinx_highlight.js
 ┃ ┣ 📜.buildinfo
 ┃ ┣ 📜genindex.html
 ┃ ┣ 📜index.html
 ┃ ┣ 📜objects.inv
 ┃ ┣ 📜search.html
 ┃ ┗ 📜searchindex.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When we serve the project on &lt;code&gt;localhost:8000&lt;/code&gt;, this is what it looks like in the browser:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_762DC8729D6070F874B9A6C8613F9867EBE8E5E337A9107440BC3B666C3AB306_1700602624194_Screenshot%2B2023-11-21%2B223633.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_762DC8729D6070F874B9A6C8613F9867EBE8E5E337A9107440BC3B666C3AB306_1700602624194_Screenshot%2B2023-11-21%2B223633.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Congratulations, we have our documentation site live!&lt;/p&gt;
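
&lt;p&gt;If you are wondering how the site was served locally: one simple option, assuming the generated HTML ended up in &lt;code&gt;build/html&lt;/code&gt; as shown above, is Python’s built-in web server:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python -m http.server 8000 --directory build/html
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;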

&lt;h2&gt;
  
  
  Creating a GCP project
&lt;/h2&gt;

&lt;p&gt;In this section, we will create a brand new &lt;a href="https://cloud.google.com/gcp?utm_source=google&amp;amp;utm_medium=cpc&amp;amp;utm_campaign=emea-ng-all-en-bkws-all-all-trial-e-gcp-1011340&amp;amp;utm_content=text-ad-none-any-DEV_c-CRE_501794636587-ADGP_Hybrid+%7C+BKWS+-+EXA+%7C+Txt+~+GCP+~+General%23v2-KWID_43700061569959221-aud-1641092902540:kwd-26415313501-userloc_1010294&amp;amp;utm_term=KW_google+cloud+platform-NET_g-PLAC_&amp;amp;&amp;amp;gad_source=1&amp;amp;gclid=CjwKCAiAx_GqBhBQEiwAlDNAZmnpZ1smTjDIwh0PFBZ6hT-NNobPRD5uYG-SDpNd84A6eiw8ZiDMeRoCDkAQAvD_BwE&amp;amp;gclsrc=aw.ds&amp;amp;hl=en" rel="noopener noreferrer"&gt;GCP&lt;/a&gt; project and work out which of its default settings need to be tweaked. The next step is to create our App Engine &lt;code&gt;app.yaml&lt;/code&gt;. Google provides a &lt;a href="https://cloud.google.com/appengine/docs/legacy/standard/python/getting-started/hosting-a-static-website" rel="noopener noreferrer"&gt;walkthrough&lt;/a&gt; on how to host a static website using GAE, from which we can copy this YAML file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;runtime: python27
api_version: 1
threadsafe: true

handlers:
- url: /
  static_files: www/index.html
  upload: www/index.html

- url: /(.*)
  static_files: www/\1
  upload: www/(.*)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create an &lt;code&gt;app.yaml&lt;/code&gt; file in your editor and paste this code. We then have to edit the YAML file so it points to the location where the website files actually live. To point your gcloud command-line install at this project, use this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud init --project=&amp;lt;"project ID"&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, you will be prompted to log in to Google Cloud like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_762DC8729D6070F874B9A6C8613F9867EBE8E5E337A9107440BC3B666C3AB306_1700650957553_Screenshot%2B2023-11-22%2B120153.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_762DC8729D6070F874B9A6C8613F9867EBE8E5E337A9107440BC3B666C3AB306_1700650957553_Screenshot%2B2023-11-22%2B120153.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Follow the link provided to get your authorization code.&lt;/p&gt;
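
&lt;p&gt;Before deploying, double-check that &lt;code&gt;app.yaml&lt;/code&gt; points at the generated site. As a sketch, if the Sphinx output lives in &lt;code&gt;build/html&lt;/code&gt; (per the directory listing earlier), the handlers might look like this; adjust the paths to your own layout:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;handlers:
- url: /
  static_files: build/html/index.html
  upload: build/html/index.html

- url: /(.*)
  static_files: build/html/\1
  upload: build/html/(.*)
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;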

&lt;p&gt;On the GCP dashboard, navigate to “App Engine” and run the command shown there on your terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;glcloud app deploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will get a prompt asking you to choose the region where you would like your app to be deployed:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_762DC8729D6070F874B9A6C8613F9867EBE8E5E337A9107440BC3B666C3AB306_1700654477879_Screenshot%2B2023-11-22%2B130059.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_762DC8729D6070F874B9A6C8613F9867EBE8E5E337A9107440BC3B666C3AB306_1700654477879_Screenshot%2B2023-11-22%2B130059.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose one, and your app will deploy successfully. It’s time to push our code to GitHub! Alternatively, you can clone my GitHub repo &lt;a href="https://github.com/ChisomUma/Google-app-engine-CircleCI" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Link the GitHub repo to CircleCI
&lt;/h2&gt;

&lt;p&gt;The first thing you need to do is create a &lt;a href="https://circleci.com/" rel="noopener noreferrer"&gt;CircleCI&lt;/a&gt; account and link your GitHub account to it. The process is pretty straightforward. The dashboard should look like this after creating and connecting the project to CircleCI:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_762DC8729D6070F874B9A6C8613F9867EBE8E5E337A9107440BC3B666C3AB306_1700668023418_Screenshot%2B2023-11-22%2B164548.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_762DC8729D6070F874B9A6C8613F9867EBE8E5E337A9107440BC3B666C3AB306_1700668023418_Screenshot%2B2023-11-22%2B164548.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, in our code editor, we will create a folder named &lt;code&gt;.circleci&lt;/code&gt; containing a &lt;code&gt;config.yml&lt;/code&gt; file. The configuration works like this: first, it defines a workflow; the workflow says that each time we push to the main branch, a given set of jobs should run. We will also define that job, which contains the logic for building our documentation site and deploying it to Google App Engine.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;workflows:
  version: 2
  build_and_deploy:
    jobs:
      - build_and_deploy:
          filters:
            branches:
              only:
                - main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;CircleCI will only run this workflow when we push to the main branch. Now, to define the job:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jobs:
  build_and_deploy:
    docker:
      - image: busybox
    steps:
      - run:
          name: hello world
          command: |
            echo "Hello world"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
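
&lt;p&gt;The &lt;code&gt;busybox&lt;/code&gt; job above is just a placeholder to prove the pipeline runs end to end. For reference, a real &lt;code&gt;build_and_deploy&lt;/code&gt; job could be sketched as below; the Docker image, the &lt;code&gt;GCLOUD_SERVICE_KEY&lt;/code&gt; and &lt;code&gt;GOOGLE_PROJECT_ID&lt;/code&gt; environment variables, and the step commands are assumptions you would adapt to your own project:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jobs:
  build_and_deploy:
    docker:
      - image: google/cloud-sdk  # ships with the gcloud CLI preinstalled
    steps:
      - checkout
      - run:
          name: Build the Sphinx site
          command: |
            pip install sphinx
            make html
      - run:
          name: Deploy to App Engine
          command: |
            echo "$GCLOUD_SERVICE_KEY" &amp;gt; key.json
            gcloud auth activate-service-account --key-file key.json
            gcloud app deploy --project "$GOOGLE_PROJECT_ID" --quiet
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;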



&lt;p&gt;You can validate the configuration with CircleCI. To do this, first install the CircleCI CLI and run this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;circleci config validate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Response:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_762DC8729D6070F874B9A6C8613F9867EBE8E5E337A9107440BC3B666C3AB306_1700671583544_Screenshot%2B2023-11-22%2B174555.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_762DC8729D6070F874B9A6C8613F9867EBE8E5E337A9107440BC3B666C3AB306_1700671583544_Screenshot%2B2023-11-22%2B174555.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the CircleCI dashboard, you can see the tests, processes, and workflows whenever we push to the main branch on GitHub:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_762DC8729D6070F874B9A6C8613F9867EBE8E5E337A9107440BC3B666C3AB306_1700672537776_Screenshot%2B2023-11-22%2B180013.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_762DC8729D6070F874B9A6C8613F9867EBE8E5E337A9107440BC3B666C3AB306_1700672537776_Screenshot%2B2023-11-22%2B180013.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Awesome! We have successfully created a CI/CD pipeline. Now, whenever we make changes to our codebase or documentation, we can simply push to main; CircleCI will pick up the change (as demonstrated in the image above), run the workflow, and deploy the update a few minutes later.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This article provided a step-by-step guide on building a CI/CD pipeline for a Google App Engine site using CircleCI. We covered setting up a Python environment, using Sphinx for documentation, and integrating the project with Google Cloud Platform.&lt;/p&gt;

&lt;p&gt;The process demonstrated the benefits of automating deployments via CircleCI, including enhanced code quality, reduced deployment time, and improved team communication. This guide highlights the efficiency and effectiveness of CircleCI in streamlining development processes, making it an invaluable tool for modern software development teams.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://www.aviator.co/" rel="noopener noreferrer"&gt;Aviator&lt;/a&gt;: Automate your cumbersome processes
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://aviator.co/" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.aviator.co%2Fwp-content%2Fuploads%2F2022%2F08%2Fblog-cta-1024x727.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Aviator automates tedious developer workflows by managing git Pull Requests (PRs) and continuous integration test (CI) runs to help your team avoid broken builds, streamline cumbersome merge processes, manage cross-PR dependencies, and handle flaky tests while maintaining their security compliance.&lt;/p&gt;

&lt;p&gt;There are 4 key components to Aviator:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;MergeQueue&lt;/strong&gt;  – an automated queue that manages the merging workflow for your GitHub repository to help protect important branches from broken builds. The Aviator bot uses GitHub Labels to identify Pull Requests (PRs) that are ready to be merged, validates CI checks, processes semantic conflicts, and merges the PRs automatically.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ChangeSets&lt;/strong&gt;  – workflows to synchronize validating and merging multiple PRs within the same repository or multiple repositories. Useful when your team often sees groups of related PRs that need to be merged together, or otherwise treated as a single broader unit of change.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;TestDeck&lt;/strong&gt;  – a tool to automatically detect, take action on, and process results from flaky tests in your CI infrastructure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stacked PRs CLI&lt;/strong&gt;  – a command line tool that helps developers manage cross-PR dependencies. This tool also automates syncing and merging of stacked PRs. Useful when your team wants to promote a culture of smaller, incremental PRs instead of large changes, or when your workflows involve keeping multiple, dependent PRs in sync.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="http://aviator.co/" rel="noopener noreferrer"&gt;Try it for free.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://www.aviator.co/blog/ci-cd-google-app-engine/" rel="noopener noreferrer"&gt;Building a CI/CD pipeline for a Google App Engine site using CircleCI&lt;/a&gt; first appeared on &lt;a href="https://www.aviator.co/blog" rel="noopener noreferrer"&gt;Aviator Blog&lt;/a&gt;.&lt;/p&gt;


</description>
      <category>cicd</category>
    </item>
    <item>
      <title>McKinsey developer productivity metrics: Opportunity isn’t the goal</title>
      <dc:creator>Brian Neville-O'Neill</dc:creator>
      <pubDate>Fri, 17 Nov 2023 18:38:17 +0000</pubDate>
      <link>https://forem.com/aviator_co/mckinsey-developer-productivity-metrics-opportunity-isnt-the-goal-50o5</link>
      <guid>https://forem.com/aviator_co/mckinsey-developer-productivity-metrics-opportunity-isnt-the-goal-50o5</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zxZuvQOT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.aviator.co/blog/wp-content/uploads/2023/11/dev-productivity-1024x569.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zxZuvQOT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.aviator.co/blog/wp-content/uploads/2023/11/dev-productivity-1024x569.jpg" width="800" height="445"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Understanding software team performance is essential so business leaders and engineering managers can accurately assess potential optimizations that could improve throughput. However, precise performance measuring remains a challenge for many organizations.&lt;/p&gt;

&lt;p&gt;It’s often unclear which metrics should be used, how to analyze them, and whether improving them will actually increase your team’s output.&lt;/p&gt;

&lt;p&gt;A recent &lt;a href="https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/yes-you-can-measure-software-developer-productivity"&gt;McKinsey report&lt;/a&gt; argues that software teams should focus their benchmarks on &lt;em&gt;opportunities&lt;/em&gt;. An opportunity represents the possibility of effecting an improvement in product quality or the efficiency of the development process.&lt;/p&gt;

&lt;p&gt;In this article, we’ll analyze this method and how it complements established frameworks like &lt;a href="https://www.aviator.co/blog/everything-wrong-with-dora-metrics"&gt;DORA&lt;/a&gt; and &lt;a href="https://www.aviator.co/blog/whats-wrong-with-using-space-to-measure-developer-productivity"&gt;SPACE&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is opportunity-driven performance measurement?
&lt;/h2&gt;

&lt;p&gt;Software development is a multi-faceted, collaborative, and iterative discipline that requires continual effort from everyone on the team. Modern software companies rarely have idle time; as soon as one feature sprint begins, the focus immediately moves to the next one.&lt;/p&gt;

&lt;p&gt;Therefore, it’s important that the development cycle facilitates gradual improvements over time, to ensure the team can keep making productivity enhancements without being overwhelmed by the burden of existing work.&lt;/p&gt;

&lt;p&gt;These pathways to improvement are the “opportunities” described by McKinsey’s methodology. The approach is designed to let organizations analyze whether their teams are able to move forward, without demanding the use of heavy instrumentation.&lt;/p&gt;

&lt;p&gt;Assessing whether teams have the opportunity to improve provides several benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;You can determine whether a team is at risk of being overworked:&lt;/strong&gt; Teams with few opportunities to improve will be too busy to take on additional work or prioritize internal optimizations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Understand how teams feel about their work:&lt;/strong&gt; Fewer opportunities can indicate that teams are stifled in their creative abilities. If all engineering efforts have to go in one direction, then there’s a higher risk of burnout.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Identify a team’s proficiency, relative to its peers:&lt;/strong&gt; Teams that are creating more opportunities can be more technically proficient than their peers. They solve problems in less time, leaving more room to implement improvements in the delivery process.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Assess whether you have enough overall capacity to innovate:&lt;/strong&gt; If all your teams are struggling to create opportunities to improve, then it means you’re at the limit of what you can achieve with your current capacity. You might be able to fulfill existing obligations but will be unable to meet any additional pressures (such as rapidly launching new features to respond to competitor announcements).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gain assurance that teams are delivering value to the business:&lt;/strong&gt; Teams that have a very high opportunity score—relative to other teams in your organization—might actually be delivering relatively little business value. It could mean they have too much idle time or are preoccupied with process enhancements, to the detriment of value-creating work.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It’s clear from this list that “opportunities created” is a valid metric for tracking software development performance. But how do you actually measure available opportunities?&lt;/p&gt;

&lt;h2&gt;
  
  
  Opportunity-driven metrics explained
&lt;/h2&gt;

&lt;p&gt;McKinsey suggests four primary “lenses” to look through:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Time spent in the inner/outer loop&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Developer Velocity Index benchmark&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Analysis of backlog contributions&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Talent capability score&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let’s explore how these metrics provide improvement opportunities for your organization.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Time spent in the inner/outer loop
&lt;/h3&gt;

&lt;p&gt;The software development lifecycle incorporates inner and outer loops. The inner loop is where work gets done; it mainly consists of writing, testing, building, and deploying code. The outer loop encapsulates administrative tasks that can distract and frustrate developers; examples include planning meetings, compliance audits, and collaboration with other teams.&lt;/p&gt;

&lt;p&gt;Capturing the time spent in each loop offers a picture of available opportunities. Ideally, developers should be spending the majority of their time in the inner loop, as this is where they’re contributing the most value to the organization.&lt;/p&gt;

&lt;p&gt;Teams or individuals that are getting stuck in the outer loop will be spending more time on less productive tasks, limiting your opportunities to progress other work.&lt;/p&gt;
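
&lt;p&gt;As a rough illustration, the split can be computed from logged time entries. The sketch below is hypothetical; the task categories and hours are invented and are not part of McKinsey’s methodology:&lt;/p&gt;

```python
# Toy sketch: classify logged hours into inner/outer loop and report the split.
# The task categories and sample data below are hypothetical.
INNER_LOOP = {"coding", "testing", "building", "deploying"}

def loop_split(entries):
    """entries: list of (category, hours) pairs. Returns (inner, outer) ratios."""
    total = sum(hours for _, hours in entries)
    if total == 0:
        return 0.0, 0.0
    inner = sum(hours for category, hours in entries if category in INNER_LOOP)
    return inner / total, (total - inner) / total

week = [("coding", 18), ("testing", 6), ("meetings", 8), ("compliance", 4), ("building", 4)]
inner, outer = loop_split(week)
print(f"inner loop: {inner:.0%}, outer loop: {outer:.0%}")
```

&lt;p&gt;Tracking this ratio per sprint makes it easy to spot teams drifting toward administrative work.&lt;/p&gt;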

&lt;h3&gt;
  
  
  2. Developer velocity index benchmark
&lt;/h3&gt;

&lt;p&gt;McKinsey’s existing &lt;a href="https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/developer-velocity-how-software-excellence-fuels-business-performance"&gt;Developer Velocity Index (DVI)&lt;/a&gt; benchmark encapsulates performance across several verticals including tooling, working methods, process optimization, and compliance. Teams or individuals with high DVI scores are equipped to capitalize on more opportunities; they are consistently productive at high rates of throughput, which creates windows between regularly scheduled work in which improvements can be made.&lt;/p&gt;

&lt;p&gt;Tools &lt;a href="https://learn.microsoft.com/en-us/assessments/e50f7040-f235-4360-9d1d-cf753e12fed1"&gt;are available&lt;/a&gt; to help you calculate DVI. You can also produce your own version by combining performance and activity metrics such as the number of commits made, issues closed, discussions contributed, and features delivered by each individual or team.&lt;/p&gt;

&lt;p&gt;Developer satisfaction should be considered too, as frustration or inability to operate autonomously will usually impede velocity.&lt;/p&gt;
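
&lt;p&gt;To make the homemade approach concrete, here is a minimal sketch of a composite velocity score. The metric names, baselines, and weights are all illustrative assumptions, not McKinsey’s actual DVI formula:&lt;/p&gt;

```python
# Illustrative only: a homemade velocity score built from normalized activity
# metrics. The metric names and weights are invented, not McKinsey's DVI.
WEIGHTS = {
    "commits": 0.25,
    "issues_closed": 0.25,
    "reviews": 0.2,
    "features": 0.2,
    "satisfaction": 0.1,
}

def velocity_index(metrics, baselines):
    """Score each metric against a baseline and combine into a 0-100 index.
    Ratios are capped at 1.0 so a single outlier cannot dominate the score."""
    score = 0.0
    for name, weight in WEIGHTS.items():
        ratio = min(metrics[name] / baselines[name], 1.0)
        score += weight * ratio
    return score * 100

team = {"commits": 120, "issues_closed": 30, "reviews": 45, "features": 4, "satisfaction": 8}
baseline = {"commits": 100, "issues_closed": 40, "reviews": 50, "features": 5, "satisfaction": 10}
print(f"DVI-style score: {velocity_index(team, baseline):.1f}")
```

&lt;p&gt;The value of such a score is in its trend over time and across teams, not its absolute number.&lt;/p&gt;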

&lt;h3&gt;
  
  
  3. Analysis of backlog contributions
&lt;/h3&gt;

&lt;p&gt;The tasks that a team or individual works on can reveal opportunities to either reassign work or encourage upskilling. Developers who routinely work on backlog tasks represent an increased opportunity, as they’re delivering additional value beyond what may be expected.&lt;/p&gt;

&lt;p&gt;If those backlog tasks fall slightly outside the developer’s usual expertise, then successfully tackling them implies there’s a retraining opportunity as the engineer has already demonstrated proficiency with those skills.&lt;/p&gt;

&lt;p&gt;The opportunity might not necessarily relate to the specific developer who’s working on the tasks. For example, if developers are routinely undertaking infrastructure tasks—but they’re taking a relatively long time to reach completion—then the opportunity is to introduce more operations staff who are better equipped to deal with those tickets. This then frees up the software team to deliver a higher throughput on their specialist development work.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Talent capability score
&lt;/h3&gt;

&lt;p&gt;A talent capability score summarizes the skills and abilities held within a team or organization. It also encapsulates the distribution of skill levels, from beginner through to senior experts.&lt;/p&gt;

&lt;p&gt;High-performing organizations will need a breadth of skills across DevSecOps sectors, with a range of skill levels for each one—junior developers are required to ensure fresh talent enters the business, but these must be supported by senior staff. Having too many people with similar proficiency can impede you in the long term, reducing your talent capability score.&lt;/p&gt;

&lt;p&gt;Advanced talent capability means more opportunities to deliver throughput and process improvements. Different perspectives will make it more likely that new working methods will be discovered. Similarly, having a good spread of proficiencies allows you to sustain performance long-term by coaching and mentoring newcomers into senior roles.&lt;/p&gt;

&lt;p&gt;This progression pathway can make the organization more attractive to developers, increasing staff retention rates and hence producing an additional increase in opportunity.&lt;/p&gt;
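
&lt;p&gt;One simple way to approximate such a score is to reward skill breadth and a balanced spread of seniority levels. The scoring scheme below is invented purely for illustration:&lt;/p&gt;

```python
# Toy sketch of a talent capability score: skill breadth multiplied by how
# balanced the seniority distribution is. The scheme is invented for illustration.
from collections import Counter

LEVELS = ("junior", "mid", "senior")

def capability_score(team):
    """team: list of (skill, level) pairs. Higher is better."""
    skills = {skill for skill, _ in team}
    level_counts = Counter(level for _, level in team)
    # Balance is 1.0 only when every seniority level is represented.
    present = sum(1 for level in LEVELS if level_counts[level])
    balance = present / len(LEVELS)
    return len(skills) * balance

balanced = [("backend", "junior"), ("backend", "senior"), ("frontend", "mid"), ("ops", "senior")]
top_heavy = [("backend", "senior"), ("frontend", "senior"), ("ops", "senior")]
print(capability_score(balanced), capability_score(top_heavy))
```

&lt;p&gt;Both teams cover three skills, but the all-senior team scores lower because it has no pipeline of newcomers to coach into senior roles.&lt;/p&gt;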

&lt;h2&gt;
  
  
  The challenges of using opportunity metrics for performance measurement
&lt;/h2&gt;

&lt;p&gt;McKinsey’s report has prompted &lt;a href="https://newsletter.pragmaticengineer.com/p/measuring-developer-productivity"&gt;more debate about&lt;/a&gt; whether, how, and why software performance should be measured. Looking at the opportunity as a benchmark for performance is a novel idea that encourages a more holistic framing of results while recognizing the iterative, long-term nature of modern software lifecycles.&lt;/p&gt;

&lt;p&gt;However, actually collecting and utilizing this data is likely to be challenging for many organizations. Moreover, the metrics don’t necessarily tell the whole story—similarly to DORA and SPACE, you need to analyze them in the context of your organization’s working methods and business aims.&lt;/p&gt;

&lt;p&gt;Having a theoretical opportunity to improve isn’t necessarily meaningful, such as if it relates to a low-priority project or you’re already satisfying all performance objectives.&lt;/p&gt;

&lt;h3&gt;
  
  
  Opportunity isn’t the goal
&lt;/h3&gt;

&lt;p&gt;Perhaps the biggest drawback of focusing on opportunity is that creating opportunities shouldn’t actually be your goal. Ultimately, software teams are always going to be guided by outputs. An opportunity represents a possibility of an outcome, but it doesn’t guarantee you will realize it. To tangibly improve, you must utilize your opportunities to drive improvements across your development lifecycle.&lt;/p&gt;

&lt;p&gt;The metrics discussed here won’t directly tell you whether that’s happening. For example, you could hire more proficient developers to optimize your talent capability score. This creates an opportunity to achieve higher throughput—but are you actually capitalizing on that opportunity?&lt;/p&gt;

&lt;p&gt;This question must be answered to understand whether continuing the investment is an effective way to further increase throughput. (The answer will also be the primary concern of business and finance teams who want to know whether the new hiring wave has generated financial ROI.)&lt;/p&gt;

&lt;p&gt;It can be tempting to assume that the new highly skilled developers will immediately start producing results, but the reality could be very different. If you don’t have the tools and processes to support effective collaboration, then your engineers might actually be sitting idle.&lt;/p&gt;

&lt;p&gt;Consequently, it’s imperative that apparent opportunities are approached with caution. You must conduct your own research to determine whether opportunities are being actioned, which could involve consideration of trends shown in outcome-oriented performance data—such as DORA’s average deployment frequency and change failure rate values.&lt;/p&gt;

&lt;p&gt;Even so, it may still be difficult to accurately attribute those changes back to your opportunity results. Hence, analyzing opportunities could raise more questions than it solves if you can’t measure whether available opportunities are being converted into actual improvements.&lt;/p&gt;

&lt;h3&gt;
  
  
  Opportunity means obfuscation
&lt;/h3&gt;

&lt;p&gt;Because opportunity must be seen only as an enabler of outcomes—not a target outcome in itself—it means that this method adds another degree of obfuscation to your performance analysis. It is a helpful way to interpret how performance &lt;em&gt;could&lt;/em&gt; change in the future, but it certainly does not mean that it will change in a particular way.&lt;/p&gt;

&lt;p&gt;Zeroing in on opportunity while disregarding other benchmarks could therefore distract from more meaningful optimizations. Obfuscating simpler development metrics (such as a team being understaffed, or deployments taking too long to complete) behind obscure “opportunity” scores won’t be helpful to teams already struggling to get a grip on performance.&lt;/p&gt;

&lt;p&gt;Opportunity is therefore best used as an indicator that you’re on the right track. It shouldn’t be used in isolation, but for high-performing teams, it can help you understand whether you’re on course to sustain and improve your current performance. It recognizes the reality that continual reinvestment of time and resources is required to keep improving processes, and hence increase productivity.&lt;/p&gt;

&lt;p&gt;In summary, measures such as DORA, SPACE, and your own business metrics can help you understand what’s working and what needs improvement in your software development processes. Opportunity analysis offers an additional dimension that suggests whether you’re equipped to implement those improvements across your processes and products. However, this is a derived value that can obscure important details, so it must always be tied back to your organization’s context.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Software development performance is topical, but getting a handle on it is still hard. DORA and SPACE provide measurements for how you’re performing today, but they don’t always reveal whether you’re on a trajectory for future growth.&lt;/p&gt;

&lt;p&gt;Assessing the opportunities being created by your teams helps build a more complete picture of long-term performance. Without the opportunity to improve, you can’t pursue the process optimizations, efficiency enhancements, and toolchain revisions that are critical to scaling DevOps teams to match product growth and market demands.&lt;/p&gt;

&lt;p&gt;That said, the model still won’t give you a definitive picture of overall productivity. There can be multiple reasons why a metric changes or a trend appears, so it’s vital you interrogate your data to ensure that findings are real and relevant. For this reason, it’s usually best to start off small by collecting a few performance metrics—across DORA, SPACE, and opportunity analysis—that are easily measurable and have a clear impact on your business.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://www.aviator.co/"&gt;Aviator&lt;/a&gt;: Automate your cumbersome processes
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://aviator.co/"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MGzRmW1s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://blog.aviator.co/wp-content/uploads/2022/08/blog-cta-1024x727.png" width="800" height="568"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Aviator automates tedious developer workflows by managing git Pull Requests (PRs) and continuous integration test (CI) runs to help your team avoid broken builds, streamline cumbersome merge processes, manage cross-PR dependencies, and handle flaky tests while maintaining their security compliance.&lt;/p&gt;

&lt;p&gt;There are 4 key components to Aviator:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;MergeQueue&lt;/strong&gt;  – an automated queue that manages the merging workflow for your GitHub repository to help protect important branches from broken builds. The Aviator bot uses GitHub Labels to identify Pull Requests (PRs) that are ready to be merged, validates CI checks, processes semantic conflicts, and merges the PRs automatically.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ChangeSets&lt;/strong&gt;  – workflows to synchronize validating and merging multiple PRs within the same repository or multiple repositories. Useful when your team often sees groups of related PRs that need to be merged together, or otherwise treated as a single broader unit of change.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;TestDeck&lt;/strong&gt;  – a tool to automatically detect, take action on, and process results from flaky tests in your CI infrastructure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stacked PRs CLI&lt;/strong&gt;  – a command line tool that helps developers manage cross-PR dependencies. This tool also automates syncing and merging of stacked PRs. Useful when your team wants to promote a culture of smaller, incremental PRs instead of large changes, or when your workflows involve keeping multiple, dependent PRs in sync.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="http://aviator.co/"&gt;Try it for free.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://www.aviator.co/blog/mckinsey-developer-productivity-metrics-opportunity-isnt-the-goal/"&gt;Mckinsey developer productivity metrics: Opportunity isn’t the goal&lt;/a&gt; first appeared on &lt;a href="https://www.aviator.co/blog"&gt;Aviator Blog&lt;/a&gt;.&lt;/p&gt;


</description>
      <category>developerproductivity</category>
      <category>dora</category>
      <category>space</category>
    </item>
    <item>
      <title>Automating integration tests: Tools and frameworks for efficient QA</title>
      <dc:creator>Brian Neville-O'Neill</dc:creator>
      <pubDate>Thu, 09 Nov 2023 20:06:28 +0000</pubDate>
      <link>https://forem.com/aviator_co/automating-integration-tests-tools-and-frameworks-for-efficient-qa-1k9c</link>
      <guid>https://forem.com/aviator_co/automating-integration-tests-tools-and-frameworks-for-efficient-qa-1k9c</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.aviator.co%2Fblog%2Fwp-content%2Fuploads%2F2023%2F11%2Fautomating-integration-tests-1024x568.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.aviator.co%2Fblog%2Fwp-content%2Fuploads%2F2023%2F11%2Fautomating-integration-tests-1024x568.jpg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In today’s fast-paced software development environment, the need for rapid and reliable testing is paramount. One essential aspect of testing is integration testing, which ensures that different components of a software system work seamlessly together.&lt;/p&gt;

&lt;p&gt;In this article, we will explore the benefits of automating integration testing in software development, along with a case study using three popular tools to illustrate the implementation of automated integration testing in a real-world scenario.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is integration testing?
&lt;/h2&gt;

&lt;p&gt;Integration testing is a level of software testing in which individual units or components of a software application are combined and tested as a group. The primary goal of integration testing is to ensure that the interactions between these components, once integrated, work as expected and that the application functions correctly as a whole.&lt;/p&gt;

&lt;h2&gt;
  
  
  Unit testing vs. integration testing
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Scope:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Unit Testing:&lt;/strong&gt; Tests individual units in isolation (functions, classes).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integration Testing:&lt;/strong&gt; Tests interactions and interfaces between integrated units.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Testing Depth:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Unit Testing:&lt;/strong&gt; Focuses on internal logic of a single unit, avoids external dependencies.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integration Testing:&lt;/strong&gt; Validates interactions between multiple units, includes external dependencies like databases or APIs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Purpose:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Unit Testing:&lt;/strong&gt; Verifies correctness of individual units, catches bugs within a unit.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integration Testing:&lt;/strong&gt; Ensures collaboration between units, detects integration-related issues.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Dependencies:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Unit Testing:&lt;/strong&gt; Independent of external dependencies, uses test doubles (mocks, stubs).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integration Testing:&lt;/strong&gt; Requires external dependencies, verifies integration points with other components.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Unit testing is concerned with the correctness of individual units of code, while integration testing focuses on the interactions and integration of multiple units to ensure they work together as expected.&lt;/p&gt;
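
&lt;p&gt;The distinction is easiest to see side by side. In this hypothetical Python example, the unit test checks one function in isolation, while the integration test exercises that function together with a real (in-memory) database:&lt;/p&gt;

```python
# A hypothetical example contrasting the two levels: the unit test checks one
# function in isolation, while the integration test exercises it together with
# a real (in-memory) SQLite database.
import sqlite3

def normalize_email(email):
    return email.strip().lower()

def save_user(conn, email):
    conn.execute("INSERT INTO users (email) VALUES (?)", (normalize_email(email),))

def test_normalize_email():
    # Unit test: a single unit, no external dependencies.
    assert normalize_email("  Alice@Example.COM ") == "alice@example.com"

def test_save_user_roundtrip():
    # Integration test: the unit plus the database it integrates with.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (email TEXT)")
    save_user(conn, "  Alice@Example.COM ")
    assert conn.execute("SELECT email FROM users").fetchone() == ("alice@example.com",)

test_normalize_email()
test_save_user_roundtrip()
print("both tests passed")
```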

&lt;h2&gt;
  
  
  Importance of integration testing in software development
&lt;/h2&gt;

&lt;p&gt;Integration testing is a critical phase in the software development lifecycle where individual components or modules of a system are tested together to uncover issues related to their interactions.&lt;/p&gt;

&lt;p&gt;This type of testing ensures that various parts of a software application collaborate as intended and that data flows correctly between them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits of integration testing
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Interface Issues:&lt;/strong&gt; Identifies data passing errors, inconsistent APIs, and communication breakdowns.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Error Localization:&lt;/strong&gt; Pinpoints failure locations for quicker fixes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;End-to-End Functionality:&lt;/strong&gt; Verifies software’s correct operation for a reliable user experience.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Risk Reduction:&lt;/strong&gt; Early issue detection lowers the risk of critical problems in later stages.
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Challenges in manual integration testing
&lt;/h2&gt;

&lt;p&gt;Manual integration testing, while necessary, can be a time-consuming and error-prone process. Here are some of the challenges associated with manual integration testing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Manual testing for complex software is time-consuming, slowing down development cycles.&lt;/li&gt;
&lt;li&gt;Requires dedicated testers, incurring high costs for organizations.&lt;/li&gt;
&lt;li&gt;Human error can introduce inconsistencies in test execution and miss critical scenarios.&lt;/li&gt;
&lt;li&gt;As software complexity increases, manual testing struggles to scale with the growing number of integration points.
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Roles of automation in enhancing integration testing
&lt;/h2&gt;

&lt;p&gt;Automation is the solution to many of the challenges posed by manual integration testing. By leveraging automation tools and frameworks, you can streamline the testing process, improve accuracy, and expedite the feedback loop. Here’s how automation enhances integration testing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Speed:&lt;/strong&gt; Automated tests run faster, enabling frequent testing cycles.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consistency:&lt;/strong&gt; Tests execute consistently, reducing human errors.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Coverage:&lt;/strong&gt; Automation tests a wide range of scenarios comprehensively.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CI/CD Integration:&lt;/strong&gt; Seamless integration provides immediate feedback on code changes, allowing early issue detection.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the subsequent sections of this article, we will delve deeper into the tools and frameworks available for automating integration testing.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to choose the right tools and frameworks for automation testing
&lt;/h2&gt;

&lt;p&gt;Selecting the right tools and frameworks for automating integration tests is a crucial decision that can significantly impact the efficiency and effectiveness of your testing efforts. Let’s explore the criteria for tool selection and discuss some popular automation tools and testing frameworks for integration tests.&lt;/p&gt;

&lt;h2&gt;
  
  
  Criteria for tool selection
&lt;/h2&gt;

&lt;p&gt;When evaluating testing tools for your integration testing needs, consider the following criteria to make an informed decision:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Programming Languages and Technologies Compatibility&lt;/li&gt;
&lt;li&gt;Community Support and Documentation&lt;/li&gt;
&lt;li&gt;Reporting and Visualization Capabilities &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Popular automation tools
&lt;/h2&gt;

&lt;p&gt;Let’s take a closer look at some popular automation tools commonly used for automating integration tests:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Selenium –&lt;/strong&gt; &lt;a href="https://www.selenium.dev/downloads/" rel="noopener noreferrer"&gt;Downloads | Selenium&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Language Support:&lt;/strong&gt; Selenium supports multiple programming languages, including Java, Python, C#, and JavaScript, making it versatile for various development environments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Browser Compatibility:&lt;/strong&gt; Selenium is widely used for web application testing and supports a broad range of web browsers, including Chrome, Firefox, Safari, and more.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Community:&lt;/strong&gt; Selenium has a large and active community, which means extensive online resources, forums, and support.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cypress –&lt;/strong&gt; &lt;a href="https://www.cypress.io/" rel="noopener noreferrer"&gt;JavaScript Component Testing and E2E Testing Framework | Cypress&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Focused on Web Applications:&lt;/strong&gt; Cypress is designed specifically for testing web applications and provides a rich set of features tailored to this domain.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;JavaScript:&lt;/strong&gt; Cypress uses JavaScript for test scripting, which may be advantageous if your project primarily involves JavaScript development.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Interactive Debugging:&lt;/strong&gt; Cypress offers real-time debugging capabilities, allowing testers to see what happens at each step of a test.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Postman –&lt;/strong&gt; &lt;a href="https://www.postman.com/downloads/" rel="noopener noreferrer"&gt;Download Postman | Get Started for Free&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;API Testing:&lt;/strong&gt; Postman is a popular tool for testing APIs. It allows you to create and execute API requests, making it ideal for integration testing of APIs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Collections:&lt;/strong&gt; Postman collections enable the organization of API requests into logical groups, facilitating test case management.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automation:&lt;/strong&gt; Postman provides options for automating API tests, making it suitable for incorporating into CI/CD pipelines. &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Testing frameworks for integration tests
&lt;/h2&gt;

&lt;p&gt;In addition to automation tools, you would also need testing frameworks to structure and manage your integration tests. Here are some commonly used testing frameworks for integration testing:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pytest (Python) –&lt;/strong&gt; &lt;a href="https://docs.pytest.org/en/7.4.x/" rel="noopener noreferrer"&gt;pytest Documentation&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Python Support:&lt;/strong&gt; Pytest is a popular testing framework for Python applications, offering a simple syntax and powerful test discovery mechanisms.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rich Ecosystem:&lt;/strong&gt; Pytest has a rich ecosystem of plugins and extensions that enhance its functionality and support various testing needs.&lt;/li&gt;
&lt;/ul&gt;
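
&lt;p&gt;For example, a minimal pytest-style integration test might look like the following; the inventory store is hypothetical, and pytest auto-discovers any function whose name starts with test_:&lt;/p&gt;

```python
# A minimal integration test module in pytest style; pytest auto-discovers
# functions named test_* and needs only plain assert statements. The inventory
# store below is hypothetical.
import sqlite3

def make_store():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE stock (item TEXT PRIMARY KEY, qty INTEGER)")
    return conn

def add_stock(conn, item, qty):
    updated = conn.execute("UPDATE stock SET qty = qty + ? WHERE item = ?", (qty, item))
    if updated.rowcount == 0:
        conn.execute("INSERT INTO stock (item, qty) VALUES (?, ?)", (item, qty))

def test_add_stock_accumulates():
    conn = make_store()
    add_stock(conn, "widget", 3)
    add_stock(conn, "widget", 2)
    qty = conn.execute("SELECT qty FROM stock WHERE item = 'widget'").fetchone()[0]
    assert qty == 5
```

&lt;p&gt;Saved as a file such as test_inventory.py and run with the pytest command, the test is collected and executed automatically; on failure, pytest reports the values behind the failing assert.&lt;/p&gt;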

&lt;p&gt;&lt;strong&gt;TestNG (Java) –&lt;/strong&gt; &lt;a href="https://testng.org/doc/" rel="noopener noreferrer"&gt;TestNG – Documentation&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Java Support:&lt;/strong&gt; TestNG is a testing framework for Java applications. It provides advanced features such as parallel test execution, data-driven testing, and test grouping.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Annotations:&lt;/strong&gt; TestNG uses annotations to define test methods and specify test configurations, making it easy to create and manage test suites.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;NUnit (C#) –&lt;/strong&gt; &lt;a href="https://nunit.org/" rel="noopener noreferrer"&gt;NUnit.org&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;C# Support:&lt;/strong&gt; NUnit is a testing framework for C# applications, offering similar features to JUnit and TestNG.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Attributes:&lt;/strong&gt; NUnit uses attributes to define tests and test fixtures, making it easy to create and manage test suites in C# projects.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These tools and frameworks, when used in conjunction with best practices, can streamline your testing efforts, improve test coverage, and contribute to the overall quality of your software.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up automation for integration tests
&lt;/h2&gt;

&lt;p&gt;Let’s walk through the essential steps required to set up automation for integration tests: configuring the testing environment, managing dependencies, and writing test scripts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Environment configuration and dependencies
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Environment Setup:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Development Environment:&lt;/strong&gt; Set up necessary software and libraries for test automation, including code editors, version control systems, and language runtimes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Testing Environment:&lt;/strong&gt; Create a testing environment mirroring the production setup for accurate testing.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Dependency Management:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Package Managers:&lt;/strong&gt; Utilize npm, pip, or Maven to manage project dependencies and automation tools.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Virtual Environments:&lt;/strong&gt; Use tools like Docker to isolate dependencies, ensuring consistency across various development setups. &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Writing test scripts
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;API Testing Tools:&lt;/strong&gt; Choose tools like Postman or Requests for API interactions, covering diverse endpoints and response scenarios.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Authentication:&lt;/strong&gt; Implement required authentication methods such as API tokens or OAuth in your tests.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Handling:&lt;/strong&gt; Prepare test data, considering dynamic data generation or using mock data. &lt;/li&gt;
&lt;/ul&gt;
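
&lt;p&gt;A self-contained sketch of such a test script follows: a throwaway local HTTP server stands in for the real service, and the test checks status code, token authentication, and the JSON payload. The /health endpoint and token are hypothetical:&lt;/p&gt;

```python
# Self-contained sketch of an automated API test: a throwaway local HTTP server
# stands in for the real service, and the test checks status code, token auth,
# and the JSON payload. The /health endpoint and token are hypothetical.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

API_TOKEN = "test-token"  # hypothetical credential

class FakeAPI(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.headers.get("Authorization") != "Bearer " + API_TOKEN:
            self.send_response(401)
            self.end_headers()
            return
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet

server = HTTPServer(("127.0.0.1", 0), FakeAPI)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d/health" % server.server_port

request = urllib.request.Request(url, headers={"Authorization": "Bearer " + API_TOKEN})
with urllib.request.urlopen(request) as response:
    payload = json.load(response)
    status = response.status

server.shutdown()
assert status == 200 and payload == {"status": "ok"}
print("API integration test passed")
```

&lt;p&gt;In a real suite, the request would target your staging environment instead of a fake server, but the assertions stay the same.&lt;/p&gt;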

&lt;h2&gt;
  
  
  Simulating user interactions
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;UI Automation Tools:&lt;/strong&gt; Select Selenium or Cypress for simulating user actions in web applications.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Test Cases:&lt;/strong&gt; Write test cases replicating critical user interactions like form submissions and button clicks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Asynchronous Actions:&lt;/strong&gt; Handle asynchronous actions like AJAX requests for accurate testing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cross-Browser Testing:&lt;/strong&gt; Verify application functionality across multiple browsers and versions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Management:&lt;/strong&gt; Plan test data management, including database setup or generation scripts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Mocking:&lt;/strong&gt; Use mocking libraries to simulate external services, isolating tests from external factors.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Cleanup:&lt;/strong&gt; Implement mechanisms to clean up test data post-execution for a consistent testing environment.&lt;/li&gt;
&lt;/ul&gt;
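
&lt;p&gt;To illustrate the data mocking point, a hypothetical external payment gateway can be replaced with a mock so the surrounding checkout logic is tested in isolation, with no network calls:&lt;/p&gt;

```python
# Sketch of data mocking: a hypothetical external payment gateway is replaced
# with a Mock, so the checkout logic is tested without any network calls.
from unittest.mock import Mock

def checkout(cart_total, gateway):
    """Charge the gateway and return a small order record."""
    receipt = gateway.charge(amount=cart_total)
    return {"paid": receipt["ok"], "txn": receipt["id"]}

# The mock stands in for the real gateway client.
gateway = Mock()
gateway.charge.return_value = {"ok": True, "id": "txn-123"}

order = checkout(49.99, gateway)
assert order == {"paid": True, "txn": "txn-123"}
gateway.charge.assert_called_once_with(amount=49.99)
print("mocked integration point verified")
```

&lt;p&gt;The final assertion also verifies the integration point itself: that the gateway was called exactly once, with the expected amount.&lt;/p&gt;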

&lt;p&gt;By following these steps, you can establish a solid foundation for automating integration tests.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to execute and analyze automated integration tests
&lt;/h2&gt;

&lt;p&gt;Next, let’s consider the steps involved in executing and analyzing automated integration tests: running tests locally, integrating them into CI/CD pipelines, generating reports and logs, and interpreting test results and failures.&lt;/p&gt;

&lt;h2&gt;
  
  
  Running tests locally and in CI/CD pipelines
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Local and CI/CD Test Execution:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Setup:&lt;/strong&gt; Configure local environment with necessary dependencies and tools.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Running Tests:&lt;/strong&gt; Execute tests using chosen automation tool via CLI or IDE.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Debugging:&lt;/strong&gt; Use automation tool features for script debugging.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;CI/CD Pipeline Integration:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Version Control:&lt;/strong&gt; Link repository with CI/CD system for automated test triggers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automated Testing:&lt;/strong&gt; Integrate tests into CI/CD pipeline for continuous validation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Parallel Execution:&lt;/strong&gt; Run tests in parallel to optimize pipeline speed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Generating Reports and Logs:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Test Execution Reports:&lt;/strong&gt; Automation tools provide detailed test execution reports.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Custom Reports:&lt;/strong&gt; Develop custom reports for additional insights or visualizations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Logging:&lt;/strong&gt; Implement logging in scripts for vital execution data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Debugging Output:&lt;/strong&gt; Ensure clear error messages and stack traces for effective debugging.&lt;/li&gt;
&lt;/ul&gt;
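
&lt;p&gt;The logging and debugging-output points above can be sketched with Python’s standard-library &lt;code&gt;logging&lt;/code&gt; module; the logger name and the step-runner helper are illustrative choices, not part of any framework:&lt;/p&gt;

```python
import logging

# Configure a logger once so every test run leaves a diagnosable trail
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
logger = logging.getLogger("integration_tests")

def run_step(name, func):
    """Run one test step, logging its outcome for later debugging."""
    try:
        result = func()
        logger.info("step %s passed", name)
        return result
    except Exception:
        # logger.exception records the full stack trace -- the kind of
        # debugging output you want to find in CI logs after a failure
        logger.exception("step %s failed", name)
        raise

run_step("sanity check", lambda: 2 + 2)
```

&lt;p&gt;Failures re-raise after logging, so the test still fails loudly while the log retains the stack trace for triage.&lt;/p&gt;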

&lt;p&gt;&lt;strong&gt;Interpreting Test Results:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Passed Tests:&lt;/strong&gt; Validate expected behavior in various scenarios.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Failed Tests:&lt;/strong&gt; Investigate reasons behind failures, such as incorrect assertions or unexpected responses.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Failure Investigation:&lt;/strong&gt; Refer to logs, reproduce failures locally, and pinpoint issues.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Root Cause Analysis:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Code Review:&lt;/strong&gt; Check recent code changes linked to failed tests.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Environment and Data:&lt;/strong&gt; Consider external factors like environment changes or data inconsistencies.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Bug Reporting and Tracking:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Issue Tracking:&lt;/strong&gt; Report genuine bugs in project’s issue tracking system.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Re-Run Tests:&lt;/strong&gt; After fixes, re-run tests to confirm problem resolution.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By following these steps, you can effectively execute and analyze automated integration tests as part of your software development process. This approach ensures you catch integration issues early, facilitate debugging, and maintain a reliable and robust codebase.&lt;/p&gt;

&lt;h2&gt;
  
  
  Case study: Implementing automated integration testing
&lt;/h2&gt;

&lt;p&gt;This is a case study that illustrates the implementation of automated integration testing in a real-world scenario. We’ll cover the scenario description, tool and framework selection, test script creation and execution, and the results, providing practical examples and guidance on implementing these strategies using popular tools and frameworks, including FastAPI, to help you overcome challenges and automate integration tests successfully.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scenario description
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Scenario:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Imagine you work for a software development company that is building a modern e-commerce platform. The platform consists of various components, including a web application, a RESTful API, and a backend database. Your team is responsible for ensuring the smooth integration of these components and maintaining a high level of quality in the application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Challenges:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Frequent code changes and feature additions are causing integration issues.&lt;/li&gt;
&lt;li&gt;Manual testing is time-consuming, error-prone, and slowing down the development process.&lt;/li&gt;
&lt;li&gt;Ensuring that customer data remains secure and consistent across the platform is a top priority.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Tool and framework selection:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Selenium:&lt;/strong&gt; For UI automation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Postman:&lt;/strong&gt; For API testing and executing API requests.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pytest:&lt;/strong&gt; For structuring and managing integration tests.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Test script creation and execution&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Test Script Creation:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Selenium for UI Test:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Selenium does not ship with its own test framework, so you write test cases with a framework such as Python’s &lt;a href="https://docs.pytest.org/en/stable/" rel="noopener noreferrer"&gt;pytest&lt;/a&gt; module; &lt;a href="https://docs.python.org/3/library/unittest.html" rel="noopener noreferrer"&gt;unittest&lt;/a&gt; and &lt;a href="https://nose.readthedocs.io/en/latest/" rel="noopener noreferrer"&gt;nose&lt;/a&gt; are alternatives. In this example, we use pytest as the framework of choice and ChromeDriver, the WebDriver for Chrome, to test the login process in an application. Which WebDriver you download depends on the browser you want to automate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Setup:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Install Selenium and pytest, download the appropriate WebDriver (in this case, ChromeDriver), and add it to your PATH. You can check out how to install ChromeDriver and add it to your PATH &lt;a href="https://www.browserstack.com/guide/run-selenium-tests-using-selenium-chromedriver" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# Install the Selenium WebDriver, pytest and necessary browser driver which is ChromeDriver
# pip install selenium pytest
# Download ChromeDriver: https://sites.google.com/chromium.org/driver/downloads
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Import Modules:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Import the required modules.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;WebDriver Initialization:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create an instance of the Chrome WebDriver to control the browser.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
options = Options()

# To open the browser maximized
options.add_argument("start-maximized")

# To keep the browser open
options.add_experimental_option("detach", True)

# Initialize the web driver (assuming you've downloaded and placed ChromeDriver in your PATH)
driver = webdriver.Chrome(options=options)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Test Function:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Define a test function &lt;code&gt;test_logIn&lt;/code&gt; that simulates user interactions on the website.&lt;/li&gt;
&lt;li&gt;The function navigates to the website and enters the email and password you provided, locating each field with &lt;code&gt;driver.find_element&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Send &lt;code&gt;Keys.RETURN&lt;/code&gt; to press Enter.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def test_logIn():
    # Insert the 'email' and 'password' credentials
    email = "[email-address]"
    password = "[password]"
   # Navigate to the login page
    driver.get("[website-url]") # Replace with the login page URL

    # Check the title of the page
    assert "TestWebsit" in driver.title 

    # Find and interact with the email and password fields
    email_field = driver.find_element(By.NAME, "email")
    password_field = driver.find_element(By.NAME, "password")

    email_field.send_keys(eamil)
    password_field.send_keys(password)
    # Enter the 'RETURN' key
    password_field.send_keys(Keys.RETURN)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Cleanup:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Close the browser to release resources.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Clean up: close the browser
driver.quit()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Running Selenium UI Tests:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To run Selenium UI tests, you’ll need to make sure you have the Selenium WebDriver set up and installed. Here’s how to run the Selenium UI tests:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Place your Selenium UI test code in a Python file, e.g., &lt;code&gt;ui_test.py&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Open a terminal and navigate to the directory where &lt;code&gt;ui_test.py&lt;/code&gt; is located.&lt;/li&gt;
&lt;li&gt;Run the UI test script using Python:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pytest ui_test.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;The script will launch a Chrome browser, navigate to the specified website, perform the user interactions, and verify the results.
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_E9D71EF7736F5AF1F0FAAF9173BD209BB3E069C62E0284EAEBE75203D80969A6_1697465052220_Op7a2vgg.png"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Pytest for integration testing&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Pytest test cases are written to test the interaction between the web application and the RESTful API. These tests ensure that data flows correctly between the frontend and backend. In this example, the Sign-up endpoint is tested.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Setup:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Import the libraries and classes that will be used.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import httpx
import pytest
import pytest_asyncio
from typing import AsyncIterator
from fastapi import FastAPI, status
from pydantic import BaseModel
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Create an Endpoint:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Instantiate the FastAPI class&lt;/li&gt;
&lt;li&gt;Write a schema for the endpoint you want to create&lt;/li&gt;
&lt;li&gt;Create your Endpoint
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
app = FastAPI()

class Register(BaseModel):
    username: str
    password: str
    email: str

@app.post("/register", status_code=status.HTTP_201_CREATED)
async def signUp(register: Register):
    return {
        "username": register.username,
        "email": register.email
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Fixture for Base URL:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Define a Pytest fixture &lt;code&gt;client&lt;/code&gt; that uses &lt;code&gt;httpx.AsyncClient&lt;/code&gt; to make asynchronous requests, instead of FastAPI’s synchronous &lt;code&gt;TestClient&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@pytest_asyncio.fixture
async def client() -&amp;gt; AsyncIterator[httpx.AsyncClient]:
    async with httpx.AsyncClient(app=app, base_url="http://testserver") as client:
        yield client
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Test Function:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Define an asynchronous test function (&lt;code&gt;test_api_integration&lt;/code&gt;) for integration testing.&lt;/li&gt;
&lt;li&gt;Use an async HTTP client (&lt;code&gt;httpx.AsyncClient&lt;/code&gt;) to simulate user registration.&lt;/li&gt;
&lt;li&gt;Assert the response status code and verify that the data returned is accurate.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
@pytest.mark.asyncio
async def test_api_integration(client: httpx.AsyncClient):
    # Simulate a user registration process
    registration_data = {
        "username": "testuser",
        "password": "password123",
        "email": "testuser@example.com"
    }

    response = await client.post("/register", json=registration_data)
    assert response.status_code == 201
    assert response.json() == {
        "username": registration_data['username'],
        "email": registration_data['email']
    }
    # Perform additional integration tests as needed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Running Pytest Integration Tests:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To run the Pytest integration tests:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Place your Pytest integration test code in a Python file, e.g., &lt;code&gt;integration_test.py&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Open a terminal and navigate to the directory where &lt;code&gt;integration_test.py&lt;/code&gt; is located.&lt;/li&gt;
&lt;li&gt;Run the Pytest test suite:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;code&gt;pytest integration_test.py&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pytest will discover and run the test functions within the specified Python file. Because the &lt;code&gt;client&lt;/code&gt; fixture passes the FastAPI app directly to &lt;code&gt;httpx.AsyncClient&lt;/code&gt;, requests are handled in process, so no separately running server is required.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_63A784C597FB30B350642C32C1B25D0039CB1610675BBE80414C517738DBA4DB_1697462322013_Running%2Bintegration_test.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_63A784C597FB30B350642C32C1B25D0039CB1610675BBE80414C517738DBA4DB_1697462322013_Running%2Bintegration_test.png" alt="Running integration_test.py"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;By following these steps, you can execute the UI, API, and integration tests to verify the functionality of your application. Make sure to customize the test scripts according to your specific application and requirements before running them.&lt;/li&gt;
&lt;li&gt;Tests are also integrated into the CI/CD pipeline, triggering automated test runs on each code commit and deployment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Postman for API Tests&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;API test suites are developed to verify the functionality of the RESTful API. Tests include endpoints for user registration, product retrieval, and order processing. In this example, we test the Sign-Up endpoint created in the pytest example above.&lt;/p&gt;

&lt;p&gt;Download the &lt;a href="https://www.postman.com/downloads/" rel="noopener noreferrer"&gt;Postman application&lt;/a&gt; onto your computer and create an account. Alternatively, you can use the &lt;a href="https://www.postman.com/" rel="noopener noreferrer"&gt;Postman web application&lt;/a&gt;; in this example, we use the desktop app.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_63A784C597FB30B350642C32C1B25D0039CB1610675BBE80414C517738DBA4DB_1697400939334_Testing%2Bthe%2BSign%2Bin%2BEndpoint.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_63A784C597FB30B350642C32C1B25D0039CB1610675BBE80414C517738DBA4DB_1697400939334_Testing%2Bthe%2BSign%2Bin%2BEndpoint.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To begin, create a &lt;code&gt;Workspace&lt;/code&gt; by clicking the dropdown arrow.&lt;/li&gt;
&lt;li&gt;Create a collection by clicking the &lt;code&gt;Create Collection&lt;/code&gt; icon.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_63A784C597FB30B350642C32C1B25D0039CB1610675BBE80414C517738DBA4DB_1697401070847_Create%2Bcollection.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_63A784C597FB30B350642C32C1B25D0039CB1610675BBE80414C517738DBA4DB_1697401070847_Create%2Bcollection.png" alt="Create a Collection"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provide the URL of the endpoint you want to test, and do not forget to select the correct HTTP method.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_63A784C597FB30B350642C32C1B25D0039CB1610675BBE80414C517738DBA4DB_1697401088954_the%2BURL.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_63A784C597FB30B350642C32C1B25D0039CB1610675BBE80414C517738DBA4DB_1697401088954_the%2BURL.png" alt="Insert the URL to the Endpoint"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add a request body if required. In this case, a body is required.&lt;/li&gt;
&lt;li&gt;Click &lt;code&gt;Send&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_63A784C597FB30B350642C32C1B25D0039CB1610675BBE80414C517738DBA4DB_1697401125454_the%2Bbody.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_63A784C597FB30B350642C32C1B25D0039CB1610675BBE80414C517738DBA4DB_1697401125454_the%2Bbody.png" alt="Add a Request Body"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check the response body to make sure what you want is being returned.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_63A784C597FB30B350642C32C1B25D0039CB1610675BBE80414C517738DBA4DB_1697401249914_the%2Bresponse.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_63A784C597FB30B350642C32C1B25D0039CB1610675BBE80414C517738DBA4DB_1697401249914_the%2Bresponse.png" alt="Response Body"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;As the number of endpoints grows, you can run the collection as a whole instead of testing each endpoint individually.
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_63A784C597FB30B350642C32C1B25D0039CB1610675BBE80414C517738DBA4DB_1697401265820_Run%2Bcollection.png" alt="Run Collection"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Results and impact on QA process
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Integration issues are detected and resolved much earlier in the development process, reducing the cost and complexity of fixing them.&lt;/li&gt;
&lt;li&gt;UI tests with Selenium have identified and resolved several user interface issues, leading to an improved user experience.&lt;/li&gt;
&lt;li&gt;API tests in Postman have helped ensure that the RESTful API functions correctly, maintaining data integrity and security.&lt;/li&gt;
&lt;li&gt;Integration tests with Pytest have confirmed that the components of the e-commerce platform work seamlessly together, reducing integration-related bugs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this case study, the implementation of automated integration testing using Selenium, Postman, and Pytest has significantly improved the software development process. Integration issues are caught early, leading to a more efficient and reliable QA process and a higher-quality e-commerce (example) platform for customers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Best practices for effective automation
&lt;/h2&gt;

&lt;p&gt;To ensure the success of your automated integration testing efforts, it’s essential to follow best practices that promote efficiency, reliability, and maintainability. Here are some of the most important:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Isolate tests to prevent dependencies on prior states.&lt;/li&gt;
&lt;li&gt;Programmatic data creation ensures replicable testing.&lt;/li&gt;
&lt;li&gt;Implement cleanup mechanisms for a consistent environment.&lt;/li&gt;
&lt;li&gt;Regularly update tests to match application changes.&lt;/li&gt;
&lt;li&gt;Remove redundant tests for efficiency.&lt;/li&gt;
&lt;li&gt;Keep documentation and reports clear, concise, and up-to-date. &lt;/li&gt;
&lt;/ul&gt;
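
&lt;p&gt;The data-related practices above (programmatic data creation, isolation, and cleanup) can be sketched with Python’s standard-library &lt;code&gt;sqlite3&lt;/code&gt;; the schema and seed data are hypothetical:&lt;/p&gt;

```python
import sqlite3
from contextlib import contextmanager

@contextmanager
def test_database():
    """Create test data programmatically and guarantee cleanup afterwards."""
    conn = sqlite3.connect(":memory:")  # isolated, throwaway database
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
    conn.execute("INSERT INTO users (email) VALUES ('testuser@example.com')")
    conn.commit()
    try:
        yield conn
    finally:
        conn.close()  # cleanup runs even when a test fails midway

# Each test gets a fresh, fully seeded database and leaves nothing behind
with test_database() as db:
    count = db.execute("SELECT COUNT(*) FROM users").fetchone()[0]
    assert count == 1
```

&lt;p&gt;Because every test builds and tears down its own data, no test depends on the state another test left behind.&lt;/p&gt;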

&lt;p&gt;In this comprehensive exploration of automated integration testing, we’ve covered the significance of this testing approach and the tools, frameworks, and best practices that enable its successful implementation.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://www.aviator.co/" rel="noopener noreferrer"&gt;Aviator&lt;/a&gt;: Automate your cumbersome processes
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://aviator.co/" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.aviator.co%2Fwp-content%2Fuploads%2F2022%2F08%2Fblog-cta-1024x727.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Aviator automates tedious developer workflows by managing git Pull Requests (PRs) and continuous integration (CI) test runs to help your team avoid broken builds, streamline cumbersome merge processes, manage cross-PR dependencies, and handle flaky tests while maintaining security compliance.&lt;/p&gt;

&lt;p&gt;There are 4 key components to Aviator:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;MergeQueue&lt;/strong&gt;  – an automated queue that manages the merging workflow for your GitHub repository to help protect important branches from broken builds. The Aviator bot uses GitHub Labels to identify Pull Requests (PRs) that are ready to be merged, validates CI checks, processes semantic conflicts, and merges the PRs automatically.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ChangeSets&lt;/strong&gt;  – workflows to synchronize validating and merging multiple PRs within the same repository or multiple repositories. Useful when your team often sees groups of related PRs that need to be merged together, or otherwise treated as a single broader unit of change.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;TestDeck&lt;/strong&gt;  – a tool to automatically detect, take action on, and process results from flaky tests in your CI infrastructure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stacked PRs CLI&lt;/strong&gt;  – a command line tool that helps developers manage cross-PR dependencies. This tool also automates syncing and merging of stacked PRs. Useful when your team wants to promote a culture of smaller, incremental PRs instead of large changes, or when your workflows involve keeping multiple, dependent PRs in sync.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="http://aviator.co/" rel="noopener noreferrer"&gt;Try it for free.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://www.aviator.co/blog/automating-integration-tests/" rel="noopener noreferrer"&gt;Automating integration tests: Tools and frameworks for efficient QA&lt;/a&gt; first appeared on &lt;a href="https://www.aviator.co/blog" rel="noopener noreferrer"&gt;Aviator Blog&lt;/a&gt;.&lt;/p&gt;


</description>
      <category>automation</category>
      <category>integrationtests</category>
      <category>qa</category>
    </item>
    <item>
      <title>Introducing Aviator’s engineering efficiency calculator</title>
      <dc:creator>Brian Neville-O'Neill</dc:creator>
      <pubDate>Tue, 07 Nov 2023 18:58:45 +0000</pubDate>
      <link>https://forem.com/aviator_co/introducing-aviators-engineering-efficiency-calculator-2cm</link>
      <guid>https://forem.com/aviator_co/introducing-aviators-engineering-efficiency-calculator-2cm</guid>
      <description>&lt;p&gt;&lt;a href="https://app.aviator.co/calculator"&gt;https://app.aviator.co/calculator&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Measuring engineering productivity is a complicated process — it’s hard to get a full picture of how developers spend their time. A common way to measure productivity is to analyze system metrics like DORA or SPACE. These can be extremely useful metrics to understand the productivity of the team compared to the industry standards. Diving into each of those metrics can also provide insights into what’s slowing down the team.&lt;/p&gt;

&lt;p&gt;But sometimes there are also “hidden pockets” of time that developers spend throughout their day that may not be perceived as impacting productivity. However, when we start adding those things up, the numbers can be alarming.&lt;/p&gt;

&lt;p&gt;For instance, consider the amount of time a developer spends debugging a flaky test trying to figure out if it failed because of their change or not. Or, the time spent by a developer who is trying to resolve a mainline build failure.&lt;/p&gt;

&lt;p&gt;To provide that perspective, we built a calculator that makes a run at assessing engineering efficiency. By no means does this provide a full analysis of your engineering team’s efficiency. &lt;strong&gt;What it does provide is a glimpse of the “hidden pockets” of time wasted that typically do not show up in more common productivity metrics.&lt;/strong&gt; The calculator focuses on how much time you and your team lose due to build and test failures in developer workflows.&lt;/p&gt;

&lt;p&gt;If you compare this to DORA metrics, the lead time for changes is significantly impacted by build and test instability. &lt;a href="https://app.aviator.co/calculator"&gt;That impact can be assessed using this calculator.&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How it works
&lt;/h2&gt;

&lt;p&gt;We ask you to input data based on your GitHub activities and how you use GitHub branches. To explain the actual calculations below, let’s assign variables to each of them:&lt;/p&gt;

&lt;p&gt;M – PRs merged per day&lt;/p&gt;

&lt;p&gt;X – mainline failures in a week&lt;/p&gt;

&lt;p&gt;T – average CI time&lt;/p&gt;

&lt;p&gt;F – flakiness factor %&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Fdezd5kB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.aviator.co/blog/wp-content/uploads/2023/11/Screen-Shot-2023-10-24-at-6.25.27-PM-1024x536.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Fdezd5kB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.aviator.co/blog/wp-content/uploads/2023/11/Screen-Shot-2023-10-24-at-6.25.27-PM-1024x536.png" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Based on these inputs, we estimate how much time your engineering team wastes weekly on managing build failures and dealing with flaky tests. Let’s go over the results one by one.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cLMi9VKR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.aviator.co/blog/wp-content/uploads/2023/11/Screen-Shot-2023-10-24-at-6.30.44-PM-1024x576.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cLMi9VKR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.aviator.co/blog/wp-content/uploads/2023/11/Screen-Shot-2023-10-24-at-6.30.44-PM-1024x576.png" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Hours wasted to fix
&lt;/h3&gt;

&lt;p&gt;This calculates how many hours are wasted identifying, triaging, and fixing a mainline build failure until the build passes again. Typically in a large team, someone will notice and report the broken mainline build.&lt;/p&gt;

&lt;p&gt;We assume that a mainline build failure involves an average of 1-3 developers to debug and fix. If we consider an average of one hour for the time it takes for the issue to be reported and a fix to be pushed, we are spending &lt;strong&gt;(2*T + 1) hours&lt;/strong&gt; to track, investigate, and resolve the issue.&lt;/p&gt;

&lt;p&gt;That means, if there are X failures a week, we are spending &lt;strong&gt;(2 devs * X/5 * (2*T + 1)) hours&lt;/strong&gt; of developer time fighting mainline build failures every day.&lt;/p&gt;

&lt;h3&gt;
  
  
  Hours wasted in resolving merge conflicts
&lt;/h3&gt;

&lt;p&gt;Rollbacks and merge conflicts can cause further issues. Assuming that roughly 2% of PRs hit merge conflicts during the broken-build window &lt;strong&gt;((2*T + 1) * X/5)&lt;/strong&gt;, with &lt;strong&gt;M/8&lt;/strong&gt; PRs coming in every hour, we will spend &lt;strong&gt;((2*T + 1) * X/5) * 0.02 * M/8&lt;/strong&gt; hours resolving these conflicts.&lt;/p&gt;

&lt;h3&gt;
  
  
  Weekly CI failures due to broken builds
&lt;/h3&gt;

&lt;p&gt;If the team is not using a golden branch to base their feature branches on, they would likely create feature branches on top of a failed mainline branch. Since the number of PRs created during any time would be similar to the average number of feature branches based out of mainline, this would cause &lt;strong&gt;(2*T + 1) * X/5 * M/8&lt;/strong&gt; CI failures every day.&lt;/p&gt;

&lt;h3&gt;
  
  
  Time to resolve CI
&lt;/h3&gt;

&lt;p&gt;With approximately fifteen minutes of context switching to handle every build failure, that’s &lt;strong&gt;(2*T + 1) * X/5 * M/8 * 0.25&lt;/strong&gt; hours of developer time wasted every day on CI failures.&lt;/p&gt;

&lt;h3&gt;
  
  
  Time spent rerunning flaky tests
&lt;/h3&gt;

&lt;p&gt;Similarly, with flaky tests, the context switching required to investigate whether a failure was flaky or real, plus rerunning the tests, takes an average of fifteen minutes per run. Depending on the flakiness factor, developers would waste &lt;strong&gt;(0.25 * M * F / 100)&lt;/strong&gt; hours every day.&lt;/p&gt;
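
&lt;p&gt;As a rough sketch, the calculations above can be combined into one small Python function. The parameter names mirror the article’s M, X, T, and F; the sample inputs at the end are illustrative, not measurements:&lt;/p&gt;

```python
def wasted_hours_per_day(m, x, t, f):
    """Estimate daily developer time lost, following the article's formulas.

    m: PRs merged per day, x: mainline failures per week,
    t: average CI time in hours, f: flakiness factor in percent.
    """
    x_day = x / 5            # mainline failures per working day
    window = 2 * t + 1       # hours a broken build stays open
    prs_per_hour = m / 8     # PRs arriving per working hour

    return {
        "fix_hours": 2 * x_day * window,                  # 2 devs fixing mainline
        "merge_conflict_hours": window * x_day * 0.02 * prs_per_hour,
        "ci_failures": window * x_day * prs_per_hour,     # a count, not hours
        "ci_resolution_hours": window * x_day * prs_per_hour * 0.25,
        "flaky_rerun_hours": 0.25 * m * f / 100,
    }

# Example: 20 PRs/day, 3 failures/week, 30-minute CI, 5% flakiness
estimate = wasted_hours_per_day(m=20, x=3, t=0.5, f=5)
```

&lt;p&gt;Summing the hour-valued entries gives a quick daily total for a team with those characteristics.&lt;/p&gt;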

&lt;h1&gt;
  
  
  &lt;strong&gt;Improving efficiency&lt;/strong&gt;
&lt;/h1&gt;

&lt;p&gt;Although most of these costs affect the DORA metric associated with &lt;strong&gt;lead time for changes,&lt;/strong&gt; we are still just scratching the surface in measuring the inefficiencies in engineering team workflows. Build and test failures also delay releases, impacting other DORA metrics like &lt;strong&gt;deployment frequency&lt;/strong&gt; and &lt;strong&gt;time to restore service&lt;/strong&gt;, and the persistence of flaky tests in the system can lead to a higher change failure rate. &lt;a href="https://cloud.google.com/blog/products/devops-sre/using-the-four-keys-to-measure-your-devops-performance"&gt;Learn more about DORA metrics&lt;/a&gt;, &lt;a href="https://www.aviator.co/blog/everything-wrong-with-dora-metrics/"&gt;or about their disadvantages&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We built &lt;a href="https://www.aviator.co/"&gt;Aviator&lt;/a&gt; to solve some of these hidden challenges for large engineering teams. Today, using Aviator MergeQueue, many engineering organizations scale their merge workflows without breaking builds. Combined with a flaky test suppression system like &lt;a href="https://www.aviator.co/testdeck"&gt;TestDeck&lt;/a&gt;, teams can save hundreds of engineering hours every week.&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://www.aviator.co/blog/engineering-efficiency-calculator/"&gt;Introducing Aviator’s engineering efficiency calculator&lt;/a&gt; first appeared on &lt;a href="https://www.aviator.co/blog"&gt;Aviator Blog&lt;/a&gt;.&lt;/p&gt;


</description>
      <category>aviator</category>
      <category>calculator</category>
      <category>dorametrics</category>
<category>efficiency</category>
    </item>
    <item>
      <title>A modern guide to CODEOWNERS</title>
      <dc:creator>Brian Neville-O'Neill</dc:creator>
      <pubDate>Tue, 31 Oct 2023 15:24:47 +0000</pubDate>
      <link>https://forem.com/aviator_co/a-modern-guide-to-codeowners-242h</link>
      <guid>https://forem.com/aviator_co/a-modern-guide-to-codeowners-242h</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.aviator.co%2Fblog%2Fwp-content%2Fuploads%2F2023%2F10%2FCODEOWNERS-1024x650.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.aviator.co%2Fblog%2Fwp-content%2Fuploads%2F2023%2F10%2FCODEOWNERS-1024x650.jpeg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In software development, numerous people touch the same code, and it’s crucial to keep track of who is responsible for what. This is where &lt;code&gt;CODEOWNERS&lt;/code&gt; come into play.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;CODEOWNERS&lt;/code&gt; feature allows you to specify individuals or teams who are responsible for code in a repository, making it easier to manage your projects. This feature can streamline the review process, enhance security, and make sure the right people are responsible for the right code.&lt;/p&gt;

&lt;p&gt;In this guide, you’ll learn all about why you need &lt;code&gt;CODEOWNERS&lt;/code&gt; and how to effectively use this feature in your software development workflow.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why you need CODEOWNERS
&lt;/h2&gt;

&lt;p&gt;In order to outline who is responsible for different sections of a codebase, you need to create a &lt;code&gt;CODEOWNERS&lt;/code&gt; file in your repository. When a pull request (PR) is opened that touches files listed in the &lt;code&gt;CODEOWNERS&lt;/code&gt; file, the designated code owner is automatically asked to review that section of the code. Several popular code hosting platforms support this feature, including &lt;a href="https://github.com/" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;, &lt;a href="https://gitlab.com/" rel="noopener noreferrer"&gt;GitLab&lt;/a&gt;, and &lt;a href="https://bitbucket.org/" rel="noopener noreferrer"&gt;Bitbucket&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Following are some of the reasons why you should consider implementing &lt;code&gt;CODEOWNERS&lt;/code&gt; into your software project:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Enhanced security measures:&lt;/strong&gt;  Defining &lt;code&gt;CODEOWNERS&lt;/code&gt; creates a safeguard, ensuring that only designated users or teams can sign off on changes to specific areas of the codebase. This reduces the likelihood of unauthorized or accidental alterations to the code.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Streamlined review mechanism:&lt;/strong&gt;  When you use &lt;code&gt;CODEOWNERS&lt;/code&gt;, you automate the process of identifying the most appropriate reviewers for each PR. This minimizes delays and ensures that each PR is examined by someone with domain-specific knowledge, elevating the quality of the review.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Increased sense of accountability:&lt;/strong&gt;  Allocating certain code sections to specific maintainers not only speeds up the review process but also instills a greater sense of responsibility. When you’re a named code owner, you’re more likely to be proactive about code quality, documentation, and other best practices.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automated processes for greater efficiency:&lt;/strong&gt;  When you incorporate &lt;code&gt;CODEOWNERS&lt;/code&gt; into your continuous integration, continuous delivery (CI/CD) workflows, you automate the initial step of code review. This means that the right eyes are reviewing the code quickly, which is crucial for rapid development cycles and high-velocity teams.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When you implement &lt;code&gt;CODEOWNERS&lt;/code&gt;, you’re not merely inserting a file into your repository; you’re fundamentally improving how your project is managed, secured, and maintained.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting started with CODEOWNERS
&lt;/h2&gt;

&lt;p&gt;If you have a repository without a &lt;code&gt;CODEOWNERS&lt;/code&gt; file, any member can review a PR without restrictions. This means that critical pieces of code can be modified without proper oversight.&lt;/p&gt;

&lt;p&gt;To avoid this situation, you need to create a &lt;code&gt;CODEOWNERS&lt;/code&gt; file using the following steps:&lt;/p&gt;

&lt;p&gt;Create a new file named &lt;code&gt;CODEOWNERS&lt;/code&gt; inside a &lt;code&gt;docs&lt;/code&gt; folder at the root of your project (GitHub also recognizes the file in the repository root or in a &lt;code&gt;.github&lt;/code&gt; folder). Then add your first entry like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* @your-username

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When someone opens a PR, &lt;code&gt;@your-username&lt;/code&gt; will be asked to review it.&lt;/p&gt;

&lt;p&gt;In this scenario, let’s assume you have a sample &lt;a href="https://github.com/See4Devs/sample-codeowners-test" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt; with the following structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sample-codeowners-test/
|-- docs/
| |-- CODEOWNERS
|
|-- src/
| |-- hello.py
|
|-- tests/
| |-- test_hello.py
|
|-- README.md

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Please note:&lt;/strong&gt;  Putting the &lt;code&gt;CODEOWNERS&lt;/code&gt; file in the &lt;code&gt;docs&lt;/code&gt; folder applies to both GitHub and GitLab. Additionally, make sure you replace the &lt;code&gt;@your-username&lt;/code&gt; with your username.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To enable or disable &lt;code&gt;CODEOWNERS&lt;/code&gt; in your GitHub repository, define a ruleset: go to your repository &lt;strong&gt;Settings&lt;/strong&gt;, click &lt;strong&gt;Rules &amp;gt; Rulesets&lt;/strong&gt; in the left navigation bar, then enable &lt;strong&gt;Require a pull request before merging&lt;/strong&gt; and &lt;strong&gt;Require review from Code Owners&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FTiHA3dA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FTiHA3dA.png" alt="Enable GitHub CODEOWNERS"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you’re using GitLab, you can enable &lt;code&gt;CODEOWNERS&lt;/code&gt; on a protected branch. For more information on how to do this, refer to this &lt;a href="https://docs.gitlab.com/ee/user/project/protected_branches.html#require-code-owner-approval-on-a-protected-branch" rel="noopener noreferrer"&gt;GitLab documentation&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FPzZX2KM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FPzZX2KM.png" alt="Enable GitLab CODEOWNERS"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Disabling CODEOWNERS
&lt;/h2&gt;

&lt;p&gt;If you disable the  &lt;strong&gt;Require review from Code Owners&lt;/strong&gt;  option in your GitHub  &lt;strong&gt;Settings&lt;/strong&gt;  or if you remove the  &lt;strong&gt;Require approval from code owners&lt;/strong&gt;  in GitLab, then the PR can be merged without a review from the designated code owners. While this can speed up the merge process, it does so at the expense of code quality and security:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FSXd8li9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FSXd8li9.png" alt="Disable CODEOWNERS"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For all the following examples, this option is enabled.&lt;/p&gt;

&lt;h2&gt;
  
  
  Avoiding common pitfalls with the CODEOWNERS file
&lt;/h2&gt;

&lt;p&gt;Navigating the complexities of repository management can be challenging. Avoiding missteps in the &lt;code&gt;CODEOWNERS&lt;/code&gt; file setup is crucial for streamlined code reviews.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create a CODEOWNERS file, but leave it empty
&lt;/h3&gt;

&lt;p&gt;If you create a &lt;code&gt;CODEOWNERS&lt;/code&gt; file but leave it empty, no one is assigned to review PRs. While this won’t block the PR process, you lose the advantages of having specified reviewers, which is generally not recommended.&lt;/p&gt;

&lt;p&gt;In this scenario, any user with write access can approve code before it’s merged.&lt;/p&gt;

&lt;h3&gt;
  
  
  Define a path without any users assigned
&lt;/h3&gt;

&lt;p&gt;When setting up a &lt;code&gt;CODEOWNERS&lt;/code&gt; file, it’s essential to assign users or teams to specific paths. However, consider this improper configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;src/*
tests/*

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, you define two paths in your &lt;code&gt;CODEOWNERS&lt;/code&gt; file: &lt;code&gt;tests/*&lt;/code&gt; and &lt;code&gt;src/*&lt;/code&gt;. But you don’t assign any users or teams to it. In this scenario, no one is automatically assigned to review PRs affecting files in the &lt;code&gt;tests&lt;/code&gt; and &lt;code&gt;src&lt;/code&gt; directories. This contradicts the purpose of using a &lt;code&gt;CODEOWNERS&lt;/code&gt; file and should be avoided.&lt;/p&gt;
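&lt;p&gt;A corrected version of the same file assigns an owner to each path; the team names below are placeholders for your own teams:&lt;/p&gt;

```
src/* @Your-Org/EngineeringTeam
tests/* @Your-Org/QATeam
```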

&lt;h3&gt;
  
  
  Create a team without any members in it
&lt;/h3&gt;

&lt;p&gt;In organizational setups, there may arise a need to initialize a team structure even before members are allocated. Let’s assume the &lt;code&gt;CODEOWNERS&lt;/code&gt; file has the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* @your-org/your-empty-team

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you define a team in your &lt;code&gt;CODEOWNERS&lt;/code&gt; file (i.e., &lt;code&gt;@your-org/your-empty-team&lt;/code&gt;) but that team has no members, the PR will not have any automatic reviewers. In this case, the merge is blocked until you add a member to the team. It’s best to make sure that each team has at least one member before adding it to your &lt;code&gt;CODEOWNERS&lt;/code&gt; file:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FdJ2r5Wt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FdJ2r5Wt.png" alt="Team without a member"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Using the CODEOWNERS file properly
&lt;/h2&gt;

&lt;p&gt;Effective codebase management requires a deep knowledge of the tools available. This is why it’s crucial to understand how to properly utilize the &lt;code&gt;CODEOWNERS&lt;/code&gt; file, as it facilitates precise delegation of responsibility.&lt;/p&gt;

&lt;h3&gt;
  
  
  Basic ownership
&lt;/h3&gt;

&lt;p&gt;If you have a simple team and want one or two people to be responsible for all code, your &lt;code&gt;CODEOWNERS&lt;/code&gt; file might look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# The user @your-username is the code owner for the entire repository
* @your-username

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Any PR related to changes in your code needs the approval of &lt;code&gt;@your-username&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FxfHSiHm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FxfHSiHm.png" alt="Code owner required to review"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Department-based ownership
&lt;/h3&gt;

&lt;p&gt;In a larger organization, you might have different departments responsible for different aspects of the code, like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# The Engineering department is responsible for all source code
src/* @Your-Org/EngineeringTeam

# The Quality Assurance team is responsible for all tests
tests/* @Your-Org/QATeam

# The Documentation team is responsible for the README file
/README.md @Your-Org/DocumentationTeam

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Any PR that changes files under the &lt;code&gt;src&lt;/code&gt; folder requires the approval of a member of &lt;code&gt;@Your-Org/EngineeringTeam&lt;/code&gt;. Changes in the &lt;code&gt;tests&lt;/code&gt; folder require approval from a member of &lt;code&gt;@Your-Org/QATeam&lt;/code&gt;, and changes to the &lt;code&gt;README.md&lt;/code&gt; file require approval from &lt;code&gt;@Your-Org/DocumentationTeam&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FrDouc94.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FrDouc94.png" alt="Team review required"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Please note:&lt;/strong&gt;  When specifying a team, make sure that the team has &lt;code&gt;write&lt;/code&gt; access to your repository. To reference a team, write the organization name, a forward slash, and the team name: &lt;code&gt;@Your-Org/QATeam&lt;/code&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Multilevel ownership
&lt;/h3&gt;

&lt;p&gt;Sometimes, you’ll have a hierarchy of responsibilities. In this scenario, you can specify multiple code owners like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# The Core team is responsible for the entire codebase
* @Your-Org/CoreTeam

# But specific modules have additional specialized owners
src/hello.py @Your-Org/PythonExperts @Your-Org/CoreTeam

# The tests have their own owners, in addition to being under the CoreTeam
tests/* @Your-Org/TestTeam @Your-Org/CoreTeam

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, a change to &lt;code&gt;src/hello.py&lt;/code&gt; requires approval from someone in either &lt;code&gt;@Your-Org/PythonExperts&lt;/code&gt; or &lt;code&gt;@Your-Org/CoreTeam&lt;/code&gt;, whereas changes in the &lt;code&gt;tests&lt;/code&gt; folder require approval from either &lt;code&gt;@Your-Org/TestTeam&lt;/code&gt; or &lt;code&gt;@Your-Org/CoreTeam&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Exclusion rules
&lt;/h3&gt;

&lt;p&gt;You can also override ownership for specific files or folders. When multiple patterns match a file, the last matching pattern takes precedence:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# The Core team is responsible for the entire codebase
* @Your-Org/CoreTeam

# Except for the hello.py, which is maintained by the Python team
/src/hello.py @Your-Org/PythonTeam

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this scenario, the &lt;code&gt;@Your-Org/CoreTeam&lt;/code&gt; is responsible for everything except &lt;code&gt;src/hello.py&lt;/code&gt;, which is the sole responsibility of the &lt;code&gt;@Your-Org/PythonTeam&lt;/code&gt;.&lt;/p&gt;
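&lt;p&gt;On GitHub, you can also exempt files from code owner review entirely: a later pattern with no owner listed clears the ownership set by an earlier pattern. The team below is a placeholder:&lt;/p&gt;

```
# The Core team owns the entire codebase
* @Your-Org/CoreTeam

# Markdown files have no owner, so PRs touching only
# Markdown files do not require a code owner review
*.md
```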

&lt;p&gt;By choosing the right setup for your team’s needs, you can use the &lt;code&gt;CODEOWNERS&lt;/code&gt; file to enforce code quality and ensure that the right people are reviewing changes to different parts of your codebase.&lt;/p&gt;

&lt;h2&gt;
  
  
  How team and user permissions affect CODEOWNERS
&lt;/h2&gt;

&lt;p&gt;When managing code, understanding permissions is imperative. In simple terms, permissions decide who can do what in a project. Features like &lt;code&gt;CODEOWNERS&lt;/code&gt; use these permissions to decide who can review and approve code changes. In order to use &lt;code&gt;CODEOWNERS&lt;/code&gt; effectively, you need to know how these permissions are set and what they mean.&lt;/p&gt;

&lt;p&gt;In GitHub, you can set permissions at the organization level, the repository level, or within teams. Your &lt;code&gt;CODEOWNERS&lt;/code&gt; settings are affected by these permissions. For instance, if a user doesn’t have write access to a repo, they can’t be a code owner.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;CODEOWNERS&lt;/code&gt; feature doesn’t operate in isolation; it’s tied to the permissions model of the platform you’re using. Say you set a team as a code owner, but individual members of that team don’t have the necessary repository-level permissions. In this case, the &lt;code&gt;CODEOWNERS&lt;/code&gt; setting won’t function as expected, and the &lt;a href="https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/customizing-your-repository/about-code-owners#codeowners-file-location" rel="noopener noreferrer"&gt;team members won’t be able to act on the PR in the capacity of a code owner&lt;/a&gt;. The same applies to users.&lt;/p&gt;

&lt;p&gt;In GitLab, the permission model is similar but uses different naming conventions. You have to be at least a &lt;code&gt;Maintainer&lt;/code&gt; in a project to become a code owner. Similar to GitHub, GitLab also demands a certain level of permissions for a user to be named as a code owner. Notably, the &lt;code&gt;Maintainer&lt;/code&gt; role allows for a broad set of permissions, including the ability to merge code and manage the repository. This is essential for code owners to effectively review and approve code changes.&lt;/p&gt;

&lt;p&gt;If you designate a &lt;code&gt;Developer&lt;/code&gt; in GitLab as a code owner, the setting will not take effect because the user doesn’t possess the requisite permissions to enforce code ownership rules. For more information on how it works, check out the &lt;a href="https://docs.gitlab.com/ee/user/project/codeowners/" rel="noopener noreferrer"&gt;GitLab documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Regardless of the platform you’re using, understanding the interplay between &lt;code&gt;CODEOWNERS&lt;/code&gt; and permissions is critical for setting up an effective and secure code review process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, you learned about the importance of implementing &lt;code&gt;CODEOWNERS&lt;/code&gt; in your projects. Not only does it offer an extra layer of security by designating who can approve changes, but it also streamlines the PR review process and fosters a sense of ownership among project maintainers.&lt;/p&gt;

&lt;p&gt;Additionally, integrating a &lt;code&gt;CODEOWNERS&lt;/code&gt; file can bring about a cultural shift within your development team. It encourages clear accountability, ensuring that specific individuals or teams are responsible for particular codebases. This clarity can help prevent code rot since everyone knows who to go to for improvements or bug fixes, enhancing long-term maintainability.&lt;/p&gt;

&lt;p&gt;Moreover, &lt;code&gt;CODEOWNERS&lt;/code&gt; can serve as an excellent documentation tool. New team members can immediately understand which parts of the codebase are owned by which teams, easing the onboarding process and enhancing project transparency. It can also help external contributors identify whom they should contact for code-specific queries or clarifications, which fosters a more collaborative environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://www.aviator.co/" rel="noopener noreferrer"&gt;Aviator&lt;/a&gt;: Automate your cumbersome processes
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://aviator.co/" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.aviator.co%2Fwp-content%2Fuploads%2F2022%2F08%2Fblog-cta-1024x727.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Aviator automates tedious developer workflows by managing Git pull requests (PRs) and continuous integration (CI) test runs to help your team avoid broken builds, streamline cumbersome merge processes, manage cross-PR dependencies, and handle flaky tests while maintaining security compliance.&lt;/p&gt;

&lt;p&gt;There are four key components to Aviator:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;MergeQueue&lt;/strong&gt;  – an automated queue that manages the merging workflow for your GitHub repository to help protect important branches from broken builds. The Aviator bot uses GitHub Labels to identify Pull Requests (PRs) that are ready to be merged, validates CI checks, processes semantic conflicts, and merges the PRs automatically.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ChangeSets&lt;/strong&gt;  – workflows to synchronize validating and merging multiple PRs within the same repository or multiple repositories. Useful when your team often sees groups of related PRs that need to be merged together, or otherwise treated as a single broader unit of change.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;TestDeck&lt;/strong&gt;  – a tool to automatically detect, take action on, and process results from flaky tests in your CI infrastructure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stacked PRs CLI&lt;/strong&gt;  – a command line tool that helps developers manage cross-PR dependencies. This tool also automates syncing and merging of stacked PRs. Useful when your team wants to promote a culture of smaller, incremental PRs instead of large changes, or when your workflows involve keeping multiple, dependent PRs in sync.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="http://aviator.co/" rel="noopener noreferrer"&gt;Try it for free.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://www.aviator.co/blog/a-modern-guide-to-codeowners/" rel="noopener noreferrer"&gt;A modern guide to CODEOWNERS&lt;/a&gt; first appeared on &lt;a href="https://www.aviator.co/blog" rel="noopener noreferrer"&gt;Aviator Blog&lt;/a&gt;.&lt;/p&gt;


</description>
      <category>github</category>
      <category>codeowners</category>
    </item>
    <item>
      <title>How to work with git submodules</title>
      <dc:creator>Brian Neville-O'Neill</dc:creator>
      <pubDate>Tue, 31 Oct 2023 14:45:05 +0000</pubDate>
      <link>https://forem.com/aviator_co/how-to-work-with-git-submodules-185f</link>
      <guid>https://forem.com/aviator_co/how-to-work-with-git-submodules-185f</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QXEwHQxa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.aviator.co/blog/wp-content/uploads/2023/10/git-submodules.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QXEwHQxa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.aviator.co/blog/wp-content/uploads/2023/10/git-submodules.webp" alt="" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Git submodules allow you to include one repository inside another. This is useful when you want fine-grained control over your dependencies, or in situations where a dependency manager is not suitable. Submodules are powerful tools, and it’s worth understanding them properly before using them.&lt;/p&gt;

&lt;p&gt;In this article, we’ll cover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What Git submodules are&lt;/li&gt;
&lt;li&gt;Common workflows with submodules&lt;/li&gt;
&lt;li&gt;What they are useful for&lt;/li&gt;
&lt;li&gt;When you shouldn’t use them&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At the end of the article, you’ll also find links to further resources.&lt;/p&gt;

&lt;h2&gt;
  
  
  Git submodules
&lt;/h2&gt;

&lt;p&gt;Imagine you are working on a text editor. You’ve implemented the basic features of viewing and editing files, and now you want to add syntax highlighting. There’s a cool library on GitHub that does exactly what you want, but it hasn’t been published to the dependency manager you use. How can you use it?&lt;/p&gt;

&lt;p&gt;This is a situation where Git submodules might come in handy. Submodules are a feature of Git that lets you include one repository inside another. This means that you can include the syntax highlighting library in your text editor’s repo while keeping a link to the original repository so that you can receive upstream changes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--m6nexSy9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.aviator.co/blog/wp-content/uploads/2023/10/editor-repository-structure.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--m6nexSy9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.aviator.co/blog/wp-content/uploads/2023/10/editor-repository-structure.png" alt="" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The above diagram shows what your repository structure might look like when you use submodules. The &lt;code&gt;src&lt;/code&gt; and &lt;code&gt;test&lt;/code&gt; directories contain your own files, but &lt;code&gt;lib&lt;/code&gt; contains the syntax highlighter library as a submodule.&lt;/p&gt;

&lt;p&gt;Submodules are entire Git repositories that are pinned to a specific commit. Your local copy of a repository containing a submodule will contain all of the files from the submodule, which means that you can treat it as if it were your own code. Submodules let you view, edit, and reference all of the files in the contained repository.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;text-editor
├── lib
│ └── syntax-highlighter
│ ├── README.md
│ ├── docs
│ │ └── very-good-docs.md
│ └── ...
├── src
│ ├── editor.py
│ └── ...
└── test
    └── ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Above is the file structure of our text editor repository after adding the syntax highlighting library as a submodule. All of the files from the submodule are on our filesystem and ready for us to edit them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Workflows
&lt;/h2&gt;

&lt;p&gt;Now that you’ve seen what submodules can do, the following section will take you through how to use them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Adding a submodule to your repository
&lt;/h3&gt;

&lt;p&gt;Following on from the example earlier of a syntax highlighting library, imagine that the library you want to add is at the following URL:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.github.com/username/syntax-highlighter"&gt;https://www.github.com/username/syntax-highlighter&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can add this library as a submodule to your repository by using the command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;git submodule add https://www.github.com/username/syntax-highlighter lib/syntax-highlighter&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This will add two new files to your repository, &lt;code&gt;.gitmodules&lt;/code&gt; and &lt;code&gt;lib/syntax-highlighter&lt;/code&gt;. You can see these files using &lt;code&gt;git status&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/src/text-editor$ git status
On branch main

No commits yet

Changes to be committed:
  (use "git rm --cached &amp;lt;file&amp;gt;..." to unstage)
     new file: .gitmodules
     new file: lib/syntax-highlighter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;.gitmodules&lt;/code&gt; is a simple text file that lists the submodules in your repository. You should commit this file so that other people working on your repository can also use the submodule.&lt;/p&gt;
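&lt;p&gt;For example, after running the &lt;code&gt;git submodule add&lt;/code&gt; command above, &lt;code&gt;.gitmodules&lt;/code&gt; would contain an entry along these lines:&lt;/p&gt;

```
[submodule "lib/syntax-highlighter"]
	path = lib/syntax-highlighter
	url = https://www.github.com/username/syntax-highlighter
```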

&lt;p&gt;&lt;code&gt;lib/syntax-highlighter&lt;/code&gt; is a bit more complicated. Git sees this path as a file, but your filesystem sees the path as a directory. You can output what Git sees by running &lt;code&gt;git diff --cached lib/syntax-highlighter&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/src/text-editor$ git diff --cached lib/syntax-highlighter
diff --git a/lib/syntax-highlighter b/lib/syntax-highlighter
new file mode 160000
index 0000000..ac8e080
--- /dev/null
+++ b/lib/syntax-highlighter
@@ -0,0 +1 @@
+Subproject commit ac8e080ae2ba4c582eb5842139ab7e5082b4cff0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As shown in the diff above, Git sees the submodule as a file containing the commit ID currently tracked by the submodule. By default, this will be the latest commit to the default branch, which is usually &lt;code&gt;main&lt;/code&gt; on newer Git repositories and &lt;code&gt;master&lt;/code&gt; on older ones.&lt;/p&gt;

&lt;p&gt;However, if you look at the submodule on the filesystem using something like &lt;code&gt;ls&lt;/code&gt;, you’ll see that it’s a directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/src/example$ ls lib/syntax-highlighter
README.md docs src test
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What’s more, this directory is actually a Git repository in its own right! You can run things like &lt;code&gt;git status&lt;/code&gt; and even edit the code in it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cloning a repository that contains a submodule
&lt;/h3&gt;

&lt;p&gt;Git stores each submodule as an entry in &lt;code&gt;.gitmodules&lt;/code&gt; and a file in the repo that describes what commit the submodule points to. As a result, when you clone a repo, you need to do a little extra work to download the code for the submodule into your local copy.&lt;/p&gt;

&lt;p&gt;Let’s say you’ve cloned the text-editor repo from earlier:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;git clone https://github.com/username/text-editor&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;If you were then to examine &lt;code&gt;lib/syntax-highlighter&lt;/code&gt;, you’d find that it’s just an empty directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/src/text-editor$ less lib/syntax-highlighter/
lib/syntax-highlighter/ is a directory
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To populate &lt;code&gt;lib/syntax-highlighter&lt;/code&gt; with the submodule’s code, you need to run &lt;code&gt;git submodule update --init --recursive&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/src/text-editor$ git submodule update --init --recursive
Submodule 'lib/syntax-highlighter' (https://github.com/username/syntax-highlighter.git) registered for path 'lib/syntax-highlighter'
Cloning into '/home/username/src/text-editor/lib/syntax-highlighter'...
Submodule path 'lib/syntax-highlighter': checked out '55086f1cb2ee8294d3354805be941171c287557d'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is a convenient shorthand for &lt;code&gt;git submodule init&lt;/code&gt; followed by &lt;code&gt;git submodule update&lt;/code&gt;. If your submodules have submodules then this command will also initialize those recursively. &lt;code&gt;init&lt;/code&gt; figures out where the submodule comes from and &lt;code&gt;update&lt;/code&gt; downloads its contents.&lt;/p&gt;

&lt;p&gt;An alternative workflow is to use &lt;code&gt;git clone --recurse-submodules&lt;/code&gt;. This is an even shorter shorthand that is equivalent to a &lt;code&gt;git clone&lt;/code&gt;, &lt;code&gt;git submodule init&lt;/code&gt;, and &lt;code&gt;git submodule update&lt;/code&gt;.&lt;/p&gt;
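
&lt;p&gt;As a self-contained illustration, here is a hedged sketch of both workflows using throwaway local repositories instead of the hypothetical GitHub URLs above, so it runs without network access (assumes a reasonably recent Git):&lt;/p&gt;

```shell
set -e
tmp="$(mktemp -d)"; cd "$tmp"
# Helper: throwaway identity, and opt in to file-protocol submodules
# (required by recent Git versions for local-path submodules).
gitc() { git -c user.name=demo -c user.email=demo@example.com -c protocol.file.allow=always "$@"; }

# Local stand-ins for the repositories used in this article.
gitc init -q -b main syntax-highlighter
gitc -C syntax-highlighter commit -q --allow-empty -m "initial commit"
gitc init -q -b main text-editor
( cd text-editor
  gitc submodule --quiet add "$tmp/syntax-highlighter" lib/syntax-highlighter
  gitc commit -q -m "add syntax-highlighter submodule" )

# A plain clone leaves lib/syntax-highlighter as an empty directory...
gitc clone -q text-editor plain-clone

# ...while --recurse-submodules clones and populates it in one step.
gitc clone -q --recurse-submodules text-editor full-clone
```

&lt;p&gt;After the second clone, &lt;code&gt;full-clone/lib/syntax-highlighter&lt;/code&gt; contains a checked-out working tree, while the same path in &lt;code&gt;plain-clone&lt;/code&gt; stays empty until you run &lt;code&gt;git submodule update --init --recursive&lt;/code&gt;.&lt;/p&gt;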

&lt;h3&gt;
  
  
  Editing a submodule’s code
&lt;/h3&gt;

&lt;p&gt;Submodules are complete Git repositories in their own right. This means that you can use them exactly as you would any other Git repository. To illustrate this point, let’s walk through making a change to a submodule in our repository.&lt;/p&gt;

&lt;p&gt;Imagine that you want to add a line to the syntax-highlighter library to let it support Python. We can make that change in our favorite text editor (possibly the one we’re building!) and then see the change with &lt;code&gt;git diff&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/src/text-editor/lib/syntax-highlighter$ git diff
diff --git a/src/supported-languages.txt b/src/supported-languages.txt
index ad2c90d..f4311cb 100644
--- a/src/supported-languages.txt
+++ b/src/supported-languages.txt
@@ -2,3 +2,4 @@ javascript
 markdown
 java
 c++
+python
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note the path in the terminal prompt: &lt;code&gt;~/src/text-editor/lib/syntax-highlighter&lt;/code&gt;. We are making this change inside the submodule, not inside the original syntax-highlighter repository.&lt;/p&gt;

&lt;p&gt;After making the change, we can do our usual &lt;code&gt;git add&lt;/code&gt;, &lt;code&gt;git commit&lt;/code&gt;, and voila! We have edited our submodule. You can see this change in the text-editor repository by running &lt;code&gt;git diff lib/syntax-highlighter&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/src/text-editor$ git diff lib/syntax-highlighter/
diff --git a/lib/syntax-highlighter b/lib/syntax-highlighter
index 8b6e157..55086f1 160000
--- a/lib/syntax-highlighter
+++ b/lib/syntax-highlighter
@@ -1 +1 @@
-Subproject commit 8b6e157f0fb785c619b99373bb474e03b1b72f54
+Subproject commit 55086f1cb2ee8294d3354805be941171c287557d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that this diff just updates the commit ID that the submodule refers to. The actual changes to the submodule are not recorded in the parent repository. This leads to a really important point: to make changes to a submodule, &lt;strong&gt;you need push access to the original repository&lt;/strong&gt;. Otherwise, the changes would be reflected in your local copy of the submodule, but nowhere else.&lt;/p&gt;

&lt;p&gt;If you didn’t create the submodule, and therefore don’t have push access, that’s ok! You just need to fork the original repository and then use your fork as the submodule’s URL.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pulling upstream changes into a submodule
&lt;/h3&gt;

&lt;p&gt;Submodules maintain a link to the upstream repository that they originate from. You can use this link to pull upstream changes.&lt;/p&gt;

&lt;p&gt;Imagine that after you added Python support to the syntax highlighting library, you hear that the maintainers have added TypeScript support. This sounds like a useful feature to include in your text editor and so you want to pull their changes. The first step is to &lt;code&gt;cd&lt;/code&gt; into the submodule and &lt;code&gt;fetch&lt;/code&gt; the changes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/src/text-editor$ cd lib/syntax-highlighter/
~/src/text-editor/lib/syntax-highlighter$ git fetch
remote: Enumerating objects: 7, done.
remote: Counting objects: 100% (7/7), done.
remote: Compressing objects: 100% (3/3), done.
remote: Total 4 (delta 0), reused 4 (delta 0), pack-reused 0
Unpacking objects: 100% (4/4), 419 bytes | 419.00 KiB/s, done.
From github.com/username/syntax-highlighter
 + 49301eb...54f7bbb main -&amp;gt; origin/main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It’s important to &lt;code&gt;cd&lt;/code&gt; to the submodule directory first because, otherwise, you will fetch changes for your parent repository. The &lt;code&gt;git fetch&lt;/code&gt; shows that &lt;code&gt;main&lt;/code&gt; has been updated on the remote repository.&lt;/p&gt;

&lt;p&gt;The changes that we want to pull in are on the &lt;code&gt;main&lt;/code&gt; branch, so we’ll need to &lt;code&gt;merge&lt;/code&gt; them into our own branch. We can use &lt;code&gt;git merge&lt;/code&gt; for this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/src/text-editor/lib/syntax-highlighter$ git merge origin/main
Auto-merging src/supported-languages.txt
CONFLICT (content): Merge conflict in src/supported-languages.txt
Automatic merge failed; fix conflicts and then commit the result.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Oh no! There’s a merge conflict with our branch. Thankfully in this case it’s quite small:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;javascript
markdown
java
c++
&amp;lt;&amp;lt;&amp;lt;&amp;lt;&amp;lt;&amp;lt;&amp;lt; HEAD
python
=======
typescript
&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt; origin/main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this case, we want to keep both changes and so we can just delete the merge conflict markers. We can now add, commit and push this change to our remote.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/src/text-editor/lib/syntax-highlighter$ git add src/
~/src/text-editor/lib/syntax-highlighter$ git commit
[add-python 98d5210] Merge remote-tracking branch 'origin/main' into add-python
~/src/text-editor/lib/syntax-highlighter$ git push
Enumerating objects: 10, done.
Counting objects: 100% (10/10), done.
Delta compression using up to 12 threads
Compressing objects: 100% (3/3), done.
Writing objects: 100% (4/4), 500 bytes | 500.00 KiB/s, done.
Total 4 (delta 0), reused 0 (delta 0)
To github.com/username/syntax-highlighter.git
   55086f1..98d5210 add-python -&amp;gt; add-python
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The ability to edit the code of submodules is both their most powerful feature and their most dangerous. Maintaining your own branch alongside the main branch of a library is incredibly useful, but be prepared to face merge conflicts.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are submodules useful for?
&lt;/h2&gt;

&lt;p&gt;Below are a few examples of when you might want to use submodules over a dependency manager or another solution.&lt;/p&gt;

&lt;h3&gt;
  
  
  Libraries not in your dependency manager
&lt;/h3&gt;

&lt;p&gt;Not every library is available through a dependency manager, and no dependency manager has every library. If your dependency manager doesn’t support a certain library, then submodules can help you include it in your project.&lt;/p&gt;

&lt;p&gt;In this case, you should weigh up the work of maintaining a submodule against the work of adding that library to your dependency management system. Remember that submodules need to be manually updated.&lt;/p&gt;
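
&lt;p&gt;The manual update itself is short. As a hedged, fully local sketch (repo names and paths here are hypothetical; &lt;code&gt;git submodule update --remote&lt;/code&gt; checks out the submodule’s configured upstream branch):&lt;/p&gt;

```shell
set -e
tmp="$(mktemp -d)"; cd "$tmp"
# Helper: throwaway identity, and opt in to file-protocol submodules.
gitc() { git -c user.name=demo -c user.email=demo@example.com -c protocol.file.allow=always "$@"; }

# Stand-ins for the upstream library and the parent project.
gitc init -q -b main lib-upstream
gitc -C lib-upstream commit -q --allow-empty -m "v1"
gitc init -q -b main parent
( cd parent
  gitc submodule --quiet add "$tmp/lib-upstream" lib/library
  # Record which upstream branch --remote should follow.
  gitc config -f .gitmodules submodule.lib/library.branch main
  gitc commit -q -am "add submodule" )

# Upstream moves ahead...
gitc -C lib-upstream commit -q --allow-empty -m "v2"

# ...but the parent only picks this up when someone updates it explicitly:
( cd parent
  gitc submodule --quiet update --remote lib/library
  gitc commit -q -am "Bump lib/library to latest upstream main" )
```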

&lt;h3&gt;
  
  
  Editable libraries that track upstream
&lt;/h3&gt;

&lt;p&gt;Dependency managers, for the most part, aren’t designed for you to modify the dependencies that they manage. If you want to make changes to a library that you depend on, then submodules might be a good solution.&lt;/p&gt;

&lt;p&gt;Submodules keep a link to the upstream code. This means that you can still pull in the latest security and bug-fix updates from the library you depend on. If you were to just copy and paste the code into your repository, getting updates from upstream would become a lot harder.&lt;/p&gt;

&lt;p&gt;An alternative in this situation is to try and merge your changes to the upstream repository. However, this isn’t always practical. There might be license issues, your changes might not be accepted, and even if they are you will likely have to wait a while before they get merged.&lt;/p&gt;

&lt;h3&gt;
  
  
  Internal libraries
&lt;/h3&gt;

&lt;p&gt;Due to intellectual property and copyright concerns, it’s not always OK to publish libraries developed within your organization externally. &lt;a href="https://devops.stackexchange.com/questions/1898/what-is-an-artifactory"&gt;Internal package mirrors&lt;/a&gt; are one solution to this problem, as they allow you to publish packages within your organization. Submodules can be a lot simpler to manage, however, and you should weigh up the cost of keeping a submodule up-to-date against the cost of maintaining a package mirror.&lt;/p&gt;

&lt;h2&gt;
  
  
  When not to use submodules
&lt;/h2&gt;

&lt;p&gt;Submodules are powerful, but they come with some caveats. For starters, submodules don’t have automatic update mechanisms like dependency managers do.&lt;/p&gt;

&lt;p&gt;If you add a submodule to your project, then you become responsible for keeping it up to date, whereas if you install a dependency with a dependency manager, the dependency manager can automatically keep the package on the latest version.&lt;/p&gt;

&lt;p&gt;Git doesn’t download the contents of a submodule by default. This is not obvious to developers who haven’t worked with submodules before and can become a trip hazard for your project. If you use submodules in your project, then it’s worth thoroughly documenting development workflows in &lt;code&gt;contributing.md&lt;/code&gt; or similar.&lt;/p&gt;

&lt;p&gt;Submodules bring increased complexity to your development workflow, so it’s only worth using them if you need to. If a dependency manager will satisfy your use case, then consider using it over submodules.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://git-scm.com/book/en/v2/Git-Tools-Submodules"&gt;https://git-scm.com/book/en/v2/Git-Tools-Submodules&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://git-scm.com/docs/git-submodule"&gt;https://git-scm.com/docs/git-submodule&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.blog/2016-02-01-working-with-submodules/"&gt;https://github.blog/2016-02-01-working-with-submodules/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://www.aviator.co/"&gt;Aviator&lt;/a&gt;: Automate your cumbersome processes
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://aviator.co/"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MGzRmW1s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://blog.aviator.co/wp-content/uploads/2022/08/blog-cta-1024x727.png" alt="" width="800" height="568"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Aviator automates tedious developer workflows by managing git Pull Requests (PRs) and continuous integration test (CI) runs to help your team avoid broken builds, streamline cumbersome merge processes, manage cross-PR dependencies, and handle flaky tests while maintaining their security compliance.&lt;/p&gt;

&lt;p&gt;There are 4 key components to Aviator:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;MergeQueue&lt;/strong&gt;  – an automated queue that manages the merging workflow for your GitHub repository to help protect important branches from broken builds. The Aviator bot uses GitHub Labels to identify Pull Requests (PRs) that are ready to be merged, validates CI checks, processes semantic conflicts, and merges the PRs automatically.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ChangeSets&lt;/strong&gt;  – workflows to synchronize validating and merging multiple PRs within the same repository or multiple repositories. Useful when your team often sees groups of related PRs that need to be merged together, or otherwise treated as a single broader unit of change.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;TestDeck&lt;/strong&gt;  – a tool to automatically detect, take action on, and process results from flaky tests in your CI infrastructure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stacked PRs CLI&lt;/strong&gt;  – a command line tool that helps developers manage cross-PR dependencies. This tool also automates syncing and merging of stacked PRs. Useful when your team wants to promote a culture of smaller, incremental PRs instead of large changes, or when your workflows involve keeping multiple, dependent PRs in sync.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="http://aviator.co/"&gt;Try it for free.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://www.aviator.co/blog/how-to-work-with-git-submodules/"&gt;How to work with git submodules&lt;/a&gt; first appeared on &lt;a href="https://www.aviator.co/blog"&gt;Aviator Blog&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>git</category>
    </item>
    <item>
      <title>Modeling a merge queue with TLA+</title>
      <dc:creator>Brian Neville-O'Neill</dc:creator>
      <pubDate>Fri, 27 Oct 2023 18:41:58 +0000</pubDate>
      <link>https://forem.com/aviator_co/modeling-a-merge-queue-with-tla-2h10</link>
      <guid>https://forem.com/aviator_co/modeling-a-merge-queue-with-tla-2h10</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.aviator.co%2Fblog%2Fwp-content%2Fuploads%2F2023%2F10%2Fimage.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.aviator.co%2Fblog%2Fwp-content%2Fuploads%2F2023%2F10%2Fimage.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When building a complex software system, it is hard to reason about its behavior. Formal methods are an area of computer science research that addresses this by putting such reasoning on a mathematical basis.&lt;/p&gt;

&lt;p&gt;Model checking is one such formal method. By creating a small model of a system, it exhaustively explores the possible system states and checks whether the desired properties hold. Computer science students and graduates might be familiar with model-checking tools such as Alloy, SPIN, and NuSMV.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.innoq.com/en/articles/2023/04/an-introduction-to-tla/" rel="noopener noreferrer"&gt;TLA+&lt;/a&gt; is a specification language used for model checking created by Leslie Lamport. It is known for industrial usage in companies like Microsoft and AWS. In this article, we use TLA+ to create a basic merge queue system.&lt;/p&gt;

&lt;h1&gt;
  
  
  TLA+
&lt;/h1&gt;

&lt;p&gt;TLA+ was created by &lt;a href="https://en.wikipedia.org/wiki/Leslie_Lamport" rel="noopener noreferrer"&gt;Leslie Lamport&lt;/a&gt;. You might have heard of his other works such as Lamport Clock and Paxos. The language is used for system specification; you write the system state and processes that interact with the state.&lt;/p&gt;

&lt;p&gt;From this description, you might imagine that it is like a database schema combined with a programming language, and to some degree that is correct: both are used to describe a system. Specification languages, however, are specialized for describing the expected “properties” of systems.&lt;/p&gt;

&lt;p&gt;You can think of properties as features of a system. Let’s take a merge queue as an example. A merge queue is a queueing system that takes pull requests one by one, tests them, and merges them into the mainline in order. In this system, you expect that PRs are merged in queueing order and that a PR gets rejected when a test fails.&lt;/p&gt;

&lt;p&gt;When writing such a system, you can run a few test cases to see if it merges PRs in order. This is limited in the sense that it only exercises certain situations; there could, for example, be a timing bug where a PR gets merged before earlier PRs.&lt;/p&gt;

&lt;p&gt;Model checkers like TLA+ address this problem by doing an exhaustive search. Any non-trivial system has a state space far too large to search directly, so we write a smaller model that captures the core of the system. Formally describing a system and its desired properties yields insights on its own, and the model checker then verifies that the two are consistent.&lt;/p&gt;

&lt;p&gt;TLA+ itself is a specification language, but there is a companion language called PlusCal that is closer to ordinary programming languages. Specifications written in PlusCal are translated into TLA+, and for most software engineers PlusCal is easier to use. There are some good TLA+ and PlusCal tutorials; the author used &lt;a href="https://learntla.com/" rel="noopener noreferrer"&gt;Learn TLA+&lt;/a&gt;. In addition to TLA+’s IDE, there is also a &lt;a href="https://marketplace.visualstudio.com/items?itemName=alygin.vscode-tlaplus" rel="noopener noreferrer"&gt;VSCode extension&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  Modeling a bare-bones merge queue
&lt;/h1&gt;

&lt;p&gt;Let’s start with a small merge queue system. In this system, we want to model the following actors:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Developers who enqueue PRs into the system.&lt;/li&gt;
&lt;li&gt;CI system that tests a PR.&lt;/li&gt;
&lt;li&gt;Merge queue system that merges a tested PR.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this first merge queue system, we ignore the “queue” part and merge the first PR that finished testing. Later we will add the “queue” part.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---- MODULE mergequeue ----
EXTENDS TLC, Sequences, Integers, FiniteSets

PRNumber == 1..3
PRState == {
    "pending",
    "queued-waiting-validation",
    "queued-validated",
    "merged"
}

(* --algorithm mergequeue

variables
    prs = [
        prn \in PRNumber |-&amp;gt;
        [
           state |-&amp;gt; "pending"
        ]
    ]

define
    Merged == &amp;lt;&amp;gt; A idx in PRNumber :
        prs[idx]["state"] = "merged"
end define;

fair process worker \in {"w1", "w2"}
begin
    ProcessPR:
        with prn \in PRNumber do
            await prs[prn]["state"] = "queued-validated";
            prs[prn]["state"] := "merged";
        end with;
        goto ProcessPR;
end process;

fair process queuer \in {"q1", "q2"}
begin
    QueuePR:
        with prn \in PRNumber do
            await prs[prn]["state"] = "pending";
            prs[prn]["state"] := "queued-waiting-validation";
        end with;
        goto QueuePR;
end process;

fair process ci_worker \in {"ci1", "ci2"}
begin
    RunCI:
        with prn \in PRNumber do
            await prs[prn]["state"] = "queued-waiting-validation";
            prs[prn]["state"] := "queued-validated";
        end with;
        goto RunCI;
end process;

end algorithm; *)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are 6 processes and 3 pull requests in the system. The processes work in parallel, and eventually every PR gets merged. As a property, the model defines Merged: eventually, all PRs reach the “merged” state. This is a very trivial system, but it’s a good starting point.&lt;/p&gt;

&lt;p&gt;Running this passes the validation. To check that the validation actually validates what we want, we can change the Merged property to expect everything to be “queued-validated” instead. Running that version produces a counter-example.&lt;/p&gt;
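
&lt;p&gt;That deliberately broken property (named &lt;code&gt;AllValidated&lt;/code&gt; here; the name is ours) replaces &lt;code&gt;Merged&lt;/code&gt; in the &lt;code&gt;define&lt;/code&gt; block:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;\* Intentionally wrong variant of Merged: the workers keep moving
\* PRs past this state, so the model checker finds a counter-example.
AllValidated == &amp;lt;&amp;gt; \A idx \in PRNumber :
    prs[idx]["state"] = "queued-validated"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;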

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.aviator.co%2Fblog%2Fwp-content%2Fuploads%2F2023%2F10%2Fimage1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.aviator.co%2Fblog%2Fwp-content%2Fuploads%2F2023%2F10%2Fimage1.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At step 10, it finds all states to be “merged”, not “queued-validated”.&lt;/p&gt;

&lt;p&gt;Let’s modify the model so that it can handle test failures. In this modification, the CI system reports either a pass or a failure, and the merge queue system marks the failed PRs as blocked.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PRState == {
    "pending",
    "queued-waiting-validation",
    "queued-validated",
    "queued-failed",
    "merged",
    "blocked"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The CI workers nondeterministically mark the PRs as failed or passed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;RunCI:
    with prn \in PRNumber do
        await prs[prn]["state"] = "queued-waiting-validation";
        either
            prs[prn]["state"] := "queued-validated";
        or
            prs[prn]["state"] := "queued-failed";
        end either;
    end with;
    goto RunCI;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The merge queue workers merge the PRs or mark them as blocked.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ProcessPR:
    with prn \in PRNumber do
        either
            await prs[prn]["state"] = "queued-validated";
            prs[prn]["state"] := "merged";
        or
            await prs[prn]["state"] = "queued-failed";
            prs[prn]["state"] := "blocked";
        end either;
    end with;
    goto ProcessPR;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Running this modified model will actually produce a counter-example.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.aviator.co%2Fblog%2Fwp-content%2Fuploads%2F2023%2F10%2Fimage2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.aviator.co%2Fblog%2Fwp-content%2Fuploads%2F2023%2F10%2Fimage2.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We need to change the expected property for the model.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;MergedOrBlocked == &amp;lt;&amp;gt; A idx in PRNumber :
    prs[idx]["state"] in {"merged", "blocked"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this model, eventually, every PR gets merged or blocked.&lt;/p&gt;

&lt;h1&gt;
  
  
  Adding the “queue” part to the merge queue
&lt;/h1&gt;

&lt;p&gt;The merge queue system described so far merges PRs as they finish testing and rejects those whose tests fail, but the PRs are merged out of order. We want them merged in queueing order, so let’s add the queuing part to the system.&lt;/p&gt;

&lt;p&gt;In this case, we change the MergeQueue to behave like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The MergeQueue system merges a PR only if that PR has passed its tests and all previously queued PRs have already been merged.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To represent the queue order, we need to add a concept of “time”. Let’s add that to PRs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variables
    time = 1,
    prs = [
        prn \in PRNumber |-&amp;gt;
        [
           state |-&amp;gt; "pending",
           queuedAt |-&amp;gt; 0
        ]
    ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Queuing a PR sets this queuedAt field and increments the clock.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;QueuePR:
    with prn \in PRNumber do
        await prs[prn]["state"] = "pending";
        prs[prn]["state"] := "queued-waiting-validation" || prs[prn]["queuedAt"] := time;
        time := time + 1;
    end with;
    goto QueuePR;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;How do we make the merge queue workers merge a PR only after the prior PRs have been merged? Let’s create a predicate that tests whether all prior PRs are already merged.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PriorPRMerged(prn) ==
    \A idx \in PRNumber:
        (/\ prs[idx]["queuedAt"] # 0
         /\ prs[idx]["queuedAt"] &amp;lt; prs[prn]["queuedAt"])
            =&amp;gt; prs[idx]["state"] = "merged"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This condition checks, for each other PR, whether its queuedAt is already set and is earlier than that of the PR in question; any such PR must be in the “merged” state. PRs that don’t meet both conditions are ignored, because the implication then holds vacuously.&lt;/p&gt;
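
&lt;p&gt;A small hypothetical example (the values are ours) may help:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;\* Suppose PR 2 was queued first, PR 1 second, and PR 3 not yet queued:
\*   prs = (1 :&amp;gt; [queuedAt |-&amp;gt; 2], 2 :&amp;gt; [queuedAt |-&amp;gt; 1], 3 :&amp;gt; [queuedAt |-&amp;gt; 0])
\* Then PriorPRMerged(1) reduces to prs[2]["state"] = "merged":
\* only PR 2 satisfies the left-hand side of the implication, while
\* PR 3 is skipped because its queuedAt of 0 fails the first conjunct.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;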

&lt;p&gt;With this condition, we can guard the merge process.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;await prs[prn]["state"] = "queued-validated" / PriorPRMerged(prn);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This modification makes the queue get stuck. Why? Let’s look at the counter-example produced by the model checker.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.aviator.co%2Fblog%2Fwp-content%2Fuploads%2F2023%2F10%2Fimage3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.aviator.co%2Fblog%2Fwp-content%2Fuploads%2F2023%2F10%2Fimage3.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the counter-example, the PRs are queued from 1 to 3, but the first PR gets blocked. The other PRs are then stuck because the PriorPRMerged condition can never be satisfied.&lt;/p&gt;

&lt;p&gt;What do we want to do in this situation? We can remove the blocked PR from the queue and let the other PRs get merged. This gets more complicated if the merge queue system supports &lt;a href="https://docs.aviator.co/mergequeue/concepts/changesets" rel="noopener noreferrer"&gt;ChangeSets&lt;/a&gt;, where the system merges related PRs together, but that is outside the current scope, so let’s punt on it and simply remove the blocked PR from the queue.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;await prs[prn]["state"] = "queued-failed";
prs[prn]["state"] := "blocked" || prs[prn]["queuedAt"] := 0;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Changing this makes the model checker happy.&lt;/p&gt;
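
&lt;p&gt;Putting the fragments together, the guarded &lt;code&gt;ProcessPR&lt;/code&gt; step now reads:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ProcessPR:
    with prn \in PRNumber do
        either
            \* Merge only once every earlier-queued PR has been merged.
            await prs[prn]["state"] = "queued-validated" /\ PriorPRMerged(prn);
            prs[prn]["state"] := "merged";
        or
            \* Remove failed PRs from the queue so they cannot clog it.
            await prs[prn]["state"] = "queued-failed";
            prs[prn]["state"] := "blocked" || prs[prn]["queuedAt"] := 0;
        end either;
    end with;
    goto ProcessPR;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;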

&lt;p&gt;As this extension shows, the model checker presents a possible system state that fails to satisfy the expected property. To fix the problem, we have to think about what extra behavior the system needs; in this case, a PR blocked by CI could clog the entire queue.&lt;/p&gt;

&lt;p&gt;This time, we simply removed the offending PR from the queue, but other actions are possible, such as re-enqueuing the blocked PR at the back of the queue. This iterative process guides you toward a high-level spec of the system in a verifiable way.&lt;/p&gt;

&lt;h1&gt;
  
  
  Final remarks
&lt;/h1&gt;

&lt;p&gt;We went through a system modeling process by creating a small merge queue model. Starting from a minimal version, we expanded it with more behavior. In the process, we discovered an example situation where an expected property does not hold, and what was missing from the system. The resulting model is still small, but it is a good starting point, and we can iterate until the system has enough features.&lt;/p&gt;

&lt;p&gt;Model checking is relatively easy to get started with. For readers who want to go deeper, we suggest reading &lt;a href="https://mitpress.mit.edu/9780262528900/" rel="noopener noreferrer"&gt;Software Abstractions&lt;/a&gt;; you should also be able to find relevant undergraduate CS courses. Some model checkers are based on &lt;a href="https://en.wikipedia.org/wiki/SAT_solver" rel="noopener noreferrer"&gt;SAT solvers&lt;/a&gt;, and occasionally a software engineering problem can be converted into a SAT problem, so adding this to your toolbox gives you more options for the problems at hand.&lt;/p&gt;

&lt;p&gt;TLA+ is fun and easy to learn. AWS reports that engineers at various levels got useful results within &lt;a href="https://awsmaniac.com/how-formal-methods-helped-aws-to-design-amazing-services/" rel="noopener noreferrer"&gt;2-3 weeks of starting to learn it&lt;/a&gt;. The author of this article also learned TLA+ for the first time and was able to write this small MergeQueue model in half a day, so getting useful results within a few weeks is entirely believable. If you haven’t tried it yet, we recommend it.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://www.aviator.co/" rel="noopener noreferrer"&gt;Aviator&lt;/a&gt;: Automate your cumbersome processes
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://aviator.co/" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.aviator.co%2Fwp-content%2Fuploads%2F2022%2F08%2Fblog-cta-1024x727.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Aviator automates tedious developer workflows by managing git Pull Requests (PRs) and continuous integration test (CI) runs to help your team avoid broken builds, streamline cumbersome merge processes, manage cross-PR dependencies, and handle flaky tests while maintaining their security compliance.&lt;/p&gt;

&lt;p&gt;There are 4 key components to Aviator:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;MergeQueue&lt;/strong&gt;  – an automated queue that manages the merging workflow for your GitHub repository to help protect important branches from broken builds. The Aviator bot uses GitHub Labels to identify Pull Requests (PRs) that are ready to be merged, validates CI checks, processes semantic conflicts, and merges the PRs automatically.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ChangeSets&lt;/strong&gt;  – workflows to synchronize validating and merging multiple PRs within the same repository or multiple repositories. Useful when your team often sees groups of related PRs that need to be merged together, or otherwise treated as a single broader unit of change.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;TestDeck&lt;/strong&gt;  – a tool to automatically detect, take action on, and process results from flaky tests in your CI infrastructure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stacked PRs CLI&lt;/strong&gt;  – a command line tool that helps developers manage cross-PR dependencies. This tool also automates syncing and merging of stacked PRs. Useful when your team wants to promote a culture of smaller, incremental PRs instead of large changes, or when your workflows involve keeping multiple, dependent PRs in sync.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="http://aviator.co/" rel="noopener noreferrer"&gt;Try it for free.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://www.aviator.co/blog/merge-queue-tla/" rel="noopener noreferrer"&gt;Modeling a merge queue with TLA+&lt;/a&gt; first appeared on &lt;a href="https://www.aviator.co/blog" rel="noopener noreferrer"&gt;Aviator Blog&lt;/a&gt;.&lt;/p&gt;


</description>
      <category>merging</category>
      <category>mergequeue</category>
      <category>tla</category>
    </item>
    <item>
      <title>How to optimize Jenkins pipeline performance</title>
      <dc:creator>Brian Neville-O'Neill</dc:creator>
      <pubDate>Thu, 26 Oct 2023 19:11:13 +0000</pubDate>
      <link>https://forem.com/aviator_co/how-to-optimize-jenkins-pipeline-performance-5081</link>
      <guid>https://forem.com/aviator_co/how-to-optimize-jenkins-pipeline-performance-5081</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.aviator.co%2Fblog%2Fwp-content%2Fuploads%2F2023%2F10%2Fjenkins-pipeline-1024x581.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.aviator.co%2Fblog%2Fwp-content%2Fuploads%2F2023%2F10%2Fjenkins-pipeline-1024x581.jpg"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Jenkins is a popular open-source automation server that is widely used for building, testing, and deploying code. Jenkins pipelines, which allow you to define and automate your entire build and deployment process, are one of its most powerful features. When building a CI/CD pipeline with Jenkins, optimizing performance is crucial to keeping it fast and efficient.&lt;/p&gt;

&lt;p&gt;In this article, we will explore various strategies and techniques to optimize the performance of your Jenkins pipeline.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why optimize?
&lt;/h2&gt;

&lt;p&gt;Optimizing your Jenkins pipeline offers several benefits, including:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Faster build times:&lt;/strong&gt; Faster pipelines mean quicker feedback for developers, which can lead to more agile and efficient development cycles&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reduced resource consumption:&lt;/strong&gt; Optimized pipelines consume fewer resources, which can translate to cost savings, especially in cloud-based CI/CD environments&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Improved reliability:&lt;/strong&gt; Well-optimized pipelines are less prone to failures, leading to a more reliable CI/CD process&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let’s dive into the steps you can take to achieve these benefits.&lt;/p&gt;

&lt;h2&gt;
  
  
  Best practices for performance optimization
&lt;/h2&gt;

&lt;p&gt;Whether you’re a seasoned Jenkins administrator or just beginning to harness its power, it’s crucial to embrace best practices for performance optimization that can help you streamline your processes, minimize bottlenecks, and maximize the efficiency of your CI/CD workflows.&lt;/p&gt;

&lt;p&gt;Here, we will explore a comprehensive set of best practices, strategies, and techniques to fine-tune your Jenkins environment. From optimizing hardware resources and pipeline design to utilizing plugins effectively, we will dive into the world of Jenkins performance optimization, equipping you with the knowledge and tools you need to ensure your Jenkins server operates at its peak potential.&lt;/p&gt;

&lt;p&gt;Before we get into the optimization techniques, here is a table showing the impact and difficulty level of each one, so you have a sense of what to expect before diving in.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Impact&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Difficulty&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;1. Keep Builds Minimal at the Master Nodes&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;⭐⭐⭐⭐&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;⭐⭐&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;2. Plugin Management&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;⭐⭐⭐&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;⭐⭐⭐&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;3. Workspace and Build Optimization&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;⭐⭐⭐&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;⭐⭐⭐&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;4. Use the “Matrix” and “Parallel” steps&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;⭐⭐⭐⭐&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;⭐⭐⭐⭐&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;5. Job Trigger Optimization&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;⭐⭐&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;⭐⭐⭐&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;6. Optimize Docker Usage&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;⭐⭐⭐⭐&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;⭐⭐⭐⭐&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;7. Workspace and Artifact Caching&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;⭐⭐⭐&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;⭐⭐⭐&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;8. Use Monitoring Tools&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;⭐⭐⭐⭐⭐&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;⭐⭐⭐⭐⭐&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Keep builds minimal at the master nodes
&lt;/h2&gt;

&lt;p&gt;A key strategy is to keep builds minimal at the master node. The Jenkins master node is the control center of your CI/CD pipeline, and overloading it with resource-intensive builds can lead to performance bottlenecks, slower job execution, and reduced overall efficiency. To address this challenge and ensure a well-optimized Jenkins environment, it’s essential to distribute builds across multiple nodes and leverage the full power of your infrastructure.&lt;/p&gt;

&lt;p&gt;The Jenkins master node should primarily serve as an orchestrator and controller of your CI/CD pipeline. It is responsible for managing jobs, scheduling builds, and maintaining the configuration of your Jenkins environment.&lt;/p&gt;

&lt;p&gt;To keep builds minimal at the master node, we can use &lt;strong&gt;distributed builds&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Distributed Builds in Jenkins&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Distributed builds in Jenkins involve the parallel execution of jobs on multiple agents, taking full advantage of your infrastructure resources. This approach is particularly beneficial when dealing with resource-intensive tasks, large-scale projects, or the need to reduce build times to meet tight delivery deadlines.&lt;/p&gt;

&lt;p&gt;By distributing builds across multiple nodes in Jenkins, you can optimize performance, reduce build times, and enhance the overall efficiency of your CI/CD pipeline, ultimately delivering software faster and more reliably.&lt;/p&gt;

&lt;p&gt;Here’s a sample Jenkins Pipeline script to demonstrate a simple distributed build setup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pipeline {
    agent none
    stages {
        stage('Build on Linux Agent') {
            agent { label 'linux' }
            steps {
                sh 'make'
            }
        }
        stage('Build on Windows Agent') {
            agent { label 'windows' }
            steps {
                bat 'msbuild myapp.sln'
            }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, the pipeline defines two stages, each running on a specific agent labeled ‘linux’ or ‘windows’, illustrating the distribution of build tasks across different agents.&lt;/p&gt;

&lt;h2&gt;
  
  
  Plugin management
&lt;/h2&gt;

&lt;p&gt;Jenkins plugins extend the functionality of the platform, allowing you to add various features and integrations. However, an excessive number of plugins or outdated ones can lead to reduced performance and potential compatibility issues. Here are some best practices for optimizing performance through efficient plugin management:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Keep Plugins up to date&lt;/strong&gt; : Regularly updating your Jenkins plugins is crucial to ensure you have the latest features, bug fixes, and security enhancements. Outdated plugins may contain vulnerabilities or be incompatible with newer Jenkins versions, leading to performance issues or security risks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Uninstall unnecessary plugins:&lt;/strong&gt; Just as important as keeping plugins up-to-date is eliminating unnecessary ones. Over time, Jenkins instances tend to accumulate plugins that are no longer needed. These unused plugins can increase overhead and slow down your pipeline execution. Regularly review your installed plugins and remove any that are obsolete or no longer in use.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use plugin managers:&lt;/strong&gt; Jenkins offers several plugin management tools that can help streamline the process. Consider using plugin managers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Test plugin updates:&lt;/strong&gt; Before updating a plugin, testing it in a non-production environment is essential. Plugin updates can sometimes introduce compatibility issues or unexpected behavior.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Workspace and build optimization
&lt;/h2&gt;

&lt;p&gt;The workspace is where your pipeline jobs execute, and ensuring it’s lean and well-organized can lead to faster builds and reduced resource consumption. Here are some strategies to achieve workspace and build optimization:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Minimize workspace size:&lt;/strong&gt; Use the “&lt;a href="https://plugins.jenkins.io/ws-cleanup/#plugin-content-declarative-pipeline" rel="noopener noreferrer"&gt;cleanWs&lt;/a&gt;” step within your pipeline to remove unnecessary files from the workspace. This step helps keep the workspace clean and reduces the storage and I/O overhead, leading to faster builds. Be selective in what you clean to ensure essential artifacts and build dependencies are retained.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Implement reusable pipeline libraries:&lt;/strong&gt; One common source of inefficiency in Jenkins pipelines is redundant code. If you have multiple pipelines with similar or overlapping logic, consider implementing &lt;a href="https://cd.foundation/blog/2020/08/05/jenkins-templating-engine-how-to-build-reusable-pipeline-templates/" rel="noopener noreferrer"&gt;reusable pipeline&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Artifact caching:&lt;/strong&gt; Implement artifact caching mechanisms to store and retrieve frequently used dependencies.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Optimize build steps:&lt;/strong&gt; Review and optimize the individual build steps within your pipeline. This includes optimizing scripts, minimizing the use of excessive logs, and using efficient build tools and techniques. Reducing the duration of each build step can collectively contribute to faster overall pipeline execution.
&lt;/li&gt;
&lt;/ul&gt;
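
&lt;p&gt;As a rough sketch, workspace cleanup can be wired into a declarative pipeline’s &lt;code&gt;post&lt;/code&gt; section using the &lt;code&gt;cleanWs&lt;/code&gt; step from the Workspace Cleanup plugin. The excluded cache path below is illustrative; adjust it to whatever dependencies your builds actually reuse:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'make'
            }
        }
    }
    post {
        // Clean the workspace after every run, but keep the
        // dependency cache so the next build can reuse it.
        always {
            cleanWs(deleteDirs: true,
                    patterns: [[pattern: 'deps/cache/**', type: 'EXCLUDE']])
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;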

&lt;h2&gt;
  
  
  Use the “matrix” and “parallel” steps
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.jenkins.io/blog/2017/09/25/declarative-1/" rel="noopener noreferrer"&gt;Parallelism&lt;/a&gt; and &lt;a href="https://docs.cloudbees.com/docs/cloudbees-ci/latest/pipelines/matrix" rel="noopener noreferrer"&gt;matrix&lt;/a&gt; can be used in Jenkins to optimize and streamline your continuous integration (CI) and continuous delivery (CD) pipelines. They help in distributing and managing tasks more efficiently, especially when dealing with a large number of builds and configurations. Here’s how parallelism and matrices can be used for optimization in Jenkins:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Parallelism&lt;/strong&gt; : Jenkins supports parallel execution of tasks within a build or pipeline. This is particularly useful for tasks that can be executed concurrently, such as running tests on different platforms or building different components of an application simultaneously.&lt;/p&gt;

&lt;p&gt;Here is an example of how to implement parallelism in Jenkins:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pipeline {
    agent none
    stages {
        stage('Build And Test') {
            // Run one build/test pair per platform-browser combination in parallel
            parallel {
                stage('macos-chrome') {
                    agent { label 'macos-chrome' }
                    stages {
                        stage('Build') {
                            steps {
                                sh 'echo Build for macos-chrome'
                            }
                        }
                        stage('Test') {
                            steps {
                                sh 'echo Test for macos-chrome'
                            }
                        }
                    }
                }
                stage('macos-firefox') {
                    agent { label 'macos-firefox' }
                    stages {
                        stage('Build') {
                            steps {
                                sh 'echo Build for macos-firefox'
                            }
                        }
                        stage('Test') {
                            steps {
                                sh 'echo Test for macos-firefox'
                            }
                        }
                    }
                }
                stage('macos-safari') {
                    agent { label 'macos-safari' }
                    stages {
                        stage('Build') {
                            steps {
                                sh 'echo Build for macos-safari'
                            }
                        }
                        stage('Test') {
                            steps {
                                sh 'echo Test for macos-safari'
                            }
                        }
                    }
                }
                stage('linux-chrome') {
                    agent { label 'linux-chrome' }
                    stages {
                        stage('Build') {
                            steps {
                                sh 'echo Build for linux-chrome'
                            }
                        }
                        stage('Test') {
                            steps {
                                sh 'echo Test for linux-chrome'
                            }
                        }
                    }
                }
                stage('linux-firefox') {
                    agent { label 'linux-firefox' }
                    stages {
                        stage('Build') {
                            steps {
                                sh 'echo Build for linux-firefox'
                            }
                        }
                        stage('Test') {
                            steps {
                                sh 'echo Test for linux-firefox'
                            }
                        }
                    }
                }
            }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This Jenkins pipeline is designed to automate the build and test processes for a software project across different operating systems and web browsers. The pipeline consists of several stages, each of which serves a specific purpose.&lt;/p&gt;

&lt;p&gt;In the “Build And Test” stage, the pipeline leverages parallel execution to run multiple sub-stages concurrently. These sub-stages are responsible for building and testing the software on various configurations. There are dedicated sub-stages for different combinations of operating system and web browser: macOS with Chrome, macOS with Firefox, macOS with Safari, Linux with Chrome, and Linux with Firefox.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Matrix build&lt;/strong&gt; : Jenkins provides a feature called “Matrix build” that allows you to define a matrix of parameters and execute your build or test configurations across different combinations. This is useful for testing your software on various platforms, browsers, or environments in parallel. Here’s an example of how a matrix pipeline can be implemented:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
pipeline {
    agent none
    stages {
      stage('BuildAndTest') { // This is the main stage that includes a matrix build and test configuration.
        matrix {
          agent { // This defines the agent label for each matrix configuration.
            label "${PLATFORM}-${BROWSER}"
          }
          axes {
            axis { // This axis defines the PLATFORM variable with possible values 'linux' and 'macos'.
              name 'PLATFORM'
              values 'linux', 'macos'
            }
            axis { // This axis defines the BROWSER variable with possible values 'chrome', 'firefox', and 'safari'.
              name 'BROWSER'
              values 'chrome', 'firefox', 'safari'
            }
          }
          excludes {
            exclude { // This specifies an exclusion for the matrix based on specific PLATFORM and BROWSER values.
              axis {
                name 'PLATFORM'
                values 'linux'
              }
              axis {
                name 'BROWSER'
                values 'safari'
              }
            }
          }
          stages {
            stage('Build') { // This stage represents the build process for the selected matrix configuration.
              steps {
                sh 'echo Do Build for $PLATFORM-$BROWSER'
              }
            }
            stage('Test') { // This stage represents the test process for the selected matrix configuration.
              steps {
                sh 'echo Do Test for $PLATFORM-$BROWSER'
              }
            }
          }
        }
      }
    }
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Inside the &lt;code&gt;BuildAndTest&lt;/code&gt; stage, a matrix is defined using the &lt;code&gt;matrix&lt;/code&gt; block. This matrix configuration involves different axes, &lt;code&gt;PLATFORM&lt;/code&gt; and &lt;code&gt;BROWSER&lt;/code&gt;, which specify combinations of operating systems (linux and macOS) and web browsers (Chrome, Firefox, and Safari).&lt;/p&gt;

&lt;p&gt;Certain combinations, such as &lt;code&gt;linux&lt;/code&gt; with &lt;code&gt;safari&lt;/code&gt;, are excluded, meaning they won’t be part of the automated tests. For each combination, there are two sub-stages: &lt;code&gt;Build&lt;/code&gt; and &lt;code&gt;Test&lt;/code&gt;. In the &lt;code&gt;Build&lt;/code&gt; sub-stage, a command is executed to build the software for the specific combination, and in the &lt;code&gt;Test&lt;/code&gt; sub-stage, a test command is executed to verify the software’s functionality. The pipeline will run these sub-stages in parallel for all valid combinations, resulting in efficient testing and building across different environments.&lt;/p&gt;

&lt;p&gt;By using parallelism and matrix combinations in Jenkins, you can significantly reduce the time it takes to test your software or run various configurations, thereby optimizing your CI/CD pipeline.&lt;/p&gt;

&lt;h2&gt;
  
  
  Job trigger optimization
&lt;/h2&gt;

&lt;p&gt;Continuous integration pipelines can be triggered by various events, such as code commits, pull requests, or scheduled runs. However, triggering jobs too frequently can strain your Jenkins server and slow down the overall process. To mitigate this, you can:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Consider using a sensible trigger strategy, e.g., triggering a job only on important events like code merges to the main branch. You can configure webhooks or use the Jenkinsfile to control the trigger conditions.&lt;/li&gt;
&lt;li&gt;Jenkins provides a “&lt;a href="https://www.youtube.com/watch?v=5cHXy6Lw-eM" rel="noopener noreferrer"&gt;quiet period&lt;/a&gt;” feature that introduces a delay before starting a job after a trigger event. This can be particularly useful to batch multiple job executions triggered by consecutive commits. Adjust the quiet period to allow Jenkins to batch related jobs together, reducing the frequency of builds and optimizing resource usage.&lt;/li&gt;
&lt;/ol&gt;
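
&lt;p&gt;As a minimal sketch, both ideas can be combined in a declarative pipeline: a &lt;code&gt;when&lt;/code&gt; condition restricts the expensive work to the main branch, and the &lt;code&gt;quietPeriod&lt;/code&gt; option delays the start of a build so that rapid consecutive commits are batched together. The 120-second value is illustrative; tune it to your team’s commit cadence:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pipeline {
    agent any
    options {
        // Wait 120 seconds after a trigger so rapid consecutive
        // pushes collapse into a single build.
        quietPeriod(120)
    }
    stages {
        stage('Build') {
            // Only run the full build for the main branch.
            when { branch 'main' }
            steps {
                sh 'make'
            }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;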

&lt;h2&gt;
  
  
  Optimize Docker usage
&lt;/h2&gt;

&lt;p&gt;If your Jenkins pipeline utilizes Docker containers for building and testing, be mindful of the number of containers that run simultaneously. Running too many containers concurrently can exhaust system resources and impact performance.&lt;/p&gt;

&lt;p&gt;Adjust your Jenkins configuration to limit the maximum number of containers that can run concurrently based on the available system resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use Docker Image Caching&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Docker image caching can significantly reduce the time it takes to set up the necessary environment for your builds. When a Docker image is pulled, it is stored locally, and subsequent builds can reuse this cached image.&lt;/p&gt;

&lt;p&gt;Implement a Docker image caching strategy in your Jenkins pipelines to minimize the need for frequent image pulls, especially for base images and dependencies that don’t change often. This can significantly speed up your build process.&lt;/p&gt;
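
&lt;p&gt;One way to apply this, sketched below, is to pull the most recent image before building and pass it to &lt;code&gt;docker build --cache-from&lt;/code&gt; so that unchanged layers are reused rather than rebuilt. The image name here is hypothetical:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pipeline {
    agent { label 'docker' }
    environment {
        IMAGE = 'myorg/myapp' // hypothetical image name
    }
    stages {
        stage('Build image') {
            steps {
                // Pull the previous image if it exists; ignore the
                // failure on the very first build.
                sh 'docker pull $IMAGE:latest || true'
                // Reuse cached layers from the pulled image.
                sh 'docker build --cache-from $IMAGE:latest -t $IMAGE:latest .'
            }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;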

&lt;h2&gt;
  
  
  Workspace and artifact caching
&lt;/h2&gt;

&lt;p&gt;Workspace and artifact caching are essential optimization techniques in Jenkins that can significantly improve build and deployment efficiency. Workspace caching helps reduce the time required to check out and update source code repositories, while artifact caching helps speed up the distribution of build artifacts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Workspace caching:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Workspace caching allows Jenkins to cache the contents of a workspace so that when a new build is triggered, it can reuse the cached workspace if the source code has not changed significantly. This is particularly useful for large projects with dependencies and libraries that do not change frequently.&lt;/p&gt;

&lt;p&gt;Here’s an example Jenkinsfile script that demonstrates workspace caching using the Pipeline Caching plugin:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pipeline {
    agent any
    options {
        // Configure workspace caching
        skipDefaultCheckout()
        cache(workspace: true, paths: ['path/to/cache'])
    }
    stages {
        stage('Checkout') {
            steps {
                script {
                    // Checkout source code manually
                    checkout scm
                }
            }
        }
        stage('Build') {
            steps {
                // Your build steps here
            }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Artifact Caching:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Artifact caching involves caching build artifacts (e.g., compiled binaries, build outputs) so that subsequent builds can reuse them rather than recompiling or regenerating them. This can save a significant amount of time and resources.&lt;/p&gt;

&lt;p&gt;To cache build artifacts, you can use the “stash” and “unstash” steps in your Jenkinsfile:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Build your project
                sh 'make'

                // Stash the build artifacts
                stash name: 'my-artifacts', includes: 'path/to/artifacts/**'
            }
        }
        stage('Test') {
            steps {
                // Unstash and use the cached artifacts
                unstash 'my-artifacts'

                // Run tests using the cached artifacts
            }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Use monitoring tools
&lt;/h2&gt;

&lt;p&gt;Monitoring tools are essential for optimizing Jenkins pipelines and ensuring they run smoothly and efficiently. They provide insights into the performance and health of your Jenkins infrastructure, helping you identify and address bottlenecks and issues. Here are a few monitoring tools you can use in Jenkins:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Prometheus and Grafana:&lt;/strong&gt; Prometheus is a popular open-source monitoring and alerting toolkit, and Grafana is a visualization platform that works well with Prometheus. You can use the “Prometheus Metrics Plugin” in Jenkins to expose Jenkins metrics to Prometheus, and then create dashboards in Grafana to visualize these metrics.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Jenkins monitoring plugin:&lt;/strong&gt; The Jenkins Monitoring Plugin allows you to monitor various aspects of Jenkins, including build queue times, node usage, and system load. It provides useful information for optimizing Jenkins infrastructure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SonarQube integration:&lt;/strong&gt; SonarQube is a tool for continuous inspection of code quality. Integrating SonarQube with Jenkins allows you to monitor code quality and identify areas for improvement in your projects.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Using these monitoring tools in Jenkins, you can gain insights into your pipelines’ performance and resource utilization, allowing you to make informed decisions for optimization and troubleshooting.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In conclusion, optimizing performance in your Jenkins pipeline is crucial for achieving faster, more efficient, and reliable continuous integration and continuous delivery processes. Jenkins is a powerful automation server, and by implementing the best practices and techniques explored in this article, you can ensure that it operates at its peak potential.&lt;/p&gt;

&lt;p&gt;The benefits of optimization are evident, with faster build times, reduced resource consumption, and improved reliability at the forefront. Ensuring that your Jenkins pipeline operates efficiently can lead to more agile development cycles, cost savings, and a dependable CI/CD process.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://www.aviator.co/" rel="noopener noreferrer"&gt;Aviator&lt;/a&gt;: Automate your cumbersome processes
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://aviator.co/" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.aviator.co%2Fwp-content%2Fuploads%2F2022%2F08%2Fblog-cta-1024x727.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Aviator automates tedious developer workflows by managing git Pull Requests (PRs) and continuous integration test (CI) runs to help your team avoid broken builds, streamline cumbersome merge processes, manage cross-PR dependencies, and handle flaky tests while maintaining their security compliance.&lt;/p&gt;

&lt;p&gt;There are 4 key components to Aviator:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;MergeQueue&lt;/strong&gt;  – an automated queue that manages the merging workflow for your GitHub repository to help protect important branches from broken builds. The Aviator bot uses GitHub Labels to identify Pull Requests (PRs) that are ready to be merged, validates CI checks, processes semantic conflicts, and merges the PRs automatically.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ChangeSets&lt;/strong&gt;  – workflows to synchronize validating and merging multiple PRs within the same repository or multiple repositories. Useful when your team often sees groups of related PRs that need to be merged together, or otherwise treated as a single broader unit of change.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;TestDeck&lt;/strong&gt;  – a tool to automatically detect, take action on, and process results from flaky tests in your CI infrastructure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stacked PRs CLI&lt;/strong&gt;  – a command line tool that helps developers manage cross-PR dependencies. This tool also automates syncing and merging of stacked PRs. Useful when your team wants to promote a culture of smaller, incremental PRs instead of large changes, or when your workflows involve keeping multiple, dependent PRs in sync.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="http://aviator.co/" rel="noopener noreferrer"&gt;Try it for free.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://www.aviator.co/blog/how-to-optimize-jenkins-pipeline-performance/" rel="noopener noreferrer"&gt;How to optimize Jenkins pipeline performance&lt;/a&gt; first appeared on &lt;a href="https://www.aviator.co/blog" rel="noopener noreferrer"&gt;Aviator Blog&lt;/a&gt;.&lt;/p&gt;


</description>
      <category>cicd</category>
      <category>jenkins</category>
    </item>
    <item>
      <title>What is build failure rate?</title>
      <dc:creator>Brian Neville-O'Neill</dc:creator>
      <pubDate>Fri, 13 Oct 2023 19:38:56 +0000</pubDate>
      <link>https://forem.com/aviator_co/what-is-build-failure-rate-ikj</link>
      <guid>https://forem.com/aviator_co/what-is-build-failure-rate-ikj</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oylrbsHR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.aviator.co/blog/wp-content/uploads/2023/10/build-failure-rate-1024x579.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oylrbsHR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.aviator.co/blog/wp-content/uploads/2023/10/build-failure-rate-1024x579.jpg" alt="" width="800" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;During the software build process, you convert your code into a format computers can execute, and the details depend on the language: JavaScript and related libraries are bundled and minified into plain JavaScript, Rust is compiled to machine code, and Java is compiled to bytecode. &lt;/p&gt;

&lt;p&gt;Build failure rate is a metric that measures the percentage of failed builds during your software build process. Tracking it matters for several reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It helps you avoid shipping a poor user experience&lt;/li&gt;
&lt;li&gt;It tells you more about your development efficiency&lt;/li&gt;
&lt;li&gt;It flags code that should be refactored to improve software quality &lt;/li&gt;
&lt;li&gt;It gives you the confidence to release quality software when the failure rate is low or non-existent&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this article, we will focus on the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Factors affecting build failure rates&lt;/li&gt;
&lt;li&gt;How to measure the build failure rate in your software build process&lt;/li&gt;
&lt;li&gt;Effects of build failure rates&lt;/li&gt;
&lt;li&gt;Strategies to reduce build failure rates in your development process &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Factors affecting build failure rate
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Code Quality
&lt;/h3&gt;

&lt;p&gt;The quality of your code directly affects build failure rate. Let’s say you are building a house using blocks and don’t adequately align the blocks you use. There is a high chance that defects will appear later in the building. If you had done it neatly, that wouldn’t be the case.&lt;/p&gt;

&lt;p&gt;If your code quality is poor, it will produce bugs and defects, which directly raises your build failure rate. Good code quality should meet the following metrics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Clarity&lt;/li&gt;
&lt;li&gt;Reusability&lt;/li&gt;
&lt;li&gt;Security&lt;/li&gt;
&lt;li&gt;Testability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You should note that robust code is the foundation of a stable build. &lt;/p&gt;

&lt;h3&gt;
  
  
  Third-Party Integrations
&lt;/h3&gt;

&lt;p&gt;Another factor that affects the build failure rate in your software build process is how you manage your dependencies. Since these external libraries are essential to your software development, you must ensure you appropriately handle them. &lt;/p&gt;

&lt;p&gt;A lack of discipline in managing third-party dependencies drives the failure rate up over time. Here are some good practices that help keep it low:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Automated dependency updates:&lt;/strong&gt; Outdated dependencies are a common cause of build failures. Automating dependency version updates keeps your project current without relying on developers to remember manual upgrades.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Metadata tracking:&lt;/strong&gt; Incorrect dependency metadata can lead to the wrong packages being installed, introducing failures into your build. Keeping track of the metadata ensures the correct versions are installed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Using a package manager:&lt;/strong&gt; A centralized package manager prevents each developer on your team from handling dependencies inconsistently, creating consistency and reducing the potential for build failures.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A hypothetical example: you built your software against the dependency version “X” available at the time, and during your build process a newer version, “Y,” was released that left “X” with deficiencies. If your dependency updates aren’t automated, a build failure is likely.&lt;/p&gt;

&lt;p&gt;If you don’t apply best practices when using dependencies, they can cause a high percentage of build failure in your software development process.&lt;/p&gt;
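&lt;p&gt;As one concrete way to automate dependency updates on GitHub, a minimal Dependabot configuration might look like the sketch below; the ecosystem and schedule shown are assumptions for illustration, not a prescription:&lt;/p&gt;

```yaml
# .github/dependabot.yml -- opens automated weekly dependency-update PRs
version: 2
updates:
  - package-ecosystem: "npm"   # assumed ecosystem; pick the one your project uses
    directory: "/"             # location of the manifest (e.g. package.json)
    schedule:
      interval: "weekly"
```

&lt;p&gt;With this in place, outdated versions surface as small, reviewable PRs instead of accumulating silently until a build breaks.&lt;/p&gt;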

&lt;h3&gt;
  
  
  Infrastructure Environment
&lt;/h3&gt;

&lt;p&gt;A consistent environment with defined parameters reduces variability and helps prevent configuration errors. Suppose you use inconsistent environments: you run one build in environment “A” and a later build in environment “B.” The errors you get in environment “A” could be entirely different from those in environment “B,” which makes them hard to track and resolve. &lt;/p&gt;

&lt;p&gt;You can also avoid conflicts from concurrent changes by using a monorepo. Aviator’s &lt;u&gt;MergeQueue&lt;/u&gt; helps you maintain a monorepo without issues. Also, having a robust infrastructure environment helps your team avoid issues like server crashes, which cause build failures. &lt;/p&gt;

&lt;p&gt;Lastly, if you use a bundled virtual environment such as Docker containers, you can avoid infrastructure inconsistency issues. &lt;/p&gt;

&lt;h2&gt;
  
  
  Measuring a Build Failure Rate
&lt;/h2&gt;

&lt;p&gt;You can calculate a build failure rate using a straightforward formula: Divide the total number of build failures by the total number of attempted builds within a given time period. The resulting answer is expressed in a percentage. &lt;/p&gt;

&lt;p&gt;For example, let’s say the total number of build failures in your build process is 30, and the total number of attempted builds is 50 within six months. To get the build failure rate, divide the total number of build failures, which is 30, by the total number of attempted builds, which is 50. Calculating this will give you a build failure rate of 60 percent.&lt;/p&gt;

&lt;p&gt;The formula is given as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Build failure rate% = (Number of build failures/Number of attempted builds*100
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
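&lt;p&gt;As a quick sketch, the formula can be written as a small Python function; the counts used below are the example numbers from this article:&lt;/p&gt;

```python
def build_failure_rate(failed_builds, attempted_builds):
    """Return the build failure rate as a percentage."""
    if attempted_builds == 0:
        raise ValueError("attempted_builds must be positive")
    return (failed_builds / attempted_builds) * 100

# Using the example above: 30 failures out of 50 attempted builds.
print(build_failure_rate(30, 50))  # 60.0
```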



&lt;p&gt;Another way you can track and report the build failure rate over time is by using tools that show you the analytical reports of failures you have. Also, there are best practices you should use when monitoring your builds over time:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Calculate the failure rate of your builds over a given period.&lt;/li&gt;
&lt;li&gt;Monitor the failure rate for each stage of your pipeline.&lt;/li&gt;
&lt;li&gt;Identify and monitor &lt;a href="https://www.aviator.co/blog/what-causes-flaky-tests-and-how-to-manage-them/"&gt;flaky tests&lt;/a&gt; separately. &lt;/li&gt;
&lt;li&gt;Log the build failure rate you get in detail.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Measuring the build failure rate in your software development offers advantages, and analyzing this helps you achieve a stable build. &lt;/p&gt;

&lt;h2&gt;
  
  
  Effects of a Build Failure Rate
&lt;/h2&gt;

&lt;p&gt;The effects of a build failure rate vary; the impact on one team’s development process may differ from another’s. However, these are the expected effects of a high build failure rate:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Delayed project timelines and missed deadlines &lt;/li&gt;
&lt;li&gt;Increased software development costs (engineers’ time)&lt;/li&gt;
&lt;li&gt;Lower team morale and productivity &lt;/li&gt;
&lt;li&gt;Software shipped with defects and bugs if the underlying issues aren’t resolved&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Build failures are a normal part of the software build process. Implementing strategies around the factors that affect them can help you keep your build failure rate low:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Maintaining good code quality with linters and static analysis tools&lt;/li&gt;
&lt;li&gt;Well-managed third-party integrations in your software development&lt;/li&gt;
&lt;li&gt;Consistent infrastructure environment&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Knowing what causes build failures, preventing them, and applying good practices will reduce the total number of failures concerning your overall attempted builds.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://www.aviator.co/"&gt;Aviator&lt;/a&gt;: Automate your cumbersome processes
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://aviator.co/"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MGzRmW1s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://blog.aviator.co/wp-content/uploads/2022/08/blog-cta-1024x727.png" alt="" width="800" height="568"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Aviator automates tedious developer workflows by managing git Pull Requests (PRs) and continuous integration test (CI) runs to help your team avoid broken builds, streamline cumbersome merge processes, manage cross-PR dependencies, and handle flaky tests while maintaining their security compliance.&lt;/p&gt;

&lt;p&gt;There are 4 key components to Aviator:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;MergeQueue&lt;/strong&gt;  – an automated queue that manages the merging workflow for your GitHub repository to help protect important branches from broken builds. The Aviator bot uses GitHub Labels to identify Pull Requests (PRs) that are ready to be merged, validates CI checks, processes semantic conflicts, and merges the PRs automatically.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ChangeSets&lt;/strong&gt;  – workflows to synchronize validating and merging multiple PRs within the same repository or multiple repositories. Useful when your team often sees groups of related PRs that need to be merged together, or otherwise treated as a single broader unit of change.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;TestDeck&lt;/strong&gt;  – a tool to automatically detect, take action on, and process results from flaky tests in your CI infrastructure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stacked PRs CLI&lt;/strong&gt;  – a command line tool that helps developers manage cross-PR dependencies. This tool also automates syncing and merging of stacked PRs. Useful when your team wants to promote a culture of smaller, incremental PRs instead of large changes, or when your workflows involve keeping multiple, dependent PRs in sync.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="http://aviator.co/"&gt;Try it for free.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://www.aviator.co/blog/build-failure-rate/"&gt;What is build failure rate?&lt;/a&gt; first appeared on &lt;a href="https://www.aviator.co/blog"&gt;Aviator Blog&lt;/a&gt;.&lt;/p&gt;


</description>
      <category>devops</category>
      <category>dora</category>
    </item>
    <item>
      <title>Embracing trunk-based development: Advantages, disadvantages, and best practices</title>
      <dc:creator>Brian Neville-O'Neill</dc:creator>
      <pubDate>Thu, 12 Oct 2023 19:56:24 +0000</pubDate>
      <link>https://forem.com/aviator_co/embracing-trunk-based-development-advantages-disadvantages-and-best-practices-dh0</link>
      <guid>https://forem.com/aviator_co/embracing-trunk-based-development-advantages-disadvantages-and-best-practices-dh0</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QsWwZKzp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.aviator.co/blog/wp-content/uploads/2023/10/trunk-based-development-1024x577.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QsWwZKzp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.aviator.co/blog/wp-content/uploads/2023/10/trunk-based-development-1024x577.jpg" alt="" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Trunk-based development (TBD) caters to developers working on the same codebase (trunk). As a developer, there’s a huge chance you are working with other developers on a project, merging changes to a single codebase. This means managing every change made to the source code during development. This is where version control management (VCM) practices are especially helpful.&lt;/p&gt;

&lt;p&gt;VCM practices are vital in preserving code history, preventing errors, supporting parallel development, enabling continuous integration, enhancing code review, and more. Trunk-based development is a popular VCM practice that helps reduce the challenges associated with branch management and merge conflicts.&lt;/p&gt;

&lt;p&gt;In this article, you will learn about the advantages, disadvantages, best practices, and the role of merge queues in trunk-based development.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is trunk-based development?
&lt;/h2&gt;

&lt;p&gt;Trunk-based development is, quite frankly, an old practice. Its origin can be traced back to earlier version control systems like concurrent version systems (CVS) and subversion (SVN), where the prevalent philosophy advocated for everyone working on a single trunk, occasionally creating short-lived branches for purposes like bug-fixing or experimental development.&lt;/p&gt;

&lt;p&gt;Authors like Jez Humble, co-author of &lt;a href="https://books.google.com/books/about/The_DevOps_Handbook.h"&gt;The DevOps Handbook&lt;/a&gt; and &lt;a href="https://martinfowler.com/books/continuousDelivery.html"&gt;Continuous Delivery&lt;/a&gt;, have long written about version control and the importance of avoiding long-lived branches. So, what makes TBD so unique?&lt;/p&gt;

&lt;p&gt;With TBD, developers can make frequent small changes, such as code integrations directly to the main branch or trunk, instead of creating long-lived feature branches. TBD is a type of continuous integration that prioritizes regular integration of code changes and their testing within a shared environment.  &lt;/p&gt;

&lt;p&gt;The goal is to promptly identify and address integration problems, thereby lowering the chances of such issues occurring again. This practice also shortens the time needed to introduce new features and enhances the overall quality of the codebase.&lt;/p&gt;

&lt;p&gt;TBD is common among DevOps teams since it is mostly associated with merging and integration phases.&lt;/p&gt;

&lt;h2&gt;
  
  
  The significance of trunk-based development in your CI/CD Pipeline
&lt;/h2&gt;

&lt;p&gt;Trunk-based workflows are great because they work seamlessly with &lt;a href="https://www.aviator.co/blog/how-to-create-a-ci-cd-pipeline/"&gt;continuous integration and continuous delivery (CI/CD)&lt;/a&gt; services. Here’s how it works:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Each time you make a change to the main branch, automated tests (a core part of CI) check to make sure it doesn’t break anything&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After passing these tests, a pipeline can be set up to prepare the changes for the QA team to review on the staging branch&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The release manager, responsible for creating new releases, can then use the main branch’s code to publish a new release based on feedback from the QA team&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If any issues arise in production, a senior developer can make fixes specifically for that release&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This approach keeps each release unique and doesn’t change the older release branches. As time passes, you can safely delete older release branches that are no longer needed.&lt;/p&gt;

&lt;h2&gt;
  
  
  The role of a merge queue in trunk-based development
&lt;/h2&gt;

&lt;p&gt;A merge queue plays a crucial role in managing code changes, ensuring a streamlined and controlled process for integrating PRs into the main development branch, often called the “trunk” or “mainline.”&lt;/p&gt;

&lt;p&gt;The role of a merge queue in TBD includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Centralized Integration Point:&lt;/strong&gt; The merge queue serves as the central point where code changes from different developers or teams are merged into the main branch. This ensures that all code changes are funneled through a single point of integration, reducing integration complexities and conflicts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Controlled Integration:&lt;/strong&gt; The merge queue provides a controlled and orderly way to merge code changes into the trunk. This control helps prevent integration issues and conflicts that can arise when multiple developers attempt to merge their changes simultaneously.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;FIFO (First-In-First-Out) Order:&lt;/strong&gt; In a merge queue, code changes are typically merged in a first-come, first-served order. This &lt;a href="https://www.geeksforgeeks.org/fifo-first-in-first-out-approach-in-programming/"&gt;FIFO approach&lt;/a&gt; ensures that code changes are integrated in the submitted sequence, preventing any individual or team from monopolizing the merge process.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Rollback Mechanism:&lt;/strong&gt; In the event that a merged change causes unexpected issues or errors in the trunk, a merge queue may provide mechanisms to quickly roll back or revert the change, minimizing the impact on the development process.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automated Testing and Validation:&lt;/strong&gt; Before code changes are merged into the trunk, they typically go through a series of automated tests and validations to ensure that they meet the required quality and compatibility standards. This can include unit tests, integration tests, code reviews, and other quality checks.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
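&lt;p&gt;The FIFO behavior described above can be sketched in a few lines of plain Python; the &lt;code&gt;passes_ci&lt;/code&gt; callback here is a hypothetical stand-in for real CI validation:&lt;/p&gt;

```python
from collections import deque

def process_merge_queue(prs, passes_ci):
    """Merge queued PRs in FIFO order; reject any that fail the CI check."""
    queue = deque(prs)           # PRs enter in submission order
    merged, rejected = [], []
    while queue:
        pr = queue.popleft()     # first in, first out
        if passes_ci(pr):
            merged.append(pr)    # change lands on the trunk
        else:
            rejected.append(pr)  # change is bounced back to its author
    return merged, rejected

# Hypothetical queue of PR numbers, where even-numbered PRs pass CI.
merged, rejected = process_merge_queue([101, 102, 103, 104], lambda pr: pr % 2 == 0)
print(merged, rejected)  # [102, 104] [101, 103]
```

&lt;p&gt;Real merge queues also revalidate each PR against the latest trunk before merging, which is what catches semantic conflicts between changes that each pass CI in isolation.&lt;/p&gt;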

&lt;h2&gt;
  
  
  Implementing trunk-based development
&lt;/h2&gt;

&lt;p&gt;Trunk-based development can be really useful in various application development cases. For instance, a startup MVP for investors or a client project on Fiverr. You do not need to wait for a project admin or a senior developer to review and merge the PR before pushing your changes. &lt;/p&gt;

&lt;p&gt;An exception is the presence of experienced developers who help eliminate the chances of build-failing changes committed to the trunk. &lt;/p&gt;

&lt;p&gt;Here is how I would implement TBD:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Online services like Gitlab, Github, and more are needed for Git version control&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You need to create three pivotal branches: the main branch (also known as the trunk), a staging branch, and a production branch. Developers commit their code changes to the main branch. A release manager can then create new releases by merging the main branch with either the staging or production branch&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Code quality and integration should be prioritized if you plan to implement trunk-based development. A great way to ensure this is through the employment of a CI system. A CI system tests the modifications made by developers before changes become ready for release&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A CD system takes center stage in this process. It helps during the building and deployment stages of a project in either the staging or production environment&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
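&lt;p&gt;If you host on GitHub, the CI step in the workflow above could be sketched as a minimal GitHub Actions pipeline that tests every change touching the trunk; the workflow name and test command below are illustrative assumptions, not a fixed recipe:&lt;/p&gt;

```yaml
# .github/workflows/ci.yml -- validate every change to the trunk
name: trunk-ci
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test   # assumed test command; replace with your project's own
```

&lt;p&gt;Keeping this check fast is important in TBD, since every small commit to the trunk runs through it.&lt;/p&gt;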

&lt;h2&gt;
  
  
  Trunk-based development vs. feature-based/Gitflow
&lt;/h2&gt;

&lt;p&gt;Both development workflows have pros and cons, but depending on your instance, you can decide which works for you. &lt;/p&gt;

&lt;p&gt;TBD is better if you are working on a project that changes frequently and requires swift, constant deployment. Developers can seamlessly integrate small changes to the trunk. If you are developing new product features or fixing a bug, TBD is great for this kind of change. You can also perform frequent code reviews and smooth automated test integrations with TBD.&lt;/p&gt;

&lt;p&gt;Meanwhile, feature-based/Gitflow is much more useful in long-term projects. For example, if you are developing a large e-commerce application, you should utilize Gitflow, especially if a huge development team is on board. This is because Gitflow allows developers to work on separate branches, reducing the need for constant coordination and meetings.&lt;/p&gt;

&lt;h2&gt;
  
  
  Advantages of trunk-based development
&lt;/h2&gt;

&lt;p&gt;Let’s take a look at some of the main advantages of implementing TBD:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Simpler Codebase:&lt;/strong&gt; With fewer long-lived branches, the codebase remains simpler and easier to understand. This reduces complexity, making it easier to maintain and troubleshoot code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Integration with CI/CD:&lt;/strong&gt; An important benefit of TBD is its seamless integration with CI/CD services. This fosters an agile and smooth development environment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Faster time-to-market:&lt;/strong&gt; Most businesses take faster time-to-market very seriously. They need to roll out new features or products as fast as possible to customers. As a developer, implementing TBD can make this happen. TBD helps reduce development and deployment time, ensuring new functionalities are delivered to end-users on time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Improved Feedback Loop:&lt;/strong&gt; Regular integration allows for quick feedback on the impact of code changes. Developers can assess how their modifications affect the overall system, which is valuable for iterative improvement.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Easier Debugging:&lt;/strong&gt; When issues arise, it’s easier to identify the exact change that introduced the problem since there are fewer changes in the main branch. This simplifies debugging and troubleshooting.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Disadvantages of trunk-based development
&lt;/h2&gt;

&lt;p&gt;As with any great software, implementation, or service, TBD has its advantages and disadvantages. Let’s take a look at some of its disadvantages.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Isolation Problems:&lt;/strong&gt; Since feature branches are absent, isolating changes for testing purposes can be challenging before integration into the Trunk.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Learning Curve:&lt;/strong&gt; Sadly, unless you are very technically inclined, wrapping your head around the ins and outs of TBD might take a while. The best way to avoid this is through training and training materials such as e-books, videos, and community support.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Code conflicts:&lt;/strong&gt; If changes aren’t merged properly, conflicts arise in your main codebase (the trunk).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Dependency management:&lt;/strong&gt; Trunk-based development can complicate the task of managing dependencies since developers cannot readily isolate various components or features.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Versioning and Rollbacks:&lt;/strong&gt; Managing versions and rollbacks can be more complex in TBD, especially when multiple versions are under active development simultaneously.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Best practices for implementing trunk-based development
&lt;/h2&gt;

&lt;p&gt;Companies like Netflix, Google, and Facebook use TBD during development. According to Google’s published &lt;a href="https://services.google.com/fh/files/misc/state-of-devops-2021.pdf"&gt;Accelerate State of DevOps 2021&lt;/a&gt; report, &lt;strong&gt;&lt;em&gt;companies with higher performance are likely to have trunk-based development implemented&lt;/em&gt;&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;Let’s take a look at some of the best practices for implementing TBD.&lt;/p&gt;

&lt;h3&gt;
  
  
  Automation First
&lt;/h3&gt;

&lt;p&gt;To begin with, it’s important to introduce automation wherever feasible and effective. Automation should be applied in various aspects, such as software development, testing procedures, and deployment processes.&lt;/p&gt;

&lt;p&gt;By embracing this approach, you can empower your team to carry out swift and effective iterations, thereby reducing the likelihood of any adverse impact on the primary codebase. A great automation software like &lt;a href="https://www.aviator.co/"&gt;Aviator&lt;/a&gt; is there to help you achieve this seamlessly.&lt;/p&gt;

&lt;h3&gt;
  
  
  Code freezes halt development. Avoid it!
&lt;/h3&gt;

&lt;p&gt;Code freezes can impede the development process and cause integration issues as changes made in isolation may not smoothly align when merged into the main branch. A great way to avoid code freezes is to incentivize developers to integrate their work into the trunk when it reaches a stable state, even if it’s not entirely finished. This rapid integration helps prevent the accumulation of errors that tend to occur with prolonged waits.&lt;/p&gt;

&lt;p&gt;In cases where it’s necessary, you can utilize feature toggles or branch-by-abstraction techniques to temporarily hide unfinished features, ensuring they don’t affect the rest of the codebase.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pair or mob programming?
&lt;/h3&gt;

&lt;p&gt;In TBD, it’s essential to conduct code reviews promptly. While automated testing is valuable, it doesn’t guarantee top-notch code quality. Therefore, employing methods like &lt;a href="https://medium.com/bgl-tech/mobbing-vs-pair-programming-3eb1e387183d"&gt;pair or mob programming&lt;/a&gt; to enhance team communication and facilitate immediate code review is beneficial. &lt;/p&gt;

&lt;h3&gt;
  
  
  A feature flag management tool is the way to go
&lt;/h3&gt;

&lt;p&gt;Feature management is a method for handling feature development and testing. It involves gradually releasing changes or new features to specific user groups before full deployment. This is achieved using “feature flags” that wrap new changes in inactive code paths, which can be activated when needed.&lt;/p&gt;

&lt;p&gt;To simplify this process, consider using feature management tools like LaunchDarkly. These tools facilitate feature experimentation, A/B testing, and continuous deployment of microservices, allowing developers to combine testing with feature rollout. This approach ensures feature deployment without compromising software stability and provides the flexibility to roll back features without affecting the rest of the code in production.&lt;/p&gt;
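&lt;p&gt;The core feature-flag idea, wrapping new code in a path that stays inactive until a flag is turned up, can be sketched in plain Python. The flag names and rollout percentages below are hypothetical; real tools like LaunchDarkly layer targeting, auditing, and remote configuration on top of this:&lt;/p&gt;

```python
import zlib

# Hypothetical flags: each maps to the percentage of users it is enabled for.
FLAGS = {"new_checkout": 25, "dark_mode": 100, "beta_search": 0}

def is_enabled(flag, user_id):
    """Deterministically bucket a user into 0-99 and compare against the rollout %."""
    rollout = FLAGS.get(flag, 0)  # unknown flags stay off
    bucket = zlib.crc32(f"{flag}:{user_id}".encode()) % 100
    return bucket in range(rollout)  # True only for the first `rollout` buckets

# The new code path stays inactive until the flag's rollout is raised.
if is_enabled("new_checkout", "user-42"):
    pass  # new checkout flow
else:
    pass  # stable checkout flow
```

&lt;p&gt;Hashing on the flag-plus-user pair keeps each user’s assignment stable across sessions, so a gradual rollout doesn’t flicker a feature on and off for the same person.&lt;/p&gt;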

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Embracing Trunk-Based Development represents a bold stride forward in the world of software engineering. By fostering a culture of frequent integration, streamlined collaboration, and rapid innovation, this approach accelerates development cycles and enhances code quality. It underscores the significance of continuous integration, automated testing, and efficient development workflows.&lt;/p&gt;

&lt;p&gt;This article covered the advantages, disadvantages, and best practices for implementing trunk-based development.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://www.aviator.co/"&gt;Aviator&lt;/a&gt;: Automate your cumbersome processes
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://aviator.co/"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MGzRmW1s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://blog.aviator.co/wp-content/uploads/2022/08/blog-cta-1024x727.png" alt="" width="800" height="568"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Aviator automates tedious developer workflows by managing git Pull Requests (PRs) and continuous integration test (CI) runs to help your team avoid broken builds, streamline cumbersome merge processes, manage cross-PR dependencies, and handle flaky tests while maintaining their security compliance.&lt;/p&gt;

&lt;p&gt;There are 4 key components to Aviator:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;MergeQueue&lt;/strong&gt;  – an automated queue that manages the merging workflow for your GitHub repository to help protect important branches from broken builds. The Aviator bot uses GitHub Labels to identify Pull Requests (PRs) that are ready to be merged, validates CI checks, processes semantic conflicts, and merges the PRs automatically.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ChangeSets&lt;/strong&gt;  – workflows to synchronize validating and merging multiple PRs within the same repository or multiple repositories. Useful when your team often sees groups of related PRs that need to be merged together, or otherwise treated as a single broader unit of change.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;TestDeck&lt;/strong&gt;  – a tool to automatically detect, take action on, and process results from flaky tests in your CI infrastructure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stacked PRs CLI&lt;/strong&gt;  – a command line tool that helps developers manage cross-PR dependencies. This tool also automates syncing and merging of stacked PRs. Useful when your team wants to promote a culture of smaller, incremental PRs instead of large changes, or when your workflows involve keeping multiple, dependent PRs in sync.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="http://aviator.co/"&gt;Try it for free.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://www.aviator.co/blog/trunk-based-development/"&gt;Embracing trunk-based development: Advantages, disadvantages, and best practices&lt;/a&gt; first appeared on &lt;a href="https://www.aviator.co/blog"&gt;Aviator Blog&lt;/a&gt;.&lt;/p&gt;


</description>
      <category>devops</category>
    </item>
  </channel>
</rss>
