<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Ed LeGault</title>
    <description>The latest articles on Forem by Ed LeGault (@edlegaultle).</description>
    <link>https://forem.com/edlegaultle</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F345441%2F3ad5b6bf-3523-4043-80f8-3415bccadda2.jpg</url>
      <title>Forem: Ed LeGault</title>
      <link>https://forem.com/edlegaultle</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/edlegaultle"/>
    <language>en</language>
    <item>
      <title>Chaos Engineering: Breaking Things On Purpose</title>
      <dc:creator>Ed LeGault</dc:creator>
      <pubDate>Tue, 03 Jun 2025 18:39:22 +0000</pubDate>
      <link>https://forem.com/leading-edje/chaos-engineering-breaking-things-on-purpose-4752</link>
      <guid>https://forem.com/leading-edje/chaos-engineering-breaking-things-on-purpose-4752</guid>
      <description>&lt;p&gt;In today's complex digital landscape, systems fail. This isn't pessimism, it's a fundamental truth that experienced technical professionals understand all too well. When (not if) failures occur, the difference between organizations that thrive and those that struggle often comes down to a single factor: preparation. Enter chaos engineering, a disciplined approach to identifying system vulnerabilities by proactively introducing controlled failures in production environments.&lt;/p&gt;

&lt;p&gt;Chaos engineering is like a vaccine for your infrastructure.  A little controlled pain now prevents a lot of uncontrolled pain later.&lt;/p&gt;

&lt;p&gt;This article explores why chaos engineering deserves both budget allocation and prioritization within technical organizations seeking to build truly resilient systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Chaos Engineering?
&lt;/h2&gt;

&lt;p&gt;Chaos Engineering is the practice of deliberately injecting failures into a system to test its resilience and identify weaknesses before they manifest as customer-impacting incidents. Pioneered by Netflix with their &lt;a href="https://netflix.github.io/chaosmonkey/" rel="noopener noreferrer"&gt;Chaos Monkey&lt;/a&gt; tool, this methodology has evolved into a structured discipline focused on conducting controlled experiments that reveal how systems behave under stress.&lt;/p&gt;

&lt;p&gt;At its core, chaos engineering follows a scientific process:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Form a hypothesis about how the system should behave under adverse conditions&lt;/li&gt;
&lt;li&gt;Design an experiment that introduces a specific failure or stressor&lt;/li&gt;
&lt;li&gt;Execute the experiment in a controlled environment&lt;/li&gt;
&lt;li&gt;Observe and measure the system's response&lt;/li&gt;
&lt;li&gt;Learn and improve by addressing identified weaknesses&lt;/li&gt;
&lt;/ol&gt;
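&lt;p&gt;The loop above can be sketched in a few lines of code. Everything here is illustrative: the in-memory &lt;code&gt;Service&lt;/code&gt; stands in for a real system under test, and a production experiment would inject real failures and query live observability data rather than a toy error rate.&lt;/p&gt;

```python
# Illustrative sketch of the chaos-experiment loop. The in-memory
# Service below is a stand-in for a real system under test.
class Service:
    def __init__(self, replicas=3):
        self.replicas = replicas

    def error_rate(self):
        # At least one healthy replica keeps the service answering.
        return 0.0 if self.replicas > 0 else 1.0


def run_experiment(service, hypothesis, inject, measure, tolerance):
    """Hypothesize, inject a failure, observe, and judge the outcome."""
    baseline = measure(service)   # steady state before the experiment
    inject(service)               # introduce the controlled failure
    observed = measure(service)   # measure the system's response
    return {
        "hypothesis": hypothesis,
        "passed": tolerance(baseline, observed),
    }


svc = Service(replicas=3)
result = run_experiment(
    svc,
    hypothesis="Losing one replica does not raise the error rate",
    inject=lambda s: setattr(s, "replicas", s.replicas - 1),
    measure=lambda s: s.error_rate(),
    tolerance=lambda baseline, observed: baseline >= observed,
)
print(result["passed"])  # True: two replicas still serve traffic
```

&lt;p&gt;The same shape scales up: swap the lambdas for real failure injection and real metric queries, and abort the experiment as soon as the tolerance check fails.&lt;/p&gt;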

&lt;p&gt;Unlike traditional testing, which verifies known requirements, chaos engineering explores the unknown: how systems respond to unexpected conditions that were never explicitly designed for but will inevitably occur.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Resilience Matters in Modern Systems
&lt;/h2&gt;

&lt;p&gt;Modern digital infrastructures have evolved into intricate ecosystems of microservices, distributed databases, and complex dependencies. This complexity introduces numerous potential failure points that can cascade in unexpected ways.&lt;/p&gt;

&lt;p&gt;Resilience isn't merely a technical consideration; it's a business imperative. The ability to withstand unexpected conditions directly impacts revenue, reputation, and customer trust.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Benefits of Chaos Engineering
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Increased Reliability and Resiliency
&lt;/h3&gt;

&lt;p&gt;Chaos engineering fundamentally changes how teams approach system reliability. Rather than merely responding to failures, teams proactively identify weaknesses before they impact users. This approach helps identify hidden vulnerabilities that traditional testing misses, builds confidence in actual (not theoretical) fault tolerance capabilities, and validates that recovery mechanisms function as designed.  If a team feels uncomfortable running a chaos experiment on a particular system, that discomfort often signals exactly where testing is most needed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Improved Incident Response
&lt;/h3&gt;

&lt;p&gt;When teams regularly practice responding to controlled failures, they develop muscle memory that proves invaluable during actual incidents. This regular practice leads to reduced mean time to resolution (MTTR) as teams become familiar with failure patterns and respond more efficiently. Engineers develop intuition about system behavior under stress, and the process often reveals gaps in runbooks and incident response procedures that can be addressed proactively.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reduced Downtime and Business Impact
&lt;/h3&gt;

&lt;p&gt;By discovering vulnerabilities proactively, organizations can address weaknesses before they result in customer-facing outages. This approach means fewer unexpected outages, as issues are discovered during controlled experiments rather than in production. Recovery times become shorter thanks to well-exercised recovery procedures, and customer impact is minimized as systems become more reliable.&lt;/p&gt;

&lt;h3&gt;
  
  
  Improved Understanding of System Behavior
&lt;/h3&gt;

&lt;p&gt;Perhaps the most underrated benefit of chaos engineering is how it deepens engineers' understanding of their systems. Teams gain clearer visualization of how components interact, understand how failures cascade through systems, and learn how systems behave under various forms of stress. This knowledge proves invaluable when designing new features or troubleshooting issues.&lt;/p&gt;

&lt;h3&gt;
  
  
  Enhanced Confidence in Production Changes
&lt;/h3&gt;

&lt;p&gt;Organizations practicing chaos engineering report higher confidence when deploying new features and infrastructure changes. This confidence stems from validated resilience mechanisms, understood failure modes, and battle-tested monitoring systems that ensure observability during failures.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Get Started with Chaos Engineering
&lt;/h2&gt;

&lt;p&gt;Implementing chaos engineering doesn't require a massive organizational overhaul.  Here are a few steps to help get started:&lt;/p&gt;

&lt;h3&gt;
  
  
  Establish a Baseline
&lt;/h3&gt;

&lt;p&gt;Before conducting any chaos experiments, ensure you have comprehensive monitoring and observability tools in place. You'll need clear metrics for what constitutes "normal" system behavior and defined service level objectives (SLOs) that matter to your business. This baseline allows you to measure the impact of your experiments objectively and determine whether your system is resilient enough to withstand the introduced failures.&lt;/p&gt;
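&lt;p&gt;As a minimal sketch, a baseline check can be a guard that compares observed metrics against your SLOs before an experiment is allowed to begin. The SLO values and metric names below are hypothetical; in practice they come from your monitoring and observability stack.&lt;/p&gt;

```python
# Hypothetical SLOs; real values come from monitoring, not constants.
SLOS = {"availability": 0.999, "p95_latency_ms": 300}


def slo_violations(metrics, slos=SLOS):
    """Name every SLO the observed metrics violate. An empty result
    means the system is in its known-good steady state, so it is
    reasonable to begin (or continue) a chaos experiment."""
    violations = []
    if slos["availability"] > metrics["availability"]:
        violations.append("availability")
    if metrics["p95_latency_ms"] > slos["p95_latency_ms"]:
        violations.append("p95_latency_ms")
    return violations


steady_state = {"availability": 0.9995, "p95_latency_ms": 220}
print(slo_violations(steady_state))  # []: safe to start the experiment
```

&lt;p&gt;Running the same check during and after the experiment turns your SLOs into objective abort conditions rather than judgment calls.&lt;/p&gt;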

&lt;h3&gt;
  
  
  Start with Game Days
&lt;/h3&gt;

&lt;p&gt;Before implementing automated chaos, conduct manual "game days"—scheduled exercises where teams intentionally create failure scenarios and practice response procedures. These events build confidence, establish processes, and identify gaps in your incident response capabilities. A typical game day might involve manually stopping a non-critical service and documenting the recovery process, impact, and lessons learned.&lt;/p&gt;

&lt;h3&gt;
  
  
  Choose Simple, Low-Risk Experiments
&lt;/h3&gt;

&lt;p&gt;Your first automated chaos experiments should target non-critical infrastructure components and run during business hours when engineers are available. Make sure to have well-defined abort conditions and communicate broadly to stakeholders. For example, start by terminating a single instance in a load-balanced pool of web servers during a low-traffic period, with the hypothesis that customers should experience no impact.&lt;/p&gt;
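&lt;p&gt;Declarative frameworks such as Chaos Toolkit let you capture that first experiment as data. The sketch below is illustrative, not a drop-in definition: the health-check URL is a placeholder, and the &lt;code&gt;chaosaws&lt;/code&gt; module, function name, and arguments should be verified against the driver's documentation before use.&lt;/p&gt;

```json
{
  "title": "Terminating one web instance has no customer impact",
  "description": "Single-instance loss in a load-balanced pool during a low-traffic period",
  "steady-state-hypothesis": {
    "title": "Health endpoint keeps answering",
    "probes": [
      {
        "type": "probe",
        "name": "site-responds",
        "tolerance": 200,
        "provider": {
          "type": "http",
          "url": "https://example.com/health"
        }
      }
    ]
  },
  "method": [
    {
      "type": "action",
      "name": "terminate-one-instance",
      "provider": {
        "type": "python",
        "module": "chaosaws.ec2.actions",
        "func": "terminate_instance",
        "arguments": { "az": "us-east-1a" }
      }
    }
  ]
}
```

&lt;p&gt;Chaos Toolkit verifies the steady-state hypothesis before and after running the method, so an already-unhealthy system aborts the experiment rather than making things worse.&lt;/p&gt;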

&lt;h3&gt;
  
  
  Document and Share Results
&lt;/h3&gt;

&lt;p&gt;After each experiment, document findings, including unexpected behaviors, and share results with the broader engineering organization. Update runbooks and monitoring based on insights, and prioritize fixing any revealed weaknesses. This knowledge-sharing amplifies the value of each experiment across the organization.&lt;/p&gt;

&lt;h3&gt;
  
  
  Gradually Expand Scope
&lt;/h3&gt;

&lt;p&gt;As confidence grows, incrementally increase both the complexity of experiments and their proximity to critical business functions. Progress from testing individual components to testing interactions between systems, and move from staging environments toward production. Expand from simple resource constraints to more complex failure modes, and consider implementing continuous chaos engineering with automated guardrails.&lt;/p&gt;

&lt;h3&gt;
  
  
  Build a Culture of Resilience
&lt;/h3&gt;

&lt;p&gt;The most successful chaos engineering programs evolve beyond tools and processes to become cultural movements within organizations. Celebrate the discovery of weaknesses rather than punishing failures, and reward teams for building resilient systems that withstand chaos experiments. Include resilience as a first-class requirement in system design, and provide time and resources for fixing weaknesses discovered through experiments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Available Tools for Chaos Engineering
&lt;/h2&gt;

&lt;p&gt;The chaos engineering ecosystem has matured significantly, with options ranging from cloud provider offerings to open-source frameworks.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cloud Provider Tools
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/fis/" rel="noopener noreferrer"&gt;AWS Fault Injection Service&lt;/a&gt; offers native integration with AWS services, supporting EC2, ECS, EKS, and RDS experiments with safeguards and automatic rollbacks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://azure.microsoft.com/en-us/products/chaos-studio" rel="noopener noreferrer"&gt;Azure Chaos Studio&lt;/a&gt; targets Azure-specific resources with an experiment builder featuring managed fault types and integration with Azure monitoring for validation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Open-Source Options
&lt;/h3&gt;

&lt;p&gt;Here are a few open-source or free tools for chaos engineering you may want to investigate:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://netflix.github.io/chaosmonkey/" rel="noopener noreferrer"&gt;Chaos Monkey&lt;/a&gt;, Netflix's original chaos engineering tool, randomly terminates instances in production. Though simple, it remains effective for basic resilience testing.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://litmuschaos.io/" rel="noopener noreferrer"&gt;Litmus&lt;/a&gt; offers Kubernetes-native chaos engineering with an extensive experiment catalog for cloud-native systems and an active community.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://chaostoolkit.org/" rel="noopener noreferrer"&gt;Chaos Toolkit&lt;/a&gt; provides a framework-agnostic approach with extensive API support for various platforms and declarative experiment definitions.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.gremlin.com/chaos-engineering" rel="noopener noreferrer"&gt;Gremlin&lt;/a&gt;, while commercial, offers a free tier with a user-friendly interface, broad attack types, and built-in safety mechanisms.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Implementing Chaos Engineering Responsibly (Without Getting Fired)
&lt;/h2&gt;

&lt;p&gt;While the benefits are substantial, chaos engineering requires thoughtful implementation. Start small with non-critical systems and simple experiments. Establish clear boundaries for potential impact and ensure comprehensive observability during experiments. Prepare mechanisms to quickly restore normal operations, and communicate plans to stakeholders before conducting chaos experiments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Chaos engineering represents a paradigm shift from reactive to proactive reliability management. Rather than waiting for unexpected failures to reveal system weaknesses, forward-thinking organizations deliberately introduce controlled failure to build more resilient systems.&lt;/p&gt;

&lt;p&gt;As systems grow more complex, the organizations that thrive will be those that embrace failure as an inevitable reality and engineer accordingly. Chaos engineering isn't merely about breaking things; it's about building confidence through evidence-based resilience.&lt;/p&gt;

&lt;p&gt;You don't run chaos experiments because you enjoy creating problems.  You run them because problems are inevitable, and you would rather find them on your terms instead of your customers'.&lt;/p&gt;

&lt;p&gt;When viewed through this lens, chaos engineering isn't a luxury; it's a necessity for modern system reliability that delivers measurable business value through improved uptime, faster incident response, and enhanced customer experience.&lt;/p&gt;

</description>
      <category>automation</category>
      <category>testing</category>
      <category>devops</category>
      <category>security</category>
    </item>
    <item>
      <title>The Critical Role of Automated Dependency Scanning in the Modern Software Development Lifecycle</title>
      <dc:creator>Ed LeGault</dc:creator>
      <pubDate>Tue, 03 Jun 2025 18:14:44 +0000</pubDate>
      <link>https://forem.com/leading-edje/the-critical-role-of-automated-dependency-scanning-in-the-modern-software-development-lifecycle-1hn9</link>
      <guid>https://forem.com/leading-edje/the-critical-role-of-automated-dependency-scanning-in-the-modern-software-development-lifecycle-1hn9</guid>
      <description>&lt;p&gt;In today's rapidly evolving software development landscape, managing dependencies has become increasingly complex. Applications often rely on dozens, sometimes hundreds, of third-party libraries and packages, each with their own update cycles and security considerations. This article explores why automated dependency scanning has emerged as a critical component in modern DevOps practices.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Security Imperative
&lt;/h2&gt;

&lt;p&gt;Security vulnerabilities in dependencies represent one of the most significant risks to application security today. According to recent studies, over 80% of codebases contain at least one vulnerability in their dependencies.&lt;/p&gt;

&lt;p&gt;Automated dependency scanning addresses this challenge by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Continuous vulnerability detection&lt;/strong&gt;: Rather than point-in-time assessments, automated tools constantly monitor for newly discovered vulnerabilities in your dependencies&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rapid response capability&lt;/strong&gt;: When vulnerabilities are discovered, teams can be notified immediately, rather than during periodic manual reviews&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Comprehensive coverage&lt;/strong&gt;: Scanning tools can analyze deep dependency trees that would be impractical to review manually&lt;/li&gt;
&lt;/ul&gt;
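&lt;p&gt;Conceptually, the heart of such a scanner is a join between your dependency manifest and an advisory feed. A toy sketch follows; the advisory entries below are illustrative, and real tools consume curated feeds such as the GitHub Advisory Database rather than hard-coded data.&lt;/p&gt;

```python
# Toy advisory data: package name mapped to known-vulnerable versions.
# Illustrative only; real scanners pull continuously updated feeds.
ADVISORIES = {
    "lodash": {"4.17.20"},
    "minimist": {"1.2.5"},
}


def scan(dependencies, advisories=ADVISORIES):
    """Flag every dependency pinned to a known-vulnerable version."""
    return [
        (name, version)
        for name, version in dependencies.items()
        if version in advisories.get(name, set())
    ]


deps = {"lodash": "4.17.20", "react": "18.2.0"}
print(scan(deps))  # [('lodash', '4.17.20')]
```

&lt;p&gt;The hard parts that production tools add on top of this join are transitive dependency resolution, version-range matching, and continuous re-scanning as new advisories land.&lt;/p&gt;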

&lt;p&gt;Expecting developers to manually track CVEs is like expecting them to memorize pi to 100 digits.  Although it is technically possible, there are much better uses of their brainpower.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reducing Friction in Development Workflows
&lt;/h2&gt;

&lt;p&gt;The most effective security measures are those that developers actually use. Traditional security reviews often create bottlenecks that frustrate development teams and slow delivery.&lt;/p&gt;

&lt;p&gt;Automated dependency scanning helps by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Integrating directly into existing workflows&lt;/strong&gt;: Scans run automatically during commits, PRs, or builds&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Minimizing manual intervention&lt;/strong&gt;: Only requiring developer attention when action is genuinely needed&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Providing clear remediation paths&lt;/strong&gt;: Rather than just flagging issues, good tools suggest specific fixes&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Organizational Consistency with Team Flexibility
&lt;/h2&gt;

&lt;p&gt;One of the greatest challenges in enterprise software development is balancing organizational standards with team autonomy. Dependency scanning tools help strike this balance by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Establishing baseline security standards&lt;/strong&gt;: Ensuring all projects meet minimum security requirements&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Supporting customizable policies&lt;/strong&gt;: Allowing teams to adjust scanning criteria based on project risk profiles&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enabling centralized visibility&lt;/strong&gt;: Providing security teams oversight while allowing development teams to maintain velocity&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Practical Implementation: Tools of the Trade
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Renovate
&lt;/h3&gt;

&lt;p&gt;Renovate has emerged as a powerful, flexible tool for automated dependency management. It works by:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Scanning repositories for dependencies&lt;/li&gt;
&lt;li&gt;Checking for updates against package registries&lt;/li&gt;
&lt;li&gt;Creating pull requests with dependency updates&lt;/li&gt;
&lt;li&gt;Providing detailed change information, including changelog links and compatibility information&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Renovate's configuration-as-code approach means teams can tailor its behavior precisely to their needs. For example, a team might use a configuration like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"extends"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"config:base"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"packageRules"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"matchUpdateTypes"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"minor"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"patch"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"matchCurrentVersion"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"!/^0/"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"automerge"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration would automatically merge non-breaking updates for stable dependencies, while requiring manual review for major versions or pre-1.0 packages.&lt;/p&gt;

&lt;h3&gt;
  
  
  Dependabot
&lt;/h3&gt;

&lt;p&gt;GitHub's built-in Dependabot offers streamlined dependency management for GitHub repositories. Its key features include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Automated security updates&lt;/strong&gt;: Dependabot can automatically create PRs when security vulnerabilities are detected&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scheduled version updates&lt;/strong&gt;: Regular checks for newer versions based on configurable schedules&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ecosystem support&lt;/strong&gt;: Works with major package ecosystems including npm, Maven, NuGet, and many others&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A typical Dependabot configuration might look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
&lt;span class="na"&gt;updates&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;package-ecosystem&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;npm"&lt;/span&gt;
    &lt;span class="na"&gt;directory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/"&lt;/span&gt;
    &lt;span class="na"&gt;schedule&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;weekly"&lt;/span&gt;
    &lt;span class="na"&gt;open-pull-requests-limit&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
    &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;dependencies"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  CI/CD Integration
&lt;/h2&gt;

&lt;p&gt;The true power of dependency scanning emerges when integrated into CI/CD pipelines:&lt;/p&gt;

&lt;h3&gt;
  
  
  GitHub Actions Example
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Dependency Security Check&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;main&lt;/span&gt; &lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;pull_request&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;main&lt;/span&gt; &lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;schedule&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;cron&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;0&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;7&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;1'&lt;/span&gt;  &lt;span class="c1"&gt;# Weekly on Mondays&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;security&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v3&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Run dependency security scan&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dependency-check/dependency-check-action@v2&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;project&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;My&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Project'&lt;/span&gt;
          &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;.'&lt;/span&gt;
          &lt;span class="na"&gt;format&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;HTML'&lt;/span&gt;
          &lt;span class="na"&gt;out&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;reports'&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Upload results&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/upload-artifact@v3&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dependency-check-report&lt;/span&gt;
          &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;reports&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  GitLab CI Example
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;dependency_scanning&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test&lt;/span&gt;
  &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;dependency-check --project "My Project" --scan . --format JSON --out reports/dependency-check-report.json&lt;/span&gt;
  &lt;span class="na"&gt;artifacts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;reports/dependency-check-report.json&lt;/span&gt;
    &lt;span class="na"&gt;reports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;dependency_scanning&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;reports/dependency-check-report.json&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$CI_COMMIT_BRANCH == "main"&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$CI_PIPELINE_SOURCE == "merge_request_event"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Key Benefits at a Glance
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Reduced security risk&lt;/strong&gt;: Automatically identify and remediate vulnerabilities before they can be exploited&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Developer productivity&lt;/strong&gt;: Eliminate tedious manual dependency reviews&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Operational efficiency&lt;/strong&gt;: Detect compatibility issues before they cause production incidents&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compliance support&lt;/strong&gt;: Maintain audit trails of dependency updates and vulnerability remediations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Technical debt prevention&lt;/strong&gt;: Prevent dependency versions from falling too far behind current releases&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Automated dependency scanning is no longer a "nice-to-have" but a critical component of modern software development.  Trying to manage dependencies manually is like trying to update a card catalog in a library that adds 10,000 new books every day. You might start with good intentions, but you'll be buried under index cards by lunchtime.&lt;/p&gt;

&lt;p&gt;By implementing automated dependency scanning with tools like Renovate and Dependabot, and integrating them into CI/CD pipelines, organizations can significantly reduce security risks while increasing development velocity. The key lies in creating systems that provide consistent security guardrails while remaining flexible enough to accommodate different team workflows and project requirements.&lt;/p&gt;

&lt;p&gt;In a world where software supply chain attacks continue to rise, automated dependency scanning represents one of the most effective measures organizations can take to protect their applications and their customers.&lt;/p&gt;

</description>
      <category>automation</category>
      <category>cicd</category>
      <category>devops</category>
      <category>security</category>
    </item>
    <item>
      <title>Comparing an IT Department’s DevOps Journey to a Personal Health and Wellness Journey</title>
      <dc:creator>Ed LeGault</dc:creator>
      <pubDate>Wed, 11 Sep 2024 13:39:05 +0000</pubDate>
      <link>https://forem.com/leading-edje/comparing-an-it-departments-devops-journey-to-a-personal-health-and-wellness-journey-be7</link>
      <guid>https://forem.com/leading-edje/comparing-an-it-departments-devops-journey-to-a-personal-health-and-wellness-journey-be7</guid>
      <description>&lt;p&gt;Embarking on a DevOps transformation within an IT department is much like committing to a personal health and wellness journey. Both quests require dedication, perseverance, and incremental improvements. Through this article, we will use the metaphor of a personal health journey to compare the stages and challenges of a DevOps transformation. Each section corresponds to common ground between the two pathways, aiming to make the often complex IT processes more relatable. By recognizing these parallels, readers can gain a clearer understanding of the DevOps journey, glean insights into best practices, and perhaps even find humor in the shared struggles and triumphs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Initial Realization: "Something Needs to Change"
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The phase when the need for transformation becomes apparent.
&lt;/h3&gt;

&lt;h3&gt;
  
  
  Personal Health and Wellness:
&lt;/h3&gt;

&lt;p&gt;Jane, our wellness hero, has an epiphany when she finds herself winded after climbing just a few stairs. She knows it’s time for a change—in her diet, exercise routine, and overall lifestyle. Maybe it’s that moment of silent negotiation with her jittery heartbeat that marks the turning point.&lt;/p&gt;

&lt;h3&gt;
  
  
  The IT Department:
&lt;/h3&gt;

&lt;p&gt;The moment comes when the IT department realizes its deployment processes are inefficient, and its operations are siloed. It needs to embrace DevOps to stay competitive. Picture the IT manager waking up in the middle of the night, drenched in sweat, after yet another server crash caused by deployment glitches.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting Goals: "Getting in Shape"
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Identifying and defining clear, actionable objectives.
&lt;/h3&gt;

&lt;h3&gt;
  
  
  Personal Health and Wellness:
&lt;/h3&gt;

&lt;p&gt;Jane sets her &lt;a href="https://www.atlassian.com/blog/productivity/how-to-write-smart-goals#:~:text=What%20are%20SMART%20goals%3F,within%20a%20certain%20time%20frame." rel="noopener noreferrer"&gt;SMART goals&lt;/a&gt;: Specific, Measurable, Achievable, Relevant, and Time-bound.  She decides to start with &lt;strong&gt;Improve cardiovascular health, run 5k without stopping in the next 3 months, and practice mindfulness daily&lt;/strong&gt;. These goals are written on a sticky note and slapped on the fridge, right next to the emergency chocolate stash.&lt;/p&gt;

&lt;h3&gt;
  
  
  The IT Department:
&lt;/h3&gt;

&lt;p&gt;The DevOps journey also starts with setting SMART goals. Objectives like &lt;strong&gt;increase automated test coverage and reduce manual testing by 50% within the next 6 months&lt;/strong&gt; and &lt;strong&gt;increase deployment frequency from once every 2 weeks to daily within the next year&lt;/strong&gt; become the strategic goals that all other tactical efforts support.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Plan: "Action Items"
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Constructing a detailed roadmap to achieve the set goals.
&lt;/h3&gt;

&lt;h3&gt;
  
  
  Personal Health and Wellness:
&lt;/h3&gt;

&lt;p&gt;Jane signs up for a yoga class, buys a flashy new pair of running shoes, and downloads a meditation app that she swears she will use daily.&lt;/p&gt;

&lt;h3&gt;
  
  
  The IT Department:
&lt;/h3&gt;

&lt;p&gt;The team plans to adopt continuous integration and continuous deployment (CI/CD) pipelines, containerization with Docker, and automation with GitHub Actions.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Initial Struggles: "Why Did I Sign Up for This?"
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The early, often challenging stages of implementing change.
&lt;/h3&gt;

&lt;h3&gt;
  
  
  Personal Health and Wellness:
&lt;/h3&gt;

&lt;p&gt;Jane hits the yoga mat but discovers that "Downward Dog" looks easier on Instagram than it performs in real life. She also confronts the brutal reality that kale smoothies are an acquired taste.&lt;/p&gt;

&lt;h3&gt;
  
  
  The IT Department:
&lt;/h3&gt;

&lt;p&gt;The IT team finds itself wrestling with complex YAML configurations and cryptic error logs at ungodly hours. GitHub Actions and Docker might as well be foreign languages, and someone is frantically Googling "How to fix broken pipelines at 2 AM."&lt;/p&gt;

&lt;h2&gt;
  
  
  Small Wins: "Things Are Looking Up"
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Experiencing initial successes that indicate progress.
&lt;/h3&gt;

&lt;h3&gt;
  
  
  Personal Health and Wellness:
&lt;/h3&gt;

&lt;p&gt;Jane manages to hold a plank for an entire minute and finishes a 5k run without stopping. She rewards herself with a delicious smoothie and a little happy dance.&lt;/p&gt;

&lt;h3&gt;
  
  
  The IT Department:
&lt;/h3&gt;

&lt;p&gt;After several weeks of trial and error, the team finally automates its first successful multi-environment deployment. Imagine the celebration: high-fives all around, even from the department’s token cynic.&lt;/p&gt;

&lt;h2&gt;
  
  
  Plateaus: The Dreaded Standstill
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Encountering moments where progress seems to halt.
&lt;/h3&gt;

&lt;h3&gt;
  
  
  Personal Health and Wellness:
&lt;/h3&gt;

&lt;p&gt;Jane hits a plateau where her energy levels refuse to improve despite her routine. She contemplates giving up after seeing no progress for two weeks despite sticking to her regimen. Netflix and binge-watching start to whisper her name again (loudly).&lt;/p&gt;

&lt;h3&gt;
  
  
  The IT Department:
&lt;/h3&gt;

&lt;p&gt;Even with automation, some deployments still fail, and those unexpected outages seem to pop up just when everything feels smooth. The team realizes that DevOps is not a “set it and forget it” deal; constant tweaking is needed.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Breakthrough: "Hard Work Pays Off"
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Achieving significant milestones that validate the journey.
&lt;/h3&gt;

&lt;h3&gt;
  
  
  Personal Health and Wellness:
&lt;/h3&gt;

&lt;p&gt;Jane breaks through her plateau by consulting a nutritionist and adjusting her routine. She reaches her wellness goals and feels healthier than ever. She even starts to enjoy the yoga sessions—well, sort of.&lt;/p&gt;

&lt;h3&gt;
  
  
  The IT Department:
&lt;/h3&gt;

&lt;p&gt;Finally, the team's persistence pays off. They achieve near-instantaneous, fail-safe deployments, and collaboration between developers and operations has never been better. Scaling new services is a breeze.&lt;/p&gt;

&lt;h2&gt;
  
  
  Maintenance Mode: "Staying the Course"
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Ensuring that the achieved goals are upheld through continuous effort.
&lt;/h3&gt;

&lt;h3&gt;
  
  
  Personal Health and Wellness:
&lt;/h3&gt;

&lt;p&gt;Jane also realizes that maintaining health and wellness is a lifelong commitment. She keeps up her exercise regimen and healthy eating, allowing herself occasional treats without guilt.&lt;/p&gt;

&lt;h3&gt;
  
  
  The IT Department:
&lt;/h3&gt;

&lt;p&gt;Having achieved DevOps Zen, the team knows that maintenance is crucial. They continuously monitor systems, gather feedback, and look for further optimizations. It’s a lifestyle now, not a one-time sprint.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion: The Journey Continues
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Reflecting on the path taken and the continuous nature of improvement.
&lt;/h3&gt;

&lt;p&gt;The DevOps journey and a personal health and wellness journey both illustrate that achieving and maintaining success requires effort, resilience, and adaptability. Both paths are dotted with challenges and triumphs, but in the end, the rewards make the journey worthwhile.&lt;/p&gt;

&lt;p&gt;So, whether you're an IT professional automating deployments or a wellness enthusiast jogging off those last few calories, remember: the journey might be tough, but the destination is worth it. And when in doubt, there’s always room for a little humor and a well-deserved break (or cheat day).&lt;/p&gt;

</description>
      <category>devops</category>
      <category>career</category>
      <category>funny</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Harnessing DORA Metrics and EBM for Enhanced Value Delivery: An Integrated Approach</title>
      <dc:creator>Ed LeGault</dc:creator>
      <pubDate>Fri, 22 Mar 2024 18:22:47 +0000</pubDate>
      <link>https://forem.com/leading-edje/harnessing-dora-metrics-and-ebm-for-enhanced-value-delivery-an-integrated-approach-12h</link>
      <guid>https://forem.com/leading-edje/harnessing-dora-metrics-and-ebm-for-enhanced-value-delivery-an-integrated-approach-12h</guid>
      <description>&lt;p&gt;In the fast-paced realm of software development, continuous improvement is vital for maintaining competitiveness and delivering value to users effectively. Two instrumental frameworks that aid organizations in this journey are the DORA metrics and Evidence-Based Management (EBM). DORA metrics offer a detailed view of the team's software delivery capabilities, while EBM provides a broader perspective on the overall strategic value delivery of products. Understanding how these two sets of measurements complement each other is crucial for organizations striving to optimize their delivery processes and enhance value creation systematically. This article delves into the essence of DORA metrics, explores the pillars of EBM, and unravels how a combined view of both can offer a compelling blueprint for delivering software with efficiency and precision.&lt;/p&gt;

&lt;h3&gt;
  
  
  DORA Metrics
&lt;/h3&gt;

&lt;p&gt;DORA metrics are a set of four key measurements that have become a standard for gauging the effectiveness of software development and delivery teams. The term "DORA" refers to the DevOps Research and Assessment team, which was instrumental in developing these metrics. They are derived from years of research and analysis of data collected from thousands of IT professionals and are a product of the annual &lt;em&gt;State of DevOps&lt;/em&gt; reports.&lt;/p&gt;

&lt;p&gt;The four DORA metrics are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Deployment Frequency (DF):&lt;/strong&gt; Measures how often an organization successfully releases to production. High-performing teams tend to deploy more frequently, with deployments ranging from multiple times per day to once every few months, depending on the organization's goals and the application's complexity.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Lead Time for Changes (LT):&lt;/strong&gt; Refers to the amount of time it takes for a change to go from code committed to code successfully running in production. This metric helps organizations understand their speed in moving from an idea to a deliverable product.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Time to Restore Service (TRS):&lt;/strong&gt; Measures how long it takes an organization to recover from a failure in production. This metric gives an indication of an organization’s resilience and reliability by looking at its ability to quickly address and mitigate failures.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Change Failure Rate (CFR):&lt;/strong&gt; Indicates the percentage of deployments that cause a failure in production. This metric provides insight into the stability of the software delivery process and the risk associated with a release.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These metrics allow organizations to benchmark their performance against industry standards and identify areas for improvement in their software delivery process.&lt;/p&gt;
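&lt;p&gt;As a back-of-the-envelope illustration (my own sketch, not part of the DORA research itself), the four metrics can be computed from a simple log of deployment records. The record fields and sample values here are assumptions:&lt;/p&gt;

```python
from datetime import datetime, timedelta

# Hypothetical deployment log over a one-week window.
# Each record: commit time, deploy time, whether it caused a
# production failure, and (if so) when service was restored.
deployments = [
    {"committed": datetime(2024, 3, 1, 9), "deployed": datetime(2024, 3, 1, 15),
     "failed": False, "restored": None},
    {"committed": datetime(2024, 3, 2, 10), "deployed": datetime(2024, 3, 3, 11),
     "failed": True, "restored": datetime(2024, 3, 3, 12)},
    {"committed": datetime(2024, 3, 4, 8), "deployed": datetime(2024, 3, 4, 20),
     "failed": False, "restored": None},
]

window_days = 7  # reporting window

# Deployment Frequency: deployments per day over the window
df = len(deployments) / window_days

# Lead Time for Changes: mean commit-to-deploy time
lead_times = [d["deployed"] - d["committed"] for d in deployments]
lt = sum(lead_times, timedelta()) / len(lead_times)

# Change Failure Rate: share of deployments that caused a failure
failures = [d for d in deployments if d["failed"]]
cfr = len(failures) / len(deployments)

# Time to Restore Service: mean failure-to-recovery time
trs = sum((d["restored"] - d["deployed"] for d in failures), timedelta()) / len(failures)

print(f"DF:  {df:.2f} deploys/day")
print(f"LT:  {lt}")
print(f"CFR: {cfr:.0%}")
print(f"TRS: {trs}")
```

&lt;p&gt;In practice these numbers would come from your CI/CD system and incident tracker rather than hand-entered records.&lt;/p&gt;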

&lt;h3&gt;
  
  
  How DORA Metrics Relate to EBM
&lt;/h3&gt;

&lt;p&gt;EBM, or Evidence-Based Management, is a framework from Scrum.org used to help organizations measure, manage, and increase the value they derive from their product development initiatives. It focuses on empirically measuring and evaluating outcomes to guide improvements and decision-making. EBM highlights four Key Value Areas (KVAs):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Current Value (CV):&lt;/strong&gt; Reflects the value that the software currently delivers to customers and stakeholders.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Ability to Innovate (A2I):&lt;/strong&gt; Represents the capability of a product development organization to deliver new capabilities that might produce more value.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Time to Market (TTM):&lt;/strong&gt; Corresponds to the speed and efficiency with which an organization can deliver new features, enhancements, and fixes to the market.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Unrealized Value (UV):&lt;/strong&gt; Denotes the potential but unrealized value in the market that the organization could capture by fulfilling additional customer needs and requirements.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;DORA metrics relate to EBM primarily by providing key insights into an organization's 'Ability to Innovate' and 'Time to Market'. Specifically:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Deployment Frequency&lt;/strong&gt; and &lt;strong&gt;Lead Time for Changes&lt;/strong&gt; offer concrete data on the 'Time to Market', highlighting how quickly an organization can deliver new features, enhancements, and bug fixes to users.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Time to Restore Service&lt;/strong&gt; and &lt;strong&gt;Change Failure Rate&lt;/strong&gt; tie into the 'Ability to Innovate' by reflecting the stability and reliability of the current delivery process, which affects the team's capacity to work on new innovations rather than dealing with fixing issues.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both sets of metrics, DORA and EBM, complement each other as they provide empirical evidence that helps steer strategic decisions in an organization's journey towards continuous improvement. While DORA metrics give a more granular look at the software delivery performance, EBM offers a broader perspective on the overall value delivery and business outcomes. Together, they can help organizations align their technical practices with their business goals.&lt;/p&gt;

&lt;p&gt;Embracing DORA metrics and EBM is a strategic move for organizations committed to superior product delivery and business outcomes. By applying DORA metrics, teams can refine their software delivery efficiency, with enhanced deployment frequency, reduced lead times, rapid service restoration, and lower change failure rates. In parallel, integrating the insights from EBM helps ensure these improvements are in lockstep with the broader value goals of the organization. Learning how DORA metrics provide the empirical data within the 'Time to Market' and 'Ability to Innovate' Key Value Areas of EBM allows for a holistic approach to enhancing value delivery.&lt;/p&gt;

&lt;p&gt;The synergy between DORA metrics and EBM turns data into actionable insights, empowering teams to align their technological prowess with their strategic business needs. It bridges the gap between operational excellence and business outcome optimization. By leveraging the combined strengths of both frameworks, organizations can effectively navigate the complexities of modern software delivery, fostering a culture of continuous improvement and sustained competitive advantage. As the digital landscape continues to evolve, the integration of DORA metrics and EBM becomes an indispensable asset for organizations striving to meet the ever-growing expectations of their customers and stakeholders.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>The Costly Aftermath: Navigating the Financial Implications of Exiting a Code Freeze</title>
      <dc:creator>Ed LeGault</dc:creator>
      <pubDate>Tue, 02 Jan 2024 19:55:00 +0000</pubDate>
      <link>https://forem.com/leading-edje/the-costly-aftermath-navigating-the-financial-implications-of-exiting-a-code-freeze-fol</link>
      <guid>https://forem.com/leading-edje/the-costly-aftermath-navigating-the-financial-implications-of-exiting-a-code-freeze-fol</guid>
      <description>&lt;p&gt;Exiting a code freeze period poses unique challenges that can inadvertently increase costs, potentially negating the intended financial benefits of the implementation. Here are the primary concerns:&lt;/p&gt;

&lt;h3&gt;
  
  
  Logjam of Post-Freeze Development
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Merge Conflicts&lt;/strong&gt;: Backlog of changes may cause integration issues, requiring additional developer time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CI/CD Pipeline Stress&lt;/strong&gt;: Surge of updates can overwhelm automation systems, causing delays.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Quality Assurance Overload
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Testing Backlog&lt;/strong&gt;: QA is faced with numerous new changes to validate.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Human Error&lt;/strong&gt;: Increased testing volume may lead to oversights.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Additional Resources&lt;/strong&gt;: Expansion of the QA team may temporarily increase labor costs.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Disruption of Team Rhythm and Morale
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Transitioning Back&lt;/strong&gt;: Shifting gears from freeze to full-speed development can impact productivity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Team Morale&lt;/strong&gt;: The abrupt change in pace can negatively influence team motivation.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Market and Momentum Loss
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Competitive Lag&lt;/strong&gt;: Delays in releasing new features can result in falling behind competitors.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Revenue Delay&lt;/strong&gt;: Postponing feature deployment typically defers potential earnings.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Deferred Feature Deployment Costs
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Lost User Engagement&lt;/strong&gt;: Late feature introduction may reduce impact and adoption.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cascading Release Effect&lt;/strong&gt;: Overloading users with too many features at once can lessen their effectiveness.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Increased Technical Debt
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Quick Fixes Pre-Freeze&lt;/strong&gt;: Rushed solutions before a freeze often lead to long-term complications.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deferred Maintenance&lt;/strong&gt;: Addressing post-freeze technical debt can be costlier.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Handling Post-Freeze Bugs
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Undetected Issues&lt;/strong&gt;: New commits might introduce previously undetected bugs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fixing and Testing&lt;/strong&gt;: Debugging and validation expenses can disrupt normal development.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Rather than facing the myriad risks associated with a code freeze and its subsequent thaw, organizations might find it more financially sound to build a robust CI/CD process that they can trust to handle changes continuously and avoid code freezes altogether. By focusing on the following, companies can maintain software quality and reduce costs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Mature CI/CD Pipelines&lt;/strong&gt;: Implement and refine automation tools to handle integration and delivery smoothly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Comprehensive Testing&lt;/strong&gt;: Develop a thorough and automated testing suite to catch issues early.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Incremental Changes&lt;/strong&gt;: Encourage small, frequent updates to the codebase to minimize disruption and facilitate easier testing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Continuous Feedback Loops&lt;/strong&gt;: Use real-time monitoring and feedback to constantly improve processes and product quality.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DevOps Best Practices&lt;/strong&gt;: Adopt a culture of rapid iteration, cross-functional collaboration, and transparency.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In embracing these principles, organizations can mitigate the risks associated with large batch changes and code deployment backlogs. By steering clear of code freezes, development teams can deliver features and fixes that respond more promptly to market demands and user feedback, without the potential financial downsides of a freeze and thaw cycle.&lt;/p&gt;

&lt;p&gt;Continuous delivery is not just about releasing more often, but also about creating a more effective and collaborative development environment. This mindset shift allows firms to operate with greater agility, ultimately saving money by consistently delivering value to users and staying competitive in a dynamic market.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/leading-edje"&gt;&lt;br&gt;
  &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6_daMlK2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/5uo60qforg9yqdpgzncq.png" alt="Smart EDJE Image" width="800" height="280"&gt;&lt;br&gt;
&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>cicd</category>
    </item>
    <item>
      <title>How Daily Code Deployments Improve Communication and Attention to Detail</title>
      <dc:creator>Ed LeGault</dc:creator>
      <pubDate>Fri, 22 Sep 2023 17:24:48 +0000</pubDate>
      <link>https://forem.com/leading-edje/how-daily-code-deployments-improve-communication-and-attention-to-detail-4ako</link>
      <guid>https://forem.com/leading-edje/how-daily-code-deployments-improve-communication-and-attention-to-detail-4ako</guid>
      <description>&lt;p&gt;The adoption of daily deployment strategies brings about multiple benefits that fuel the vitality of software development operations. The ability to release smaller portions of code daily eases the monitoring of errors, diminishes the risks associated with changes, and enables more rapid mitigation. When combined with Agile and DevOps practices, this tactic fosters a significant change in mindset, promoting a sense of teamwork and collective ownership.&lt;/p&gt;

&lt;h2&gt;
  
  
  Shared Responsibility in Focus
&lt;/h2&gt;

&lt;p&gt;The concept of shared responsibility is a cornerstone of effective teamwork. In a software environment, it means that everyone, from developers, testers, and system engineers to project managers and stakeholders, takes mutual responsibility for the application's performance in production.&lt;/p&gt;

&lt;p&gt;When everyone bears shared responsibility, there is profound awareness in decision making as each choice can impact not only the individual performing a specific task but also the whole team, the final product, and ultimately, the end-users. This shared sense of accountability motivates team members to fully invest themselves in performing tasks with the highest level of accuracy and precision.&lt;/p&gt;

&lt;h2&gt;
  
  
  Amplified Communication
&lt;/h2&gt;

&lt;p&gt;As every team member is closely involved in the code's lifecycle, there is a sense of mutual involvement and dependency.&lt;/p&gt;

&lt;p&gt;This mutual obligation encourages proactive communication, fostering active discussions, idea creation, and continuous feedback. It urges team members to provide their insights, suggest enhancements, and deliberate on potential issues. This collaborative approach breaks down barriers, allowing for faster resolution of problems.&lt;/p&gt;

&lt;h2&gt;
  
  
  Elevated Attention to Detail
&lt;/h2&gt;

&lt;p&gt;Code being deployed every day invariably encourages a meticulous approach towards tasks, as every modification could have a considerable impact on the production environment. The introduction of daily code deployment means every commit carries potential implications for the live product. This forces team members to pay more attention to detail in the form of more thorough code reviews and unit testing.&lt;/p&gt;

&lt;p&gt;Moreover, this heightened sense of detail encourages continuous learning and improvement. Even the smallest missteps get converted into valuable lessons, pushing the team towards refining their practices and increasing their efficiency in future deployments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Integrating daily code deployments with a culture of shared responsibility establishes a framework promoting open communication and meticulous attention to detail. This union empowers team members to make informed decisions, and be held accountable for their actions, leading to a more effective, less error-prone, and collaborative workplace.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/leading-edje"&gt;&lt;br&gt;
  &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6_daMlK2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/i/5uo60qforg9yqdpgzncq.png" alt="Smart EDJE Image" width="800" height="280"&gt;&lt;br&gt;
&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>cicd</category>
    </item>
    <item>
      <title>Is a DevOps Team a Thing?</title>
      <dc:creator>Ed LeGault</dc:creator>
      <pubDate>Fri, 16 Dec 2022 20:15:38 +0000</pubDate>
      <link>https://forem.com/leading-edje/is-a-devops-team-a-thing-1fhd</link>
      <guid>https://forem.com/leading-edje/is-a-devops-team-a-thing-1fhd</guid>
      <description>&lt;p&gt;I can see some people reading this right now thinking "well, yes, I am a member of that thing", or "there is no such thing".  New tools and the name have caused a lot of confusion around this question.  Let's take a look at why things are confusing and why the answer is an interesting one.&lt;/p&gt;

&lt;h2&gt;
  
  
  Good idea, terrible name
&lt;/h2&gt;

&lt;p&gt;The term DevOps has become overloaded and misused in a variety of ways over the years.  Some people still think DevOps is a thing they can buy and "boom", they have DevOps.  It doesn't help that the word has crept into the names of software packages and tools, making it even more confusing.  The principles behind pure DevOps theory are a set of practices that strive to create a culture that can reliably meet the goals of your organization.  It is more than just combining "Development" and "Operations"; it includes automated testing, security, and monitoring.  I guess DevSecQualMonOps doesn't have the same ring to it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Someone needs to bring it all together
&lt;/h2&gt;

&lt;p&gt;Notice I said that DevOps is a set of practices.  It isn't really a set of tools.  Sure, there are a lot of really cool tools that help you accomplish your DevOps goals.  Concepts and tools such as containerization, CI/CD pipelines, and automated tests make streamlined DevOps principles achievable.  For example, you can create a CI/CD pipeline that builds your application, packages it into a container, runs a security scan against it, and fails the pipeline if a violation is detected.  That shortens feedback loops, giving a developer fast feedback on whether they have introduced a security vulnerability.  Ideally there would be an IDE plugin or script the developer could run locally that would report the same results before they committed their code.  In this example, who built the pipeline and figured out how to run the security scan?  Hopefully they also created something reusable, so other application pipelines can leverage the same tools without reinventing the wheel.&lt;/p&gt;
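&lt;p&gt;A minimal sketch of such a pipeline, written here in Python so the stages are easy to follow.  The scanner choice (Trivy) and the image tag are my assumptions, not a prescription; the command runner is injectable so the flow can be exercised without Docker installed:&lt;/p&gt;

```python
import subprocess


def run_pipeline(image, run=subprocess.run):
    """Build, scan, and (if the scan passes) push a container image.

    Any step that exits non-zero raises CalledProcessError via
    check=True, which fails the whole pipeline. The `run` callable
    is injectable so the stage ordering can be tested without Docker.
    """
    steps = [
        ["docker", "build", "-t", image, "."],            # build the app image
        ["trivy", "image", "--exit-code", "1",
         "--severity", "HIGH,CRITICAL", image],           # fail on serious CVEs
        ["docker", "push", image],                        # only reached if scan passed
    ]
    for cmd in steps:
        run(cmd, check=True)
    return steps
```

&lt;p&gt;Wrapping the stages in one reusable function (or a shared workflow template) is exactly the kind of thing that keeps every application pipeline from reinventing the wheel.&lt;/p&gt;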

&lt;h2&gt;
  
  
  Enter Platform Engineering
&lt;/h2&gt;

&lt;p&gt;To leverage these tools and procedures, it takes a team of people to build and maintain them.  Now that things like build pipelines and cloud infrastructure are defined and implemented "as code", they can incur technical debt.  Software needs to be upgraded, and common pieces need to be templated and made available for reuse.  Think of it as paving the "yellow brick road" for everyone to follow on their software development journey, with the goal of optimizing developer experience and productivity.  The term for this is now "Platform Engineering".&lt;/p&gt;

&lt;h2&gt;
  
  
  Enter Site Reliability Engineering
&lt;/h2&gt;

&lt;p&gt;Similar to Platform Engineering, Site Reliability Engineering leverages tools and procedures, but to ensure the reliability of the application.  This could take the form of dashboards that show the performance and availability of an application by leveraging monitoring and display tools.  It could mean using software to define Service-Level Objectives (SLOs) that determine whether someone needs to be alerted or an application needs to be restarted automatically.  Another example is providing the monitoring to ensure that a release that degrades stability is automatically rolled back.&lt;/p&gt;
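&lt;p&gt;To make the SLO idea concrete, here is a small Python sketch of an error-budget check.  The target, thresholds, and actions are illustrative assumptions, not SRE doctrine:&lt;/p&gt;

```python
def evaluate_slo(total_requests, failed_requests, slo_target=0.999):
    """Return (availability, budget_remaining) for an availability SLO.

    The error budget is the allowed failure fraction (1 - target);
    budget_remaining is the share of that budget still unspent.
    """
    availability = 1 - failed_requests / total_requests
    allowed_failures = (1 - slo_target) * total_requests
    budget_remaining = 1 - failed_requests / allowed_failures
    return availability, budget_remaining


def action(budget_remaining):
    # Illustrative policy: page a human when the budget is nearly gone,
    # trigger an automated rollback once it is fully exhausted.
    if budget_remaining > 0.2:
        return "ok"
    if budget_remaining > 0:
        return "alert"
    return "rollback"
```

&lt;p&gt;In a real setup these numbers would be fed by monitoring tooling, and the "rollback" action would be wired into the deployment system rather than returned as a string.&lt;/p&gt;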

&lt;h2&gt;
  
  
  So what is the answer?
&lt;/h2&gt;

&lt;p&gt;If your DevOps team is really just a renamed Operations team, then no, that isn't a thing.  If your DevOps team supports developer productivity with reusable templates and components, it is probably a Platform Engineering team.  If it provides value in the form of site reliability, with monitoring and tooling that assist production deployments, it is probably a Site Reliability Engineering team.  Maybe it is both and can be split into two teams, and yes, you got all the way to the end of this article to find out that the answer is "maybe".  However, I encourage you to learn more about how the term DevOps has evolved into "Platform Engineering" and "Site Reliability Engineering".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/leading-edje"&gt;&lt;br&gt;
  &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SfUhPiEd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5uo60qforg9yqdpgzncq.png" alt="Smart EDJE Image" width="800" height="280"&gt;&lt;br&gt;
&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>culture</category>
      <category>sre</category>
    </item>
    <item>
      <title>What have we learned now that everything is in containers?</title>
      <dc:creator>Ed LeGault</dc:creator>
      <pubDate>Tue, 06 Dec 2022 20:09:02 +0000</pubDate>
      <link>https://forem.com/leading-edje/what-have-we-learned-now-that-everything-is-in-containers-192b</link>
      <guid>https://forem.com/leading-edje/what-have-we-learned-now-that-everything-is-in-containers-192b</guid>
      <description>&lt;p&gt;Now that the concept of containerization has been around for quite a few years we can take a step back and look at the lessons learned from turning everything into containers.  You will be hard pressed to find an environment that doesn't either package the resulting build artifact as an image or at least use build agents that run steps inside of containers.  Let's take a look at how this new, and now current, world is shaping up and what we can learn from it.&lt;/p&gt;

&lt;h2&gt;
  
  
  What has gone well?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Building something once and running it everywhere has led to consistent test results, because if it worked in one environment it should work in another.  Right?  Right?  Well, that is the goal.  That reminds me of a key tenet of CI/CD and containerization: don't rebuild the image per environment.  Re-tag, don't rebuild.  As long as you are doing that, running the same image in production that you tested in lower environments is a good thing.  No more "it worked on my machine".&lt;/li&gt;
&lt;li&gt;You can change the entire internal workings of the application and it will still run in a consistent manner.  This is cool.  You can move your application from a Spring app running in a Tomcat-based image to Spring Boot, and it will still start, stop, and run consistently as a container.  As long as your application adheres to the API contract, ports, and monitoring requirements, the tools that run and monitor it don't really care what it is made of on the inside.&lt;/li&gt;
&lt;li&gt;Robust runtime orchestration tools can provide scaling, monitoring, healing, and automated deployment because they are built to handle containers.  The most famous example is Kubernetes.  You can scale your application and perform rolling upgrades just by using the provided tooling, as long as it is dealing with a containerized application, of course.  This is a plus, but I have a feeling Kubernetes is going to come up again in another category.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What has not gone well?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;The learning curve of application developers having to know more about the infrastructure and packaging of their application.  Producing an image as a deployment artifact moves responsibility for the runtime infrastructure to the development team, which may have had no previous knowledge of what their application was running on or how.  This could be seen as a good thing once the knowledge gap is overcome.  However, there are many organizations where developers "just write code and are paid to pump out features".  Although this line of thinking is a DevOps anti-pattern, it is a necessary evil in the IT world that we live in.&lt;/li&gt;
&lt;li&gt;Applications need to take their configuration values in as environment variables.  Too often, applications are built and packaged with dev, QA and prod configuration files bundled into the image.  This needs to be avoided in favor of environment variables and, in most cases, requires application code changes to accommodate it.&lt;/li&gt;
&lt;li&gt;The learning curve for orchestration templating options such as docker-compose or, if you are using Kubernetes, tools like helm or kustomize.  Orchestrating the application usually means defining resources in YAML files, and some values need to differ per environment, which in turn means templating those YAML resources.  Fully utilizing these tools requires learning new concepts and new software.&lt;/li&gt;
&lt;/ul&gt;
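&lt;p&gt;The environment-variable point above can be sketched in a few lines.  &lt;code&gt;DB_URL&lt;/code&gt; and &lt;code&gt;LOG_LEVEL&lt;/code&gt; are illustrative names with safe local defaults; this is the pattern rather than any particular app's configuration:&lt;/p&gt;

```shell
# Read configuration from the environment, falling back to local
# defaults. The same image then runs in any environment; only the
# variables passed to it change.
DB_URL="${DB_URL:-jdbc:postgresql://localhost:5432/app}"
LOG_LEVEL="${LOG_LEVEL:-INFO}"
echo "starting with DB_URL=${DB_URL} LOG_LEVEL=${LOG_LEVEL}"
```

&lt;p&gt;At deploy time the values come from the runtime, for example &lt;code&gt;docker run -e DB_URL=... -e LOG_LEVEL=DEBUG myapp&lt;/code&gt;, instead of from a properties file baked into the image.&lt;/p&gt;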

&lt;h2&gt;
  
  
  What are the lessons learned?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Using containers and containerization concepts does not mean Docker.  There are alternatives to Docker, such as Podman (&lt;a href="https://podman.io/"&gt;https://podman.io/&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;The complexity of orchestration options is leading some developers to become subject matter experts in that particular software.  Again, the best example of this is Kubernetes (I told you it was coming up again).  Fully utilizing Kubernetes is an art form in and of itself, and in most cases it requires a person, or a team of people, to handle the complexity of properly scaling, rolling out or rolling back the application being deployed.  This can be seen as a return to the "throw it over the wall" problem between dev and ops, with the difference being that the wall has moved.  Instead, it is important for developers to understand that their application runs as a container in Kubernetes and what that means.  In most cases that just means having liveness and readiness endpoints available and knowing whether a change they made will impact the current memory settings.  They don't have to be Kubernetes experts, but they should be aware of the impact of their changes on the orchestration that runs the application.&lt;/li&gt;
&lt;li&gt;Cool tools can mask something that is wrong with the application.  Automatic scaling, healing and restarting the application when it goes down are all awesome capabilities.  However, someone still needs to go back and look at why the application crashes every day around the same time, or why it gets restarted every two days.  Just because the tools recover from the problem doesn't mean there isn't really a problem.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Overall, containerization has proven itself to be useful and valuable, and I think the pros far outweigh the cons.  Containers are not always the right tool for the job, though, and sometimes solutions such as serverless are the answer; it greatly depends on the requirements of the problem you are trying to solve.  It also makes me wonder whether I will be retired by the time someone gets hired to convert the applications I have containerized into the next thing, whatever that is.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/leading-edje"&gt;&lt;br&gt;
  &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SfUhPiEd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5uo60qforg9yqdpgzncq.png" alt="Smart EDJE Image" width="800" height="280"&gt;&lt;br&gt;
&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>docker</category>
      <category>containerapps</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Going Back to the Basics</title>
      <dc:creator>Ed LeGault</dc:creator>
      <pubDate>Fri, 27 May 2022 19:52:04 +0000</pubDate>
      <link>https://forem.com/leading-edje/going-back-to-the-basics-5dji</link>
      <guid>https://forem.com/leading-edje/going-back-to-the-basics-5dji</guid>
      <description>&lt;p&gt;I recently attended my daughter's middle school band concert.  Since the kids are very early in their journey of learning an instrument, the school had requested that some senior high school band members sit in with them.  This gave me a flashback to my high school band days as the first chair trumpet player that thought I was better than I really was.  I was that person that got to sit in with the middle school band playing Hot Cross Buns.  It reminded me how fun it was to play very basic music and concentrate on playing beginner music as well as I possibly could.&lt;/p&gt;

&lt;p&gt;Thinking about how fun it is to go back to basics made me realize something interesting that I have done during my tech career.  I have presented "Intro to Docker" in multiple talks, presentations and conferences.  The slides and demos have changed some over the years but the basic material is the same.  It has been recorded multiple times and people can go back and watch the recordings versus me doing the talk again in a live setting.  I have also attended multiple conference presentations that are centered around Docker or Container basics.&lt;/p&gt;

&lt;p&gt;Why would I attend a talk that is very similar to material I have presented multiple times?  Well, to be the person in the back of the room who can't wait to ask a question they already know the answer to.  That is why.  There is always one of those people in every conference audience.  It might as well be me.  I am just kidding.  The real reason I go to conference talks on subjects I think I am an expert in is to prove to myself that I can always learn something new.  I have picked up little nuggets of wisdom from every session I have attended.&lt;/p&gt;

&lt;p&gt;It might be as simple as how the presenter organizes the slides.  In one talk I learned that you don't have to include the entire id of a running container when issuing commands.  I watched the presenter doing a demo clean up his running containers by issuing a stop command using the first three characters of the id.  In the middle of the presentation I blurted out a question without waiting my turn "wait, you can do that?", which I admit makes me that person at the conference as well.&lt;/p&gt;
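&lt;p&gt;That trick works because Docker accepts any unique prefix of a container id.  A sketch, with a made-up id:&lt;/p&gt;

```shell
# Docker resolves a container by any unique prefix of its id, so stopping
# "3f2" has the same effect as stopping the full id (the id below is made
# up for illustration; the docker command is printed rather than run).
CONTAINER_ID=3f2a1c9d8e7b
SHORT_ID=$(printf '%.3s' "$CONTAINER_ID")
echo "docker stop ${SHORT_ID}"
```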

&lt;p&gt;Too many times, I think, we as technologists constantly strive to work on or build something using the latest, greatest bleeding-edge technology.  There is something satisfying about seeing if you can make a Java app as clean and simple as possible, even if you have created hundreds of them in the past.  Take some time to revisit something you think you are an expert in.  Think of it as a coder therapy session.  Odds are you are going to enjoy it and probably learn something along the way.&lt;/p&gt;


</description>
      <category>beginners</category>
      <category>discuss</category>
      <category>career</category>
      <category>watercooler</category>
    </item>
    <item>
      <title>Docker as a Toolbox</title>
      <dc:creator>Ed LeGault</dc:creator>
      <pubDate>Tue, 22 Dec 2020 21:56:48 +0000</pubDate>
      <link>https://forem.com/leading-edje/docker-as-a-toolbox-26jo</link>
      <guid>https://forem.com/leading-edje/docker-as-a-toolbox-26jo</guid>
      <description>&lt;p&gt;Docker is by far the most popular software for containerization of applications. Containers make it easy to package, release, deploy and execute an application stack in a consistent and repeatable manner. However, there is more to Docker than packaging and orchestrating an application stack. Docker can also be used to build code, execute scripts and execute steps in a CI/CD pipeline.  You can also use docker to try out tools without needing to install them on your machine.  &lt;/p&gt;

&lt;h2&gt;
  
  
  Executing CI/CD Steps Within Containers
&lt;/h2&gt;

&lt;p&gt;There are many build tools, such as Jenkins, Bitbucket and GitLab, that support running pipeline steps within a Docker container. This has the following advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build dependencies like node, maven, java, etc. no longer need to be installed on the build server&lt;/li&gt;
&lt;li&gt;Application teams can dictate and maintain their build dependencies&lt;/li&gt;
&lt;li&gt;Developers can build code exactly the same way the build server does&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Typically each step in a pipeline can define what image to run as a container.  Here is an example of a GitLab pipeline YAML for a build step that uses maven:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;image: maven:latest

stages:
  - build

build:
  stage: build
  script:
    - mvn package
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Executing Build Steps Locally
&lt;/h2&gt;

&lt;p&gt;The same build steps that run during the CI/CD pipeline can also be run locally.  This can be done with Docker by running a container and volume-mapping the source location. A few examples are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;docker run --rm -v "$(pwd)":/usr/src/app -w /usr/src/app node:4 npm install&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;docker run --rm -v "$(pwd)":/usr/src/app -w /usr/src/app maven:3.5.3 mvn clean install&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;docker run --rm -v "$(pwd)":/usr/src/app -w /usr/src/app gradle:4.7 gradle clean build&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Executing Scripts
&lt;/h2&gt;

&lt;p&gt;Docker can also be used to execute other things needed for a release, such as scripts or data migration utilities. These can be packaged as a Docker image that is versioned along with the release they support. This makes the installation of a release repeatable and testable via automation, and it virtually removes the need for an installation playbook because those steps are authored within the Docker image. If scripts are packaged in an image named "migration-utils", you can run them and even pass parameters to them. For example, to execute a script named "runFileMigration.sh" with the argument "prod" from your "migration-utils" image, you would run the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;docker run --rm -v local-dir:/path/to/files --entrypoint=runFileMigration.sh migration-utils prod&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Notice that when you override the entrypoint, the trailing argument "prod" is passed as an argument to the defined entrypoint. In this example you also specify a volume pointing at the files the script needs access to.&lt;/p&gt;

&lt;h2&gt;
  
  
  Experimenting With Tools
&lt;/h2&gt;

&lt;p&gt;I recently attended a webinar that walked through a demo of a tool that performs static analysis of Kubernetes manifests.  The tool seemed quite interesting and they provided a link to the project on GitHub.  The README had installation instructions for Windows, Linux, Mac (Homebrew) and a Docker image.  I immediately copy/pasted the &lt;code&gt;docker run&lt;/code&gt; command and changed a few things for my particular file location.  I then went looking for similar tools to compare and contrast the output and see if I liked the functionality of one over another.  In each case, if a tool provides a Docker image I just run that.  This has two advantages.  The first is that I don't have to install each of these tools on my machine and then uninstall the ones I don't like.  The other is that once I have the &lt;code&gt;docker run&lt;/code&gt; command's parameters and settings the way I like them, I can easily convert that command to a corresponding declaration in a CI file.&lt;/p&gt;
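&lt;p&gt;As an illustration of that pattern, here is the general shape of trying a tool straight from its image.  The image name and flags follow checkov's documentation, but treat them as an example to adapt; the command is printed rather than executed:&lt;/p&gt;

```shell
# Mount the current directory into the tool's official image and point
# the tool at it. Nothing gets installed on the host.
echo 'docker run --rm --volume "$(pwd)":/src bridgecrew/checkov --directory /src'
```

&lt;p&gt;Once the flags are dialed in, the same command translates almost line for line into a CI job definition.&lt;/p&gt;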

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;There are many more examples of using Docker for things other than building images and running containerized applications.  Whether it be for CI/CD, executing scripts, experimenting with tools or anything else you can think of, feel free to try using a Docker image the next time you want to install or get something working.&lt;/p&gt;


</description>
      <category>docker</category>
      <category>devops</category>
    </item>
    <item>
      <title>Running Static Analysis on Kubernetes Manifests - Isn't Just For Code Anymore</title>
      <dc:creator>Ed LeGault</dc:creator>
      <pubDate>Fri, 18 Dec 2020 22:30:19 +0000</pubDate>
      <link>https://forem.com/leading-edje/running-static-analysis-on-kubernetes-manifests-isn-t-just-for-code-anymore-37n0</link>
      <guid>https://forem.com/leading-edje/running-static-analysis-on-kubernetes-manifests-isn-t-just-for-code-anymore-37n0</guid>
      <description>&lt;p&gt;Running static analysis during a CI/CD pipeline is typically used for the purposes of finding formatting issues, spotting common mistakes and ensuring consistency in code before it is even compiled or tested.  There are tools available that are capable of performing similar checks against kubernetes manifests and also helm charts.  Adding these checks will allow the team to find problems in kubernetes configuration before they are attempted to be applied or deployed.  These tools provide slightly different levels of functional value depending on what they are verifying.&lt;/p&gt;

&lt;h2&gt;
  
  
  API Based Validation
&lt;/h2&gt;

&lt;p&gt;There are tools available, such as &lt;a href="https://www.kubeval.com/"&gt;kubeval&lt;/a&gt;, that validate your Kubernetes manifests against a given API schema.  kubeval provides the option to pass a particular Kubernetes version when validating, to ensure that your manifest files conform to that version's schema.  Since a tool like this only validates that the YAML is correct against a particular schema and API, it does not check for security issues or allow for customization.  &lt;/p&gt;
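&lt;p&gt;A quick sketch of that kind of check.  The manifest and version are illustrative, and the flag name comes from kubeval's documentation; the validation only runs if kubeval happens to be installed:&lt;/p&gt;

```shell
# Write a minimal manifest, then validate it against a specific
# Kubernetes schema version. tee both prints the manifest and saves it.
printf 'apiVersion: v1\nkind: Service\nmetadata:\n  name: my-service\n' | tee service.yaml
if command -v kubeval; then
  kubeval --kubernetes-version 1.18.0 service.yaml
fi
```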

&lt;h2&gt;
  
  
  Pre-built Rule Based Validation
&lt;/h2&gt;

&lt;p&gt;Tools such as &lt;a href="https://github.com/bridgecrewio/checkov"&gt;checkov&lt;/a&gt; are rule based and validate your Kubernetes manifests against a set of rules, typically industry best practices and known security issues.  These tools can also be extended with your own custom rules to enforce any consistency requirements you have in your manifests.  For example, if you require each resource to have a certain label, you could add a custom rule to enforce that it always exists.&lt;/p&gt;

&lt;h2&gt;
  
  
  Helm Linting and Validation
&lt;/h2&gt;

&lt;p&gt;If you are using Helm charts, you can use most of the available tools to run the same checks, but you must first run a &lt;code&gt;helm template&lt;/code&gt; command and then send the resulting YAML output to the tool.  Helm also has a lint command that checks that the directory structure and prerequisite files exist to properly template and package the chart.&lt;/p&gt;
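&lt;p&gt;The Helm flow described above can be sketched as two commands.  The chart path is hypothetical, and since these need a chart and the tools installed, they are printed here rather than executed; kubeval's docs note it can read manifests from stdin:&lt;/p&gt;

```shell
# Lint the chart structure, then render the templates and pipe the
# resulting YAML to a validation tool that reads from stdin.
echo 'helm lint ./mychart'
echo 'helm template ./mychart | kubeval'
```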

&lt;h2&gt;
  
  
  Finding Problems Sooner is Good
&lt;/h2&gt;

&lt;p&gt;By adding some sort of static analysis of your Kubernetes files to your CI/CD pipeline, you can find and address configuration or consistency issues faster and easier.  Feel free to search around and look at different tools and options; I have included just a few here to start the journey.  You may find that you like one over another, or that something meets your needs for different reasons.  You also need to decide what you want to check and enforce.  Maybe you start by linting your Helm charts, and a second step is to add an opinion-based check.  It all comes down to what you find valuable at the time.&lt;/p&gt;


</description>
      <category>docker</category>
      <category>devops</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Finding the CI/CD "Sweet Spot"</title>
      <dc:creator>Ed LeGault</dc:creator>
      <pubDate>Wed, 16 Dec 2020 19:13:25 +0000</pubDate>
      <link>https://forem.com/leading-edje/finding-the-ci-cd-sweet-spot-3ifl</link>
      <guid>https://forem.com/leading-edje/finding-the-ci-cd-sweet-spot-3ifl</guid>
      <description>&lt;p&gt;Many IT groups and organizations already have some sort of continuous integration or continuous delivery in place.  If they don't they are probably in the process of wanting to implement some portion of CI/CD.  Each IT organization has their own culture and a particular way in which they develop, test and implement software.  There is not one particular way that they should implement CI/CD processes, procedures or pipelines but there are some principles they should keep in mind to find the ultimate CI/CD "sweet spot".  &lt;/p&gt;

&lt;h2&gt;
  
  
  What is the goal?
&lt;/h2&gt;

&lt;p&gt;The goal of any for-profit organization is to make money.  In the case of adding CI/CD processes and procedures, that translates to adding value.  According to the dictionary, a "sweet spot" is "an optimum point or combination of factors or qualities".  When setting out to provide CI/CD for a given product or piece of software, you need to continually keep an eye on what adds value and what does not.&lt;/p&gt;

&lt;h2&gt;
  
  
  How do you decide which things add value?
&lt;/h2&gt;

&lt;p&gt;You need to find the things that allow everyone contributing to the product to either do more with less or go faster.  Developers want to go faster and not make mistakes.  Although they may exist, I don't think I have ever met a developer who messed up on purpose.  If a mistake makes it into a piece of code, it is most likely because the developer either didn't know a better way to do it (and probably copy/pasted it) or had no idea it would break something else.  Most organizations already have some form of static analysis tooling, require or enforce some level of unit testing, and execute automated QA tests.  This is a great place to start adding value, but you shouldn't just add these checks to a pipeline without a few considerations.&lt;/p&gt;

&lt;h3&gt;
  
  
  Can the same check be executed locally?
&lt;/h3&gt;

&lt;p&gt;Steps need to be able to run in the developer's local environment as well, so developers learn about violations before those violations are found in the pipeline.  A step should not be introduced to the pipeline unless the same check can be performed and verified locally.&lt;/p&gt;

&lt;h3&gt;
  
  
  Should a failure during a particular check block the pipeline?
&lt;/h3&gt;

&lt;p&gt;Most of the time, when introducing static analysis or unit test coverage to an existing codebase, there are going to be a fair number of failures or low coverage.  If that is the case, it is good to go ahead and run the check and report on the results, but not fail the pipeline yet.  Failing the build is a goal you can work toward as the code becomes more stable and developers fix the things found in the report.&lt;/p&gt;
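&lt;p&gt;In GitLab CI, for example, this report-but-don't-block pattern is commonly expressed with the &lt;code&gt;allow_failure&lt;/code&gt; keyword.  The job name and analysis command below are illustrative:&lt;/p&gt;

```yaml
static-analysis:
  stage: test
  image: maven:latest
  script:
    - mvn verify            # hypothetical check that may still report violations
  allow_failure: true       # publish the results without failing the pipeline
```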

&lt;h3&gt;
  
  
  Does the team find the rules valuable?
&lt;/h3&gt;

&lt;p&gt;As the team progresses with a particular set of rules for either static analysis or coverage, it is good practice to routinely look at the violations and determine how valuable the different rules are to enforce.&lt;/p&gt;

&lt;h3&gt;
  
  
  Are automated tests really finding defects?
&lt;/h3&gt;

&lt;p&gt;The execution of automated tests needs to be repeatable and reliable.  If tests fail for reasons other than a code problem or a genuine difference from what the test expects, they need to be refactored and should not run in the CI/CD pipeline.  If failures are disregarded because of a volatile environment or data issues, those things need to be addressed before executing the tests in the pipeline will provide any value.&lt;/p&gt;

&lt;h2&gt;
  
  
  Look to Continually Add Value
&lt;/h2&gt;

&lt;p&gt;You need to be willing to continually examine what is adding value to your CI/CD process and pipeline.  In some cases you add value by removing a check that has proven to be pointless, or that needs work before it can be re-introduced.  Look at incoming defects, see where things were missed, and determine whether a step in the process needs to be changed or refined.  If a particular step takes a long time to complete and causes a bottleneck, see what can be done to speed it up.  The value in CI/CD is in the reliability of repeatability.  When you are maximizing productivity and quality at the same time, you will find you are in the CI/CD "sweet spot".&lt;/p&gt;


</description>
      <category>devops</category>
    </item>
  </channel>
</rss>
