<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Steve Fenton</title>
    <description>The latest articles on Forem by Steve Fenton (@_steve_fenton_).</description>
    <link>https://forem.com/_steve_fenton_</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2913455%2F532f94ec-d896-44e0-ab98-a1cae91a8278.jpg</url>
      <title>Forem: Steve Fenton</title>
      <link>https://forem.com/_steve_fenton_</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/_steve_fenton_"/>
    <language>en</language>
    <item>
      <title>How AI Makes You 56% Faster and 19% Slower</title>
      <dc:creator>Steve Fenton</dc:creator>
      <pubDate>Tue, 12 May 2026 11:32:13 +0000</pubDate>
      <link>https://forem.com/_steve_fenton_/how-ai-makes-you-56-faster-and-19-slower-33k</link>
      <guid>https://forem.com/_steve_fenton_/how-ai-makes-you-56-faster-and-19-slower-33k</guid>
      <description>&lt;p&gt;&lt;strong&gt;The flow rate of change means we don’t have all the answers!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There’s a growing body of research around AI coding assistants with a confusing range of conflicting results. This is to be expected when the landscape is constantly shifting from coding suggestions to agent-based workflows to Ralph Wiggum loops and beyond.&lt;/p&gt;

&lt;p&gt;The Reichenbach Falls in Switzerland has a drop of 250 meters and a flow rate of 180–300 cubic meters of water per minute (enough to fill about 1,500 bathtubs). The rate of change in tools and techniques around coding assistants over the past year is comparable, so few of us are using them in the same way. You can’t establish best practices under these conditions; only practical point-in-time techniques.&lt;/p&gt;

&lt;p&gt;As an industry, we, like Sherlock Holmes and James Moriarty, are battling on the precipice of this torrent, and the survival of high-quality software and sustainable delivery is at stake.&lt;/p&gt;

&lt;p&gt;Given the rapid evolution of tools and techniques, I hesitate to cite studies from 2025, let alone 2023. Yet these are the most-cited studies on the effectiveness of coding assistants, and they present conflicting findings. One study reports developers completed tasks 56% faster, while another reports a 19% slowdown.&lt;/p&gt;

&lt;p&gt;The studies provide a platform for thinking critically about AI in software development, enabling more constructive discussions, even as we fumble our collective way toward understanding how to use it meaningfully.&lt;/p&gt;

&lt;h2&gt;
  
  
  The GitHub Self-Assessment
&lt;/h2&gt;

&lt;p&gt;The often-cited 56% speed-up stems from a 2023 collaboration between &lt;a href="https://arxiv.org/abs/2302.06590" rel="noopener noreferrer"&gt;Microsoft Research, GitHub, and MIT&lt;/a&gt;. The number emerged from a lab test in which developers were given a set of instructions and a test suite to see how quickly and successfully they could create an HTTP server in JavaScript.&lt;/p&gt;

&lt;p&gt;In this test, the AI-assisted group completed the task in 71 minutes, compared to 161 minutes for the control group, making it around 56% faster (55.8% in the paper’s precise figures). Much of the difference came from the speed at which novice developers completed the task. Task success was comparable between the two groups.&lt;/p&gt;
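
&lt;p&gt;As a back-of-envelope check, the rounded minute figures reproduce the headline number (the study’s precise timings are what give the 55.8% figure):&lt;/p&gt;

```javascript
// Back-of-envelope check of the reported speed-up, using the rounded
// minute figures quoted above.
const controlMinutes = 161;
const aiMinutes = 71;

const speedup = (controlMinutes - aiMinutes) / controlMinutes;
console.log((speedup * 100).toFixed(1) + "% faster"); // 55.9% faster
```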

&lt;p&gt;There are weaknesses in this approach. The tool vendor was involved in defining the task against which the tool would be measured. If I were sitting an exam, it would be to my advantage to set the questions. Despite this, we can generously accept that it made the coding task faster, and that the automated tests sufficiently defined task success.&lt;/p&gt;

&lt;p&gt;We might also be generous in stating that tools have improved over the past three years. Benchmarking reports like those from METR have found that AI has doubled the length of tasks it can handle every 7 months; other improvements are likely.&lt;/p&gt;
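
&lt;p&gt;To put that doubling rate in rough numbers (my extrapolation, not a figure from the METR report):&lt;/p&gt;

```javascript
// If the task length an AI can handle doubles every 7 months, capability
// grows by a factor of 2^(months / 7).
function growthFactor(months) {
  return Math.pow(2, months / 7);
}

console.log(growthFactor(21)); // 8 -- three doublings in 21 months
```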

&lt;p&gt;We’ve also observed the emergence of techniques that introduce work plans and task chunking, thereby improving the agent’s ability to perform larger tasks that would otherwise incur context decay.&lt;/p&gt;

&lt;p&gt;And METR is also the source of our cautionary counter-finding on task speed.&lt;/p&gt;

&lt;h2&gt;
  
  
  The METR sense check
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://arxiv.org/pdf/2507.09089" rel="noopener noreferrer"&gt;METR&lt;/a&gt; study in 2025 examined the impact of contemporary tools on task completion times in real-world open-source projects. The research is based on 246 tasks performed by 16 developers who had experience using AI tools. Each task was randomly assigned to either an AI-assisted condition or a control condition. Screen recordings were captured to verify and categorize task completion.&lt;/p&gt;

&lt;p&gt;The research found that tasks were slowed by 19%, which appears to contradict the earlier report. In reality, the AI tools did reduce active coding time, along with time spent searching for answers, testing, and debugging. The difference was that the METR report identified new task categories the tools introduced, such as reviewing AI output, prompting, and waiting for responses. These new tasks, along with increased idle and overhead time, consumed the gains and pushed overall task completion times into the red.&lt;/p&gt;
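
&lt;p&gt;The shape of that result is easy to sketch with invented numbers (these are illustrative, not METR’s figures): each existing category genuinely shrinks, yet the new categories swallow the gains.&lt;/p&gt;

```javascript
// Illustrative per-task minutes, invented to show how category-level
// savings can be outweighed by new AI-specific work.
const withoutAI = { coding: 40, searching: 10, testingDebugging: 20, other: 10 };
const withAI = {
  coding: 25, searching: 5, testingDebugging: 15, other: 10,
  // New categories introduced by AI-assisted work:
  reviewingAIOutput: 12, prompting: 10, waitingIdle: 18,
};

function totalMinutes(task) {
  return Object.values(task).reduce(function (sum, t) { return sum + t; }, 0);
}

console.log(totalMinutes(withoutAI), totalMinutes(withAI)); // 80 95
```

&lt;p&gt;Every shared category is faster with AI, but the total is about 19% slower.&lt;/p&gt;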

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ivhl4qhapm7zc0tvlrj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ivhl4qhapm7zc0tvlrj.png" alt="Time saved by AI is often taken up by new tasks associated with AI-assisted work" width="800" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Source: &lt;a href="https://arxiv.org/abs/2507.09089" rel="noopener noreferrer"&gt;METR Measuring the Impact of Early-2025 AI&lt;/a&gt;. Task category comparison.&lt;/p&gt;

&lt;p&gt;One finding from the METR study worth noting is the perception problem. Developers predicted AI assistants would speed them up. After completing the task, they also estimated they had saved time, even though they were 19% slower. This highlights that our perceptions of productivity are unreliable, as they were when we believed that multitasking made us more productive.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lack of consensus
&lt;/h2&gt;

&lt;p&gt;A recently released study from &lt;a href="https://www.multitudes.com/data-to-cut-through-the-hype" rel="noopener noreferrer"&gt;Multitudes&lt;/a&gt;, based on data collected over 10 months in 2025, highlights the lack of consensus around the productivity benefits of AI coding tools. They found that the number of code changes increased, but this was countered by an increase in out-of-hours commits.&lt;/p&gt;

&lt;p&gt;This appears to be a classic case of increasing throughput at the expense of stability, with out-of-hours commits representing failure demand rather than feature development. It also confuses the picture as developers working more hours would also create more commits, even without an AI assistant.&lt;/p&gt;

&lt;p&gt;Some of the blame was attributed to adoption patterns that allowed little time for learning and increased delivery pressure on teams, now that they had tools that were supposed to help them.&lt;/p&gt;

&lt;h2&gt;
  
  
  The wicked talent problem
&lt;/h2&gt;

&lt;p&gt;One finding that repeatedly comes up in the research is that AI coding assistants benefit novice developers more than those with deep experience. This makes it likely that the use of these tools will exacerbate a wicked talent problem. Novice developers may never shed their reliance on tools, as they become accustomed to working at a higher level of abstraction.&lt;/p&gt;

&lt;p&gt;This is excellent news for those selling AI coding tools, as an ever-expanding market of developers who can’t deliver without the tools will be a fruitful source of future income. When investors are ready to recoup, organizations will have little choice but to accept whatever pricing structure is required to make vendors profitable. Given the level of investment, this may be a difficult price to accept.&lt;/p&gt;

&lt;p&gt;The problem may deepen as organizations have stopped hiring junior developers, believing that senior developers can delegate junior-level tasks to AI tools. This doesn’t align with the research, which shows junior developers speed up the most when using AI.&lt;/p&gt;

&lt;p&gt;The &lt;a href="http://octopus.com/publications/ai-pulse-report" rel="noopener noreferrer"&gt;AI Pulse Report&lt;/a&gt; compares this to the aftermath of the dot-com bubble, when junior hiring was frozen, resulting in a shortage of skilled developers. When hiring picked up again, increased competition for talent led to higher salaries.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzl49343exidisiks6x74.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzl49343exidisiks6x74.png" alt="Organizations are slowing the hiring of junior developers" width="800" height="530"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Source: &lt;a href="http://octopus.com/publications/ai-pulse-report" rel="noopener noreferrer"&gt;The AI Pulse Report&lt;/a&gt;. Hiring plans for junior developers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Continuous means safe, quick, and sustainable
&lt;/h2&gt;

&lt;p&gt;While many practitioners recognize the relevance of value stream management and the theory of constraints to AI adoption, a counter-movement is emerging that calls for the complete removal of downstream roadblocks.&lt;/p&gt;

&lt;p&gt;“If you can’t complete code reviews at the speed at which they are created with AI, you should stop doing them. Every other quality of a system should be subordinated to straight-line speed. Why waste time in discovery when it would starve the code-generating machine? Instead, we should build as much as we can as fast as we can.”&lt;/p&gt;

&lt;p&gt;As a continuous delivery practitioner and a long-time follower of the DORA research program, I find this stance makes no sense. One of the most powerful findings in the DORA research is that a user-centric approach beats straight-line speed in terms of product performance. You can slow development down to a trickle if you’ve worked out your discovery process, because you don’t need many rounds of chaotic or random experiments when you have a deep understanding of the user and the problem they want solved.&lt;/p&gt;

&lt;p&gt;We have high confidence that continuous delivery practices improve the success of AI adoption. You shouldn’t rush to dial up coding speed until you’ve put those practices in place, and you shouldn’t remove practices in the name of speed. That means working in small batches, integrating changes into the main branch every few hours, keeping your code deployable at all times, and automating builds, code analysis, tests, and deployments to smooth the flow of change.&lt;/p&gt;

&lt;p&gt;Continuous delivery is about getting all types of changes to users safely, quickly, and sustainably. The calls to remove stages from the deployment pipeline to expedite delivery compromise the safety and sustainability of software delivery, permanently degrading the software’s value for a temporary gain.&lt;/p&gt;

&lt;h2&gt;
  
  
  It’s a system
&lt;/h2&gt;

&lt;p&gt;There’s so much to unpack in the research, and many of the studies are zoomed in on a single link in a much longer chain. Flowing value from end to end safely, quickly, and sustainably should be the goal, rather than merely maintaining straight-line speed or optimizing individual tasks, especially when those tasks aren’t the constraining factor.&lt;/p&gt;

&lt;p&gt;With the knowledge we’ve built over the last seven decades, we should be moving into a new era of professionalism in software engineering. Instead, we’re being distracted by speed above all other factors. When my local coffee shop did this, complete with a clipboard-wielding Taylorist assessor tasked with bringing order-to-delivery times down to 30 seconds, the delivery of fast, bad coffee convinced me to find a new place to get coffee. Is this what we want from our software?&lt;/p&gt;

&lt;p&gt;The results across multiple studies show that claims of a revolution are premature, unless it’s an overlord revolution that will depress the salaries of those pesky software engineers and develop a group of builders unable to deliver software unless it’s through these new tools. Instead, we should examine the landscape and learn from research and from one another as we work out how to use LLM-based tools effectively in our complex socio-technical environments.&lt;/p&gt;

&lt;p&gt;We are at a crossroads: either professionalize our work or adopt a prompt-and-fix model that resembles the earliest attempts to build software. There are infinite futures ahead of us. I don’t dread the AI-assisted future as a developer, but as a software user. I can’t tolerate the quality and usability chasm that will result from removing continuous delivery practices in the name of speed.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>coding</category>
      <category>productivity</category>
      <category>tooling</category>
    </item>
    <item>
      <title>Continuous Delivery Office Hours Ep.1: Continuous Delivery should be your top priority</title>
      <dc:creator>Steve Fenton</dc:creator>
      <pubDate>Tue, 05 May 2026 09:07:21 +0000</pubDate>
      <link>https://forem.com/_steve_fenton_/continuous-delivery-office-hours-ep1-continuous-delivery-should-be-your-top-priority-16j0</link>
      <guid>https://forem.com/_steve_fenton_/continuous-delivery-office-hours-ep1-continuous-delivery-should-be-your-top-priority-16j0</guid>
      <description>&lt;p&gt;Continuous Delivery promotes low-risk releases, faster time-to-market, higher quality, lower costs, better products, and happier teams. Software is at the core of everything a business does today, so organizations must be able to respond to customer needs more quickly than ever.&lt;/p&gt;

&lt;p&gt;Taking a quarter or a month to deliver new functionality puts companies behind their competition and prevents them from serving their customers. Few practices offer as much return on investment as Continuous Delivery, but many organizations continue to resist it, often making their deployment problems worse in the process.&lt;/p&gt;

&lt;p&gt;Understanding why Continuous Delivery matters and how to implement it effectively can transform not only your deployment process but also your entire software development approach.&lt;/p&gt;

&lt;h2&gt;
  
  
  Watch the episode
&lt;/h2&gt;

&lt;p&gt;You can watch the episode below, or read on to find some of the key discussion points.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=V67ASNnUGDs" rel="noopener noreferrer"&gt;Watch Continuous Delivery Office Hours Ep.1&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Continuous Delivery?
&lt;/h2&gt;

&lt;p&gt;At its core, Continuous Delivery means you can deploy your software at any time. A good indication of whether a team practices Continuous Delivery is whether they prioritize work that keeps software deployable. Other development styles usually continue working on features and return to deployability issues later.&lt;/p&gt;

&lt;p&gt;That means teams must have fast, automated feedback for every change, highlighting when the software has an issue that would prevent its deployment. Deployments to all environments must be automated, with artifacts and deployment processes pinned to avoid unexpected changes between deployments.&lt;/p&gt;

&lt;h2&gt;
  
  
  The big three: Time, risk, and money
&lt;/h2&gt;

&lt;p&gt;The longer the intervals between deployments, the more you accumulate risk, and the more you delay the value the changes will realize. If you wait six months between deployments, you're more likely to get caught in a firefighting loop, spending more time pinpointing bug sources because of the volume of changes.&lt;/p&gt;

&lt;p&gt;Crucially, until you place new features in users' hands, you accumulate market risk that the changes won't solve the underlying problem in a way users accept.&lt;/p&gt;

&lt;h2&gt;
  
  
  The deployment paradox
&lt;/h2&gt;

&lt;p&gt;Human psychology works against us when deployments go wrong. Having waited six months to deploy, the pain of the firefighting stage and the increased risk of deploying large batches of changes mean people develop an aversion to deployments.&lt;/p&gt;

&lt;p&gt;When a process is stressful and goes wrong, we naturally want to do it less often. You might think: "If we do fewer deployments, we'll have less pain." But this is precisely backwards.&lt;/p&gt;

&lt;p&gt;Decreasing deployment frequency increases batch size, making the next deployment more likely to go wrong and cause pain. This is like avoiding the dentist after a painful checkup; the longer you leave it, the worse the next visit will be.&lt;/p&gt;

&lt;p&gt;Risk-averse organizations have instincts that work against their goal of safety. The solution isn't to deploy less often; it's to deploy more frequently with smaller batches of changes.&lt;/p&gt;
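
&lt;p&gt;A toy model shows why bigger batches bite back (assuming, purely for illustration, a 1% chance that any single change causes a deployment issue):&lt;/p&gt;

```javascript
// Toy model: if each change independently has a small chance of causing
// a deployment issue, the chance that a batch contains at least one
// issue grows quickly with batch size.
function batchFailureChance(changes, perChangeRisk) {
  return 1 - Math.pow(1 - perChangeRisk, changes);
}

console.log(batchFailureChance(10, 0.01).toFixed(2));  // 0.10
console.log(batchFailureChance(200, 0.01).toFixed(2)); // 0.87
```

&lt;p&gt;Ten changes rarely hurt; six months of accumulated changes almost certainly will.&lt;/p&gt;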

&lt;h2&gt;
  
  
  Keeping software deployable during feature development
&lt;/h2&gt;

&lt;p&gt;Another objection to Continuous Integration and Delivery is that features take time to build, so you can't deploy while a feature is in flight. With an endless overlap of in-flight features, this would result in never deploying (or, more likely, work taking place in long-lived branches).&lt;/p&gt;

&lt;p&gt;The solution is to separate deployments from feature release. Trunk-based development (integrating changes into the main branch every day, often many times each day) and feature toggles make it possible to work from a shared code base without making in-flight features visible to users.&lt;/p&gt;

&lt;p&gt;There are many benefits to feature toggles beyond supporting Continuous Delivery. They also let you share features early with specific user segments or roll them out progressively rather than all at once.&lt;/p&gt;
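
&lt;p&gt;A feature toggle can be as simple as a lookup consulted at the point where the feature would become visible. This is a minimal sketch (the toggle names are invented; real systems typically add per-segment targeting for progressive rollout):&lt;/p&gt;

```javascript
// Minimal feature-toggle sketch. In-flight code ships to production,
// but stays invisible until its toggle is switched on.
const toggles = { "new-checkout": false, "faster-search": true };

function isEnabled(feature) {
  // Default to off, so unfinished features can never leak out.
  return toggles[feature] === true;
}

function checkout(cart) {
  if (isEnabled("new-checkout")) {
    return "new checkout flow";
  }
  return "existing checkout flow";
}
```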

&lt;h2&gt;
  
  
  Changing what deployment success means
&lt;/h2&gt;

&lt;p&gt;When you separate deployment from release, you also transform how you measure deployment success. You're no longer testing whether new functionality works during deployment. You're only verifying that the application is running and healthy. This focus makes deployments faster and less stressful.&lt;/p&gt;

&lt;p&gt;Feature toggles reduce the stress and burden of deployments because you'll no longer miss deployment issues while checking functionality or miss functionality problems while monitoring deployments. Separating these concerns means each gets proper attention.&lt;/p&gt;
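
&lt;p&gt;With release decoupled from deployment, the deployment check shrinks to a health probe. A sketch (the probe function is a placeholder for however your platform fetches the health endpoint's status):&lt;/p&gt;

```javascript
// Deployment success now means only "is the application up and healthy?"
// probe() should return the HTTP status code of the health endpoint.
function deploymentHealthy(probe) {
  try {
    return probe() === 200;
  } catch (err) {
    return false; // an unreachable application is an unhealthy deployment
  }
}
```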

&lt;h2&gt;
  
  
  Solving dependency challenges
&lt;/h2&gt;

&lt;p&gt;Feature toggles also address one of the most complex problems in microservices: deployment dependencies. Despite the promise of independently deployable services, teams often create elaborate deployment choreographies to ensure services are deployed in a specific order. Sometimes they give up entirely and deploy everything simultaneously. They accept unpredictable behavior during deployment or direct users to a holding page until it's complete.&lt;/p&gt;

&lt;p&gt;When deployments form a chain of dependencies, the architecture isn't truly microservices but a distributed monolith. Real microservices should deploy independently. Feature toggles make this possible. Deploy all services when ready, then switch on functionality once dependencies are in place.&lt;/p&gt;
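
&lt;p&gt;The same idea resolves ordering: each service deploys whenever it's ready, and a feature only switches on once everything it depends on is live. A sketch with invented service names:&lt;/p&gt;

```javascript
// Deploy services in any order; enable a feature only when all of its
// dependencies are present. Service names are illustrative.
const deployed = new Set(["orders-v2", "payments-v2"]);

function canRelease(requiredServices) {
  return requiredServices.every(function (svc) {
    return deployed.has(svc);
  });
}

console.log(canRelease(["orders-v2"])); // true
console.log(canRelease(["orders-v2", "inventory-v3"])); // false
```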

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Continuous Delivery isn't just about deploying more often. It's about reducing risk through smaller changes, separating deployment from release, maintaining deployable code at all times, and giving teams the confidence to move quickly and safely.&lt;/p&gt;

&lt;p&gt;The instinct to slow down after problems is natural, but it's counterproductive. The path to safer deployments runs through more frequent deployments, not fewer. Organizations that embrace this counterintuitive truth gain a competitive advantage through faster feedback, lower risk, and ultimately, better software.&lt;/p&gt;

</description>
      <category>automation</category>
      <category>cicd</category>
      <category>devops</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Running Astro in a preview container</title>
      <dc:creator>Steve Fenton</dc:creator>
      <pubDate>Mon, 27 Apr 2026 15:12:52 +0000</pubDate>
      <link>https://forem.com/_steve_fenton_/running-astro-in-a-preview-container-2b35</link>
      <guid>https://forem.com/_steve_fenton_/running-astro-in-a-preview-container-2b35</guid>
      <description>&lt;p&gt;If you’ve ever worked on a collection of different Node apps, you’ve likely encountered version conflicts. Everyone wants a different version of Node or PNPM, and your new job is trying to align them all, or managing versions daily.&lt;/p&gt;

&lt;p&gt;That’s when open-source hero Kostis Kapelonis said, “Why don’t we run the preview in a container?” In fact, he didn’t just say this; he also submitted a PR. I told you he’s an open-source hero.&lt;/p&gt;

&lt;p&gt;The PR added a Dockerfile and a docker-compose.yaml file to the project, which let you spin up the preview site using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once Kostis had done all the hard work, I added a small enhancement to make Astro’s live preview work when you change files. That meant you could start the container and keep working while all your changes are instantly visible in the preview. That keeps the developer inner loop nice and tight.&lt;/p&gt;

&lt;p&gt;If you want to do the same, here’s how to make it happen. Once again, I added a very small cherry to Kostis’ wonderfully fluffy cake, so send your adoration his way.&lt;/p&gt;

&lt;h2&gt;
  
  
  Add a Docker compose file
&lt;/h2&gt;

&lt;p&gt;Here’s the &lt;code&gt;docker-compose.yml&lt;/code&gt; file for your Astro project. It goes in the root directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;astro&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3000:3000"&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;.:/app&lt;/span&gt;
 &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/app/node_modules&lt;/span&gt;
    &lt;span class="s"&gt;environment&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
 &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;NODE_ENV=development&lt;/span&gt;
 &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;HOST=0.0.0.0&lt;/span&gt;
    &lt;span class="s"&gt;stdin_open&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="s"&gt;tty&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This maps the volumes, with a special case that stops the host directory from masking the &lt;code&gt;node_modules&lt;/code&gt; installed inside the container. It maps Astro’s port &lt;code&gt;3000&lt;/code&gt; inside the container to port 3000 on your machine so you can open it in your browser.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Docker file
&lt;/h2&gt;

&lt;p&gt;Here’s the &lt;code&gt;Dockerfile&lt;/code&gt;, which also goes in your project’s root directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="c"&gt;# Use Node 20 as the base image&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; node:20-slim&lt;/span&gt;

&lt;span class="c"&gt;# Install pnpm globally&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; PNPM_HOME="/pnpm"&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; PATH="$PNPM_HOME:$PATH"&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;corepack &lt;span class="nb"&gt;enable&lt;/span&gt;

&lt;span class="c"&gt;# Set the working directory&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;

&lt;span class="c"&gt;# Copy package files&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; package.json pnpm-lock.yaml* ./&lt;/span&gt;

&lt;span class="c"&gt;# Install dependencies&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;pnpm &lt;span class="nb"&gt;install&lt;/span&gt;

&lt;span class="c"&gt;# Copy the rest of the source code&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;

&lt;span class="c"&gt;# Expose the default Astro port&lt;/span&gt;
&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 3000&lt;/span&gt;

&lt;span class="c"&gt;# Start the dev server&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["pnpm", "compose:dev"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There’s an optimization here around the package files, which is why they get their own copy command. There’s also an extra command in the package.json file that we call here. It’s a variation of our usual dev script that swaps the plain Astro run for one with the --host flag added. Let’s just show the important bits in this code snippet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="nl"&gt;"scripts"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;     
    &lt;/span&gt;&lt;span class="nl"&gt;"compose:dev"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"npm-run-all --parallel dev:img dev:dictionary compose:dev:astro dev:watch"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"compose:dev:astro"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"astro dev --host"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The Vite config change
&lt;/h2&gt;

&lt;p&gt;The final change is the one that makes the live refresh work. This goes in your astro.config.mjs file, and I popped it right after the existing server config.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;server&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nl"&gt;port&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;3000&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="nl"&gt;vite&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;watch&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="na"&gt;usePolling&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;},&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Spinning it up
&lt;/h2&gt;

&lt;p&gt;The first time I ran this, I started things up with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up &lt;span class="nt"&gt;--build&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can then stop the container with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose down
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you haven’t changed the container, you can start it with the faster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>webdev</category>
      <category>astro</category>
    </item>
    <item>
      <title>Developer Productivity in the Age of AI: Why Your Past Predicts Your Future</title>
      <dc:creator>Steve Fenton</dc:creator>
      <pubDate>Mon, 20 Apr 2026 11:57:29 +0000</pubDate>
      <link>https://forem.com/_steve_fenton_/developer-productivity-in-the-age-of-ai-why-your-past-predicts-your-future-i84</link>
      <guid>https://forem.com/_steve_fenton_/developer-productivity-in-the-age-of-ai-why-your-past-predicts-your-future-i84</guid>
      <description>&lt;p&gt;You're looking at a list of things you'd love to do, and you're looking at AI coding tools as a way to boost your way down that list. You might not have the relationships mapped out, but you can see there is some route to value if you spend on LLMs that speed up code.&lt;/p&gt;

&lt;p&gt;You're now in the developer productivity game.&lt;/p&gt;

&lt;h2&gt;
  
  
  The idea behind developer productivity
&lt;/h2&gt;

&lt;p&gt;The roots of developer productivity are straightforward. Some smart engineering managers figured out that a small team of developers with the best machines, screens, and development tools could generate value at a rate and quality far beyond what their "head count" suggested. You could also supply all these upgrades at a cost well below the fully loaded cost of one more developer.&lt;/p&gt;

&lt;p&gt;The return on investment for this approach was incredible, but traditional engineering managers didn't understand it. They thought developers were asking for more screens because it made them look more important. This emerged from organizations that rewarded managers for empire-building by granting them larger offices with better views.&lt;/p&gt;

&lt;p&gt;I'm a big fan of Ron Westrum's Typology of Organizational Cultures. For this post, though, we'll keep things simple and refer to traditional thinking (keep equipment costs low) and modern thinking (provide high-quality tools).&lt;/p&gt;

&lt;p&gt;We have never shaken off this traditional-versus-modern divide over developer productivity. And now, the subject has returned to the spotlight due to AI and, more specifically, LLM-based coding tools. Your organization's past approach to developer productivity will determine whether you can successfully integrate AI tools into your development teams.&lt;/p&gt;

&lt;p&gt;Let's look at why.&lt;/p&gt;

&lt;h2&gt;
  
  
  A tale of two cities
&lt;/h2&gt;

&lt;p&gt;Traditional organizations operate through control. Managers dictate how work is done, choosing the processes and tools workers must use. Instructions flow downward, and managers define efficiency. Workers are evaluated individually against the manager's prescribed methods, rather than by outcomes.&lt;/p&gt;

&lt;p&gt;Modern organizations operate through trust. Teams choose how to work, selecting from available options or proposing new tools when needs emerge. Authority flows to those closest to the work. Performance is a team sport measured by outcomes.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Traditional&lt;/th&gt;
&lt;th&gt;Modern&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Operates through control&lt;/td&gt;
&lt;td&gt;Operates through trust&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Efficiency is manager-directed&lt;/td&gt;
&lt;td&gt;Productivity is worker-led&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Finds the cheapest tools&lt;/td&gt;
&lt;td&gt;Chooses the best tools for each job&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Prefers expanding teams&lt;/td&gt;
&lt;td&gt;Prefers small teams&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Tooling is a cost&lt;/td&gt;
&lt;td&gt;Tooling increases value&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Performance is individual&lt;/td&gt;
&lt;td&gt;Value flows from collaboration&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;As we experiment with AI coding tools, we are gaining crucial insights. We are developing a better understanding of how much human oversight is needed to successfully and sustainably deliver high-quality software to users. A strong guiding hand is crucial in directing and correcting the output of these tools.&lt;/p&gt;

&lt;p&gt;The value of any software you build, by hand or with assistance, comes from the flow of information. That means listening to software users and collaborating internally. While the code is what gets left behind by the process, it's only an artifact of a more fundamental learning process. The ability to learn and share knowledge will also benefit teams as they discover how to apply AI coding tools to this process.&lt;/p&gt;

&lt;p&gt;It's also clear that Continuous Delivery and automation remain paramount. In the past, automated linting, security scanning, and tests gave us confidence in the code teams wrote; now, they can provide us with confidence in code generated by LLMs. DORA's &lt;a href="https://dora.dev/ai/" rel="noopener noreferrer"&gt;AI Capabilities Model&lt;/a&gt; includes 7 capabilities essential to successful AI adoption, including user-centric focus, strong version control practices, and working in small batches.&lt;/p&gt;

&lt;p&gt;For organizations that haven't adopted Continuous Delivery, rocky shores lie ahead when they unleash AI tools on their codebases.&lt;/p&gt;

&lt;h2&gt;
  
  
  Return on an unspecified investment
&lt;/h2&gt;

&lt;p&gt;Now here's the fascinating conundrum for anyone trying to calculate a return on investment for AI coding tools. The commercial tools indeed remove many tasks that, as a developer, I don't want to do, though they also introduce new ones. I see why people would like to use them to remove much of the noise and focus on the essential details in the software. The problem is, you run out of credits fast, so if you want to use these tools full-time, you'll need subscription levels that support that.&lt;/p&gt;

&lt;p&gt;Credit exhaustion is the first friction point where traditional organizations will come unstuck. Developers who rely heavily on AI coding tools will slow drastically when credits run out. This will likely become a significant problem over time as developers become more dependent on working at the high level of abstraction that prompting offers.&lt;/p&gt;

&lt;p&gt;Imagine if coding languages had similar limits. You'd run out of Python hours and have to continue your work using assembly language.&lt;/p&gt;

&lt;p&gt;Organizations with a cost focus will challenge developers who want a higher budget for these tools. Any manager who has previously denied more screen real estate is likely to reject higher subscription costs for AI coding tools. Their view in both cases is that the promised productivity isn't real.&lt;/p&gt;

&lt;p&gt;The second hurdle for these commercial tools is the uncertain future pricing. We know some AI companies are burning through investment cash, which means the price we pay is subsidized by their desire for growth. There must be a pivot point at which they begin the search for profitability. This will once again trigger problems in cost-focused organizations.&lt;/p&gt;

&lt;p&gt;Some developers are already thinking ahead and looking for open-source models they can run locally to reduce cost uncertainty, but, as always, you pay one way or another. The time spent assessing, updating, and managing these models is a direct loss of the productivity you're trying to gain.&lt;/p&gt;

&lt;p&gt;One solution may be for commercial vendors to offer fixed-price, unlimited use through local models. A challenge to this approach could come from Platform Engineering or DevEx teams, who could supply a packaged open-source local solution for developers, reducing the overhead of selection and maintenance.&lt;/p&gt;

&lt;h2&gt;
  
  
  The nature of problems changes
&lt;/h2&gt;

&lt;p&gt;Traditional and modern organizations face the same challenges, but you can see that culture fundamentally shapes how they are addressed.&lt;/p&gt;

&lt;p&gt;Modern organizations will judge their return on investment by the value they deliver. Their past investments in Continuous Delivery will provide a solid foundation for them to experiment with new tools, and they'll creatively address the cost issues associated with AI coding tools.&lt;/p&gt;

&lt;p&gt;Traditional organizations will seek to minimize costs, avoid investing in automated pipelines, and demand higher developer output with no real basis for expecting it.&lt;/p&gt;

&lt;p&gt;The set of capabilities a modern organization applies to high-throughput, high-quality software delivery is surrounded by subtle, interconnected relationships. For the traditional organizations that just want to "buy AI", the benefits are unlikely to arrive.&lt;/p&gt;

&lt;p&gt;Happy deployments!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>devex</category>
      <category>cicd</category>
    </item>
    <item>
      <title>Setting GitHub as a trusted publisher for npm</title>
      <dc:creator>Steve Fenton</dc:creator>
      <pubDate>Mon, 13 Apr 2026 13:36:59 +0000</pubDate>
      <link>https://forem.com/_steve_fenton_/setting-github-as-a-trusted-publisher-for-npm-560i</link>
      <guid>https://forem.com/_steve_fenton_/setting-github-as-a-trusted-publisher-for-npm-560i</guid>
      <description>&lt;p&gt;So, stuff happened and &lt;strong&gt;npm&lt;/strong&gt; has been updated to reduce the volume of stuff happening. In a world of SBOMs, SLSA, and supply chain attacks, it's time to get serious about publishing packages. In this case, that means using the new &lt;em&gt;Trusted Publisher&lt;/em&gt; feature to connect GitHub (or GitLab) to &lt;strong&gt;npm&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Set the trusted publisher on npm
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Sign into &lt;a href="https://npmjs.com" rel="noopener noreferrer"&gt;npm&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Select the package you want to set up, for example &lt;code&gt;astro-accelerator-utils&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;em&gt;Settings&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;In the &lt;em&gt;Trusted Publishers&lt;/em&gt; section, select your provider; in my case, that's &lt;strong&gt;GitHub&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Enter your repository information:

&lt;ul&gt;
&lt;li&gt;Organization or user name, for example &lt;code&gt;Steve-Fenton&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Repository name, for example &lt;code&gt;astro-accelerator-utils&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;The result should be that &lt;code&gt;Steve-Fenton/astro-accelerator-utils&lt;/code&gt; matches your repo in GitHub&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Provide the workflow file name

&lt;ul&gt;
&lt;li&gt;This should match the workflow that will publish the package, in my case &lt;code&gt;build-astro.yml&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;The file must be in &lt;code&gt;.github/workflows/&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Set up connection&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you use environments, you can optionally limit publishing by environment.&lt;/p&gt;
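
&lt;p&gt;As a rough sketch, if you restrict the trusted publisher to an environment (I'm using a hypothetical environment named &lt;code&gt;release&lt;/code&gt; here), the publishing job in your workflow needs to reference the same environment name you entered on npm:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;jobs:
  publish:
    runs-on: ubuntu-latest
    # Must match the environment name entered in the
    # Trusted Publishers settings on npm
    environment: release
    permissions:
      id-token: write
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This also lets you use environment protection rules, such as required reviewers, as an extra gate before anything is published.&lt;/p&gt;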

&lt;h2&gt;
  
  
  Check your GitHub Action
&lt;/h2&gt;

&lt;p&gt;In your permissions section, you need to allow &lt;code&gt;id-token&lt;/code&gt; to be written.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;permissions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;id-token&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;write&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can then use the &lt;code&gt;npm publish&lt;/code&gt; step in your workflow.&lt;/p&gt;

&lt;p&gt;I publish conditionally based on the version number, so the package is only published when the version has increased.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Publish if version has been updated&lt;/span&gt;
    &lt;span class="s"&gt;env&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;NPM_AUTH_TOKEN&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.NPM_AUTH_TOKEN }}&lt;/span&gt;
    &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
        &lt;span class="s"&gt;PACKAGE_NAME=$(node -p "require('./package.json').name")&lt;/span&gt;
        &lt;span class="s"&gt;LOCAL_VERSION=$(node -p "require('./package.json').version")&lt;/span&gt;
        &lt;span class="s"&gt;REMOTE_VERSION=$(npm view $PACKAGE_NAME version || echo "0.0.0")&lt;/span&gt;

        &lt;span class="s"&gt;if [ "$LOCAL_VERSION" != "$REMOTE_VERSION" ] &amp;amp;&amp;amp; [ "$(printf '%s\n%s' "$REMOTE_VERSION" "$LOCAL_VERSION" | sort -V | tail -n1)" = "$LOCAL_VERSION" ]; then&lt;/span&gt;
        &lt;span class="s"&gt;echo "Local version $LOCAL_VERSION is higher than remote version $REMOTE_VERSION. Publishing..."&lt;/span&gt;
        &lt;span class="s"&gt;echo "//registry.npmjs.org/:_authToken=$NPM_AUTH_TOKEN" &amp;gt; ~/.npmrc&lt;/span&gt;
        &lt;span class="s"&gt;npm publish --access public&lt;/span&gt;
        &lt;span class="s"&gt;else&lt;/span&gt;
        &lt;span class="s"&gt;echo "Version $LOCAL_VERSION is not newer than $REMOTE_VERSION. Skipping publish."&lt;/span&gt;
        &lt;span class="s"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is a more secure way to publish npm packages, but it's also easier because you don't need to keep updating tokens and secrets.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>node</category>
      <category>npm</category>
      <category>github</category>
    </item>
    <item>
      <title>Roll up your chair: How one small change sparked a DevOps revolution</title>
      <dc:creator>Steve Fenton</dc:creator>
      <pubDate>Tue, 31 Mar 2026 09:22:47 +0000</pubDate>
      <link>https://forem.com/_steve_fenton_/roll-up-your-chair-how-one-small-change-sparked-a-devops-revolution-33p4</link>
      <guid>https://forem.com/_steve_fenton_/roll-up-your-chair-how-one-small-change-sparked-a-devops-revolution-33p4</guid>
      <description>&lt;p&gt;My first encounter with DevOps was so simple that I didn’t even realize its power. Let me share the story so you can see how it went from accidental discovery to deliberate practice, and why it was such a dramatic pivot.&lt;/p&gt;

&lt;p&gt;The backdrop to this pivotal moment was a software delivery setup you might find anywhere. The development team built software in a reasonably iterative and incremental fashion. About once a month, the developers created a gold copy and passed it to the ops team.&lt;/p&gt;

&lt;p&gt;The ops team installed the software on our office instance (we drank our own champagne). After two weeks of smooth running, they promoted the version to customer instances.&lt;/p&gt;

&lt;p&gt;It wasn’t a perfect process, but it benefited from muscle memory, so there wasn’t an urgent imperative to change it. The realization that a change was needed came from the first DevOps moment.&lt;/p&gt;

&lt;h2&gt;
  
  
  The unplanned first moment
&lt;/h2&gt;

&lt;p&gt;When the ops team deployed the new version, they would review the logs to see if anything interesting or unexpected popped up as a result of the deployment. If they found something, they couldn’t get a quick answer, and it sometimes meant they opted to roll back rather than wait.&lt;/p&gt;

&lt;p&gt;This was a comic-strip situation because the development team was a few meters away in their team room. It’s incredible how something as simple as a door transforms co-located teams into remote workers.&lt;/p&gt;

&lt;p&gt;The ops team raised their request through official channels, and the developers didn’t even know they were causing more work and stress because the ticket hadn’t reached them yet.&lt;/p&gt;

&lt;p&gt;Thankfully, one of the ops team members highlighted this. The next time they started a deployment, a developer was paired with them to watch the logs. A low-fi solution and not one you’d think much about. That developer was me. For this post, we’ll call my ops team partner “Tony”.&lt;/p&gt;

&lt;h2&gt;
  
  
  Shared surprises lead to learning
&lt;/h2&gt;

&lt;p&gt;The day-one experience of this new collaborative process didn’t seem groundbreaking. When a log message popped up that surprised Tony, it surprised me too. The messages weren’t any more helpful to a developer than they were to the ops team.&lt;/p&gt;

&lt;p&gt;I could think through what might be happening, talk it through, and then Tony and I would come up with a theory. We’d test the theory by trying to make another similar log message appear. Then we’d scratch our heads and try to decide whether this could wait for a fix or warranted a rollback.&lt;/p&gt;

&lt;p&gt;The plan to bring people from the two teams together was intended to remove the massive communication lag, and it did. But further improvements were to come as a side effect, yielding more significant gains.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resolve pain pathways by completing the loop
&lt;/h2&gt;

&lt;p&gt;As a developer, when you generate log messages and then have to interpret them, you’ve completed a pain loop. Pain loops are potent drivers of improvement.&lt;/p&gt;

&lt;p&gt;Most organizations have unresolved pain pathways. That means someone creates pain, like a developer throwing thousands of vague exceptions every minute, and then someone else feels it, like Tony when he’s trying to work out what the log means.&lt;/p&gt;

&lt;p&gt;There are two ways to resolve the pain pathway.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Process: You create procedures to bring pain below the threshold and to limit the rate at which it is generated.&lt;/li&gt;
&lt;li&gt;Loops: You connect the pain into a loop, so the person causing the pain feels its signal.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If I’m the one who gets the electric shock when I press the button, I stop pushing it, even if someone in a white coat instructs me to continue the experiment.&lt;/p&gt;

&lt;p&gt;With the pain loop connected, I realized we should log fewer messages to reduce the scroll and review burden. Instead of needing institutional knowledge of which messages were perpetually present and could therefore be ignored, we could stop logging them.&lt;/p&gt;

&lt;p&gt;The (perhaps asymptotic) goal was to log only the events that required human review, with a toggle that let more verbose logging be generated on demand. Instead of scrolling through a near-infinite list of logs, you’d have a nearly empty view. If a log appeared, it was important enough to warrant your attention.&lt;/p&gt;

&lt;p&gt;The next idea was to improve the information in the log messages. We could identify which customer or user experienced the error and provide context for it. By improving these error messages, we could often identify the bug before we even opened the code, dramatically reducing our investigation time.&lt;/p&gt;

&lt;p&gt;This process evolved into &lt;a href="https://stevefenton.co.uk/blog/2017/11/the-three-fs-of-event-log-monitoring/" rel="noopener noreferrer"&gt;the three Fs of event logging&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create positive spirals with delightful deployments
&lt;/h2&gt;

&lt;p&gt;Another thread that emerged from the simple act of sitting together during deployments was the realization that the deployment process was nasty. We created an installer file, and the ops team would move it to the target server, double-click it, then follow the prompts to configure the instance.&lt;/p&gt;

&lt;p&gt;Having to paste configuration values into the installer was slow and error-prone. We spent a disproportionate amount of time improving this process.&lt;/p&gt;

&lt;p&gt;Admittedly, we were solving this one “inside the box” by improving an individual installation with DIY scripts, a can of lubricating spray, and sticky tape. This didn’t improve the experience of repeating the install across several environments and multiple production instances.&lt;/p&gt;

&lt;p&gt;However, I did get to experience the stress of deployments when their probability of success was anything less than “very high”. When deployments weren’t a solved problem, they could damage team reputation, erode trust, and reduce autonomy.&lt;/p&gt;

&lt;p&gt;Failed deployments are the leading cause of organizations working in larger batches. Large batches are a leading cause of failed deployments. This is politely called a negative spiral, and you have to reverse it urgently if you want to survive.&lt;/p&gt;

&lt;h2&gt;
  
  
  At last, a panacea
&lt;/h2&gt;

&lt;p&gt;The act of sitting a developer with an ops team member during deployments isn’t going to solve all your problems. As we scaled from 6 to 30 developers, pursued innovative new directions for our product, and repositioned our offering and pricing, new pain kept emerging. Continuous improvement really is a game of whack-a-mole, and there’s no final state.&lt;/p&gt;

&lt;p&gt;Despite this, the simple act of sitting together, otherwise known as collaboration, caused a chain reaction of beneficial changes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Sharing goals and pain
&lt;/h3&gt;

&lt;p&gt;When you’re sitting with someone working on the same problem, all the departmental otherness evaporates. You’re just two humans trying to make things work.&lt;/p&gt;

&lt;p&gt;Instead of holding developers accountable for feature throughput and the ops team for stability, we shared a combined goal of high throughput and high stability in software delivery.&lt;/p&gt;

&lt;p&gt;That removed the goal conflict and encouraged us to share and solve common problems together. This also works when you repeat the alignment exercise with other areas, like compliance and finance.&lt;/p&gt;

&lt;h3&gt;
  
  
  Completing the pain loop
&lt;/h3&gt;

&lt;p&gt;The problem with our logging strategy was immediately apparent when one of the people generating the logs had to wade through them. This is a powerful motivator for change.&lt;/p&gt;

&lt;p&gt;Identifying unresolved pain paths and closing the pain loop isn’t a form of punishment; it’s a moment of realization. It’s the reason we should all use the software we build: it highlights the unresolved pain paths we’re burdening our users with.&lt;/p&gt;

&lt;p&gt;Pain loops are crucial to meaningful improvements in software delivery.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reducing the toil
&lt;/h3&gt;

&lt;p&gt;Great developers are experts at automating things. When you expose this skill set to repetitive work, a developer’s instinct is to eliminate the toil.&lt;/p&gt;

&lt;p&gt;For the ops team, the step-by-step deployment checklist was just part of doing business. They were so familiar with the process that it became invisible.&lt;/p&gt;

&lt;p&gt;When we reduced the toil, the ops team was definitely happier, even though we hadn’t solved all the rough edges yet.&lt;/p&gt;

&lt;h2&gt;
  
  
  Refining the early ideas
&lt;/h2&gt;

&lt;p&gt;The fully-formed ideas didn’t arrive immediately. The rough shapes were polished over time into a set of repeatable and connected DevOps habits.&lt;/p&gt;

&lt;p&gt;The three Fs, incident causation principles, alerting strategy, and monitor selection guidelines graduated into deliberate approaches long after this story.&lt;/p&gt;

&lt;p&gt;I developed an approach to software delivery improvement that used these ideas to address trust issues between developers and the business. By reducing negative signals caused by failed deployments and escaped bugs, we increased trust in the development team, enhanced their reputation, and increased their autonomy.&lt;/p&gt;

&lt;p&gt;We combined these practices with Octopus Deploy for deployment and runbook automation and an observability platform, which meant the team was the first to spot problems rather than users. When there was a problem, it was trivial to fix, and the new version could be rolled out in no time.&lt;/p&gt;

&lt;p&gt;Unlike the original organization, where we increased collaboration between teams, we created fully cross-functional teams that worked together all the time. Every skill required to deliver and operate the software was embedded, minimizing dependencies and the risk of silos, tickets, and bureaucracy.&lt;/p&gt;

&lt;p&gt;These cross-functional teams also proved to be the best way to level up team members.&lt;/p&gt;

&lt;h2&gt;
  
  
  Unicorn portals
&lt;/h2&gt;

&lt;p&gt;You can’t work with a database whizz for long before you start thinking about query performance, maintenance plans, and normalization. You build better software when you develop these skills. You can’t work with an infrastructure expert without learning about failovers, networking, and zero-downtime deployments. You build better software when you develop these skills, too.&lt;/p&gt;

&lt;p&gt;When people say they can’t hire these highly skilled developers, they miss the crucial point. A team designed in this cross-functional style takes new team members and upgrades them into these impossible-to-find unicorns. You may start as a backend developer, a database administrator, or a test analyst, but you grow into a generalizing specialist with many new skills.&lt;/p&gt;

&lt;p&gt;Creating these unicorn portals is the most valuable skill development managers can bring to an organization. You need to hire to fill gaps and foster an environment where skills transfer fluidly throughout the team.&lt;/p&gt;

&lt;h2&gt;
  
  
  Roll up your chair
&lt;/h2&gt;

&lt;p&gt;What became a sophisticated and repeatable process for team transformation could be traced back to that simple act of sitting together. It was a small, easy change that led to increased empathy and understanding, and then a whole set of improvements.&lt;/p&gt;

&lt;p&gt;Staring at that rapid stream of logs was the pivot point that led to the most healthy and human approach to DevOps.&lt;/p&gt;

&lt;p&gt;We didn’t have the research to confirm it back then, but deployment automation, shared goals, observability, small batches, and Continuous Delivery are all linked to better outcomes for the people, teams, and organization. Everybody wins when you do DevOps right.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>software</category>
      <category>culture</category>
    </item>
    <item>
      <title>Modern developer experience has deep roots</title>
      <dc:creator>Steve Fenton</dc:creator>
      <pubDate>Wed, 25 Mar 2026 13:56:33 +0000</pubDate>
      <link>https://forem.com/_steve_fenton_/modern-developer-experience-has-deep-roots-a9a</link>
      <guid>https://forem.com/_steve_fenton_/modern-developer-experience-has-deep-roots-a9a</guid>
      <description>&lt;p&gt;In his 1956 account of the SAGE program, Herbert Benington highlighted the opportunity to use computers to reduce the cost of programming, documentation, and testing.&lt;/p&gt;

&lt;p&gt;The creation of utilities, compilers, and instrumentation accounted for about half of the programming effort for SAGE. Benington had recognized that writing programs to improve developer productivity was an essential investment.&lt;/p&gt;

&lt;p&gt;At the time, computers were costly, so the idea of making programmers more productive had far less economic weight than it does today. Programmers earned approximately $15,000 per year, and computers cost $500 per hour to operate. Now programmers cost about 70 times as much as the computers they use, which means developer productivity is worth far more today than it was in the 1950s.&lt;/p&gt;

&lt;h2&gt;
  
  
  Betting on software
&lt;/h2&gt;

&lt;p&gt;There has been a thread of developer experience all the way back to the earliest attempts to build software at scale. Savvy organizations know that money spent on providing the right environment and tools is worth more than simply “time saved”.&lt;/p&gt;

&lt;p&gt;There are two ways to look at software development: cost and value. It certainly costs money to build software, so the software must provide value that exceeds this cost to be viable. Software systems are built based on the anticipation of value and survive if they manage to meet or exceed that expectation. When an organization becomes cost-obsessed with software, it suggests either that the anticipated value is low, or that it has realized the value won’t materialize. It’s better to bravely abandon attempts with such a thin payoff.&lt;/p&gt;

&lt;p&gt;The sweet spot for software is where the value is highly likely to be obtained, or where there’s a chance of it providing a huge return on investment, so that in any 10 attempts to create value with software, a single success would pay for all the attempts and return a profit.&lt;/p&gt;

&lt;p&gt;When you consider how software is a bet, it divides software delivery approaches into two categories: cost-focused or value-focused. Traditional project management works to keep the promise of cost and timeline. Modern agile methods try to increase the probability of the bet succeeding by adjusting course as you learn more about the problem you’re trying to solve.&lt;/p&gt;

&lt;p&gt;And here’s the crucial insight. When you manage costs effectively, the best you can achieve is zero cost. When you seek out value, there is no absolute limit to how much value you could produce. It could be twice the cost, ten times the cost, or 1,000 times the cost. There is far more upside than downside if you’re creating valuable software.&lt;/p&gt;

&lt;p&gt;Cost-first approaches to software delivery decrease the probability of success, and one (of many) reasons is that it damages developer experience.&lt;/p&gt;

&lt;h2&gt;
  
  
  DevEx is economics, not hugs
&lt;/h2&gt;

&lt;p&gt;For whatever reason, the universe decided that you must treat people well, whether you like them or not. Even when you examine organizational culture through the lens of cold, hard business goals, you’ll find that unhealthy cultures are less successful than healthy ones. You can be a philanthropist or a capitalist; either way, you have to treat your employees well, or it will damage the thing you care about.&lt;/p&gt;

&lt;p&gt;Here’s a simple way it plays out.&lt;/p&gt;

&lt;p&gt;The developers need a bit more screen real estate so they can display more information in front of them without having to switch between background and foreground apps. Additional monitors incur costs, and a cost-focused organization will likely deny the request. Developers will have a lower fully loaded cost, but produce less value. A value-focused organization sees the potential returns and their developers will get more screen space, be less frustrated, produce better work in a shorter time, and produce a lot of value.&lt;/p&gt;

&lt;p&gt;Having an extra monitor moves the needle a little, but it’s a strong signal. Once an organization chooses between the cost or value pathways, it tends to stick to that decision. That means it’s not just monitors; it’s also chairs, code editors, refactoring tools, test tools, and automation tools. The experience diverges further with each decision made based on cost rather than value.&lt;/p&gt;

&lt;p&gt;Another individual who understood the concept of developer experience was Joel Spolsky. He created &lt;a href="https://www.joelonsoftware.com/2000/08/09/the-joel-test-12-steps-to-better-code/" rel="noopener noreferrer"&gt;the Joel Test&lt;/a&gt; as a “highly irresponsible, sloppy test to rate the quality of a software team.” The Joel Test has items like “Do programmers have quiet working conditions?” and “Do you use the best tools money can buy?”&lt;/p&gt;

&lt;p&gt;I haven’t met Joel, so I can’t speak for his motivation, but I don’t need to know if he was motivated by kindness or cash. The result was an excellent workplace for developers and phenomenal value creation; a win-win, as Stephen Covey called it. Spolsky’s most famous products, Trello and Stack Overflow, sold for $425 million and $1.8 billion, respectively.&lt;/p&gt;

&lt;h2&gt;
  
  
  You don’t need to make it easy
&lt;/h2&gt;

&lt;p&gt;There’s a certain amount of inherent complexity to writing great software. You must fully grasp a problem, have a strong opinion about how to solve it, and be able to execute on your plans to make it happen. Developers don’t need protection from the difficulty of building software; they need minimal unnecessary complexity from tools, processes, and the workplace environment.&lt;/p&gt;

&lt;p&gt;There was a trend that prioritized developer comfort above all other needs, which meant providing them with frameworks to tame complexity. The frameworks made development easier, but limited a developer’s options to the extent that it damaged the user experience. User needs were subordinated to developer ease, which is wrong and somewhat patronizing to developers.&lt;/p&gt;

&lt;p&gt;It’s not developer experience if you’re using frameworks that improve the ease of development while annoying those trying to use the software. Developer experience means providing the right environment and tools for developers to build valuable software. Software that doesn’t surprise people with a new paper cut every 5 minutes, pushing them ever closer to demanding an alternative solution.&lt;/p&gt;

&lt;p&gt;Think instead of how we set up a surgeon for success. A sterile room, excellent lighting, high-quality equipment, and working with skilled individuals who can anticipate and respond as the situation unfolds. Surgeon experience is centered around a shared goal of achieving optimal patient outcomes. We don’t simplify the scenario by removing things like the need to prevent infection; we make it possible to handle it well.&lt;/p&gt;

&lt;p&gt;Developer experience is the same. We don’t choose easier problems to solve; we set developers up to succeed at solving hard problems.&lt;/p&gt;

&lt;h2&gt;
  
  
  Modern DevEx and Platform Engineering
&lt;/h2&gt;

&lt;p&gt;With the rise of Platform Engineering, developer experience has been largely absorbed into it. Your organization might have a DevEx team, a platform team, or even both. Across the industry, the two teams share more commonalities than differences. Of a list of 30 features offered by platform and DevEx teams compiled by DX, only 4 were exclusive to a single discipline.&lt;/p&gt;

&lt;p&gt;Platform only:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Certificate management&lt;/li&gt;
&lt;li&gt;DNS&lt;/li&gt;
&lt;li&gt;Networking&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;DevEx only:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Developer training and education&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Everything else sits along the scale from DevEx to Platform Engineering; each feature may be more common in one discipline or the other, but can be found in both.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fobv8yuowvv3nqkwkm3t2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fobv8yuowvv3nqkwkm3t2.png" alt="Comparing DevEx and platform teams" width="800" height="976"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Platform Engineering and developer experience both build on Benington’s early thoughts and Spolsky’s belief that if we provide developers with the right environment and the best tools, we can amplify their skills and generate lots of value. Forming teams around this idea helps standardize and scale the approach, rather than each team being subjected to differing views based on management styles or simply not knowing what they don’t know.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://newsletter.getdx.com/p/devprod-headcount-benchmarks-q1-2026" rel="noopener noreferrer"&gt;Q1 2026 DevProd headcount benchmarking report&lt;/a&gt; from DX highlights how well this scaling works. Rather than costing a fixed percentage of your engineering organization’s headcount, developer productivity teams scale non-linearly, with their ratio shrinking as the number of engineers increases. This makes sense, as their work is being reused, unlike approaches that work within individual teams and depend on teams having access to the necessary skills and knowledge.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Engineers&lt;/th&gt;
&lt;th&gt;Productivity headcount&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;200-600&lt;/td&gt;
&lt;td&gt;5.1%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;600-1000&lt;/td&gt;
&lt;td&gt;4.2%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1000+&lt;/td&gt;
&lt;td&gt;3.49%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
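&lt;p&gt;The non-linear scaling can be sketched as a step function over the tiers in the table. The percentages come from the report; treating the tiers as hard boundaries is my own simplification:&lt;/p&gt;

```python
# A minimal sketch of the benchmark tiers in the table above. The tier
# boundaries and percentages are from the DX report; applying them as a
# step function is my own simplification.

def productivity_headcount(engineers: int) -> float:
    """Estimate developer productivity headcount for an engineering org."""
    if engineers >= 1000:
        ratio = 0.0349
    elif engineers >= 600:
        ratio = 0.042
    else:
        ratio = 0.051
    return engineers * ratio

# The ratio shrinks as the org grows, so headcount grows sub-linearly:
for size in (400, 800, 2000):
    print(size, round(productivity_headcount(size), 1))
```

&lt;p&gt;Doubling the engineering org from 1,000 to 2,000 people doesn’t double the productivity team, because the platform and tooling work is reused across every extra team.&lt;/p&gt;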

&lt;p&gt;That’s not to say the goal is to have the smallest possible teams. The goal is to unlock the value you create by developing software. If having half of all engineers in productivity roles resulted in the highest levels of value creation, that would be the right mix. There is likely a point of diminishing returns if you approach 10-15%, but you should be testing this by tracking meaningful outcomes for your organization.&lt;/p&gt;

&lt;p&gt;Make sure developers have the right environment and the best tools so they can generate the most value for your organization.&lt;/p&gt;

</description>
      <category>devex</category>
      <category>software</category>
      <category>culture</category>
    </item>
    <item>
      <title>Snake Oil, Rituals, and Why We’re Wrong To Burn It All Down</title>
      <dc:creator>Steve Fenton</dc:creator>
      <pubDate>Tue, 17 Mar 2026 08:24:59 +0000</pubDate>
      <link>https://forem.com/_steve_fenton_/snake-oil-rituals-and-why-were-wrong-to-burn-it-all-down-5g9l</link>
      <guid>https://forem.com/_steve_fenton_/snake-oil-rituals-and-why-were-wrong-to-burn-it-all-down-5g9l</guid>
      <description>&lt;p&gt;&lt;em&gt;How to benefit from old knowledge without making old mistakes.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The term “snake oil salesman” is often used to describe individuals who engage in deceptive marketing practices. Wild west characters like &lt;a href="https://en.wikipedia.org/wiki/Clark_Stanley" rel="noopener noreferrer"&gt;Clark Stanley&lt;/a&gt; advertised their snake oil as a wondrous cure-all remedy. But in 1916, the U.S. government’s Bureau of Chemistry tested the liniment, found it to be dramatically overpriced and of limited value, and Stanley was fined $20.&lt;/p&gt;

&lt;p&gt;Yet that’s not the end of the story.&lt;/p&gt;

&lt;h2&gt;
  
  
  You Probably Use Snake Oil
&lt;/h2&gt;

&lt;p&gt;Snake oil wasn’t entirely purposeless. While it’s true that it didn’t match the claims on the bottle, certain ingredients, such as capsaicin and camphor, proved valuable when used for valid purposes.&lt;/p&gt;

&lt;p&gt;Capsaicin, derived from chili peppers, is now used in skin-applied pain relief products to relieve muscular and joint pain. It’s an FDA-approved therapeutic treatment. Camphor is also commonly used as a counter-irritant, helping relieve itching from insect bites. It’s also the go-to ingredient for makers of chest rubs, which you’ve likely used as a decongestant when you’ve had a cold.&lt;/p&gt;

&lt;p&gt;So, while snake oil failed to match the wild claims of its peddlers, it wasn’t completely useless. This is also true in the software industry, so being able to separate the valuable ingredients from debunked software delivery recipes is a crucial skill.&lt;/p&gt;

&lt;h2&gt;
  
  
  Waterfall Is Bad, Mostly
&lt;/h2&gt;

&lt;p&gt;The term “waterfall” is often used as a catch-all name for phased software delivery, where tasks are performed in a sequential order that resembles a waterfall. When the lightweight rebellion overthrew the heavyweight models of the time, it created a mistaken belief that the phased software models were simply wrong.&lt;/p&gt;

&lt;p&gt;But the creators of these old models have been short-changed, as they had been telling us to work in this new way all along.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9o2yu35hmff21zq9cdt9.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9o2yu35hmff21zq9cdt9.jpg" alt="The many stages, thought processes, and tests of phased software delivery. Source: Production of Large Programs. Herbert D. Benington. 1956." width="800" height="802"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In his 1956 paper, “Production of Large Programs,” Herbert Benington discusses concepts that we would now label as platform engineering. Benington decried the idea of top-down programming, where a specification would be completed before the code was written. Winston Royce, in his 1970 paper “Managing the Development of Large Software Systems,” advised people to work in small incremental changes, as this would reduce complexity and allow organizations to roll back to a previous version if they moved in the wrong direction. These ideas resurfaced in Barry Boehm’s Spiral Model.&lt;/p&gt;

&lt;p&gt;The success of Agile was largely due to how the proponents of lightweight software delivery carefully extracted the good ingredients from the heavyweight recipe used in the popular processes that dominated the industry in the 1990s. They preserved the good parts and discarded large swathes of the toxic ones.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Recurring Crisis
&lt;/h2&gt;

&lt;p&gt;Our crisis comes from a tendency for management to be attracted to process and repelled by technical and cultural practices. They have a craving to reintroduce elements of phased software models that were expertly removed, and they want to discard crucial techniques that they don’t understand (or sound like hard work).&lt;/p&gt;

&lt;p&gt;Increasing process weight while decreasing technical excellence is a path to destruction.&lt;/p&gt;

&lt;p&gt;The canonical example of this management error comes from the early days of Agile. Around the time of the Agile Manifesto, the leading lightweight method was Extreme Programming (XP). It had similar process elements to Scrum, but also a map of interconnected technical practices that kept the cost of change low, which is the key to sustaining agility over the long term.&lt;/p&gt;

&lt;p&gt;For managers, Scrum’s exclusive focus on process was unthreatening, while XP’s emphasis on technical skills struck fear into their hearts. When it came to management, Scrum was top dog. As a result, we spent a decade spinning wheels until Dave Farley and Jez Humble revived and renewed the ideas of XP in their landmark book, “Continuous Delivery.”&lt;/p&gt;

&lt;p&gt;Of course, it didn’t stop with Scrum. When you don’t have technical excellence, the process elements of Scrum don’t deliver the outcomes that are expected of agile software development. As a result, management responded by bulking up the process to “work at scale” or “handle Enterprise needs”. The real motivation behind this was, of course, the comfort of process working against the complexity of reality, which can only be resolved by social and technical means.&lt;/p&gt;

&lt;p&gt;When DevOps first emerged, it could be summed up as breaking down the silos between development and operations. This idea was further refined to align the goals of the two teams and encourage them to collaborate more effectively. Everyone was on board with this until a decade of research revealed the need for those intimidating technical elements, the necessity of transformational leadership, and the value of lean product management. When DevOps got too real, the desire to run away intensified.&lt;/p&gt;

&lt;p&gt;The rush from complex realities to simplifications is a mistake we repeatedly make. Putting it unkindly, the fall of all good methods is the result of managers fleeing in terror from things they don’t understand as well as they should.&lt;/p&gt;

&lt;h2&gt;
  
  
  Finding The Real Remedy
&lt;/h2&gt;

&lt;p&gt;The software industry’s snake oil problem isn’t that we have too many frameworks and practices. It’s that we’ve lost the ability to think critically about them. We adopt wholesale when we should cherry-pick. We follow prescriptions when we should experiment.&lt;/p&gt;

&lt;p&gt;The most effective software teams aren’t the ones who’ve found the perfect framework. They’re the ones who’ve learned to extract value from imperfect ones, who understand that every practice is context-dependent, and who continuously question whether what they’re doing is actually helping.&lt;/p&gt;

&lt;p&gt;Snake oil taught us an important lesson, but it wasn’t the one we thought. It’s not that old remedies are worthless. It’s that we need to look past the marketing to understand what actually works. The same applies to software practices. Behind every framework, methodology, and best practice lies a kernel of insight that addresses a real problem.&lt;/p&gt;

&lt;p&gt;Our job isn’t to mindlessly follow or unthinkingly reject. It’s about understanding, extracting, and applying wisely.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>agile</category>
      <category>waterfall</category>
    </item>
    <item>
      <title>We Don’t Trust AI (and That’s a Good Thing)</title>
      <dc:creator>Steve Fenton</dc:creator>
      <pubDate>Mon, 09 Mar 2026 15:57:05 +0000</pubDate>
      <link>https://forem.com/_steve_fenton_/we-dont-trust-ai-and-thats-a-good-thing-3oe6</link>
      <guid>https://forem.com/_steve_fenton_/we-dont-trust-ai-and-thats-a-good-thing-3oe6</guid>
      <description>&lt;p&gt;&lt;strong&gt;Why maintaining a healthy skepticism gets you better outcomes.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One of my old hobbies was writing for independent music magazines, such as Spill Magazine (distributed free at music venues) and DV8 (distributed free at hair salons). Over the years, I saw hundreds of unsigned bands and learned a crucial lesson: Amplification makes everything you do really loud, but it doesn’t fundamentally change whether what you’re doing is good or bad.&lt;/p&gt;

&lt;p&gt;This law of amplification applies equally to software development, according to &lt;a href="https://dora.dev/research/2025/" rel="noopener noreferrer"&gt;DORA’s State of AI-assisted Software Development report&lt;/a&gt;. AI is an amplifier that will boost the volume of your software delivery capability, whether good or bad.&lt;/p&gt;

&lt;p&gt;And this is why I find the report’s findings on trust so reassuring.&lt;/p&gt;

&lt;h2&gt;
  
  
  We Don’t Trust AI
&lt;/h2&gt;

&lt;p&gt;The report found that AI is being used practically everywhere. Almost everyone (90%) is using AI for their work and believes it increases their productivity (80%) and code quality (59%). But they don’t trust it. In fact, when asked whether they trust AI-generated output, the response was an overwhelmingly subdued “somewhat”.&lt;/p&gt;

&lt;p&gt;This has led many people to ponder how we can increase trust in AI. There’s a perception that if we can get technical people to trust it, we’ll get even bigger gains. However, this is not an outcome we should strive for.&lt;/p&gt;

&lt;p&gt;One factor contributing to the successful adoption of AI is undoubtedly a healthy level of skepticism regarding the answers it provides. Encouraging people to increase their trust in AI can reduce agency, diminish personal responsibility, and lower vigilance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Absolute Trust Not Required
&lt;/h2&gt;

&lt;p&gt;Successful software developers have acquired critical thinking skills that enable them to envision potential pitfalls and anticipate how things might go wrong. When you create software used at scale, scenarios you perceive as atypical occur frequently.&lt;/p&gt;

&lt;p&gt;When I worked on a platform used by global automotive giants, we would process over 4 million requests in just 5 minutes. We were working on a feature, and my mind was working through potential failure scenarios and edge cases. When I highlighted a potential bear trap, the business folks would often dismiss it. “The chances of that happening are a million to one,” they said. However, that meant it could happen more than 1,152 times each day, so we had to accommodate it.&lt;/p&gt;
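&lt;p&gt;The arithmetic behind that exchange is worth making explicit. A quick back-of-envelope check, using only the traffic numbers above:&lt;/p&gt;

```python
# Back-of-envelope check of the "million to one" anecdote above.
requests_per_5_min = 4_000_000
windows_per_day = (24 * 60) // 5              # 288 five-minute windows
requests_per_day = requests_per_5_min * windows_per_day

# "The chances of that happening are a million to one"
expected_daily_events = requests_per_day / 1_000_000
print(expected_daily_events)                  # 1152.0 times a day
```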

&lt;p&gt;When developers have a skeptical mindset, it’s healthy. They are thinking at scale and preventing a constant series of disruptive events. My team was following the “you build it, you run it” pattern, so we were highly motivated to silence the pager by creating robust software.&lt;/p&gt;

&lt;p&gt;Great developers can think ahead and prevent problems before they write a single line of code. Having low trust in AI-generated output is a key aspect of this mindset.&lt;/p&gt;

&lt;h2&gt;
  
  
  The AI Model Is The Q&amp;amp;A Model
&lt;/h2&gt;

&lt;p&gt;Though AI is often considered disruptive, it usually turns out that existing models can (and should) be applied. Those who don’t understand this are relearning lessons on small batches and user-centricity, as AI only exacerbates the problem of changing too much at once and over-investing in a feature idea before learning whether it’s helpful to users.&lt;/p&gt;

&lt;p&gt;Similarly, we have an existing model we can apply to AI-generated code. The Q&amp;amp;A model.&lt;/p&gt;

&lt;p&gt;When you find an answer on Stack Overflow, you don’t just copy and paste it into your application. Answers on these sites often contain a few crucial lines of code that directly address the question, as well as many additional lines that complete the example. There is some risk in taking those essential lines and even more in taking the wrapping ones.&lt;/p&gt;

&lt;p&gt;You’ll see occasional comments from developers highlighting the dangers of those wrapping lines, and while they’re not wrong, the answers would be harder to follow if every wrapping line were expanded into fully production-ready code.&lt;/p&gt;

&lt;p&gt;Experienced developers use the answer to understand how to solve their problem and then write their own solution, or make substantial adjustments to the code in the answer. We should apply these same reservations to all code we didn’t author, whether it’s from a Q&amp;amp;A site or from an AI-assistant. There’s no reason to trust the AI-generated code more than you would the answer on a Q&amp;amp;A site that likely formed a part of the training data in the first place.&lt;/p&gt;

&lt;h2&gt;
  
  
  AI Warned You
&lt;/h2&gt;

&lt;p&gt;Skepticism over AI-generated code shouldn’t be a controversial stance. The tools themselves provide these warnings when you start using them. Everyone using coding assistants and AI chat has clicked past a message such as: “ChatGPT can make mistakes. Check important info.” We’d be foolish to place high trust in these tools, and the outcomes would be worse if we did.&lt;/p&gt;

&lt;p&gt;While AI assistance is relatively new, experienced software developers are applying healthy models for handling the code it produces. That’s why our enthusiasm for toil reduction is best served by muted trust levels.&lt;/p&gt;

</description>
      <category>ai</category>
    </item>
    <item>
      <title>How To Measure AI’s Organizational Impact</title>
      <dc:creator>Steve Fenton</dc:creator>
      <pubDate>Mon, 02 Mar 2026 08:16:22 +0000</pubDate>
      <link>https://forem.com/_steve_fenton_/how-to-measure-ais-organizational-impact-54ji</link>
      <guid>https://forem.com/_steve_fenton_/how-to-measure-ais-organizational-impact-54ji</guid>
      <description>&lt;p&gt;When organizations introduce AI, they often make a critical error: they create entirely new metrics to measure its impact. This approach misses the fundamental truth that AI is a tool to help achieve existing goals, not a reason to change what success looks like.&lt;/p&gt;

&lt;h2&gt;
  
  
  Your Goals Haven’t Changed
&lt;/h2&gt;

&lt;p&gt;Consider the difference between Formula 1 racing and EcoRally Scotland. Formula 1 teams optimize for speed — whoever crosses the finish line first wins. EcoRally teams have a completely different challenge: complete a 500-kilometer route with the best regularity score while using the least energy possible.&lt;/p&gt;

&lt;p&gt;These teams need different strategies, different driving styles, and different metrics. The goals determine everything else.&lt;/p&gt;

&lt;p&gt;The same principle applies to your organization. When you introduce AI, your fundamental purpose remains unchanged. You still want to create the best quality speakers, save bees, or deliver whatever value you were creating before. AI is simply a new tool to help you achieve those existing goals more effectively.&lt;/p&gt;

&lt;h2&gt;
  
  
  Stick With What Already Works
&lt;/h2&gt;

&lt;p&gt;Organizations often have sophisticated measurement systems in place — financial metrics, mission-based indicators, and proxy measures that track different parts of their value stream. If you’ve already established that software delivery performance correlates with organizational outcomes, for example, then continue using those same measures to evaluate AI’s impact.&lt;/p&gt;

&lt;p&gt;The danger lies in creating new metrics specifically for AI adoption. These measures rarely connect to meaningful business outcomes and can lead you to optimize for activities that don’t actually move the needle on what matters most.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Local Optimization Trap
&lt;/h2&gt;

&lt;p&gt;Here’s a common scenario: A development team starts using AI and reduces their feature delivery time from 16 hours to 12 hours — a 25% improvement that looks impressive on paper. However, when you examine the entire value stream, the lead time from customer request to delivered value remains unchanged at two weeks.&lt;/p&gt;
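&lt;p&gt;A quick sketch shows why the local gain vanishes in the end-to-end number. The 16-to-12-hour figures are from the scenario above; treating the rest of the two-week lead time as wait states is my own illustrative assumption:&lt;/p&gt;

```python
# Illustrative only: the 16h -> 12h development figures are from the
# scenario above; treating the rest of the two-week lead time as wait
# states is an assumption to show the whole-system effect.
HOURS_PER_WEEK = 24 * 7

dev_before, dev_after = 16, 12
lead_time = 2 * HOURS_PER_WEEK           # 336 hours, request to delivery
wait_time = lead_time - dev_before       # everything that isn't development

local_gain = 1 - dev_after / dev_before
end_to_end_gain = 1 - (wait_time + dev_after) / lead_time

print(f"{local_gain:.0%}")               # 25%
print(f"{end_to_end_gain:.1%}")          # 1.2%
```

&lt;p&gt;A 25% improvement in the development step moves the customer’s lead time by barely 1%, because development was never the bottleneck.&lt;/p&gt;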

&lt;p&gt;This isn’t a new problem. Eli Goldratt explored this in “The Goal,” and Lean Software Development emphasizes optimizing for the whole system, not individual parts. AI amplifies this challenge because it’s easy to see immediate productivity gains in specific areas while missing the broader organizational impact.&lt;/p&gt;

&lt;h2&gt;
  
  
  Focus On What Truly Matters
&lt;/h2&gt;

&lt;p&gt;Most teams collect numerous metrics that help them improve their work and maintain standards. But organizationally, only a few metrics are truly critical — usually some combination of financial performance and mission-based indicators that track whether you’re making the intended difference in the world.&lt;/p&gt;

&lt;p&gt;AI only delivers real value when its benefits flow through to these crucial numbers. Everything else is just interesting data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Research-Driven Implementation
&lt;/h2&gt;

&lt;p&gt;The most effective approach follows basic research principles: form a hypothesis, design a test, then evaluate the results. Before implementing AI, articulate clearly how you expect it to impact your mission-level metrics. If you’ve already established relationships between local measures (like software delivery performance) and organizational outcomes, you can build your hypothesis on these proven connections.&lt;/p&gt;

&lt;p&gt;Too many organizations reverse this process — they implement AI first, then scramble to find metrics that show improvement. This backwards approach leads to hockey-stick charts that look impressive but don’t translate to meaningful business value. It’s the difference between running a business and running a marketing campaign.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Path Forward
&lt;/h2&gt;

&lt;p&gt;AI will impact your business — that’s inevitable. But whether that impact is positive depends largely on how thoughtfully you approach adoption. By maintaining focus on your existing goals and proven metrics, you can ensure that AI becomes a genuine accelerator of your mission rather than an expensive distraction.&lt;/p&gt;

&lt;p&gt;The organizations that will see the greatest benefit from AI are those that resist the temptation to change their definition of success and instead use AI to achieve their existing definition of success more effectively.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>analytics</category>
    </item>
    <item>
      <title>Avoiding golden cages in Platform Engineering</title>
      <dc:creator>Steve Fenton</dc:creator>
      <pubDate>Fri, 27 Feb 2026 14:12:48 +0000</pubDate>
      <link>https://forem.com/_steve_fenton_/avoiding-golden-cages-in-platform-engineering-3nda</link>
      <guid>https://forem.com/_steve_fenton_/avoiding-golden-cages-in-platform-engineering-3nda</guid>
      <description>&lt;p&gt;I zipped up to London to share the &lt;a href="https://octopus.com/publications/platform-engineering-pulse" rel="noopener noreferrer"&gt;Platform Engineering Pulse report&lt;/a&gt; with the amazing &lt;a href="https://www.linkedin.com/company/londondevops/" rel="noopener noreferrer"&gt;London DevOps&lt;/a&gt; group. Afterwards, we spent several hours talking through some of the findings and I thought I’d write up some of the results of those discussion.&lt;/p&gt;

&lt;p&gt;In particular, the question of whether platforms should be optional or mandatory has a lot of talking points. It also intersects with the golden cages problem, as an inflexible platform intensifies the nastiest problems of mandatory platforms.&lt;/p&gt;

&lt;p&gt;Since we’re continually talking about golden paths, we’ll head to Oz to walk through the hazards and see how they come together to cause some serious problems.&lt;/p&gt;

&lt;h2&gt;
  
  
  The wizard of ops
&lt;/h2&gt;

&lt;p&gt;Imagine our house has been lifted by a hurricane and deposited in a strange land. The friendly people we meet tell us about a golden path, and off we go to see a wizard. We sing a little tune, because we don’t yet know about the hazards awaiting us along the way.&lt;/p&gt;

&lt;p&gt;Why in Oz didn’t the munchkins mention the wolves, crows, and flying monkeys? They certainly had plenty to say about the darn road.&lt;/p&gt;

&lt;p&gt;Let’s explore the wonderful and magical world of gold and platforms.&lt;/p&gt;

&lt;h2&gt;
  
  
  The golden path
&lt;/h2&gt;

&lt;p&gt;There’s a crucial distinction between a paved path and a golden path. I’m sure the munchkins would have had a verse or two on it.&lt;/p&gt;

&lt;p&gt;Paved paths are an analogy based on desire paths; those animal trails and shortcuts that, over time, create a signal that people want to travel between two points. If your platform is just the encoding of desire paths, it’s not terribly different from whatever came before. You’re missing an excellent opportunity to create something better.&lt;/p&gt;

&lt;p&gt;In product development, we know that you don’t just build what the user asks for. Instead, you explore their needs and design something better than what is currently available to them. The same goes for golden paths.&lt;/p&gt;

&lt;p&gt;If you take existing paths and pave them, you’re just transferring the complexity from developers to platform engineers. There is some benefit in splitting complexity (the developers handle the product’s complexity, and the platform engineers handle, well, whatever toxic waste is ejected into the paved path).&lt;/p&gt;

&lt;p&gt;Golden paths shouldn’t just divide the complexity; they should manage it. This is vital as we hope the golden path handles aspects that were absent from the well-trodden desire path. Things like cost control and security, which were previously applied haphazardly, if at all.&lt;/p&gt;

&lt;p&gt;We’re not trying to achieve the shortest path (through the quicksand, tar pit, and snake-infested rocks), but the shortest route that satisfies the constraints (such as safety).&lt;/p&gt;

&lt;p&gt;Got a golden path? Great, we’ve defeated the wolves, now it’s time to face the crows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Golden cages
&lt;/h2&gt;

&lt;p&gt;On day one, golden paths and golden cages look exactly the same. You only really find out you’re in a cage when the platform you use doesn’t let you do something. You only discover the lack of flexibility when you push on a surface.&lt;/p&gt;

&lt;p&gt;As standardization is high on the list of goals organizations have for Platform Engineering, it’s no surprise to find platform teams taking this to a rigid extreme. Developers may want 90% of what the golden cage offers, but if they can’t achieve the other 10%, they become frustrated. This is a contributing factor to cases where developers circumvent the rulebook and find a way to bypass the platform entirely.&lt;/p&gt;

&lt;p&gt;Signals of golden cages include a heck of a lot of negging the platform, highlighting its flaws, pointing out that development goals will be missed, and generally wearing down dev managers until they sign off on letting developers do things their own way.&lt;/p&gt;

&lt;p&gt;The solution isn’t to correct the developers. You have to correct the platform. It should provide extensibility points and escape hatches, so developers can achieve their goals within the policy constraints set by the organization.&lt;/p&gt;

&lt;p&gt;That’s the crows dismissed. Time for some flying monkeys.&lt;/p&gt;

&lt;h2&gt;
  
  
  Golden manacles
&lt;/h2&gt;

&lt;p&gt;Your organization is investing in a platform initiative. They have a bunch of goals in mind, often related to standardization, compliance, security, and cost control (and hopefully flow of value and developer experience). Why would they let all this time, effort, and attention be wasted by allowing development teams to choose whether to adopt it?&lt;/p&gt;

&lt;p&gt;It’s evident that platforms should be mandatory.&lt;/p&gt;

&lt;p&gt;Except this is the breeding ground for some very toxic outcomes. Everybody has a streak of rebellion running through them, and mandating anything is the perfect way to energize it. Why do so many British kids hate Shakespeare? Because teachers forced them to read it.&lt;/p&gt;

&lt;p&gt;Now, you may think your developers are low on the rebel-scale, so you’ll be okay. You can tell them what to do. The thing is, while those high on the rebel-scale will provide noisy dissent, those lower on the scale will be more silent and subversive. When a mandated platform introduces friction, everyone will rebel, and they’ll do so in their own wonderful and unique style.&lt;/p&gt;

&lt;p&gt;You &lt;em&gt;could&lt;/em&gt; have a great platform and make it mandatory, and maybe never see this problem. If you mix mandatory adoption with a golden cage, you’re guaranteed to see strange behaviors as teams thrash around trying to achieve their conflicting goals. Developers are supposed to be delivering valuable software, platform teams are trying to force compliance, and the two are in constant conflict.&lt;/p&gt;

&lt;p&gt;If this sounds familiar, it’s because DevOps was the solution to this problem. When you have two silos with conflicting goals, you’re in flying monkey territory without a monkey-proof umbrella. The solution to this mandated golden cage conundrum is simple. You need to align goals, encourage collaboration, and let people do the good work.&lt;/p&gt;

&lt;p&gt;In Platform Engineering, the best way to achieve collaborative bliss is:&lt;/p&gt;

&lt;p&gt;Make platforms optional to increase the desire in platform teams to understand the needs of platform users. Make it a shared goal to meet the organization’s policies, so development teams and platform teams both want the same thing.&lt;/p&gt;

&lt;p&gt;When developers and platform teams share the goal to meet policy, the platform becomes a far more appealing option. Other goals, like flow of value, should also be shared, so platform teams are motivated to solve the right problems for the development teams.&lt;/p&gt;

&lt;h2&gt;
  
  
  The silver slippers: Platform as a product
&lt;/h2&gt;

&lt;p&gt;This is why the prevailing advice from smart people is to treat the platform as a product and the developers as customers and prospects. Put a good feedback loop in place so you can see where the platform is starting to fit too tightly. Then, collaborate with your customers to provide a good way to flex where needed.&lt;/p&gt;

&lt;p&gt;Make your platform optional, and your policies mandatory.  &lt;/p&gt;

</description>
      <category>devops</category>
      <category>discuss</category>
      <category>management</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>It may not be Picasso, but it is Brunel</title>
      <dc:creator>Steve Fenton</dc:creator>
      <pubDate>Tue, 17 Feb 2026 10:17:40 +0000</pubDate>
      <link>https://forem.com/_steve_fenton_/it-may-not-be-picasso-but-it-is-brunel-1j5g</link>
      <guid>https://forem.com/_steve_fenton_/it-may-not-be-picasso-but-it-is-brunel-1j5g</guid>
      <description>&lt;p&gt;You want to paint a wall. The fastest way to start is to open the paint tin and start rolling out the color. Except that’s not the quickest way to paint a wall, as expert painters know. If you give a professional this job, they won’t touch the paint until the surface has been prepared.&lt;/p&gt;

&lt;p&gt;This involves removing previous wall coverings, filling holes and divots in the wall, and carefully sanding to achieve a perfect surface. When you apply paint to a prepared wall, it goes on smoothly, it looks great when it dries, and you need fewer coats (we amateur painters tend to use additional coats in attempts to disguise all the problems we left when we didn’t prepare).&lt;/p&gt;

&lt;p&gt;The preparation checklist:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fill holes and cracks&lt;/li&gt;
&lt;li&gt;Sand the walls&lt;/li&gt;
&lt;li&gt;Clean the walls&lt;/li&gt;
&lt;li&gt;Let the walls dry&lt;/li&gt;
&lt;li&gt;Apply paint&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You may need to add additional tasks to the list, such as removing mildew or priming the surface, if these are required in your expert judgment. This is the model professional decorators have refined over decades. It’s not glamorous, but it works.&lt;/p&gt;

&lt;p&gt;So far, so good.&lt;/p&gt;

&lt;h2&gt;
  
  
  Enter the robot
&lt;/h2&gt;

&lt;p&gt;No matter what you do for a living, someone wants you to do more of it in less time. In the software industry, we have scrummy marketing to blame for the relentless demand for “twice the work in half the time.”&lt;/p&gt;

&lt;p&gt;So what happens when we decide we want to paint faster? Someone buys a great big paint-spraying robot.&lt;/p&gt;

&lt;p&gt;The paint-spraying robot is 10x faster than a human at painting. It can cover 100 square meters per hour, while a human can only do 10 square meters an hour. It completes projects in 60% less time, and it can run 24/7, unlike those pesky humans who want to see their family and sleep. Of course, you need to input floor plans and designate non-paintable areas. Additionally, there’s a 20-minute setup time, as well as a 30-minute post-painting clean cycle.&lt;/p&gt;

&lt;p&gt;Side panel: There are clues in the claims for the robot that tell us things are more complicated than they first appear. The robot is 10x faster, but projects complete only 2–3x faster. Something outside of blasting paint onto the wall is at play here. I’ve worked in enough organizations that purchased based on the 10x claim and then tripped and fell down the stairs of their own excitement.&lt;/p&gt;

&lt;p&gt;Oh, and there’s one more thing. It doesn’t prepare your walls.&lt;br&gt;
If you’ve ever painted walls without preparing them, you’re familiar with the kinds of problems that causes. The finish doesn’t look good, it’s not long-lasting, and your modern lighting turns the wall into a three-dimensional topographic map of past picture hook holes. Over time, an odd dark patch emerges: a reminder of the time little Lily missed her mouth with the Calpol and made an impromptu purple Rorschach test across the wall.&lt;/p&gt;

&lt;p&gt;Painting, it turns out, is a complex process. We may long for a reality where painting is easy, but we live in one where it’s not.&lt;/p&gt;

&lt;h2&gt;
  
  
  Rediscovering the wheel, one bruise at a time
&lt;/h2&gt;

&lt;p&gt;And that’s why the robot-first painting team is currently providing a fountain of incredible insights as they try to maximize their return on investment.&lt;/p&gt;

&lt;p&gt;They’re discovering that asking people what color they want to paint their walls results in happier customers. An idea about setting windows to be non-paintable is emerging. Some bright spark has worked out that filling cracks before painting achieves a better end result.&lt;/p&gt;

&lt;p&gt;Of course, they haven’t discovered everything on the simple checklist used by every professional decorator. It will take time for them to work it all out. It took professionals time to work it out in the first place, and these pioneers have decided to start from scratch instead of building on existing knowledge.&lt;/p&gt;

&lt;p&gt;Eventually, they’ll have a pre-robot preparation checklist that looks something like the one we had in the first place:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fill holes and cracks&lt;/li&gt;
&lt;li&gt;Sand the walls&lt;/li&gt;
&lt;li&gt;Clean the walls&lt;/li&gt;
&lt;li&gt;Let the walls dry&lt;/li&gt;
&lt;li&gt;Apply paint&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  We’ve seen this movie before
&lt;/h2&gt;

&lt;p&gt;Of course, this isn’t about painting at all. It’s about software delivery.&lt;/p&gt;

&lt;p&gt;We spent decades refining the best way to build software. It’s called Continuous Delivery. We have even expanded this into the DevOps model, which combines practices and capabilities that work well with Continuous Delivery, such as generative workplace culture, lean product management, and transformational leadership.&lt;/p&gt;

&lt;p&gt;We literally have diagrams that show how all these things come together to improve software delivery. That’s right, “software delivery”. Not feature development time. Not coding speed. The whole darn thing.&lt;/p&gt;

&lt;p&gt;And right now, I’m witnessing the most surreal déjà vu of my career.&lt;/p&gt;

&lt;p&gt;Many people using AI are discovering Continuous Delivery practices through bruising experiences. There’s a barrage of social posts from AI-first developers who are finding out from scratch why version control is a good idea, why they ought to work in small batches that are frequently integrated with the main branch, and why their builds shouldn’t take an hour.&lt;/p&gt;

&lt;p&gt;It’s funny, while also being not at all funny.&lt;/p&gt;

&lt;p&gt;In the 2000s, as I was first finding my way through Agile, Extreme Programming, and Lean, we drew on books and articles to inform our continuous improvement process. I worked on a team that ditched Scrum and developed a method that made sense for our work. We rapidly went from 6-month cycles to having always-shippable code, with a new version deploying every 3 hours or so.&lt;/p&gt;

&lt;p&gt;As a result, there’s a whole generation of lean/agile software developers for whom AI doesn’t provide a significant boost. To us, AI is just another tool, like auto-complete or a compiler. Helpful; not transformational.&lt;/p&gt;

&lt;p&gt;We refined the elements of high-performance software delivery through numerous iterations and adjustments.&lt;/p&gt;

&lt;h2&gt;
  
  
  The paint dries on this one
&lt;/h2&gt;

&lt;p&gt;Continuous Delivery remains the best-known way to deliver software.&lt;/p&gt;

&lt;p&gt;A team using only Continuous Delivery will beat a team using only AI, because any benefit you get from AI will be lost at the first bottleneck it encounters on its way to production. Teams that start with Continuous Delivery will be more successful with AI because they already have fast builds, automated deployment pipelines, and solid technical practices that enable the fast flow of work.&lt;/p&gt;

&lt;p&gt;Essentially, AI has enabled low-performing development teams to experience some of the speed that comes with Continuous Delivery, but without the enabling practices. They’re getting paint on the wall, but they skipped all the prep work. So far, this hasn’t led them back to Continuous Delivery, but if they want to succeed, that’s where they need to start.&lt;/p&gt;

&lt;p&gt;If you’re serious about improving your productivity, the answers that have proved so elusive with AI have likely been waiting for us all along in Continuous Delivery.&lt;/p&gt;

&lt;p&gt;You can buy the robot if you want. Just don’t be surprised to find your windows painted over and your wall covered in lumps, bumps, and cat-shaped silhouettes.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
