<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Codequal</title>
    <description>The latest articles on Forem by Codequal (@codequal).</description>
    <link>https://forem.com/codequal</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3810804%2F83680236-7fb5-453c-ba02-ecad05450212.png</url>
      <title>Forem: Codequal</title>
      <link>https://forem.com/codequal</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/codequal"/>
    <language>en</language>
    <item>
      <title>I Analyzed 10 Major Open-Source Repos, and Every Single One Had Significant Drift</title>
      <dc:creator>Codequal</dc:creator>
      <pubDate>Tue, 24 Mar 2026 00:57:06 +0000</pubDate>
      <link>https://forem.com/codequal/i-analyzed-10-major-open-source-repos-and-every-single-one-had-significant-drift-15o5</link>
      <guid>https://forem.com/codequal/i-analyzed-10-major-open-source-repos-and-every-single-one-had-significant-drift-15o5</guid>
      <description>&lt;p&gt;Google's own DORA research shows a paradox: AI tools increase developer throughput while &lt;em&gt;decreasing&lt;/em&gt; delivery stability.&lt;/p&gt;

&lt;p&gt;I wanted to understand why. So I built an open-source CLI that detects development process drift, and ran it against 10 of the most widely-used repos in the industry: &lt;strong&gt;&lt;a href="https://codequal.dev/reports/react" rel="noopener noreferrer"&gt;React&lt;/a&gt;, &lt;a href="https://codequal.dev/reports/nextjs" rel="noopener noreferrer"&gt;Next.js&lt;/a&gt;, &lt;a href="https://codequal.dev/reports/vscode" rel="noopener noreferrer"&gt;VS Code&lt;/a&gt;, &lt;a href="https://codequal.dev/reports/aws-cdk" rel="noopener noreferrer"&gt;AWS CDK&lt;/a&gt;, &lt;a href="https://codequal.dev/reports/google-cloud-python" rel="noopener noreferrer"&gt;Google Cloud Python&lt;/a&gt;, &lt;a href="https://codequal.dev/reports/supabase" rel="noopener noreferrer"&gt;Supabase&lt;/a&gt;, &lt;a href="https://codequal.dev/reports/langchain" rel="noopener noreferrer"&gt;LangChain&lt;/a&gt;, &lt;a href="https://codequal.dev/reports/stripe-node" rel="noopener noreferrer"&gt;Stripe Node&lt;/a&gt;, &lt;a href="https://codequal.dev/reports/workers-sdk" rel="noopener noreferrer"&gt;Cloudflare Workers SDK&lt;/a&gt;, and &lt;a href="https://codequal.dev/reports/plaid-node" rel="noopener noreferrer"&gt;Plaid&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Every single one had significant drift signals across multiple dimensions. Here's what I found.&lt;/p&gt;

&lt;h3&gt;
  
  
  CI Builds Are Silently Exploding
&lt;/h3&gt;

&lt;p&gt;Every project showed elevated CI/build times. Not failures — the builds still pass. They just take dramatically longer than baseline, and nobody notices because there's no alarm for "your CI is 100x slower than it was 3 months ago."&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Project&lt;/th&gt;
&lt;th&gt;Build Duration vs Baseline&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://codequal.dev/reports/google-cloud-python" rel="noopener noreferrer"&gt;Google Cloud Python&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;1,552x&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://codequal.dev/reports/stripe-node" rel="noopener noreferrer"&gt;Stripe Node&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;1,361x&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://codequal.dev/reports/vscode" rel="noopener noreferrer"&gt;VS Code&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;946x&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://codequal.dev/reports/langchain" rel="noopener noreferrer"&gt;LangChain&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;889x&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://codequal.dev/reports/supabase" rel="noopener noreferrer"&gt;Supabase&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;611x&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://codequal.dev/reports/aws-cdk" rel="noopener noreferrer"&gt;AWS CDK&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;112x&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://codequal.dev/reports/nextjs" rel="noopener noreferrer"&gt;Next.js&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;104x&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://codequal.dev/reports/workers-sdk" rel="noopener noreferrer"&gt;Cloudflare Workers&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;75x&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://codequal.dev/reports/react" rel="noopener noreferrer"&gt;React&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;13x&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://codequal.dev/reports/plaid-node" rel="noopener noreferrer"&gt;Plaid&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;3.2x&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The median spike across these 10 repos is &lt;strong&gt;~360x baseline&lt;/strong&gt;. These aren't flaky tests or broken builds — they're passing builds that silently consume orders of magnitude more compute than they used to.&lt;/p&gt;
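&lt;p&gt;There's no standard tool that raises this alarm, but the check itself is simple. A minimal sketch — the function, window size, and 10x threshold here are my own illustration, not EE's actual implementation:&lt;/p&gt;

```python
# Compare the latest CI run against a rolling median baseline and
# flag large ratios. A build that grows from ~45s to 6 hours still
# passes, but the ratio makes the drift impossible to miss.
from statistics import median

def ci_drift_ratio(durations_sec, window=20):
    """Latest duration divided by the median of the preceding runs."""
    if len(durations_sec) < 2:
        return 1.0
    baseline = median(durations_sec[-window - 1:-1])  # exclude the latest run
    return durations_sec[-1] / baseline

history = [44, 46, 45, 43, 47, 45, 44, 46, 45, 21600]
ratio = ci_drift_ratio(history)
if ratio > 10:
    print(f"CI drift: latest build is {ratio:.0f}x baseline")
```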

&lt;h3&gt;
  
  
  File Change Explosions Reveal Architectural Drift
&lt;/h3&gt;

&lt;p&gt;When a single commit touches thousands of files across unrelated directories, something structural has shifted. This metric — file dispersion combined with files changed — was elevated in all 10 projects.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Project&lt;/th&gt;
&lt;th&gt;Files Changed vs Baseline&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://codequal.dev/reports/aws-cdk" rel="noopener noreferrer"&gt;AWS CDK&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;14,464x&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://codequal.dev/reports/vscode" rel="noopener noreferrer"&gt;VS Code&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;13,593x&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://codequal.dev/reports/google-cloud-python" rel="noopener noreferrer"&gt;Google Cloud Python&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;9,113x&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://codequal.dev/reports/nextjs" rel="noopener noreferrer"&gt;Next.js&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;215x&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://codequal.dev/reports/react" rel="noopener noreferrer"&gt;React&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;208x&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://codequal.dev/reports/stripe-node" rel="noopener noreferrer"&gt;Stripe Node&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;150x&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://codequal.dev/reports/langchain" rel="noopener noreferrer"&gt;LangChain&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;87x&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://codequal.dev/reports/supabase" rel="noopener noreferrer"&gt;Supabase&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;67x&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://codequal.dev/reports/plaid-node" rel="noopener noreferrer"&gt;Plaid&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;65x&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://codequal.dev/reports/workers-sdk" rel="noopener noreferrer"&gt;Cloudflare Workers&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;52x&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The root causes vary — Google Cloud Python has Librarian bots mass-updating image dependencies, Next.js has Turbopack switching chunk hashes from hex to base40, Plaid has OpenAPI code generation. But the pattern is universal: large automated changes that no human reviews file by file.&lt;/p&gt;
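&lt;p&gt;The dispersion half of that metric is easy to approximate. A hedged sketch — counting distinct top-level directories per commit is my own simplification, not necessarily EE's formula:&lt;/p&gt;

```python
# A focused, hand-written change stays in one or two directories;
# a mass automated update (bot bumps, codegen) spreads wide.
def dispersion(paths):
    """Distinct top-level directories touched by one commit."""
    return len({p.split("/", 1)[0] for p in paths})

focused = ["src/parser.py", "src/lexer.py", "tests/test_parser.py"]
bot_sweep = ["packages/a/setup.py", "packages/b/setup.py",
             "docs/conf.py", "ci/image.txt", "tools/gen.py"]
print(dispersion(focused))    # → 2
print(dispersion(bot_sweep))  # → 4
```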

&lt;h3&gt;
  
  
  Co-Change Novelty Drops to Zero
&lt;/h3&gt;

&lt;p&gt;This is the most subtle and arguably most important signal. "Co-change novelty" measures whether new combinations of files are changing together, or whether the same files keep getting modified as a group.&lt;/p&gt;

&lt;p&gt;When novelty drops, it means development has become pattern-locked — the same templates, the same file groups, the same automated workflows touching the same paths. It's the fingerprint of bot-driven or template-driven development.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;9 out of 10 repos&lt;/strong&gt; showed depressed co-change novelty:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Project&lt;/th&gt;
&lt;th&gt;Co-change Novelty vs Baseline&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://codequal.dev/reports/aws-cdk" rel="noopener noreferrer"&gt;AWS CDK&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;587x below&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://codequal.dev/reports/supabase" rel="noopener noreferrer"&gt;Supabase&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;587x below&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://codequal.dev/reports/vscode" rel="noopener noreferrer"&gt;VS Code&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;547x below&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://codequal.dev/reports/nextjs" rel="noopener noreferrer"&gt;Next.js&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;509x below&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://codequal.dev/reports/google-cloud-python" rel="noopener noreferrer"&gt;Google Cloud Python&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;439x below&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://codequal.dev/reports/react" rel="noopener noreferrer"&gt;React&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;109x below&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://codequal.dev/reports/workers-sdk" rel="noopener noreferrer"&gt;Cloudflare Workers&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;105x below&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://codequal.dev/reports/langchain" rel="noopener noreferrer"&gt;LangChain&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;88x below&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://codequal.dev/reports/plaid-node" rel="noopener noreferrer"&gt;Plaid&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;29.8x &lt;em&gt;above&lt;/em&gt; (OpenAPI generates novel pairings)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Release Cadence Is All Over the Map
&lt;/h3&gt;

&lt;p&gt;Deployment frequency — one of the four DORA metrics — showed extreme variance:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Project&lt;/th&gt;
&lt;th&gt;Release Cadence Deviation&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://codequal.dev/reports/google-cloud-python" rel="noopener noreferrer"&gt;Google Cloud Python&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;2,548,259x&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://codequal.dev/reports/workers-sdk" rel="noopener noreferrer"&gt;Cloudflare Workers&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;694,167x&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://codequal.dev/reports/react" rel="noopener noreferrer"&gt;React&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;69,548x&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://codequal.dev/reports/supabase" rel="noopener noreferrer"&gt;Supabase&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;229x&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://codequal.dev/reports/langchain" rel="noopener noreferrer"&gt;LangChain&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;220x&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://codequal.dev/reports/vscode" rel="noopener noreferrer"&gt;VS Code&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;104x&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://codequal.dev/reports/nextjs" rel="noopener noreferrer"&gt;Next.js&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;68x&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://codequal.dev/reports/aws-cdk" rel="noopener noreferrer"&gt;AWS CDK&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;13x&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://codequal.dev/reports/stripe-node" rel="noopener noreferrer"&gt;Stripe Node&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;11x&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://codequal.dev/reports/plaid-node" rel="noopener noreferrer"&gt;Plaid&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;These numbers look absurd, but they reflect real gaps between releases — periods where code accumulates without shipping, then gets batch-released. That accumulation is where drift compounds.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cross-Family Correlations: The Hidden Connections
&lt;/h3&gt;

&lt;p&gt;The most interesting findings come from &lt;em&gt;correlating&lt;/em&gt; signals across different dimensions. These patterns are invisible if you only monitor one signal at a time:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;CI duration correlates with files touched&lt;/strong&gt; — found in all 10 repos. More scattered commits = longer builds. Obvious in retrospect, but nobody tracks it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Release cadence correlates with code dispersion&lt;/strong&gt; — found in 8/10 repos. When releases slow down, code changes spread across unrelated areas.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dependency growth correlates with build time&lt;/strong&gt; — VS Code (283x dependency growth + 946x build growth), Cloudflare (30x + 75x).&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What This Means
&lt;/h3&gt;

&lt;p&gt;No single commit in any of these projects looks wrong. Tests pass. Reviews approve. The code is correct.&lt;/p&gt;

&lt;p&gt;But the &lt;em&gt;aggregate pattern&lt;/em&gt; drifts. Files spread. Builds slow. Releases stall. Dependencies grow. And because each change is individually fine, nobody raises an alarm.&lt;/p&gt;

&lt;p&gt;This is the drift problem. And it affects every project at scale — even the ones built by the best engineering organizations in the world.&lt;/p&gt;

&lt;h3&gt;
  
  
  It Pinpoints the Exact Commit
&lt;/h3&gt;

&lt;p&gt;Evolution Engine (EE) doesn't just say "your builds drifted." It identifies the specific commit where the deviation started — with a clickable link to the PR. For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;React's file explosion? Triggered by &lt;a href="https://codequal.dev/reports/react" rel="noopener noreferrer"&gt;&lt;code&gt;b16b768f&lt;/code&gt; — [compiler] Feature flag cleanup (#35825)&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;React's co-change novelty drop? Started at &lt;a href="https://codequal.dev/reports/react" rel="noopener noreferrer"&gt;&lt;code&gt;6853d7ab&lt;/code&gt; — [Perf Tracks] Prevent crash when accessing &lt;code&gt;$$typeof&lt;/code&gt;&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Every finding in the report links back to the trigger commit, so you're not hunting through &lt;code&gt;git log&lt;/code&gt; trying to figure out when things changed. You see the drift, click through to the commit, and understand the root cause.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Tool
&lt;/h3&gt;

&lt;p&gt;I built &lt;a href="https://github.com/alpsla/evolution-engine" rel="noopener noreferrer"&gt;Evolution Engine&lt;/a&gt; to detect this automatically. It's open source, runs locally (your code never leaves your machine), and works on any git repo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;evolution-engine
evo analyze &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The analysis is pure statistics — no AI APIs called. When you want deeper investigation, EE generates a structured prompt you paste into your own AI tool.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Full interactive reports:&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://codequal.dev/reports/react" rel="noopener noreferrer"&gt;React&lt;/a&gt; | &lt;a href="https://codequal.dev/reports/nextjs" rel="noopener noreferrer"&gt;Next.js&lt;/a&gt; | &lt;a href="https://codequal.dev/reports/vscode" rel="noopener noreferrer"&gt;VS Code&lt;/a&gt; | &lt;a href="https://codequal.dev/reports/aws-cdk" rel="noopener noreferrer"&gt;AWS CDK&lt;/a&gt; | &lt;a href="https://codequal.dev/reports/google-cloud-python" rel="noopener noreferrer"&gt;Google Cloud Python&lt;/a&gt; | &lt;a href="https://codequal.dev/reports/supabase" rel="noopener noreferrer"&gt;Supabase&lt;/a&gt; | &lt;a href="https://codequal.dev/reports/langchain" rel="noopener noreferrer"&gt;LangChain&lt;/a&gt; | &lt;a href="https://codequal.dev/reports/stripe-node" rel="noopener noreferrer"&gt;Stripe Node&lt;/a&gt; | &lt;a href="https://codequal.dev/reports/workers-sdk" rel="noopener noreferrer"&gt;Cloudflare Workers&lt;/a&gt; | &lt;a href="https://codequal.dev/reports/plaid-node" rel="noopener noreferrer"&gt;Plaid&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/alpsla/evolution-engine" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; | &lt;a href="https://pypi.org/project/evolution-engine/" rel="noopener noreferrer"&gt;PyPI&lt;/a&gt; | &lt;a href="https://codequal.dev" rel="noopener noreferrer"&gt;Website&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>opensource</category>
      <category>vibecoding</category>
      <category>programming</category>
    </item>
    <item>
      <title>What 10 Open Source Repos Reveal About Development Drift in the AI Era</title>
      <dc:creator>Codequal</dc:creator>
      <pubDate>Tue, 17 Mar 2026 23:58:57 +0000</pubDate>
      <link>https://forem.com/codequal/what-10-open-source-repos-reveal-about-development-drift-in-the-ai-era-10e2</link>
      <guid>https://forem.com/codequal/what-10-open-source-repos-reveal-about-development-drift-in-the-ai-era-10e2</guid>
      <description>&lt;p&gt;Every major tech company is pushing AI-generated code. Microsoft says 30% (targeting 80%). Google says 30%. Uber reports 65-72%. Amazon mandated 80% AI tool usage. Shopify made AI "mandatory."&lt;/p&gt;

&lt;p&gt;But Google's own DORA research shows a paradox: AI increases throughput while &lt;em&gt;decreasing&lt;/em&gt; delivery stability. Teams ship faster, but the code breaks more often.&lt;/p&gt;

&lt;p&gt;I wanted to understand &lt;em&gt;why&lt;/em&gt;. So I built &lt;a href="https://github.com/alpsla/evolution-engine" rel="noopener noreferrer"&gt;Evolution Engine&lt;/a&gt;, an open source CLI that detects development process drift — when patterns in commit history, CI builds, deployments, and dependency signals shift in ways that often precede production issues. Then I ran it on 10 major open source repos across cloud infrastructure, frontend frameworks, AI tooling, and developer platforms.&lt;/p&gt;

&lt;p&gt;No AI APIs are called during analysis. All pattern detection is deterministic and statistical. The tool runs entirely locally — your code never leaves your machine.&lt;/p&gt;

&lt;p&gt;Here's what I found.&lt;/p&gt;

&lt;h2&gt;
  
  
  The scale
&lt;/h2&gt;

&lt;p&gt;Across 10 repos, the tool analyzed over 130,000 commits, generating 250,000+ events across git, CI, deployment, and dependency signal families. It matched patterns from a knowledge base calibrated across 200+ open source repositories.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Result&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Repos analyzed&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Total commits&lt;/td&gt;
&lt;td&gt;130,000+&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Total events&lt;/td&gt;
&lt;td&gt;250,000+&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Signal families&lt;/td&gt;
&lt;td&gt;4 (git, CI, deployment, dependency)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Repos with significant drift&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;10 out of 10&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Average drift signals per repo&lt;/td&gt;
&lt;td&gt;6.6&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Average correlation patterns per repo&lt;/td&gt;
&lt;td&gt;24.3&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Every single repo had significant drift signals. Every one.&lt;/p&gt;

&lt;h2&gt;
  
  
  Finding 1: CI build times spike dramatically — and nobody notices
&lt;/h2&gt;

&lt;p&gt;The most consistent pattern across all 10 repos: CI build duration spikes that dwarf historical baselines.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Repo type&lt;/th&gt;
&lt;th&gt;Normal CI time&lt;/th&gt;
&lt;th&gt;Spike&lt;/th&gt;
&lt;th&gt;Deviation&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Cloud SDK (monorepo)&lt;/td&gt;
&lt;td&gt;~45 seconds&lt;/td&gt;
&lt;td&gt;6+ hours&lt;/td&gt;
&lt;td&gt;1,552x&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AI framework&lt;/td&gt;
&lt;td&gt;~95 seconds&lt;/td&gt;
&lt;td&gt;55 minutes&lt;/td&gt;
&lt;td&gt;889x&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cloud infrastructure toolkit&lt;/td&gt;
&lt;td&gt;~26 seconds&lt;/td&gt;
&lt;td&gt;70+ minutes&lt;/td&gt;
&lt;td&gt;111x&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Edge platform SDK&lt;/td&gt;
&lt;td&gt;~33 seconds&lt;/td&gt;
&lt;td&gt;60+ minutes&lt;/td&gt;
&lt;td&gt;74x&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Commerce framework&lt;/td&gt;
&lt;td&gt;~64 seconds&lt;/td&gt;
&lt;td&gt;8+ minutes&lt;/td&gt;
&lt;td&gt;43x&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Code editor&lt;/td&gt;
&lt;td&gt;~41 seconds&lt;/td&gt;
&lt;td&gt;23+ minutes&lt;/td&gt;
&lt;td&gt;34x&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Frontend framework&lt;/td&gt;
&lt;td&gt;~45 seconds&lt;/td&gt;
&lt;td&gt;6 minutes&lt;/td&gt;
&lt;td&gt;13x&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Fullstack framework&lt;/td&gt;
&lt;td&gt;~6 minutes&lt;/td&gt;
&lt;td&gt;32 minutes&lt;/td&gt;
&lt;td&gt;5x&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;These aren't gradual slowdowns — they're sudden spikes, often tied to a single commit or dependency change. The problem? Most teams don't track CI duration as a &lt;em&gt;process signal&lt;/em&gt;. They notice when builds fail, but a 34x slowdown that still passes? That drifts silently.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8 out of 10 repos&lt;/strong&gt; had CI spikes exceeding 10x their baseline. The median spike was 53x.&lt;/p&gt;

&lt;h2&gt;
  
  
  Finding 2: Release cadence gaps correlate with code spread
&lt;/h2&gt;

&lt;p&gt;When a repo's release cadence suddenly lengthens, it's almost always accompanied by increased code dispersion — changes spread across unrelated parts of the codebase.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Repo type&lt;/th&gt;
&lt;th&gt;Normal cadence&lt;/th&gt;
&lt;th&gt;Gap&lt;/th&gt;
&lt;th&gt;Slowdown&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Cloud SDK (monorepo)&lt;/td&gt;
&lt;td&gt;~2.9 hours&lt;/td&gt;
&lt;td&gt;22 days&lt;/td&gt;
&lt;td&gt;182x&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Commerce framework&lt;/td&gt;
&lt;td&gt;~1.5 days&lt;/td&gt;
&lt;td&gt;37 days&lt;/td&gt;
&lt;td&gt;24x&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cloud infrastructure toolkit&lt;/td&gt;
&lt;td&gt;~21 hours&lt;/td&gt;
&lt;td&gt;16.5 days&lt;/td&gt;
&lt;td&gt;18x&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Fullstack framework&lt;/td&gt;
&lt;td&gt;~6 days&lt;/td&gt;
&lt;td&gt;96 days&lt;/td&gt;
&lt;td&gt;16x&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Logging library&lt;/td&gt;
&lt;td&gt;~13 days&lt;/td&gt;
&lt;td&gt;200 days&lt;/td&gt;
&lt;td&gt;15x&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Frontend framework&lt;/td&gt;
&lt;td&gt;~28 days&lt;/td&gt;
&lt;td&gt;113 days&lt;/td&gt;
&lt;td&gt;4x&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;This correlation showed up as a known pattern in 8 out of 10 repos. When engineers touch more unrelated files per commit &lt;em&gt;and&lt;/em&gt; releases slow down, something structural has shifted — often a large refactoring, a dependency migration, or (increasingly) an AI-assisted batch change that touches more files than a human would.&lt;/p&gt;

&lt;h2&gt;
  
  
  Finding 3: Co-change novelty drops to zero
&lt;/h2&gt;

&lt;p&gt;"Co-change novelty" measures how often files that change together in a commit have changed together before. A score of 1.0 means entirely novel pairings. A score of 0.0 means the exact same files are changing together repeatedly.&lt;/p&gt;

&lt;p&gt;In &lt;strong&gt;9 out of 10 repos&lt;/strong&gt;, we found commits where co-change novelty dropped to zero — indicating repetitive, pattern-locked changes rather than organic development. This is a hallmark of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automated dependency bumps (bots touching the same lockfiles repeatedly)&lt;/li&gt;
&lt;li&gt;Code generation tools producing similar diffs&lt;/li&gt;
&lt;li&gt;AI-assisted changes that follow templates rather than addressing unique problems&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The interesting question: is this a problem? Sometimes repetitive changes are exactly right (automated security patches). But when novelty drops to zero &lt;em&gt;and&lt;/em&gt; CI times spike &lt;em&gt;and&lt;/em&gt; release cadence gaps appear, the correlation suggests something has gone wrong.&lt;/p&gt;

&lt;h2&gt;
  
  
  Finding 4: Merge-back commits create statistical blind spots
&lt;/h2&gt;

&lt;p&gt;Three repos had single commits touching 10,000-21,000+ files. These are merge-back commits in monorepos — technically expected, but they create extreme statistical outliers that mask real drift signals underneath.&lt;/p&gt;

&lt;p&gt;If your drift detection (or any metrics tool) doesn't account for these outliers, the signal-to-noise ratio collapses. A legitimate 34x CI spike looks insignificant next to a 14,000x files_touched outlier.&lt;/p&gt;
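&lt;p&gt;One way to keep the signal-to-noise ratio intact is robust statistics. A sketch using median + MAD instead of mean + standard deviation (the data and scoring here are illustrative):&lt;/p&gt;

```python
# One 21,000-file merge-back commit wrecks a mean/stdev baseline,
# but barely moves a median/MAD one — a genuine spike still scores high.
from statistics import median

def robust_score(history, x):
    """How many MADs x sits above the median of `history`."""
    med = median(history)
    mad = median(abs(v - med) for v in history) or 1.0
    return (x - med) / mad

files_touched = [3, 5, 4, 6, 5, 4, 21000]  # one merge-back outlier
print(robust_score(files_touched, 120))    # the real spike still stands out
```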

&lt;h2&gt;
  
  
  Finding 5: Cross-family correlations reveal systemic patterns
&lt;/h2&gt;

&lt;p&gt;The most interesting findings weren't individual metrics — they were correlations &lt;em&gt;between&lt;/em&gt; signal families:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;CI duration &amp;lt;-&amp;gt; files touched&lt;/strong&gt;: When commit size increases, build times increase non-linearly. This correlation appeared in &lt;strong&gt;all 10 repos&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deployment cadence &amp;lt;-&amp;gt; code dispersion&lt;/strong&gt;: When releases slow down, changes spread wider. Found in &lt;strong&gt;8/10 repos&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dependency changes &amp;lt;-&amp;gt; change locality&lt;/strong&gt;: When dependencies change, subsequent code changes tend to be less focused. Found in &lt;strong&gt;7/10 repos&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These cross-family patterns are invisible if you only monitor one signal family (just CI, or just git). You need the full picture.&lt;/p&gt;
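&lt;p&gt;Checking one such pairing is a one-function job once you have both series. A minimal sketch using Pearson correlation — the series here are made up for illustration, and EE's correlation logic may window and weight differently:&lt;/p&gt;

```python
# Correlate two signal families: files touched per commit vs CI minutes.
from math import sqrt

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

files_touched = [3, 8, 5, 40, 12, 90]
ci_minutes    = [2, 4, 3, 19, 6, 41]
print(f"r = {pearson(files_touched, ci_minutes):.2f}")  # strongly positive
```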

&lt;h2&gt;
  
  
  What this means for AI-assisted development
&lt;/h2&gt;

&lt;p&gt;Google's DORA research found that AI increases throughput but decreases stability. Our findings suggest &lt;em&gt;why&lt;/em&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;AI generates larger commits&lt;/strong&gt; — more files touched per change, increasing CI load&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI follows templates&lt;/strong&gt; — co-change novelty drops, creating repetitive patterns&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI doesn't respect cadence&lt;/strong&gt; — large batch changes break release rhythm&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The drift is gradual&lt;/strong&gt; — no single commit looks wrong, but the aggregate pattern shifts&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The fix isn't to stop using AI tools. It's to monitor the process signals they affect. The same way you'd monitor application performance after a deployment, you should monitor development process patterns after adopting AI coding tools.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's next
&lt;/h2&gt;

&lt;p&gt;This is the first in a series. In upcoming posts, I'll publish detailed case studies of individual repos (with permission from maintainers where applicable) and dive deeper into specific patterns — like how dependency drift predicts deployment instability, and what "healthy" drift patterns look like versus problematic ones.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try it yourself
&lt;/h2&gt;

&lt;p&gt;Evolution Engine is open source. Install it and run on any repo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;evolution-engine
evo analyze /path/to/your/repo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The tool generates an interactive HTML report with all findings, plus an investigation prompt you can paste into any AI assistant for root cause analysis — so your AI tools can help diagnose the drift patterns they create.&lt;/p&gt;

&lt;p&gt;All analysis is local and statistical. No code leaves your machine. No AI APIs are called.&lt;/p&gt;

&lt;p&gt;GitHub: &lt;a href="https://github.com/alpsla/evolution-engine" rel="noopener noreferrer"&gt;github.com/alpsla/evolution-engine&lt;/a&gt;&lt;br&gt;
Website: &lt;a href="https://codequal.dev" rel="noopener noreferrer"&gt;codequal.dev&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I built this. &lt;a href="https://github.com/alpsla/evolution-engine" rel="noopener noreferrer"&gt;Evolution Engine&lt;/a&gt; is open source — dual-licensed: CLI and adapters are MIT, core engine is BSL 1.1 (converts to MIT in 2029). Happy to answer questions in the comments.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>devops</category>
      <category>opensource</category>
      <category>productivity</category>
    </item>
    <item>
      <title>AI Writes Your Code. Who Watches for Drift?</title>
      <dc:creator>Codequal</dc:creator>
      <pubDate>Tue, 10 Mar 2026 13:31:22 +0000</pubDate>
      <link>https://forem.com/codequal/we-built-a-drift-detector-for-ai-assisted-development-591j</link>
      <guid>https://forem.com/codequal/we-built-a-drift-detector-for-ai-assisted-development-591j</guid>
      <description>&lt;h2&gt;
  
  
  The problem
&lt;/h2&gt;

&lt;p&gt;AI coding tools are incredible — until they quietly drift off course.&lt;/p&gt;

&lt;p&gt;You're using Cursor, Copilot, or Claude Code. The code looks fine. Tests pass. But over a few commits, subtle shifts accumulate: more files touched per commit than usual, dependency trees growing unexpectedly, CI times creeping up.&lt;/p&gt;

&lt;p&gt;By the time you notice, the damage has compounded across dozens of commits.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Evolution Engine does
&lt;/h2&gt;

&lt;p&gt;Evolution Engine is a local-first CLI that monitors your SDLC signals and flags statistical anomalies:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Git patterns&lt;/strong&gt; — file dispersion, change locality, co-change novelty&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CI pipelines&lt;/strong&gt; — duration trends, failure patterns&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dependencies&lt;/strong&gt; — count changes, depth shifts&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deployments&lt;/strong&gt; — release cadence, prerelease patterns&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Testing&lt;/strong&gt; — failure rates, skip rates, suite duration&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Coverage&lt;/strong&gt; — line and branch rate changes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When a metric deviates significantly from your project's baseline, EE raises an advisory — not a bug report, a &lt;em&gt;drift alarm&lt;/em&gt;.&lt;/p&gt;
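&lt;p&gt;The advisory idea is simple in spirit: compare a metric against the project's own baseline and speak up only on a large deviation. A hedged sketch — the 3-sigma threshold and function shape are mine, not EE's API:&lt;/p&gt;

```python
# Emit a drift advisory (not a failure) when a metric deviates far
# from its historical baseline.
from statistics import mean, stdev

def drift_advisory(value, history, threshold=3.0):
    mu, sigma = mean(history), stdev(history)
    z = (value - mu) / sigma if sigma else 0.0
    if z > threshold:
        return f"drift advisory: {value} is {z:.1f} sigma above baseline"
    return None  # within normal variation: stay quiet

print(drift_advisory(210, [98, 102, 95, 101, 99, 104, 97]))
```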

&lt;h2&gt;
  
  
  No AI APIs required
&lt;/h2&gt;

&lt;p&gt;This was a deliberate design choice. Your code never leaves your machine. EE does pure statistical analysis locally.&lt;/p&gt;

&lt;p&gt;When you want deeper investigation, EE generates a structured prompt you can paste into &lt;em&gt;your own&lt;/em&gt; AI tool — ChatGPT, Claude, Cursor, whatever you trust. You control what leaves your machine.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to use it
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;evolution-engine
&lt;span class="nb"&gt;cd &lt;/span&gt;your-project
evo analyze &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. EE builds a baseline from your git history and flags deviations.&lt;/p&gt;

&lt;h3&gt;
  
  
  As a GitHub Action
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;alpsla/evolution-engine@v1&lt;/span&gt;
  &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;github-token&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.GITHUB_TOKEN }}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  As a git hook
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;evo init &lt;span class="nt"&gt;--hooks&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Open source
&lt;/h2&gt;

&lt;p&gt;Evolution Engine is available on &lt;a href="https://pypi.org/project/evolution-engine/" rel="noopener noreferrer"&gt;PyPI&lt;/a&gt; and &lt;a href="https://github.com/alpsla/evolution-engine" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;. Git analysis is free — no account, no license key needed.&lt;/p&gt;

&lt;p&gt;Would love feedback from the community. What signals matter most in your workflow?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://codequal.dev/?utm_source=devto" rel="noopener noreferrer"&gt;Website&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/alpsla/evolution-engine" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/marketplace/actions/evolution-engine-analyze" rel="noopener noreferrer"&gt;GitHub Marketplace&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>devops</category>
      <category>vibecoding</category>
      <category>aiops</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
