<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: bukin</title>
    <description>The latest articles on Forem by bukin (@bukinator).</description>
    <link>https://forem.com/bukinator</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3852222%2Fac2078b2-592f-4a3d-ae08-f4ad841a5b23.png</url>
      <title>Forem: bukin</title>
      <link>https://forem.com/bukinator</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/bukinator"/>
    <language>en</language>
    <item>
      <title>Unpopular Opinion: Why Shift Left Testing on Large Complex Systems Is Not Working</title>
      <dc:creator>bukin</dc:creator>
      <pubDate>Sun, 12 Apr 2026 08:40:59 +0000</pubDate>
      <link>https://forem.com/bukinator/-unpopular-opinion-why-shift-left-testing-on-large-complex-systems-is-not-working-4hcp</link>
      <guid>https://forem.com/bukinator/-unpopular-opinion-why-shift-left-testing-on-large-complex-systems-is-not-working-4hcp</guid>
      <description>&lt;h2&gt;Introduction&lt;/h2&gt;

&lt;p&gt;"Shift left" has become one of the most repeated mantras in modern software engineering. The idea is elegant in its simplicity: move testing earlier in the development lifecycle, catch bugs before they become expensive, and empower developers to own quality from the first line of code. In theory, it is a compelling proposition. In practice, on large and complex systems, it is quietly failing — and the industry is not talking about it enough.&lt;/p&gt;




&lt;h2&gt;The Promise vs. The Reality&lt;/h2&gt;

&lt;p&gt;Shift left testing emerged from Agile and DevOps movements that championed speed, autonomy, and continuous feedback loops. Larry Smith, who coined the term in 2001, described it as a way to address defects when the cost of fixing them is lowest [1]. Early adopters in small, greenfield teams reported impressive results — faster pipelines, fewer production incidents, and happier developers.&lt;/p&gt;

&lt;p&gt;But the enterprise context is fundamentally different. Large complex systems — think banking platforms, aerospace software, distributed healthcare infrastructure, or telecommunications networks — carry layers of legacy dependencies, regulatory constraints, and emergent behaviors that unit tests and static analysis tools simply cannot capture.&lt;/p&gt;




&lt;h2&gt;Why Shift Left Breaks Down at Scale&lt;/h2&gt;

&lt;h3&gt;1. Unit Tests Cannot Model System-Level Emergent Behavior&lt;/h3&gt;

&lt;p&gt;Complex systems are defined by emergence — behaviors that arise from the interaction of components, not from the components themselves. A microservice that passes 100% of its unit tests can still bring down an entire platform when it interacts unexpectedly with another service under real load conditions. Research from Google's Site Reliability Engineering team highlights that the majority of production failures in distributed systems involve multi-component interactions that are invisible at the unit level [2].&lt;/p&gt;

&lt;p&gt;Shift left tooling is optimized for the component level. It was never designed to reason about the whole.&lt;/p&gt;
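&lt;p&gt;To make the gap concrete, here is a toy sketch (hypothetical code, not drawn from any real system): each function below passes its own unit tests, yet the defect lives in their interaction.&lt;/p&gt;

```python
# Hypothetical sketch: two components, each correct by its own tests,
# whose combination still fails.

def format_amount(cents):
    # "Service A": renders integer cents as a display string, e.g. 1250 -> "12.50".
    return f"{cents // 100}.{cents % 100:02d}"

def parse_amount(text):
    # "Service B": expects plain integer cents on the wire.
    return int(text)

# Unit level: both components pass their tests in isolation.
assert format_amount(1250) == "12.50"
assert parse_amount("1250") == 1250

# System level: feeding A's output to B raises ValueError, a failure
# invisible to either component's own test suite.
try:
    parse_amount(format_amount(1250))
    crossed = True
except ValueError:
    crossed = False
assert crossed is False
```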

&lt;h3&gt;2. The Cognitive Load Problem Is Real&lt;/h3&gt;

&lt;p&gt;Shifting quality responsibility to developers sounds empowering. In reality, on complex systems, it creates unsustainable cognitive load. Studies on developer productivity show that context switching between writing features and maintaining comprehensive test suites significantly degrades output quality over time [3]. On large platforms with hundreds of interdependent services, the testing surface area is simply too vast for individual contributors to own meaningfully.&lt;/p&gt;

&lt;p&gt;The result is shallow tests that satisfy coverage metrics without providing genuine safety signals — what Martin Fowler calls "testing theater" [4].&lt;/p&gt;

&lt;h3&gt;3. Legacy Architecture Resists Shift Left by Design&lt;/h3&gt;

&lt;p&gt;A significant portion of complex enterprise systems are not greenfield. They are built on top of decades of accumulated architecture — mainframes, monoliths, and tightly-coupled components that were never designed for testability. Retrofitting shift left practices onto these systems requires extraordinary investment, and the return is often marginal. A 2022 survey by DORA (DevOps Research and Assessment) found that organizations with high levels of technical debt saw diminishing returns from shift left initiatives compared to those starting fresh [5].&lt;/p&gt;

&lt;h3&gt;4. Compliance and Regulatory Testing Cannot Shift Left&lt;/h3&gt;

&lt;p&gt;In regulated industries — healthcare, finance, aviation — many testing activities are mandated to occur at specific stages of the delivery lifecycle. FDA validation requirements, DO-178C airborne software standards, and PCI-DSS compliance frameworks all prescribe testing phases that are incompatible with a purely shift-left model. Attempting to compress these into earlier phases does not eliminate the requirement; it only creates duplicate effort and compliance risk [6].&lt;/p&gt;

&lt;h3&gt;5. False Confidence Is More Dangerous Than No Confidence&lt;/h3&gt;

&lt;p&gt;Perhaps the most underappreciated risk is the illusion of coverage. When teams invest heavily in shift left tooling — linters, SAST scanners, unit tests, contract tests — there is a natural tendency to trust the signal. On complex systems, this trust is misplaced. High test coverage does not equate to high confidence in system behavior. Teams that have "passed" every shift left gate have still experienced catastrophic production failures, because the failure modes were in the spaces between their tests, not inside them.&lt;/p&gt;
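&lt;p&gt;A minimal sketch of the illusion, using hypothetical code: a single test can report 100% line coverage while a defect survives in the inputs it never exercises.&lt;/p&gt;

```python
# Hypothetical sketch: full line coverage, unfull confidence.

def to_cents(amount):
    # Converts a "12.34"-style string to integer cents.
    whole, frac = amount.split(".")
    return int(whole) * 100 + int(frac)

# This single test executes every line, so coverage reports 100%.
assert to_cents("12.34") == 1234

# But the function mishandles one-digit fractions: "12.5" should be
# 1250 cents, yet it returns 1205. Coverage said nothing about this.
assert to_cents("12.5") == 1205
```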




&lt;h2&gt;What Should Replace It?&lt;/h2&gt;

&lt;p&gt;This is not an argument for abandoning shift left principles entirely. Early feedback loops, developer-owned quality, and automated checks remain valuable — but they must be positioned honestly as one layer in a defense-in-depth quality strategy, not as a silver bullet.&lt;/p&gt;

&lt;p&gt;For large complex systems, organizations should consider:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Investing in integration and chaos engineering&lt;/strong&gt; at the system level, not just component level.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Maintaining dedicated QA expertise&lt;/strong&gt; with deep system knowledge that cannot be distributed to individual developers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Embracing staged testing&lt;/strong&gt; that acknowledges different defect types surface at different stages of complexity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Measuring outcomes, not coverage&lt;/strong&gt; — production reliability and mean time to recovery are better signals than test coverage percentages.&lt;/li&gt;
&lt;/ul&gt;
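
&lt;p&gt;As a sketch of the last point, an outcome signal like mean time to recovery falls straight out of incident records (the timestamps below are invented for illustration):&lt;/p&gt;

```python
# Minimal sketch: computing mean time to recovery (MTTR) from
# (start, resolved) incident timestamps. Data is hypothetical.
from datetime import datetime

incidents = [
    ("2026-03-01T09:00", "2026-03-01T09:45"),  # 45-minute outage
    ("2026-03-12T14:10", "2026-03-12T16:10"),  # 2-hour outage
]

def mttr_minutes(records):
    # Average the recovery duration across all incidents, in minutes.
    fmt = "%Y-%m-%dT%H:%M"
    total = sum(
        (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds()
        for start, end in records
    )
    return total / len(records) / 60

print(mttr_minutes(incidents))  # prints 82.5
```

Tracked over time, this number reflects how the system actually behaves in production, which is exactly what coverage percentages cannot tell you.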




&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Shift left is not wrong. It is incomplete. The software industry has a tendency to over-rotate on methodologies that work well in one context and apply them universally without scrutiny. For small, self-contained services, shift left is powerful. For large, complex, interdependent systems operating under regulatory or safety constraints, it is insufficient — and the pressure to adopt it uncritically can actively harm quality outcomes.&lt;/p&gt;

&lt;p&gt;The unpopular truth is this: some bugs can only be found late, and pretending otherwise does not make systems safer. It makes teams overconfident.&lt;/p&gt;




&lt;h2&gt;References&lt;/h2&gt;

&lt;p&gt;[1] Smith, L. (2001). &lt;em&gt;Make the Bugs Stop&lt;/em&gt;. IEEE Software, 18(5), 23–26. &lt;a href="https://doi.org/10.1109/52.951491" rel="noopener noreferrer"&gt;https://doi.org/10.1109/52.951491&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[2] Beyer, B., Jones, C., Petoff, J., &amp;amp; Murphy, N. R. (Eds.). (2016). &lt;em&gt;Site Reliability Engineering: How Google Runs Production Systems&lt;/em&gt;. O'Reilly Media. &lt;a href="https://sre.google/sre-book/introduction/" rel="noopener noreferrer"&gt;https://sre.google/sre-book/introduction/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[3] Lehtinen, T. O. A., Mäntylä, M. V., Vanhanen, J., Itkonen, J., &amp;amp; Lassenius, C. (2014). Perceived causes of software project failures – An analysis of their relationships. &lt;em&gt;Information and Software Technology&lt;/em&gt;, 56(6), 623–643. &lt;a href="https://doi.org/10.1016/j.infsof.2014.01.015" rel="noopener noreferrer"&gt;https://doi.org/10.1016/j.infsof.2014.01.015&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[4] Fowler, M. (2006). &lt;em&gt;Test Coverage&lt;/em&gt;. MartinFowler.com. &lt;a href="https://martinfowler.com/bliki/TestCoverage.html" rel="noopener noreferrer"&gt;https://martinfowler.com/bliki/TestCoverage.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[5] Forsgren, N., Smith, D., Humble, J., &amp;amp; Frazelle, J. (2022). &lt;em&gt;Accelerate: State of DevOps Report&lt;/em&gt;. DORA / Google Cloud. &lt;a href="https://dora.dev/research/2022/dora-report/" rel="noopener noreferrer"&gt;https://dora.dev/research/2022/dora-report/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[6] DO-178C: Software Considerations in Airborne Systems and Equipment Certification. (2011). RTCA, Inc. &lt;a href="https://www.rtca.org/content/do-178c" rel="noopener noreferrer"&gt;https://www.rtca.org/content/do-178c&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This article represents my opinion based on observed industry patterns and referenced research. Comments and counterarguments are welcome.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>automation</category>
      <category>testing</category>
      <category>career</category>
    </item>
    <item>
      <title>QA in chaos. How Do You Test Anyway?</title>
      <dc:creator>bukin</dc:creator>
      <pubDate>Mon, 30 Mar 2026 20:01:35 +0000</pubDate>
      <link>https://forem.com/bukinator/qa-in-chaos-how-do-you-test-anyway-2dpm</link>
      <guid>https://forem.com/bukinator/qa-in-chaos-how-do-you-test-anyway-2dpm</guid>
      <description>&lt;p&gt;If you've been in QA or SDET work for more than a year, you know the job description and the actual job are two completely different things.&lt;/p&gt;

&lt;p&gt;The description says: "Ensure software quality through systematic testing."&lt;/p&gt;

&lt;p&gt;The reality: Slack ping at 9am — prod is down. Your staging environment hasn't worked since Tuesday. Someone deployed custom configs without telling anyone. The test data is corrupted.&lt;/p&gt;

&lt;p&gt;Welcome to QA in chaos. &lt;/p&gt;

&lt;p&gt;I've spent 8+ years doing this in fintech — the kind where a bug in a payment calculation system means someone loses actual money. And after years of living in this beautiful mess, I started wondering: is it just me, or is everyone running on caffeine and controlled panic?&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>testing</category>
      <category>qa</category>
    </item>
  </channel>
</rss>
