<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Leon Pennings</title>
    <description>The latest articles on Forem by Leon Pennings (@leonpennings).</description>
    <link>https://forem.com/leonpennings</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3596884%2Feba64cf4-e1c3-4a53-8a5f-6a340619080e.JPG</url>
      <title>Forem: Leon Pennings</title>
      <link>https://forem.com/leonpennings</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/leonpennings"/>
    <language>en</language>
    <item>
      <title>Software Engineering Is Living The Golden Hammer Antipattern — And Everyone Loves It</title>
      <dc:creator>Leon Pennings</dc:creator>
      <pubDate>Tue, 14 Apr 2026 05:25:25 +0000</pubDate>
      <link>https://forem.com/leonpennings/software-engineering-is-living-the-golden-hammer-antipattern-and-everyone-loves-it-3e83</link>
      <guid>https://forem.com/leonpennings/software-engineering-is-living-the-golden-hammer-antipattern-and-everyone-loves-it-3e83</guid>
      <description>&lt;p&gt;Why the industry simultaneously agrees with Brooks and ignores him — and why it's structured to stay that way&lt;/p&gt;

&lt;h2&gt;The Paradox Nobody Talks About&lt;/h2&gt;

&lt;p&gt;Ask any experienced software engineer about essential versus accidental complexity. They will nod. Ask them about Brooks' central argument in No Silver Bullet — that the hard part of software is the conceptual work of understanding the problem, not the mechanical work of expressing it in code. They will nod again.&lt;/p&gt;

&lt;p&gt;Then watch what happens when the next project starts.&lt;/p&gt;

&lt;p&gt;Someone opens Spring Initializr. Someone proposes microservices. Someone puts Kubernetes in the architecture diagram before a single domain concept has been named. The technology stack is decided in the first week. The business domain is still being understood in month six.&lt;/p&gt;

&lt;p&gt;Nobody in that room forgot Brooks. The choice was never really about Brooks.&lt;/p&gt;

&lt;p&gt;That is the paradox this essay is about. Not that the industry is ignorant of the problem — but that it is structured to reproduce it perfectly, indefinitely, at enormous and invisible cost.&lt;/p&gt;

&lt;h2&gt;What Brooks Actually Said&lt;/h2&gt;

&lt;p&gt;In 1975, Frederick Brooks published The Mythical Man-Month, based on his experience managing the development of OS/360 at IBM. The project was late, over budget, and initially didn't work particularly well. Brooks spent the rest of his career trying to understand why.&lt;/p&gt;

&lt;p&gt;The insight most people remember is the coordination problem. Adding people to a late software project makes it later. Nine women cannot make a baby in one month. Communication overhead scales quadratically. You cannot parallelise work that is fundamentally interdependent. Everyone knows this. It shows up in every post-mortem, every engineering blog, every conference talk about why the rewrite took three years instead of six months.&lt;/p&gt;

&lt;p&gt;What people remember less clearly is the deeper argument Brooks made in his 1986 essay No Silver Bullet, later added to the anniversary edition of the book.&lt;/p&gt;

&lt;p&gt;Brooks drew a distinction between two kinds of complexity in software. Essential complexity is inherent to the problem itself — the rules, the relationships, the invariants, the genuine difficulty of the business domain being modelled. Accidental complexity is everything else — the tools, the frameworks, the infrastructure, the deployment machinery, the coordination overhead introduced by the way we choose to build systems.&lt;/p&gt;

&lt;p&gt;His claim was precise and devastating: there is no silver bullet because the hard part of software is essential complexity, and no tool or methodology can compress it. You cannot automate your way out of needing to understand the problem. You cannot framework your way past the conceptual work.&lt;/p&gt;

&lt;p&gt;Then he said something that was either ignored or misunderstood: the industry's persistent belief that the next tool, the next methodology, the next architectural pattern will finally solve the problem of software difficulty is itself the symptom of failing to make this distinction.&lt;/p&gt;

&lt;p&gt;That was 1986. Since then the industry has produced structured programming, object orientation, UML, SOA, agile, microservices, event-driven architecture, CQRS, cloud-native development, and AI-assisted coding.&lt;/p&gt;

&lt;p&gt;Each one arrived as a silver bullet. Each one was greeted with the same enthusiasm. Each one was applied before the domain was understood.&lt;/p&gt;

&lt;p&gt;Brooks' own framework predicted every step of it.&lt;/p&gt;

&lt;h2&gt;The Golden Hammer The Industry Forgot To Question&lt;/h2&gt;

&lt;p&gt;There is a well-known antipattern in software called the golden hammer: the tendency to over-apply a familiar tool regardless of whether it fits the problem. The name echoes Maslow's observation that if all you have is a hammer, everything looks like a nail.&lt;/p&gt;

&lt;p&gt;The modern software industry does not have one golden hammer. It has a coordinated set of them — and they are chosen as a bundle, before the problem is understood, in almost every project that starts today.&lt;/p&gt;

&lt;p&gt;The bundle looks like this: a popular framework for the application layer, microservices for decomposition, an event-driven or REST-based communication model, a cloud platform for deployment, and Kubernetes for orchestration. The specific tools vary by organisation and year. The pattern does not vary.&lt;/p&gt;

&lt;p&gt;What makes this particular golden hammer different from the textbook antipattern is a crucial property: it is unfalsifiable.&lt;/p&gt;

&lt;p&gt;A normal golden hammer eventually gets retired. Something demonstrates it was the wrong tool — the screw still won't turn, the nail bent, the joint failed. There is a moment of visible failure that creates pressure to reconsider.&lt;/p&gt;

&lt;p&gt;The modern software stack has no such moment. If the system runs in production, the stack gets the credit. If the system struggles — if changes are expensive, if the team grows endlessly, if understanding the codebase requires months of archaeology — the blame goes to requirements changing, team turnover, business complexity, or simply the nature of software. The stack is never in the dock.&lt;/p&gt;

&lt;p&gt;This is not an accident. It is a structural property of how software success is defined. A system running in production passes the only test anyone applies. There is no test for whether it could have been built at a fraction of the cost with a fraction of the complexity. Nobody built that version. Nobody ever does.&lt;/p&gt;

&lt;p&gt;The golden hammer persists not because people are lazy or ignorant — but because the thing that should replace it is invisible to every organisational instrument the industry has built.&lt;/p&gt;

&lt;h2&gt;Agile Was The Correction. Then It Was Captured.&lt;/h2&gt;

&lt;p&gt;In 2001, the Agile Manifesto proposed something that was, underneath its somewhat vague language, a precise epistemological claim.&lt;/p&gt;

&lt;p&gt;Software development is fundamentally a process of learning. You do not fully understand the domain at the start. You build a version of your understanding, expose it to reality — specifically to the domain experts who live in that business every day — and you refine it. Each iteration is not primarily a delivery mechanism. It is a question: did we understand the domain correctly?&lt;/p&gt;

&lt;p&gt;The working software at the end of a sprint is not the point. It is the test. The test of whether your conceptual model of the business — your understanding of what the domain actually is, what rules govern it, what concepts belong together — corresponds to reality. Domain experts are not approving features. They are stress-testing your model.&lt;/p&gt;

&lt;p&gt;That is what Agile was. A mechanism for continuously refining essential understanding through structured contact with reality.&lt;/p&gt;

&lt;p&gt;That is not what Agile became.&lt;/p&gt;

&lt;p&gt;What Agile became was a process for efficiently transcribing user stories into framework components. Two-week sprints. Velocity points. Definition of done. Backlog refinement. The ceremonies survived. The epistemology was quietly discarded.&lt;/p&gt;

&lt;p&gt;And then CI/CD completed the transformation.&lt;/p&gt;

&lt;p&gt;Continuous integration and continuous deployment are genuinely valuable practices for managing the operational complexity of releasing software. But they introduced a subtle and devastating redefinition of what "production ready" means.&lt;/p&gt;

&lt;p&gt;Before, production readiness was at least nominally connected to domain correctness — does this system correctly implement the business? After, production readiness means the pipeline is green. Tests pass. Build succeeds. Deploy proceeds.&lt;/p&gt;

&lt;p&gt;These are not the same question. A passing test suite validates that the code does what the code was written to do. It says nothing about whether the code was written to do the right thing. Whether the domain concepts are correctly identified. Whether the invariants are correctly enforced. Whether the model reflects the business reality or merely the user story that described one interaction with it.&lt;/p&gt;

&lt;p&gt;You can have one hundred percent test coverage and zero domain correctness. The pipeline will be green. The system will go to production. The retrospective will be positive.&lt;/p&gt;
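&lt;p&gt;The gap is easy to make concrete. The following sketch is hypothetical (the class, method, and numbers are invented for illustration): the test and the code agree with each other perfectly, so the pipeline goes green. Yet suppose the actual business rule caps discounts at 10 percent, not 50. Full coverage, zero domain correctness.&lt;/p&gt;

```java
// Hypothetical example. The developer believed the discount cap was 50%;
// suppose the business rule actually says 10%. The code faithfully
// implements the wrong belief, and the test faithfully validates the code.
public class DiscountExample {

    static double applyDiscount(double price, double discount) {
        double capped = Math.min(discount, 0.50); // wrong cap, correctly coded
        return price * (1.0 - capped);
    }

    public static void main(String[] args) {
        // The "test": it asserts that the code does what the code was
        // written to do. It passes. The pipeline is green.
        double result = applyDiscount(100.0, 0.40);
        if (Math.abs(result - 60.0) > 1e-9) {
            throw new AssertionError("expected 60.0, got " + result);
        }
        System.out.println("All tests passed");
    }
}
```

&lt;p&gt;Nothing in this loop ever consults the business. Only a domain expert looking at the rule itself, rather than at the test report, can see that the cap is wrong.&lt;/p&gt;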

&lt;p&gt;The feedback loop Agile promised — between domain experts and the conceptual model being built — was replaced by a feedback loop between the code and its own tests. We optimised the loop while removing the thing it was supposed to validate.&lt;/p&gt;

&lt;h2&gt;The Sociological Lock-In&lt;/h2&gt;

&lt;p&gt;So far this looks like an intellectual failure. Engineers and organisations that know better making choices they shouldn't. A problem of discipline or culture that better education might eventually correct.&lt;/p&gt;

&lt;p&gt;It is not. It is structural. And the structure actively selects against correction.&lt;/p&gt;

&lt;p&gt;Consider how a software project begins. Before a single domain conversation happens, several things must occur. The project must be staffed. That requires a job posting. A job posting requires a technology stack. The project must be estimated. Estimation requires a known architecture. The kickoff deck must be prepared. The kickoff deck needs something in the architecture diagram.&lt;/p&gt;

&lt;p&gt;All of these organisational necessities demand a technology decision at the precise moment when the only intellectually honest answer is: we don't know yet. We haven't understood the domain.&lt;/p&gt;

&lt;p&gt;That answer is organisationally impossible to give. So the stack gets chosen. Not out of ignorance. Not out of laziness. Out of genuine organisational necessity. The machinery of project initiation requires it.&lt;/p&gt;

&lt;p&gt;And once the stack is chosen, it shapes everything that follows. The hiring criteria. The team composition. The onboarding process. The architecture decisions. The decomposition strategy. The system that emerges is not primarily a model of the business domain. It is primarily an expression of the technology choices made before the domain was understood.&lt;/p&gt;

&lt;p&gt;This is not the worst part.&lt;/p&gt;

&lt;p&gt;The worst part is what happens at the hiring stage.&lt;/p&gt;

&lt;p&gt;Conceptual thinking — the ability to reason about what a business concept actually is, what it should own, what it should never be responsible for, where the real boundaries lie — is extremely difficult to assess in an interview. It requires time, domain context, and a level of conversation that most hiring processes cannot accommodate. It does not show up cleanly on a CV.&lt;/p&gt;

&lt;p&gt;Tool fluency shows up immediately. Spring Boot, Kubernetes, Kafka, event-driven architecture — these are expressible, searchable, assessable. You can screen for them in thirty seconds. You can test them in a one-hour technical interview. You can verify them with a take-home assignment.&lt;/p&gt;

&lt;p&gt;So organisations hire for tool fluency. Not because they don't value conceptual thinking. Because tool fluency is what their hiring process can see.&lt;/p&gt;

&lt;p&gt;The consequence is a team that reaches for the familiar tools. The team ships systems using those tools. Those systems run in production. The hiring criteria get validated. The loop closes.&lt;/p&gt;

&lt;p&gt;Engineers who push back on premature technology decisions get filtered out at the CV screen, outvoted in the kickoff meeting, or labelled as impractical idealists who don't understand how real projects work. The selection pressure is quiet, consistent, and almost entirely invisible.&lt;/p&gt;

&lt;p&gt;When everyone hired thinks the same way, the golden hammer stops looking like a hammer. It looks like engineering.&lt;/p&gt;

&lt;h2&gt;The Cost Nobody Can See&lt;/h2&gt;

&lt;p&gt;Here is the claim that cannot be proven and cannot be dismissed.&lt;/p&gt;

&lt;p&gt;A system built with a full modern distributed stack — framework, microservices, cloud infrastructure, orchestration — could in many cases have been built far more simply, maintained by a fraction of the team, and been more correct, more stable, and more responsive to business change.&lt;/p&gt;

&lt;p&gt;That statement cannot be verified. Because the simpler version was never built. Nobody built it. The team that chose the distributed architecture never built the alternative to compare against. The organisation that approved the budget never saw a competing proposal. The engineers who maintained the system never worked on a well-modelled equivalent.&lt;/p&gt;

&lt;p&gt;This is not a gap in the data. It is the mechanism of the problem.&lt;/p&gt;

&lt;p&gt;Brooks identified it precisely: most systems are built only once. There is no second system built with different assumptions, run for five years, and compared on total cost of ownership, ease of change, and conceptual correctness. The counterfactual does not exist. Therefore the cost of the wrong choice is permanently invisible.&lt;/p&gt;

&lt;p&gt;And here is what makes it truly unfalsifiable: the entire industry is paying the same inflated price. There is no reference point. When every team uses the same stack, incurs the same coordination overhead, grows to the same size, and struggles with the same maintenance costs — those costs stop being visible as costs. They become the definition of what software costs. Normal and wasteful become indistinguishable.&lt;/p&gt;

&lt;p&gt;But the difference is not just in cost. It is in what the work actually consists of every single day.&lt;/p&gt;

&lt;p&gt;In a team organised around accidental complexity, the daily work is about the technology. Configuring services. Connecting components. Managing framework upgrades. Fixing pipeline failures. Debugging integration issues. Updating dependencies. Understanding the codebase means knowing which service owns which endpoint and how the data flows between them. The business domain is somewhere in there, translated into controllers and DTOs and event schemas, but it is not what the day is about.&lt;/p&gt;

&lt;p&gt;In a team organised around essential complexity, the daily work is about the domain. Which concept owns this responsibility. What this rule actually means. What the domain expert said yesterday that changed how they understand the model. The implementation follows from that understanding — and because the model is clear, the implementation is the smaller part of the day, not the larger.&lt;/p&gt;

&lt;p&gt;The difference is visible — immediately and without any instrumentation — in the daily standup.&lt;/p&gt;

&lt;p&gt;In one team, the language is technical. Spring, Kafka, the pipeline, the service, the endpoint, the migration. Progress is reported in terms of tickets and story completion. The word "business" appears occasionally, usually in the phrase "business requirement."&lt;/p&gt;

&lt;p&gt;In the other team, the language is conceptual. The Order, the Invoice, the Payment, what a Shipment is responsible for, whether a Client and a User are really the same thing. Technology appears occasionally, usually briefly, because the implementation of a well-understood concept is rarely the hard part.&lt;/p&gt;

&lt;p&gt;You do not need metrics or cost analyses to know which team is working on the right problems. You need one standup.&lt;/p&gt;

&lt;p&gt;If every item on the standup is about accidental complexity — go back. Ask what the essential complexity actually demands. Then and only then choose the technology that serves it.&lt;/p&gt;

&lt;p&gt;If every garage in the world were built to the standard of a luxury hotel, nobody would know a garage could cost less. The price would simply be what it is. The inflated standard would be the only standard anyone had ever seen.&lt;/p&gt;

&lt;p&gt;That is where the software industry is today. Paying Burj Al Arab prices for a garage that needed to store a jar of paint. And maintaining a universal, genuine, unforced consensus that this is simply what garages cost.&lt;/p&gt;

&lt;h2&gt;Two Rules That Cost Nothing&lt;/h2&gt;

&lt;p&gt;Most prescriptions for this problem are expensive. Hire differently. Retrain your engineers. Adopt a new methodology. Bring in consultants. Run workshops.&lt;/p&gt;

&lt;p&gt;These are not wrong. But they require budget, time, and organisational will that most teams do not have in the moment a project starts.&lt;/p&gt;

&lt;p&gt;There are two rules that cost nothing, require no external help, and can be applied starting tomorrow.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rule one: do not choose technology upfront.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Technology enters the project when the domain demands it, not when the kickoff deck needs an architecture diagram. The first weeks of a project produce domain understanding — what the business actually is, what concepts exist in it, what rules govern them. Technology choices follow from that understanding, added only when essential complexity makes them necessary, and only to the degree that it does.&lt;/p&gt;

&lt;p&gt;This feels impossible in most organisations. The job posting needs a stack. The estimate needs an architecture. The kickoff slide needs something in the boxes.&lt;/p&gt;

&lt;p&gt;Those are real constraints. They are also exactly the organisational machinery that inverts Brooks before the first line of code is written. Recognising that the machinery is the problem is the first step toward not letting it make the decision by default.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rule two: standups are about business concepts only, never technology.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is the litmus test made into a practice. If someone says "I'm working on the Kafka consumer," the immediate question is: what business concept does that serve, and does that business concept actually require it? If the answer is unclear, the technology choice is premature. If the answer is clear, state the business concept first and let the technology be the footnote it should be.&lt;/p&gt;

&lt;p&gt;A standup where every item is about services, frameworks, pipelines, and endpoints is a standup where the team has been captured by accidental complexity. It will feel entirely normal. It will sound like engineering. The terminology will be confident and precise.&lt;/p&gt;

&lt;p&gt;But the business domain — the essential complexity that justifies the system's existence — will be invisible. And a team that cannot talk about the business in its daily standup is a team that is not working on the business. It is working on the technology that was supposed to serve it.&lt;/p&gt;

&lt;p&gt;These two rules do not solve the problem entirely. The sociological pressures remain. The hiring pipelines remain. The organisational machinery remains. But they create two moments — one at the start of a project, one every single day — where the inversion becomes visible. Where someone can point at the standup and say: we have not mentioned a business concept in three days. What are we actually building?&lt;/p&gt;

&lt;p&gt;That question, asked consistently, is more powerful than any methodology.&lt;/p&gt;

&lt;h2&gt;Closing&lt;/h2&gt;

&lt;p&gt;The most expensive software is the software everyone agrees is fine.&lt;/p&gt;

&lt;p&gt;It runs in production. The pipeline is green. The team is stable. The architecture is recognisable. The job postings write themselves. The onboarding takes three months instead of three days, but that is just how software works. The changes take longer than they should, but the domain is complex. The team keeps growing, but the system keeps growing too. The costs keep rising, but software is expensive.&lt;/p&gt;

&lt;p&gt;None of this is inevitable. All of it is a consequence of a single inversion: accidental complexity chosen before essential complexity is understood. A choice made not out of ignorance, but out of organisational necessity, sociological pressure, and the permanent invisibility of the alternative.&lt;/p&gt;

&lt;p&gt;Brooks saw it in 1975. Named it clearly. Watched the industry quote him extensively and change nothing.&lt;/p&gt;

&lt;p&gt;The golden hammer is not a mistake. It is the product. The template is not a shortcut. It is the destination. The assembly is not the means. It has become the craft.&lt;/p&gt;

&lt;p&gt;Two rules. No technology upfront. Standups about the business only.&lt;/p&gt;

&lt;p&gt;They will feel radical. They are just Brooks, applied.&lt;/p&gt;

&lt;p&gt;Everyone agrees with Brooks.&lt;/p&gt;

&lt;p&gt;Then the next project starts.&lt;/p&gt;

</description>
      <category>softwareengineering</category>
      <category>java</category>
      <category>architecture</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>Fast Onboarding of Software Engineers: The Two Learning Modes</title>
      <dc:creator>Leon Pennings</dc:creator>
      <pubDate>Fri, 10 Apr 2026 11:43:42 +0000</pubDate>
      <link>https://forem.com/leonpennings/fast-onboarding-of-software-engineers-the-two-learning-modes-52ge</link>
      <guid>https://forem.com/leonpennings/fast-onboarding-of-software-engineers-the-two-learning-modes-52ge</guid>
      <description>&lt;p&gt;There is a persistent belief in software organizations that standardizing on a single framework — Spring Boot being the popular example — makes developers interchangeable across teams. If every system is built the same way, engineers can move between projects with minimal friction.&lt;/p&gt;

&lt;p&gt;It sounds efficient. It feels scalable. It is also largely wrong — and understanding why reveals something important about how developers actually learn.&lt;/p&gt;

&lt;h2&gt;Two Ways to Learn a Codebase&lt;/h2&gt;

&lt;p&gt;There are fundamentally two modes through which a developer can come to understand a system.&lt;/p&gt;

&lt;p&gt;The first is &lt;strong&gt;comprehension-based learning&lt;/strong&gt;. The developer is walked through the core domain concepts — typically in a whiteboard session — and can then trace those exact concepts in the code. The system is legible. Understanding precedes execution.&lt;/p&gt;

&lt;p&gt;The second is &lt;strong&gt;execution-based learning&lt;/strong&gt;. The developer runs the system, breaks it, watches it, traces calls through layers. Understanding is assembled gradually from observed behavior. This is the default mode for procedural and layered architectures.&lt;/p&gt;

&lt;p&gt;The practical consequence of this difference is not marginal. Comprehension-based onboarding can bring a developer to effective contribution within &lt;strong&gt;hours to days&lt;/strong&gt;. Execution-based onboarding routinely takes &lt;strong&gt;weeks to months&lt;/strong&gt; before a developer can contribute without close supervision.&lt;/p&gt;

&lt;p&gt;That gap is not a matter of individual ability. It is a structural property of the codebase.&lt;/p&gt;

&lt;p&gt;Pair programming, shadowing, and extensive debugging sessions are not learning strategies in this context. They are compensations — workarounds for the absence of anything readable at the conceptual level. Organizations that rely on them have simply normalized the cost of an illegible system.&lt;/p&gt;

&lt;p&gt;Framework standardization does nothing to change this. Recognizing a controller is not the same as understanding why the endpoint exists, what constraints govern it, or what invariants must never be broken. That knowledge lives in the domain — and in most codebases, it lives nowhere at all.&lt;/p&gt;

&lt;h2&gt;What Comprehension-Based Onboarding Actually Looks Like&lt;/h2&gt;

&lt;p&gt;In a well-crafted domain model, onboarding follows a different rhythm entirely.&lt;/p&gt;

&lt;p&gt;A developer new to the system joins a whiteboard session — 30 to 60 minutes — where the core domain concepts are walked through. They then open the codebase and can trace those exact concepts in the code. Within an hour, the relationships between concepts — what governs what, what depends on what, what is allowed and what is forbidden — form a coherent picture. By the end of the first day, they can participate meaningfully in design discussions, and in many cases begin implementing new functionality.&lt;/p&gt;

&lt;p&gt;This is not aspirational. It is the direct consequence of a system that makes its intent legible.&lt;/p&gt;

&lt;p&gt;The critical insight is this: new functionality must be grounded in what a system &lt;em&gt;does&lt;/em&gt;, not in how it is written. A developer who understands the domain can reason about where new behavior belongs, what rules it must respect, and how it connects to existing concepts — without having traced a single execution path. That is what makes hours-to-days contribution possible. Without it, the developer has no foundation to build from, and execution-based exploration begins — with all the time cost that entails.&lt;/p&gt;

&lt;h2&gt;First Principles, Not Seniority&lt;/h2&gt;

&lt;p&gt;This is not about experience level. A junior developer who thinks from first principles — who reasons about what a system &lt;em&gt;is&lt;/em&gt; and what it must never do before asking how it runs — will orient just as quickly as a senior. First-principles thinking is a mode, not a career stage. It is the ability to think and talk in concepts and responsibilities, to ask the right questions before reaching for the debugger.&lt;/p&gt;

&lt;p&gt;Execution-based systems actively disadvantage this kind of thinking. There is nothing to reason from. The only available strategy is empirical — run it, break it, observe. That favors pattern recognition over understanding, and accumulated exposure over insight. It rewards engineers who are good at navigating complexity rather than those who are good at resolving it.&lt;/p&gt;

&lt;p&gt;Over time this has consequences beyond onboarding. The system comes to be understood only by those who have been exposed to it long enough — and institutional knowledge becomes a function of tenure rather than clarity.&lt;/p&gt;

&lt;h2&gt;Why Most Systems Make This Impossible&lt;/h2&gt;

&lt;p&gt;The root causes of slow onboarding are almost never the framework. They are structural.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Implicit domain knowledge.&lt;/strong&gt; Critical business rules are undocumented and embedded in conditionals, naming conventions, and historical decisions nobody questions anymore. New engineers are forced into archaeology before they can contribute. Every answer is buried somewhere in the execution history.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fragmented business logic.&lt;/strong&gt; When behavior is spread across controllers, services, and repositories, there is no single place to understand what the system enforces. Every answer requires assembling fragments from multiple layers — which means execution-based exploration is the only path available, regardless of how familiar the framework feels.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Workflow-centric design.&lt;/strong&gt; Systems modeled around flows — requests, events, pipelines — force developers to reconstruct intent from execution paths. The &lt;em&gt;what&lt;/em&gt; is buried inside the &lt;em&gt;how&lt;/em&gt;. Reading the code tells you what happens; it does not tell you why, or what must never happen.&lt;/p&gt;

&lt;p&gt;These are not framework problems. A Spring Boot application can have a rich domain model. It rarely does, because framework-driven thinking optimizes for &lt;em&gt;how we build&lt;/em&gt; rather than &lt;em&gt;what we model&lt;/em&gt; — and that trade-off silently pushes onboarding from days into months.&lt;/p&gt;

&lt;h2&gt;The Domain Model as a Table of Context&lt;/h2&gt;

&lt;p&gt;A well-structured domain model acts as a compression mechanism for complexity. Core concepts are named clearly. Invariants are enforced in one place. Relationships are explicit.&lt;/p&gt;

&lt;p&gt;This gives the codebase something more useful than a table of contents. It provides a &lt;strong&gt;table of context&lt;/strong&gt;: each concept is not just listed but anchored in meaning and relationship. A new developer does not navigate files — they navigate intent. And navigating intent is something a first-principles thinker can do quickly, regardless of how many years they have been writing code.&lt;/p&gt;

&lt;p&gt;For this to work, the code must speak the language of the business. If stakeholders say &lt;em&gt;DocumentRequest&lt;/em&gt;, the code should not say &lt;em&gt;PayloadDTO&lt;/em&gt;. When the language of the domain is reflected faithfully in the implementation, onboarding becomes a translation exercise rather than a decoding one. Translation is fast. Decoding is slow.&lt;/p&gt;
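&lt;p&gt;A minimal sketch of what that looks like in code (the class name comes from the example above; the approval rule is invented for illustration):&lt;/p&gt;

```java
// Hypothetical sketch: the business concept, named exactly as the
// business names it. A stakeholder who says "a DocumentRequest must be
// approved before it is released" can find that sentence in the code.
public class DocumentRequest {

    private boolean approved;

    public void approve() {
        this.approved = true;
    }

    public void release() {
        // The invariant lives in one place, stated in domain language.
        if (!approved) {
            throw new IllegalStateException(
                "A DocumentRequest must be approved before it is released");
        }
        System.out.println("Released");
    }

    public static void main(String[] args) {
        DocumentRequest request = new DocumentRequest();
        request.approve();
        request.release(); // prints "Released"
    }
}
```

&lt;p&gt;Compare that with a PayloadDTO whose approval flag is checked in three separate service classes: the rule still exists, but no single line of code states it, and a new developer can only discover it by tracing executions.&lt;/p&gt;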

&lt;h2&gt;A Useful Side Effect: Simpler Code&lt;/h2&gt;

&lt;p&gt;Systems with rich domain models tend to produce surprisingly simple implementations. This is not accidental.&lt;/p&gt;

&lt;p&gt;When complexity is resolved at the conceptual level — when the model clearly captures what is allowed, what is forbidden, and where behavior belongs — it does not accumulate elsewhere. There is less need for elaborate orchestration, framework configuration, or infrastructure glue.&lt;/p&gt;

&lt;p&gt;In contrast, systems that lack a strong domain model push complexity into the gaps: coordination logic spreads across components, edge cases get patched rather than modeled, and understanding the system requires tracing runtime behavior rather than reading domain logic. Infrastructure complexity grows not because it is necessary but because the domain complexity has nowhere else to go. This is precisely what makes execution-based onboarding so expensive — the system keeps revealing new layers of implicit complexity the longer you look.&lt;/p&gt;

&lt;h2&gt;The Role of Frameworks, Properly Understood&lt;/h2&gt;

&lt;p&gt;Frameworks are not without value in onboarding. They reduce setup friction, provide familiar scaffolding, and standardize infrastructure concerns. A developer who knows Spring Boot will navigate a Spring Boot project faster than a complete stranger would.&lt;/p&gt;

&lt;p&gt;But this is surface-layer familiarity. It accelerates the first few hours. It does not touch the weeks that follow.&lt;/p&gt;

&lt;p&gt;Onboarding has two layers:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Surface layer&lt;/strong&gt; — framework, build tools, deployment setup, API conventions. Fast to learn, low in durable value. Framework standardization helps here.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deep layer&lt;/strong&gt; — domain concepts, object responsibilities, business rules, architectural boundaries. This is where the weeks-to-months cost lives. Framework standardization does nothing here.&lt;/p&gt;

&lt;p&gt;Most organizations optimize the surface layer because it is visible and measurable. They neglect the deep layer, absorb the onboarding cost as a fact of life, and attribute slow ramp-up to individual developers rather than to the structure of their systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The question of onboarding speed is ultimately a question of legibility.&lt;/p&gt;

&lt;p&gt;A codebase with a well-crafted domain model is legible. A single whiteboard session on the core concepts is enough to orient a developer — because when they open the codebase, those exact concepts are right there, named and structured as explained. The session and the code reinforce each other. From that foundation, a first-principles thinker — junior or senior — can form a complete picture and begin making meaningful contributions within hours to days.&lt;/p&gt;

&lt;p&gt;A codebase without one is an execution environment. You learn it by running it, breaking it, and asking the person who wrote it. That process takes weeks. Often months. And it repeats itself every time a new engineer joins.&lt;/p&gt;

&lt;p&gt;If you want engineers to move between projects effectively, do not standardize the tools. The tools are not the barrier.&lt;/p&gt;

&lt;p&gt;Standardize the clarity of the domain. Make systems understandable rather than developers interchangeable.&lt;/p&gt;

&lt;p&gt;That is the real multiplier.&lt;/p&gt;

</description>
      <category>softwareengineering</category>
      <category>java</category>
      <category>programming</category>
      <category>architecture</category>
    </item>
    <item>
      <title>When Distribution Becomes a Substitute for Design — and Fails</title>
      <dc:creator>Leon Pennings</dc:creator>
      <pubDate>Tue, 07 Apr 2026 08:41:51 +0000</pubDate>
      <link>https://forem.com/leonpennings/when-distribution-becomes-a-substitute-for-design-and-fails-5gec</link>
      <guid>https://forem.com/leonpennings/when-distribution-becomes-a-substitute-for-design-and-fails-5gec</guid>
      <description>&lt;p&gt;A lot of modern software architecture—microservices, event-driven systems, CQRS—is not born from deeply understanding the domain. It is what teams reach for when the existing application has become a mess: nobody really knows what’s happening where anymore, behavior is unpredictable, and making changes feels risky and expensive. Instead of asking “What does this concept actually mean and where does it truly belong?”, they ask “How do we split this?”&lt;/p&gt;

&lt;p&gt;That is where a lot of modern architecture begins.&lt;br&gt;&lt;br&gt;
Not in necessity.&lt;br&gt;&lt;br&gt;
Not in insight.&lt;br&gt;&lt;br&gt;
But in the growing discomfort of trying to manage software that was never modeled well in the first place.&lt;/p&gt;

&lt;p&gt;And because the resulting system still runs in production, the cost of that move often remains invisible for years.&lt;/p&gt;

&lt;p&gt;That is one of the most expensive traps in software.&lt;/p&gt;




&lt;h2&gt;
  
  
  Framework Fluency Is Not Software Design
&lt;/h2&gt;

&lt;p&gt;A lot of developers today are highly fluent in frameworks.&lt;br&gt;&lt;br&gt;
They know how to build controllers, services, repositories, DTOs, entities, integrations, and configuration.&lt;/p&gt;

&lt;p&gt;From the outside, that often looks like competence.&lt;/p&gt;

&lt;p&gt;But that kind of fluency can be deeply misleading.&lt;/p&gt;

&lt;p&gt;Because building software out of familiar framework-shaped parts is not the same thing as designing software well.&lt;/p&gt;

&lt;p&gt;The real questions are different:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;What is the actual business concept here?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What belongs together?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What behavior is intrinsic to the domain?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What is a real boundary, and what is just an implementation detail?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What rules should be explicit in the model rather than implied by orchestration?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Real domain modeling is not about applying a catalog of patterns. It is the disciplined, often uncomfortable work of discovering what belongs together, what behavior is intrinsic, and expressing those concepts as clearly and cohesively as possible—whether that lives in modules, functions, or simple objects. The goal is conceptual integrity, not architectural ceremony.&lt;/p&gt;

&lt;p&gt;Without those questions, software tends to take on a very predictable shape: fat service classes, anemic entities, persistence-first design, procedural workflows, business logic smeared across layers.&lt;/p&gt;

&lt;p&gt;The code works. The endpoints return data. The database persists state.&lt;/p&gt;

&lt;p&gt;But the system has not really been designed.&lt;br&gt;&lt;br&gt;
It has been assembled.&lt;/p&gt;

&lt;p&gt;And that difference matters far more than most teams realize.&lt;/p&gt;
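&lt;p&gt;The assembled-versus-designed difference is visible even in a toy sketch (the names and the rule below are illustrative, not from any real codebase). In the assembled shape, the entity is a bag of fields and the rule hides in a service; in the designed shape, the rule is intrinsic to the concept:&lt;/p&gt;

```java
// Illustrative only. Assembled shape: anemic entity, rule lives in a service.
class InvoiceData {
    double amount;
    boolean paid;
}

class InvoiceService {
    void markPaid(InvoiceData data, double payment) {
        // The business rule is buried in orchestration code.
        if (payment >= data.amount) {
            data.paid = true;
        }
    }
}

// Designed shape: the Invoice owns its own rule.
class Invoice {
    private final double amount;
    private boolean paid;

    Invoice(double amount) {
        this.amount = amount;
    }

    void settle(double payment) {
        // The rule is stated where the concept lives.
        if (amount > payment) {
            throw new IllegalArgumentException("an Invoice is settled only by full payment");
        }
        paid = true;
    }

    boolean isPaid() {
        return paid;
    }
}
```

&lt;p&gt;Both versions "work," but only the second can be read as a statement about the business.&lt;/p&gt;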




&lt;h2&gt;
  
  
  Weak Models Create Cognitive Overload
&lt;/h2&gt;

&lt;p&gt;The cost of poor design does not usually show up immediately. At first, the system still feels manageable. A few controllers. A few services. A few repositories. Everything is still “clean.”&lt;/p&gt;

&lt;p&gt;But over time, something starts to happen. Business rules accumulate. Exceptions pile up. New requirements interact with old assumptions. Concepts that looked simple turn out to be related in ways the software never captured.&lt;/p&gt;

&lt;p&gt;And because there is no strong domain model holding those concepts together, the complexity has nowhere coherent to go. So it leaks—into service methods, orchestration flows, integration glue, persistence logic, special-case conditionals, “helper” abstractions, and coordination code.&lt;/p&gt;

&lt;p&gt;At that point, the team starts feeling something very real:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Nobody understands the whole thing anymore.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And that is the crucial moment.&lt;/p&gt;

&lt;p&gt;Because once a system becomes cognitively overwhelming, the team has two options:&lt;/p&gt;

&lt;h3&gt;
  
  
  Option A
&lt;/h3&gt;

&lt;p&gt;Reduce the complexity by improving the model.&lt;/p&gt;

&lt;h3&gt;
  
  
  Option B
&lt;/h3&gt;

&lt;p&gt;Reduce the &lt;em&gt;scope&lt;/em&gt; of the confusion by splitting it apart.&lt;/p&gt;

&lt;p&gt;A lot of teams choose Option B.&lt;/p&gt;




&lt;h2&gt;
  
  
  Distribution Becomes Compensation
&lt;/h2&gt;

&lt;p&gt;This is where architecture often stops being a design choice and starts becoming a coping mechanism.&lt;/p&gt;

&lt;p&gt;When the internal model is weak, teams still need some way to create order. And distribution gives them one.&lt;/p&gt;

&lt;p&gt;So they introduce microservices, event-driven architecture, CQRS, separate read models, ownership boundaries, queues, and asynchronous coordination.&lt;/p&gt;

&lt;p&gt;Distribution, CQRS, and event-driven architecture can have legitimate uses in rare cases of extreme scale or unavoidable organizational boundaries. But in the vast majority of systems, they are not introduced because the domain demands them. They are introduced because the internal model is too weak to provide clarity. What looks like sophisticated architecture is often just confusion hiding behind cleaner service boundaries.&lt;/p&gt;

&lt;p&gt;What they are really doing is this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;They are trying to create externally, through distribution, the boundaries they failed to create internally, through design.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And that can work. At least for a while.&lt;/p&gt;

&lt;p&gt;A smaller service &lt;em&gt;does&lt;/em&gt; feel easier to understand than a large monolith. A separate read model &lt;em&gt;does&lt;/em&gt; reduce some friction. A queue &lt;em&gt;does&lt;/em&gt; create some local decoupling.&lt;/p&gt;

&lt;p&gt;But none of that means the software has become conceptually better. It often just means the confusion has been sliced into smaller containers.&lt;/p&gt;




&lt;h2&gt;
  
  
  Local Clarity Comes at a Global Cost
&lt;/h2&gt;

&lt;p&gt;That trade is where the real damage happens.&lt;/p&gt;

&lt;p&gt;Because distribution absolutely can create local context. A team can say, “This service owns billing.” And that does help.&lt;/p&gt;

&lt;p&gt;But it is a much weaker form of clarity than a real domain model. A service boundary can tell you &lt;strong&gt;where code lives&lt;/strong&gt;. A good model can tell you what something &lt;em&gt;is&lt;/em&gt;, what it &lt;em&gt;means&lt;/em&gt;, what rules govern it, what its lifecycle is, and what relationships are essential.&lt;/p&gt;

&lt;p&gt;Those are very different levels of understanding.&lt;/p&gt;

&lt;p&gt;And when teams use distribution to manufacture context, they often gain short-term manageability at the cost of long-term agility. Because now the system starts paying the distribution tax: network failure, eventual consistency, contract drift, duplicated concepts, duplicated logic, coordination overhead, deployment complexity, operational burden, and fractured causality.&lt;/p&gt;

&lt;p&gt;And perhaps most importantly: &lt;strong&gt;lost refactorability&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;When the model is strong and cohesive, changing your mind usually means a local refactor—sometimes even a delightful collapse of concepts. When boundaries have been hardened into services, the same insight triggers contracts, versioning, migration scripts, and cross-team coordination. The cost of learning is no longer paid in thought, but in infrastructure and politics.&lt;/p&gt;

&lt;p&gt;And in software, changing your mind is not a failure. It is the job.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Real Cost Is Paid When the Business Learns Something New
&lt;/h2&gt;

&lt;p&gt;This is where badly structured software reveals itself. Not when it is first deployed. Not when the first endpoints work. Not when the dashboards are green. But when the business itself becomes better understood.&lt;/p&gt;

&lt;p&gt;Because that is what always happens. Sooner or later, the business learns: these two concepts are actually one thing, this workflow was modeled incorrectly, this rule has important exceptions, this distinction is more important than we thought, or this process should not exist at all.&lt;/p&gt;

&lt;p&gt;That is normal. That is what software is supposed to accommodate.&lt;/p&gt;

&lt;p&gt;A coherent domain model makes that kind of change survivable. A fragmented, distributed, weakly modeled system makes it expensive.&lt;/p&gt;

&lt;p&gt;Note that “coherent domain model” here does not mean the tactical patterns that became associated with DDD—entities, repositories, aggregates, and the rest. Those often added their own accidental complexity. Real modeling is simpler and deeper: it is the ongoing work of refining ubiquitous language and discovering natural conceptual boundaries so that new business insight can be absorbed with minimal violence to the existing code.&lt;/p&gt;

&lt;p&gt;Because now the insight has to travel through APIs, queues, read models, event contracts, deployment boundaries, ownership lines, duplicated rules, and partial consistency guarantees. What should have been a conceptual refactor becomes a cross-system negotiation.&lt;/p&gt;

&lt;p&gt;And that is where the bill arrives. Not because the domain was inherently impossible. But because the architecture froze yesterday’s misunderstandings into today’s structure.&lt;/p&gt;

&lt;p&gt;That is one of the worst things software can do.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why This So Often Goes Unnoticed
&lt;/h2&gt;

&lt;p&gt;The most dangerous part is that this kind of architecture often looks successful. The system runs. Users use it. The company makes money. So the architecture gets treated as validated.&lt;/p&gt;

&lt;p&gt;But “it works” is one of the weakest standards in software. A system running in production proves only that it is viable enough to survive. It does &lt;strong&gt;not&lt;/strong&gt; prove that it is cheap to change, conceptually sound, structurally coherent, or good at absorbing new understanding.&lt;/p&gt;

&lt;p&gt;Most teams never get to experience how different software feels when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Concepts have a single, obvious home instead of being smeared across services&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Rules are explicit and enforceable rather than scattered in orchestration and glue code&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;New business understanding leads to a clean refactor instead of distributed coordination&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The system invites insight instead of resisting change&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without that contrast, the pain of weak modeling gets normalized as “just how complex software is.”&lt;/p&gt;

&lt;p&gt;Often, it is not. Often, it is just the cost of weak design hidden behind architecture.&lt;/p&gt;




&lt;h2&gt;
  
  
  Final Thought
&lt;/h2&gt;

&lt;p&gt;Much of today’s distributed architecture is not the result of domain insight. It is compensation for the conceptual clarity that was never built into the model. By reaching for separation instead of deeper understanding, teams gain local manageability at the expense of long-term coherence and cheap evolution.&lt;/p&gt;

&lt;p&gt;The problem is that the original lack of clarity doesn’t disappear — it just gets distributed. In the end, the same confusion that made the monolith unmaintainable will make the distributed system fail just as hard, only now it’s far more expensive and painful to fix.&lt;/p&gt;

&lt;p&gt;This is why so much “sophisticated” architecture is, in truth, just sophisticated coping.&lt;/p&gt;

</description>
      <category>softwareengineering</category>
      <category>java</category>
      <category>microservices</category>
      <category>cqrs</category>
    </item>
    <item>
      <title>Rich Domain Models: Start with What Is, Not What Happens</title>
      <dc:creator>Leon Pennings</dc:creator>
      <pubDate>Sat, 04 Apr 2026 10:18:12 +0000</pubDate>
      <link>https://forem.com/leonpennings/rich-domain-models-start-with-what-is-not-what-happens-4817</link>
      <guid>https://forem.com/leonpennings/rich-domain-models-start-with-what-is-not-what-happens-4817</guid>
      <description>&lt;p&gt;A lot of software is more difficult to build and maintain than it needs to be.&lt;/p&gt;

&lt;p&gt;Not because the business itself is inherently complex.&lt;/p&gt;

&lt;p&gt;Not because the requirements keep changing.&lt;/p&gt;

&lt;p&gt;But because the software is usually structured around the wrong things: workflows, events, commands, technical layers, frameworks, or current implementation details.&lt;/p&gt;

&lt;p&gt;When that happens, the business logic becomes scattered, hard to reason about, and expensive to evolve. The fix is not more patterns, more ceremonies, or more events. The fix is proper domain modelling.&lt;/p&gt;

&lt;p&gt;A rich domain model is built by first identifying the core concepts of the business and giving each one clear responsibilities and boundaries. Once that foundation is in place, everything else—events, workflows, persistence, integrations—becomes simpler and more stable.&lt;/p&gt;

&lt;p&gt;This is not a new technique or a branded method. It is basic systems engineering done in the right order.&lt;/p&gt;




&lt;h3&gt;
  
  
  The purpose of domain modelling
&lt;/h3&gt;

&lt;p&gt;Domain modelling is about discovering &lt;em&gt;what exists&lt;/em&gt; in the business, independent of how we happen to implement it today.&lt;/p&gt;

&lt;p&gt;It means answering a small set of fundamental questions for every important concept:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;What &lt;em&gt;is&lt;/em&gt; this thing?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What is it responsible for?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What may it know?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What may it decide?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What belongs inside its boundary, and what does not?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These questions come before any talk of events, commands, database tables, API payloads, or user flows. If those questions have not been asked and answered, domain modelling has not actually started. At best, we are only mapping interactions.&lt;/p&gt;




&lt;h3&gt;
  
  
  Start with responsibilities, not representation
&lt;/h3&gt;

&lt;p&gt;The most common mistake is beginning with &lt;em&gt;representation&lt;/em&gt; instead of responsibility.&lt;/p&gt;

&lt;p&gt;Teams start listing fields, DTOs, JSON shapes, database columns, or REST endpoints. Those are not the model; they are merely one possible way to represent the model. When you start there, you almost always end up with passive data structures and procedural logic spread across services, handlers, and utility classes.&lt;/p&gt;

&lt;p&gt;A rich domain model begins the other way around. The first questions are never “What properties does this object have?” or “What does the request body look like?” They are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;What is this thing?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What does it &lt;em&gt;do&lt;/em&gt;?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What is it responsible for?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What should it &lt;em&gt;never&lt;/em&gt; be responsible for?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Structure and representation emerge naturally once responsibilities are clear.&lt;/p&gt;




&lt;h3&gt;
  
  
  A simple way to begin
&lt;/h3&gt;

&lt;p&gt;You do not need extensive workshops, coloured sticky notes, or elaborate frameworks.&lt;/p&gt;

&lt;p&gt;The most effective technique is almost embarrassingly simple: put people in a circle of chairs. Tell one person, “You are the Order. What are you? What do you know? What are you responsible for? What should you never do?” Then add the next concept—Client, Invoice, Payment—and let them talk to each other. Let them negotiate boundaries. When something feels wrong, revise the definitions or pull up a new chair for a missing concept.&lt;/p&gt;

&lt;p&gt;The medium does not matter—cards, people, puppets, or just conversation. What matters is that you can point at a concept and force it to declare its own identity and responsibilities. When two concepts constantly need to know each other’s internals, the boundaries are probably wrong. When no one knows who should decide something, the responsibility has not been assigned yet. When a concept only exists because a UI flow needed it, it may not be a real domain concept at all.&lt;/p&gt;

&lt;p&gt;This is domain discovery. It starts with “What are we &lt;em&gt;about&lt;/em&gt;?” and then “Who does what?”—not in the sense of users or actors, but in the sense of the actual participants in the business reality: Client, Order, Invoice, Payment, Subscription, Shipment, Notification.&lt;/p&gt;




&lt;h3&gt;
  
  
  Why starting with events or workflows feels backwards
&lt;/h3&gt;

&lt;p&gt;Many popular modelling techniques (Event Storming being the most visible) begin with domain events, commands, actors, and processes. They are excellent at mapping &lt;em&gt;what happens&lt;/em&gt; and at surfacing integration points. But they are weak at discovering &lt;em&gt;what is&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;They describe motion around the business rather than the business itself. A process map can tell you that a payment failed. It cannot tell you what a Payment &lt;em&gt;is&lt;/em&gt;, what responsibilities it owns, or whether Invoice resolution belongs to the Invoice, the Order, or a separate Payment concept. It cannot distinguish a Client (the legal/commercial entity) from a User (merely a web-access mechanism for that Client).&lt;/p&gt;

&lt;p&gt;Those are modelling questions, and they must come first. Events and workflows are valuable &lt;em&gt;after&lt;/em&gt; the core model exists; they should not be the starting point. Otherwise the domain becomes limited to today’s usage patterns instead of reflecting the stable underlying reality.&lt;/p&gt;




&lt;h3&gt;
  
  
  Aggregates and the danger of procedural models
&lt;/h3&gt;

&lt;p&gt;The concept of “Aggregate” is often presented as a necessary consistency boundary. In practice it frequently becomes a procedural container: a cluster of data that a command mutates and from which an event is emitted. When responsibilities have not been properly assigned, those aggregates turn into little more than transaction scripts with a fancy name.&lt;/p&gt;

&lt;p&gt;In a rich model the question is simpler: does this concept have a coherent responsibility? If it does, it owns its invariants and decisions. If it does not, no artificial boundary will save it. Objects can collaborate, but they do not need to be artificially clustered just to satisfy technical consistency rules.&lt;/p&gt;
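&lt;p&gt;A minimal sketch of that idea (the concept and its invariants are hypothetical): a Subscription that owns its own rules directly, with no aggregate machinery around it.&lt;/p&gt;

```java
// Hypothetical sketch: the concept enforces its own invariants,
// no aggregate root or technical clustering required.
class Subscription {
    private boolean active = true;
    private int seats;

    Subscription(int seats) {
        if (seats > 0) {
            this.seats = seats;
        } else {
            throw new IllegalArgumentException("a Subscription needs at least one seat");
        }
    }

    void cancel() {
        active = false;
    }

    void addSeats(int extra) {
        // A decision this concept owns: cancelled subscriptions do not grow.
        if (!active) {
            throw new IllegalStateException("cannot add seats to a cancelled Subscription");
        }
        seats = seats + extra;
    }

    int seats() {
        return seats;
    }
}
```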




&lt;h3&gt;
  
  
  Rich domain models make the core simple
&lt;/h3&gt;

&lt;p&gt;A well-defined domain model does not add complexity; it removes accidental complexity.&lt;/p&gt;

&lt;p&gt;Consider a typical payment flow. An Order contains Items. An Invoice points to an Order. An Invoice can be resolved by a Payment. A Payment has a type (Online, BankTransfer, etc.). That type determines how execution actually happens.&lt;/p&gt;

&lt;p&gt;In a responsibility-driven model this is straightforward:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The Invoice knows it needs to be resolved.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Payment knows it must execute according to its type.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The type itself (implemented as an enum with a strategy or small implementing classes) encapsulates the variability.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Adding a new payment mechanism tomorrow is a local change inside the Payment concept. No new workshop, no new event storm, no ripple through services. The core model stays stable; only the variable part grows.&lt;/p&gt;

&lt;p&gt;Complexity lives exactly where the variability is—not scattered across workflows, services, or “process managers.”&lt;/p&gt;
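&lt;p&gt;A sketch of that payment flow in code (names and behaviors are illustrative): the variability sits inside the type itself, so a new payment mechanism is one new enum constant rather than a change rippling through services.&lt;/p&gt;

```java
// Illustrative sketch of the enum-with-strategy shape described above.
enum PaymentType {
    ONLINE {
        String execute(double amount) {
            return "charged " + amount + " via payment provider";
        }
    },
    BANK_TRANSFER {
        String execute(double amount) {
            return "awaiting bank transfer of " + amount;
        }
    };

    // Each constant supplies its own execution behavior.
    abstract String execute(double amount);
}

class Payment {
    private final PaymentType type;
    private final double amount;

    Payment(PaymentType type, double amount) {
        this.type = type;
        this.amount = amount;
    }

    // The Payment knows it must execute according to its type.
    String execute() {
        return type.execute(amount);
    }
}
```

&lt;p&gt;Adding, say, a direct-debit mechanism touches only the enum; the Payment concept and everything that collaborates with it stay unchanged.&lt;/p&gt;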




&lt;h3&gt;
  
  
  Keep the domain central; push technology to the border
&lt;/h3&gt;

&lt;p&gt;The real architectural decision is not whether a domain object may call a database or invoke an external service. The question is: does this action belong to the responsibility of this concept?&lt;/p&gt;

&lt;p&gt;If the answer is yes, the call can live inside the domain object. Technology is not the organising principle. The business meaning is.&lt;/p&gt;

&lt;p&gt;When you organise around technology layers instead (controllers, services, repositories, adapters), the business becomes invisible. Every change requires archaeological digging. When you organise around the domain, the business stays transparent and technology becomes replaceable.&lt;/p&gt;




&lt;h3&gt;
  
  
  Outcomes — short term and long term
&lt;/h3&gt;

&lt;p&gt;A domain model built this way delivers measurable improvements from the very first delivery and compounds dramatically over time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Short term:&lt;/strong&gt; Time to first production is usually &lt;em&gt;shorter&lt;/em&gt;, not longer. With a rich domain model you know the destination clearly from the start, so you can take the direct route. It is the difference between driving from The Hague to Utrecht on the A12 motorway versus taking the long detour via Amsterdam and the Afsluitdijk. Both paths eventually get you there, but the workflow-first approach feels like continuously driving “somewhat in the right direction” while you figure things out on the fly. By modelling what the business is, you learn faster, decide faster, write less boilerplate, and avoid the lengthy refactoring cycles that come from discovering the business domain later in the project.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Long term:&lt;/strong&gt; The difference becomes stark — especially in non-CRUD domains such as complex ETL pipelines, logistics orchestration, risk engines, or any system with real business rules and variability.&lt;/p&gt;

&lt;p&gt;The “framework-first” or “workflow-first” approach can appear to work for a while. You can wire together services, handlers, and event processors and ship something functional. But as soon as the business evolves — new payment types, new regulatory rules, new integration partners, or changed data flows — the system turns into a web of scattered logic. Maintenance becomes slow, error-prone, and expensive. Changes ripple unpredictably because the business is no longer visible in one coherent place.&lt;/p&gt;

&lt;p&gt;In contrast, a rich domain model keeps the stable business reality in the centre. Change stays local. Payment providers, ETL transformations, or logistics carriers can be swapped without touching the core model. Fewer classes, fewer hand-offs, and far less rediscovery work are required. The result is software that is significantly cheaper to keep alive over its lifetime — often by a large margin.&lt;/p&gt;

&lt;p&gt;The economic benefit is real, but it is not the goal. It is the natural outcome of doing the engineering work correctly: modelling the domain first, responsibilities first, structure first.&lt;/p&gt;




&lt;h3&gt;
  
  
  On AI and domain modelling
&lt;/h3&gt;

&lt;p&gt;Modern AI tools are already excellent at helping with the &lt;em&gt;implementation&lt;/em&gt; phase. They can generate clean code snippets, suggest conventions, enforce patterns, and accelerate boilerplate work once the model is clear.&lt;/p&gt;

&lt;p&gt;But they have no meaningful role in the actual domain modelling itself.&lt;/p&gt;

&lt;p&gt;AI cannot sit in the circle of chairs. It cannot negotiate what a concept &lt;em&gt;is&lt;/em&gt;, what it should know, or what it should never be responsible for. It can mimic patterns it has seen in other codebases, but it lacks the lived understanding of business reality and the ability to discover stable invariants through dialogue.&lt;/p&gt;

&lt;p&gt;Writing the code remains the best mirror for your design. As soon as you start implementing, flaws in the model become visible immediately — that feedback loop is irreplaceable and deeply human. AI can polish and speed up the coding, but it should not be the one discovering or deciding the model. That work still belongs to the people who understand the domain.&lt;/p&gt;




&lt;h3&gt;
  
  
  Final thought
&lt;/h3&gt;

&lt;p&gt;Basic domain modelling is not complicated. It is simply insisting on answering the most fundamental questions first:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;What is this thing?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What is it responsible for?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What belongs inside it?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What should remain outside it?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When those questions are answered clearly, the business becomes visible in the code. Once the business is visible, the system becomes maintainable — from day one and for years to come.&lt;/p&gt;

&lt;p&gt;That is not a luxury. For any software expected to live longer than its current tech stack, it is the foundation.&lt;/p&gt;

</description>
      <category>softwareengineering</category>
      <category>java</category>
      <category>richdomainmodels</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Your software development approach is too expensive and too brittle</title>
      <dc:creator>Leon Pennings</dc:creator>
      <pubDate>Thu, 02 Apr 2026 08:14:22 +0000</pubDate>
      <link>https://forem.com/leonpennings/your-software-development-approach-is-too-expensive-and-too-brittle-4fja</link>
      <guid>https://forem.com/leonpennings/your-software-development-approach-is-too-expensive-and-too-brittle-4fja</guid>
      <description>&lt;p&gt;Most software teams are not struggling because software is inherently chaotic.&lt;/p&gt;

&lt;p&gt;They are struggling because they are paying enormous amounts of money to keep the wrong machine barely usable.&lt;/p&gt;

&lt;p&gt;That sounds dramatic.&lt;/p&gt;

&lt;p&gt;It is not.&lt;/p&gt;

&lt;p&gt;In fact, it is one of the most normal things in modern software development.&lt;/p&gt;

&lt;p&gt;A lot of systems are built in ways that are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;more expensive than they need to be,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;more fragile than they need to be,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;harder to change than they need to be,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;and harder to reason about than they need to be.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And yet they still get called “well architected.”&lt;/p&gt;

&lt;p&gt;Why?&lt;/p&gt;

&lt;p&gt;Because in software, there is usually no comparison case.&lt;/p&gt;

&lt;p&gt;No control group.&lt;/p&gt;

&lt;p&gt;No alternate implementation.&lt;/p&gt;

&lt;p&gt;No tractor parked next to the Ferrari.&lt;/p&gt;

&lt;p&gt;So if the thing eventually works, the architecture often gets promoted from merely functional to supposedly good.&lt;/p&gt;

&lt;p&gt;That is one of the deepest blind spots in software engineering.&lt;/p&gt;

&lt;p&gt;And it is how teams end up trying to plow fields with a Ferrari F40.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;The Ferrari and the tractor&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Imagine you need to plow a field.&lt;/p&gt;

&lt;p&gt;You can choose between:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;a Ferrari F40, or&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;a tractor.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This should not be a difficult decision.&lt;/p&gt;

&lt;p&gt;The tractor is not glamorous, but it is aligned to the work.&lt;/p&gt;

&lt;p&gt;It has:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;the right ground clearance,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the right tires,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the right torque profile,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the right durability characteristics,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the right maintenance expectations,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;and the right operational shape.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Ferrari has none of that.&lt;/p&gt;

&lt;p&gt;It is a remarkable machine.&lt;/p&gt;

&lt;p&gt;It is just the wrong one.&lt;/p&gt;

&lt;p&gt;And the mismatch does not merely show up once the work starts.&lt;/p&gt;

&lt;p&gt;It shows up immediately.&lt;/p&gt;

&lt;p&gt;Because before the Ferrari can even begin to perform badly in the field, someone first has to solve a completely absurd problem:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;How do we even make this thing usable for field work?&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That is where the real cost begins.&lt;/p&gt;

&lt;p&gt;Because now you need compensations.&lt;/p&gt;

&lt;p&gt;You need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;custom adaptations,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;support structures,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;protective workarounds,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;non-native operational handling,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;specialist maintenance,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;and constant care to keep the machine functioning in an environment it was never shaped for.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is the real problem with a mismatch.&lt;/p&gt;

&lt;p&gt;Not just that it performs badly.&lt;/p&gt;

&lt;p&gt;But that you now have to build an entire support ecosystem around the fact that it is wrong.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;And even that is a cheap mismatch compared to software&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In the physical world, the mismatch would at least be visible.&lt;/p&gt;

&lt;p&gt;A Ferrari F40 is obviously a terrible agricultural investment.&lt;/p&gt;

&lt;p&gt;Even with rough but realistic assumptions, the economics are absurd.&lt;/p&gt;

&lt;p&gt;The absurdity would show up on a balance sheet immediately. A collector F40 trades for millions, while a capable farm tractor costs a fraction of that, with maintenance profiles to match.&lt;/p&gt;

&lt;p&gt;So yes: in the real world, using a Ferrari to plow a field would already be economically insane.&lt;/p&gt;

&lt;p&gt;But in software, the mismatch is often much worse.&lt;/p&gt;

&lt;p&gt;Because in software:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;the cost is less visible,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the pain is spread over time,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the friction is normalized,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;and the organization often has no simpler implementation to compare it to.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That means software teams can spend years operating the equivalent of a Ferrari in a muddy field and still call it “engineering maturity.”&lt;/p&gt;

&lt;p&gt;That is the danger.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;The uniqueness trap&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This is one of the hardest structural problems in software development:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;most applications are built only once.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Not once in terms of business purpose, perhaps.&lt;/p&gt;

&lt;p&gt;But once in terms of implementation.&lt;/p&gt;

&lt;p&gt;A team typically does not build:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;one version with a cohesive domain model,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;another with CQRS and event choreography,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;another with five microservices,&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;and then compare cost, reliability, comprehensibility, and adaptability over five years.&lt;/p&gt;

&lt;p&gt;That almost never happens.&lt;/p&gt;

&lt;p&gt;So architecture is rarely judged comparatively.&lt;/p&gt;

&lt;p&gt;It is judged internally.&lt;/p&gt;

&lt;p&gt;And that means if a system eventually “works,” people often conclude that the architecture must have been reasonable.&lt;/p&gt;

&lt;p&gt;But that conclusion is deeply unreliable.&lt;/p&gt;

&lt;p&gt;Because there may have been a far cheaper, simpler, more robust, and more truthful way to build the same thing.&lt;/p&gt;

&lt;p&gt;No one knows.&lt;/p&gt;

&lt;p&gt;Because the tractor version was never built.&lt;/p&gt;

&lt;p&gt;That is the uniqueness trap.&lt;/p&gt;

&lt;p&gt;And it is one of the main reasons accidental complexity survives so easily in software.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Most software architecture is expensive support structure around a mismatch&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This is where the Ferrari metaphor becomes useful.&lt;/p&gt;

&lt;p&gt;If someone insisted on plowing a field with an F40, they would not simply “start plowing.”&lt;/p&gt;

&lt;p&gt;They would first need to invent a whole support system around the mismatch.&lt;/p&gt;

&lt;p&gt;They would need to answer questions like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;How do we prevent the chassis from bottoming out?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How do we maintain traction in mud?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How do we protect components from wear profiles they were never designed for?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How do we hitch implements to a machine that was never built to pull them?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How do we keep it alive under repeated misuse?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In other words:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;they would need to build a compensating architecture around the fact that the machine is wrong.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That is exactly what many software teams do.&lt;/p&gt;

&lt;p&gt;They choose an architectural shape before they understand the domain, and then spend years building support mechanisms around the mismatch.&lt;/p&gt;

&lt;p&gt;That support structure often looks like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;CQRS,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;EDA,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;orchestration layers,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;distributed workflows,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;microservices,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;command buses,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;event buses,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;retries,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;compensations,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;synchronization logic,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;observability scaffolding,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;deployment choreography,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;and framework conventions.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And because all of this is technical work, it often feels sophisticated.&lt;/p&gt;

&lt;p&gt;But much of it exists only because the software was shaped incorrectly to begin with.&lt;/p&gt;

&lt;p&gt;That is the setup tax of accidental complexity.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Back to Brooks: essential versus accidental complexity&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Fred Brooks gave us the cleanest possible vocabulary for this problem decades ago.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Essential complexity&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Essential complexity is the irreducible complexity of the business domain itself.&lt;/p&gt;

&lt;p&gt;This is the complexity that actually belongs.&lt;/p&gt;

&lt;p&gt;Examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;pricing rules,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;eligibility constraints,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;shipment state transitions,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;reconciliation logic,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;metadata rules,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;legal behavior,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;catalog semantics,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;scheduling constraints.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This complexity exists because reality is complex.&lt;/p&gt;

&lt;p&gt;You cannot remove it.&lt;/p&gt;

&lt;p&gt;You can only understand it, model it, and localize it properly.&lt;/p&gt;
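&lt;p&gt;As a minimal sketch of what “localize it properly” can mean, here is a hypothetical shipment whose valid state transitions are stated once, inside the concept that owns them (all names are invented for illustration, not taken from any real codebase):&lt;/p&gt;

```java
// Hypothetical sketch: the essential rule (which shipment state
// transitions reality allows) stated once, in the concept itself.
enum ShipmentState { CREATED, PACKED, DISPATCHED, DELIVERED, RETURNED }

final class Shipment {
    private ShipmentState state = ShipmentState.CREATED;

    ShipmentState state() { return state; }

    // The business truth, localized: every other part of the system
    // can ask this question instead of re-deriving the answer.
    static boolean allowed(ShipmentState from, ShipmentState to) {
        switch (from) {
            case CREATED:    return to == ShipmentState.PACKED;
            case PACKED:     return to == ShipmentState.DISPATCHED;
            case DISPATCHED: return to == ShipmentState.DELIVERED
                                 || to == ShipmentState.RETURNED;
            default:         return false; // DELIVERED and RETURNED are terminal
        }
    }

    void transitionTo(ShipmentState next) {
        if (!allowed(state, next)) {
            throw new IllegalStateException(state + " -> " + next);
        }
        state = next;
    }
}
```

&lt;p&gt;Any code that attempts an invalid transition fails at the rule itself, not three services downstream.&lt;/p&gt;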

&lt;h3&gt;
  
  
  &lt;strong&gt;Accidental complexity&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Accidental complexity is everything introduced by the solution that the problem itself did not require.&lt;/p&gt;

&lt;p&gt;Examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;framework conventions,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;architectural ceremony,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;messaging choreography,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;unnecessary distribution,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;layered indirection,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;technical orchestration,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;compensating workflows,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;integration-driven domain shape,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;“enterprise” abstraction stacks.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This complexity is not business truth.&lt;/p&gt;

&lt;p&gt;It is construction overhead.&lt;/p&gt;

&lt;p&gt;And much of modern software architecture is simply accidental complexity with better branding.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;The first job of software design is not to choose an architecture&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;It is to understand the domain.&lt;/p&gt;

&lt;p&gt;That should not be controversial.&lt;/p&gt;

&lt;p&gt;And yet much of modern software development behaves as if the opposite were true.&lt;/p&gt;

&lt;p&gt;Teams routinely begin with questions like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Should we use CQRS?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Should we use EDA?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Should we split this into microservices?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Should this be event-driven?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Should we separate reads and writes?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Should this be asynchronous?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Should we introduce orchestration?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Those are not first questions.&lt;/p&gt;

&lt;p&gt;Those are late questions.&lt;/p&gt;

&lt;p&gt;The first question is:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;What is the business, really?&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Until that question is answered properly, every major architectural choice is at risk of being premature.&lt;/p&gt;

&lt;p&gt;And premature architecture is usually just accidental complexity entering the system early enough to become permanent.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;The real problem is Pattern-Driven Design&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The issue is not that CQRS, EDA, or messaging can never appear in a system.&lt;/p&gt;

&lt;p&gt;The issue is that many teams no longer design from the domain outward.&lt;/p&gt;

&lt;p&gt;They design from patterns inward.&lt;/p&gt;

&lt;p&gt;That is how software ends up shaped by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;command handlers,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;event buses,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;orchestration layers,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;service templates,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;and framework conventions&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;before anyone has actually understood what the business is.&lt;/p&gt;

&lt;p&gt;That is not architecture.&lt;/p&gt;

&lt;p&gt;That is &lt;strong&gt;Pattern-Driven Design&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;And Pattern-Driven Design is one of the fastest ways to bury essential complexity under accidental complexity.&lt;/p&gt;

&lt;p&gt;Because once the pattern becomes the starting point, the business no longer gets modeled on its own terms.&lt;/p&gt;

&lt;p&gt;It gets forced to fit the machinery.&lt;/p&gt;

&lt;p&gt;That is not simplification.&lt;/p&gt;

&lt;p&gt;That is distortion.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Always start with the domain model&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;If the goal is to avoid expensive, brittle, overcompensated systems, then the starting point is straightforward:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Always start with the domain model.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Not because every system needs an elaborate object hierarchy.&lt;/p&gt;

&lt;p&gt;Not because “DDD” is fashionable.&lt;/p&gt;

&lt;p&gt;Not because object orientation is sacred.&lt;/p&gt;

&lt;p&gt;But because if you do not start there, something else will define the shape of the software instead.&lt;/p&gt;

&lt;p&gt;And that “something else” is usually accidental.&lt;/p&gt;

&lt;p&gt;If you do not begin with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;what the business concepts are,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;what they mean,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;what they are responsible for,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;what must always be true,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;how they are allowed to change,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;and how they interact,&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;then the system will instead be shaped by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;endpoints,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;persistence structure,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;framework constraints,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;service boundaries,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;message flows,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;handler conventions,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;or transport semantics.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And once that happens, the business is no longer being modeled.&lt;/p&gt;

&lt;p&gt;It is being adapted to the machinery.&lt;/p&gt;

&lt;p&gt;That is where software becomes expensive and brittle.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;A user story is not a model&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This is one of the most common and costly confusions in software teams.&lt;/p&gt;

&lt;p&gt;A user story is not a model.&lt;/p&gt;

&lt;p&gt;A ticket is not a model.&lt;/p&gt;

&lt;p&gt;A process diagram is not a model.&lt;/p&gt;

&lt;p&gt;A request from the business is not yet the business.&lt;/p&gt;

&lt;p&gt;These things describe surface behavior.&lt;/p&gt;

&lt;p&gt;They do not necessarily describe the actual structure or semantics of the domain.&lt;/p&gt;

&lt;p&gt;That means implementation should never start by merely wiring the request into the chosen architecture.&lt;/p&gt;

&lt;p&gt;It should start by asking:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;What actually exists here?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What is this concept responsible for?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Which rules belong together?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Which state transitions are valid?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Which interactions are intrinsic?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Which behaviors are essential and which are incidental?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is the real work of software design.&lt;/p&gt;

&lt;p&gt;And the clearest place to do that work is the domain model.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;A rich domain model is not overengineering&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This is where a lot of modern teams have become confused.&lt;/p&gt;

&lt;p&gt;There is a recurring assumption that a rich domain model is somehow “too much.”&lt;/p&gt;

&lt;p&gt;But in practice, what often happens is not that the logic disappears.&lt;/p&gt;

&lt;p&gt;It simply moves elsewhere.&lt;/p&gt;

&lt;p&gt;If the business logic is not in the model, it will end up in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;services,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;handlers,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;orchestrators,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;subscribers,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;validators,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;workflows,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;pipelines,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;process managers,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;or framework glue.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is not simplification.&lt;/p&gt;

&lt;p&gt;That is displacement.&lt;/p&gt;

&lt;p&gt;A rich domain model is not about making software “academic.”&lt;/p&gt;

&lt;p&gt;It is about ensuring that the unavoidable business complexity lives where it is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;explicit,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;cohesive,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;inspectable,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;and semantically meaningful.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In other words:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;the model should contain the business.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Not the framework.&lt;/p&gt;

&lt;p&gt;Not the message bus.&lt;/p&gt;

&lt;p&gt;Not the choreography.&lt;/p&gt;

&lt;p&gt;Not the deployment topology.&lt;/p&gt;

&lt;p&gt;The business.&lt;/p&gt;
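&lt;p&gt;The displacement can be made concrete with a deliberately small, hypothetical example: the same approval rule written twice, once in service glue and once in the concept itself (the names and the 10,000 threshold are invented for illustration):&lt;/p&gt;

```java
// Displaced: an anemic data bag plus a service that knows the rule.
final class OrderData {
    double total;
    boolean approved;
}

final class OrderApprovalService {
    // The business rule lives in glue code, away from the concept
    // it describes, where it silently does nothing on large orders.
    void approve(OrderData order) {
        if (order.total > 10_000) {
            return;
        }
        order.approved = true;
    }
}

// Localized: the concept owns its own rule.
final class Order {
    private final double total;
    private boolean approved;

    Order(double total) { this.total = total; }

    boolean approved() { return approved; }

    // The invariant is stated where the business concept lives,
    // and violating it is an explicit, visible failure.
    void approve() {
        if (total > 10_000) {
            throw new IllegalStateException("orders above 10,000 need manual review");
        }
        approved = true;
    }
}
```

&lt;p&gt;The total amount of logic is identical. What changes is whether the rule is discoverable by reading the concept it governs.&lt;/p&gt;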




&lt;h2&gt;
  
  
  &lt;strong&gt;If the domain is simple, the model will be simple&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This is where the usual objection appears:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“But not every system needs a rich domain model.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Correct.&lt;/p&gt;

&lt;p&gt;But that does not weaken the argument at all.&lt;/p&gt;

&lt;p&gt;Because the real point is not that every system needs a complex model.&lt;/p&gt;

&lt;p&gt;The point is:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;every system should begin by discovering whether the domain is simple or complex.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And the correct place to do that is still the model.&lt;/p&gt;

&lt;p&gt;If the domain turns out to be simple, then good.&lt;/p&gt;

&lt;p&gt;The model will simply remain small and quiet.&lt;/p&gt;

&lt;p&gt;That is not failure.&lt;/p&gt;

&lt;p&gt;That is successful discovery of simplicity.&lt;/p&gt;

&lt;p&gt;But deciding not to start there is a mistake.&lt;/p&gt;

&lt;p&gt;Because then simplicity is not being discovered.&lt;/p&gt;

&lt;p&gt;It is being assumed.&lt;/p&gt;

&lt;p&gt;And assumed simplicity is one of the easiest ways accidental complexity gets invited in.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;CQRS and EDA are often compensations for unclear modeling&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Here is the part many people will resist.&lt;/p&gt;

&lt;p&gt;That is fine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CQRS and EDA are very often workarounds for bad design or not knowing how to model.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That does not mean they can never appear.&lt;/p&gt;

&lt;p&gt;It means they should almost never appear as &lt;strong&gt;up-front architectural choices&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;That distinction matters enormously.&lt;/p&gt;

&lt;p&gt;They can absolutely emerge later as observations in retrospect.&lt;/p&gt;

&lt;p&gt;But they should not be adopted as predefined frameworks before the domain has been understood.&lt;/p&gt;

&lt;p&gt;Because once that happens, the architecture is no longer responding to the domain.&lt;/p&gt;

&lt;p&gt;The domain is being forced into the architecture.&lt;/p&gt;

&lt;p&gt;That is backwards.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;CQRS is usually an observation, not a design starting point&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Properly understood, CQRS is not something you “do.”&lt;/p&gt;

&lt;p&gt;It is simply the recognition that:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;the model used to change business state is not always the same model best suited for retrieving and navigating information.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That is all.&lt;/p&gt;

&lt;p&gt;And sometimes that is perfectly valid.&lt;/p&gt;

&lt;p&gt;A search library like Lucene is a very good example.&lt;/p&gt;

&lt;p&gt;The write side may simply persist documents or structured domain state.&lt;/p&gt;

&lt;p&gt;The read side may support:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;indexing,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;tokenization,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;ranking,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;full-text search,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;query optimization.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Those are not the same concern.&lt;/p&gt;

&lt;p&gt;That is a natural asymmetry.&lt;/p&gt;

&lt;p&gt;That is CQRS as an observation.&lt;/p&gt;
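&lt;p&gt;A minimal sketch of that asymmetry, with invented names and a toy inverted index standing in for a real search library: the write side stays plain domain state, and the read side is merely a projection derived from it, with no buses or handlers in between:&lt;/p&gt;

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Locale;
import java.util.Map;

// Hypothetical sketch: CQRS as an observation, not ceremony.
final class Catalog {
    // Write side: plain domain state.
    private final List<String> titles = new ArrayList<>();

    // Read side: an inverted index derived from the same state,
    // shaped for search rather than for change.
    private final Map<String, List<String>> index = new HashMap<>();

    void add(String title) {
        titles.add(title);
        for (String token : title.toLowerCase(Locale.ROOT).split("\\s+")) {
            index.computeIfAbsent(token, k -> new ArrayList<>()).add(title);
        }
    }

    List<String> search(String token) {
        return index.getOrDefault(token.toLowerCase(Locale.ROOT), List.of());
    }
}
```

&lt;p&gt;The asymmetry is real, but it earned its place: it was observed in the shape of the problem, not installed on day one.&lt;/p&gt;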

&lt;p&gt;But that is very different from deciding on day one that the architecture will have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;command handlers,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;query handlers,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;buses,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;mediators,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;folders,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;pipelines,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;and all the associated ceremony.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is not domain modeling.&lt;/p&gt;

&lt;p&gt;That is accidental complexity pretending to be rigor.&lt;/p&gt;

&lt;p&gt;Most CQRS implementations are just &lt;strong&gt;CRUD with bureaucracy&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;EDA is often the same mistake, but with more latency&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Event-driven architecture is often sold as if it were inherently sophisticated.&lt;/p&gt;

&lt;p&gt;It is not.&lt;/p&gt;

&lt;p&gt;Very often, it is simply a sign that direct responsibility was not modeled clearly enough.&lt;/p&gt;

&lt;p&gt;There is a major difference between:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;recognizing a domain fact, and&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;externalizing causality into a distributed system.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Those are not the same thing.&lt;/p&gt;

&lt;p&gt;A domain event can be a useful modeling concept.&lt;/p&gt;

&lt;p&gt;But when every business consequence gets turned into:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;a message,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;a subscriber,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;a consumer,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;a queue,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;a retry policy,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;a dead-letter topic,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;a compensating process,&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;then what often happened is not decoupling.&lt;/p&gt;

&lt;p&gt;What happened is that one coherent business act was split into multiple technical acts — and the system now needs operational rituals to pretend they are still one thing.&lt;/p&gt;

&lt;p&gt;That is not elegance.&lt;/p&gt;

&lt;p&gt;That is fragmentation.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;If an event is required for correctness, it belongs in the same transaction&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This is where a lot of “event-driven” thinking falls apart.&lt;/p&gt;

&lt;p&gt;If an event represents something the business considers part of the same completed action, then it should not be externalized into eventual consistency theater.&lt;/p&gt;

&lt;p&gt;It should be processed within the same transactional consistency boundary.&lt;/p&gt;

&lt;p&gt;Often that means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;same model,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;same process,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;same database transaction,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;same JDBC transaction.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Because if correctness depends on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;retries,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;cleanup,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;compensating actions,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;dead-letter queues,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;reconciliation jobs,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;or support scripts,&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;then the architecture has usually split apart something the business still considers one coherent act.&lt;/p&gt;

&lt;p&gt;That is not decoupling.&lt;/p&gt;

&lt;p&gt;That is a modeling failure disguised as scalability.&lt;/p&gt;

&lt;p&gt;The simple rule is this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;If the business says these things are one thing, the software should not split them into many things.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Only effects that are genuinely:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;external,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;observational,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;optional,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;or secondary&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;should be allowed to escape the core transactional boundary asynchronously.&lt;/p&gt;

&lt;p&gt;Everything else belongs together.&lt;/p&gt;
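&lt;p&gt;A toy sketch of that boundary, with hypothetical names and an in-memory rollback standing in for what a database transaction would do: the consequence is applied within the same act, and a failure undoes both sides instead of scheduling a compensation:&lt;/p&gt;

```java
// Hypothetical sketch: a consequence the business treats as part of
// the same act is applied inside one atomic boundary, not queued.
final class Account {
    private long balance;
    Account(long balance) { this.balance = balance; }
    long balance() { return balance; }
    void adjust(long delta) { balance += delta; }
}

final class Transfer {
    // One business act: debit and credit succeed or fail together.
    static void execute(Account from, Account to, long amount) {
        long fromBefore = from.balance();
        long toBefore = to.balance();
        try {
            from.adjust(-amount);
            if (from.balance() < 0) {
                throw new IllegalStateException("insufficient funds");
            }
            to.adjust(amount); // the consequence, in the same boundary
        } catch (RuntimeException e) {
            // Restore both sides; in production this is the database
            // transaction's job, not a compensating workflow's.
            from.adjust(fromBefore - from.balance());
            to.adjust(toBefore - to.balance());
            throw e;
        }
    }
}
```

&lt;p&gt;No retries, no dead-letter queue, no reconciliation job: the failure mode is “nothing happened,” which is exactly what the business expects from one act.&lt;/p&gt;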




&lt;h2&gt;
  
  
  &lt;strong&gt;Microservices are often bad design with Kubernetes&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;And yes, the same critique applies to microservices.&lt;/p&gt;

&lt;p&gt;Microservices are one of the most overprescribed and underjustified architectural choices in modern software.&lt;/p&gt;

&lt;p&gt;They are usually discussed in terms of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;scaling,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;team autonomy,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;resilience,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;independent deployment,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;ownership.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But that framing hides the actual cost.&lt;/p&gt;

&lt;p&gt;Because microservices are not just a deployment decision.&lt;/p&gt;

&lt;p&gt;They are a fragmentation decision.&lt;/p&gt;

&lt;p&gt;They force teams to commit to distributed boundaries early — often before anyone has proven those boundaries are semantically real.&lt;/p&gt;

&lt;p&gt;And once the split is made, the business has to pretend those boundaries are natural.&lt;/p&gt;

&lt;p&gt;That is how teams end up with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;cross-service workflows,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;distributed invariants,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;duplicated concepts,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;compensating logic,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;service orchestration,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;and “eventual consistency” as a lifestyle.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is not architecture.&lt;/p&gt;

&lt;p&gt;That is often just what happens when one cohesive domain gets cut into pieces because “small services” sounded modern.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Logical cohesion comes before physical scale&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This is where the usual counterargument appears:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Yes, but what about scale?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Fair question.&lt;/p&gt;

&lt;p&gt;But scale does not rescue bad boundaries.&lt;/p&gt;

&lt;p&gt;It amplifies them.&lt;/p&gt;

&lt;p&gt;If you cannot model a business capability coherently in one process, you are very unlikely to improve it by scattering it across twenty.&lt;/p&gt;

&lt;p&gt;That is because &lt;strong&gt;logical cohesion is a prerequisite for physical distribution&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;A coherent system can sometimes be split later if reality genuinely demands it.&lt;/p&gt;

&lt;p&gt;An incoherent system does not become better by being distributed.&lt;/p&gt;

&lt;p&gt;It just becomes harder to debug, harder to reason about, and more expensive to keep alive.&lt;/p&gt;

&lt;p&gt;So yes, scale matters.&lt;/p&gt;

&lt;p&gt;But scale is not an excuse to abandon cohesion before you have even found it.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Small is not the goal. Cohesion is.&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The phrase “microservice” already biases the conversation in the wrong direction.&lt;/p&gt;

&lt;p&gt;Because it encourages optimization for smallness.&lt;/p&gt;

&lt;p&gt;But smallness is not the goal.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Cohesion is the goal.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The real objective is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;semantically meaningful boundaries,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;high internal density of behavior,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;low cross-boundary coordination.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is very different.&lt;/p&gt;

&lt;p&gt;If one business action routinely requires orchestration across multiple internal services, the split is probably wrong.&lt;/p&gt;

&lt;p&gt;That is one of the best architectural tests there is.&lt;/p&gt;

&lt;p&gt;Because if the business still experiences something as one coherent operation, but the software requires:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;service A,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;then service B,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;then service C,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;then retries and compensations if one fails,&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;then the architecture has not discovered a boundary.&lt;/p&gt;

&lt;p&gt;It has manufactured one.&lt;/p&gt;

&lt;p&gt;And now it has to manage the damage.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;The real cost of framework-first architecture is not implementation. It is drag.&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This is where the economics become severe.&lt;/p&gt;

&lt;p&gt;Bad architecture is not expensive merely because it takes slightly longer to build.&lt;/p&gt;

&lt;p&gt;It is expensive because it creates organizational drag for years.&lt;/p&gt;

&lt;p&gt;That drag shows up everywhere.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Slower feature development&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Every change now has to move through machinery that was introduced before the business was properly understood.&lt;/p&gt;

&lt;p&gt;So even small changes require:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;coordination,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;contract changes,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;handler updates,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;event flow changes,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;service touchpoints,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;deployment sequencing,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;orchestration review.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is not domain complexity.&lt;/p&gt;

&lt;p&gt;That is architecture tax.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;More defects and harder recovery&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;When one coherent business action has been fragmented across:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;services,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;queues,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;projections,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;retries,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;and compensations,&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;then failure handling becomes vastly more expensive.&lt;/p&gt;

&lt;p&gt;The question is no longer:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Did the business rule execute correctly?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It becomes:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Which part of the distributed choreography failed, and what state is the system now in?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That is a much more expensive problem to solve.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Permanent cognitive overhead&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This is one of the biggest hidden costs in software.&lt;/p&gt;

&lt;p&gt;A misaligned architecture forces every engineer to carry extra mental load just to understand the system.&lt;/p&gt;

&lt;p&gt;Instead of reasoning directly about the business, they must first reason about:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;the framework,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the orchestration model,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the service topology,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the event timing,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the deployment shape,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the technical conventions.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That means every change is more mentally expensive than it should be.&lt;/p&gt;

&lt;p&gt;And because salaries are the dominant cost in software, &lt;strong&gt;cognitive inefficiency is financial inefficiency&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;The architecture becomes a second problem&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;At some point, the software is no longer difficult because the business is difficult.&lt;/p&gt;

&lt;p&gt;It is difficult because the architecture has become a second problem layered on top of the first.&lt;/p&gt;

&lt;p&gt;The system is now solving:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;the business domain, and&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the consequences of its own design choices.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That is pure waste.&lt;/p&gt;

&lt;p&gt;And because most teams never built the tractor version, they often do not even realize how much of their effort is going into supporting the machine rather than solving the problem.&lt;/p&gt;

&lt;p&gt;That is the uniqueness trap again.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;The most expensive architecture is not the one that fails immediately&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;It is the one that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;works just enough,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;survives just long enough,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;and obscures its own cost just well enough&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;that nobody ever questions whether the machine was appropriate in the first place.&lt;/p&gt;

&lt;p&gt;That is what makes framework-first architecture so dangerous.&lt;/p&gt;

&lt;p&gt;It often does not fail loudly.&lt;/p&gt;

&lt;p&gt;It succeeds &lt;strong&gt;expensively&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;And that is much worse.&lt;/p&gt;

&lt;p&gt;Because visible failure can trigger redesign.&lt;/p&gt;

&lt;p&gt;But expensive success gets institutionalized.&lt;/p&gt;

&lt;p&gt;It becomes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;“our platform,”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;“our standard architecture,”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;“our scalable foundation,”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;“our engineering maturity.”&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When in reality, it may just be a Ferrari that the organization has spent five years trying to teach to plow a field.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;The first responsibility of software architecture is not scalability&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;It is not flexibility.&lt;br&gt;&lt;br&gt;
It is not “future-proofing.”&lt;br&gt;&lt;br&gt;
It is not pattern compliance.&lt;br&gt;&lt;br&gt;
It is not cloud nativeness.&lt;br&gt;&lt;br&gt;
It is not distributed elegance.&lt;/p&gt;

&lt;p&gt;It is this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;to make the essential complexity of the business explicit, cohesive, and understandable.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That is the job.&lt;/p&gt;

&lt;p&gt;Everything else comes later.&lt;/p&gt;

&lt;p&gt;And if the software cannot explain the business clearly through its model, then it is not well architected — no matter how many services, handlers, events, buses, frameworks, or diagrams surround it.&lt;/p&gt;

&lt;p&gt;Because at that point, the architecture is no longer serving the business.&lt;/p&gt;

&lt;p&gt;The business is serving the architecture.&lt;/p&gt;

&lt;p&gt;And that is why so much modern software is too expensive and too brittle.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;A much better default&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A better architectural instinct is this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Do not ask what architecture you can build.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ask what architecture the domain actually justifies.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And if the answer is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;smaller,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;more cohesive,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;more local,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;less distributed,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;less framework-driven,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;and more explicit in its model&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;than current fashion prefers, that is not a sign of immaturity.&lt;/p&gt;

&lt;p&gt;It is often a sign that the problem is finally being understood.&lt;/p&gt;

&lt;p&gt;The next time a team is asked to “choose an architecture,” the first question should not be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Which framework?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Which pattern?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Which cloud primitive?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Which service template?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It should be:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;What is the business, and what is the cheapest, most coherent way to represent it truthfully?&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Because software does not become expensive and brittle by accident.&lt;/p&gt;

&lt;p&gt;It becomes expensive and brittle when teams choose machinery before they understand the work.&lt;/p&gt;

&lt;p&gt;And from that point on, they do not just have a domain to solve.&lt;/p&gt;

&lt;p&gt;They also have an architecture to survive.&lt;/p&gt;

&lt;p&gt;That is not engineering maturity.&lt;/p&gt;

&lt;p&gt;That is paying interest on a design mistake.&lt;/p&gt;

</description>
      <category>softwareengineering</category>
      <category>cqrs</category>
      <category>eventdriven</category>
      <category>ddd</category>
    </item>
    <item>
      <title>When CI/CD Becomes the Goal: The Quiet Erosion of Engineering Ownership</title>
      <dc:creator>Leon Pennings</dc:creator>
      <pubDate>Mon, 30 Mar 2026 06:04:01 +0000</pubDate>
      <link>https://forem.com/leonpennings/when-cicd-becomes-the-goal-the-quiet-erosion-of-engineering-ownership-3006</link>
      <guid>https://forem.com/leonpennings/when-cicd-becomes-the-goal-the-quiet-erosion-of-engineering-ownership-3006</guid>
      <description>&lt;p&gt;Software delivery has become one of the most ritualized practices in modern development.&lt;/p&gt;

&lt;p&gt;Pipelines are longer.&lt;br&gt;&lt;br&gt;
Checks are stricter.&lt;br&gt;&lt;br&gt;
Deployments are more automated.&lt;br&gt;&lt;br&gt;
Dashboards are greener than ever.&lt;/p&gt;

&lt;p&gt;Yet in many teams, software has not become more engineered.&lt;/p&gt;

&lt;p&gt;It has become more processed.&lt;/p&gt;

&lt;p&gt;That distinction matters.&lt;/p&gt;

&lt;p&gt;CI/CD was never intended as an excuse to pile machinery on top of weak engineering. It started as a practical response to real problems. But somewhere along the way, much of the industry stopped using it to support strong engineering and began using it to compensate for its absence.&lt;/p&gt;

&lt;p&gt;That is where things quietly went wrong.&lt;/p&gt;




&lt;h2&gt;
  
  
  What CI Originally Solved
&lt;/h2&gt;

&lt;p&gt;The original idea behind Continuous Integration was straightforward.&lt;/p&gt;

&lt;p&gt;It was never primarily about pipelines, YAML, or branch policies. It was about forcing reality into the room early.&lt;/p&gt;

&lt;p&gt;Developers were expected to integrate frequently — often daily — into a shared codebase. The goal was simple: prevent teams from drifting into parallel worlds and discovering too late that their work didn’t fit together.&lt;/p&gt;

&lt;p&gt;That solved a real problem.&lt;/p&gt;

&lt;p&gt;Frequent integration forced teams to confront overlap, collisions, ambiguity, and unintended coupling while the cost of correction was still low. But CI did something subtler and arguably more important: it reinforced the team while development was still happening.&lt;/p&gt;

&lt;p&gt;Developers didn’t merely discover each other’s work after the fact. They had to continuously adapt to one another’s choices, assumptions, and interpretations of the system in the moment. That pressure was not a flaw. It was the point.&lt;/p&gt;

&lt;p&gt;This is how engineering sharpens itself — not by letting everyone disappear into isolated implementation tunnels and comparing answers at the end, but by shaping and correcting each other during the act of construction. Real engineering teams do not just divide work. They reinforce shared understanding.&lt;/p&gt;

&lt;p&gt;Original CI made integration a living team concern rather than a delayed administrative event.&lt;/p&gt;

&lt;p&gt;That was healthy engineering.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Continuous Delivery Originally Solved
&lt;/h2&gt;

&lt;p&gt;Continuous Delivery was aimed at a different concern than CI.&lt;/p&gt;

&lt;p&gt;Not integration itself, but the path from integrated code to running software.&lt;/p&gt;

&lt;p&gt;And to be fair, that was not a fake concern.&lt;/p&gt;

&lt;p&gt;But it also was not universally the disaster modern delivery culture sometimes pretends it was.&lt;/p&gt;

&lt;p&gt;In many Java systems, deployment was already fairly boring. An application server was stopped, a WAR or EAR was replaced, the instance was restarted, and the system was verified. That was not always elegant, but neither was it some fundamental engineering crisis.&lt;/p&gt;

&lt;p&gt;So the real value of CD was not that it magically solved an impossible deployment problem.&lt;/p&gt;

&lt;p&gt;Its promise was narrower and more practical: to make the release path more repeatable, more standardized, less person-dependent, and easier to execute consistently across teams and environments.&lt;/p&gt;

&lt;p&gt;That is a reasonable goal.&lt;/p&gt;

&lt;p&gt;And in some environments, it becomes more than reasonable — it becomes necessary.&lt;/p&gt;

&lt;p&gt;Once deployments span multiple machines, rolling restarts, clustered services, or orchestrated server fleets, manual deployment stops being merely inconvenient and starts becoming operationally impractical. At that point, automation is not theater. It is simply the sane way to move software safely and consistently.&lt;/p&gt;

&lt;p&gt;That is where CD has real value.&lt;/p&gt;

&lt;p&gt;But not all release friction was technical.&lt;/p&gt;

&lt;p&gt;In many organizations, a significant part of the “deployment problem” came from the surrounding structure itself: separate infrastructure departments, ticket-driven handoffs, release scheduling rituals, and operational processes that turned even simple deployments into expensive coordination exercises.&lt;/p&gt;

&lt;p&gt;That pain was real — but it is important to name it accurately.&lt;/p&gt;

&lt;p&gt;Often, the difficulty was not in replacing the software.&lt;/p&gt;

&lt;p&gt;It was in navigating the organization around it.&lt;/p&gt;

&lt;p&gt;Modern delivery automation did remove a great deal of that friction.&lt;/p&gt;

&lt;p&gt;But in many cases, the underlying pattern did not disappear. It simply moved.&lt;/p&gt;

&lt;p&gt;Where infrastructure teams once controlled servers and release windows, platform and pipeline teams now increasingly control the mechanics of delivery itself. The form changed. The separation often did not.&lt;/p&gt;

&lt;p&gt;And that matters more than it first appears.&lt;/p&gt;

&lt;p&gt;Because once the release path is defined by people who do not carry the semantic or business consequences of the software, the pipeline can quietly become a surrogate for ownership.&lt;/p&gt;

&lt;p&gt;That is where the trade-offs began.&lt;/p&gt;




&lt;h2&gt;
  
  
  Where It Started to Go Wrong
&lt;/h2&gt;

&lt;p&gt;The issue is not that CI/CD solved fake problems. The issue is that much of the industry adopted the tooling and rituals while quietly abandoning the engineering assumptions that gave those practices their value.&lt;/p&gt;

&lt;p&gt;Once that happened, CI/CD stopped reinforcing good engineering and started compensating for weak engineering instead.&lt;/p&gt;

&lt;p&gt;A lot of what now passes for “CI” is no longer continuous integration.&lt;/p&gt;

&lt;p&gt;It is deferred reconciliation.&lt;/p&gt;

&lt;p&gt;Developers work in isolation on long-lived branches, treating the merge as the first serious moment of contact with the rest of the system. The pain that CI was designed to expose early is now allowed to accumulate until the branch is “ready.” The pipeline creates the illusion of discipline, but the underlying practice has shifted.&lt;/p&gt;

&lt;p&gt;The old model forced developers to adapt to each other continuously.&lt;/p&gt;

&lt;p&gt;The modern branch-heavy model lets them adapt only at the end.&lt;/p&gt;

&lt;p&gt;What makes this regression more serious is that it did not happen accidentally. In many teams, CI was gradually reshaped to serve a different goal: continuous deployment of independently developed changes.&lt;/p&gt;

&lt;p&gt;That sounds efficient, but it came with a structural trade-off.&lt;/p&gt;

&lt;p&gt;In order to deploy “each feature” continuously, work first had to become isolatable. That pushed development toward branch-based workflows, delayed integration, and feature-level thinking. The unit of progress stopped being the continuously evolving shared system and became the individually shippable change.&lt;/p&gt;

&lt;p&gt;And once that shift happened, CI changed with it.&lt;/p&gt;

&lt;p&gt;What used to be immediate feedback on a real check-in against the shared codebase became a staged validation process around isolated work. The branch is tested. The pull request is reviewed. The pipeline is green. But the fully integrated system — in motion, under changing conditions, with multiple real changes meeting each other — is often not meaningfully exercised until much later.&lt;/p&gt;

&lt;p&gt;That is not a small process adjustment.&lt;/p&gt;

&lt;p&gt;It is a relocation of feedback.&lt;/p&gt;

&lt;p&gt;And when feedback moves later, risk moves with it.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Social Cost Nobody Mentions
&lt;/h2&gt;

&lt;p&gt;This changes far more than code flow.&lt;/p&gt;

&lt;p&gt;It changes the social structure of development itself.&lt;/p&gt;

&lt;p&gt;Instead of reinforcing each other during construction, developers increasingly become delayed reviewers, test-runners, or approval gates. The shared act of building gives way to a serialized process of isolated work followed by late validation.&lt;/p&gt;

&lt;p&gt;That may still produce working software, but it does not produce the same quality of team thinking.&lt;/p&gt;

&lt;p&gt;The old model created friction early, while people were still shaping the solution together. The newer model often postpones that friction until after mental commitment has set in. At that point, integration becomes negotiation rather than collaboration.&lt;/p&gt;

&lt;p&gt;That is a significant regression.&lt;/p&gt;

&lt;p&gt;A team stops behaving like a team.&lt;/p&gt;

&lt;p&gt;It starts behaving like a collection of individuals working in parallel and negotiating reality afterward.&lt;/p&gt;

&lt;p&gt;And once that happens, the pipeline begins to replace the team as the thing that “validates” software.&lt;/p&gt;

&lt;p&gt;That is a dangerous substitution.&lt;/p&gt;

&lt;p&gt;Because a team can challenge assumptions, surface ambiguity, and expose misunderstandings while the system is still being shaped.&lt;/p&gt;

&lt;p&gt;A pipeline cannot.&lt;/p&gt;

&lt;p&gt;It can only tell you whether a predefined process passed.&lt;/p&gt;

&lt;p&gt;It cannot tell you whether the software still makes sense.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Illusion of Delivery Maturity
&lt;/h2&gt;

&lt;p&gt;Continuous Delivery has suffered a parallel fate.&lt;/p&gt;

&lt;p&gt;In theory, CD makes deployments safe by making them repeatable. In practice, many teams achieve “safety” by surrounding brittle systems with ever-growing layers of process, abstraction, and automation. The application becomes harder to understand. The deployment model grows more complex. And the pipeline swells to absorb complexity that should never have existed in the software itself.&lt;/p&gt;

&lt;p&gt;Eventually, the release system becomes more elaborate than the software it delivers.&lt;/p&gt;

&lt;p&gt;This raises an uncomfortable question:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Are we automating a healthy system, or are we automating around an unhealthy one?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If deployment is difficult, brittle, or mysterious, there are usually only two explanations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The system genuinely operates in a complex environment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The software was never designed with operability in mind.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The first is sometimes unavoidable.&lt;/p&gt;

&lt;p&gt;The second is too often ignored.&lt;/p&gt;




&lt;h2&gt;
  
  
  Good Deployment Begins in Design
&lt;/h2&gt;

&lt;p&gt;Much deployment pain is treated as the inevitable cost of “modern systems.” In many business applications, that pain is not inevitable — it is designed in.&lt;/p&gt;

&lt;p&gt;A well-engineered application should be deployable because it was built to be deployable: operational state kept where it belongs, environment-specific behavior minimized, startup made deterministic, migrations treated as part of the lifecycle, and only what truly needs to vary externalized.&lt;/p&gt;
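&lt;p&gt;A minimal sketch of that idea, with hypothetical class and setting names: externalize only the one value that truly varies between environments, and make startup deterministic by refusing to start at all when it is missing.&lt;/p&gt;

```java
// Illustrative only: "AppConfig" and "APP_DB_URL" are invented names.
class AppConfig {
    final String dbUrl;

    AppConfig(java.util.Properties env) {
        this.dbUrl = require(env, "APP_DB_URL");
    }

    static String require(java.util.Properties env, String key) {
        String value = env.getProperty(key);
        if (value == null || value.isEmpty()) {
            // Deterministic startup: fail here, at deploy time,
            // not mid-request hours later.
            throw new IllegalStateException("Missing required setting: " + key);
        }
        return value;
    }
}
```

&lt;p&gt;An application built this way needs no pipeline heroics to deploy safely: a bad environment is rejected at the first possible moment, and everything else is baked in.&lt;/p&gt;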

&lt;p&gt;When deployment is simple by design, the need for pipeline heroics drops dramatically.&lt;/p&gt;

&lt;p&gt;Automation then becomes what it was meant to be: a way to remove repetition and error from a sound process — not a bandage for an unsound one.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Dangerous Slide into “Production as Test Environment”
&lt;/h2&gt;

&lt;p&gt;This is where the earlier shift becomes dangerous.&lt;/p&gt;

&lt;p&gt;When integration is no longer happening continuously during development, reality does not disappear.&lt;/p&gt;

&lt;p&gt;It simply waits.&lt;/p&gt;

&lt;p&gt;And increasingly, that reality is encountered much later — often in environments close to or inside production.&lt;/p&gt;

&lt;p&gt;This is why so many modern delivery models quietly drift toward using production as their final validation environment. Not because teams explicitly decide to “test in production,” but because production is often the first place where the system as a whole meets changing real-world conditions in any meaningful way.&lt;/p&gt;

&lt;p&gt;That is a very different feedback model from original CI.&lt;/p&gt;

&lt;p&gt;Original CI gave teams rapid feedback on check-ins against a shared and continuously evolving codebase. Modern branch-heavy CI/CD often gives rapid feedback on isolated changes, then relies on deployment frequency to surface what only the integrated whole can reveal.&lt;/p&gt;

&lt;p&gt;That is not the same kind of safety.&lt;/p&gt;

&lt;p&gt;It is simply a different place to discover reality.&lt;/p&gt;

&lt;p&gt;Smaller and faster deployments are often presented as inherently safer.&lt;/p&gt;

&lt;p&gt;But that is only true if one quietly assumes that the meaning and impact of a change are already well understood.&lt;/p&gt;

&lt;p&gt;In practice, that is often exactly what is not true.&lt;/p&gt;

&lt;p&gt;A smaller deployment unit may reduce rollback scope or make blame attribution easier, but that is not the same as reducing actual engineering risk. If anything, the opposite can happen: the change is seen by fewer people, discussed less deeply, and integrated less continuously before it reaches production.&lt;/p&gt;

&lt;p&gt;That does not reduce uncertainty.&lt;/p&gt;

&lt;p&gt;It merely packages uncertainty into smaller increments.&lt;/p&gt;

&lt;p&gt;And when production changes multiple times per day, stability itself begins to shrink. The system is only as stable as the scenarios already captured in the automated tests — tests which are themselves usually adapted to the most recent expected path into production.&lt;/p&gt;

&lt;p&gt;That creates a dangerous illusion of control.&lt;/p&gt;

&lt;p&gt;The software appears validated, but only within the shrinking boundary of what was recently anticipated.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Semantic Risk Pipelines Cannot See
&lt;/h2&gt;

&lt;p&gt;More importantly, the true impact of a change is often not visible from the code itself.&lt;/p&gt;

&lt;p&gt;A seemingly trivial modification for a developer can carry major domain consequences. And a technically substantial change can sometimes be domain-trivial. That asymmetry matters.&lt;/p&gt;

&lt;p&gt;Because developers are not domain experts.&lt;/p&gt;

&lt;p&gt;They can understand the implementation, but they cannot reliably infer the full business meaning of a change from code alone — not without sustained discussion and feedback from people who actually understand the domain.&lt;/p&gt;

&lt;p&gt;And the most dangerous part is that this is not predictable.&lt;/p&gt;

&lt;p&gt;It is not true that every change requires deep domain validation.&lt;/p&gt;

&lt;p&gt;But it is also not reliably obvious which changes do.&lt;/p&gt;

&lt;p&gt;That is exactly why semantic risk cannot be reduced to diff size, deployment frequency, or pipeline confidence.&lt;/p&gt;

&lt;p&gt;Many of the hardest failures are not technical crashes or exceptions. They are semantic failures: the system behaves exactly as the code and tests dictate, yet wrongly according to the business.&lt;/p&gt;

&lt;p&gt;That is where domain experts matter.&lt;/p&gt;

&lt;p&gt;And no amount of deployment frequency changes that fact.&lt;/p&gt;




&lt;h2&gt;
  
  
  Human Validation Is Not the Enemy of Engineering
&lt;/h2&gt;

&lt;p&gt;One of the stranger modern assumptions is that removing human judgment from the release path is always progress.&lt;/p&gt;

&lt;p&gt;It is not.&lt;/p&gt;

&lt;p&gt;There is a crucial difference between automating repeatable mechanics and eliminating deliberate validation. Those should never be conflated.&lt;/p&gt;

&lt;p&gt;A strong delivery process should automate the mechanical parts — build, package, verify, deploy to controlled environments, reproduce release steps consistently.&lt;/p&gt;

&lt;p&gt;That is sensible.&lt;/p&gt;

&lt;p&gt;But whether a business-critical change should be exposed to real users is not always a purely technical question. In many systems, it is also a domain question.&lt;/p&gt;

&lt;p&gt;Human validation is not a sign of immaturity. Sometimes it is the last remaining sign that someone still understands the difference between technical correctness and business correctness.&lt;/p&gt;

&lt;p&gt;That distinction is too often lost.&lt;/p&gt;




&lt;h2&gt;
  
  
  Application Quality Is Not Generated By Tooling
&lt;/h2&gt;

&lt;p&gt;Part of the problem is that “quality” itself has increasingly been redefined through the lens of tooling.&lt;/p&gt;

&lt;p&gt;In many organizations, delivery practices are no longer primarily shaped by engineers with deep ownership of the software and its domain. They are shaped by process-specialized roles, platform teams, and tooling consultants whose authority often comes from familiarity with delivery systems rather than from responsibility for the software’s behavior, design, or business consequence.&lt;/p&gt;

&lt;p&gt;That changes what gets optimized.&lt;/p&gt;

&lt;p&gt;Quality slowly stops meaning clarity, simplicity, robustness, and domain correctness.&lt;/p&gt;

&lt;p&gt;It starts meaning compliance: green pipelines, approved stages, scan completion, branch policy adherence, and process conformance.&lt;/p&gt;

&lt;p&gt;Those may be useful signals.&lt;/p&gt;

&lt;p&gt;But useful signals can become dangerous substitutes.&lt;/p&gt;

&lt;p&gt;And that is how problem analysis gets replaced by cargo cults.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Real Regression: Loss of Ownership
&lt;/h2&gt;

&lt;p&gt;Underneath all of this lies a deeper problem than pipelines or deployment buttons.&lt;/p&gt;

&lt;p&gt;The quiet regression is the loss of engineering ownership.&lt;/p&gt;

&lt;p&gt;Modern delivery culture has made it increasingly possible for developers to produce deployable software without truly understanding:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;how the system runs&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;how it is released&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;how it evolves&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;how it fails&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;how it behaves in production&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;how it fits the business domain as a whole&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is not progress.&lt;/p&gt;

&lt;p&gt;That is separation from consequence.&lt;/p&gt;

&lt;p&gt;Once that separation occurs, the pipeline stops being a tool.&lt;/p&gt;

&lt;p&gt;It becomes a substitute for engineering responsibility.&lt;/p&gt;

&lt;p&gt;Pipelines can tell you whether something passed the process.&lt;/p&gt;

&lt;p&gt;They cannot tell you whether the software is truly understood.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Healthy CI/CD Should Actually Look Like
&lt;/h2&gt;

&lt;p&gt;Good CI/CD is not about maximum automation.&lt;/p&gt;

&lt;p&gt;It is about preserving engineering discipline while reducing mechanical waste.&lt;/p&gt;

&lt;p&gt;That usually looks far less glamorous than modern tooling culture suggests:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Developers integrate continuously into a shared mainline&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Incomplete work is handled through discipline and design, not default branch isolation&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Build and verification are automated and fast&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deployment to lower environments is repeatable and low-friction&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Acceptance happens in a controlled way&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Production deployment is simple enough to trust&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Human validation exists where domain risk justifies it&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The release path is designed to support ownership, not replace it&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is not anti-automation.&lt;/p&gt;

&lt;p&gt;It is anti-theater.&lt;/p&gt;

&lt;p&gt;And that distinction matters.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Real Question
&lt;/h2&gt;

&lt;p&gt;CI/CD is not really a tooling question.&lt;/p&gt;

&lt;p&gt;It is a quality question.&lt;/p&gt;

&lt;p&gt;The real issue is not whether a team has pipelines, feature flags, deployment jobs, or environment promotion stages.&lt;/p&gt;

&lt;p&gt;The real issue is this:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Does the delivery process reflect a well-engineered system and a team that understands it — or is it compensating for the absence of both?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That is the question most teams avoid.&lt;/p&gt;

&lt;p&gt;Because if the honest answer is the second one, then the pipeline is not a sign of maturity.&lt;/p&gt;

&lt;p&gt;It is camouflage.&lt;/p&gt;

&lt;p&gt;And that may be the most uncomfortable truth in modern software delivery:&lt;/p&gt;

&lt;p&gt;sometimes what looks like engineering progress is really just process growth around declining engineering depth.&lt;/p&gt;

&lt;p&gt;CI/CD used as a substitute for the very discipline it was supposed to support.&lt;/p&gt;

&lt;p&gt;And once that happens, delivery stops being an expression of engineering quality.&lt;br&gt;&lt;br&gt;
It becomes a process for moving misunderstood software into production more efficiently.&lt;/p&gt;

</description>
      <category>softwareengineering</category>
      <category>cicd</category>
      <category>java</category>
      <category>software</category>
    </item>
    <item>
      <title>Software Testing: You’re Probably Doing It Wrong</title>
      <dc:creator>Leon Pennings</dc:creator>
      <pubDate>Thu, 26 Mar 2026 08:32:29 +0000</pubDate>
      <link>https://forem.com/leonpennings/software-testing-youre-probably-doing-it-wrong-564h</link>
      <guid>https://forem.com/leonpennings/software-testing-youre-probably-doing-it-wrong-564h</guid>
      <description>&lt;p&gt;Software testing has become one of the most ritualized practices in modern development.&lt;/p&gt;

&lt;p&gt;That is not because testing is unimportant. Quite the opposite.&lt;/p&gt;

&lt;p&gt;Testing matters.&lt;/p&gt;

&lt;p&gt;But in many teams, testing has quietly expanded beyond its actual role. It is no longer treated as a tool for verifying software behavior. It is increasingly treated as a proxy for understanding, a proxy for design, and even a proxy for quality itself.&lt;/p&gt;

&lt;p&gt;And that is where the problem begins.&lt;/p&gt;

&lt;p&gt;Because testing can verify behavior.&lt;br&gt;&lt;br&gt;
But it cannot replace engineering.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Testing in Software Is a Verification Discipline&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;At its core, testing in software has a very specific role:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;to verify whether a system behaves acceptably under certain conditions.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That is valuable. Necessary, even.&lt;/p&gt;

&lt;p&gt;A good test can help answer questions like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Does this behavior still work?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Does this input still lead to the expected output?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Did this change introduce a regression?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is where testing is strong.&lt;/p&gt;
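&lt;p&gt;In its simplest form, that strength looks like this — a sketch with an illustrative pricing rule (the names and the 21% rate are invented for the example):&lt;/p&gt;

```java
// Behavioral verification: a fixed input should keep producing
// the same expected output across every future change.
class Pricing {
    // Prices in cents so the arithmetic stays exact.
    static long withVat(long netCents) {
        return netCents * 121 / 100; // 21% VAT
    }
}
```

&lt;p&gt;Re-running a check such as “1000 cents net must yield 1210 cents gross” on every change is exactly the regression question testing answers well — and all it answers.&lt;/p&gt;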

&lt;p&gt;But notice what testing does &lt;strong&gt;not&lt;/strong&gt; answer:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Is the design coherent?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Is the architecture proportional to the problem?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Is the model a good representation of the domain?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Is this implementation economical to evolve?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Those are engineering questions.&lt;/p&gt;

&lt;p&gt;And when teams start treating test suites as if they answer them, behavioral verification gets confused with software quality itself.&lt;/p&gt;

&lt;p&gt;That is a costly mistake.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;The Math Test Problem&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A great deal of modern team testing resembles cheating on a math exam.&lt;/p&gt;

&lt;p&gt;Imagine students defining the exam questions during class, together with the teacher, while learning the material. By the time the exam arrives, the goal is no longer to understand the mathematics. The goal is to reproduce the answers that were already agreed upon.&lt;/p&gt;

&lt;p&gt;Something very similar happens in software teams.&lt;/p&gt;

&lt;p&gt;During refinement, development, or collaborative scenario-writing sessions, expected behavior is often defined in detail in advance. Tests are written, scenarios are formalized, and the team aligns around them.&lt;/p&gt;

&lt;p&gt;In theory, this sounds excellent.&lt;/p&gt;

&lt;p&gt;In practice, it introduces a subtle distortion:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;the implementation target shifts from understanding the business domain to passing the agreed test scenarios.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That is a very different goal.&lt;/p&gt;

&lt;p&gt;The result is not necessarily a bad system. But it is often a system optimized for compliance rather than understanding.&lt;/p&gt;

&lt;p&gt;And the danger is obvious:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;how often is the first interpretation of a business need fully correct?&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If the test scenarios are based on incomplete understanding, then all the rigor in the world only helps build the wrong thing more reliably.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Verification Is Not Validation&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This is the distinction many teams lose.&lt;/p&gt;

&lt;p&gt;Testing is very good at &lt;strong&gt;verification&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Did the implementation behave as intended?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Does the system still behave as expected?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But verification is not the same as &lt;strong&gt;validation&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Was the right thing built?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Is this actually a fitting solution for the domain?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A system can satisfy every agreed scenario and still be fundamentally wrong.&lt;/p&gt;

&lt;p&gt;It can behave correctly while being poorly modeled.&lt;br&gt;&lt;br&gt;
It can produce the expected output while being overcomplicated.&lt;br&gt;&lt;br&gt;
It can pass every acceptance test while solving the wrong problem in the wrong way.&lt;/p&gt;

&lt;p&gt;In other words:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;A passing test suite proves behavioral agreement—not solution fitness.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And that distinction matters far more than many teams admit.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;The Ferrari in the Field&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A Ferrari F40 can absolutely move across a field.&lt;/p&gt;

&lt;p&gt;It can produce motion. It can get from one side to the other. It can, in the most literal sense, “do the job.”&lt;/p&gt;

&lt;p&gt;That does not make it a tractor.&lt;/p&gt;

&lt;p&gt;The same is true in software.&lt;/p&gt;

&lt;p&gt;A system can satisfy all functional expectations and still be the wrong machine for the domain. It can be too expensive to change, too fragile to extend, too over-engineered for the actual need, or too structurally rigid to survive evolving business requirements.&lt;/p&gt;

&lt;p&gt;Testing does not expose that.&lt;/p&gt;

&lt;p&gt;Because testing can tell whether the machine moves.&lt;/p&gt;

&lt;p&gt;It cannot tell whether it is the right machine.&lt;/p&gt;

&lt;p&gt;And that is not a trivial distinction.&lt;br&gt;&lt;br&gt;
That is the distinction between &lt;strong&gt;working software&lt;/strong&gt; and &lt;strong&gt;good engineering&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;When Tests Stop Following Behavior and Start Following Structure&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This is where testing often becomes actively harmful.&lt;/p&gt;

&lt;p&gt;If testing is a behavioral verification discipline, then it should limit itself to verifying behavior.&lt;/p&gt;

&lt;p&gt;But many modern testing practices go deeper than that.&lt;/p&gt;

&lt;p&gt;They start testing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;local call structures&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;internal collaborations&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;class-level decomposition&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;implementation fragments in isolation&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At that point, the tests are no longer verifying the system in any meaningful way.&lt;/p&gt;

&lt;p&gt;They are verifying the current shape of the code.&lt;/p&gt;

&lt;p&gt;That is not the same thing.&lt;/p&gt;

&lt;p&gt;And once that happens, the test suite stops protecting change and starts resisting it.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;The moment a test depends on how the behavior is achieved instead of what behavior is observed, it becomes a brake on refactoring.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That is one of the most under-discussed quality problems in software teams.&lt;/p&gt;

&lt;p&gt;Because now every structural improvement becomes expensive:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;rename a collaborator → tests break&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;merge responsibilities → tests break&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;simplify orchestration → tests break&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;move logic to a better abstraction → tests break&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Not because behavior changed.&lt;br&gt;&lt;br&gt;
But because the test suite was never really about behavior to begin with.&lt;/p&gt;
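
&lt;p&gt;The difference can be made concrete with a deliberately small sketch (all names invented for illustration): the same behavior asserted two ways, once against the observable result and once against the internal call log a mock would typically record. Only the second assertion breaks when the implementation is restructured.&lt;/p&gt;

```java
// Hypothetical example (invented names): the same behavior asserted two
// ways -- against what it returns, and against how it was computed.
public class BehaviorVsStructure {

    // Observable behavior: total price in cents, with a 10% bulk
    // discount from ten items. The log stands in for the interactions
    // a mocking framework would record.
    static int totalCents(int quantity, int unitCents, StringBuilder log) {
        log.append("lookupPrice;");
        int gross = quantity * unitCents;
        log.append("applyDiscount;");
        return quantity > 9 ? gross * 9 / 10 : gross;
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        int price = totalCents(10, 500, log);

        // Behavioral assertion: survives any internal refactoring.
        if (price != 4500) throw new AssertionError("wrong total");

        // Structural assertion: breaks the moment the call order, the
        // method names, or the decomposition change -- even though the
        // observable behavior is identical.
        if (!log.toString().equals("lookupPrice;applyDiscount;"))
            throw new AssertionError("call structure changed");
    }
}
```

&lt;p&gt;Both assertions are green today. Only the first one still deserves to be.&lt;/p&gt;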




&lt;h2&gt;
  
  
  &lt;strong&gt;Why Isolated Class Testing Often Misses the Point&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;One of the clearest examples of this problem is isolated class testing.&lt;/p&gt;

&lt;p&gt;A class exists in code. Therefore, many teams assume it should be testable independently.&lt;/p&gt;

&lt;p&gt;But a technical unit is not automatically a meaningful behavioral unit.&lt;/p&gt;

&lt;p&gt;That assumption is rarely challenged.&lt;/p&gt;

&lt;p&gt;Take something like a PDF information extractor.&lt;/p&gt;

&lt;p&gt;That behavior does not meaningfully exist in a vacuum. It depends on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;parsing logic&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;normalization logic&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;extraction rules&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;object interpretation&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;domain-level decisions&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Yet what often happens?&lt;/p&gt;

&lt;p&gt;A single class gets tested in isolation.&lt;br&gt;&lt;br&gt;
Its collaborators are mocked.&lt;br&gt;&lt;br&gt;
Its environment is simulated.&lt;br&gt;&lt;br&gt;
Its context is stripped away.&lt;/p&gt;

&lt;p&gt;Now the test no longer asks:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Can the system reliably extract useful information from PDFs?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Instead, it asks something far weaker:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Does this one implementation fragment behave under synthetic scaffolding?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That is not meaningful verification.&lt;/p&gt;

&lt;p&gt;That is structural rehearsal.&lt;/p&gt;

&lt;p&gt;And the cost is not just conceptual—it is practical.&lt;/p&gt;

&lt;p&gt;Because now the test suite is coupled to a local decomposition that may not even survive the next decent refactor.&lt;/p&gt;

&lt;p&gt;We end up with a test suite that passes perfectly even if the integration between those fragments is fundamentally broken—because we’ve tested the components, but ignored the composition.&lt;/p&gt;
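
&lt;p&gt;A minimal sketch of that failure mode (names and behavior invented): each fragment passes its own isolated test, yet the composed pipeline is broken, because the isolated tests fed each fragment an input shape the real upstream never produces.&lt;/p&gt;

```java
// Hedged sketch (invented names): two green fragment tests, one broken
// composition.
public class FragmentVsComposition {

    // Fragment 1: stands in for a PDF parser; its real output carries
    // a trailing newline.
    static String parse(String pdf) {
        return pdf.toUpperCase() + "\n";
    }

    // Fragment 2: extracts the invoice reference, silently assuming
    // its input has no trailing newline.
    static String extractInvoice(String text) {
        return text.substring(text.indexOf("INV-"));
    }

    public static void main(String[] args) {
        // "Isolated tests": both pass under synthetic scaffolding.
        if (!parse("inv-42").equals("INV-42\n")) throw new AssertionError();
        if (!extractInvoice("INV-42").equals("INV-42")) throw new AssertionError();

        // The composition: the real parse output carries the newline
        // the isolated extractor test never exercised.
        String composed = extractInvoice(parse("inv-42"));
        if (composed.equals("INV-42")) throw new AssertionError();
        // composed is "INV-42\n" -- the suite was green, the system broken.
    }
}
```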




&lt;h2&gt;
  
  
  &lt;strong&gt;Coverage Is Not Confidence&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Test coverage is another example of verification ritual turning into proxy engineering.&lt;/p&gt;

&lt;p&gt;Coverage has become a metric in its own right.&lt;/p&gt;

&lt;p&gt;Teams report it. Managers ask for it. Pipelines display it as if it were a signal of quality.&lt;/p&gt;

&lt;p&gt;But coverage says only one thing:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;this code was executed while a test ran.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That’s it.&lt;/p&gt;

&lt;p&gt;It does &lt;strong&gt;not&lt;/strong&gt; tell you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;whether the test is meaningful&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;whether important behavior is protected&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;whether the assertions matter&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;whether the design is safe to evolve&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And yet teams optimize for it anyway.&lt;/p&gt;

&lt;p&gt;That leads to the usual absurdities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;getter/setter tests&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;trivial constructor tests&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;one-line branch inflation&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;synthetic assertions written only to satisfy the metric&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is not quality.&lt;br&gt;&lt;br&gt;
It is administrative theater.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Coverage is a measure of execution, not a measure of insight.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And once a team starts chasing the number instead of the confidence, the metric has already failed.&lt;/p&gt;
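
&lt;p&gt;The gap between execution and insight is easy to demonstrate with a hypothetical example: a test can execute every line of a buggy method, yielding full coverage, while asserting nothing that would catch the bug.&lt;/p&gt;

```java
// Hypothetical example: full line coverage, zero protection.
public class CoverageTheater {

    // Deliberately buggy: intended to return the absolute value, but
    // does nothing for negative input.
    static int absolute(int x) {
        return x;
    }

    // A "coverage test": executes every line of absolute() and asserts
    // nothing. The coverage metric reports 100%.
    static void coverageOnlyTest() {
        absolute(5);
        absolute(-5);
    }

    public static void main(String[] args) {
        coverageOnlyTest();   // the pipeline is green
        // One meaningful assertion exposes what the metric hid:
        if (absolute(-5) == 5) throw new AssertionError();
        // absolute(-5) is -5 -- covered, executed, and wrong.
    }
}
```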




&lt;h2&gt;
  
  
  &lt;strong&gt;Testing Is Not a Design Discipline&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This may be the most important point of all.&lt;/p&gt;

&lt;p&gt;Testing can verify whether software behaves as expected.&lt;/p&gt;

&lt;p&gt;It cannot tell whether the software is well-designed.&lt;/p&gt;

&lt;p&gt;It cannot tell whether:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;the abstraction boundaries are good&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the model is coherent&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the architecture is sustainable&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the implementation cost is proportional to the value&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;future stories will remain easy to add&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Those are not test outcomes.&lt;/p&gt;

&lt;p&gt;Those are design and engineering concerns.&lt;/p&gt;

&lt;p&gt;And if a team replaces those concerns with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;framework templates&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;scenario scripts&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;coverage thresholds&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;pipeline greenness&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;…then better engineering is not happening.&lt;/p&gt;

&lt;p&gt;Judgment is simply being outsourced to artifacts.&lt;/p&gt;

&lt;p&gt;That may feel safer.&lt;br&gt;&lt;br&gt;
It may even look more rigorous.&lt;/p&gt;

&lt;p&gt;But it is still a substitute for actual thought.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;What Testing Is Actually For&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Testing does have a real and valuable place.&lt;/p&gt;

&lt;p&gt;Used well, testing is for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;verifying externally observable behavior&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;protecting against meaningful regressions&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;increasing confidence during change&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;supporting safe evolution of a system&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is already enough.&lt;/p&gt;

&lt;p&gt;Testing does &lt;strong&gt;not&lt;/strong&gt; need to become:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;a replacement for design&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;a replacement for domain understanding&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;a replacement for architecture&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;a replacement for engineering judgment&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The moment testing is asked to do those things, it becomes overloaded.&lt;/p&gt;

&lt;p&gt;And overloaded tools do not become more powerful.&lt;/p&gt;

&lt;p&gt;They become more misleading.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;The Cost of a Misaligned System&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A system does not need to be broken to be expensive.&lt;/p&gt;

&lt;p&gt;It only needs to be misaligned.&lt;/p&gt;

&lt;p&gt;That is one of the most dangerous illusions in software development: if the system behaves correctly, it is easy to assume the engineering must also be sound.&lt;/p&gt;

&lt;p&gt;But a system can pass tests, satisfy stories, and still be fundamentally costly in all the places that matter over time.&lt;/p&gt;

&lt;p&gt;It can be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;too expensive to extend&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;too brittle to refactor&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;too complex to reason about&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;too rigid to absorb new requirements cleanly&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is the software equivalent of using a Ferrari F40 to plow a field.&lt;/p&gt;

&lt;p&gt;The machine moves.&lt;br&gt;&lt;br&gt;
The task gets completed.&lt;br&gt;&lt;br&gt;
But every future change becomes more expensive than it should be.&lt;/p&gt;

&lt;p&gt;That cost rarely appears in the first implementation. It appears later:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;in slower feature development&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;in rising maintenance effort&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;in increasingly fragile changes&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;in the growing difficulty of correcting earlier assumptions&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And this is precisely where testing, on its own, offers very little protection.&lt;/p&gt;

&lt;p&gt;Because testing can confirm that a system still behaves the same.&lt;/p&gt;

&lt;p&gt;It cannot tell whether that behavior is now trapped inside the wrong machine.&lt;/p&gt;

&lt;p&gt;That is an engineering problem.&lt;/p&gt;

&lt;p&gt;And when that distinction is missed, software quality gets reduced to present-day correctness while long-term adaptability quietly deteriorates.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Software engineering has become increasingly comfortable with proxies.&lt;/p&gt;

&lt;p&gt;Metrics are used as substitutes for judgment.&lt;br&gt;&lt;br&gt;
Artifacts are used as substitutes for understanding.&lt;br&gt;&lt;br&gt;
Test suites are used as substitutes for design confidence.&lt;/p&gt;

&lt;p&gt;And in doing so, many teams create the appearance of rigor while quietly undermining the adaptability of the system itself.&lt;/p&gt;

&lt;p&gt;Testing is valuable.&lt;br&gt;&lt;br&gt;
But only when it stays in its lane.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Testing should verify software behavior. It should not define the software, freeze its structure, or pretend to certify its design.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Because the moment verification starts replacing engineering, pipelines may still signal green — but better systems do not follow.&lt;/p&gt;

&lt;p&gt;Ferraris get built where tractors would have been enough.&lt;/p&gt;

</description>
      <category>softwareengineering</category>
      <category>testing</category>
      <category>java</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>The Mirror and the Machine: Reclaiming Scrum Refinement in the Age of AI</title>
      <dc:creator>Leon Pennings</dc:creator>
      <pubDate>Tue, 24 Mar 2026 07:51:20 +0000</pubDate>
      <link>https://forem.com/leonpennings/the-mirror-and-the-machine-reclaiming-scrum-refinement-in-the-age-of-ai-1mhl</link>
      <guid>https://forem.com/leonpennings/the-mirror-and-the-machine-reclaiming-scrum-refinement-in-the-age-of-ai-1mhl</guid>
      <description>&lt;p&gt;Agile was never meant to be a delivery machine. It was meant to be a learning system.&lt;/p&gt;

&lt;p&gt;At its core, Agile shortens the feedback loop between business intent and working software—to expose ideas early, validate them quickly, and adapt continuously. The goal was never just to build software, but to &lt;em&gt;discover what the business actually needs&lt;/em&gt; by building it.&lt;/p&gt;

&lt;p&gt;Somewhere along the way, many teams drifted. User stories became work orders instead of expressions of intent. Refinement became premature implementation design instead of shared understanding. And the feedback loop quietly stretched back to the end of the sprint.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Problem with User Stories as Work Orders
&lt;/h2&gt;

&lt;p&gt;A good user story expresses intent: What is the user trying to achieve, and why does it matter?&lt;/p&gt;

&lt;p&gt;In practice, stories too often look like predefined solutions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;“I want a button in the top right corner to search.”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;“Add a ‘costs’ field to each order.”&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These constrain the solution space from the start. Better alternatives go unexplored. The system quietly accumulates unnecessary complexity.&lt;/p&gt;

&lt;p&gt;What’s missing is the actual problem: Is this about searching, or about finding something quickly? Is this about storing costs, or about understanding profitability?&lt;/p&gt;

&lt;p&gt;Without that clarity, we aren’t building solutions—we’re implementing assumptions.&lt;/p&gt;




&lt;h2&gt;
  
  
  Refinement as Understanding, Not Design
&lt;/h2&gt;

&lt;p&gt;Refinement is where the misunderstanding should be corrected. Yet too many sessions devolve into:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;“Where should the button go?”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;“What fields do we need?”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;“How do we implement this?”&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is early design on incomplete information.&lt;/p&gt;

&lt;p&gt;Real refinement focuses first on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Intent&lt;/strong&gt;: What is the user truly trying to achieve?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Context&lt;/strong&gt;: When and why does this happen?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Available information&lt;/strong&gt;: What does the user already know?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Problem type&lt;/strong&gt;: Is this a lookup, exploration, or navigation task?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Only after the problem is clearly understood should solution ideas emerge.&lt;/p&gt;




&lt;h2&gt;
  
  
  A Practical Example: When “Search” Isn’t Search
&lt;/h2&gt;

&lt;p&gt;In an ETL context, a functional manager requests a search feature. On the surface it sounds reasonable. Dig deeper, though, and the real need surfaces:&lt;/p&gt;

&lt;p&gt;The manager is often asked by a colleague (or for their own reference) to pull up a &lt;em&gt;specific case&lt;/em&gt;. They have only a vague description of the object type and when it occurred. The goal isn’t broad exploration—it’s &lt;strong&gt;identification and direct navigation&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Instead of a generic search system, a far simpler solution appears:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Use meaningful, relatable identifiers (a combination of object type and unique ID).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enable direct navigation via those identifiers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add contextual links to related cases.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The result is simpler, faster, and far better aligned with actual usage.&lt;/p&gt;

&lt;p&gt;This is refinement at its best: turning vague requests into precise problem definitions.&lt;/p&gt;
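
&lt;p&gt;As a rough sketch of what such an identifier scheme could look like (the format and names here are invented, not taken from the actual system): a relatable identifier combines the object type with a unique ID, and direct navigation simply resolves it back.&lt;/p&gt;

```java
// Invented scheme for illustration: relatable identifiers that replace
// a generic search with identification and direct navigation.
public class CaseId {

    static String format(String type, int id) {
        return type + "-" + id;            // e.g. "DeliveryUnit-42"
    }

    // Direct navigation: resolve an identifier back into its parts.
    static String typeOf(String caseId) {
        return caseId.substring(0, caseId.lastIndexOf('-'));
    }

    static int idOf(String caseId) {
        return Integer.parseInt(caseId.substring(caseId.lastIndexOf('-') + 1));
    }

    public static void main(String[] args) {
        String ref = format("DeliveryUnit", 42);
        if (!ref.equals("DeliveryUnit-42")) throw new AssertionError();
        if (!typeOf(ref).equals("DeliveryUnit")) throw new AssertionError();
        if (idOf(ref) != 42) throw new AssertionError();
    }
}
```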




&lt;h2&gt;
  
  
  Mirror Pieces: The First Implementation Is Not the Product
&lt;/h2&gt;

&lt;p&gt;Even strong refinement leaves understanding theoretical until it meets reality. That’s where the first implementation enters—not as a finished deliverable, but as a &lt;strong&gt;mirror piece&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;A mirror piece is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A minimal, functional slice of the system.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Built specifically to reflect business intent back to stakeholders.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deliberately incomplete and open to rapid change.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Its real purpose isn’t immediate value. It answers a more important question: “Is this what you meant?”&lt;/p&gt;

&lt;p&gt;By creating mirror pieces early, teams shift from end-of-sprint validation to &lt;strong&gt;continuous feedback during development&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why a UI Is Crucial—Even for Technical Systems
&lt;/h2&gt;

&lt;p&gt;A mirror piece without a UI is often invisible to the business. Raw data, logs, or backend flows require interpretation, reopening the very gap we’re trying to close.&lt;/p&gt;

&lt;p&gt;A simple, even rough UI changes everything. It provides:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Observability&lt;/strong&gt;: What is actually happening in the system?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Navigability&lt;/strong&gt;: How do entities relate (e.g., DeliveryUnit → PreservationUnit)?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Clarity&lt;/strong&gt;: Does this match how the business understands the domain?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The UI becomes the &lt;strong&gt;event horizon&lt;/strong&gt;—the meeting point of business intent and technical execution. Without it, the system stays abstract. With it, the system becomes a shared language.&lt;/p&gt;




&lt;h2&gt;
  
  
  Refinement in a Living System
&lt;/h2&gt;

&lt;p&gt;No story arrives in a vacuum. Every new request lands inside an existing system—complete with implemented logic, established flows, and embedded assumptions about how the business works.&lt;/p&gt;

&lt;p&gt;Refinement must therefore do double duty: deeply understand the new intent &lt;em&gt;and&lt;/em&gt; re-evaluate what already exists.&lt;/p&gt;

&lt;p&gt;The first question shifts from “How do we build this?” to “How does this relate to what we already have?”&lt;/p&gt;

&lt;p&gt;New stories often reveal deeper truths:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;An earlier assumption was incomplete.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A rule was too simplistic.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A flow was designed for a narrower case than reality demands.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is not failure—it is the system doing its job: &lt;strong&gt;exposing gaps in understanding&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;When a story interacts with existing behavior, there are typically three paths:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;It fits the current model&lt;/strong&gt; → Simply extend what is already there.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;It introduces a variation within the same flow&lt;/strong&gt; → Isolate the difference cleanly (e.g., using strategy-like patterns) without fracturing the stable core.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;It challenges earlier assumptions&lt;/strong&gt; → Revisit and evolve the underlying model itself.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Treating all three the same—by just adding patches or conditionals—breeds accumulating complexity, duplicated logic, and a system that grows harder to reason about.&lt;/p&gt;
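
&lt;p&gt;The second path can be sketched with a minimal strategy-style example (all names invented): the point of variation is isolated behind one interface, so the new story's difference lives in one place and the stable core flow never changes.&lt;/p&gt;

```java
// Minimal strategy-style sketch (invented names): the variation is
// isolated; the stable core flow stays untouched.
public class ShippingFlow {

    // The point of variation, behind one interface.
    interface FeePolicy {
        int feeCents(int weightGrams);
    }

    // The original behavior.
    static class FlatFee implements FeePolicy {
        public int feeCents(int weightGrams) { return 500; }
    }

    // The new story's variation lives here; nothing else is modified.
    static class WeightBasedFee implements FeePolicy {
        public int feeCents(int weightGrams) { return weightGrams * 2; }
    }

    // The stable core flow, identical for every policy.
    static int totalCents(int itemCents, int weightGrams, FeePolicy policy) {
        return itemCents + policy.feeCents(weightGrams);
    }

    public static void main(String[] args) {
        if (totalCents(1000, 300, new FlatFee()) != 1500) throw new AssertionError();
        if (totalCents(1000, 300, new WeightBasedFee()) != 1600) throw new AssertionError();
    }
}
```

&lt;p&gt;The alternative — a conditional inside the core flow for every new case — is exactly the patch-accumulation the three-path distinction is meant to prevent.&lt;/p&gt;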

&lt;p&gt;In this light, refinement becomes far more than story clarification. It is a &lt;strong&gt;checkpoint for system integrity&lt;/strong&gt;: a deliberate moment to ask, “Does our current system still reflect how the business actually operates?”&lt;/p&gt;

&lt;p&gt;Software is not a static machine. It is an evolving mirror of the domain. Every new story offers a chance to confirm, refine, or correct what we thought we knew. Refinement is where that evolution should happen consciously—not accidentally through technical debt.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Real Cost of Skipping Intent-Focused Refinement
&lt;/h2&gt;

&lt;p&gt;When refinement stays shallow and we build on assumptions instead of understanding, the consequences are predictable:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Misaligned solutions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Duplicated or conflicting functionality.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Growing technical debt.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Late and expensive rework.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Systems that pass internal checks but fail in real use.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Most importantly, the system stops being a tool for learning and becomes a machine for executing yesterday’s assumptions.&lt;/p&gt;




&lt;h2&gt;
  
  
  Reclaiming the Feedback Loop
&lt;/h2&gt;

&lt;p&gt;The original promise of Agile was fast, continuous feedback. To reclaim it, we need a mindset shift:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;From &lt;strong&gt;stories as work orders&lt;/strong&gt; → to &lt;strong&gt;stories as intent&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;From &lt;strong&gt;refinement as design&lt;/strong&gt; → to &lt;strong&gt;refinement as understanding&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;From &lt;strong&gt;implementation as delivery&lt;/strong&gt; → to &lt;strong&gt;implementation as a mirror&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;From &lt;strong&gt;end-of-sprint feedback&lt;/strong&gt; → to &lt;strong&gt;continuous feedback through mirror pieces and early UI&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Software development is often treated as a delivery process. In reality, it is a &lt;strong&gt;learning process&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The goal is not merely to build what was asked. The goal is to discover what is actually needed.&lt;/p&gt;

&lt;p&gt;Refinement, mirror pieces, early UI, and deliberate validation are not overhead. They are the mechanisms that make genuine learning possible.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Software is not just a tool to serve the business.&lt;br&gt;&lt;br&gt;
It is a mirror that helps the business—and the team—understand itself.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The sooner we look into that mirror, the better what we build will become.&lt;/p&gt;




&lt;h2&gt;
  
  
  In the Age of AI: Where Does AI Fit?
&lt;/h2&gt;

&lt;p&gt;Looking at refinement as understanding, mirror pieces as feedback, and software as a learning tool, the natural question arises: Where does AI fit?&lt;/p&gt;

&lt;p&gt;AI excels at implementation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Translating well-understood requirements into code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Generating boilerplate and structure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Accelerating familiar patterns.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In short: &lt;strong&gt;AI operates most effectively in the solution space.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The central challenge in this article lies elsewhere—in understanding intent, interpreting context, challenging assumptions, and discovering what the problem actually is. That remains fundamentally human work.&lt;/p&gt;

&lt;p&gt;AI introduces a subtle risk. Because it can generate working code so quickly, it creates an illusion of progress even when understanding is incomplete. If refinement is weak:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;AI will still produce code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The system will still behave as specified.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Tests will still pass.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But the result is simply a faster realization of the same flawed assumptions.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;AI doesn’t correct misunderstanding—it accelerates it.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;What AI makes unmistakably clear is something that was always true: writing code is often the easiest part of building software. The real difficulty lies in knowing &lt;em&gt;what&lt;/em&gt; to build, understanding why it matters, and recognizing when our assumptions are wrong.&lt;/p&gt;

&lt;p&gt;That is exactly where strong refinement, mirror pieces, and early feedback matter most.&lt;/p&gt;

&lt;p&gt;Within the model described here, AI fits naturally:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Refinement&lt;/strong&gt; → human-driven discovery.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Mirror pieces + UI&lt;/strong&gt; → shared validation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AI&lt;/strong&gt; → accelerated implementation of what has been learned.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;AI lets teams build mirror pieces faster, iterate more quickly, and validate ideas sooner. But it does not replace the need for discovery—it makes that discovery loop &lt;em&gt;more&lt;/em&gt; critical, not less.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Final Note&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;If software development becomes a process of executing predefined solutions, AI will do that exceptionally well.&lt;/p&gt;

&lt;p&gt;But if we treat it as a process of learning and deeply understanding a domain, then AI becomes a powerful tool—without ever being the one that asks the important questions.&lt;/p&gt;

&lt;p&gt;And those questions are still where the real work begins.&lt;/p&gt;

</description>
      <category>softwareengineering</category>
      <category>ai</category>
      <category>scrum</category>
      <category>java</category>
    </item>
    <item>
      <title>Less Code, Lost Meaning: Why Boilerplate Reduction Misses the Point</title>
      <dc:creator>Leon Pennings</dc:creator>
      <pubDate>Fri, 20 Mar 2026 07:46:08 +0000</pubDate>
      <link>https://forem.com/leonpennings/less-code-lost-meaning-why-boilerplate-reduction-misses-the-point-2fho</link>
      <guid>https://forem.com/leonpennings/less-code-lost-meaning-why-boilerplate-reduction-misses-the-point-2fho</guid>
      <description>&lt;p&gt;In modern software development, one theme keeps returning:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;reduce boilerplate&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;write less code&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;increase conciseness&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Frameworks, annotations, and code generators promise cleaner classes and faster development. Tools in ecosystems like Spring Boot emphasize exactly that: less code, less friction, more output.&lt;/p&gt;

&lt;p&gt;At first glance, this seems like obvious progress.&lt;/p&gt;

&lt;p&gt;But it raises a fundamental question:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Does writing less code actually lead to better software?&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  The Appeal of Code Reduction
&lt;/h2&gt;

&lt;p&gt;Code reduction is attractive because it delivers immediate, visible results:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;fewer lines of code&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;less repetition&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;faster initial development&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A class with 200 lines becomes 50. Configuration disappears behind annotations. Common patterns are abstracted away.&lt;/p&gt;

&lt;p&gt;From a distance, this looks like improvement.&lt;/p&gt;

&lt;p&gt;And at the level of syntax, it is.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Problem: Optimizing the Wrong Layer
&lt;/h2&gt;

&lt;p&gt;Reducing boilerplate optimizes &lt;em&gt;how&lt;/em&gt; we write code.&lt;/p&gt;

&lt;p&gt;It does not address:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;what the code represents&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;how responsibilities are defined&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;whether the model is correct&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In other words:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It improves expression without improving meaning.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You can have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;perfectly concise code&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;minimal syntax&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;elegant constructs&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;…that still represent a poor understanding of the domain.&lt;/p&gt;

&lt;p&gt;And when that happens, the system remains:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;hard to understand&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;fragile under change&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;difficult to extend&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No amount of syntactic improvement fixes that.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Missing Dimension: The Story
&lt;/h2&gt;

&lt;p&gt;A well-designed system tells a story.&lt;/p&gt;

&lt;p&gt;Not in comments or documentation, but in its structure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;objects represent real concepts&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;behavior lives where it belongs&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;interactions reflect actual processes&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can read the code and understand:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;what the system does and why&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is the “story” of the system.&lt;/p&gt;

&lt;p&gt;And it is where most of the value lies.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Less Code Doesn’t Mean a Better Story
&lt;/h2&gt;

&lt;p&gt;Reducing code does not automatically improve that story.&lt;/p&gt;

&lt;p&gt;In many cases, it does the opposite.&lt;/p&gt;

&lt;p&gt;Consider what often happens:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;explicit logic is replaced with annotations&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;behavior is hidden behind framework conventions&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;configuration replaces clear structure&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The result:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;less visible code&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;but more implicit behavior&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And implicit behavior is harder to reason about.&lt;/p&gt;

&lt;p&gt;You didn’t remove complexity.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;You made it harder to see.&lt;/p&gt;
&lt;/blockquote&gt;
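&lt;p&gt;To make "implicit behavior" concrete, here is a deliberately tiny convention-over-configuration sketch (the names are hypothetical, and real frameworks are far more elaborate): a dispatcher discovers and runs any method named &lt;code&gt;validate&lt;/code&gt; by reflection. The explicit, searchable call disappears from the code; the behavior still happens.&lt;/p&gt;

```java
import java.lang.reflect.Method;

public class HiddenBehavior {

    static class Order {
        // Never called explicitly anywhere in this file...
        public void validate() {
            System.out.println("order validated");
        }
    }

    // A miniature "framework convention": any method named validate is
    // discovered and invoked by reflection. Searching the codebase for a
    // call to validate() finds nothing, yet it executes at runtime.
    static void runConventions(Object bean) throws Exception {
        for (Method m : bean.getClass().getMethods()) {
            if (m.getName().equals("validate")) {
                m.invoke(bean);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        runConventions(new Order());   // the implicit path
        new Order().validate();        // the explicit, visible alternative
    }
}
```

&lt;p&gt;Fewer visible call sites, same amount of behavior: the complexity moved from the code into the convention.&lt;/p&gt;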




&lt;h2&gt;
  
  
  The Illusion of Simplicity
&lt;/h2&gt;

&lt;p&gt;Code reduction creates a powerful illusion:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If there is less code, the system must be simpler.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;But simplicity in software is not about size.&lt;/p&gt;

&lt;p&gt;It is about:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;clarity of responsibilities&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;correctness of the model&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;predictability of behavior&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A small, unclear system is more complex than a larger, well-structured one.&lt;/p&gt;

&lt;p&gt;And a concise system with hidden behavior is more dangerous than an explicit one.&lt;/p&gt;




&lt;h2&gt;
  
  
  When Code Reduction Helps
&lt;/h2&gt;

&lt;p&gt;This is not an argument against reducing boilerplate.&lt;/p&gt;

&lt;p&gt;There are clear benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;eliminating repetition&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;removing mechanical code&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;standardizing common patterns&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When applied carefully, code reduction can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;improve readability&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;reduce noise&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;allow focus on important parts&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But only under one condition:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The underlying model must already be sound.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  When It Becomes Harmful
&lt;/h2&gt;

&lt;p&gt;Code reduction becomes problematic when it is used as a substitute for thinking.&lt;/p&gt;

&lt;p&gt;When teams focus on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;making code shorter&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;following framework conventions&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;reducing visible complexity&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Instead of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;modeling the domain&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;defining responsibilities&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;understanding behavior&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At that point, development becomes:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;an exercise in fitting problems into existing constructs&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Rather than solving them.&lt;/p&gt;




&lt;h2&gt;
  
  
  When the Story Disappears
&lt;/h2&gt;

&lt;p&gt;If software engineering increasingly focuses on syntax optimization—on writing less code, faster—then an important question emerges:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Who is responsible for the quality of the story?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Because if we optimize for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;fewer lines of code&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;more generation&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;less manual effort&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We also reduce something else:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;the amount of direct engagement with the model itself&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Traditionally, writing code served a dual purpose:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;implementing behavior&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;validating understanding&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The act of writing forced decisions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;where does this responsibility belong?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;does this concept make sense?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;do these rules contradict each other?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Code was not just output.&lt;/p&gt;

&lt;p&gt;It was a &lt;strong&gt;mirror&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Role of Friction
&lt;/h2&gt;

&lt;p&gt;Some level of friction in development is valuable.&lt;/p&gt;

&lt;p&gt;Not accidental friction—like fighting a framework—but &lt;strong&gt;conceptual friction&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;needing to define boundaries&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;needing to resolve ambiguity&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;needing to make trade-offs explicit&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This friction forces clarity.&lt;/p&gt;

&lt;p&gt;It exposes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;inconsistencies in requirements&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;gaps in understanding&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;misplaced responsibilities&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When you remove too much of that friction, you don’t just gain speed.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;You lose feedback.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Code Generation as the Endgame
&lt;/h2&gt;

&lt;p&gt;Tools like Claude and similar code generation systems represent the logical extreme of this trend.&lt;/p&gt;

&lt;p&gt;They can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;generate large amounts of code instantly&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;remove almost all boilerplate&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;translate intent into implementation&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;From a productivity standpoint, this is remarkable.&lt;/p&gt;

&lt;p&gt;But it introduces a new risk:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If code is no longer written, it is no longer &lt;em&gt;used to think&lt;/em&gt;.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  When “Working” Is No Longer Proof
&lt;/h2&gt;

&lt;p&gt;Traditionally, writing code forced validation.&lt;/p&gt;

&lt;p&gt;Each decision had to be made explicitly:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;where does this behavior belong?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;do these concepts align?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;are these rules consistent?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In that process, contradictions surface.&lt;/p&gt;

&lt;p&gt;With code generation, that feedback loop weakens.&lt;/p&gt;

&lt;p&gt;You describe intent.&lt;br&gt;&lt;br&gt;
The system produces implementation.&lt;/p&gt;

&lt;p&gt;And because the result runs, it creates a powerful signal:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It works.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;But that signal is misleading.&lt;/p&gt;

&lt;p&gt;What you get is not necessarily a system that is &lt;em&gt;correct&lt;/em&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It is a system that &lt;strong&gt;appears to work under current conditions&lt;/strong&gt;.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  The Silent Failure Mode
&lt;/h2&gt;

&lt;p&gt;Without active engagement in shaping the model:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;contradictions in the domain are not surfaced&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;responsibilities are not fully resolved&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;assumptions are not challenged&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;They don’t disappear.&lt;/p&gt;

&lt;p&gt;They remain latent.&lt;/p&gt;

&lt;p&gt;And instead of being caught during construction, they emerge later as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;inconsistent behavior&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;edge-case failures&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;unpredictable interactions&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At that point, the problem is no longer local.&lt;/p&gt;

&lt;p&gt;It is systemic.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Loss of Pressure on the Model
&lt;/h2&gt;

&lt;p&gt;A well-designed system is not just built—it is &lt;strong&gt;continuously refined&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Each line of code adds pressure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;on the model&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;on the boundaries&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;on the assumptions&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Code generation removes much of that pressure.&lt;/p&gt;

&lt;p&gt;It allows systems to grow without forcing the same level of scrutiny.&lt;/p&gt;

&lt;p&gt;So the model is no longer:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;shaped&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;challenged&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;corrected&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It is merely &lt;em&gt;extended&lt;/em&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  From Engineering to Assembly
&lt;/h2&gt;

&lt;p&gt;The risk is not that code generation produces bad code.&lt;/p&gt;

&lt;p&gt;The risk is that it enables a different mode of development:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;assembling systems without fully understanding them&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;At small scale, this works.&lt;/p&gt;

&lt;p&gt;At larger scale, it leads to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;hidden inconsistencies&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;fragile structures&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;systems that behave correctly—until they don’t&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And when they fail, they fail in ways that are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;hard to trace&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;hard to reason about&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;hard to fix&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The Real Risk
&lt;/h2&gt;

&lt;p&gt;The danger is subtle.&lt;/p&gt;

&lt;p&gt;The system does not immediately break.&lt;/p&gt;

&lt;p&gt;It delivers output.&lt;br&gt;&lt;br&gt;
It passes tests.&lt;br&gt;&lt;br&gt;
It supports current use cases.&lt;/p&gt;

&lt;p&gt;But underneath:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;the model has never been fully validated.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And over time, that leads to a system that is not truly stable, but:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;conditionally correct and fundamentally unpredictable&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Closing Thought
&lt;/h2&gt;

&lt;p&gt;Code generation removes effort.&lt;/p&gt;

&lt;p&gt;But it also removes something essential:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;the act of forcing clarity through construction&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And without that:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;we risk building systems that don’t fail fast—&lt;br&gt;&lt;br&gt;
but fail late, and fail hard.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>ai</category>
      <category>java</category>
      <category>softwareengineering</category>
      <category>architecture</category>
    </item>
    <item>
      <title>The Illusion of Progress: Why Tooling Can’t Replace Engineering</title>
      <dc:creator>Leon Pennings</dc:creator>
      <pubDate>Wed, 18 Mar 2026 14:10:52 +0000</pubDate>
      <link>https://forem.com/leonpennings/the-illusion-of-progress-why-tooling-cant-replace-engineering-5977</link>
      <guid>https://forem.com/leonpennings/the-illusion-of-progress-why-tooling-cant-replace-engineering-5977</guid>
      <description>&lt;p&gt;Walk into almost any modern enterprise Java codebase and you’ll see the same pattern: controllers, services, repositories, configuration, and a dense web of injected dependencies—often built on frameworks like Spring Boot.&lt;/p&gt;

&lt;p&gt;It works. Requests flow through the system. Data is persisted. Features get delivered.&lt;/p&gt;

&lt;p&gt;By most organizational standards, this is considered a success.&lt;/p&gt;

&lt;p&gt;But there’s a fundamental question almost never asked:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Is this system well engineered—or does it merely appear to work?&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  The Industry’s Blind Spot
&lt;/h2&gt;

&lt;p&gt;Software development suffers from a unique problem: we almost never get to compare two fundamentally different approaches to the &lt;em&gt;same&lt;/em&gt; system.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;One system is built by Team A, using framework templates&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Another is built by Team B, using a strong conceptual model&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Different teams, different timelines, different constraints.&lt;/p&gt;

&lt;p&gt;So when a system “works,” we assume:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;the approach must be valid&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;But we never see:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;what that same system could have looked like with a better model&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That absence of comparison creates a blind spot—one where &lt;strong&gt;“working software” is mistaken for “well-designed software.”&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The Rise of Template-Driven Development
&lt;/h2&gt;

&lt;p&gt;Frameworks like Spring Boot didn’t become dominant by accident.&lt;/p&gt;

&lt;p&gt;They offer:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;immediate productivity&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;standardized structure&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;fast onboarding&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;They allow teams to produce output quickly—often without deeply understanding the domain.&lt;/p&gt;

&lt;p&gt;And that’s where the shift happens.&lt;/p&gt;

&lt;p&gt;Instead of asking:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;What is the correct model of this domain?&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Teams start asking:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Where does this go in the template?&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;At that point, development turns into something else entirely:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Stenography.&lt;/strong&gt; Translating user stories into predefined technical slots.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The result:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;large amounts of integration code&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;thin or absent domain logic&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;systems that function—but are difficult to evolve&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The Cost You Don’t See
&lt;/h2&gt;

&lt;p&gt;Systems built on heavy frameworks don’t usually fail.&lt;/p&gt;

&lt;p&gt;They degrade.&lt;/p&gt;

&lt;p&gt;Not in obvious ways, but in how time and effort are spent.&lt;/p&gt;

&lt;p&gt;At first, development feels fast:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;scaffolding is generated&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;endpoints are wired&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;persistence is handled&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Features appear quickly.&lt;/p&gt;

&lt;p&gt;But over time, a shift happens.&lt;/p&gt;

&lt;p&gt;The system is no longer primarily about automating the business domain.&lt;/p&gt;

&lt;p&gt;It becomes increasingly about maintaining the technical environment around it.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Hidden Shift in Effort
&lt;/h3&gt;

&lt;p&gt;In many enterprise systems, a large portion of engineering effort is spent on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;upgrading frameworks (e.g. annual cycles in Spring Boot)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;adapting to breaking changes&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;resolving dependency conflicts&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;aligning with new conventions and best practices&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;re-testing behavior that should not have changed&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is not business value.&lt;/p&gt;

&lt;p&gt;It is &lt;strong&gt;tooling maintenance&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Over time, the ratio shifts:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Less effort goes into improving the domain.&lt;br&gt;&lt;br&gt;
More effort goes into keeping the system compatible with its own foundation.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In extreme cases, the majority of work is no longer about &lt;em&gt;what the system does&lt;/em&gt;, but about &lt;em&gt;what it runs on&lt;/em&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Integration Becomes the System
&lt;/h3&gt;

&lt;p&gt;As frameworks evolve, systems accumulate:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;layers of adapters&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;configuration overrides&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;compatibility fixes&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The result is a codebase where:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;most code connects things&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;very little code expresses the domain&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At that point:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The system is no longer a model of the business.&lt;br&gt;&lt;br&gt;
It is a network of integrations.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  The Upgrade Trap
&lt;/h3&gt;

&lt;p&gt;Modern frameworks evolve continuously.&lt;/p&gt;

&lt;p&gt;Each upgrade promises:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;improvements&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;performance gains&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;new capabilities&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But each upgrade also introduces:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;migration effort&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;subtle behavioral changes&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;renewed testing cycles&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Individually, these seem manageable.&lt;/p&gt;

&lt;p&gt;Collectively, they create a constant background load.&lt;/p&gt;

&lt;p&gt;A system that was supposed to simplify development now requires &lt;strong&gt;continuous adaptation just to remain operational&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Loss of Focus
&lt;/h3&gt;

&lt;p&gt;The most damaging effect is not technical—it’s directional.&lt;/p&gt;

&lt;p&gt;When most effort is spent on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;frameworks&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;infrastructure&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;compatibility&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then the business domain becomes secondary.&lt;/p&gt;

&lt;p&gt;Teams stop asking:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;How do we model this problem better?&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And start asking:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;How do we make this work within the framework?&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;At that point, the system is no longer driven by the domain.&lt;/p&gt;

&lt;p&gt;It is driven by the tooling.&lt;/p&gt;




&lt;h3&gt;
  
  
  The Real Cost
&lt;/h3&gt;

&lt;p&gt;This cost rarely appears in metrics.&lt;/p&gt;

&lt;p&gt;It shows up as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;slower feature delivery over time&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;increasing effort for simple changes&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;growing system fragility&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;loss of clarity about what the system actually does&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And most critically:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A large portion of engineering capacity is spent on work that does not move the business forward.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  The Alternative: Start With the Model
&lt;/h2&gt;

&lt;p&gt;There is another way to build systems—one that doesn’t start with frameworks or templates.&lt;/p&gt;

&lt;p&gt;It starts with a different premise:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Software development is primarily a modeling activity.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Before writing code, you ask:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;What are the core responsibilities?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Where does behavior belong?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What are the invariants of the system?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;From there, you build:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;rich domain objects&lt;/strong&gt; that own behavior&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;clear boundaries&lt;/strong&gt; that prevent concern leakage&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;explicit lifecycles&lt;/strong&gt; that reflect real interactions&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In such a system:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;objects are &lt;em&gt;used&lt;/em&gt;, not orchestrated&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;behavior is &lt;em&gt;invoked&lt;/em&gt;, not assembled&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;structure reflects &lt;em&gt;meaning&lt;/em&gt;, not framework conventions&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
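&lt;p&gt;As a sketch of what a rich domain object can look like (the names &lt;code&gt;Order&lt;/code&gt; and &lt;code&gt;OrderLine&lt;/code&gt; are illustrative): the invariants and the behavior live on the objects themselves, with no framework and no external orchestration.&lt;/p&gt;

```java
public class RichDomain {

    // A line item knows its own subtotal.
    static class OrderLine {
        final String product;
        final int quantity;
        final double unitPrice;

        OrderLine(String product, int quantity, double unitPrice) {
            this.product = product;
            this.quantity = quantity;
            this.unitPrice = unitPrice;
        }

        double subtotal() { return quantity * unitPrice; }
    }

    // The Order owns its invariant and its behavior: validation and totaling
    // live here, not in a service that orchestrates a bare data bag.
    static class Order {
        final OrderLine[] lines;

        Order(OrderLine[] lines) {
            if (lines == null || lines.length == 0) {
                throw new IllegalArgumentException("an order needs at least one line");
            }
            this.lines = lines;
        }

        double total() {
            double sum = 0;
            for (OrderLine line : lines) {
                sum = sum + line.subtotal();
            }
            return sum;
        }
    }

    public static void main(String[] args) {
        Order order = new Order(new OrderLine[] {
            new OrderLine("book", 2, 15.0),
            new OrderLine("pen", 3, 2.0)
        });
        System.out.println(order.total()); // 36.0
    }
}
```

&lt;p&gt;An invalid order cannot even be constructed, so the "validate before submission" rule has exactly one home.&lt;/p&gt;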




&lt;h2&gt;
  
  
  “But What About Wiring?”
&lt;/h2&gt;

&lt;p&gt;A common assumption in enterprise development is that systems require extensive wiring:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;dependency injection&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;service composition&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;configuration graphs&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But this is often a symptom, not a necessity.&lt;/p&gt;

&lt;p&gt;When responsibilities are well-defined and localized:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;objects don’t need to be assembled dynamically&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;behavior doesn’t need external orchestration&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;lifecycle can be handled at clear boundaries&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Instead of wiring a system together, you:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;define objects that already make sense together&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Infrastructure concerns—like persistence or messaging—can be handled through:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;decorators&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;well-defined interaction boundaries&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Not scattered across the system.&lt;/p&gt;

&lt;p&gt;The result is not “no composition,” but:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;composition that is internal, stable, and invisible&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
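&lt;p&gt;A minimal sketch of that kind of boundary (the names &lt;code&gt;Orders&lt;/code&gt;, &lt;code&gt;InMemoryOrders&lt;/code&gt;, and &lt;code&gt;AuditedOrders&lt;/code&gt; are hypothetical; auditing stands in here for any infrastructure concern such as persistence or messaging): the concern wraps the domain object at one point instead of leaking into it.&lt;/p&gt;

```java
import java.util.ArrayList;

public class DecoratedBoundary {

    interface Orders {
        void place(String orderId);
        int count();
    }

    // Plain domain-facing implementation; knows nothing about auditing.
    static class InMemoryOrders implements Orders {
        private final ArrayList ids = new ArrayList();
        public void place(String orderId) { ids.add(orderId); }
        public int count() { return ids.size(); }
    }

    // Infrastructure concern added at the boundary via a decorator,
    // not scattered through the domain code.
    static class AuditedOrders implements Orders {
        private final Orders inner;
        AuditedOrders(Orders inner) { this.inner = inner; }
        public void place(String orderId) {
            System.out.println("audit: placing " + orderId);
            inner.place(orderId);
        }
        public int count() { return inner.count(); }
    }

    public static void main(String[] args) {
        Orders orders = new AuditedOrders(new InMemoryOrders());
        orders.place("A-1");
        System.out.println(orders.count()); // 1
    }
}
```

&lt;p&gt;The composition happens once, at construction, and stays invisible to every caller of &lt;code&gt;Orders&lt;/code&gt;.&lt;/p&gt;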




&lt;h2&gt;
  
  
  Why This Feels Slower (But Isn’t)
&lt;/h2&gt;

&lt;p&gt;Taking time to understand the domain can feel like a delay.&lt;/p&gt;

&lt;p&gt;But consider the alternative:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;building quickly in the wrong direction&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;discovering mismatches later&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;restructuring under pressure&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It’s the difference between:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;planning a route before driving&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;or heading “roughly east” and hoping to arrive&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The first appears slower. The second &lt;em&gt;is&lt;/em&gt; slower—just not immediately.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Real Constraint
&lt;/h2&gt;

&lt;p&gt;If this approach is so effective, why isn’t it the norm?&lt;/p&gt;

&lt;p&gt;Because it depends on something rare:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Strong conceptual thinking&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Framework-driven development scales because it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;reduces decision-making&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;standardizes structure&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;works with uneven skill levels&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Conceptual modeling does not scale as easily:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;it requires alignment&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;it requires discipline&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;it requires engineers who can think in systems&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So organizations optimize for:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;predictable output&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Instead of:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;optimal design&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  The Resulting Illusion
&lt;/h2&gt;

&lt;p&gt;This leads to a persistent illusion in the industry:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Systems built with heavy tooling are seen as “modern”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Systems built with strong models are seen as “overthinking”&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Because one produces:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;immediate, visible progress&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And the other produces:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;long-term structural integrity&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But without a direct comparison, the difference remains invisible.&lt;/p&gt;




&lt;h2&gt;
  
  
  A Different Standard of Success
&lt;/h2&gt;

&lt;p&gt;If we want to build sustainable systems, we need to change the definition of success.&lt;/p&gt;

&lt;p&gt;Not:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Does it work?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;But:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;How easy is it to understand?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How localized is change?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How much of the code expresses the domain, rather than integration?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Because ultimately:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A system that merely works today can become a liability tomorrow. A system that is well modeled continues to work—even as it evolves.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Closing Thought
&lt;/h2&gt;

&lt;p&gt;Tooling is not the enemy.&lt;/p&gt;

&lt;p&gt;But it becomes a problem when it replaces the very thing it was meant to support:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Engineering.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Frameworks can accelerate implementation. They cannot replace understanding.&lt;/p&gt;

&lt;p&gt;And without understanding, we’re not engineering systems.&lt;/p&gt;

&lt;p&gt;We’re assembling them—and hoping they hold.&lt;/p&gt;

</description>
      <category>softwareengineering</category>
      <category>architecture</category>
      <category>java</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>The Ghostwriter, the House Builder, and the Missing Domain Model Walk Into a Bar</title>
      <dc:creator>Leon Pennings</dc:creator>
      <pubDate>Mon, 16 Mar 2026 06:57:08 +0000</pubDate>
      <link>https://forem.com/leonpennings/the-ghostwriter-the-house-builder-and-the-missing-domain-model-walk-into-a-bar-1m9f</link>
      <guid>https://forem.com/leonpennings/the-ghostwriter-the-house-builder-and-the-missing-domain-model-walk-into-a-bar-1m9f</guid>
      <description>&lt;p&gt;Software development is often described as “building systems”.&lt;br&gt;&lt;br&gt;
But there are two professions that might describe the job much better: &lt;strong&gt;writing a book&lt;/strong&gt; and &lt;strong&gt;designing a house&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Both involve creating something coherent out of many parts.&lt;br&gt;&lt;br&gt;
Both require understanding purpose before execution.&lt;br&gt;&lt;br&gt;
And both reveal a common mistake that appears surprisingly often in modern software development.&lt;/p&gt;

&lt;p&gt;To see why, imagine two professionals: a ghostwriter and a house builder.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Ghostwriter Version of Software Development
&lt;/h2&gt;

&lt;p&gt;Imagine you hire a ghostwriter to write a book based on your ideas.&lt;/p&gt;

&lt;p&gt;You send them fragments:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;a story about a childhood memory&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;a chapter about leadership&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;a few anecdotes about business&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;a paragraph about innovation&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;a repeated explanation of a concept you already mentioned earlier&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A bad ghostwriter simply writes everything down exactly as provided.&lt;/p&gt;

&lt;p&gt;The result is technically correct. The grammar is fine. The sentences are clear.&lt;/p&gt;

&lt;p&gt;But the book becomes a mess:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;topics overlap&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;concepts repeat&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;arguments contradict each other&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the narrative jumps randomly between ideas&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Nothing ties the material together into a coherent story.&lt;/p&gt;

&lt;p&gt;The ghostwriter has focused on &lt;strong&gt;transcription&lt;/strong&gt;, not &lt;strong&gt;authorship&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Unfortunately, software development is sometimes practiced in a very similar way.&lt;/p&gt;

&lt;p&gt;A team receives a series of user stories:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;“Customers should be able to create orders.”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;“Orders can have discounts.”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;“Admins can modify customer data.”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;“Orders must be validated before submission.”&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each story is implemented somewhere in the codebase. A controller here, a service there, a validation rule somewhere else.&lt;/p&gt;

&lt;p&gt;Every story is technically implemented.&lt;/p&gt;

&lt;p&gt;But over time the system starts to show symptoms:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;business rules appear in multiple places&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;behavior becomes inconsistent&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;changes require touching many unrelated components&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;nobody is entirely sure where certain logic belongs anymore&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
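&lt;p&gt;A tiny illustration of those symptoms (the service names are invented): the story "orders must be validated before submission" was implemented twice, in two slots, and the two fragments have quietly drifted apart.&lt;/p&gt;

```java
public class DriftingRule {

    // Fragment one, in the checkout path: an order needs lines
    // and a nonzero total.
    static boolean checkoutServiceAccepts(int lineCount, double total) {
        if (lineCount == 0) return false;
        return total != 0.0;
    }

    // Fragment two, in the admin path: only the line count is
    // checked. On paper, it is the same rule.
    static boolean adminServiceAccepts(int lineCount, double total) {
        return lineCount != 0;
    }

    public static void main(String[] args) {
        // A zero-total order: rejected by one path, accepted by the other.
        System.out.println(checkoutServiceAccepts(2, 0.0)); // false
        System.out.println(adminServiceAccepts(2, 0.0));    // true
    }
}
```

&lt;p&gt;Each fragment passed its own story's acceptance criteria; the inconsistency only shows up between them.&lt;/p&gt;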

&lt;p&gt;The system becomes the equivalent of the badly written book: &lt;strong&gt;a collection of fragments without a coherent narrative&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The problem is not coding skill. The problem is the absence of &lt;strong&gt;structure&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  The House Builder Version of Software Development
&lt;/h2&gt;

&lt;p&gt;Now imagine designing a wooden house.&lt;/p&gt;

&lt;p&gt;But instead of starting with how people will live in it, the builders start with their tools.&lt;/p&gt;

&lt;p&gt;The plumber places the bathroom where the pipes are easiest to install.&lt;/p&gt;

&lt;p&gt;The electrician places the kitchen where wiring is convenient.&lt;/p&gt;

&lt;p&gt;The carpenter builds bedrooms wherever the structure is simplest.&lt;/p&gt;

&lt;p&gt;Each professional does excellent work.&lt;/p&gt;

&lt;p&gt;The plumbing is perfect.&lt;br&gt;&lt;br&gt;
The wiring is flawless.&lt;br&gt;&lt;br&gt;
The construction is solid.&lt;/p&gt;

&lt;p&gt;But when the house is finished, something feels very wrong.&lt;/p&gt;

&lt;p&gt;The dining room ends up at the opposite end of the house from the kitchen.&lt;br&gt;&lt;br&gt;
The shower is installed in the kitchen because the pipes were already there.&lt;br&gt;&lt;br&gt;
The bedrooms are nowhere near the bathroom.&lt;/p&gt;

&lt;p&gt;Every part of the house is technically well built.&lt;/p&gt;

&lt;p&gt;But the house &lt;strong&gt;does not work as a house&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;No one began by asking the most important question:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How will people live here?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The same thing happens in software development when architecture is driven primarily by tools and technologies.&lt;/p&gt;

&lt;p&gt;Discussions revolve around:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;frameworks&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;infrastructure&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;microservices&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;deployment pipelines&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;cloud platforms&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All important tools.&lt;/p&gt;

&lt;p&gt;But they are &lt;strong&gt;construction techniques&lt;/strong&gt;, not &lt;strong&gt;design principles&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Without understanding how the system is supposed to behave as a whole, even the best tools can produce a system that is technically impressive but conceptually broken.&lt;/p&gt;




&lt;h2&gt;
  
  
  What’s Missing: Coherent Design
&lt;/h2&gt;

&lt;p&gt;Both examples expose the same underlying problem.&lt;/p&gt;

&lt;p&gt;The ghostwriter fails because they never discovered the &lt;strong&gt;story&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The house builders fail because they never understood the &lt;strong&gt;purpose of the house&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In software engineering, the equivalent is failing to understand the &lt;strong&gt;domain&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;User stories describe fragments of behavior.&lt;/p&gt;

&lt;p&gt;But if every story is simply implemented as-is, the system slowly loses coherence.&lt;/p&gt;

&lt;p&gt;Engineering cannot start with implementation. It must start with &lt;strong&gt;understanding&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;What activities exist in the domain?&lt;br&gt;&lt;br&gt;
What concepts matter?&lt;br&gt;&lt;br&gt;
What rules must always hold?&lt;br&gt;&lt;br&gt;
Where should responsibilities live?&lt;/p&gt;

&lt;p&gt;Only when those questions are answered does the structure of the system begin to emerge.&lt;/p&gt;




&lt;h2&gt;
  
  
  Enter the Domain Model
&lt;/h2&gt;

&lt;p&gt;This is where the &lt;strong&gt;domain model&lt;/strong&gt; becomes essential.&lt;/p&gt;

&lt;p&gt;A domain model acts as a &lt;strong&gt;responsibility localizer&lt;/strong&gt; and &lt;strong&gt;logic contextualizer&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Instead of scattering behavior across the codebase, the model provides structure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;concepts are represented explicitly&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;rules live with the concepts they govern&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;responsibilities have clear homes&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
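
&lt;p&gt;As a minimal Java sketch (the &lt;code&gt;Order&lt;/code&gt; class and its rule are hypothetical, invented purely for illustration), the earlier story “Orders must be validated before submission” could live directly on the concept it governs:&lt;/p&gt;

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: the submission rule lives on Order itself,
// so exactly one place in the system decides when an order may be submitted.
public class Order {
    private final List<String> lines = new ArrayList<>();
    private boolean submitted = false;

    public void addLine(String description) {
        lines.add(description);
    }

    // The rule "orders must be validated before submission" has one home:
    // the concept that it governs.
    public boolean isValid() {
        return !lines.isEmpty();
    }

    public void submit() {
        if (!isValid()) {
            throw new IllegalStateException("Order must be validated before submission");
        }
        submitted = true;
    }

    public boolean isSubmitted() {
        return submitted;
    }

    public static void main(String[] args) {
        Order order = new Order();
        order.addLine("2x widget");
        order.submit(); // the model enforces the rule; callers cannot skip it
        System.out.println(order.isSubmitted());
    }
}
```

&lt;p&gt;Every caller that wants to submit an order goes through the same method, so there is no second copy of the rule to drift out of sync.&lt;/p&gt;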

&lt;p&gt;When a new user story arrives, the question is no longer:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Where should we add this piece of code?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Instead the question becomes:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“What does this story mean for our understanding of the domain?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Sometimes the answer is simple.&lt;/p&gt;

&lt;p&gt;Sometimes it requires adjusting the model itself.&lt;/p&gt;

&lt;p&gt;But the goal remains the same: &lt;strong&gt;preserve conceptual integrity&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Without that integrity, software inevitably turns into the badly written book or the badly designed house.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Risk of AI
&lt;/h2&gt;

&lt;p&gt;AI-assisted coding is incredibly powerful.&lt;/p&gt;

&lt;p&gt;It can generate code, implement features, suggest refactorings, and remove enormous amounts of repetitive work. Used well, it is a major productivity accelerator.&lt;/p&gt;

&lt;p&gt;But AI is strongest at &lt;strong&gt;local implementation&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It excels at doing exactly what it is asked: implementing a function, adding a feature, modifying an existing piece of code. In that sense it behaves very much like the literal ghostwriter who writes the paragraph that was requested, or the contractor who builds a perfectly constructed room.&lt;/p&gt;

&lt;p&gt;What AI does not replace is &lt;strong&gt;modeling the domain&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It does not determine:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;what the core concepts of the system should be&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;where responsibilities belong&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;how rules should be structured&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;how the system should reflect the purpose of the business&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Those decisions require understanding intent and discovering structure. They are design activities.&lt;/p&gt;

&lt;p&gt;AI can dramatically accelerate the &lt;strong&gt;technical execution&lt;/strong&gt; of a system. But it cannot replace the need for &lt;strong&gt;coherent design&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Otherwise we risk the same outcomes as before: a technically correct book without a story, or a well-built house that no one can live in.&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In an interesting way, the rise of AI coding tools highlights something that has always been true in software development.&lt;/p&gt;

&lt;p&gt;Many teams have already been operating primarily at the level of &lt;strong&gt;feature implementation&lt;/strong&gt;, while the deeper design work was often implicit, inconsistent, or missing entirely.&lt;/p&gt;

&lt;p&gt;Custom software is essentially a &lt;strong&gt;one-off prototype&lt;/strong&gt;. There is no reference design to compare it to. There is no second version of the same system built by another team. There is only one implementation: the one that ends up running in production.&lt;/p&gt;

&lt;p&gt;That makes design mistakes difficult to spot early.&lt;/p&gt;

&lt;p&gt;A book with a broken narrative may only reveal its problems once the entire manuscript is finished.&lt;br&gt;&lt;br&gt;
A badly designed house may only reveal its flaws once people try to live in it.&lt;/p&gt;

&lt;p&gt;Software is no different.&lt;/p&gt;

&lt;p&gt;Which is why the design phase — understanding the purpose of the system and shaping a coherent domain model — cannot be skipped.&lt;/p&gt;

&lt;p&gt;The ghostwriter must understand the story.&lt;br&gt;&lt;br&gt;
The architect must understand how the house will be lived in.&lt;/p&gt;

&lt;p&gt;And the software engineer must understand the &lt;strong&gt;domain&lt;/strong&gt; before writing the code that brings it to life.&lt;/p&gt;

</description>
      <category>softwareengineering</category>
      <category>java</category>
      <category>ai</category>
      <category>richdomainmodel</category>
    </item>
    <item>
      <title>The Two Levels of Software Development — And Why Most Enterprise Applications Fail Over Time</title>
      <dc:creator>Leon Pennings</dc:creator>
      <pubDate>Tue, 10 Mar 2026 07:39:44 +0000</pubDate>
      <link>https://forem.com/leonpennings/the-two-levels-of-software-development-and-why-most-enterprise-applications-fail-over-time-3lk7</link>
      <guid>https://forem.com/leonpennings/the-two-levels-of-software-development-and-why-most-enterprise-applications-fail-over-time-3lk7</guid>
      <description>&lt;p&gt;There are two fundamentally different levels in software development.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Level 1 — Getting the system to run&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;At this level the goal is straightforward:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;the application compiles&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the system deploys&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;features behave as expected&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;users can operate the software&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If those conditions are met, the project is considered successful.&lt;/p&gt;

&lt;p&gt;Most modern frameworks are extremely good at helping teams achieve this level. A stack such as the Spring Framework provides a ready-made structure for building applications quickly: web infrastructure, dependency injection, persistence integration, configuration management, and more. With the right templates and tooling, teams can produce a working system in relatively little time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Level 2 — Keeping the system economically evolvable&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The second level is far harder.&lt;/p&gt;

&lt;p&gt;Once the system has been running for several years, the real question becomes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Can developers still reason about the business rules?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Can features be added without breaking existing behavior?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Does the cost of change remain predictable?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is the level where software must remain &lt;strong&gt;economically viable&lt;/strong&gt;. The system must evolve along with the business without collapsing under its own complexity.&lt;/p&gt;

&lt;p&gt;Most of the industry focuses almost entirely on Level 1, because Level 2 is much harder to see.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Observability Problem
&lt;/h2&gt;

&lt;p&gt;In many engineering disciplines, different designs can be compared directly.&lt;/p&gt;

&lt;p&gt;Two airplane designs can be tested against each other. Two bridge designs can be analyzed under the same load conditions. Engineers can evaluate alternatives objectively.&lt;/p&gt;

&lt;p&gt;Enterprise software is different.&lt;/p&gt;

&lt;p&gt;Most systems are &lt;strong&gt;unique implementations of a specific business domain&lt;/strong&gt;. A logistics system for one company will not be rebuilt several times with different architectures just to compare which approach works best.&lt;/p&gt;

&lt;p&gt;Because of that, organizations rarely observe multiple competing implementations of the same domain.&lt;/p&gt;

&lt;p&gt;There is no side-by-side comparison like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;System A — rich domain model
System B — procedural service architecture
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;running the same business processes for several years.&lt;/p&gt;

&lt;p&gt;Instead there is only one system: the one that was built.&lt;/p&gt;

&lt;p&gt;As a result, success is evaluated using the only clearly visible metric:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Does the application run?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the answer is yes, the architecture is usually considered successful.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Rise of the Software Factory
&lt;/h2&gt;

&lt;p&gt;Over time, the industry optimized for what was easiest to measure: producing working applications quickly.&lt;/p&gt;

&lt;p&gt;This led to the emergence of &lt;strong&gt;standardized software production lines&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Typical enterprise stacks often follow a very familiar structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;controller
service
repository
database
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add REST APIs, containerization, messaging infrastructure, and often microservices, and a predictable pattern emerges.&lt;/p&gt;

&lt;p&gt;Framework ecosystems reinforce this pattern. They provide conventions, templates, and project generators that make it easy to spin up new services quickly.&lt;/p&gt;

&lt;p&gt;From an organizational perspective, this approach &lt;strong&gt;promises several advantages&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;developers can move between projects easily&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;teams can scale rapidly&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;systems follow familiar patterns&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;onboarding new engineers becomes simpler&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These promises are appealing, especially to organizations managing large engineering teams.&lt;/p&gt;

&lt;p&gt;However, these advantages are rarely tested against alternative architectural approaches. Because most enterprise systems are built only once, there is no direct comparison showing whether a different design would actually have been more efficient or easier to evolve.&lt;/p&gt;

&lt;p&gt;As a result, the perceived success of factory-style development often rests on a simple observation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The application runs&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Features are delivered&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without a comparable implementation built around a different architectural philosophy, it is difficult to see whether the chosen approach truly delivered its promised benefits.&lt;/p&gt;

&lt;p&gt;The result is a development process that resembles a &lt;strong&gt;software factory&lt;/strong&gt;: a standardized production line designed to produce working applications quickly.&lt;/p&gt;

&lt;p&gt;Whether it produces the &lt;strong&gt;right kind of system for the domain&lt;/strong&gt; is a different question entirely.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Model T Problem
&lt;/h2&gt;

&lt;p&gt;The logic behind this standardization is similar to the philosophy associated with Henry Ford and the Ford Model T.&lt;/p&gt;

&lt;p&gt;The Model T revolutionized manufacturing through standardization. One of the famous ideas attributed to Ford was that customers could choose any color, &lt;strong&gt;as long as it was black&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This approach worked because the product itself was standardized.&lt;/p&gt;

&lt;p&gt;Cars were produced for a broad market with relatively similar requirements.&lt;/p&gt;

&lt;p&gt;Enterprise software is fundamentally different.&lt;/p&gt;

&lt;p&gt;Each system represents a &lt;strong&gt;specific business domain&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;logistics operations&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;insurance policies&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;trading platforms&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;healthcare workflows&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These domains have very different requirements and behaviors.&lt;/p&gt;

&lt;p&gt;In effect, each enterprise application needs a different type of vehicle.&lt;/p&gt;

&lt;p&gt;Some domains resemble &lt;strong&gt;heavy trucks&lt;/strong&gt; carrying complex transactional logic. Others behave more like &lt;strong&gt;high-performance machines&lt;/strong&gt;, where performance and precision matter enormously. Some systems are small and lightweight.&lt;/p&gt;

&lt;p&gt;Yet many development ecosystems attempt to solve all of them with the same architectural pattern — the equivalent of producing a &lt;strong&gt;Model T for every possible use case&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why This Appears to Work
&lt;/h2&gt;

&lt;p&gt;Despite the mismatch, the Model T architecture still appears successful.&lt;/p&gt;

&lt;p&gt;After all, a Model T can still move forward. It can transport people and even carry small loads.&lt;/p&gt;

&lt;p&gt;Similarly, standardized enterprise architectures can deliver features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;endpoints respond to requests&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;data is stored and retrieved&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;workflows execute&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;From the outside, the application works.&lt;/p&gt;

&lt;p&gt;Because organizations rarely build the same system twice with different architectures, they never see a direct comparison. There is no competing design demonstrating that the system could have been far simpler or easier to evolve.&lt;/p&gt;

&lt;p&gt;As long as the application runs and delivers features, the architecture appears to perform as expected.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Hidden Cost of Factory-Style Development
&lt;/h2&gt;

&lt;p&gt;The real cost of factory-style architectures emerges gradually and usually in two dimensions: &lt;strong&gt;effort&lt;/strong&gt; and &lt;strong&gt;functional quality&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Effort: Why development gets slower over time
&lt;/h3&gt;

&lt;p&gt;In factory-style systems, implementing new functionality tends to require roughly the same effort every time.&lt;/p&gt;

&lt;p&gt;Every feature follows the same pattern:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;controller
service
repository
integration logic
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Because the architecture is primarily procedural, the system rarely accumulates reusable domain behavior. Each feature often introduces new service logic rather than building upon existing concepts.&lt;/p&gt;
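
&lt;p&gt;A hypothetical sketch of that drift (both services and the rounding rule are invented for illustration): two services each carry their own copy of the same “order total” calculation, and the copies quietly disagree.&lt;/p&gt;

```java
import java.util.List;

// Hypothetical sketch of factory-style drift: two services each
// reimplement the "order total" rule, and the implementations diverge.
public class ScatteredLogicExample {

    // The checkout flow rounds each line to cents individually...
    static long checkoutTotalCents(List<Double> linePrices) {
        long total = 0;
        for (double price : linePrices) {
            total += Math.round(price * 100);
        }
        return total;
    }

    // ...while the reporting flow sums first and rounds once at the end.
    static long reportingTotalCents(List<Double> linePrices) {
        double total = 0;
        for (double price : linePrices) {
            total += price;
        }
        return Math.round(total * 100);
    }

    public static void main(String[] args) {
        List<Double> prices = List.of(0.125, 0.125);
        // Same business question, two answers: 26 vs 25.
        System.out.println(checkoutTotalCents(prices));
        System.out.println(reportingTotalCents(prices));
    }
}
```

&lt;p&gt;Neither copy is wrong in isolation; the inconsistency only exists because the rule has no single home.&lt;/p&gt;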

&lt;p&gt;As the system grows, the situation frequently worsens:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;similar logic appears in multiple services&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;developers must read many parts of the system to understand behavior&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;debugging requires tracing through multiple layers and integrations&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;knowledge transfer becomes difficult&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The effort required to implement new functionality often &lt;strong&gt;remains constant or even increases over time&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Function-driven systems behave very differently.&lt;/p&gt;

&lt;p&gt;When a system evolves around a coherent domain model, the model itself becomes a &lt;strong&gt;growing knowledge base of the business&lt;/strong&gt;. Domain objects accumulate responsibilities and reusable behavior.&lt;/p&gt;

&lt;p&gt;As the model matures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;new features often extend existing objects&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;behavior is reused rather than reimplemented&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;developers can understand the system by understanding the model&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Knowledge transfer becomes easier because the model tells the story of the application.&lt;/p&gt;

&lt;p&gt;Debugging is also simpler. When rules live in their responsible objects, it becomes immediately clear where behavior originates. Developers do not need to search across multiple services implementing slightly different versions of the same logic.&lt;/p&gt;

&lt;p&gt;Over time, the effort required to add functionality &lt;strong&gt;tends to decrease&lt;/strong&gt;, because the model provides increasing leverage.&lt;/p&gt;

&lt;h3&gt;
  
  
  Quality: One version of the truth
&lt;/h3&gt;

&lt;p&gt;Factory-style architectures often distribute business rules across multiple services.&lt;/p&gt;

&lt;p&gt;It is common to find logic that is similar but not identical in different places:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;slightly different validation rules&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;small variations in calculations&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;edge cases handled in one service but not another&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These inconsistencies are rarely intentional. They appear gradually as new features are implemented independently.&lt;/p&gt;

&lt;p&gt;The result is a system with &lt;strong&gt;multiple interpretations of the same business rule&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Function-driven systems address this differently.&lt;/p&gt;

&lt;p&gt;Each business rule belongs to the object responsible for that concept. The rule has &lt;strong&gt;one canonical implementation&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;If a system contains an &lt;code&gt;Order&lt;/code&gt; concept, the logic related to orders lives with the &lt;code&gt;Order&lt;/code&gt; object. If there is pricing logic, it belongs to the pricing model.&lt;/p&gt;
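
&lt;p&gt;Sketched in Java (the discount rule and its threshold are hypothetical, chosen only to make the point concrete), the pricing rule gets exactly one canonical implementation inside the model:&lt;/p&gt;

```java
// Hypothetical sketch: the pricing rule has one canonical home.
// Anything that needs a price asks the model; nothing reimplements it.
public class PricingExample {

    // Invented rule for illustration: 10% discount at 100 units or more.
    static final int BULK_THRESHOLD = 100;

    record Order(int quantity, long unitPriceCents) {
        // The one and only implementation of the pricing rule.
        long totalCents() {
            long gross = (long) quantity * unitPriceCents;
            return quantity >= BULK_THRESHOLD ? gross * 90 / 100 : gross;
        }
    }

    public static void main(String[] args) {
        Order small = new Order(10, 250);
        Order bulk = new Order(120, 250);
        System.out.println(small.totalCents()); // 2500
        System.out.println(bulk.totalCents());  // 27000
    }
}
```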

&lt;p&gt;This creates a &lt;strong&gt;single version of the truth&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Rules are not scattered across services or hidden inside orchestration layers. They are located where the business concept itself lives.&lt;/p&gt;

&lt;p&gt;This greatly reduces contradictions and makes the system far easier to reason about.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Engineering Principle
&lt;/h2&gt;

&lt;p&gt;Enterprise systems should be &lt;strong&gt;function-driven rather than tool-driven&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Architecture should begin with understanding the domain:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;the concepts involved&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the relationships between them&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the rules that must remain consistent&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Only after that understanding emerges should tools and frameworks be introduced to support the system.&lt;/p&gt;

&lt;p&gt;Tools are valuable when they solve real problems in the running application:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;persistence&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;messaging&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;scaling&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;reliability&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But they should not dictate the structure of the domain model.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Rich Domain Models Help
&lt;/h2&gt;

&lt;p&gt;A function-driven system begins with the &lt;strong&gt;domain model&lt;/strong&gt;, not with architectural patterns or infrastructure.&lt;/p&gt;

&lt;p&gt;Instead of starting with decisions like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;microservices&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;event-driven architecture&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;CQRS&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;messaging platforms&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;development begins with understanding the domain itself.&lt;/p&gt;

&lt;p&gt;The first goal is to model the core concepts and their responsibilities.&lt;/p&gt;

&lt;p&gt;Typical objects might represent concepts such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Order&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Customer&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Shipment&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Invoice&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These objects contain the behavior that defines the business logic.&lt;/p&gt;

&lt;p&gt;At this stage, the focus is entirely on implementing the &lt;strong&gt;core functionality of the application&lt;/strong&gt;.&lt;/p&gt;
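
&lt;p&gt;As a small hedged example (the &lt;code&gt;Invoice&lt;/code&gt; class and its overdue rule are hypothetical), a domain object carries behavior rather than just data:&lt;/p&gt;

```java
import java.time.LocalDate;

// Hypothetical sketch: the Invoice decides for itself whether it is overdue,
// instead of leaving that judgment to surrounding service code.
public class Invoice {
    private final long amountCents;
    private final LocalDate dueDate;
    private boolean paid = false;

    public Invoice(long amountCents, LocalDate dueDate) {
        this.amountCents = amountCents;
        this.dueDate = dueDate;
    }

    public void markPaid() {
        paid = true;
    }

    // "An invoice is overdue when it is unpaid past its due date"
    // is stated once, on the concept it describes.
    public boolean isOverdueOn(LocalDate date) {
        return !paid && date.isAfter(dueDate);
    }

    public static void main(String[] args) {
        Invoice invoice = new Invoice(50_00, LocalDate.of(2026, 1, 31));
        System.out.println(invoice.isOverdueOn(LocalDate.of(2026, 2, 15))); // true
        invoice.markPaid();
        System.out.println(invoice.isOverdueOn(LocalDate.of(2026, 2, 15))); // false
    }
}
```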

&lt;p&gt;Infrastructure concerns are introduced only when they become necessary.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;persistence is added when data must be stored&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;messaging appears when asynchronous coordination is required&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;scaling mechanisms appear when load actually demands them&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
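
&lt;p&gt;One common way to keep infrastructure deferrable, sketched here with hypothetical names, is to let the domain own the contract it needs and plug an adapter in behind it only when a real database becomes necessary:&lt;/p&gt;

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical sketch: the domain defines the interface it needs, and
// infrastructure is introduced behind it only when it becomes necessary.
public class RepositoryPortExample {

    record Customer(String id, String name) {}

    // The domain owns this contract; it says nothing about SQL,
    // messaging, or any particular framework.
    interface CustomerRepository {
        void save(Customer customer);
        Optional<Customer> findById(String id);
    }

    // Until real persistence is needed, an in-memory adapter is enough.
    static class InMemoryCustomerRepository implements CustomerRepository {
        private final Map<String, Customer> store = new HashMap<>();

        public void save(Customer customer) {
            store.put(customer.id(), customer);
        }

        public Optional<Customer> findById(String id) {
            return Optional.ofNullable(store.get(id));
        }
    }

    public static void main(String[] args) {
        CustomerRepository repository = new InMemoryCustomerRepository();
        repository.save(new Customer("c-1", "Acme"));
        System.out.println(repository.findById("c-1").isPresent()); // true
    }
}
```

&lt;p&gt;When storage genuinely becomes a concern, a database-backed implementation of the same interface can replace the in-memory one without touching the domain model.&lt;/p&gt;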

&lt;p&gt;In practice, many enterprise systems never reach the scale that requires complex distributed architectures.&lt;/p&gt;

&lt;p&gt;For the majority of applications, a well-designed domain model within a cohesive system is entirely sufficient.&lt;/p&gt;

&lt;p&gt;Only when real operational constraints appear should the architecture evolve technically.&lt;/p&gt;

&lt;p&gt;This approach keeps the system aligned with the domain while avoiding premature technical complexity.&lt;/p&gt;

&lt;p&gt;The result is software that grows &lt;strong&gt;organically around the business model&lt;/strong&gt;, rather than being constrained by predefined architectural templates.&lt;/p&gt;




&lt;h2&gt;
  
  
  Closing Thought
&lt;/h2&gt;

&lt;p&gt;The software industry has become extremely good at producing applications that run.&lt;/p&gt;

&lt;p&gt;Frameworks, templates, and standardized stacks make it possible to build complex systems faster than ever before.&lt;/p&gt;

&lt;p&gt;But enterprise software is not a short-term product. It is a long-lived system that must evolve together with the business.&lt;/p&gt;

&lt;p&gt;Designing such systems requires something different from a software factory. It requires starting with the domain, building a coherent model, and letting the architecture grow from the problem instead of from the tools.&lt;/p&gt;

&lt;p&gt;Otherwise we keep producing the same solution for every problem:&lt;/p&gt;

&lt;p&gt;another black Model T.&lt;/p&gt;

&lt;p&gt;The problem with that approach is not aesthetic. It is economic.&lt;/p&gt;

&lt;p&gt;When the architecture does not match the domain, the mismatch shows up in three places:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;more engineering effort&lt;/strong&gt; to implement and understand functionality&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;higher long-term costs&lt;/strong&gt; as development slows and operational complexity increases&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;more fragile systems&lt;/strong&gt; where business rules are scattered and difficult to reason about&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In other words, the wrong architectural vehicle does not merely look inelegant — it makes the system harder and more expensive to operate for the rest of its lifetime.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Long-lived enterprise software requires something better than a one-size-fits-all production line.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It requires architectures that are designed around the domain they serve.&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>softwaredevelopment</category>
      <category>softwareengineering</category>
    </item>
  </channel>
</rss>
