<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Balthazar de Moncuit</title>
    <description>The latest articles on Forem by Balthazar de Moncuit (@dexluce).</description>
    <link>https://forem.com/dexluce</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1311517%2F9974d2f6-e4af-439e-8a9e-a402f93ab951.jpg</url>
      <title>Forem: Balthazar de Moncuit</title>
      <link>https://forem.com/dexluce</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/dexluce"/>
    <language>en</language>
    <item>
      <title>Your Demo Slot Is a Leadership Opportunity</title>
      <dc:creator>Balthazar de Moncuit</dc:creator>
      <pubDate>Tue, 20 Jan 2026 07:45:00 +0000</pubDate>
      <link>https://forem.com/dexluce/your-demo-slot-is-a-leadership-opportunity-3hi8</link>
      <guid>https://forem.com/dexluce/your-demo-slot-is-a-leadership-opportunity-3hi8</guid>
      <description>&lt;p&gt;Every two months, I stand in front of stakeholders and present what my team shipped.&lt;/p&gt;

&lt;p&gt;For years, I did what everyone does: walked through features.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"We added an error page to the docs."&lt;br&gt;
"We improved the callback API."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Checkbox. Checkbox. Next slide.&lt;/p&gt;

&lt;p&gt;It's efficient. It's safe. And it's forgettable.&lt;/p&gt;

&lt;p&gt;More importantly, it quietly trains the organization to think in outputs, not situations. A demo becomes proof that work happened — not a place where shared understanding is built.&lt;/p&gt;

&lt;p&gt;But a demo slot is stage time.&lt;br&gt;
And stage time is influence — whether you intend to use it or not.&lt;/p&gt;




&lt;h2&gt;Trying Something Different&lt;/h2&gt;

&lt;p&gt;Last Friday, I had a PI demo. New features in our component library.&lt;/p&gt;

&lt;p&gt;The old version of me would have said:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"We added a comprehensive error documentation page listing all error codes and explanations."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That's accurate.&lt;br&gt;
And it would have evaporated instantly.&lt;/p&gt;

&lt;p&gt;Instead, I tried something else.&lt;/p&gt;

&lt;p&gt;I told a story — and I gave the user a name.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Meet Bob.&lt;br&gt;
Bob is a developer integrating our authentication component into his research platform.&lt;/p&gt;

&lt;p&gt;Without us, Bob is staring at weeks of work: identity provider quirks, permission matrices, undocumented edge cases.&lt;/p&gt;

&lt;p&gt;With the component, he copies the quick-start example. Five minutes later, it runs.&lt;/p&gt;

&lt;p&gt;Then it crashes.&lt;/p&gt;

&lt;p&gt;He binds the error callback — like the docs suggest — and gets this message:&lt;br&gt;
'Your test user lacks permission XYZ. Send this email to support. Response time is usually two days.'&lt;/p&gt;

&lt;p&gt;Bob sends the email.&lt;br&gt;
Two days later, it works. He's not stuck anymore.&lt;/p&gt;

&lt;p&gt;Later, he discovers our error reference page and realizes something important: every failure mode is documented. He's no longer guessing. He can build."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Same features.&lt;br&gt;
Same code.&lt;br&gt;
Same sprint.&lt;/p&gt;

&lt;p&gt;Different framing.&lt;/p&gt;

&lt;p&gt;One is a deliverable.&lt;br&gt;
The other is a worldview.&lt;/p&gt;




&lt;h2&gt;Why This Matters&lt;/h2&gt;

&lt;p&gt;Demos quietly teach people what kind of work matters.&lt;/p&gt;

&lt;p&gt;A feature list teaches:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Progress is shipping things."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A story teaches:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Progress is changing someone's situation."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Those lessons don't show up as questions in the meeting.&lt;br&gt;
They show up later — in how problems are framed, in what's considered "done," in which gaps feel uncomfortable.&lt;/p&gt;

&lt;p&gt;They show up when someone says "Bob" — and everyone knows what that means.&lt;/p&gt;

&lt;p&gt;A named character is a compression mechanism.&lt;br&gt;
It turns an abstract "integrator" into a reference point the organization can think with.&lt;/p&gt;

&lt;p&gt;In environments where "safe" means status reports and checklists, this models something else: that alignment comes from empathy, not precision slides.&lt;/p&gt;




&lt;h2&gt;The Signal I'm Looking For&lt;/h2&gt;

&lt;p&gt;I'm not expecting applause.&lt;/p&gt;

&lt;p&gt;The signal I care about is delayed — and indirect.&lt;/p&gt;

&lt;p&gt;Weeks or months from now, I'll listen for:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Bob is stuck at this step."&lt;br&gt;
"Bob's journey breaks here."&lt;br&gt;
"Bob needs one last thing to close the loop."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If "Bob" ever reappears in a ticket, a meeting, or a backlog item, I'll know the framing landed.&lt;/p&gt;

&lt;p&gt;If Bob never comes back — that's data too.&lt;/p&gt;




&lt;h2&gt;What Actually Happened&lt;/h2&gt;

&lt;p&gt;Nobody asked questions. No follow-up messages. No visible reaction.&lt;/p&gt;

&lt;p&gt;It was a Friday afternoon PI demo on Teams. We wrapped. People went into the weekend.&lt;/p&gt;

&lt;p&gt;The experiment is running. No results yet.&lt;/p&gt;




&lt;h2&gt;Try It Once&lt;/h2&gt;

&lt;p&gt;Nobody asked me to do this. Nobody rewarded it. The slot was mine. So I used it differently.&lt;/p&gt;

&lt;p&gt;This connects to something I wrote recently: &lt;a href="https://dev.to/dexluce/most-micromanagement-is-invited-330k"&gt;you often have more agency than you think&lt;/a&gt; — but only if you take it.&lt;/p&gt;

&lt;p&gt;I came up with the story ten minutes before the demo. It showed. And I didn't even name the character — "Bob" arrived while writing this.&lt;/p&gt;

&lt;p&gt;But I did it once. Imperfectly.&lt;/p&gt;

&lt;p&gt;Next demo, pick one feature. Don't describe what it does. Describe who uses it. Give them a name.&lt;/p&gt;

&lt;p&gt;Then watch what comes back.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Views and experiments described here are my own and don't represent my employer.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>leadership</category>
      <category>career</category>
      <category>productivity</category>
      <category>agile</category>
    </item>
    <item>
      <title>Most Micromanagement Is Invited</title>
      <dc:creator>Balthazar de Moncuit</dc:creator>
      <pubDate>Tue, 13 Jan 2026 11:58:37 +0000</pubDate>
      <link>https://forem.com/dexluce/most-micromanagement-is-invited-330k</link>
      <guid>https://forem.com/dexluce/most-micromanagement-is-invited-330k</guid>
      <description>&lt;p&gt;I spent years blaming management for micromanaging me. Turns out I was begging for it.&lt;/p&gt;

&lt;p&gt;Not consciously. Nobody walks into a meeting thinking &lt;em&gt;“please, take this decision away from me.”&lt;/em&gt; But that’s what I was doing. That’s what my team was doing. That’s what most teams do.&lt;/p&gt;

&lt;p&gt;It took me three companies to see the pattern. And to realize I was part of it.&lt;/p&gt;

&lt;p&gt;Nobody designed it. But once it exists, people with authority decide whether it continues.&lt;/p&gt;




&lt;h2&gt;The Accidental Education&lt;/h2&gt;

&lt;p&gt;My first real engineering job was in a startup. Tiny. No process. No handbook. No one to ask.&lt;/p&gt;

&lt;p&gt;When something needed deciding, I decided. Not because I was competent—I was a mass of anxiety and Stack Overflow tabs—but because there was nobody else. The alternative to deciding was not shipping.&lt;/p&gt;

&lt;p&gt;I made mistakes. Plenty. But I learned something I didn’t know was rare: when there’s no safety net, you build judgment. You learn to estimate consequences. You ship, you watch what breaks, you fix it, you remember.&lt;/p&gt;

&lt;p&gt;I thought this was just how software worked. It wasn’t.&lt;/p&gt;




&lt;h2&gt;The Institution&lt;/h2&gt;

&lt;p&gt;A few years later, I joined a large public organization. Big digitalization project. Agile transformation. I thought I’d finally learn how “real” engineering worked.&lt;/p&gt;

&lt;p&gt;We had sprints. Ceremonies. Retros. I had latitude on my UI and implementation details. It felt like ownership.&lt;/p&gt;

&lt;p&gt;Then I hit authentication.&lt;/p&gt;

&lt;p&gt;We were building a suite of internal applications. Users needed to move between them seamlessly. Basic requirement: log into App A, and App B knows who you are. That requires a central identity provider.&lt;/p&gt;

&lt;p&gt;We didn’t have one.&lt;/p&gt;

&lt;p&gt;I raised the issue. The response was polite and absolute: the infrastructure had been decided years earlier. In meetings. By people unreachable by the integration team. The architecture was politically frozen.&lt;/p&gt;

&lt;p&gt;Reversing it would have meant admitting those meetings got it wrong. That had a cost. So the decision lived on.&lt;/p&gt;

&lt;p&gt;I hacked together a workaround. Cookies shared under a common parent domain. It mostly worked. I documented the edge cases where it didn’t. Then I quit.&lt;/p&gt;
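
&lt;p&gt;For anyone who hasn’t met that trick: cookies scoped to a parent domain are sent to every subdomain under it. A minimal Node.js sketch of the idea, with hypothetical hostnames and cookie names, not the actual setup:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import { createServer } from "node:http";

// Each app lives on its own subdomain, e.g. app-a.example.test and
// app-b.example.test. Scoping the session cookie to the shared parent
// domain makes the browser send it to both apps.
createServer(function (_req, res) {
  res.setHeader(
    "Set-Cookie",
    "session=opaque-token; Domain=example.test; Path=/; HttpOnly; Secure"
  );
  res.end("ok");
}).listen(3000);
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;It works only as far as the browser’s cookie rules allow, which is exactly why the edge cases needed documenting.&lt;/p&gt;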

&lt;p&gt;What I learned later is this: decisions made in rooms full of people tend to fossilize. Not because they’re good, but because undoing them has political cost. Seniority becomes a shield against revisiting them.&lt;/p&gt;




&lt;h2&gt;The Title Comes With Authority (On Paper)&lt;/h2&gt;

&lt;p&gt;Later, I joined a large organization as a tech lead. This time, I had real authority—at least formally.&lt;/p&gt;

&lt;p&gt;The onboarding was full of modern language: lean product thinking, agile principles, outcome-driven development. I was optimistic.&lt;/p&gt;

&lt;p&gt;It took time to understand how decisions actually flowed. Who influenced what. Where power really sat.&lt;/p&gt;

&lt;p&gt;And then the frustration crept in.&lt;/p&gt;

&lt;p&gt;Product decisions felt locked before the problems were even discussed. I blamed leadership. All of it. People preaching agility while it quietly failed underneath.&lt;/p&gt;

&lt;p&gt;That story was comforting. It was also incomplete.&lt;/p&gt;




&lt;h2&gt;The Mirror&lt;/h2&gt;

&lt;p&gt;I started watching a very specific kind of meeting: decision meetings.&lt;/p&gt;

&lt;p&gt;Here’s the pattern. A developer faces a technical choice—a library, a pattern, an approach. Instead of deciding, they book a meeting. Five or six people. An hour. At least one senior person present.&lt;/p&gt;

&lt;p&gt;The stakes are usually low. A recoverable mistake. Something a well-paid senior engineer could decide alone.&lt;/p&gt;

&lt;p&gt;The developer presents options. Discussion follows. Some engage, others stay silent. Forty-five minutes pass.&lt;/p&gt;

&lt;p&gt;Then the most senior person says: &lt;em&gt;“OK. Let’s go with option B.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Meeting over. Decision made. Everyone was in the room. Nobody owns the outcome.&lt;/p&gt;

&lt;p&gt;The manager doesn’t want to be there. But by staying and breaking the tie, they become the decision mechanism. Every time.&lt;/p&gt;

&lt;p&gt;They decide. They get invited again. The team notices. It confirms the belief they already had: decisions need approval from above.&lt;/p&gt;

&lt;p&gt;The cycle feeds itself because authority intervenes at the wrong level: deciding outcomes instead of defining decision boundaries.&lt;/p&gt;

&lt;p&gt;And who calls these meetings? Not management.&lt;/p&gt;

&lt;p&gt;Developers. Tech leads. Us.&lt;/p&gt;

&lt;p&gt;We ask to be micromanaged — and management rarely refuses.&lt;br&gt;
We just don’t call it that. We call it “alignment.” We call it “collaboration.”&lt;/p&gt;

&lt;p&gt;Agile without ownership is theater.&lt;/p&gt;




&lt;h2&gt;The Word We Don’t Use&lt;/h2&gt;

&lt;p&gt;Agile assumes ownership. Teams own decisions. Teams own outcomes.&lt;/p&gt;

&lt;p&gt;But “ownership” isn’t a word we actually use. We talk about tasks and responsibilities. Ownership implies something harder: when something fails, it’s yours. Not the committee’s. Yours.&lt;/p&gt;

&lt;p&gt;That’s terrifying in most corporate cultures.&lt;/p&gt;

&lt;p&gt;“Fail fast” assumes failure is survivable. That a wrong call doesn’t follow you for years. That the market is liquid enough to absorb mistakes—you fail here, you learn, you get hired there.&lt;/p&gt;

&lt;p&gt;In Switzerland—and across much of Europe—failure is expensive. I’ve seen it in my own career, and I’ve heard the same pattern from engineers at other companies across industries: a bad decision becomes a story. It resurfaces in calibrations. It colors perception for years. It accumulates as reputation debt.&lt;/p&gt;

&lt;p&gt;The rational move isn’t to decide fast. It’s to make sure you’re never wrong alone.&lt;/p&gt;

&lt;p&gt;The fear is rational. But rational fear is not a design constraint — it's a signal the organization has failed to make ownership safe.&lt;/p&gt;

&lt;p&gt;So people optimize for shared responsibility. Meetings aren’t decision tools. They’re insurance policies. If this blows up, at least six people approved it. Including someone senior. Nobody’s career dies alone.&lt;/p&gt;




&lt;h2&gt;What Ownership Actually Requires&lt;/h2&gt;

&lt;p&gt;Across multiple organizations, in Swiss tech, healthcare, and finance, I’ve seen the same pattern. When I stopped blaming individuals, the real gaps became visible.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Vocabulary.&lt;/strong&gt; We avoid “ownership” because we don’t mean it. Accountability happens after something breaks. Ownership happens before—the decision, the risk, the bet. Most organizations have rich language for blame and almost none for intent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data.&lt;/strong&gt; You can’t own decisions about users you don’t know. Many developers can’t describe their users beyond job titles. Not “pharmacists”—which ones? The rural ones juggling three legacy systems? The hospital ones with workflows imposed from above?&lt;/p&gt;

&lt;p&gt;If you’ve never watched a user struggle with your product, you don’t have insight. You have assumptions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Domain boundaries.&lt;/strong&gt; Every domain has red lines—legal, financial, safety, reputational. If you don’t know where they are, you can’t evaluate whether a decision crosses them.&lt;/p&gt;

&lt;p&gt;This isn’t an excuse to escalate everything. It’s a reason to learn.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Method.&lt;/strong&gt; People aren’t taught how to evaluate impact, reversibility, or blast radius. We assume judgment is absorbed through experience. It isn’t. It’s built by deciding, observing consequences, and adjusting.&lt;/p&gt;

&lt;p&gt;Most organizations never allow that loop to run.&lt;/p&gt;

&lt;p&gt;These aren’t nice-to-haves. They’re prerequisites. Everything else is process theater without them.&lt;/p&gt;




&lt;h2&gt;A Simple Way to Sort Decisions&lt;/h2&gt;

&lt;p&gt;Once those prerequisites exist, most decisions can be sorted quickly using two axes: reversibility and impact.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Low impact&lt;/th&gt;
&lt;th&gt;High impact&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Reversible&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Decide alone, document&lt;/td&gt;
&lt;td&gt;Propose async, validate&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Irreversible&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Tech lead decides&lt;/td&gt;
&lt;td&gt;Meeting justified&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Three boxes out of four don’t require a meeting.&lt;/p&gt;

&lt;p&gt;The hard part isn’t the matrix. It’s honesty about which box you’re actually in.&lt;/p&gt;

&lt;p&gt;Fear upgrades everything to “irreversible, high-impact.” That’s the reflex to fight.&lt;/p&gt;

&lt;p&gt;A gut check: if the worst-case cost of being wrong is less than roughly 10% of your annual salary, you can probably decide alone. That covers most library choices, most refactors, most architectural micro-decisions.&lt;/p&gt;

&lt;p&gt;If you’re unsure, ask one peer. Just one. Dialogue sharpens thinking; committees dilute it.&lt;/p&gt;

&lt;p&gt;If you’re escalating below that threshold, you’re not being careful. You’re being afraid.&lt;/p&gt;
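
&lt;p&gt;For the code-minded, the whole table fits in a few lines. A minimal TypeScript sketch of the routing (the names are illustrative, not a framework):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;type Call = { reversible: boolean; highImpact: boolean };

// Route a decision to the cheapest process that can absorb it,
// mirroring the four boxes of the table above.
function route(call: Call): string {
  if (call.reversible) {
    return call.highImpact
      ? "Propose async, validate"
      : "Decide alone, document";
  }
  return call.highImpact ? "Meeting justified" : "Tech lead decides";
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Only one branch books a meeting. That’s the point.&lt;/p&gt;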




&lt;h2&gt;Making It Stick&lt;/h2&gt;

&lt;p&gt;Frameworks don’t change culture. Habits do.&lt;/p&gt;

&lt;p&gt;Ownership doesn’t have to be granted globally to exist locally.&lt;/p&gt;

&lt;p&gt;It can be created inside a team, within explicit boundaries, even when the wider organization isn’t ready for it — and without waiting for permission.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The decision log.&lt;/strong&gt; Decisions made alone still need visibility—not for approval, but for learning. Date, decision, context, alternatives rejected, owner. Public to the team.&lt;/p&gt;

&lt;p&gt;When something breaks, you don’t hunt for blame. You trace thinking.&lt;/p&gt;
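
&lt;p&gt;To make the shape concrete, a minimal sketch of one entry, typed from the fields above rather than any prescribed schema:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Shape of one entry in the team-visible decision log.
interface DecisionLogEntry {
  date: string;                   // when the call was made
  decision: string;               // what was decided
  context: string;                // why it came up
  alternativesRejected: string[]; // options considered and dropped
  owner: string;                  // who made the call
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The medium matters less than the visibility: a wiki page, a markdown file in the repo, whatever the team actually reads.&lt;/p&gt;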

&lt;p&gt;&lt;strong&gt;Normalize recovered mistakes.&lt;/strong&gt; When a decision goes wrong and someone fixes it, say it out loud. Celebrate the loop, not just the outcome.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Protection from above.&lt;/strong&gt; None of this works if the first mistake gets punished.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Tech leads&lt;/em&gt;: when someone on your team makes a call within their authority and it’s wrong, you own it upward.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Managers&lt;/em&gt;: same principle, higher leverage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Break the cycle actively.&lt;/strong&gt; If you’re in a position of authority, stop attending the theater.&lt;/p&gt;

&lt;p&gt;Every time you show up and decide something that didn’t need you, you train the system to need you.&lt;/p&gt;




&lt;h2&gt;The Uncomfortable Truth&lt;/h2&gt;

&lt;p&gt;Ownership is uncomfortable by design. The real test isn't letting your team decide; it's letting them decide differently.&lt;/p&gt;

&lt;p&gt;Most organizations don't actually want that. Ownership means mistakes. It means managers letting go. This is why it rarely survives contact with hierarchy.&lt;/p&gt;

&lt;p&gt;If you can't watch your team make a call you disagree with without stepping in, you don't want ownership. You want compliance with extra steps.&lt;/p&gt;

&lt;p&gt;Ceremonies are easier. But stagnating safely has a cost too—it's just not on any dashboard.&lt;/p&gt;

&lt;p&gt;We're all training the system. But not all roles train it equally. We can also untrain it.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Views and experiments described here are my own and don’t represent my employer.&lt;/em&gt;&lt;/p&gt;




</description>
      <category>leadership</category>
      <category>agile</category>
      <category>career</category>
      <category>productivity</category>
    </item>
    <item>
      <title>The DoD Experiment: A Diagnostic Tool in Disguise</title>
      <dc:creator>Balthazar de Moncuit</dc:creator>
      <pubDate>Mon, 22 Dec 2025 19:49:37 +0000</pubDate>
      <link>https://forem.com/dexluce/the-dod-experiment-a-diagnostic-tool-in-disguise-5ee0</link>
      <guid>https://forem.com/dexluce/the-dod-experiment-a-diagnostic-tool-in-disguise-5ee0</guid>
      <description>&lt;p&gt;&lt;em&gt;This is part 2 of a series documenting an experiment with Definition of Done. &lt;a href="https://dev.to/dexluce/the-dod-experiment-trying-to-fix-done-before-it-breaks-us-5ho2"&gt;Part 1: Trying to Fix 'Done' Before It Breaks Us&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;Two sprints in, the original hypothesis remains largely untested. The experiment was under-executed: one refinement session with the DoD, one ticket meaningfully shaped by it, and too much environmental noise to draw conclusions about delivery time.&lt;/p&gt;

&lt;p&gt;But something unexpected emerged. The DoD didn't primarily change estimation—it forced the creation of a handbook. That handbook is now reshaping how the team talks about code, and surfacing problems the DoD itself can't solve.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Six items became fourteen pages&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The DoD itself is minimal. Six items under "always included," a few conditional ones, explicit "never assumed." One page.&lt;/p&gt;

&lt;p&gt;But enforcement requires shared definitions. "Errors are handled" points to a page describing the DomainError pattern and how errors bubble through the state layer. "Styling uses prefixes" needs concrete examples. "Code follows existing patterns" is meaningless unless those patterns exist somewhere outside people's heads.&lt;/p&gt;

&lt;p&gt;The DoD became a table of contents. The handbook is the system it forces into existence.&lt;/p&gt;

&lt;p&gt;This was anticipated—documented knowledge architecture was always a prerequisite. What wasn't anticipated was the volume, and how much of that knowledge had previously been implicit. Writing it down didn't invent complexity; it revealed how much implicit knowledge the system had been running on—and hiding. Fourteen pages today—fewer once the linter stops being technical debt and starts enforcing what shouldn't need prose.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Keeping documentation alive&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Fourteen pages is a liability if it remains one person's opinions dressed as consensus.&lt;/p&gt;

&lt;p&gt;The mitigation is structural: the handbook lives in the monorepo, written in markdown and mermaid, editable in the same PR as the feature that depends on it. "Update handbook" is a DoD item itself.&lt;/p&gt;

&lt;p&gt;The logic: when you introduce a pattern, document it in the same commit. When you notice something missing, fix it where you are. The handbook isn't a parallel artifact owned by someone—it's part of the codebase, subject to the same review process.&lt;/p&gt;

&lt;p&gt;So far, this structure is holding. Multiple developers have edited the handbook unprompted, in the same PRs as feature work. That's validation that ownership is starting to distribute.&lt;/p&gt;

&lt;p&gt;What hasn't happened yet is pushback—someone proposing to change an existing pattern rather than just adding to it. A handbook nobody challenges may indicate passive compliance rather than genuine shared ownership. That distinction matters, and it's something I'm now watching for explicitly.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Documentation depersonalizes feedback&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before the handbook, code review feedback felt personal. "You should name this variable differently." "This error handling isn't complete." Each comment implicitly claimed authority.&lt;/p&gt;

&lt;p&gt;Now there's a page. Naming conventions exist in writing. Error handling has a flowchart. When a comment says "per the naming conventions, this should use the hci- prefix" and links to documentation, the conversation shifts.&lt;/p&gt;

&lt;p&gt;The reviewer isn't asserting taste; they're invoking an agreement. The correction comes from the system, not the person.&lt;/p&gt;

&lt;p&gt;In these two sprints, about a third of PRs explicitly referenced handbook items—links replacing opinionated comments. Developers proposed additions when they noticed gaps. The handbook created shared language, but more importantly, it created distance between people and feedback.&lt;/p&gt;
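
&lt;p&gt;For a sense of scale, a page in that format can be tiny. An illustrative sketch in markdown and mermaid, simplified, not the team's actual page:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Error handling

All failures surface as a DomainError and bubble through the state layer.

```mermaid
flowchart TD
  Action[Component action fails] --&gt; Wrap[Wrap as DomainError]
  Wrap --&gt; Store[State layer records the error]
  Store --&gt; UI[UI renders the error state]
```
&lt;/code&gt;&lt;/pre&gt;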




&lt;p&gt;&lt;strong&gt;One ticket, real numbers&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Theory is one thing. Here's what the data actually looks like.&lt;/p&gt;

&lt;p&gt;A technical debt ticket: document and consolidate error codes. Estimated at one day after refinement using the DoD. Actual time: two to three days.&lt;/p&gt;

&lt;p&gt;The developer started the work and found a mess—error codes scattered without consistent patterns. The instinct was natural: fix the system while you're in there. He began refactoring how error codes work across the codebase. Necessary eventually. Not in scope.&lt;/p&gt;

&lt;p&gt;"Refactoring adjacent code" sits explicitly under "never assumed" in the DoD. The ticket had been refined with that in mind. The refactor wasn't part of the estimate because it wasn't part of the agreement.&lt;/p&gt;

&lt;p&gt;The intervention happened mid-refactor, not before. The response was simple: the DoD had just gone live, so this one passes—but look at the line. The estimate didn't include this work. Without it, the ticket would have landed on time.&lt;/p&gt;

&lt;p&gt;There was no pushback. In Sprint Review, the developer brought up the conversation himself as an example of how the DoD would help going forward. The DoD didn't stop the refactor. It made visible an investment that would otherwise have stayed hidden in "it took longer than expected." That visibility is the leverage.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;What Product saw&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Product didn't resist the DoD—they adopted it. Refinement became methodical: take the ticket, run through the DoD. "Always included"—how long? "Never assumed"—do we need any? What's the cost?&lt;/p&gt;

&lt;p&gt;Their response wasn't compliance. They proposed embedding DoD items directly in the ticket template.&lt;/p&gt;

&lt;p&gt;The real test hasn't come yet: a ticket where DoD-complete estimates are dramatically higher than expected. When that happens, the system either produces explicit trade-offs—include the work or defer it as documented debt—or it buckles under pressure to "just ship." That moment will reveal whether this is process or theater.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Adoption at capacity&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In direct conversations, motivations diverge.&lt;/p&gt;

&lt;p&gt;Some see the system working and advocate for it. Others are more guarded—process frameworks have come and gone before, and this one looks like more overhead without guaranteed payoff.&lt;/p&gt;

&lt;p&gt;The framing that landed with skeptics wasn't aspirational. It was defensive: this is for us, not for anyone else. It clarifies what we agreed to build. It tracks technical debt explicitly. And if gaps become problems later, there's a paper trail—tickets written, trade-offs documented, decisions made with eyes open.&lt;/p&gt;

&lt;p&gt;Cynical? Maybe. But practical. And it works. Skepticism hasn't blocked adoption; it's shifted the motivation. Some believe the DoD will make things better. Others believe it will make things defensible. Both are contributing.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;What the DoD made visible&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The DoD was meant to clarify what "done" means for a ticket. It ended up clarifying something else: scope.&lt;/p&gt;

&lt;p&gt;The handbook documents error handling, WebComponent patterns, state management, styling conventions, SSR concerns, authentication flows. Fourteen pages of technical decisions—all within a coherent perimeter.&lt;/p&gt;

&lt;p&gt;Here's what that looks like in practice: when a handbook pattern needs discussion in standup, 80% of the room has no context. When Sprint Review evaluates DoD compliance, most attendees can't assess it. The coordination the DoD requires doesn't map to the current rituals—not because the rituals are wrong, but because they serve a broader group.&lt;/p&gt;

&lt;p&gt;Reading the handbook back, a question surfaced: whose work is this actually describing?&lt;/p&gt;

&lt;p&gt;The answer isn't "the frontend half of a full-stack team." It's closer to "a product team that consumes an API." Our monorepo has its own backend concerns—auth, session, SSR. Our touchpoint with the backend team is contractual: API shape, response formats, error codes. Day-to-day, the collaboration is minimal.&lt;/p&gt;

&lt;p&gt;This isn't a criticism. Teams evolve, structures shift. But the DoD forced an articulation of what we actually build—and that articulation suggests a perimeter that might deserve its own coordination rhythms.&lt;/p&gt;

&lt;p&gt;What that looks like, I don't know yet. But the next step is concrete: map the actual coordination flows, evaluate options for dedicated frontend rituals, and test whether the mismatch is friction or fracture. That's Part 3.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;The current state&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The estimation hypothesis remains unvalidated. Testing it properly requires protected refinements, tracked estimates versus actuals, and more samples.&lt;/p&gt;

&lt;p&gt;What exists now: a fourteen-page handbook in the monorepo, edited by multiple developers. A DoD that Product wants embedded in ticket templates. A shared language that depersonalizes feedback. A structural mismatch now impossible to ignore.&lt;/p&gt;

&lt;p&gt;The DoD was meant to fix estimation. Instead, it surfaced what estimation had been compensating for.&lt;/p&gt;

&lt;p&gt;Whether it ultimately improves delivery is still open. But for the team, the question has shifted from "why did this take so long?" to "what are we actually agreeing to build?"—and that's a different starting point.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Views and experiments described here are my own and don't represent my employer.&lt;/em&gt;&lt;/p&gt;




</description>
      <category>leadership</category>
      <category>softwareengineering</category>
      <category>productivity</category>
      <category>agile</category>
    </item>
    <item>
      <title>The DoD Experiment: Trying to Fix ‘Done’ Before It Breaks Us</title>
      <dc:creator>Balthazar de Moncuit</dc:creator>
      <pubDate>Fri, 21 Nov 2025 22:17:43 +0000</pubDate>
      <link>https://forem.com/dexluce/the-dod-experiment-trying-to-fix-done-before-it-breaks-us-5ho2</link>
      <guid>https://forem.com/dexluce/the-dod-experiment-trying-to-fix-done-before-it-breaks-us-5ho2</guid>
      <description>&lt;p&gt;&lt;em&gt;This is the first in a series documenting an experiment: bringing clarity and explicit agreements to a distributed engineering team. Starting with something deceptively simple—the Definition of Done.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;I've been putting off writing about my work for years. The excuse was always the same: "I'll write once I've figured it out."&lt;/p&gt;

&lt;p&gt;Which is bullshit, obviously. You never figure it out. You just accumulate scar tissue and call it experience.&lt;/p&gt;

&lt;p&gt;The truth is simpler: I was waiting to have a success story worth telling. Because who wants to read "here's what I'm trying, it might crash"?&lt;/p&gt;

&lt;p&gt;Turns out, that's exactly what's interesting. The messy middle where you don't know if your hypothesis is brilliant or stupid.&lt;/p&gt;

&lt;p&gt;So here we go. I'm a frontend lead at a Swiss healthcare company, and I'm about to try something that will either improve how we work or confirm that I've been solving the wrong problem for months.&lt;/p&gt;

&lt;p&gt;Place your bets.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How it starts...&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Last month, we shipped a fairly standard feature.&lt;/p&gt;

&lt;p&gt;Straightforward on paper: some UI, some interactions, some integration points.&lt;/p&gt;

&lt;p&gt;Estimation: 3 days.&lt;/p&gt;

&lt;p&gt;Reality: 9 days.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The search for answers&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I spent evenings reading the GitLab handbook. Procrastination disguised as professional development.&lt;/p&gt;

&lt;p&gt;I fell in love with it. The async-first culture. Everything written down. Everything accessible. No one's productivity held hostage by someone else's calendar.&lt;/p&gt;

&lt;p&gt;My first thought: "We should do this. Better Confluence structure. Better Teams channels. Clear async workflows."&lt;/p&gt;

&lt;p&gt;I started sketching out the infrastructure. How we'd organize channels. What notifications would go where. How documentation would flow.&lt;/p&gt;

&lt;p&gt;Then it hit me: I was designing the plumbing for a house with no foundation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The culture problem&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;GitLab's async-first approach doesn't work because of their tools. It works because of their culture.&lt;/p&gt;

&lt;p&gt;But what is that culture, exactly?&lt;/p&gt;

&lt;p&gt;Trust? Sure, but trust in what?&lt;/p&gt;

&lt;p&gt;Shared understanding? Getting closer.&lt;/p&gt;

&lt;p&gt;The ability to read something once and know what's expected? That's it.&lt;/p&gt;

&lt;p&gt;When everyone agrees on what "complete" means, you don't need meetings to clarify. When systems are documented, you don't need to interrupt someone to understand them. When definitions are explicit, you don't need to negotiate scope on every ticket.&lt;/p&gt;

&lt;p&gt;The async infrastructure is the result, not the cause.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What was under my nose the whole time&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;During sprint reviews, developers had been pointing it out: "We don't have a Definition of Done."&lt;/p&gt;

&lt;p&gt;Not as an accusation. As an observation. A problem they saw but couldn't fix themselves.&lt;/p&gt;

&lt;p&gt;Basic stuff. In every agile textbook. Some of my developers were actively advocating for it.&lt;/p&gt;

&lt;p&gt;And I'd been nodding along, thinking "yeah, we should have that," while not realizing the obvious: as the lead, bootstrapping that Definition of Done is my job. Not Product's job to define. Not the team's job to spontaneously create. Mine to initiate.&lt;/p&gt;

&lt;p&gt;I needed it to come from GitLab—from somewhere that felt authoritative—to actually understand what I was looking at.&lt;/p&gt;

&lt;p&gt;Slow learner. Or maybe just immune to anything that doesn't come wrapped in a 300-page handbook from a company I admire from a distance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The hidden cost of implicit contracts&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That feature that took 9 days instead of 3? Here's what actually happened:&lt;/p&gt;

&lt;p&gt;I forgot to apply my own error system and spent overtime retrofitting it. We had to refactor how Tailwind works in our monorepo—something we'd never anticipated. Public documentation wasn't in the scope, but integrators needed it. We fixed technical debt along the way because we couldn't ignore it once we saw it.&lt;/p&gt;

&lt;p&gt;None of this was hidden complexity. It was all knowable work. We just never made it explicit.&lt;/p&gt;

&lt;p&gt;The problem isn't unique to that ticket. We work on implicit contracts everywhere.&lt;/p&gt;

&lt;p&gt;I expect tests. You expect tests only for critical paths. Another developer expects tests only when explicitly requested. Nobody's wrong. We just never agreed.&lt;/p&gt;

&lt;p&gt;Every ticket becomes a negotiation: "Is this really done, or done enough?"&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The experiment: make the implicit explicit&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So here's what I'm trying: a Definition of Done. A shared agreement about what "complete" actually means.&lt;/p&gt;

&lt;p&gt;Not a theoretical document. Not a 50-item checklist. A living agreement that says:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Here's what's always included&lt;/li&gt;
&lt;li&gt;Here's what depends on context&lt;/li&gt;
&lt;li&gt;Here's what's never assumed&lt;/li&gt;
&lt;/ul&gt;
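
&lt;p&gt;A sketch of the shape, with illustrative items pulled from the kinds of work mentioned in this post, not our final list:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;## Always included
- Error handling follows the documented error system
- Tests for critical paths

## Depends on context
- Public documentation (only when integrators consume the feature)

## Never assumed
- Refactoring adjacent code
- Fixing unrelated technical debt
&lt;/code&gt;&lt;/pre&gt;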

&lt;p&gt;When a developer picks up a ticket, they know the complete scope without asking. When Product writes a ticket, they understand what they're buying. When I review code, I can point to specific missing items without it feeling arbitrary.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Making it stick&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The DoD doesn't stand alone. It references systems we already use but never documented properly.&lt;/p&gt;

&lt;p&gt;Our error system exists—I built it, it's in the code, it works. But it's scattered across READMEs, tribal knowledge, and code examples. The DoD will say "error handling: follows Error System," and that system will be properly documented with why it exists, how it works, and what to check.&lt;/p&gt;

&lt;p&gt;Same for internationalization patterns, monorepo structure, naming conventions—systems that exist implicitly but need explicit documentation.&lt;/p&gt;

&lt;p&gt;Where does this live? Confluence. Yes, Confluence. I know. Nobody's excited about it. But it's what we have, and perfect is the enemy of done—ironic, given the topic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What happens Monday&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Monday morning, I'm running a workshop with the frontend team.&lt;/p&gt;

&lt;p&gt;We're not defining "my" DoD. We're defining ours. Co-created, not imposed. Because if they don't own it, they won't use it.&lt;/p&gt;

&lt;p&gt;The agenda:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;List everything that should always be done on tickets&lt;/li&gt;
&lt;li&gt;Classify each item: always included, sometimes included, or never assumed&lt;/li&gt;
&lt;li&gt;Estimate the real impact on ticket time&lt;/li&gt;
&lt;li&gt;Decide where to document it&lt;/li&gt;
&lt;li&gt;Agree on how to apply it&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then I spend this week consolidating what we decided. Documenting the DoD, migrating our scattered systems to one place, creating the structure we agreed on.&lt;/p&gt;

&lt;p&gt;Next week, we test it on real tickets with real measurements.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What happens when estimates meet reality&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I expect estimations to roughly double compared to our current "feature only" estimates.&lt;/p&gt;

&lt;p&gt;But—and this is critical—the total time from ticket start to actual completion should stay the same or improve. Because we're making visible what we were already doing: the rework, the forgotten requirements, the debt accumulated silently.&lt;/p&gt;

&lt;p&gt;Before: estimate 2 days, deliver in 5 days (with hidden rework). Product sees 2, gets frustrated when it takes 5.&lt;/p&gt;

&lt;p&gt;After: estimate 5 days, deliver in 5 days. Product sees 5, makes informed decisions.&lt;/p&gt;

&lt;p&gt;The second scenario is honest. And honesty creates space for actual optimization.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What could go wrong&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Product might refuse the higher estimates. "We can't afford that."&lt;/p&gt;

&lt;p&gt;The team might find the DoD bureaucratic. "More overhead."&lt;/p&gt;

&lt;p&gt;The documentation might not get used. "Nobody reads Confluence anyway."&lt;/p&gt;

&lt;p&gt;I might be solving the wrong problem entirely.&lt;/p&gt;

&lt;p&gt;These are real risks. I don't have guarantees.&lt;/p&gt;

&lt;p&gt;What I do have: a team that's already asking for this, clear hypotheses to test, and a plan to measure actual impact over two sprints. If it doesn't work, we'll know why. If it does, we'll have data to expand it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why I'm doing this anyway&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Because the current state isn't sustainable.&lt;/p&gt;

&lt;p&gt;Estimations are fiction. Overtime is normalized. Technical debt accumulates silently. New developers take months to understand implicit rules.&lt;/p&gt;

&lt;p&gt;Something has to change.&lt;/p&gt;

&lt;p&gt;A Definition of Done won't solve everything. But it might create a shared language about what work actually involves.&lt;/p&gt;

&lt;p&gt;And when people share a language, fewer things need real-time clarification. Documentation gets referenced instead of people getting interrupted. Decisions get recorded instead of lost.&lt;/p&gt;

&lt;p&gt;Not because we engineered an async-first culture. Just because we removed enough friction that clarity became the default.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why I'm writing this now&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So why write about this before I know if it works?&lt;/p&gt;

&lt;p&gt;Partly because writing forces me to think clearly. If I can't explain the hypothesis without handwaving, I probably don't understand it.&lt;/p&gt;

&lt;p&gt;Partly because public documentation creates accountability. Harder to quietly abandon an experiment when you've told the internet about it.&lt;/p&gt;

&lt;p&gt;And partly because I'm tired of reading polished case studies where everything worked perfectly and everyone clapped. The sanitized success story teaches nothing. The messy reality—the resistance, the pivots, the "oh fuck I was completely wrong about that"—that's where the actual learning lives.&lt;/p&gt;

&lt;p&gt;In two sprints, I'll know if this DoD experiment holds or crashes. Either way, I'll write about what actually happened. Not the LinkedIn version. The real one.&lt;/p&gt;

&lt;p&gt;If you're dealing with similar chaos—estimations that are fiction, teams speaking different languages, the eternal question of "is this really done?"—drop a comment with your biggest scope creep disaster. Especially if your "solution" failed spectacularly. &lt;/p&gt;

&lt;p&gt;The failures are usually more instructive anyway.&lt;/p&gt;

&lt;p&gt;This is the beginning of the experiment. Let's see if I'm solving the right problem.&lt;/p&gt;




&lt;p&gt;Update: &lt;a href="https://dev.to/dexluce/the-dod-experiment-a-diagnostic-tool-in-disguise-5ee0"&gt;Part 2&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Views and experiments described here are my own and don't represent my employer.&lt;/em&gt;&lt;/p&gt;




</description>
      <category>leadership</category>
      <category>softwareengineering</category>
      <category>productivity</category>
      <category>agile</category>
    </item>
  </channel>
</rss>
