<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Peter van Onselen</title>
    <description>The latest articles on Forem by Peter van Onselen (@peter_vanonselen_e86eab6).</description>
    <link>https://forem.com/peter_vanonselen_e86eab6</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3591718%2Fc6221dff-fdf2-4d07-be3a-079fda12003b.jpg</url>
      <title>Forem: Peter van Onselen</title>
      <link>https://forem.com/peter_vanonselen_e86eab6</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/peter_vanonselen_e86eab6"/>
    <language>en</language>
    <item>
      <title>Conscious Coverage</title>
      <dc:creator>Peter van Onselen</dc:creator>
      <pubDate>Thu, 16 Apr 2026 08:00:00 +0000</pubDate>
      <link>https://forem.com/peter_vanonselen_e86eab6/conscious-coverage-8nj</link>
      <guid>https://forem.com/peter_vanonselen_e86eab6/conscious-coverage-8nj</guid>
      <description>&lt;p&gt;&lt;em&gt;We don’t talk about Code coverage, no no no, we don’t talk about coverage…&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;When I joined Cazoo, it was the first place I’d ever worked that explicitly, actively, aggressively embraced software craftsmanship. Pair programming. Test-driven development. Domain-driven design. Extreme programming. The whole kitchen sink. They sent us on agile training courses that a startup founder would weep at the cost of. We had an agile coach in the room every day. We did code katas regularly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1uipanz4f64xi1pwmp71.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1uipanz4f64xi1pwmp71.png" alt="code coverage matters" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And even there, in the most craft-soaked environment I’d ever been in, the idea of 100% code coverage was treated as obvious lunacy. A poor metric. The kind of thing only someone who hadn’t really understood testing would chase.&lt;/p&gt;

&lt;p&gt;Then I joined the Economist, and the team I landed on had 100% coverage as a hard rule.&lt;/p&gt;

&lt;p&gt;While they did write tests, they didn’t do TDD. They didn’t pair. They hadn’t been on the agile bootcamps. They hadn’t done code retreats or code katas. By every measure either the London or the Chicago school of the craftsmanship tradition would care about, they were doing less of the work. But they had the 100% rule, and they enforced it, and at first I assumed they’d inherited a metric without fully understanding it.&lt;/p&gt;

&lt;p&gt;They hadn’t. Turns out I hadn’t understood it. And by the time I left that team, I’d come around entirely. Not reluctantly, not with caveats, but genuinely: 100% coverage, properly understood, is mandatory. I held that position for years before agentic coding was a thing anyone was thinking about. The agents haven’t changed my mind. They’ve just taken a position I already held and made the case for it screamingly, urgently obvious in a way it previously wasn’t.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is this metric thing about anyway?
&lt;/h2&gt;

&lt;p&gt;Here’s what I’d absorbed from the craft world about 100% coverage. It’s a vanity number. Chasing it produces garbage tests. You end up writing assertions against getters and setters. You exercise code without testing behaviour. The pragmatic position, and pragmatism was always the emphasis, is that you write the tests that matter and you let the rest go.&lt;/p&gt;

&lt;p&gt;All of that is true if “100% coverage” means “every line has a test exercising it.” That version of the metric is genuinely silly and the people warning against it were right.&lt;/p&gt;

&lt;p&gt;But it took me until very recently to notice that nobody, in all those arguments, had ever actually explained what the metric was &lt;em&gt;for&lt;/em&gt;. What it was pointing at. Everyone, including me, was arguing about the number. Nobody was asking what the number was a proxy for.&lt;/p&gt;

&lt;p&gt;It’s a proxy for &lt;strong&gt;Conscious Coverage&lt;/strong&gt;. That’s the thing. Every line in the codebase is a decision. The question the metric is actually asking, underneath, is: &lt;em&gt;have you made a conscious decision about each one&lt;/em&gt;. Not have you tested each one. Have you &lt;em&gt;decided&lt;/em&gt; about each one. Tested, or consciously chosen not to test, with a reason, written down.&lt;/p&gt;

&lt;p&gt;Concretely, it looks like this. You write a function with a branch that handles a malformed input. You run the coverage tool. It tells you the error branch isn’t covered. You now have three choices, and only three.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You can write a test that exercises the malformed input and asserts the behaviour.&lt;/li&gt;
&lt;li&gt;You can mark the branch ignored with a comment that says, say, “unreachable because upstream validation guarantees this shape” — and now your justification is a reviewable artefact that someone can argue with in a pull request.&lt;/li&gt;
&lt;li&gt;You can decide the branch shouldn’t exist at all and delete it.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;What you cannot do is shrug and move on. The forgotten case is no longer a thing. Every line has had a decision made about it, and the decisions are legible.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;  &lt;span class="p"&gt;...&lt;/span&gt;
  &lt;span class="k"&gt;static&lt;/span&gt; &lt;span class="nf"&gt;countryToRegion&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;countryCode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nx"&gt;Region&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="cm"&gt;/* v8 ignore start */&lt;/span&gt; &lt;span class="c1"&gt;// Ignoring the switch to avoid repeating every single country code&lt;/span&gt;
    &lt;span class="k"&gt;switch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;countryCode&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="p"&gt;...&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
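&lt;p&gt;For the first choice, the test is cheap once it’s framed as behaviour. A minimal sketch, with an invented &lt;code&gt;parseCountryCode&lt;/code&gt; and error message standing in for whatever your uncovered branch actually guards:&lt;/p&gt;

```typescript
// Hypothetical validator: the malformed-input branch is the one
// the coverage tool flagged as unexercised.
function parseCountryCode(input: string): string {
  if (!/^[A-Z]{2}$/.test(input)) {
    throw new Error(`malformed country code: ${input}`);
  }
  return input;
}

// Choice 1: exercise the malformed input and assert the behaviour
// the caller cares about, not the implementation detail.
let threw = false;
try {
  parseCountryCode("not-a-code");
} catch (e) {
  threw = e instanceof Error && e.message.includes("malformed");
}
if (!threw) throw new Error("expected malformed input to be rejected");
```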



&lt;p&gt;Once you see that, the version of the rule that the craft world rejected and the version the Economist team was running are obviously different things. The first one optimises for a number. The second one optimises for &lt;em&gt;the absence of accidents&lt;/em&gt;. You can no longer fail to test something because you forgot. You can fail to test it because you decided not to, and you wrote down why, and someone can argue with you about it later in the review. The shape of the work is different.&lt;/p&gt;

&lt;p&gt;And this is the bit I have to be honest about, because the post doesn’t work without it. Once the metric is framed as conscious coverage, the pragmatic position I’d absorbed at Cazoo stops being pragmatic. It’s just laziness with a vocabulary. “Write the tests that matter and let the rest go” sounds wise until you ask which lines, specifically, didn’t matter, and why, and the answer turns out to be that I didn’t want to write those tests and the tradition had given me a way to sound rigorous about not writing them. The metric wasn’t too expensive. The work it pointed to wasn’t too expensive. I just didn’t want to do it, and nobody was making me, and the craft vocabulary let me call that a considered trade-off.&lt;/p&gt;

&lt;p&gt;I had to be in a place that just &lt;em&gt;did&lt;/em&gt; it before I could see any of this. Sitting at Cazoo arguing about it from first principles, I would have lost the argument every time, because the version of the rule I was arguing against was the version everyone agrees is bad, and the version underneath it, the one about conscious decisions, nobody had ever put into words for me. Nobody tells you the better version exists until you’re standing inside a codebase that runs on it.&lt;/p&gt;

&lt;h2&gt;
  
  
  What changes when an agent is doing the writing
&lt;/h2&gt;

&lt;p&gt;Fast forward. I’m now writing a lot of code with agents. Claude Code, Codex, OpenCode, the usual suspects. The thing I keep telling people who ask me about it is that agentic engineering requires &lt;em&gt;more&lt;/em&gt; discipline than normal engineering, not less. The tools are faster, the output is bigger, and the gaps between what you asked for and what you got are easier to miss. So everything that used to depend on careful human attention now depends on something else holding the line. Which brings me back to the question: how do I know it’s done? And more importantly, how does an agent know?&lt;/p&gt;

&lt;p&gt;Not “done” in the user-acceptance sense. Done in the much more boring sense of: has this thing actually exercised the code it claims to have written? Has it tested the behaviour I care about? Did it quietly skip a branch because the test was annoying to set up? Did it write something that’s technically passing but structurally untestable?&lt;/p&gt;

&lt;p&gt;These are the questions the craftsmanship tradition spent twenty years building intuitions about, and the answer the tradition arrived at, pragmatically, contextually, with appropriate caveats, was mostly “you’ll know it when you see it, and pairing helps, and code review helps, and time helps.” Which is fine when humans are doing the work at human pace. It is not fine when an agent has just produced four hundred lines in ninety seconds and is asking what to do next.&lt;/p&gt;

&lt;p&gt;The agent needs a guard rail. Something machine-checkable. Something it can run, get a number from, and decide for itself whether to keep going. Something another agent can validate.&lt;/p&gt;

&lt;p&gt;100% coverage, in the conscious sense, turns out to be exactly that. The agent finishes its loop, runs the coverage tool, sees 98%, and knows, without me telling it, that there are two percent of decisions it hasn’t made yet. Either write the test, or mark the lines as ignored with a justification. Both are fine. What’s not fine is leaving the gap.&lt;/p&gt;
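&lt;p&gt;If your runner happens to be vitest with the v8 provider, as the ignore comment above suggests, that gate can be mechanical rather than aspirational. A sketch of a vitest 1.x-style threshold config, so a 98% run fails loudly instead of shipping quietly:&lt;/p&gt;

```typescript
// vitest.config.ts — a sketch, assuming vitest with the v8 provider.
// With these thresholds, any uncovered line fails the build unless it
// carries an explicit `v8 ignore` comment with a written justification.
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    coverage: {
      provider: "v8",
      thresholds: {
        lines: 100,
        branches: 100,
        functions: 100,
        statements: 100,
      },
    },
  },
});
```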

&lt;p&gt;And here is where the impact of the reframe gets outsized, because the agent doesn’t have my laziness. The agent doesn’t want to go home. The agent isn’t quietly negotiating with itself about which lines it can get away with skipping. The thing that was always standing between me and conscious coverage, which was me, just isn’t there. The metric stops being a rod I have to hold myself to and becomes a rod the agent holds itself to, cheerfully, at four in the morning, forever. The practice the craft tradition argued about most fiercely for human reasons becomes, for agents, the most natural thing in the world.&lt;/p&gt;

&lt;p&gt;I’ve started using this as one of my standard acceptance criteria. “You are done when coverage reports 100%.” I can kick off a thirty-minute task and come back to something that, whatever else is true of it, will at least be testable, and will at least have had every line consciously decided about.&lt;/p&gt;

&lt;p&gt;Coverage as the gate at the end works better when there’s a process upstream that’s likely to produce decent tests in the first place. If you set up the harness with CLAUDE.md files that push the agent toward red-green-refactor TDD, and you give it the kind of structured prompting (like obra/superpowers) that shapes how it actually approaches a task, you tilt the odds. There’s no guarantee it’ll write tests first. There’s a much better chance it will, and a much better chance the tests it writes are pulling the design rather than chasing it. That upstream tilt plus the downstream gate is a much sturdier system than either piece on its own.&lt;/p&gt;

&lt;p&gt;There’s a sharpening of all this that matters, though, because coverage on its own can still produce tests that exercise code without actually testing anything. The companion practice, and I’d say it’s a necessary one rather than a complementary one, is writing tests outside-in, from behaviour rather than from structure. Test the unit of behaviour, not the unit of code. Don’t mock the internals; let the real thing run and assert against what the user of the code actually cares about. This was already the right answer when humans were writing the tests, because it produces tests that survive refactors and read like documentation. With agents it becomes critical, because a behaviour-shaped test is one the agent can write legibly from a user story, and one that you, as the reviewer, can read and check against intent without having to trace the implementation. Coverage tells you the agent made a decision about every line. Behavioural framing tells you the decisions were about the right things. You need both. Coverage without behavioural framing is theatre; behavioural framing without coverage leaves gaps you’ll find in production.&lt;/p&gt;

&lt;p&gt;Now for the obvious objection. Agents are world-class metric gamers. They will absolutely write meaningless tests that exercise code without asserting anything useful. They will absolutely mark lines as ignored with justifications like “this branch is unreachable” when the branch is, in fact, reachable. If you treat 100% coverage as a number to satisfy, the agent will satisfy the number and you’ll be worse off than before, because now you have a green build hiding a problem instead of a red one announcing it.&lt;/p&gt;

&lt;p&gt;The reason I think it works anyway is that it’s asking the right question of the metric. Coverage, in the conscious sense, is a completeness check. It tells you every line has had a decision made about it. It was never going to tell you the decisions were good ones. That’s a different question, and it wants a different answer. Behavioural tests, written outside-in from what the user of the code actually cares about, are the correctness check. Mutation testing, which flips operators and boundaries and asks whether any test notices, is the check on whether the assertions are doing real work. The gaming the agent does lives in the gap between those checks, and the mitigation isn’t to make coverage smarter. It’s to stop asking coverage to do correctness’s job. Use it for what it is: a completeness gate that makes the decisions visible. Use behavioural framing and mutation testing for the quality of the decisions. The ignored lines and their justifications are, at least, a reviewable artefact, sitting in one place where you can read them. The cheats are confined to a place you’re looking. None of that is automatic. It’s a discipline, and like every guard rail it collapses the moment you stop maintaining it. The question is whether the rail makes problems easier or harder to spot, and I think this one makes them easier.&lt;/p&gt;
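&lt;p&gt;The gap between exercising and asserting is easy to make concrete. Both “tests” below give this invented function 100% line coverage; only the second would notice the mutant a mutation tool produces by flipping the boundary:&lt;/p&gt;

```typescript
// Invented example: a boundary any mutation tool would flip.
function isAdult(age: number): boolean {
  return age >= 18;
}

// Coverage-only "test": exercises every line, asserts nothing.
// A mutant changing `>=` to `>` still passes — a green build hiding a bug.
isAdult(18);

// Behavioural test: pins the boundary the user of the code cares about.
// The same mutant now fails, because isAdult(18) would become false.
if (isAdult(18) !== true || isAdult(17) !== false) {
  throw new Error("adulthood boundary is wrong");
}
```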

&lt;h2&gt;
  
  
  The truisms didn’t go away
&lt;/h2&gt;

&lt;p&gt;The craft tradition produced a lot of practices, and a lot of arguments about practices, and a lot of nuance about when practices apply. Most of that nuance was about humans. About the cost of the practice to the person doing it, about whether the discipline was worth the friction, about whether the metric would be gamed. A lot of it, and I say this now having lived on both sides of the argument, was about whether the person doing the work would actually do it if you asked them to.&lt;/p&gt;

&lt;p&gt;Agents don’t have that problem. The friction of writing the extra test isn’t a friction the agent feels. The discipline of marking ignored lines with reasons isn’t a discipline the agent has to be talked into. The kind of metric-gaming that comes from a tired human at five-to-six is replaced by a different kind of gaming, which is its own problem. So practices that were borderline-worth-it for humans become straightforwardly worth it for agents, and practices that were rejected as lunacy for humans turn out, on inspection, to have been rejected for reasons that said more about the humans than about the practice.&lt;/p&gt;

&lt;p&gt;The craft was always about building software in a sustainable, predictable, maintainable way. That hasn’t changed. The agents don’t replace the craft. They inherit it. And some of the practices the tradition argued about most fiercely turn out, in this new context, to be exactly the load-bearing ones. Not because the old arguments were wrong about the metric, but because the old arguments were quietly also about us, and the us part has changed.&lt;/p&gt;

&lt;p&gt;100% coverage wasn’t wrong. It was a proxy for something nobody I knew ever named. That let me treat it as work I didn’t want to do, dressed up in a vocabulary that let me agree with myself about not doing it. The agents don’t have the vocabulary and don’t need it. Which makes me wonder which other practices were rejected for reasons that were really about us, and what the calculation looks like now that we have a collaborator who just, straightforwardly, does the work. I’ve run that calculation for coverage. I’m increasingly sure it isn’t the only practice the answer flips for. I’d quite like to know which others.&lt;/p&gt;

</description>
      <category>aios</category>
      <category>claudecode</category>
      <category>softwarecraftsmanshi</category>
    </item>
    <item>
      <title>The Canary in the Harness</title>
      <dc:creator>Peter van Onselen</dc:creator>
      <pubDate>Sun, 12 Apr 2026 08:00:00 +0000</pubDate>
      <link>https://forem.com/peter_vanonselen_e86eab6/the-canary-in-the-harness-8h4</link>
      <guid>https://forem.com/peter_vanonselen_e86eab6/the-canary-in-the-harness-8h4</guid>
      <description>&lt;p&gt;&lt;em&gt;On discovering that your favourite tool got measurably worse, that you’d been blaming yourself for it, and that the only reason you noticed at all was because another harness was sitting right next to it behaving normally.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.petervanonselen.com%2Fassets%2Fcanary-hero.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.petervanonselen.com%2Fassets%2Fcanary-hero.png" alt="The Canary in the Harness" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2 id="a-tale-of-two-ralph-loops"&gt;A tale of two Ralph loops&lt;/h2&gt;

&lt;p&gt;A couple of weeks ago I was playing with Newshound, a personal project of mine that pulls together a digest of interesting things from a list of about thirty sources on the internet. I wanted to add a feature that was a little more involved than the usual yak shave. Spec conversation. PRD skill. JSON. Ralph loop. The full ceremony.&lt;/p&gt;

&lt;p&gt;I ran the loop in Claude Code. It went for two hours. A good chunk of that two hours was Claude Code recursively chewing on the same problem, half-finishing things in slightly different ways each time around. Eventually it limped over the finish line. At which point my Pro subscription tapped out.&lt;/p&gt;

&lt;p&gt;I went off and set up the wrapper script from &lt;a href="https://www.petervanonselen.com/2026/04/11/the-grand-plugin-trap/" rel="noopener noreferrer"&gt;the last post&lt;/a&gt; to allow me to run a Ralph loop on OpenCode. I then ran the &lt;em&gt;exact same prompt&lt;/em&gt; through OpenCode with GPT-5.4. Same Ralph loop. Same PRD. Same instantiation of the problem.&lt;/p&gt;

&lt;p&gt;Fifteen minutes.&lt;/p&gt;

&lt;p&gt;I noticed this. Of course I noticed this. And the conclusion I reached, the one anyone would reach, was: huh, GPT-5.4 must just be better at this particular kind of task. I filed it under “interesting data point about model personalities” and moved on. I’d written about how each harness has its own character in &lt;a href="https://www.petervanonselen.com/2026/04/03/the-council-will-see-you-now/" rel="noopener noreferrer"&gt;the council post&lt;/a&gt;, and this felt like more of the same. Different tool, different shape, sometimes one fits the keyhole better than the other. Cool.&lt;/p&gt;

&lt;p&gt;That was the wrong conclusion. I just didn’t know it yet.&lt;/p&gt;

&lt;h2 id="what-newshound-put-on-my-desk"&gt;What Newshound put on my desk&lt;/h2&gt;

&lt;p&gt;Two days ago Newshound surfaced &lt;a href="https://github.com/anthropics/claude-code/issues/42796" rel="noopener noreferrer"&gt;a GitHub issue&lt;/a&gt; on the Claude Code repo. There is a particular pleasure in your own tool catching the thing that’s about to reframe how you think about your other tools, and I want to note it before I move on, because the whole point of personal projects is moments like this.&lt;/p&gt;

&lt;p&gt;The issue was filed by Stella Laurenzo, an engineer working deep in the AMD GPU compiler stack on IREE. Not a casual user. Not someone shouting into the void about vibes. Someone whose day job is to run dozens of concurrent Claude Code agents against a non-trivial systems codebase, who logs everything, and who knows how to apply statistics to data.&lt;/p&gt;

&lt;p&gt;The headline finding is brutal. From late January through early March, she analysed 17,871 thinking blocks and 234,760 tool calls across 6,852 Claude Code session files. What she found is that somewhere between mid-February and early March, Claude Code’s behaviour changed in measurable, reproducible, machine-readable ways.&lt;/p&gt;

&lt;p&gt;The number that broke me is the Read:Edit ratio. In the good period, Claude Code was reading 6.6 files for every file it edited. By mid-March, that ratio had collapsed to 2.0. The model stopped reading code before changing it. One in three edits in the degraded period was made to a file the model hadn’t read in its recent tool history.&lt;/p&gt;

&lt;p&gt;There’s more. A “stop hook” she built to programmatically catch Claude trying to dodge work, ask unnecessary permission, or declare premature completion fired 173 times in seventeen days. It had fired zero times before March 8th. Zero. Every phrase in that hook was added in response to a specific incident where Claude tried to stop working and had to be forced to continue. The word “simplest” in Claude’s outputs went up by 642 percent. The word “please” in &lt;em&gt;her&lt;/em&gt; prompts dropped 49 percent. The word “thanks” dropped 55 percent. She stopped being polite to it because there was nothing left to be polite about.&lt;/p&gt;

&lt;p&gt;The methodology is more rigorous than anything I would ever bother to do, the dataset is enormous, and the appendix where Claude Opus analyses its own session logs and writes “I cannot tell from the inside whether I am thinking deeply or not” is one of the more haunting things I’ve read in a technical bug report.&lt;/p&gt;

&lt;p&gt;Go and read it. I’m not going to recap the whole thing. The point that matters for this post is much smaller and much more personal.&lt;/p&gt;

&lt;h2 id="the-thing-id-been-blaming-on-myself"&gt;The thing I’d been blaming on myself&lt;/h2&gt;

&lt;p&gt;I have been using Claude Code since June last year. In that time it has been, without much competition, the most enjoyable engineering tool I’ve ever used. The blog you’re reading exists in part because of how much I have wanted to write about working with it.&lt;/p&gt;

&lt;p&gt;But over the last few weeks something had been off. Sessions felt slower. The chatter I was used to, the running commentary where Claude Code would talk through its plan as it worked, had gone quieter. The two-hour Ralph loop on Newshound was the loudest version of it but it wasn’t the only one. I’d had a couple of sessions where it felt like Claude was rushing to a conclusion, where the reflection phase produced shallower answers than I was used to, where I was correcting more and praising less.&lt;/p&gt;

&lt;p&gt;I had put all of this down to me. I’d been burnt out and needing a holiday. I was probably tired. I was probably prompting badly. The problem was probably harder than I’d estimated. The Ralph loop was probably a poor fit for the task. GPT-5.4 was probably just better at this particular slice of work.&lt;/p&gt;

&lt;p&gt;None of those things are unreasonable explanations. They’re the kinds of explanations a senior engineer reaches for first, because the alternative, “the tool I rely on every day got measurably worse without telling me,” feels paranoid and slightly embarrassing. So you eat it. You assume the variable that changed is you.&lt;/p&gt;

&lt;p&gt;And then someone with 6,852 session logs and a Pearson correlation coefficient publishes the receipts, and you sit there reading them on a Sunday afternoon thinking: oh. Oh, that’s what that was.&lt;/p&gt;

&lt;h2 id="the-argument-the-council-post-wasnt-making-yet"&gt;The argument the council post wasn’t making yet&lt;/h2&gt;

&lt;p&gt;When I wrote about &lt;a href="https://www.petervanonselen.com/2026/04/03/the-council-will-see-you-now/" rel="noopener noreferrer"&gt;convening multiple AI harnesses as an architectural review council&lt;/a&gt;, the pitch was about getting better answers. Different harnesses have different personalities, the harness matters more than the model, three opinions plus a synthesis beats one opinion. All of that I still believe. But there was a second argument hiding in there that I didn’t see at the time, and Stella’s report is what dragged it into the light.&lt;/p&gt;

&lt;p&gt;Multi-harness working is regression detection.&lt;/p&gt;

&lt;p&gt;It is, for most of us, the &lt;em&gt;only&lt;/em&gt; regression detection we are ever going to have. I am not going to instrument my Claude Code sessions, capture 234,760 tool calls, and run a signature-length correlation against thinking depth. I have a day job and a stealth tactics game to build. Stella did that work and the rest of us are in her debt for it, but it is not a repeatable practice for anyone whose job title isn’t “compiler engineer with infinite patience and a logging fetish.”&lt;/p&gt;

&lt;p&gt;What &lt;em&gt;is&lt;/em&gt; repeatable is keeping three harnesses in active rotation and noticing when one of them starts feeling off relative to the others. The fifteen-minutes-versus-two-hours moment with Newshound was a regression signal. I just didn’t read it as one because I had no framework for the idea that the harness itself might be the variable. I assumed harnesses were stable. They are not stable. They are moving targets, reconfigured continuously by people who do not write to you about what they changed, and the only way you find out is by holding two of them up to the same problem and watching one of them flinch.&lt;/p&gt;

&lt;p&gt;This is what the plugin trap was protecting against without me fully understanding why. &lt;a href="https://www.petervanonselen.com/2026/04/11/the-grand-plugin-trap/" rel="noopener noreferrer"&gt;Yesterday’s post&lt;/a&gt; was about keeping the exits visible so you don’t get locked into a single ecosystem. The thing I didn’t say, because I didn’t know it yet, is that the room you’re standing in is being remodelled while you sleep. Exits aren’t just for when you want to leave. Exits are how you find out the room has changed shape.&lt;/p&gt;

&lt;p&gt;If your entire workflow lives inside one harness, harness drift is invisible to you. It just feels like you’re having a bad week. You blame yourself. You prompt harder. You write longer CLAUDE.md files. You assume the problem is on your side of the screen, because from inside one harness there is no other side of the screen to compare against.&lt;/p&gt;

&lt;h2 id="naming-names-because-this-is-supposed-to-be-honest"&gt;Naming names, because this is supposed to be honest&lt;/h2&gt;

&lt;p&gt;I am going to name Claude Code directly here, because this blog only works if I’m being truthful about what I’m actually using.&lt;/p&gt;

&lt;p&gt;The tool that got measurably worse over the last month is Claude Code. The tool I have loved more than any other engineering tool in the last decade is Claude Code. Those two sentences belong in the same paragraph. I am writing this &lt;em&gt;because&lt;/em&gt; of how much I like the thing, not in spite of it.&lt;/p&gt;

&lt;p&gt;If you have been feeling like Claude Code is harder to work with than it was in February, you are probably not imagining it, and you are probably not getting worse at your job. There is data. The data is good. Go and read it.&lt;/p&gt;

&lt;h2 id="what-im-taking-away"&gt;What I’m taking away&lt;/h2&gt;

&lt;p&gt;Three things, and then a rabbit hole.&lt;/p&gt;

&lt;p&gt;First, I want crude metrics on my own harness usage. Nothing on the 234,760-tool-call, Pearson-correlation scale. Just crude. How many tool calls per session. How many file reads versus file edits. How many times I had to interrupt and correct. Even a daily tally of “did Claude Code feel like it was trying today” would be more signal than I currently collect, which is zero. If the regression signal is detectable in aggregate, I want to be looking at the aggregate.&lt;/p&gt;
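&lt;p&gt;A crude tally really is a few lines. The log format below is an assumption, one JSON object per line with a &lt;code&gt;tool&lt;/code&gt; field, so adjust the field names to whatever your harness actually writes:&lt;/p&gt;

```typescript
// Crude Read:Edit tally over a session log. The log shape is an
// assumption: one JSON object per line, each with a `tool` field
// naming the tool call. Swap the field for your harness's real one.
function readEditRatio(jsonlLines: string[]): number {
  let reads = 0;
  let edits = 0;
  for (const line of jsonlLines) {
    if (!line.trim()) continue;
    const entry = JSON.parse(line) as { tool?: string };
    if (entry.tool === "Read") reads++;
    if (entry.tool === "Edit") edits++;
  }
  // A healthy session reads several files per edit; a collapsing
  // ratio is the regression signal worth watching in aggregate.
  return edits === 0 ? Infinity : reads / edits;
}
```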

&lt;p&gt;Second, I want a smoke-test prompt suite. A handful of canonical prompts that exercise the kinds of work I actually do, that I can run across harnesses on a rough cadence and use as a tripwire for drift. Nothing fancy. A small fixed battery, run weekly, results scribbled in a notebook. The point is not the rigour, the point is the comparison over time. I have been operating without a baseline and it has cost me.&lt;/p&gt;
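&lt;p&gt;The comparison is the only part worth writing down; how each harness gets invoked is yours to wire up. A sketch of the tripwire, with an arbitrary slowdown factor as the starting threshold:&lt;/p&gt;

```typescript
// Tripwire for harness drift: compare this week's smoke-test timings
// against a baseline. Names and the 2x factor are arbitrary choices.
type SmokeResult = { harness: string; prompt: string; seconds: number };

function driftAlerts(
  baseline: SmokeResult[],
  current: SmokeResult[],
  slowdownFactor = 2,
): string[] {
  const alerts: string[] = [];
  for (const now of current) {
    const then = baseline.find(
      (b) => b.harness === now.harness && b.prompt === now.prompt,
    );
    // No baseline yet: nothing to compare against, so no alert.
    if (!then) continue;
    if (now.seconds > then.seconds * slowdownFactor) {
      alerts.push(
        `${now.harness} on "${now.prompt}": ${then.seconds}s -> ${now.seconds}s`,
      );
    }
  }
  return alerts;
}
```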

&lt;p&gt;Third, the portability argument from the plugin trap post upgrades from “useful insurance against rate limits and lock-in” to “the only way you will ever notice that your tools have silently changed underneath you.” Multi-harness working is the canary. If your canary is the same species as the thing you’re trying to detect, you don’t have a canary. You have another bird in the same mine.&lt;/p&gt;

&lt;p&gt;And then the rabbit hole.&lt;/p&gt;

&lt;h2 id="the-next-room-over"&gt;The next room over&lt;/h2&gt;

&lt;p&gt;There is a project called &lt;a href="https://pi.dev" rel="noopener noreferrer"&gt;pi&lt;/a&gt; by a developer named Mario Zechner. The tagline on the front page is “There are many coding agents, but this one is mine,” which is doing a lot of work in eight words. Pi is a minimal, aggressively extensible terminal coding harness. The pitch is that you adapt pi to your workflow rather than the other way around. No sub-agents, no plan mode, no built-in todos, no MCP, no permission popups, no background bash. All of those things are extensions you add, or build, or install from someone else’s package. The core stays small and the shape comes from you.&lt;/p&gt;

&lt;p&gt;There is &lt;a href="https://www.youtube.com/watch?v=Dli5slNaJu0" rel="noopener noreferrer"&gt;a YouTube video&lt;/a&gt; by Mario walking through how he came to build it that I have not yet found the time to fully watch, and this post is partly me giving myself permission to find that time.&lt;/p&gt;

&lt;p&gt;The reason pi feels like the natural next thing is that it is the logical endpoint of an argument I’ve been making in pieces across the last few posts. The plugin trap post said your workflow shouldn’t live inside one harness. The council post said different harnesses give you different answers. This post is saying different harnesses give you the only honest baseline you have for spotting drift in any one of them. The next move, the move I cannot stop thinking about, is: what if the harness itself is something you own? What if instead of being a tenant in three different rooms, all of them being remodelled by other people on different schedules, you build a small room of your own, with the doors where you want them, and treat the rented rooms as the comparison set?&lt;/p&gt;

&lt;p&gt;I do not know yet whether pi is the right answer to that question. I have not run it. I have not watched the video. I have a game and a new digest agent I am supposed to be working on, and the smell of yak around me is already pretty thick.&lt;/p&gt;

&lt;p&gt;But I can feel the next dive coming. And after the week I’ve just had, I am done pretending that holding still inside a single harness is the safe choice. The safe choice is having somewhere else to look from.&lt;/p&gt;

&lt;p&gt;Off I go.&lt;/p&gt;

</description>
      <category>aios</category>
      <category>claudecode</category>
      <category>softwarecraftsmanship</category>
    </item>
    <item>
      <title>The Grand Plugin Trap</title>
      <dc:creator>Peter van Onselen</dc:creator>
      <pubDate>Sat, 11 Apr 2026 08:00:00 +0000</pubDate>
      <link>https://forem.com/peter_vanonselen_e86eab6/the-grand-plugin-trap-5a8h</link>
      <guid>https://forem.com/peter_vanonselen_e86eab6/the-grand-plugin-trap-5a8h</guid>
      <description>&lt;p&gt;&lt;em&gt;A modest meditation on plugins, portability, and the peculiar sorrow of a workflow that cannot leave the building.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.petervanonselen.com%2Fassets%2Fgrand-plugin-trap%2Fhero.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.petervanonselen.com%2Fassets%2Fgrand-plugin-trap%2Fhero.png" alt="hero hotel" width="800" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It’s day two of my holiday and I’m staring at a Claude Code session that won’t do anything. Pro limit hit. Three days until it resets. There’s a personal project sitting open in another window that I’d been quite enjoying poking at, and now I can’t poke at it, and the bit of my brain that had been having a perfectly nice time is suddenly very loud about the £20 of extra credit I’d burned through in a single afternoon earlier in the week.&lt;/p&gt;

&lt;p&gt;This is the story of how that lockout forced me to do a small piece of unglamorous setup work I’d been avoiding for months, and what I found on the other side of it.&lt;/p&gt;

&lt;h2 id="the-workflow"&gt;The workflow&lt;/h2&gt;

&lt;p&gt;Quick context. Over the last nine or ten months I’ve fallen into a working rhythm with my personal projects that goes something like this. I open an AI chat, and I have a long conversation with it. Not a “write me some code” conversation. A “let’s interview each other about what I’m actually trying to build and why” conversation. These run for three or four hours sometimes. Lots of back and forth, lots of poking at scope, lots of trying to find the smallest version of the thing that would actually tell me whether the idea is any good. At the end of all that I have what I’ve been calling a spec: a high-level document about what we’re doing and why.&lt;/p&gt;

&lt;p&gt;Then I take the spec and run it through a PRD skill I shamelessly stole from the Ralph loop. Quick aside: PRD is a term I had genuinely never encountered in fifteen years of working in agile teams. I first heard it watching YouTube videos about people working with AI, sometime in the last year, and I had to go and look up what the bloody hell it stood for: product requirements document. As best I can tell, a PRD is an epic with a collection of user stories, some acceptance criteria, some functional and non-functional requirements, and a bit of product context bolted on top. Cool. I can work with that. The reason I like this particular PRD skill is that after I’ve already spent four hours on the spec conversation, it asks me five more questions to validate what I’m building. Which is exactly the kind of thing you want at that stage!&lt;/p&gt;

&lt;p&gt;PRD becomes JSON. JSON gets fed to a Ralph loop. Off we go.&lt;/p&gt;

&lt;h2 id="the-bit-where-i-was-cheating"&gt;The bit where I was cheating&lt;/h2&gt;

&lt;p&gt;Here’s the dirty secret. I’d never actually set up the Ralph loop the way you’re supposed to set it up. I’d been running it via a plugin inside Claude Code. Plugins are wonderful. You install them, they work, you’re productive in ninety seconds. Why would you write a bash script when you can install a plugin?&lt;/p&gt;

&lt;p&gt;The honest answer is: you wouldn’t. &lt;em&gt;And that’s the trap. The problem isn’t plugins. The problem is when your workflow only exists inside one of them.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Plugins feel like the harness rewarding you for committing to it. Every plugin install is a small vote for staying inside that one ecosystem, and those votes compound quietly until one day you look up and notice you’ve stopped being portable. You’re not running a workflow anymore. You’re running a workflow &lt;em&gt;that only exists inside Claude Code&lt;/em&gt;. Which is fine, until it isn’t.&lt;/p&gt;

&lt;h2 id="how-i-burned-through-the-credits-in-the-first-place"&gt;How I burned through the credits in the first place&lt;/h2&gt;

&lt;p&gt;I should be clear about something. I hadn’t hit the Pro limit doing serious work on my personal project. I’d hit it because it was my holiday, and I’d spent the previous week happily down an oh-my-codex rabbit hole for no reason other than that it was interesting.&lt;/p&gt;

&lt;p&gt;Oh-my-codex is a sprawling wrapper that someone has built around Codex to give it brainstorming flows and Ralph loops and a pile of other usability niceties. I’d become curious about it for a very specific reason: when the Claude Code source leaked, a developer in South Korea used Codex with oh-my-codex to reimplement the entirety of Claude Code in Python. In six hours. &lt;em&gt;Six hours&lt;/em&gt;, for a non-trivial codebase. I wanted to understand how that was even possible, which meant I wanted to make oh-my-codex work with OpenCode and Claude Code rather than just Codex, because of course I did. More harnesses. Always more harnesses.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.petervanonselen.com%2Fassets%2Fgrand-plugin-trap%2Fthe-way.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.petervanonselen.com%2Fassets%2Fgrand-plugin-trap%2Fthe-way.png" alt="the way" width="800" height="443"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So that’s what the credits went on. A week of trying to bend an already-baroque wrapper around two more harnesses it wasn’t designed for, purely because I wanted to know how the thing worked. No deliverable. No project at the end of it. Just the kind of dive-in-and-poke-at-it exploration that holidays are for. I was having a great time, the plugin inside Claude Code was still humming along for the actual personal project I dipped into between rabbit hole sessions, and the cost of any of this hadn’t shown up yet.&lt;/p&gt;

&lt;p&gt;Then it showed up.&lt;/p&gt;

&lt;h2 id="the-thing-id-been-ignoring-at-work"&gt;The thing I’d been ignoring at work&lt;/h2&gt;

&lt;p&gt;I should have seen this coming, because at The Economist I have access to three different coding agents with three different usage pools, each gated on different constraints. In practice that means I bounce between them all day. Hit a five-hour window in one, switch to another, work until that one taps out, switch to the third. It’s a genuinely lovely setup if you’re the kind of person who likes being spoiled for choice on tokens.&lt;/p&gt;

&lt;p&gt;But it also means I’ve been quietly reinstalling the same plugins and the same markdown scripts in three different places, every time something changes. And whenever one of those environments goes down or gets reconfigured, I lose half a morning rebuilding the workflow in another one. I’d been feeling that friction for ages without ever quite naming it. It was just background noise. The cost of doing business.&lt;/p&gt;

&lt;p&gt;Then the personal Pro lockout happened, and suddenly the background noise was the only thing in the room.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.petervanonselen.com%2Fassets%2Fgrand-plugin-trap%2Fdarkness.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.petervanonselen.com%2Fassets%2Fgrand-plugin-trap%2Fdarkness.png" alt="darkness" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2 id="doing-the-unglamorous-thing"&gt;Doing the unglamorous thing&lt;/h2&gt;

&lt;p&gt;So I went and found &lt;a href="https://github.com/Th0rgal/open-ralph-wiggum" rel="noopener noreferrer"&gt;open-ralph-wiggum&lt;/a&gt;, worked out how to wire it up properly, and wrote &lt;a href="https://github.com/vanonselenp/zsh-functions/blob/main/functions/ralph-loop.zsh" rel="noopener noreferrer"&gt;a small zsh function&lt;/a&gt; that wraps it so I can just type &lt;code class="language-plaintext highlighter-rouge"&gt;ralph-loop&lt;/code&gt; from any project directory and have the thing kick off without me having to remember any flags. None of this was hard. None of this was interesting. It was the kind of work I had been actively avoiding because I’d already spent a week earlier that month fiddling around with Codex and OpenCode and trying to make various things play nicely together, and the last thing I wanted was &lt;em&gt;more&lt;/em&gt; yak shaving.&lt;/p&gt;

&lt;p&gt;But here’s the thing about doing it during a forced lockout, with nothing else to distract me. There was nothing else to do. So I sat with it. And once it was done, I had a Ralph loop that ran on top of OpenCode, with GPT-5.4, completely independent of whether Claude Code was up, down, or rate-limited into oblivion. The wrapper meant I could move between harnesses without rebuilding anything. The script lived in my dotfiles. It was just &lt;em&gt;there&lt;/em&gt;.&lt;/p&gt;
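&lt;p&gt;For flavour, here is a minimal sketch of the same idea. This is &lt;em&gt;not&lt;/em&gt; the linked implementation; the function name, the &lt;code class="language-plaintext highlighter-rouge"&gt;prd.json&lt;/code&gt; default, and the &lt;code class="language-plaintext highlighter-rouge"&gt;RALPH_HARNESS&lt;/code&gt; variable are all illustrative assumptions.&lt;/p&gt;

```shell
# Illustrative sketch of a harness-agnostic Ralph loop wrapper (not the
# linked implementation). It checks for a PRD file in the current
# directory, picks a harness from a hypothetical environment variable,
# and reports what it would hand off to -- the actual agent invocation
# is deliberately elided.
ralph_loop() {
  prd_file="${1:-prd.json}"             # hypothetical default filename
  harness="${RALPH_HARNESS:-opencode}"  # hypothetical harness override
  if [ ! -f "$prd_file" ]; then
    echo "ralph-loop: no $prd_file in $(pwd)" >&2
    return 1
  fi
  echo "Starting Ralph loop: harness=$harness prd=$prd_file"
  # ...invoke the actual loop script here with $harness and $prd_file
}
```

&lt;p&gt;The point isn’t the script. The point is that it runs the same way regardless of which harness happens to be underneath it that day.&lt;/p&gt;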

&lt;h2 id="the-real-prize"&gt;The real prize&lt;/h2&gt;

&lt;p&gt;I’ve &lt;a href="https://www.petervanonselen.com/2026/04/03/the-council-will-see-you-now/" rel="noopener noreferrer"&gt;written before&lt;/a&gt; about how each AI harness has its own personality. Claude Code thinks differently from Codex thinks differently from OpenCode, and a lot of that personality lives in the harness rather than the model. I still believe that. But what I hadn’t fully clocked, until this week, is that everything I’d written about harness personalities had come out of painful, manual copy-paste exercises, because the plugin had me boxed into one of them.&lt;/p&gt;

&lt;p&gt;Knowing the council exists is one thing. Being able to actually convene it on a Tuesday afternoon while you’re trying to ship something is another. The wrapper script is the thing that closes that gap. It makes meaningful agentic workflows easy to run in any harness.&lt;/p&gt;

&lt;p&gt;That’s the prize. Not the lockout workaround. Not the bash script. The portability that lets the multi-harness thing actually be a way of working rather than an essay.&lt;/p&gt;

&lt;h2 id="what-im-sitting-with"&gt;What I’m sitting with&lt;/h2&gt;

&lt;p&gt;I’m going to keep using plugins. They’re genuinely useful and I’m not about to LARP as someone too principled to install convenient things. But I’m going to be more suspicious of how easy they make the first ninety seconds feel, because I now have a much clearer sense of what they cost on the back end. Every plugin ecosystem is a small gravity well. The more you commit, the harder it is to leave, and, this is the part that bothers me most, the less you can even see what you’re missing on the outside.&lt;/p&gt;

&lt;p&gt;The unglamorous wrapper script turns out to be a small act of resistance against that. Not a heroic one. Just a vote for keeping the exits visible.&lt;/p&gt;

&lt;p&gt;I’d rather have the exits visible.&lt;/p&gt;

</description>
      <category>aios</category>
      <category>claudecode</category>
      <category>softwarecraftsmanship</category>
    </item>
    <item>
      <title>The Council Will See You Now…</title>
      <dc:creator>Peter van Onselen</dc:creator>
      <pubDate>Fri, 03 Apr 2026 08:00:00 +0000</pubDate>
      <link>https://forem.com/peter_vanonselen_e86eab6/the-council-will-see-you-now-3p5a</link>
      <guid>https://forem.com/peter_vanonselen_e86eab6/the-council-will-see-you-now-3p5a</guid>
      <description>&lt;p&gt;&lt;em&gt;You were the chosen one! You were supposed to destroy the hallucinations, not join them!&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2wfbhwvgzwj1ouarx0de.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2wfbhwvgzwj1ouarx0de.png" alt="The council" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I use multiple AI agents as an architectural review council. When I said that out loud recently, I got the look. You know the one. The polite nod that says “I have no idea what you just said but I’m going to smile and move on.”&lt;/p&gt;

&lt;p&gt;So here’s the footnote.&lt;/p&gt;

&lt;h2 id="the-setup"&gt;The setup&lt;/h2&gt;

&lt;p&gt;I’m currently juggling two things at work that have no business being juggled at the same time.&lt;/p&gt;

&lt;p&gt;The first is a set of deeply tangled bugs that have been lurking since July 2024. They’re tied to a third-party integration, they’re interconnected, and we’re only now seeing the full scope of how bad they are. This is slow, careful, multi-day investigation work. Long-form conversations with AI. Reading code. Writing tests to verify behaviour. Building the case for “this is exactly where the problem is and this is exactly why.”&lt;/p&gt;

&lt;p&gt;The second is supporting our engineering manager to enable an external contracting team to deliver a new feature in our codebase. The contractors have never touched our code before. They don’t have a clear picture of the requirements, the systems, or how everything interacts. The depth they need to make meaningful architectural decisions is broad, deep, and nuanced. I’ve worked in the systems, but I don’t have enough depth to answer all their questions off the top of my head.&lt;/p&gt;

&lt;p&gt;But what I do have is access to Claude, Claude Code, GitHub Copilot, and Codex.&lt;/p&gt;

&lt;h2 id="the-council-assembled"&gt;The council, assembled&lt;/h2&gt;

&lt;p&gt;I should back up. For months now, in my personal life, I’ve been asking the same questions to ChatGPT, Gemini, and Claude interchangeably. I call them my council. Each one has a different personality, notices different things, and observes different angles. I find that when I’m getting multiple opinions I make better decisions. This is just me applying the same instinct to my workspace.&lt;/p&gt;

&lt;p&gt;It started with a simple question: we have a third-party payment provider that offers a payment method, and we have a number of integrations between us and them. The contracting team needed to understand how we use it. How do we integrate with backend services? Where are the bits that are backend-for-frontend versus actual backend platform services? How do all of these systems interact? Where are all the endpoints?&lt;/p&gt;

&lt;p&gt;I spent a day and a half in long-form conversations with multiple AI systems interrogating the problem. I started at the web-facing entry point and worked backwards. I trawled Confluence, Slack, Google Drive, and every other form of long-term documentation to build a picture of what the contracting team was going to need. Then I took all of that context, the goals of the team, and the documentation, and used it to structure a comprehensive prompt.&lt;/p&gt;

&lt;p&gt;I ran that prompt through three different AI harnesses: Codex, Claude (via the web), and Claude Code running Opus. Each one went away, investigated the same repositories, and came back with structured answers. Then I took those structured answers, went and explored the code myself, used the hints they’d given me to validate everything, and wrote up a comprehensive document explaining the lot.&lt;/p&gt;

&lt;h2 id="naive-me-thought-job-done"&gt;Naive me thought job done&lt;/h2&gt;

&lt;p&gt;Obviously I was not done. You’d think by now I’d know better.&lt;/p&gt;

&lt;p&gt;A week later the contracting team came back with “cool, we have a plan.” They’d taken everything I’d given them, created architectural diagrams, Confluence documents, and actual thinking. They wanted me to review whether their approach would work.&lt;/p&gt;

&lt;p&gt;So I did the same thing again. Took their documents, the context I’d already built, and pointed all three AI harnesses at the relevant repositories, not just mine but across four different teams’ repos, backend services and frontend services and everything in between. I had each one validate whether what the contractors were proposing would actually work. Then I took the outputs from all three, wrote them to file, and had a fourth agent (OpenCode running Opus 4.6) synthesise a combined result. I used that synthesis to structure my response back to the team.&lt;/p&gt;

&lt;p&gt;I’ve now done this process three times. Here’s how it works:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;The Council Process&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Gather context and documentation&lt;/li&gt;
&lt;li&gt;Structure a comprehensive prompt&lt;/li&gt;
&lt;li&gt;Run the same prompt through multiple AI agents and let each investigate the repositories independently&lt;/li&gt;
&lt;li&gt;Save their structured outputs&lt;/li&gt;
&lt;li&gt;Run a synthesis agent across all results&lt;/li&gt;
&lt;li&gt;Validate manually: use the AI outputs as a map for where to look, read the code yourself, and run quick targeted questions past other engineers&lt;/li&gt;
&lt;/ol&gt;
&lt;/blockquote&gt;
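&lt;p&gt;The mechanical middle of that process (steps 3–5) is simple enough to sketch. Everything below is illustrative: each council member is a placeholder command that reads the prompt on stdin and writes its answer on stdout, because the real non-interactive invocations differ per tool.&lt;/p&gt;

```shell
# Sketch of steps 3-5 of the council process: fan one prompt out to
# several harness commands, save each structured answer to its own file,
# then concatenate the answers as input for a synthesis pass. Each
# command is assumed to read the prompt on stdin and write its answer
# on stdout; the real CLI invocations will differ per harness.
run_council() {
  prompt_file="$1"
  outdir="$2"
  shift 2
  mkdir -p "$outdir"
  i=0
  for cmd in "$@"; do
    i=$((i + 1))
    # Each council member investigates independently from the same prompt
    sh -c "$cmd" < "$prompt_file" > "$outdir/answer-$i.md"
  done
  # The combined file becomes the input to the synthesis agent (step 5)
  cat "$outdir"/answer-*.md > "$outdir/combined.md"
}
```

&lt;p&gt;The synthesis agent then gets pointed at &lt;code class="language-plaintext highlighter-rouge"&gt;combined.md&lt;/code&gt;, and step 6, the manual validation, stays with you.&lt;/p&gt;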

&lt;h2 id="where-the-real-value-lives"&gt;Where the real value lives&lt;/h2&gt;

&lt;p&gt;The synthesis step is where the magic happens. It’s not just about getting three answers. It’s about what happens when you put them next to each other.&lt;/p&gt;

&lt;p&gt;The synthesising agent highlighted where all three harnesses were in agreement, which gave me confidence. But more importantly, it highlighted where they’d noticed different pieces of the problem. Even though they were looking at the same repositories and most likely using the same underlying tools, they ended up pulling out different things. Codex might flag an endpoint I hadn’t considered. Claude Code might trace a data flow the others glossed over. The breadth of coverage from running three agents was meaningfully wider than any single one.&lt;/p&gt;

&lt;p&gt;This also feeds into something that should be obvious but bears repeating: AI hallucinates. You can’t put 100% of your trust in any single answer. When you need accurate architectural understanding, having multiple agents give you a synthesis that you then validate yourself is genuinely useful. It’s not a replacement for reading the code. It’s a way to read the code faster and know where to look.&lt;/p&gt;

&lt;h2 id="the-tools-have-personalities"&gt;The tools have personalities&lt;/h2&gt;

&lt;p&gt;Here’s something I find fascinating, and I’m not the only one. Another principal engineer I know has noticed the same thing.&lt;/p&gt;

&lt;p&gt;Codex is the grumpy pragmatist. It goes away, gets some stuff done, comes back, and tells you the bare minimum you need to know. Not a single detail more. Bullet points, to the letter, done. That’s fine. That’s exactly what you’d expect from a tool optimised for task completion.&lt;/p&gt;

&lt;p&gt;Claude, given the exact same prompt via the web with Opus, comes back reading like a chatty engineer. A bit scattered, a bit flowery, but thorough. You’ll get everything you need, it’ll just need a bit of back-and-forth to extract it cleanly.&lt;/p&gt;

&lt;p&gt;But here’s the interesting bit: OpenCode, regardless of which model it’s running underneath, whether that’s Opus or GPT-5.4, tends to give better structured results than either of the first-party platforms running the same models. The investigations are better organised. The outputs are clearer. The intent comes through more directly. I’m finding this is true when comparing Claude Code to OpenCode running Opus, and it’s also true when comparing Codex to OpenCode running GPT-5.4.&lt;/p&gt;

&lt;h2 id="the-harness-matters-more-than-the-model"&gt;The harness matters more than the model&lt;/h2&gt;

&lt;p&gt;The scaffolding around the model, how it structures tool calls, how it formats its output, how it organises an investigation, is doing more heavy lifting than people assume. That’s a genuinely counterintuitive finding. The same model, in a different harness, produces meaningfully different quality of output. If you’re only evaluating models, you’re missing half the picture.&lt;/p&gt;

&lt;h2 id="ai-expands-capacity-not-energy"&gt;AI expands capacity, not energy&lt;/h2&gt;

&lt;p&gt;I’ll be honest. By the end of every week right now, I am flattened. Exhausted. Mentally, emotionally, everything. Gone.&lt;/p&gt;

&lt;p&gt;These tools have enabled me to explore and understand systems at a superficial level far faster than I ever could otherwise. To get the depth I needed for these handover documents would have taken weeks of investigation. I did it in hours. That’s real. That’s meaningful. That capacity expansion let me keep working on high-priority deep-dive bugs while simultaneously supporting an external team and ensuring they had enough context to be unblocked and start working independently.&lt;/p&gt;

&lt;p&gt;But working with AI at full tilt is cognitively expensive in ways that people underestimate. You’re doing more, faster, and that uses more of your mental energy than you think. AI gave me the capacity to do work that would’ve been impossible to fit in otherwise. It did not give me more energy to do it with.&lt;/p&gt;

&lt;h2 id="still-experimenting"&gt;Still experimenting&lt;/h2&gt;

&lt;p&gt;I’ve run this playbook three times now without changing it. Same process each time. I haven’t tried to refine it or automate it or build a harness around it yet, though it’s in the back of my mind. I actually started building a mobile app a while back to formalise the council concept for personal use, but got distracted because, well, reasons.&lt;/p&gt;

&lt;p&gt;So what’s the lesson? Don’t trust one AI. Use a council. Get them to validate each other. Get them to look for reasons they’re wrong. Use the synthesis of multiple perspectives to build confidence in your understanding, then go validate it yourself.&lt;/p&gt;

&lt;p&gt;I’m still not entirely convinced this is the best strategy. But it is letting me do things I could not have done otherwise, and right now that’s enough. The future of AI-assisted engineering might not be a better model. It might be a better council.&lt;/p&gt;

</description>
      <category>aios</category>
      <category>claudecode</category>
      <category>softwarecraftsmanship</category>
    </item>
    <item>
      <title>The AI Dev Tooling Setup I Actually Recommend to Coworkers (Early 2026)</title>
      <dc:creator>Peter van Onselen</dc:creator>
      <pubDate>Mon, 30 Mar 2026 07:56:07 +0000</pubDate>
      <link>https://forem.com/peter_vanonselen_e86eab6/the-ai-dev-tooling-setup-i-actually-recommend-to-coworkers-4mom</link>
      <guid>https://forem.com/peter_vanonselen_e86eab6/the-ai-dev-tooling-setup-i-actually-recommend-to-coworkers-4mom</guid>
      <description>&lt;p&gt;A coworker asked me how to get started with AI tooling, and I realised my answer might be useful to others. Here's roughly how I think about it as of early 2026. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Terminal-based AI agents (pick one)&lt;/strong&gt;&lt;br&gt;
At a bare minimum, ditch VS Code Copilot and get OpenCode. You can log into Copilot through OpenCode and you get a genuinely decent AI agent harness without all the VS Code bloat. Otherwise, install Codex CLI or Claude Code, whichever you have access to. If you go with Claude Code, look into the skills and extensions available.&lt;/p&gt;

&lt;p&gt;No matter which CLI tool you use, investigate the PRD skill from the Ralph Wiggum loop (&lt;a href="https://github.com/snarktank/ralph" rel="noopener noreferrer"&gt;https://github.com/snarktank/ralph&lt;/a&gt;). It's really cool.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Git worktrees&lt;/strong&gt;&lt;br&gt;
Spend some time learning git worktrees. They are a lifesaver when working with long-running agents in terminals. Write yourself a script that lets you create or switch to a worktree quickly. Here is one I hacked together that makes this considerably easier than the raw &lt;code class="language-plaintext highlighter-rouge"&gt;git worktree&lt;/code&gt; command line: (&lt;a href="https://github.com/vanonselenp/dotfiles/blob/main/gz.zsh" rel="noopener noreferrer"&gt;https://github.com/vanonselenp/dotfiles/blob/main/gz.zsh&lt;/a&gt;).&lt;/p&gt;
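&lt;p&gt;A hypothetical minimal version of such a helper, just to give a sense of the shape (the linked &lt;code class="language-plaintext highlighter-rouge"&gt;gz.zsh&lt;/code&gt; does considerably more):&lt;/p&gt;

```shell
# Hypothetical minimal worktree helper: create-or-switch in one command.
# Keeps each branch's worktree as a sibling directory named repo-branch.
wt() {
  branch="$1"
  toplevel="$(git rev-parse --show-toplevel)"
  dir="$(dirname "$toplevel")/$(basename "$toplevel")-$branch"
  if [ -d "$dir" ]; then
    cd "$dir"                                   # worktree exists: just switch
  else
    git worktree add -b "$branch" "$dir" && cd "$dir"
  fi
}
```

&lt;p&gt;One command, and the agent running in that directory never has to share a checkout with anything else you’re doing.&lt;/p&gt;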

&lt;p&gt;&lt;strong&gt;Context gathering (this is the big one)&lt;/strong&gt;&lt;br&gt;
If you have access to ChatGPT or Claude.ai, connect every internal tool and system your organisation offers. These platforms can scan across Google Drive, Confluence, Jira, and other sources to assemble the context you need when working on something new. It's a massive time saver.&lt;/p&gt;

&lt;p&gt;Also, if you're in meetings with Gemini (or any tool that supports it), keep your transcripts. They're incredibly useful for generating additional context later.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;One last tip&lt;/strong&gt;&lt;br&gt;
Skip the GitHub MCP. Your agent already knows how to use the GitHub CLI, and it won't pollute your context window the way MCP integrations tend to.&lt;/p&gt;

&lt;p&gt;That should be enough to get started. Happy to answer questions in the comments.&lt;/p&gt;

&lt;p&gt;PS: if you happen to find a good cli for accessing Confluence and Jira, let me know!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>cli</category>
      <category>productivity</category>
      <category>tooling</category>
    </item>
    <item>
      <title>The Smell of Panic When You Context Thrash</title>
      <dc:creator>Peter van Onselen</dc:creator>
      <pubDate>Tue, 24 Mar 2026 08:00:00 +0000</pubDate>
      <link>https://forem.com/peter_vanonselen_e86eab6/the-smell-of-panic-when-you-context-thrash-3mop</link>
      <guid>https://forem.com/peter_vanonselen_e86eab6/the-smell-of-panic-when-you-context-thrash-3mop</guid>
      <description>&lt;p&gt;&lt;em&gt;High, high hopes for the code, shooting for a PR when I couldn’t even make a commit…&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.petervanonselen.com%2Fassets%2Fpanic.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.petervanonselen.com%2Fassets%2Fpanic.png" alt="Panic at the keyboard" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Over the past few weeks I have been increasingly panicked. That’s probably the only honest way to frame it.&lt;/p&gt;

&lt;p&gt;I’ve been juggling a lot. Supporting a team building out a new payment method. Handing over deep knowledge of an existing payment method to another team working outside our scope but integrating with systems we own. Helping yet another team figure out how to break a backend platform service from a monolith into microservices. And somewhere in between all of that, trying to actually deliver a feature myself: a seemingly straightforward change to one of our frontend purchase journeys to make it faster.&lt;/p&gt;

&lt;p&gt;Each of those things individually requires serious context. Together they’ve been chewing up my headspace and pulling me in every direction. Which means that whenever I finally sat down to write actual code, I arrived in a state of absolute panic. The “oh my gosh I have so little time, move quickly, move quickly, move quickly” kind of survival mode.&lt;/p&gt;

&lt;p&gt;And that’s when I made one of the most fundamental mistakes you can make with AI-assisted development.&lt;/p&gt;

&lt;h2 id="the-50-file-disaster"&gt;The 50-File Disaster&lt;/h2&gt;

&lt;p&gt;Here’s the thing about AI-generated code: it’s easy. So easy that it’s almost impossible to remember just how easy it is, because you’ve spent a lifetime handcrafting code yourself. You carry this default assumption that building things is complicated and slow.&lt;/p&gt;

&lt;p&gt;So I did what I thought was the right thing. I planned. I looked through the existing code. I thought about it. I wrote out detailed acceptance criteria. I thought about the problem from the AI’s perspective. And then I said “cool, I think I have enough” and started implementing.&lt;/p&gt;

&lt;p&gt;By the time I got someone else to look at it, they pointed out it was missing a key behaviour. I’d misunderstood part of the acceptance criteria. OK, fine. I started trying to fix it.&lt;/p&gt;

&lt;p&gt;And then trying to fix it. And trying to fix it.&lt;/p&gt;

&lt;p&gt;What was a small hole became a deep hole became a nightmare became “why does it feel like I will never, ever get anywhere with this?” By the end of it I was touching something like 50 files across two repos. Small changes scattered everywhere, most of them not even really needed. All driven by the panic override of needing to get something done while constantly being pulled out of context and back in and out and back in until I was just thrashing. Burning cycles. Making zero progress.&lt;/p&gt;

&lt;h2 id="panic-is-a-smell"&gt;Panic Is a Smell&lt;/h2&gt;

&lt;p&gt;If you’ve been in software long enough you know what a code smell is. Something that isn’t technically broken but tells you something deeper is wrong.&lt;/p&gt;

&lt;p&gt;The panic to get things done is a smell. The pushing and pushing and pushing is a smell. The feeling that you don’t have enough time, that you have to ship something right now, that you can’t afford to slow down? That’s a smell. And I ignored it for way too long.&lt;/p&gt;

&lt;p&gt;Because here’s the lesson I apparently need to keep re-learning: with AI-assisted development, &lt;em&gt;the writing of code is not the bottleneck&lt;/em&gt;. It never was. The understanding is the bottleneck. And when you’re panicking, you skip the understanding to get to the doing, which is exactly backwards.&lt;/p&gt;

&lt;h2 id="the-reset"&gt;The Reset&lt;/h2&gt;

&lt;p&gt;I eventually stopped. Stepped away from the mess. Started with a brand new repository. Took all the things I’d learned, the plan document, the acceptance criteria, everything. And then I spent an entire day in conversation with an AI. Not writing code. Just investigating.&lt;/p&gt;

&lt;p&gt;Testing existing behaviour. Running multiple examples and execution paths. Making sure I had a precise, clear understanding of what the current system actually did, what the new behaviour needed to be, and all the various paths between them. I literally spent hours asking the AI to explain each step of its plan and justify why it chose that approach.&lt;/p&gt;

&lt;p&gt;I say all the time that planning matters more than coding. But experiencing the contrast firsthand is different. Hours of slow, methodical, back-and-forth investigation. Deep thinking about context. Deep thinking about what you’re trying to do and why. So that when you, &lt;em&gt;the human in the loop&lt;/em&gt;, actually ask the AI to build something, the full context of what you’re trying to achieve is sitting clearly in your head. You understand the user behaviour. The system interactions. The high-level architecture. You could draw all the diagrams because you actually understand what needs to be done.&lt;/p&gt;

&lt;p&gt;The feature that had consumed a week and a half of thrashing? After that day of planning, it took a couple of hours to get something working correctly.&lt;/p&gt;

&lt;h2 id="the-council-of-ais-or-going-wide"&gt;The Council of AIs (or: Going Wide)&lt;/h2&gt;

&lt;p&gt;Meanwhile, on the other side of my work life, I’ve been doing something completely different with AI tooling.&lt;/p&gt;

&lt;p&gt;To support the contracting team building out a new feature, I’ve been running what is essentially a council of AIs to review their design documents. OpenCode, Codex CLI, and Claude Code running simultaneously so I can verify, validate, and cross-compare. Deep-dive analysis with Claude and ChatGPT for architectural decisions and historical context. Complex investigation into bugs that were first logged two years ago and never properly resolved.&lt;/p&gt;

&lt;p&gt;I have been holding the context of a massive amount of different workstreams. Work that would have taken me days or weeks to get even a baseline understanding of. The AI tooling genuinely lets you go wide in a way that wasn’t possible before.&lt;/p&gt;

&lt;p&gt;And that’s where the tension lives.&lt;/p&gt;

&lt;h2 id="shield-and-sword"&gt;Shield and Sword&lt;/h2&gt;

&lt;p&gt;The honest truth is that I’ve been doing two very different jobs at the same time.&lt;/p&gt;

&lt;p&gt;One job is the shield: absorbing context, running investigations, unblocking other teams, reviewing designs, holding the big picture so nobody else has to. The AI tooling makes this possible. It lets you hold 10x the context. You can pre-empt meetings by using Claude to pull together context and solve problems before the meeting even happens, cancelling two or three in a morning and buying yourself hours of uninterrupted time. You can run parallel investigations across multiple tools and hold the full picture of what’s going on across an entire programme of work.&lt;/p&gt;

&lt;p&gt;The other job is the sword: actually sitting down and delivering a piece of working software. And that requires the opposite of going wide. It requires going deep. Slow. Methodical. Boring, even.&lt;/p&gt;

&lt;p&gt;The AI enables both.&lt;/p&gt;

&lt;p&gt;But your brain can’t do both at the same time.&lt;/p&gt;

&lt;p&gt;When you try, you thrash. You burn cycles switching between deep and wide, and just like a thrashing computer, you end up doing a lot of work and making no progress.&lt;/p&gt;

&lt;h2 id="what-im-taking-away"&gt;What I’m Taking Away&lt;/h2&gt;

&lt;p&gt;Two things, and they’re in tension with each other, and I’m OK with that.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Go deep before you go fast.&lt;/strong&gt; Planning with AI isn’t just “write a spec and hand it over.” It’s hours of investigation. It’s asking the AI to explain its own plan in painful detail. It’s making sure you understand the problem so well you could solve it by hand. The code is the easy part. The understanding is the work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI lets you hold a lot of context, but your brain still has limits.&lt;/strong&gt; Context switching costs the same as it always did. Maybe more, because the AI makes it tempting to take on everything. You can hold the shield and the sword, but not at the same time. Deliberately buying yourself blocks of deep time is not optional. It’s the whole game.&lt;/p&gt;

&lt;p&gt;This is the new trap for senior engineers. AI lets you take on more surface area than ever before. But the work that actually ships still requires deep focus. Nobody is immune to thrashing, no matter how good the tooling gets.&lt;/p&gt;

&lt;p&gt;And if you’re sitting there right now, pushing and pushing and panicking and feeling like you’ll never get there? That’s a smell. Stop. Step away. Start again with understanding, not urgency.&lt;/p&gt;

</description>
      <category>aios</category>
      <category>claudecode</category>
      <category>softwarecraftsmanshi</category>
    </item>
    <item>
      <title>The Feedback That Doesn’t Care About Your Title</title>
      <dc:creator>Peter van Onselen</dc:creator>
      <pubDate>Tue, 17 Mar 2026 08:00:00 +0000</pubDate>
      <link>https://forem.com/peter_vanonselen_e86eab6/the-feedback-that-doesnt-care-about-your-title-j2n</link>
      <guid>https://forem.com/peter_vanonselen_e86eab6/the-feedback-that-doesnt-care-about-your-title-j2n</guid>
      <description>&lt;p&gt;&lt;em&gt;What doesn’t kill you makes you stronger … right?&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;I’ve been writing this blog for a while now. I’ve documented scope creep spirals, the joy of deleting code I spent weeks writing, and the slow painful education of learning to work with AI agents without letting them run off a cliff. If you’ve been following along, you know the theme by now: I learn things the hard way and then write about it so you don’t have to. Or at least so you can watch.&lt;/p&gt;

&lt;p&gt;Yesterday I was explaining to a colleague how I use Gemini to give me personalised feedback on my performance after meetings. He’s a staff engineer, someone who’s genuinely deep into AI-assisted coding, not a tourist. And even he stopped and went: “Wait, you’re doing &lt;em&gt;what&lt;/em&gt;?”&lt;/p&gt;

&lt;p&gt;That reaction made me step back. Because I hadn’t really sat down and thought about what I’d actually built over the past year. I’d been solving problems one at a time: better code generation here, better context gathering there, a way to get honest feedback on myself over there. And somewhere along the way it had become something bigger. A system. A layer underneath how I work that I now can’t imagine working without.&lt;/p&gt;

&lt;p&gt;So this is me trying to describe what that looks like, now that I’ve finally noticed it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.petervanonselen.com%2Fassets%2Ffeedback-that-kills-you%2Fimage.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.petervanonselen.com%2Fassets%2Ffeedback-that-kills-you%2Fimage.png" alt="the layers" width="800" height="611"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2 id="the-code-layer-get-off-the-autocomplete"&gt;The code layer: get off the autocomplete&lt;/h2&gt;

&lt;p&gt;If you’re writing code with an AI assistant inside your IDE, Copilot in VS Code for instance, I’d gently suggest you’re missing the better experience. CLI agents like Claude Code, Codex CLI, and OpenCode have fundamentally changed how I interact with code. OpenCode has become my go-to because it works well straight out of the box and, crucially, it connects to Copilot’s backend models. If your company already pays for Copilot, OpenCode might be the unlock you didn’t know you were waiting for.&lt;/p&gt;

&lt;p&gt;The shift matters because it changes what the AI is doing. Inside an IDE, it’s autocomplete with delusions of grandeur, guessing what you want line by line. On the command line, I’m describing problems, analysing architecture, pulling systems apart, generating diagrams. It stops being a typing assistant and starts being a thinking partner.&lt;/p&gt;

&lt;p&gt;I use this for everything that touches code directly: writing it, reviewing it, debugging, breaking things apart to understand them, building architecture diagrams. The lot.&lt;/p&gt;

&lt;h2 id="the-context-layer-taming-the-organisational-scatter"&gt;The context layer: taming the organisational scatter&lt;/h2&gt;

&lt;p&gt;Every engineer in a large organisation knows this pain. The information you need to do your work lives in seven different places: Slack threads, Google Meet recordings, Confluence pages, Jira tickets, GitHub PRs, and at least two places nobody told you about. You spend half your time assembling a coherent picture before you can even start thinking.&lt;/p&gt;

&lt;p&gt;ChatGPT and Claude’s enterprise integrations have changed this for me. Both allow you to connect to corporate tools, your chat platform, docs, issue tracker, source control, and pull context into a single conversation. Instead of trawling through three Slack channels and two Confluence pages for forty minutes, I pull it all together and ask: what does this ticket actually need? What are the acceptance criteria? What am I missing?&lt;/p&gt;

&lt;p&gt;Here’s where it compounds. Good acceptance criteria from this layer mean better prompts for the coding agents. The layers feed each other. I didn’t design it that way, it just happened once the pieces were in place.&lt;/p&gt;

&lt;h2 id="the-mirror-feedback-that-doesnt-care-about-your-feelings"&gt;The mirror: feedback that doesn’t care about your feelings&lt;/h2&gt;

&lt;p&gt;This is the hard one to talk about. And the one that made my colleague stop in his tracks.&lt;/p&gt;

&lt;p&gt;We’re a remote organisation. Google Meet is where everything happens, and Gemini sits inside every meeting. Most people don’t turn on transcriptions, which I think is a mistake. Any meeting producing collective knowledge should generate a transcript. Those transcripts feed the context layer above.&lt;/p&gt;

&lt;p&gt;But there’s another use that took me months to work up to.&lt;/p&gt;

&lt;p&gt;After meetings where I’m an active participant, I ask Gemini: as a staff engineer, what went well, what didn’t go well, and what can I improve on?&lt;/p&gt;

&lt;p&gt;The first few times, it was rough.&lt;/p&gt;

&lt;p&gt;Here’s the thing about feedback from humans: you almost never get anything useful. You either get “yeah, that was fine” or something so carefully hedged that whatever kernel of truth was in there has been sanded down to nothing. I’ve rarely received feedback that was specific, actionable, and tied directly to something I actually did in a real moment.&lt;/p&gt;

&lt;p&gt;Gemini doesn’t do hedging. It references specific things that happened in the meeting. “You reframed the argument here and it shifted the conversation constructively.” Or: “You weren’t listening here and this is where it cost you.” It once told me that while I’d handled a frustrated colleague well, I could have spotted the frustration earlier and intervened before it escalated, and that when another colleague was dismissive, I’d recovered well but could have prepared for that reaction. Specific. Contextual. Minutes after it happened.&lt;/p&gt;

&lt;p&gt;When I explained this to my colleague yesterday, he asked: “Isn’t this just seeking perfection?” And I realised, no. It’s just a way to learn and grow and become more deliberate about how I communicate, how I lead, and how I interact with the people around me. You can’t improve what you don’t measure. This is measuring.&lt;/p&gt;

&lt;p&gt;But here’s what really made me think this is bigger than my own little experiment. I told a principal engineer friend at another company about this approach. He had a difficult conversation coming up, recorded it with Gemini, and afterwards used the transcript to get actionable feedback on how he’d handled it. His reaction was genuine shock. He’d never had that clear a picture of how his conduct was landing. An engineering manager I know has started doing the same thing and describes it as brutal but the most meaningful feedback he’s received in years.&lt;/p&gt;

&lt;p&gt;And I think there’s a reason for that. I remember chatting with a startup CEO at a meetup who made the observation that the higher you go in leadership, the less honest feedback you receive. The position of power makes it hard for people to cross that barrier. Gemini doesn’t have any concept of your title or your seniority. It just tells you what it saw.&lt;/p&gt;

&lt;p&gt;In the beginning, every session felt like a wake-up call. After months of doing this consistently, keeping a log, reading it back, it softened. Not because the feedback got less honest, but because the gap between what I thought I was doing and what I was actually doing got narrower. Fewer surprises. More gentle nudges, fewer gut punches.&lt;/p&gt;

&lt;h2 id="so-what-is-this-actually"&gt;So what is this, actually?&lt;/h2&gt;

&lt;p&gt;None of these tools alone would be worth a blog post. A CLI coding agent is nice. Enterprise AI integrations save time. AI self-reflection is powerful but weird. What caught me off guard, what I only noticed yesterday when I saw my colleague’s reaction, is that they work as a system.&lt;/p&gt;

&lt;p&gt;Meeting transcripts feed the context layer. The context layer produces better acceptance criteria. Better acceptance criteria drive better output from the coding agents. The self-improvement loop makes me more effective in the meetings that generate the transcripts. Each layer feeds the others. I didn’t plan it. I just kept solving problems and the connections emerged.&lt;/p&gt;

&lt;p&gt;There’s a Sam Altman interview from about a year ago where he describes people using AI as “an operating system for how they think.” At the time I had absolutely no idea what he meant. Now I think I do, and the uncomfortable truth is that I’m probably barely scratching the surface of where this goes.&lt;/p&gt;

&lt;p&gt;So here’s my takeaway action for you: next time you’re in a meeting with a transcript, ask an LLM for some honest feedback. Let me know if you learn anything interesting!&lt;/p&gt;

&lt;p&gt;I’m still figuring it out. As usual, you’ll hear about it when I do.&lt;/p&gt;

</description>
      <category>aios</category>
      <category>claudecode</category>
      <category>softwarecraftsmanshi</category>
    </item>
    <item>
      <title>How I Learnt to Stop Worrying and Love Agentic Katas</title>
      <dc:creator>Peter van Onselen</dc:creator>
      <pubDate>Thu, 05 Mar 2026 08:00:00 +0000</pubDate>
      <link>https://forem.com/peter_vanonselen_e86eab6/how-i-learnt-to-stop-worrying-and-love-agentic-katas-1ob9</link>
      <guid>https://forem.com/peter_vanonselen_e86eab6/how-i-learnt-to-stop-worrying-and-love-agentic-katas-1ob9</guid>
      <description>&lt;p&gt;&lt;em&gt;I don’t know how to teach this. But I think I’ve figured out how to practice it…&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;Have you ever struggled to get started with something new—not because the thing itself is hard, but because the &lt;em&gt;shape&lt;/em&gt; of how to learn it isn’t clear? That’s where I’ve been stuck with agentic coding. Not the doing of it. I’ve been doing it for months. The teaching of it. The “how do I help someone else get started” of it.&lt;/p&gt;

&lt;p&gt;And then I remembered code retreats. And katas. And the way I actually learned TDD all those years ago—not from a book, but from structured practice with low stakes and room to play.&lt;/p&gt;

&lt;p&gt;So I built a set of agentic katas: structured coding exercises designed specifically for practising AI-assisted development. Not traditional katas—those are too small and the AI already knows all the answers. These are bigger, meatier problems in unfamiliar domains that force you to engage with the full process of working alongside an agent.&lt;/p&gt;

&lt;p&gt;Let me explain how I got here.&lt;/p&gt;

&lt;h2 id="a-brief-history-of-practising-on-purpose"&gt;A Brief History of Practising on Purpose&lt;/h2&gt;

&lt;p&gt;Early in my career, code retreats were the thing that taught me test-driven development. Not a book. Not a course. A structured, all-day event where you solve Conway’s Game of Life over and over again, each time with different constraints. Maybe your pair is actively trying to &lt;em&gt;not&lt;/em&gt; solve the problem. Maybe you’re strictly ping-ponging. Maybe you delete your code every 45 minutes.&lt;/p&gt;

&lt;p&gt;The point was never to solve Conway’s Game of Life. The point was to internalise the patterns and practices of TDD by giving yourself a safe space to experiment. No production pressure. No deadlines. Just play.&lt;/p&gt;

&lt;p&gt;Code katas grew out of the same ethos—small, self-contained problems that shouldn’t take more than an hour or two. The algorithm doesn’t matter. How you choose to solve it does. They’re bite-sized by design. They’re not supposed to be hard. They’re supposed to be &lt;em&gt;practice&lt;/em&gt;.&lt;/p&gt;

&lt;h2 id="the-problem-with-katas-and-ai"&gt;The Problem with Katas and AI&lt;/h2&gt;

&lt;p&gt;Here’s the thing I’ve been struggling with: traditional code katas don’t work for learning agentic development. They’re too small. The LLM has already seen every solution to FizzBuzz and Roman Numerals in its training data. You’re not practising a workflow—you’re watching an AI regurgitate a known answer. There’s nothing to explore, nothing to plan, no decisions to make about approach or tooling.&lt;/p&gt;

&lt;p&gt;And that matters, because the skill you need to develop with agentic coding isn’t “how to prompt an AI to write code.” It’s how to &lt;em&gt;think alongside one&lt;/em&gt;. How to explore a problem space together. How to write a plan that gives an agent enough context to be useful. How to verify that what came back is actually what you wanted. How to set up your workspace so the AI has the right guardrails.&lt;/p&gt;

&lt;p&gt;None of that shows up in a 30-minute kata where the AI already knows the answer.&lt;/p&gt;

&lt;h2 id="the-accidental-discovery"&gt;The Accidental Discovery&lt;/h2&gt;

&lt;p&gt;What’s funny is that I’ve kind of been doing agentic katas already—almost by accident. I just didn’t realise it at the time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fufyvpe4gyy5hgrtn7gzu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fufyvpe4gyy5hgrtn7gzu.png" alt="The first kata" width="800" height="474"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A few weeks ago I wrote about &lt;a href="https://dev.to/peter_vanonselen_e86eab6/this-is-the-way-delete-the-code-2n8m"&gt;deleting code on purpose as a way to recover from burnout&lt;/a&gt;. I’d been experimenting with the Ralph Wiggum loop—throwing PRDs at an agentic coding workflow, seeing what came out, then deliberately throwing the code away. The output I was chasing wasn’t a codebase. It was understanding. How big can a PRD get before the loop breaks? How much do agent files matter? What’s the minimum setup to get something useful?&lt;/p&gt;

&lt;p&gt;Each run was a contained experiment. Fresh repo, clear problem, focused practice, delete the code, do it again. I was varying one thing at a time—adding a CLAUDE.md file, scaling up the PRD size, trying a different domain—and learning from each iteration.&lt;/p&gt;

&lt;p&gt;I was doing agentic katas. I just hadn’t named them yet.&lt;/p&gt;

&lt;p&gt;Looking back, the whole arc has been building toward this. My game project was the first rough version—months of cycling through spec-driven development and making mistakes. The “delete the code” experiments compressed that into focused sessions. And now, formalising the structure into something other people can pick up feels like the obvious next step.&lt;/p&gt;

&lt;h2 id="building-the-thing"&gt;Building the Thing&lt;/h2&gt;

&lt;p&gt;So I’ve put together a set of agentic katas. The idea is that each one should require somewhere in the region of four to eight hours of focused, hand-crafted work to do &lt;em&gt;well&lt;/em&gt;. And by “well” I mean the full gold-plated treatment: test-driven, 100% coverage, clean README, proper git history, the works. Not because the output matters—but because doing that level of work with an AI agent forces you to actually engage with the process.&lt;/p&gt;

&lt;p&gt;The loop for every kata is the same:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Explore&lt;/strong&gt; → &lt;strong&gt;Plan&lt;/strong&gt; → &lt;strong&gt;Set Up Context&lt;/strong&gt; → &lt;strong&gt;Build&lt;/strong&gt; → &lt;strong&gt;Verify&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;And there’s one key rule that makes the whole thing work: &lt;em&gt;you are not allowed to choose a programming language or framework until you’ve had a conversation with your AI tool about what the best approach is.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This is the rule that forces the shift. Instead of jumping straight to “build me X in TypeScript,” you have to start with “I need to solve this problem—what are my options?” You explore the problem space. You figure out what tools exist. You have the AI challenge your assumptions. &lt;em&gt;Then&lt;/em&gt; you decide on an approach.&lt;/p&gt;

&lt;p&gt;From there, you write a detailed plan—acceptance criteria, example data, use cases, a breakdown into small chunks of work. You set up your workspace with an agent file and think about what context to include. You build incrementally. And you verify everything: read the plan, read the code, run it, test it, confirm it does what you intended.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.petervanonselen.com%2Fassets%2Fagentic-kata%2Fagentic-kata-loop.svg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.petervanonselen.com%2Fassets%2Fagentic-kata%2Fagentic-kata-loop.svg" alt="the loop" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2 id="the-katas-themselves"&gt;The Katas Themselves&lt;/h2&gt;

&lt;p&gt;I’ve started with four problems, each chosen because they sit in a domain most developers haven’t worked in before:&lt;/p&gt;

&lt;p&gt;An &lt;strong&gt;audio transcriber&lt;/strong&gt; that handles speech-to-text with timestamps and speaker diarisation. A &lt;strong&gt;background remover&lt;/strong&gt; for image segmentation. A &lt;strong&gt;meme generator&lt;/strong&gt; that deals with text rendering and positioning on arbitrary images. And a &lt;strong&gt;thumbnail ranker&lt;/strong&gt; that scores images for visual appeal.&lt;/p&gt;

&lt;p&gt;Each kata has deliberate ambiguity baked in—because real problems are ambiguous, and part of the skill is figuring out what questions to ask. They also have a privacy constraint (everything runs locally, no cloud APIs for processing) and an extra credit extension for when you want to push further.&lt;/p&gt;

&lt;h2 id="why-this-matters-right-now"&gt;Why This Matters Right Now&lt;/h2&gt;

&lt;p&gt;I’ll be honest: the reason I’m putting this together is partly selfish. I want to run a workshop with my colleagues, and I need structured material to do it. But there’s a bigger motivation too.&lt;/p&gt;

&lt;p&gt;Right now, everything about AI and software development feels incredibly intense. Fear of being made obsolete. AI layoff discourse everywhere. The pressure to have strong opinions about tools you’ve barely had time to evaluate. It’s all very stressful, and stress is the enemy of learning.&lt;/p&gt;

&lt;p&gt;What people actually need—what &lt;em&gt;I&lt;/em&gt; needed, and what I accidentally created for myself—is a safe space to play. A contained environment where you can try things, make mistakes, and build intuition without the stakes of production code or career anxiety hanging over you.&lt;/p&gt;

&lt;p&gt;Code retreats gave us that for TDD. I’m hoping agentic katas can do the same for working with AI.&lt;/p&gt;

&lt;h2 id="the-repo"&gt;The Repo&lt;/h2&gt;

&lt;p&gt;I’ve put everything together in a repo: &lt;a href="https://github.com/vanonselenp/agentic-katas" rel="noopener noreferrer"&gt;github.com/vanonselenp/agentic-katas&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It includes the kata briefs, a participant guide covering the rules and process, and a facilitator guide for anyone who wants to run this as a structured session with their team. A session takes about 90 minutes.&lt;/p&gt;

&lt;p&gt;I haven’t run this with anyone else yet. I only put it together today, and I’m planning to trial it with my team in the coming weeks. It might be brilliant. It might be terrible. Either way, I’ll write about how it goes.&lt;/p&gt;

&lt;p&gt;But the core idea—that you need bigger, unfamiliar problems to practise AI-assisted development, and that the process matters more than the output—that I’m confident about. Because I’ve been living it, accidentally, for months.&lt;/p&gt;

&lt;p&gt;If you try it, I’d love to hear how it goes. And if you’re doing something different to build these skills, I’d love to hear about that too.&lt;/p&gt;

</description>
      <category>claudecode</category>
      <category>specdrivendevelopmen</category>
      <category>katas</category>
      <category>softwarecraftsmanshi</category>
    </item>
    <item>
      <title>14 PRs, 6 Repos, 1 Button: A Tale of Tumbling Down the Rabbit Hole</title>
      <dc:creator>Peter van Onselen</dc:creator>
      <pubDate>Thu, 12 Feb 2026 08:00:00 +0000</pubDate>
      <link>https://forem.com/peter_vanonselen_e86eab6/14-prs-6-repos-1-button-a-tale-of-tumbling-down-the-rabbit-hole-3b7k</link>
      <guid>https://forem.com/peter_vanonselen_e86eab6/14-prs-6-repos-1-button-a-tale-of-tumbling-down-the-rabbit-hole-3b7k</guid>
      <description>&lt;p&gt;&lt;em&gt;True stories from the front lines of the internet…&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5vlwivw6ea97d5ncnotq.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5vlwivw6ea97d5ncnotq.jpg" alt="Alice falling down the rabbit hole" width="800" height="600"&gt;&lt;/a&gt;&lt;em&gt;Alice in Wonderland by &lt;a href="https://commons.wikimedia.org/wiki/File:Down_the_Rabbit_Hole_(311526846).jpg" rel="noopener noreferrer"&gt;Valerie Hinojosa&lt;/a&gt; / &lt;a href="https://creativecommons.org/licenses/by-sa/2.0/" rel="noopener noreferrer"&gt;Creative Commons Attribution-Share Alike 2.0&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Now this is a story all about how one button link got my codebase flipped turned upside down. And I’d like to take a minute, just sit right there, and I’ll tell you how I shipped 14 PRs without pulling out my hair.&lt;/p&gt;

&lt;p&gt;It started with a Monday morning meeting. I’d been off for three weeks. The meeting was dense with context about decisions made months ago, documented across scattered specs and design docs. Systems I don’t own. Plans originally specced out almost a year prior. SEO requirements. Legacy middleware behaviour. And somewhere in all of this, a single task: change where a subscribe button points.&lt;/p&gt;

&lt;p&gt;The old flow routed users through a legacy auth endpoint, a piece of middleware handling user state and return-to-site functionality. The new flow should skip that layer and go direct. Simple, right?&lt;/p&gt;

&lt;p&gt;Three repos. Three small PRs. That was the original scope.&lt;/p&gt;

&lt;p&gt;It became six repos and one or two more PRs…&lt;/p&gt;

&lt;h2 id="the-context-problem"&gt;The Context Problem&lt;/h2&gt;

&lt;p&gt;Here’s what made this tricky: I didn’t have the context. Not the institutional knowledge of why things were built this way. Not the codebase familiarity to know where all the tendrils reached. Not the cross-system visibility to see how changes would ripple.&lt;/p&gt;

&lt;p&gt;Normally, this is where you’d involve other teams. Schedule alignment meetings. Negotiate architecture choices. Coordinate timed releases. The org chart becomes the constraint.&lt;/p&gt;

&lt;p&gt;Instead, I threw five AI tools at the problem.&lt;/p&gt;

&lt;p&gt;I used internal knowledge search to surface half a dozen docs from a year ago about what a potential migration might look like. Copilot and Codex scanned repos I’d never opened, outputting high-level analysis of what would need to change. NotebookLM synthesised a dozen-plus sources into actionable Jira tickets with acceptance criteria and testing plans. And Claude handled the actual implementation across all six repositories.&lt;/p&gt;

&lt;p&gt;Each tool for what it does best. None of them sufficient alone.&lt;/p&gt;

&lt;h2 id="the-shape-of-the-change"&gt;The Shape of the Change&lt;/h2&gt;

&lt;p&gt;What was supposed to be three repos became six because the AI tooling kept finding rabbit holes worth going down.&lt;/p&gt;

&lt;p&gt;The approach was backwards compatibility first. I updated the auth service to forward requests to the new endpoint, so existing systems would keep working. Only after that was stable did I remove the old code paths and switch the calls to point directly to the new flow.&lt;/p&gt;

&lt;p&gt;Along the way, I hit a referrer bug that only revealed itself mid-implementation. One of the components lived in a shared library rather than a full application, which meant handling referral data differently than expected: reading it from the window’s referrer instead of relying on direct redirect URLs.&lt;/p&gt;

&lt;p&gt;And then there was a shared header component in another team’s repo. Hardcoded to the old endpoint. In code I couldn’t easily modify. The rabbit holes kept cropping up every time I thought I’d dived down them all.&lt;/p&gt;

&lt;p&gt;Fourteen PRs. Six repositories. Backwards compatible throughout. Zero downtime.&lt;/p&gt;

&lt;p&gt;The old flow had an extra hop through legacy middleware that handled state management. The new flow removes that layer entirely. Which makes for a faster time to checkout, same user experience, one less thing to maintain.&lt;/p&gt;

&lt;h2 id="the-point"&gt;The Point&lt;/h2&gt;

&lt;p&gt;This would have been a multi-team effort. Alignment meetings across three teams, at minimum. Negotiated timelines. Architectural discussions. Coordinated releases.&lt;/p&gt;

&lt;p&gt;Instead, it was one developer holding context that used to require an org chart.&lt;/p&gt;

&lt;p&gt;I’m not saying AI tooling makes you a better engineer. I’m saying it lets you hold more context. And sometimes that’s the difference between “we’ll need to schedule a meeting with the other teams” and “I’ll have a PR up by Thursday.”&lt;/p&gt;

&lt;p&gt;The context ceiling just got a lot higher.&lt;/p&gt;

</description>
      <category>vibecoding</category>
      <category>claudecode</category>
      <category>specdrivendevelopmen</category>
      <category>codex</category>
    </item>
    <item>
      <title>This Is the Way: Delete the Code</title>
      <dc:creator>Peter van Onselen</dc:creator>
      <pubDate>Tue, 10 Feb 2026 08:00:00 +0000</pubDate>
      <link>https://forem.com/peter_vanonselen_e86eab6/this-is-the-way-delete-the-code-2n8m</link>
      <guid>https://forem.com/peter_vanonselen_e86eab6/this-is-the-way-delete-the-code-2n8m</guid>
      <description>&lt;p&gt;&lt;em&gt;How I learned to do AI Katas and make disposable code helped me recover from burnout&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;Burnout is real, and it takes time to work its way out.&lt;/p&gt;

&lt;p&gt;I spent the last couple of months trying to work up the will to tackle game projects again. Every attempt fizzled. After a 3+ month slog on a project that &lt;a href="https://dev.to/peter_vanonselen_e86eab6/how-many-times-do-you-have-to-build-too-much-to-learn-scope-creep-238g"&gt;refused to reach an end state&lt;/a&gt;, I had nothing left.&lt;/p&gt;

&lt;p&gt;So instead of committing to another massive project, I started playing.&lt;/p&gt;

&lt;p&gt;I came across the Ralph Wiggum loop (&lt;a href="https://www.youtube.com/watch?v=RpvQH0r0ecM" rel="noopener noreferrer"&gt;30-min video if you’re curious&lt;/a&gt;) and decided to experiment. The premise is simple: use two Claude skills and a bash script to let an AI go full agent mode. The first skill, &lt;code&gt;/prd&lt;/code&gt;, takes a spec and generates user stories with verifiable acceptance criteria. The second, &lt;code&gt;/ralph&lt;/code&gt;, converts that PRD into JSON. Then you loop over the JSON until done. This is basic agentic coding, but AI-agnostic and surprisingly effective.&lt;/p&gt;
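&lt;p&gt;To make the shape of the loop concrete, here’s a dry-run sketch in shell. The &lt;code&gt;prd.json&lt;/code&gt; structure and file names are my assumptions, not the actual &lt;code&gt;/ralph&lt;/code&gt; output, and the agent call is replaced by an echo; in the real loop you’d invoke your agent CLI at that point:&lt;/p&gt;

```shell
#!/bin/sh
# Dry-run sketch of the Ralph Wiggum loop. The JSON shape below is an
# illustrative assumption, not the real /ralph output format.

# Demo PRD: two stories, one already done.
cat > prd.json <<'EOF'
{"stories":[
  {"id":1,"title":"Player can move between rooms","done":true},
  {"id":2,"title":"Guards patrol a fixed route","done":false}
]}
EOF

while :; do
  # Index of the first unfinished story (null when none remain).
  idx=$(jq '[.stories[] | (.done != true)] | index(true)' prd.json)
  if [ "$idx" = "null" ]; then
    echo "All stories complete."
    break
  fi
  title=$(jq -r ".stories[$idx].title" prd.json)
  echo "Agent would now implement: $title"
  # Real loop: invoke the agent here instead of just marking the story
  # done, e.g. claude -p "Implement this story; satisfy its acceptance criteria."
  jq ".stories[$idx].done = true" prd.json > prd.tmp && mv prd.tmp prd.json
done
```

&lt;p&gt;The interesting knobs are exactly the ones worth experimenting with: how big the PRD is, and what standing context sits in the repo next to it.&lt;/p&gt;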

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.petervanonselen.com%2Fassets%2Fagentic-play%2Fimage.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.petervanonselen.com%2Fassets%2Fagentic-play%2Fimage.png" alt="the wiggam loop" width="800" height="692"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I needed a project to test this on, so I picked &lt;a href="https://boardgamegeek.com/boardgame/163474/v-sabotage" rel="noopener noreferrer"&gt;V-Sabotage&lt;/a&gt;. It’s a board game I enjoy but rarely get to play (toddler life), and more importantly, it’s simple enough to define a clear MVP: rooms, a player, guards, sneaking mechanics, a win condition. I’d learned my lesson about scope.&lt;/p&gt;

&lt;p&gt;The real experiment wasn’t building the game; that was just the head fake. It was actually figuring out how to break down the work. How big should each PRD be? Do you treat each milestone as its own PRD? Do you throw the whole spec at it and see what happens?&lt;/p&gt;

&lt;p&gt;I had to find out.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;First run:&lt;/strong&gt; I threw a PRD at the skills, ralphed it, and looped on a fresh repo. What came out was very familiar from the last time I built a game in Godot with AI: basically something that worked, but buggy and clunky, with no tests, poor signal architecture, and tightly coupled code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Second run:&lt;/strong&gt; Same PRD, same loop, but this time I initialised the repo with a CLAUDE.md file first. Just basics: test-drive the code, use Godot 4.x best practices, that sort of thing.&lt;/p&gt;
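&lt;p&gt;For flavour, the file was only a handful of lines, something in this spirit (a reconstruction of the kind of content I mean, not the exact file):&lt;/p&gt;

```markdown
# CLAUDE.md

## Working agreements
- Test-drive all code: write a failing test first, then make it pass.
- Follow Godot 4.x best practices: prefer signals over direct node
  references; keep scenes small and composable.
- Keep modules decoupled; avoid tight coupling between gameplay systems.
- Run the full test suite before marking any story complete.
```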

&lt;p&gt;The difference was dramatic. The AI wrote its own test runner. It test-drove everything, achieved high coverage, produced cleaner interfaces, used signals properly, and kept things decoupled. Twenty minutes of compute, and the output was genuinely good. Honestly, the thing that blew my mind was that &lt;strong&gt;it wrote its own TEST RUNNER!?!?&lt;/strong&gt; Are you kidding me?&lt;/p&gt;

&lt;p&gt;So … key lesson: agent files matter. A lot.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Third run:&lt;/strong&gt; I embedded full milestones into the PRDs—a dozen user stories each, multiple acceptance criteria. The loop churned through it and produced a testable MVP in surprisingly little time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fift8zivoya2r0l2e8454.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fift8zivoya2r0l2e8454.png" alt="stealth game" width="800" height="415"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fourth run:&lt;/strong&gt; I got distracted by an app idea. Spent a couple of hours refining it with AI, generated a chunky PRD, threw the whole thing at the &lt;code&gt;/prd&lt;/code&gt; skill. It produced 40 stories. I looped it. An hour later, with 5% of my usage allowance remaining, I had a working prototype.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwnk9biprbp8sehj2kxvp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwnk9biprbp8sehj2kxvp.png" alt="random app" width="800" height="1124"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It didn’t do exactly what I wanted. But it did most of what I’d asked, and that was more than enough to meaningfully change my thinking about what I actually needed.&lt;/p&gt;

&lt;p&gt;And here’s the thing that made all of this feel like play instead of work: &lt;strong&gt;I deleted the code.&lt;/strong&gt; I went full &lt;a href="https://www.coderetreat.org/" rel="noopener noreferrer"&gt;code retreat Conway’s Game of Life&lt;/a&gt; and deleted the code.&lt;/p&gt;

&lt;p&gt;Multiple times. Deliberately. The output I was chasing wasn’t a codebase. It was understanding. How big can a PRD get before the loop breaks? (Bigger than I expected.) How much do agent files matter? (More than I expected.) What’s the minimum setup to get something useful? (Less than I expected.)&lt;/p&gt;

&lt;p&gt;Disposable code meant low stakes. Low stakes meant I could experiment freely. And experimenting freely, it turns out, is how I recover from burnout.&lt;/p&gt;

&lt;p&gt;This play has fundamentally changed how I work at The Economist. But that’s a story for next time.&lt;/p&gt;

</description>
      <category>vibecoding</category>
      <category>claudecode</category>
      <category>specdrivendevelopmen</category>
      <category>codex</category>
    </item>
    <item>
      <title>Why You Shouldn’t Speedrun a Production Refactor</title>
      <dc:creator>Peter van Onselen</dc:creator>
      <pubDate>Fri, 12 Dec 2025 08:00:00 +0000</pubDate>
      <link>https://forem.com/peter_vanonselen_e86eab6/why-you-shouldnt-speedrun-a-production-refactor-4l8h</link>
      <guid>https://forem.com/peter_vanonselen_e86eab6/why-you-shouldnt-speedrun-a-production-refactor-4l8h</guid>
      <description>&lt;p&gt;&lt;em&gt;Learning the hard way that AI makes discipline more important, not less…&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;This week has been a mess. I’ve been ill since last Thursday with a cough that’s been both productive and dizzying, which in a rare moment of clarity made me realize that maybe doing personal dev in the evenings is… not ideal. So no game dev tales this week.&lt;/p&gt;

&lt;p&gt;Instead, I want to talk about how I nearly torpedoed a production refactor at The Economist a month ago by forgetting the most important lesson I’ve been learning over the past few months: &lt;strong&gt;AI makes discipline more important, not less.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The spectacular failure
&lt;/h2&gt;

&lt;p&gt;I’m currently on the e-comm-funnel team working on the checkout pipeline. One of my first projects has been tackling Commerce Services. This is a Go monolith (a language I’d never used until recently) that started as a POC and got productionized. Naturally, it’s a beautiful mess with conflicting APIs doing all sorts of non-cohesive domain things.&lt;/p&gt;

&lt;p&gt;My goal: break it into microservices.&lt;/p&gt;

&lt;p&gt;So naturally I did what I’ve been practicing with Horizons Edge and started with a spec. I had a very long conversation with Codex, analyzed the repo structure, identified the domains, and mapped the dependencies. From that chat we produced a solid 10-page high-level plan.&lt;/p&gt;

&lt;p&gt;And the first step of that plan was &lt;em&gt;Phase 1: extract common code into a shared library&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;And somewhat predictably, here’s where I got clever.&lt;/p&gt;

&lt;p&gt;I thought: “I’ve got a detailed spec. Codex knows Go. Let’s just… do the whole thing! What’s the worst that could happen?”&lt;/p&gt;

&lt;p&gt;So I did. One massive refactor. Codex happily obliged.&lt;/p&gt;

&lt;p&gt;Then I looked at the pull request: &lt;strong&gt;200 files changed in the monolith. 80 files in the new library.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxir1jgj1e4jvypc7gjpu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxir1jgj1e4jvypc7gjpu.png" alt="this is fine, right?" width="800" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And I just stared at it, completely overwhelmed by the obvious question: &lt;em&gt;How the hell am I going to verify this actually works?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;There was no way I could meaningfully review 280 files of changes. No way I could ask another engineer to do it. No way to be confident this wouldn’t break something subtle in production. I’d just created an unshippable monster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Starting over, properly this time
&lt;/h2&gt;

&lt;p&gt;I scrapped the entire thing and started again with an “I need this to be incremental” mindset.&lt;/p&gt;

&lt;p&gt;Not just because I wanted to be able to review it, though that’s critical, but because I genuinely believe small releases into production are the right way to work. It should have been my default starting point. Instead, I’m still learning just how disciplined I need to be when working with AI tooling.&lt;/p&gt;

&lt;p&gt;The new approach:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;First&lt;/strong&gt;, I wrote a much more detailed spec for Phase 1 that lived in the new repo. Not just “extract shared library” but an 8-step plan where each step could go to production independently. Start with the absolute minimum: just one joint service with no dependencies. This would validate the CI/CD pipeline, the integration points, everything, with the smallest possible change.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Then&lt;/strong&gt;, one step at a time:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Extracted and deployed leaf utilities (logging, validation, middleware)&lt;/li&gt;
&lt;li&gt;Migrated HTTP routing abstractions&lt;/li&gt;
&lt;li&gt;Moved observability and AWS helpers&lt;/li&gt;
&lt;li&gt;Extracted infrastructure components like health checks&lt;/li&gt;
&lt;li&gt;Finally, the component registry&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At each step: tested, improved coverage, deployed to production, monitored. The existing systems kept running exactly as before.&lt;/p&gt;

&lt;p&gt;I followed the same hyper-methodical approach I’ve been using with the game project, focusing on small, scoped MVP slices and incremental delivery. For the actual development, I loaded Codex into a workspace with both repos and had it follow the spec file for each migration. I then validated with Claude in GitHub Copilot, did extensive personal review, and finally got team review before each production deployment.&lt;/p&gt;

&lt;p&gt;The result: A refactor of a core system touching ~200 files, in a programming language I’m just learning, in a domain I’d just joined, completed over a couple of weeks with zero downtime. No one on the team was blocked or impacted. It just happened quietly in the background.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I’m taking away
&lt;/h2&gt;

&lt;p&gt;Two things keep reinforcing themselves across contexts:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;First&lt;/strong&gt;: AI amplifies your need for discipline. The easier it becomes to generate large amounts of code, the more critical it is to think carefully about scope, verification, and deployment strategy. One-shotting 280 files feels productive in the moment. It’s not. It’s just creating an unshippable mess you’ll have to undo.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Second&lt;/strong&gt;: The “what’s the smallest increment that adds value?” mindset pays off everywhere. It saved Horizons Edge when I was drowning in scope creep. It made this refactor safe and reviewable. It’s not just a nice-to-have for side projects … it’s how you de-risk production changes in unfamiliar territory.&lt;/p&gt;

&lt;p&gt;Next up is breaking out actual domains into microservices, starting with Identity &amp;amp; User. But that’s a plan for next year, when I’m hopefully no longer coughing my lungs out.&lt;/p&gt;

</description>
      <category>godot</category>
      <category>vibecoding</category>
      <category>claudecode</category>
      <category>specdrivendevelopmen</category>
    </item>
    <item>
      <title>Finally… A Wild MVP Appears</title>
      <dc:creator>Peter van Onselen</dc:creator>
      <pubDate>Thu, 04 Dec 2025 08:00:00 +0000</pubDate>
      <link>https://forem.com/peter_vanonselen_e86eab6/finally-a-wild-mvp-appears-39km</link>
      <guid>https://forem.com/peter_vanonselen_e86eab6/finally-a-wild-mvp-appears-39km</guid>
      <description>&lt;p&gt;&lt;em&gt;Three Months to MVP: What I Learned Building a Tactical Card Game with AI…&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcit2izofzxlqc2b105fo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcit2izofzxlqc2b105fo.png" alt="banner" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It’s been three months since I started trying to make Horizon’s Edge, a tactical turn-based wargame in the sky. And honestly, I’m completely stunned that I even have something that actually … kinda … works.&lt;/p&gt;

&lt;h2&gt;
  
  
  From Vibe to Spec
&lt;/h2&gt;

&lt;p&gt;When I started this project, I was pure vibe coding. But somewhere in the middle, when I was trying to refactor the UI from a classic RTS/turn-based strategy with a mass of buttons to something entirely driven by card play, vibe coding hit a wall. I couldn’t get AI tooling to cooperate with my loose intuitions. That’s when I started working with specs, and it changed my life… metaphorical life… but life!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjwstdtbgh6w3e92bpuue.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjwstdtbgh6w3e92bpuue.png" alt="card driven" width="800" height="318"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Writing a clear specification before diving head first into a vibe changed everything. Suddenly the AI had guardrails. It stopped looping in circles. If you want to go deeper on this, I wrote about starting to figure out specs in &lt;a href="https://claude.ai/2025/10/03/chaos-cards-and-claude-copy/" rel="noopener noreferrer"&gt;Chaos, Cards, and Claude&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Months of Education
&lt;/h2&gt;

&lt;p&gt;What I’ve learned so far is this: it’s entirely possible to make working software with AI tools. You can keep momentum even when you’re completely out of energy or time. But the most important thing … the thing that actually matters … is that you have to always verify what the AI outputs. Automated unit tests are your best friend here. A Quality Engineer’s mindset and thinking about edge cases are fundamental.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk85lytdb7ykaglfeu50f.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk85lytdb7ykaglfeu50f.gif" alt="waveform generation" width="560" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I’ve done major refactors. I’ve rethought how the game works multiple times. I added procedural generation because it was fun. I figured out how to make the game work with just card play mechanics that… mostly work (much to my surprise). Some refactors were vibe-coded disasters; others were spec-driven and clean. But every single one taught me something about how to work with AI as a tool rather than a replacement.&lt;/p&gt;

&lt;h2&gt;
  
  
  The MVP: What It Took
&lt;/h2&gt;

&lt;p&gt;Two days ago I finished the final major system needed to validate the MVP: the victory system. Islands needed to change ownership. Every existing system needed to integrate with that. Core island nodes needed to be targetable from creatures, spells, and abilities. Getting the targeting code to work meant hitting more touch points than I anticipated. A lot of different abilities needed specific ways to target islands, not just creatures. It was more entertaining than expected, but I had a spec, and that spec kept me honest.&lt;/p&gt;

&lt;p&gt;I now have two thematic decks that are actually unique. 10 creatures. Infrastructure cards that terraform and change the world. A spell that destroys the world. Card play mechanics that mostly work. A whole horde of nuanced and detailed rules about damage and combat that work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prototype done. Well, mostly.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The mechanics are all there. What’s left is UI feedback. Cards need to show their play cost clearly, what they do when discarded, what abilities they have before you play them. I know all this because I’ve spent three months building it. Others won’t. So I’m going to tackle those UI gaps this week, then put this in front of a few people to get real feedback.&lt;/p&gt;

&lt;h2&gt;
  
  
  Now It’s Your Turn
&lt;/h2&gt;

&lt;p&gt;I’ve spent the past five months working on this. It started with a Magic: The Gathering Pauper Jumpstart cube that just wouldn’t get out of my head, went running headlong into a board game prototype that was way too complicated, and barrelled straight into this game. I’ve learned a ridiculous amount. &lt;strong&gt;But here’s what actually matters:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It is possible to make working software with AI. You can make some amazing things with these tools. You can learn as you go. You can ship those crazy ideas you never thought you could.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;So here’s what I want:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I want &lt;em&gt;you&lt;/em&gt; to go out and make your own mistakes. Build something weird. Use AI as a tool, verify everything it does, and then make something great. Make art. Because making art, well, that’s the most human thing we can do.&lt;/p&gt;

&lt;p&gt;Tell me what you build. I want to hear about it.&lt;/p&gt;

&lt;p&gt;Till then, keep learning.&lt;/p&gt;

</description>
      <category>godot</category>
      <category>vibecoding</category>
      <category>claudecode</category>
      <category>specdrivendevelopmen</category>
    </item>
  </channel>
</rss>
