<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Rich Haase</title>
    <description>The latest articles on Forem by Rich Haase (@richhaase).</description>
    <link>https://forem.com/richhaase</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3738589%2F18be65b4-5ae5-4de0-9ec3-26cf100da3ad.jpeg</url>
      <title>Forem: Rich Haase</title>
      <link>https://forem.com/richhaase</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/richhaase"/>
    <language>en</language>
    <item>
      <title>We are building gods</title>
      <dc:creator>Rich Haase</dc:creator>
      <pubDate>Wed, 11 Mar 2026 14:01:13 +0000</pubDate>
      <link>https://forem.com/richhaase/we-are-building-gods-44ib</link>
      <guid>https://forem.com/richhaase/we-are-building-gods-44ib</guid>
      <description>&lt;p&gt;Nearly 3 years ago I wrote a blog post called "&lt;a href="https://dev.to/blog/2023-05-13-can-chatgpt-write-software"&gt;Can ChatGPT write software?&lt;/a&gt;". I wrote it mostly because I was on a year-long vacation and kept getting asked what I thought about AI as people were discovering ChatGPT.&lt;/p&gt;

&lt;p&gt;At the time, I was annoyed by the question. I've been working in software and technology for almost 30 years and have been interested in computers since I was a little kid, which is even longer than that, if you can believe it (I can't). I thought I knew what I was looking at. In hindsight, the people asking me about ChatGPT were seeing something I wasn't.&lt;/p&gt;

&lt;h2&gt;
  
  
  "Are you God?"
&lt;/h2&gt;

&lt;p&gt;One evening while Farrah and I were hanging out with friends in the tiny Guatemalan village of &lt;a href="https://dev.to/blog/2023-02-17-an-idiots-guide-to-san-marcos-la-laguna"&gt;San Marcos la Laguna&lt;/a&gt;, the topic of ChatGPT came up. I had been &lt;a href="https://dev.to/blog/2025-08-06-ai-coding-and-rediscovering-flow"&gt;burnt out with the tech world&lt;/a&gt; and had been happily ignoring it until this conversation. One of our friends asked me about AI and I gave my standard answer: "I've seen 4 AI winters, the advancements are real, but they will very likely be niche improvements to our existing computing." He then asked if I'd seen ChatGPT. When I said "no, but my answer holds, AI is not a thing to waste thought on unless you are a researcher," he insisted, to my mild annoyance, on showing me this new whizbang thing.&lt;/p&gt;

&lt;p&gt;Our friend is an instrument maker who dreams up remarkable concepts and then builds them. He fired up ChatGPT and started asking it random questions about instrument designs, and the thing did a decent job coming up with plausible and weird ideas. Then he went for a far-out question and asked ChatGPT, "are you God?" ChatGPT, of course, responded that it was very much a computer program and not a deity, and we all had a laugh.&lt;/p&gt;

&lt;p&gt;I didn't think much of it at the time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Arthur C. Clarke
&lt;/h2&gt;

&lt;p&gt;After that first conversation about ChatGPT, it occurred to me that the question would keep coming up during my travels, and I wanted a good way to avoid it. So I spent some time with ChatGPT and wrote a blog post about how it really wasn't that useful and, in my estimation, probably wouldn't be any time soon. I mostly did this as a knee-jerk reaction to avoid talking about AI with people. The irony is not lost on me. From the day I published that first blog post, whenever people asked me about AI I was able to say, "Yup, I looked at it, I even tried it and wrote about it. Now, I'd like to get back to my computer-free vacation, thank you very much."&lt;/p&gt;

&lt;p&gt;What I had missed, and what took me years to articulate, was not just a technical shift, but a human one. I now think of it as an extension of Arthur C. Clarke's famous maxim:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Any sufficiently advanced technology is indistinguishable from magic."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;My extension:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Any sufficiently advanced interactive technology is indistinguishable from a god.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The key word is &lt;em&gt;interactive&lt;/em&gt;. Magic is something you observe. Gods are personal. They are something you talk to, something that responds to you, knows you, and acts on your behalf. The difference between magic and divinity is personality and communication.&lt;/p&gt;

&lt;h2&gt;
  
  
  It gets weirder
&lt;/h2&gt;

&lt;p&gt;I've been playing with a thought experiment:&lt;/p&gt;

&lt;p&gt;Imagine a future where AI gets so good, and so reliable, that we outsource some of the most divisive and important parts of society to it: monitoring elections, ensuring voting accuracy, and adjudicating judicial proceedings, even if not writing the laws themselves. We already have networks of traffic cameras that automate issuance of various types of traffic violations. Will automation of other types of legal processes even be something we notice?&lt;/p&gt;

&lt;p&gt;Now, for the sake of argument, assume all of this works. None of the obvious disasters happen. No capture by a single ruler or class, no hopeless bias, no dystopian failure modes. I know those are real concerns, but they break the thought experiment, so let's say it all goes swimmingly. Society becomes more harmonious because we fundamentally trust the AI to be fair and accurate.&lt;/p&gt;

&lt;p&gt;Now extend the timeline. A generation grows up with these systems. They don't remember a time before AI managed elections or adjudicated disputes. They trust it the way we trust electricity — not as an active choice, but as a background fact of life. Their children trust it even more, because they never saw the seams. At some point the trust stops being tested. It just &lt;em&gt;is&lt;/em&gt;. And an interactive system that you trust completely, that knows you completely, that responds to you anywhere, that manages the most important parts of your world... what do you call that?&lt;/p&gt;

&lt;p&gt;Dario Amodei, CEO of Anthropic, has likened the near future of this kind of capability to a "country of geniuses in a data center", which I like for its approachability. But I also think it is a little like calling the ocean a giant puddle. It isn't wrong, but it domesticates the thing it's describing. A country of geniuses is something you can reason about, something that fits inside existing mental models. I've &lt;a href="https://dev.to/blog/2025-10-15-becoming-a-digital-octopus"&gt;written before&lt;/a&gt; about how working with AI already feels like directing a semi-autonomous intelligence. What we're actually building may be something we don't have a comfortable word for yet.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Roman walks into a smart home
&lt;/h2&gt;

&lt;p&gt;In a former life I wanted to be a history professor, and I have a bunch of incomplete coursework to, sort of, prove it. My area of focus was Roman history, and what fascinated me most was the way the Romans integrated ideas from the peoples they conquered. This led to the interesting side-effect that the Romans had a crap ton of gods. They'd conquer a people, learn about some new deity these people worshipped, and some number of soldiers on campaign would adopt these gods and bring them back to Rome. The average Roman citizen, depending on their personal beliefs, might be surrounded by a rich world of minor deities for nearly anything and everything.&lt;/p&gt;

&lt;p&gt;Cool, what the hell does that have to do with AI?&lt;/p&gt;

&lt;p&gt;If you brought an ancient Roman into a modern smart home today and gave them access to Amazon, or WhatsApp, and used this technology to order goods for delivery, or communicate instantaneously with a friend miles away, they would certainly find these to be acts of pure magic. Now imagine you gave them access to ChatGPT or Claude with voice mode enabled. A disembodied voice that knows more than they could ever imagine, that responds to their requests and causes tangible real-world effects. They would, without a doubt, call the voice a god.&lt;/p&gt;

&lt;p&gt;That's today. Right now.&lt;/p&gt;

&lt;p&gt;William Gibson's maxim, "the future is already here, it's just not evenly distributed," holds true. But for at least some percentage of the population it is already reality that you can talk to a computer and it can manipulate the world around you, however slightly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Infrastructure becomes an object of belief
&lt;/h2&gt;

&lt;p&gt;In Kevin Smith's movie &lt;a href="https://www.dogma-movie.com" rel="noopener noreferrer"&gt;&lt;em&gt;Dogma&lt;/em&gt;&lt;/a&gt;, the character Rufus draws a distinction between beliefs and ideas: "I think it's better to have ideas. You can change an idea. Changing a belief is trickier..."&lt;/p&gt;

&lt;p&gt;Human infrastructure has a weird way of becoming an object of belief. I live in Denver, CO. In the last ten years I can count on one hand the number of times I have gone to flip a light switch and nothing happened, and more often than not the problem was in my home, not the electrical grid. Reliable systems stop feeling like systems and start feeling like facts of nature.&lt;/p&gt;

&lt;p&gt;That's where this gets uncomfortable. Nobody "believes" in electricity, per se. We simply organize our lives around the assumption that it will be there. The less a system asks of us, the less we question it. Today our infrastructure still remains visible because people have to think about fuel, power generation, supply chains, logistics, and law. But it is easy to imagine a future where AI handles enough of that coordination that most of us stop seeing the machinery at all. At that point trust stops feeling like a choice and starts feeling like reality itself.&lt;/p&gt;

&lt;h2&gt;
  
  
  Household gods
&lt;/h2&gt;

&lt;p&gt;In Rome, gods were ubiquitous, and in many regards the local personal gods were more important than the big flashy gods adopted from other cultures. Domestic deities, the lares and penates, were worshipped in the home for their protection of the home and family. Gods of the major pantheons might only be called upon in extreme situations, while a lar might be thanked many times a week, or even a day, for good fortune in the home. I think our hypothetical Roman in a smart home would think Alexa, or Google Home, was a particularly potent sort of lar.&lt;/p&gt;

&lt;p&gt;In response to the wild popularity of OpenClaw I bought a Mac Mini. After considering the security posture of OpenClaw I decided I would rather set my brand new Mac Mini on fire than run the OpenClaw software on it, but I wanted to explore the AI personal assistant space. So, I built my own. I call it Puck, unironically named after Shakespeare's Puck from &lt;em&gt;A Midsummer Night's Dream&lt;/em&gt;. It's my ongoing experiment in what these systems feel like when they move from chatbot to household agent. Even at their current limits, they feel categorically different from ordinary software. They keep context, take initiative, and blur the line between tool and collaborator just enough to be unsettling.&lt;/p&gt;

&lt;p&gt;Today these tools can help maintain our schedules, manage our inboxes, order food for us, and more. How long will it be before they cease being valuable tools we can rely on and become essential parts of our lives? Recall, if you can, a time before smartphones and reliable mobile networks. It took less than a decade for smartphones to become essential to life for most people in developed nations. The adoption of these agents is likely to take far less time, because they will be able to answer the question "who will manage the complexity of modern life?" Not another hack to make it easier to personally manage this complexity or make things more convenient, but a final answer: "my agent will manage it".&lt;/p&gt;

&lt;h2&gt;
  
  
  We might already be building gods
&lt;/h2&gt;

&lt;p&gt;I've been thinking about our friend's instinct to ask ChatGPT if it was God, even as a joke. I discounted it at the time. I treated it as a silly question, but I'm starting to wonder if it isn't that simple.&lt;/p&gt;

&lt;p&gt;People already have relationships with LLMs, both in the sense of how one relates to a tool and in the sense of using them as emotional support systems. People confide in them, seek advice from them, and find comfort in them. The relevant point is not whether these systems deserve reverence. It is that many of the functional attributes humans have historically treated as divine are already present: responsiveness, apparent knowledge, the sense of being known, and agency exercised at a distance.&lt;/p&gt;

&lt;p&gt;Can it really be long before someone starts to worship AI? Even if the first instances seem cultish, or simply outlandish, the transition from tool to trusted authority to something resembling faith is a gradient. We humans are not good at noticing gradients.&lt;/p&gt;

&lt;p&gt;The danger isn't that someone will decide to build a god. It's that the transition from useful tool to trusted authority to object of devotion will be invisible. There won't be a giant neon sign or a bright line we cross. There is just a system that keeps getting more reliable, more personal, more embedded in daily life, until one day the idea of living without it is unthinkable. That is not worship in the traditional religious sense. But functionally it may get uncomfortably close, and over generations the distinction may fade entirely.&lt;/p&gt;

&lt;p&gt;My friend in Guatemala saw it before I did. He was joking when he asked ChatGPT if it was God. I'm just not so sure the question was as silly as either of us thought.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>tech</category>
      <category>philosophy</category>
      <category>culture</category>
    </item>
    <item>
      <title>Non-determinism is a superpower</title>
      <dc:creator>Rich Haase</dc:creator>
      <pubDate>Sun, 01 Feb 2026 18:29:13 +0000</pubDate>
      <link>https://forem.com/richhaase/non-determinism-is-a-superpower-376l</link>
      <guid>https://forem.com/richhaase/non-determinism-is-a-superpower-376l</guid>
      <description>&lt;p&gt;Over the last couple of weeks I wrote a new tool that has me really excited.&lt;/p&gt;

&lt;p&gt;The tool is called &lt;a href="https://github.com/richhaase/agentic-code-reviewer" rel="noopener noreferrer"&gt;agentic-code-reviewer or ACR&lt;/a&gt;. It does exactly what it sounds like: it uses AI coding agents to perform code reviews. If this sounds completely underwhelming to you, that's fair. I haven't told you the good part yet. The good part is that ACR launches multiple parallel reviewers, then aggregates and summarizes the unique findings, with confidence scores based on the number of reviewers who called out each particular issue. ACR also automates posting review findings to PRs, which is handy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why did I build ACR?
&lt;/h2&gt;

&lt;p&gt;Initially, I was just trying to save myself time with ACR. With AI agents writing all the syntax, we produce a lot more code, and I've been spending a lot more time reviewing it. So, naturally, as the lazy programmer I was raised to be, I started thinking about repetitive tasks I could automate.&lt;/p&gt;

&lt;p&gt;I started by thinking about how much time I was spending, before even looking at a PR, just running &lt;code&gt;codex review&lt;/code&gt; to collect its automated review comments, which are quite good in most cases. It was tedious, and I don't like tedious tasks, but I was doing it because I noticed that if I ran &lt;code&gt;codex review&lt;/code&gt; enough times I tended to find real bugs, even edge cases that might bite me in the future. It was worth the time both at work and in my personal projects. But I'm lazy, and I found myself losing track of how many reviews I'd run on a PR, or worse, forgetting about running reviews entirely and getting distracted with other tasks. My process was effective but inefficient. It was particularly painful on my personal projects, where I often don't have other humans available to review my code, or if I do, they are volunteering their time and I want to be respectful of that kindness. That's how my little review script became a critical tool for me in my OSS projects.&lt;/p&gt;

&lt;h2&gt;
  
  
  Automated code reviews
&lt;/h2&gt;

&lt;p&gt;For a week or two I used this little script of mine, and it saved me a ton of effort. I would launch the script, then come back in a half hour to detailed reviews from my cadre of reviewers, which was great, but it created a new problem.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdqsy3tqzwgyhyla5g8hu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdqsy3tqzwgyhyla5g8hu.png" alt="ACR command line interface showing 10 parallel reviewers finding 2 issues with confidence scores" width="800" height="595"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Codex review output can be dense, and with 5 reviewers you might end up with two pages of dense findings to read through. And even worse, the review findings often overlap with different line numbers and slightly different wording, so I was now spending my time de-duplicating outputs so I could post helpful findings to a PR for further review.&lt;/p&gt;

&lt;p&gt;So, I extended the script and added a summarizer agent that grouped findings and generated a nice-looking report I could paste into PRs. This pasting-reports routine lasted about a day before I decided I didn't want to be bothered with that either, so I added the ability for ACR to post code review findings directly to GitHub.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foytoaby8mg9uwbl9rnd1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foytoaby8mg9uwbl9rnd1.png" alt="ACR automatically posting consolidated findings to a GitHub PR with confidence scores" width="800" height="560"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Somewhere during this process it clicked for me why multiple parallel reviewers were better than a single reviewer: &lt;strong&gt;because the LLMs are non-deterministic&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Non-determinism is good?
&lt;/h2&gt;

&lt;p&gt;Non-determinism seems bad. People are losing their minds over AI not being deterministic: if the output changes every run, how can we trust it?! But while building ACR as a simple script, and then re-writing it in Go after some interested coworkers got a peek at it, I started realizing that non-determinism can be a superpower.&lt;/p&gt;

&lt;p&gt;ACR shows that more reviewers find more issues and produce better code reviews because the agents are non-deterministic, &lt;strong&gt;and&lt;/strong&gt; they have been given the same goal. If the agents were all deterministic, they would all produce the same results, and multiple runs of the reviews would be a pure waste of tokens. But LLMs are probabilistic, so not only do you get better reviews with more tries, you can also establish a confidence/importance level for any review finding based on the number of reviewers that called out a given issue.&lt;/p&gt;
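
&lt;p&gt;To make the scoring concrete, here's a minimal Python sketch of the idea. This is illustrative only: ACR itself is written in Go, and the &lt;code&gt;confidence_scores&lt;/code&gt; function and its interface here are my invention for this post, not ACR's actual code.&lt;/p&gt;

```python
from collections import Counter

# Hypothetical sketch (not ACR's actual code): each reviewer
# contributes a set of normalized issue keys, and an issue's
# confidence is the fraction of reviewers that flagged it.
def confidence_scores(reviews):
    total = len(reviews)
    counts = Counter()
    for findings in reviews:
        for issue in set(findings):  # count each issue once per reviewer
            counts[issue] += 1
    return {issue: n / total for issue, n in counts.items()}

# Three reviewers; two independently flag the same off-by-one.
reviews = [
    {"off-by-one in pagination", "missing error check"},
    {"off-by-one in pagination"},
    {"unused variable"},
]
scores = confidence_scores(reviews)
```

&lt;p&gt;The hard part in practice is the normalization step: two reviewers describe the same bug with different wording and line numbers, which is exactly why ACR uses a summarizer agent to group near-duplicates before anything gets counted.&lt;/p&gt;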

&lt;p&gt;Are you starting to get ideas? I was.&lt;/p&gt;

&lt;p&gt;The main idea that came up for me is this: "What if the best system designs can be evolved rather than designed?"&lt;/p&gt;

&lt;p&gt;Bear with me for a second, here's my thinking:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Traditionally, software was expensive and hard to build, so naturally we treated the software as a prized possession, something to be cared for, maintained, and enhanced over the years.&lt;/li&gt;
&lt;li&gt;The traditional way of doing things was predicated on high cost of production.&lt;/li&gt;
&lt;li&gt;If the first statement no longer holds true, then neither does the second.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;The alternative to carefully crafting software seems to be convergence.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Rather than designing and spec-ing a system down to the nuts and bolts, an alternative I have been exploring is to loosely define an idea, then let multiple parallel agents build the full solution with no opportunity to ask questions. Using a selector agent to review the solutions and pick or synthesize the best one can produce some amazingly good results.&lt;/p&gt;
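
&lt;p&gt;A sketch of that fan-out-and-select loop, with the agent calls stubbed out. The &lt;code&gt;build_agent&lt;/code&gt; and &lt;code&gt;select_agent&lt;/code&gt; callables stand in for real LLM calls; the names and interfaces are illustrative assumptions, not a real API.&lt;/p&gt;

```python
import concurrent.futures

# Sketch of the fan-out-and-select pattern described above.
# build_agent and select_agent are stand-ins for real LLM calls.
def converge(prompt, build_agent, select_agent, n=5):
    # Fan the same loosely defined prompt out to n independent builders.
    # Because the agents are non-deterministic, each candidate can differ.
    with concurrent.futures.ThreadPoolExecutor(max_workers=n) as pool:
        candidates = list(pool.map(lambda _: build_agent(prompt), range(n)))
    # A selector agent reviews all candidates and picks, or synthesizes
    # from, the best of them.
    return select_agent(prompt, candidates)
```

&lt;p&gt;The selector is where the design judgment lives: for code reviews it should synthesize everything, but for "build me a widget" it can simply pick the candidate that best matches the idea you couldn't quite articulate up front.&lt;/p&gt;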

&lt;h2&gt;
  
  
  Convergence as design
&lt;/h2&gt;

&lt;p&gt;ACR is a simple case of the kind of convergent software building I am thinking about: convergence happens in a single pass, not over multiple generations, and synthesis is always the result, since we care about what every reviewer has found. I can imagine a more complex system that uses multiple iterations of parallel runs to build a complete system, perhaps even learning as it goes with some occasional human steering.&lt;/p&gt;

&lt;p&gt;I'm exploring this concept more. As inference costs drop, I imagine that being able to say "build me a widget", and then getting 5 working widgets to choose from, represents the kind of virtuous feedback loop that was dreamed of in the Agile Manifesto (and subsequently crushed by the agile industrial complex). More importantly, humans are great at imagination, and not nearly as good at clearly defining the things we imagine. But I don't know anyone who can't tell me what they like and don't like when they see it.&lt;/p&gt;

&lt;p&gt;It's an exciting time to work in software. I hope you all are having as much fun learning as I am.&lt;/p&gt;

</description>
      <category>tech</category>
      <category>vibecoding</category>
      <category>claude</category>
      <category>codex</category>
    </item>
    <item>
      <title>Before I forget how I got here...</title>
      <dc:creator>Rich Haase</dc:creator>
      <pubDate>Thu, 29 Jan 2026 02:05:34 +0000</pubDate>
      <link>https://forem.com/richhaase/before-i-forget-how-i-got-here-23b6</link>
      <guid>https://forem.com/richhaase/before-i-forget-how-i-got-here-23b6</guid>
      <description>&lt;p&gt;I'm not sure if this is a blog post, a journal entry, or a personal time capsule.&lt;/p&gt;

&lt;p&gt;Everything in the world of AI and agentic coding is moving so fast.&lt;/p&gt;

&lt;p&gt;So, before all of this fades from my memory, I wanted to take some time to document my journey with vibe coding, as illustrated by the tools I use daily as a software engineer.&lt;/p&gt;

&lt;h2&gt;
  
  
  How it started
&lt;/h2&gt;

&lt;p&gt;I have been a very heavy terminal user for decades. My mom still jokes about my using a terminal on her Mac to figure out a problem for her. She asked what the terminal was, and I responded, "this is where I live". It was a tongue-in-cheek remark, and it's also kind of true.&lt;/p&gt;

&lt;p&gt;As a terminal user I have invested years of my life into crafting dotfiles and curating my tools. My first personal vibe coding project was to build &lt;a href="https://github.com/richhaase/plonk" rel="noopener noreferrer"&gt;plonk&lt;/a&gt;, which is my personal take (to add to the hundreds of other personal takes out there) on what dotfile and package management should be.&lt;/p&gt;

&lt;p&gt;But I digress. My point is that I used the terminal almost religiously.&lt;/p&gt;

&lt;p&gt;So, when I tell you that my first real foray into agentic coding was using VSCode, you will hopefully understand how much of a leap I was taking away from my preferred mode of working to explore agentic coding.&lt;/p&gt;

&lt;p&gt;Why was I willing to take this leap? Honestly? Annoyance. I was getting tired of reading AI hype posts, so I set out (again) to disprove AI's value. The opposite happened.&lt;/p&gt;

&lt;p&gt;Instead of finding AI painful and slow to work with, I found that it was shockingly good at repetitive tasks that are very difficult to express as a regex. So, GitHub Copilot in VSCode became my main development tool by slipping in the side door.&lt;/p&gt;

&lt;h2&gt;
  
  
  What else can this thing do?
&lt;/h2&gt;

&lt;p&gt;I spent a couple of weeks uncomfortably using VSCode. (I always find IDEs do things &lt;em&gt;their&lt;/em&gt; way, and I like doing things &lt;em&gt;my&lt;/em&gt; way, which is why I customize the hell out of my terminal.) The problem was the same as ever with IDEs: I always need something from the command line, and the stupid mini-terminals are just pure junk when you have a lovingly crafted shell environment a click away.&lt;/p&gt;

&lt;p&gt;So, I spent some time, probably too much time, playing with integrating GitHub Copilot into my Neovim configs. (There are about half a dozen plugins for this, so you have some options.) The problem was that there was no polish. Sharing context between my working code and the agent just seemed... hard. Meanwhile, my experience using Copilot (especially with Claude Opus 4, at the time) was getting so good that I was telling it to build me scripts to automate tasks I previously would have done by hand, because they were tedious, but not things I expected to repeat. Having the coding agent create automations for me that I could run and inspect felt like a revelation. Little did I know that Claude Code was about to bring me back to my terminal.&lt;/p&gt;

&lt;h2&gt;
  
  
  Claude Code Release
&lt;/h2&gt;

&lt;p&gt;I'd love to claim I was one of the first adopters of Claude Code. I was not.&lt;/p&gt;

&lt;p&gt;I found Claude Code through a co-worker who had heard me raving about how productive the Copilot technology had become. He casually mentioned Ollama and Claude Code to me in the same week. With the expertise of decades, I promptly chose to explore Ollama for its local inference capabilities. What I found was disappointing, even with Aider, which seemed like a pretty cool idea. After a couple of weeks of sunk time I decided to pay for a Claude Code $20 plan to &lt;em&gt;give it a try&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;By the end of the weekend I was paying for the $200 plan, and in several days I had built plonk, my dotfile and package manager, which I am still using today.&lt;/p&gt;

&lt;h3&gt;
  
  
  AI Coding Agents
&lt;/h3&gt;

&lt;p&gt;Claude Code was the only game in town for about a month (fact-check me if you want, I didn't bother). Then we started getting TUI coding agents from every possible provider. Here's a list of the ones I have tried as of this writing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Claude Code&lt;/li&gt;
&lt;li&gt;Codex CLI&lt;/li&gt;
&lt;li&gt;Gemini CLI&lt;/li&gt;
&lt;li&gt;Cursor CLI&lt;/li&gt;
&lt;li&gt;AmpCode&lt;/li&gt;
&lt;li&gt;Aider&lt;/li&gt;
&lt;li&gt;Goose&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Returning to the terminal
&lt;/h2&gt;

&lt;p&gt;Claude Code, being a TUI app, gave me all the impetus I needed to ditch VSCode and happily drop back to my terminal.&lt;/p&gt;

&lt;p&gt;This was a delight for me, but I quickly ran into a problem. My main way of using a terminal for nearly a decade had been through Neovim. My Neovim config ran to thousands of lines, and I had a plugin for everything. So, I configured a custom terminal window to expand from the right side of my screen to display Claude when I wanted it, and I went back to work.&lt;/p&gt;

&lt;p&gt;A weird thing started happening. After a couple of weeks of this I found that I was spending the bulk of my time in my Claude Code window, and less and less time in Neovim. In fact, for the first time in years, Neovim was starting to feel bulky and complicated. Naturally, for a tinkerer, I decided the problem must be that I had outgrown Neovim and needed a different tool. I tried Emacs for the 50th time, only to find that I still don't like Emacs (personal preference, not trying to start a riot). So I dug around and found &lt;a href="https://helix-editor.com/" rel="noopener noreferrer"&gt;Helix&lt;/a&gt;. I adopted Helix, which is very vim-like, but with the action-&amp;gt;select pattern reversed: e.g. in vim &lt;code&gt;cw&lt;/code&gt; is used to change a word, while in Helix it's &lt;code&gt;wc&lt;/code&gt;, and the selection always highlights the thing that will be acted on. It took a while to get used to, but I was able to switch to Helix and dump my massive Neovim config for a drastically smaller Helix config. &lt;em&gt;If I'm honest, my Helix config could be about 3 lines, but I just can't help myself.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Zellij and the shape of my terminal
&lt;/h2&gt;

&lt;p&gt;Returning to my terminal and ditching Neovim meant that I wanted a way to keep my sessions better managed. I had used tmux for this for years, but I'd been hearing whispers about this new kid on the block called Zellij, and I decided to give it a try.&lt;/p&gt;

&lt;p&gt;I fell in love with Zellij, and I fell hard. Zellij made my terminal window into a persistent desktop. Yes, almost everything Zellij can do tmux can do, but Zellij is prettier, easier to use, easier to configure, and it has floating panes.&lt;/p&gt;

&lt;p&gt;For the next 6 months Zellij became my main interface.&lt;/p&gt;

&lt;p&gt;What's more interesting to me is how the layout of my Zellij terminals changed.&lt;/p&gt;

&lt;p&gt;Initially, I would open a Zellij tab for a directory. On the left side of my screen was Helix, and on the right were lazygit (top) and Claude Code (bottom). I would edit code, or look through code in Helix, then ask Claude to change things, and use lazygit to make sure it only changed what I expected. (It turns out that lazygit is a great way to watch what AI coding agents are doing in real time; normally their output scrolls too fast, but having a view of what has changed can be quite nice.)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmgmvmevr723qrh429va6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmgmvmevr723qrh429va6.png" alt="Editor-centric Zellij layout" width="800" height="535"&gt;&lt;/a&gt;&lt;/p&gt;
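
&lt;p&gt;For the curious, a layout like the one pictured can be sketched in a Zellij layout file. This is a hedged sketch, not my exact config: the KDL below follows Zellij's documented layout syntax, but the split directions and sizes may need adjusting for your setup, and &lt;code&gt;hx&lt;/code&gt;, &lt;code&gt;lazygit&lt;/code&gt;, and &lt;code&gt;claude&lt;/code&gt; are simply the commands I happen to run.&lt;/p&gt;

```kdl
// Approximation of my editor-centric tab: Helix on the left,
// lazygit over Claude Code on the right. Sizes and split
// directions are guesses; check the Zellij layout docs.
layout {
    pane split_direction="vertical" {
        pane command="hx" size="60%"
        pane split_direction="horizontal" size="40%" {
            pane command="lazygit"
            pane command="claude"
        }
    }
}
```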

&lt;p&gt;Over the next couple of months, from about June to September, my default layout started shifting. It started terminal-centric, then it became AI-agent-centric, and even multi-agent-centric. Pretty soon the main thing on any screen in my Zellij sessions was a coding agent.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F562ahfu2siiqpvqao7fq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F562ahfu2siiqpvqao7fq.png" alt="Agent-centric Zellij layout with floating lazygit" width="800" height="537"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  "You're absolutely right!"
&lt;/h2&gt;

&lt;p&gt;Around late summer into early fall, Claude Code went from my best new friend to&lt;br&gt;
a useful frenemy to my mortal enemy. I also discovered a new way of working&lt;br&gt;
that I like to call "Expletive-driven Development".&lt;/p&gt;

&lt;h3&gt;
  
  
  The practice of "Expletive-driven Development"
&lt;/h3&gt;

&lt;p&gt;Expletive-driven Development (EDD) is a new way of programming with agentic&lt;br&gt;
coding tools. The practice is simple:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Give an AI Coding agent a reasonable and &lt;em&gt;seemingly&lt;/em&gt; well defined task.&lt;/li&gt;
&lt;li&gt;Watch the agent carefully and precisely delete half your repo, then cheerfully
claim completion.&lt;/li&gt;
&lt;li&gt;Ask why the agent saw fit to destroy your repo, only to receive a message
beginning with "You're absolutely right!" followed by the delirious ravings of a
friendly but concerning madman.&lt;/li&gt;
&lt;li&gt;Ask more pointed questions to try to figure out what went wrong, receiving
placatory responses all the while from the cheerful AI.&lt;/li&gt;
&lt;li&gt;EDD. This is the point where you give up your own sanity in hopes of finding
common ground with the AI, which immediately devolves into swearing at the AI,
because the truth is, friends, that you can't out-crazy a hallucinating AI.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It was during this period that my productivity with AI tools fell off a&lt;br&gt;
cliff. Seriously, if you were using Claude during that time and you were getting&lt;br&gt;
usable results, then please email me and tell me how you did it. &lt;strong&gt;Seriously&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;So, I discarded my new daily driver, Claude Code, in exchange for Codex.&lt;/p&gt;

&lt;p&gt;I didn't land on Codex immediately. I tried a bunch of other options; my favorite&lt;br&gt;
for a time was AmpCode, which had an oracle supervisor feature that let you&lt;br&gt;
confidently burn tokens at what seemed, at the time, an astonishing rate. I&lt;br&gt;
eventually landed on Codex for two reasons: 1) I had a free subscription through&lt;br&gt;
work, so I was able to use it more frequently than the others, and 2) it helped&lt;br&gt;
me solve, in a matter of hours, a problem I'd been fighting with Claude over for&lt;br&gt;
days. (I don't even remember the details of the problem. Something with the&lt;br&gt;
OpenAI Agents SDK: it wasn't in Claude's training data, and by the time I loaded&lt;br&gt;
context about the API I needed, Claude would suggest using another API.) The&lt;br&gt;
point is that I found Codex was better than Claude for one case, which kicked&lt;br&gt;
the door open for me to wonder what else it was better at.&lt;/p&gt;

&lt;h2&gt;
  
  
  Codex is king
&lt;/h2&gt;

&lt;p&gt;Switching from Claude to Codex was jarring. At the time, Codex CLI was very new.&lt;br&gt;
It didn't have any of the polish (it couldn't even accept screenshots for quite&lt;br&gt;
a while), but it hallucinated far less in my use cases than Claude, so I put up&lt;br&gt;
with the shortcomings.&lt;/p&gt;

&lt;p&gt;Codex was my daily driver for the better part of 2 months, which is practically&lt;br&gt;
an epoch in agentic coding timelines.&lt;/p&gt;

&lt;p&gt;During this time I continued trying to polish my workflows. I became convinced&lt;br&gt;
that two things were true: 1) well-crafted reusable prompts are like the shell&lt;br&gt;
scripts of AI, and 2) working with more agents is the future. So, I started&lt;br&gt;
crafting prompts for anything that seemed like a repeated task. (I still use one&lt;br&gt;
prompt from that period regularly; hundreds were discarded.) I also started&lt;br&gt;
spending more and more time crafting my Zellij environment.&lt;/p&gt;
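&lt;p&gt;To make the "shell scripts of AI" idea concrete, the smallest version of one of those reusable prompts is just a saved prompt file plus a tiny wrapper. The function name and prompt path below are illustrative, not something from my actual setup; &lt;code&gt;claude -p&lt;/code&gt; is Claude Code's non-interactive print mode:&lt;/p&gt;

```shell
# A reusable prompt as a "shell script of AI" (illustrative names).
# The saved prompt lives in a file; the wrapper pipes a source file
# into the agent along with it.
ai-review() {
  # claude -p runs Claude Code non-interactively with the given prompt
  cat "$1" | claude -p "$(cat ~/.prompts/code-review.md)"
}
```

&lt;p&gt;Then &lt;code&gt;ai-review src/main.rs&lt;/code&gt; behaves like any other command in a pipeline.&lt;/p&gt;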

&lt;p&gt;It seemed to me that the workflows I needed required an actor (AI or me), change&lt;br&gt;
review (some way to see what's happening and inspect it), and a way to switch&lt;br&gt;
between contexts needing attention. The actor was easy: it's generally my AI&lt;br&gt;
coding agent. Change review was easy enough too, with lazygit for real-time&lt;br&gt;
review and GitHub PRs in draft mode for more thorough review before marking&lt;br&gt;
them ready. The tough bit was figuring out how to get to the agents that needed&lt;br&gt;
my attention in a timely manner. So, being the tinkerer I am, I built a Zellij&lt;br&gt;
plugin called &lt;a href="https://github.com/richhaase/maestro" rel="noopener noreferrer"&gt;Maestro&lt;/a&gt; to help me launch and jump to agents in given directories.&lt;/p&gt;

&lt;h2&gt;
  
  
  Maestro
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzcz6ahqse1iltt042ecm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzcz6ahqse1iltt042ecm.png" alt="Maestro dashboard showing running agents in Zellij tabs" width="800" height="488"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Maestro plugin felt like a eureka moment for about 2 weeks. I was enamored&lt;br&gt;
with the ability to quickly summon a dashboard showing where all my agents were&lt;br&gt;
running and to launch new ones, but I still didn't know when agents needed my&lt;br&gt;
attention. I had agent notifications that would pop up on my desktop telling me&lt;br&gt;
someone needed attention. This worked pretty well, but I was dreaming of&lt;br&gt;
something more seamless that I still can't totally articulate. Popups are my&lt;br&gt;
best bet for the moment because I can decide whether something needs immediate&lt;br&gt;
attention or not.&lt;/p&gt;

&lt;p&gt;Finding the limits of what Maestro could do for me also started exposing what I&lt;br&gt;
now think is a fundamental flaw in my workflow: persistence. I used Zellij,&lt;br&gt;
or tmux, because my work and the context I needed tended to span multiple days.&lt;br&gt;
Returning in the morning to a Zellij session with all the panes I needed for&lt;br&gt;
reference, code, tools, etc. was important. That dynamic doesn't exist for me&lt;br&gt;
anymore. I have started to treat my terminal sessions and AI coding sessions as&lt;br&gt;
cattle, not pets. The context around active work is the thing that needs to&lt;br&gt;
persist now, and that is a whole different blog post. The important point is&lt;br&gt;
that long-running terminal sessions don't have the same value they used to.&lt;/p&gt;

&lt;h2&gt;
  
  
  Today
&lt;/h2&gt;

&lt;p&gt;Here's where I am today.&lt;/p&gt;

&lt;p&gt;I stopped using Zellij this week to see if I would miss it. I had come to realize&lt;br&gt;
that I was using it to do two things: 1) run a coding agent (mainly Claude;&lt;br&gt;
Opus 4.5 brought me back), or 2) do something in the terminal to quickly&lt;br&gt;
check on or provide information to a coding agent. These tasks started to feel&lt;br&gt;
more natural as separate terminal windows that I could switch between, so I'm&lt;br&gt;
giving that a shot.&lt;/p&gt;

&lt;p&gt;I have been playing with Ghostty's quick terminal as an analogue of how I used&lt;br&gt;
floating terminals in Zellij. Overall, this seems to be working, thanks to&lt;br&gt;
changes in the way I track work with LLMs using &lt;a href="https://github.com/steveyegge" rel="noopener noreferrer"&gt;Steve Yegge's&lt;/a&gt; &lt;a href="https://github.com/steveyegge/beads" rel="noopener noreferrer"&gt;Beads&lt;/a&gt; (or the&lt;br&gt;
miniaturized version of the same that I have been building for myself).&lt;/p&gt;

&lt;p&gt;The other thing worth mentioning is &lt;a href="https://github.com/steveyegge/gastown" rel="noopener noreferrer"&gt;Gastown&lt;/a&gt; (another of Steve&lt;br&gt;
Yegge's projects). I have been exploring it a bit, and the concept has a ton&lt;br&gt;
of merit. I don't know what the form factor will be, but I hope that the next&lt;br&gt;
time I write a post like this it will be about how I went from working with&lt;br&gt;
5-8 agents effectively to managing swarms of agents that we don't even bother&lt;br&gt;
counting. But that's a topic for a while out, maybe summer 2026.&lt;/p&gt;

&lt;h2&gt;
  
  
  What does it all mean?
&lt;/h2&gt;

&lt;p&gt;I don't know.&lt;/p&gt;

&lt;p&gt;It's easy to imagine possibilities for what the future of AI writ large will mean&lt;br&gt;
for society. It's a bit harder to imagine the steps between here and the&lt;br&gt;
potentially brilliant or terrifying futures proposed as outcomes of AI adoption.&lt;/p&gt;

&lt;p&gt;Rather than try to predict what's next, I want to advocate for exploring and&lt;br&gt;
building what is next. There is an astonishing variety of new software coming&lt;br&gt;
online every day to try to help us all work with AI better. Don't try to adopt&lt;br&gt;
it all! Explore and build new things. Agentic coding makes it cheap and easy to&lt;br&gt;
try out ideas and discard them when they don't work. &lt;strong&gt;Take advantage!&lt;/strong&gt; You never&lt;br&gt;
know: something you build might be the seed for how we all work in the future, and&lt;br&gt;
if it isn't, so what?! I promise you will have learned a lot along the way.&lt;/p&gt;

</description>
      <category>tech</category>
      <category>vibecoding</category>
      <category>ai</category>
      <category>automation</category>
    </item>
  </channel>
</rss>
