<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Reza Farshi</title>
    <description>The latest articles on Forem by Reza Farshi (@reza_farshi_c8f8521e3556f).</description>
    <link>https://forem.com/reza_farshi_c8f8521e3556f</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3854677%2Fbff4d1e2-6df6-4b57-8919-aab1b9d37a5f.jpg</url>
      <title>Forem: Reza Farshi</title>
      <link>https://forem.com/reza_farshi_c8f8521e3556f</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/reza_farshi_c8f8521e3556f"/>
    <language>en</language>
    <item>
      <title>Machines in a loop: brainstorm, plan, code, and pair review</title>
      <dc:creator>Reza Farshi</dc:creator>
      <pubDate>Wed, 01 Apr 2026 04:55:36 +0000</pubDate>
      <link>https://forem.com/reza_farshi_c8f8521e3556f/my-ai-team-has-four-models-and-one-human-in-the-loop-2oj7</link>
      <guid>https://forem.com/reza_farshi_c8f8521e3556f/my-ai-team-has-four-models-and-one-human-in-the-loop-2oj7</guid>
      <description>&lt;h1&gt;My AI Team Has Four Models and One Human in the Loop&lt;/h1&gt;

&lt;p&gt;Last week, GPT found a security bug in code that Claude wrote.&lt;/p&gt;

&lt;p&gt;Not a hypothetical. Not a contrived test. A real conversation-ownership vulnerability in a production app. If you started a chat, someone else could read your messages. Claude wrote the code. Claude reviewed the code. Claude missed it. GPT caught it in seconds.&lt;/p&gt;

&lt;p&gt;That moment changed how I think about AI-assisted development.&lt;/p&gt;




&lt;h2&gt;The Single-Model Trap&lt;/h2&gt;

&lt;p&gt;We all have a favorite model. Maybe it's Claude for reasoning, GPT for breadth, or whatever ships fastest. But here's the thing: every model has blind spots. And if you only use one model, you inherit all of its blind spots as your own.&lt;/p&gt;

&lt;p&gt;I've been building a workflow called TAT (Tiny AI Team) that treats AI models like an engineering team. Not one genius doing everything, but specialists collaborating, with me as the product owner making the final calls:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Claude Opus&lt;/strong&gt; orchestrates. Plans epics, breaks work into sprints, makes architectural decisions, reviews the big picture.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Claude Sonnet&lt;/strong&gt; codes. Opus spawns Sonnet as a subagent for implementation tasks. Fast, focused, stays in scope.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GPT&lt;/strong&gt; reviews code. After Claude reviews its own diff, GPT gets a second look. Independent eyes on the same code.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GPT-mini&lt;/strong&gt; brainstorms. Cheap, fast, good for idea generation where you want volume over precision.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Me.&lt;/strong&gt; I approve plans, make product decisions, and have the final say on every merge.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each model has a role. No model does everything. And it's not just better. It's dramatically cheaper.&lt;/p&gt;

&lt;p&gt;Before TAT, I was burning through my daily Opus limit trying to do everything with one model. Now Opus only handles what it's good at: planning and orchestration. Routine coding goes to Sonnet. Brainstorming goes to GPT-mini. The expensive model only runs when it matters. Same output quality, fraction of the cost.&lt;/p&gt;




&lt;h2&gt;When GPT Caught What Claude Missed&lt;/h2&gt;

&lt;p&gt;The Supabase incident was eye-opening.&lt;/p&gt;

&lt;p&gt;I was building an AI concierge app. The database layer used Supabase, and I assumed the service role key had to be a JWT starting with &lt;code&gt;eyJ&lt;/code&gt; because that's what I'd always seen. Turns out Supabase changed their format to &lt;code&gt;sb_secret_...&lt;/code&gt; and I wasted time questioning a perfectly valid key.&lt;/p&gt;

&lt;p&gt;Claude didn't flag this. It shared my assumption.&lt;/p&gt;

&lt;p&gt;GPT, looking at the same context independently, said: &lt;em&gt;"Don't assume key formats. Test the connection instead."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Different training data. Different blind spots. That's the whole point.&lt;/p&gt;
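&lt;p&gt;That advice is a few lines of shell. Here's a minimal sketch, assuming &lt;code&gt;SUPABASE_URL&lt;/code&gt; and &lt;code&gt;SUPABASE_SERVICE_KEY&lt;/code&gt; environment variables; the helper name is mine, not part of TAT:&lt;/p&gt;

```shell
# Don't guess whether a key is valid from its prefix (eyJ... vs sb_secret_...).
# Probe the API and let the server decide. Hypothetical helper, not TAT's code.
check_supabase_key() {
  local status
  # The Supabase REST root answers 200 when the apikey header is accepted.
  status=$(curl -s -o /dev/null -w '%{http_code}' \
    "$SUPABASE_URL/rest/v1/" \
    -H "apikey: $SUPABASE_SERVICE_KEY" \
    -H "Authorization: Bearer $SUPABASE_SERVICE_KEY")
  [ "$status" = "200" ]
}
```

&lt;p&gt;A key that fails this probe is broken no matter what it looks like; a key that passes is valid no matter its prefix.&lt;/p&gt;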

&lt;p&gt;In the same sprint, &lt;strong&gt;Codex&lt;/strong&gt; (OpenAI's code-specialized model) caught the conversation ownership bug during code review. The endpoint let any user read any conversation. No ownership check. Claude's self-review walked right past it. Codex flagged it as a blocker.&lt;/p&gt;

&lt;p&gt;Meanwhile, when I tried using gpt-4o-mini for code reviews, it produced false positives, flagging things that weren't actually wrong. The cheaper model wasn't just worse; it was counterproductive for that task.&lt;/p&gt;

&lt;p&gt;The lesson was clear: &lt;strong&gt;use the right model for the right job.&lt;/strong&gt; Codex for code review. Opus for planning. Mini for brainstorming where precision doesn't matter as much.&lt;/p&gt;




&lt;h2&gt;Three-Round Brainstorming&lt;/h2&gt;

&lt;p&gt;This is where the multi-model approach gets interesting.&lt;/p&gt;

&lt;p&gt;TAT has a brainstorming skill that runs three rounds. In round one, GPT thinks independently, no bias from what Claude already decided. It generates ideas from scratch. In round two, Opus critiques GPT's ideas. Agrees, disagrees, adds what GPT missed. In round three, I decide what to keep.&lt;/p&gt;

&lt;p&gt;The independence matters. If you just ask one model to brainstorm and then critique itself, you get confirmation bias. Two different models, thinking separately, surface things neither would alone.&lt;/p&gt;
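&lt;p&gt;The three rounds are simple enough to sketch in shell. &lt;code&gt;ask_gpt&lt;/code&gt; and &lt;code&gt;ask_opus&lt;/code&gt; below are hypothetical wrappers around each model's API, not TAT's actual scripts:&lt;/p&gt;

```shell
#!/usr/bin/env bash
# Three-round brainstorm sketch. ask_gpt / ask_opus are stand-ins for
# whatever CLI or API wrapper you use to call each model.
brainstorm() {
  local topic="$1"
  local ideas critique

  # Round 1: GPT generates ideas with no knowledge of prior decisions.
  ideas=$(ask_gpt "Brainstorm approaches for: $topic")

  # Round 2: Opus critiques GPT's list -- agrees, disagrees, adds what's missing.
  critique=$(ask_opus "Critique these ideas and add gaps: $ideas")

  # Round 3: both outputs go to the human, who decides what to keep.
  printf 'IDEAS:\n%s\n\nCRITIQUE:\n%s\n' "$ideas" "$critique"
}
```

&lt;p&gt;The key property is in round one: GPT gets only the topic, never Claude's existing plan.&lt;/p&gt;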




&lt;h2&gt;Lessons That Compound&lt;/h2&gt;

&lt;p&gt;Here's what surprised me most: lessons compound.&lt;/p&gt;

&lt;p&gt;When GPT catches a bug in one project, that lesson gets captured. &lt;em&gt;"Run code review after every task. Codex caught a security bug that self-review missed."&lt;/em&gt; When a shell script fails silently because of &lt;code&gt;grep + set -e&lt;/code&gt;, that becomes a rule: &lt;em&gt;"Always add &lt;code&gt;|| true&lt;/code&gt; to greps that might not match."&lt;/em&gt;&lt;/p&gt;
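&lt;p&gt;The &lt;code&gt;grep&lt;/code&gt; rule is worth seeing concretely. Under &lt;code&gt;set -e&lt;/code&gt;, a &lt;code&gt;grep&lt;/code&gt; that matches nothing exits non-zero and kills the whole script, silently:&lt;/p&gt;

```shell
#!/usr/bin/env bash
set -e

# A file with no TODO markers in it.
printf 'line one\nline two\n' > /tmp/demo_src.txt

# BAD: with no match, grep exits 1 and set -e would abort the script here:
#   todos=$(grep 'TODO' /tmp/demo_src.txt)

# GOOD: '|| true' turns "no match" into a non-fatal empty result.
todos=$(grep 'TODO' /tmp/demo_src.txt || true)

echo "found: ${todos:-none}"   # prints "found: none"
```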

&lt;p&gt;These aren't just notes in a file. TAT has a &lt;strong&gt;global lessons library&lt;/strong&gt; that gets loaded at the start of every sprint, in every project. What you learn building a real estate app shows up as a constraint when you start building a developer tool.&lt;/p&gt;

&lt;p&gt;Fifteen universal lessons, earned the hard way across three projects, automatically informing every future sprint. Each project makes the next one better.&lt;/p&gt;




&lt;h2&gt;Process Makes It Work&lt;/h2&gt;

&lt;p&gt;But models alone aren't enough. You need process.&lt;/p&gt;

&lt;p&gt;TAT runs sprint ceremonies. A sprint-start gate loads your spec, decisions, and lessons before you write a line of code. A sprint-end retro captures what shipped, what slipped, and why. Between them, every task goes through a checkpoint sequence: &lt;strong&gt;Plan, Code, Review, Ship.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F91xn3a5tuwq96x8jl58j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F91xn3a5tuwq96x8jl58j.png" alt="Claude Code showing multiple agents running in parallel" width="800" height="286"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Multiple agents running in parallel. Opus orchestrating while Sonnet subagents code.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The Review checkpoint is strict. Claude self-reviews the diff first: checks scope, looks for bugs, fixes what it finds. Then GPT reviews independently. Both results go to the user.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcw1m13u3whkv666vi76i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcw1m13u3whkv666vi76i.png" alt="TAT self-review checkpoint with all checks passing" width="800" height="120"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;The self-review gate: scope check, bug check, no untracked files. All before GPT even looks at it.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Git hooks enforce the rest. Conventional commit format. No direct pushes to main. Branch protection requiring PRs. These aren't suggestions. The hooks will reject your commit if you try to skip them.&lt;/p&gt;
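&lt;p&gt;A conventional-commit gate fits in a few lines of hook. This is a generic &lt;code&gt;commit-msg&lt;/code&gt; sketch, and the list of allowed types is my assumption, not TAT's actual hook:&lt;/p&gt;

```shell
#!/bin/sh
# Sketch of a commit-msg hook check. The allowed types here are an
# assumption, not necessarily TAT's list.
check_commit_msg() {
  # Conventional Commits shape: type(optional-scope)!: subject
  # e.g. "feat(auth): add conversation ownership check"
  echo "$1" | grep -Eq '^(feat|fix|docs|refactor|test|chore)(\([a-z0-9-]+\))?!?: .+'
}

# In .git/hooks/commit-msg you would run it on the first message line:
#   check_commit_msg "$(head -n1 "$1")" || { echo "commit rejected" >&2; exit 1; }
```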

&lt;p&gt;It sounds heavy. It's not. The checkpoints take seconds. And they catch things. Every. Single. Sprint.&lt;/p&gt;




&lt;h2&gt;TAT Builds Itself&lt;/h2&gt;

&lt;p&gt;The part I enjoy most: TAT builds itself.&lt;/p&gt;

&lt;p&gt;The workflow that manages sprints, reviews code, captures lessons, and enforces gates? It was built using its own process. Sprint 1 created the foundation. Sprint 2 added GPT integration. Sprint 3 built the brainstorming and article skills. By Sprint 7, TAT was consolidating lessons from other projects back into itself.&lt;/p&gt;

&lt;p&gt;Each sprint makes the system smarter. The lessons library grew from zero to fifteen entries. The review gates went from optional to mandatory. The parallel agent support means multiple tasks run simultaneously while Opus keeps orchestrating.&lt;/p&gt;




&lt;h2&gt;This Article Was Written, Reviewed, and Published by TAT&lt;/h2&gt;

&lt;p&gt;One more thing. This article was created using TAT itself.&lt;/p&gt;

&lt;p&gt;TAT's &lt;code&gt;/article&lt;/code&gt; skill scaffolded the folder structure, created the spec, and generated the draft. I reviewed it, left inline comments, and TAT revised it. After my approval, TAT committed the article, pushed the branch, and created the PR. The screenshots you see were taken during the same sprint that built the features being described.&lt;/p&gt;

&lt;p&gt;I'm the one human in the loop. And that's exactly how it should be. AI does the heavy lifting. I make the decisions.&lt;/p&gt;




&lt;h2&gt;Try It&lt;/h2&gt;

&lt;p&gt;TAT is open source. It's a set of Claude Code skills, shell scripts, and markdown files. No framework, no database, no dependencies beyond Claude Code and an OpenAI API key.&lt;/p&gt;

&lt;p&gt;If you're using AI to write code and you've ever thought &lt;em&gt;"I wish it would catch its own mistakes,"&lt;/em&gt; try giving it a colleague. A different model, with different training, looking at the same code.&lt;/p&gt;

&lt;p&gt;You might be surprised what it sees.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/farshi/tinyaiteam" rel="noopener noreferrer"&gt;github.com/farshi/tinyaiteam&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>softwareengineering</category>
      <category>devtools</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
