<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Rain_bitfrog</title>
    <description>The latest articles on Forem by Rain_bitfrog (@bitfrog).</description>
    <link>https://forem.com/bitfrog</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3842015%2Fb7865322-ddd3-4282-af29-45c0d7a06f26.png</url>
      <title>Forem: Rain_bitfrog</title>
      <link>https://forem.com/bitfrog</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/bitfrog"/>
    <language>en</language>
    <item>
      <title>How I Built 7 AI Agents That Bring Discipline to VS Code Copilot</title>
      <dc:creator>Rain_bitfrog</dc:creator>
      <pubDate>Tue, 24 Mar 2026 17:50:42 +0000</pubDate>
      <link>https://forem.com/bitfrog/the-copilot-workflow-handoffs-not-chaos-1598</link>
      <guid>https://forem.com/bitfrog/the-copilot-workflow-handoffs-not-chaos-1598</guid>
<description>&lt;p&gt;If you love VS Code but wish your AI coding workflow were more structured and its output more accurate, this tutorial is for you.&lt;br&gt;
I built BitFrog Copilot — a free, open-source VS Code extension that adds 7+1 specialized AI agents to GitHub Copilot Chat. Instead of one general-purpose agent doing everything, each agent has a clear role: brainstorming, planning, TDD execution, debugging, code review, mentoring, and UI design.&lt;br&gt;
In this post, I'll show you how to install it and walk through three real scenarios so you can see it in action.&lt;/p&gt;

&lt;p&gt;Install in 30 Seconds&lt;br&gt;
Option 1 — VS Code Marketplace:&lt;/p&gt;

&lt;p&gt;Open VS Code&lt;br&gt;
Go to Extensions (Ctrl+Shift+X)&lt;br&gt;
Search "BitFrog Copilot"&lt;br&gt;
Click Install&lt;/p&gt;

&lt;p&gt;Option 2 — Agent Plugin:&lt;br&gt;
Extensions sidebar → Agent Plugins → Search "bitfrog-copilot"&lt;br&gt;
That's it. No API keys, no config files.&lt;br&gt;
Requirements: VS Code with GitHub Copilot enabled. Works with GPT-5.4, Claude Opus 4.6, Gemini 3.1 Pro — your choice.&lt;/p&gt;

&lt;p&gt;Meet the Agents&lt;br&gt;
@bitfrog — Main router: figures out what you need and sends you to the right agent&lt;br&gt;
@bitfrog-brainstorm — Explores ideas and designs before you write code&lt;br&gt;
@bitfrog-plan — Maps dependencies, decomposes tasks into ordered steps&lt;br&gt;
@bitfrog-execute — TDD implementation: writes tests first, then code&lt;br&gt;
@bitfrog-debug — Diagnoses root causes using a structured four-method approach&lt;br&gt;
@bitfrog-review — Three-level code review: spec → quality → intent&lt;br&gt;
@bitfrog-mentor — Teaches through questions, never gives direct answers&lt;br&gt;
@bitfrog-ui-design — UX research and interface design&lt;/p&gt;

&lt;p&gt;Scenario 1: Build a Feature with TDD&lt;br&gt;
Let's say you need to add a password validation function.&lt;br&gt;
Step 1 — Brainstorm the requirements:&lt;br&gt;
@bitfrog-brainstorm I need a password validation function. &lt;br&gt;
It should check length, special characters, and common passwords.&lt;br&gt;
The agent won't just say "sure, here's a function." It will ask you questions: What's the minimum length? Do you need Unicode support? Should it check against a breach database? It investigates the real problem before proposing solutions.&lt;br&gt;
Step 2 — Plan the implementation:&lt;br&gt;
@bitfrog-plan Let's implement the password validator we just designed.&lt;br&gt;
The plan agent maps out dependencies and gives you an ordered task list. Each task is a handoff — you decide when to move forward.&lt;br&gt;
Step 3 — Execute with TDD:&lt;br&gt;
@bitfrog-execute Start with task 1 from the plan.&lt;br&gt;
The execute agent writes tests first, then implementation. Not because it's forced to — because it understands why TDD matters. It spawns parallel sub-agents for independent tasks when possible.&lt;/p&gt;
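&lt;p&gt;To make the TDD step concrete, here's a minimal sketch of the kind of test-first output you might get for the password validator. The function name, the specific rules (minimum length 12, one special character, a tiny common-password denylist), and the error messages are my illustrative assumptions, not BitFrog's actual output.&lt;/p&gt;

```python
# Illustrative sketch only: the rules below (length >= 12, one special
# character, a small common-password denylist) are assumptions for the
# example, not what the execute agent would necessarily produce.

COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein"}
SPECIAL_CHARS = set("!@#$%^&*()-_=+[]{};:,.<>?")

def validate_password(password: str) -> list[str]:
    """Return a list of rule violations; an empty list means valid."""
    errors = []
    if len(password) < 12:
        errors.append("too short: minimum 12 characters")
    if not any(ch in SPECIAL_CHARS for ch in password):
        errors.append("missing special character")
    if password.lower() in COMMON_PASSWORDS:
        errors.append("password is too common")
    return errors

# Written first, in the test-driven spirit the agent follows:
def test_validate_password():
    assert validate_password("correct-horse-battery!") == []
    assert "too short: minimum 12 characters" in validate_password("abc!")
    assert "missing special character" in validate_password("averylongpassword")
    assert "password is too common" in validate_password("qwerty")
```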

&lt;p&gt;Scenario 2: Debug a Tricky Bug&lt;br&gt;
You've got a bug where API responses are randomly empty. Instead of just reading the error message and guessing:&lt;br&gt;
@bitfrog-debug Our /api/users endpoint sometimes returns empty &lt;br&gt;
arrays even though the database has data.&lt;br&gt;
The debug agent uses a four-diagnostic-method approach (borrowed from traditional Chinese medicine):&lt;/p&gt;

&lt;p&gt;Observe — What are the visible symptoms?&lt;br&gt;
Listen — What do the logs and metrics say?&lt;br&gt;
Ask — What changed recently? What's the pattern?&lt;br&gt;
Examine — Check the actual code path and data flow&lt;/p&gt;

&lt;p&gt;It diagnoses the level of the problem (is this a network issue? a race condition? a query bug?) before suggesting a fix. No more guessing.&lt;/p&gt;
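&lt;p&gt;One way to picture that level-of-the-problem step is as findings from the four passes feeding a classification. The categories and finding names below are my own sketch of the idea, not BitFrog's internals:&lt;/p&gt;

```python
# Hypothetical sketch: mapping four-pass findings for the
# "sometimes-empty /api/users" bug to a problem level. The finding
# keys and levels are illustrative, not the extension's actual logic.

DIAGNOSTIC_PASSES = {
    "observe": "Which requests return empty arrays? All of them, or only some?",
    "listen":  "Do logs and metrics show query errors, timeouts, or cache misses?",
    "ask":     "What changed recently: a deploy, a migration, new traffic patterns?",
    "examine": "Trace the code path: handler -> cache -> query -> serializer.",
}

def classify_level(findings: dict[str, bool]) -> str:
    """Map evidence gathered in the four passes to a problem level,
    so the fix targets the right layer."""
    if findings.get("timeouts_in_logs"):
        return "network"
    if findings.get("only_under_concurrency"):
        return "race condition"
    if findings.get("query_returns_empty"):
        return "query bug"
    return "needs more investigation"
```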

&lt;p&gt;Scenario 3: Code Review Before Merging&lt;br&gt;
You've finished a feature and want a thorough review:&lt;br&gt;
@bitfrog-review Review my changes in the auth module.&lt;br&gt;
The review agent runs three passes:&lt;/p&gt;

&lt;p&gt;Spec compliance — Does it do what it's supposed to do?&lt;br&gt;
Code quality — Are there maintainability issues, performance problems, or security concerns?&lt;br&gt;
User intent — Does it actually solve the user's problem, or just implement the ticket?&lt;/p&gt;

&lt;p&gt;That third pass is where most reviews fall short. The agent asks: "The spec says add rate limiting, but is 100 requests/minute the right limit for this use case?"&lt;/p&gt;

&lt;p&gt;The Workflow: Handoffs, Not Chaos&lt;br&gt;
The agents aren't isolated. They form a workflow:&lt;br&gt;
brainstorm → plan → execute → review&lt;br&gt;
                       ↕&lt;br&gt;
                     debug&lt;br&gt;
Each transition is a handoff button — the current agent suggests the next step, and you decide when to proceed. The debug and mentor agents can be called independently at any time.&lt;br&gt;
This is the key difference from vanilla Copilot: instead of one agent context-switching between planning, coding, and reviewing, each agent stays focused on what it's best at.&lt;/p&gt;
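&lt;p&gt;A toy model makes the handoff rule easy to see: each agent suggests a fixed set of next steps, while debug and mentor are reachable from anywhere. This transition table is my illustration, not the extension's actual routing code:&lt;/p&gt;

```python
# Toy model of the handoff workflow: brainstorm -> plan -> execute -> review,
# with debug and mentor callable from any stage. Illustrative only; the
# real extension's routing may differ.

HANDOFFS = {
    "brainstorm": {"plan"},
    "plan": {"execute"},
    "execute": {"review", "debug"},
    "review": {"execute"},   # a review can hand work back for fixes
    "debug": {"execute"},
}
ALWAYS_AVAILABLE = {"debug", "mentor"}

def can_hand_off(current: str, target: str) -> bool:
    """A handoff is valid if the target is a suggested next step
    or one of the always-available agents."""
    return target in HANDOFFS.get(current, set()) or target in ALWAYS_AVAILABLE
```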

&lt;p&gt;Why Chinese Philosophy?&lt;br&gt;
Most AI coding agents use hard rules: "you MUST write tests," "you MUST NOT skip root cause analysis." This works, but the model follows the letter and misses the spirit.&lt;br&gt;
BitFrog uses five philosophical principles to shape how agents think:&lt;/p&gt;

&lt;p&gt;格物致知 (Investigate First) — Understand the real problem before acting&lt;br&gt;
知行合一 (Unity of Knowing and Doing) — If you know you should write tests, write them. No excuses.&lt;br&gt;
辨证论治 (Diagnose Before Treating) — Same symptom, different root causes. Diagnose first.&lt;br&gt;
中庸之道 (Right Measure) — Every action has a right amount. Don't over-engineer, don't under-engineer.&lt;br&gt;
三省吾身 (Three Reflections) — Quality comes from reflecting on your process, not checking boxes.&lt;/p&gt;

&lt;p&gt;Sounds weird? Yeah. Works surprisingly well.&lt;/p&gt;

&lt;p&gt;Try It Now&lt;/p&gt;

&lt;p&gt;GitHub: github.com/rainyulei/bitfrog-copilot&lt;br&gt;
VS Code Marketplace: BitFrog Copilot&lt;br&gt;
License: MIT — fully open source&lt;/p&gt;

&lt;p&gt;Star the repo if you find it useful. Issues and PRs welcome.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>githubcopilot</category>
      <category>vscode</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
