<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Dr. B </title>
    <description>The latest articles on Forem by Dr. B  (@codewithbg).</description>
    <link>https://forem.com/codewithbg</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3916127%2Fc42a3488-cc07-486c-bfde-63b55423dd65.jpg</url>
      <title>Forem: Dr. B </title>
      <link>https://forem.com/codewithbg</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/codewithbg"/>
    <language>en</language>
    <item>
      <title>I Built a Real-Time Hallucination Prevention System for LLMs Using Computer Vision</title>
      <dc:creator>Dr. B </dc:creator>
      <pubDate>Fri, 08 May 2026 13:42:00 +0000</pubDate>
      <link>https://forem.com/codewithbg/i-built-a-real-time-hallucination-prevention-system-for-llms-using-computer-vision-1age</link>
      <guid>https://forem.com/codewithbg/i-built-a-real-time-hallucination-prevention-system-for-llms-using-computer-vision-1age</guid>
      <description>&lt;p&gt;LLMs hallucinate. Everyone knows it. Most solutions involve better prompting, retrieval-augmented generation, or fine-tuning. All of these try to fix the problem &lt;em&gt;inside&lt;/em&gt; the language model.&lt;br&gt;
What if you used a camera to catch the LLM lying?&lt;br&gt;
That’s &lt;strong&gt;SENSE&lt;/strong&gt; - a real-time framework that takes an LLM’s claims about the world, checks them against live visual input using computer vision, and flags contradictions before they reach the user. Not prompt engineering. Not RAG. A live visual audit loop.&lt;/p&gt;


&lt;h2&gt;
  
  
  The Core Problem
&lt;/h2&gt;

&lt;p&gt;Imagine a robot or a smart assistant telling you “Hey! There is a red vase on the desk.” The LLM generated that description. But is it actually true? Is there really a red vase? Is it on the desk or somewhere else entirely?&lt;br&gt;
Traditional hallucination mitigation can’t answer this question because it only lives in text space. SENSE answers it by looking at the actual scene.&lt;/p&gt;


&lt;h2&gt;
  
  
  How It Works
&lt;/h2&gt;

&lt;p&gt;The system has three core pillars:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;VisionProbe&lt;/strong&gt; - The “eyes.” Takes a video frame and a list of LLM claims, runs object detection, and returns what it actually found, with confidence scores and bounding boxes.&lt;br&gt;
&lt;strong&gt;LogicGate&lt;/strong&gt; - The “judge.” Compares the LLM’s claims against the detected objects. If the LLM claimed “red vase” and vision found no red vase above a confidence threshold, it’s flagged as unverified or contradicted.&lt;br&gt;
&lt;strong&gt;TemporalTracker&lt;/strong&gt; - The “memory.” Holds detected objects across frames. This prevents false negatives from transient detection misses: if an object was confidently seen 3 frames ago but is briefly occluded, SENSE doesn’t immediately call the LLM a liar.&lt;br&gt;
The main loop looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;llm_claim&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;laptop&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;person&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;red vase&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;mouse&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;book&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;vase on desk&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="n"&gt;cap&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;isOpened&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
   &lt;span class="n"&gt;ret&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;frame&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cap&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
   &lt;span class="n"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;probe&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;probe_batch&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="n"&gt;frame&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;llm_claim&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
   &lt;span class="n"&gt;tracker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;update&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;results&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
   &lt;span class="n"&gt;buffered_results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;tracker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_buffered_detections&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;results&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
   &lt;span class="n"&gt;final_report&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;gate&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;audit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;llm_claim&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;buffered_results&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
   &lt;span class="n"&gt;viz&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;draw_results&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;frame&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;buffered_results&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;final_report&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;is_video&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every frame: detect → buffer → audit → visualize. Real-time, on live video.&lt;/p&gt;
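&lt;p&gt;To make the audit step concrete, here’s a minimal sketch of LogicGate-style matching. The names and API here are hypothetical and much simpler than the real module:&lt;/p&gt;

```python
# Minimal sketch of a LogicGate-style audit (hypothetical API, not the real
# SENSE module): each LLM claim is checked against detector confidences, and
# anything below the threshold is flagged instead of passed through.

def audit(claims, detections, threshold=0.3):
    """Return a status per claim: 'confirmed' or 'unverified'.

    detections maps label -> detector confidence for the current frame.
    """
    return {
        claim: "confirmed" if detections.get(claim, 0.0) >= threshold else "unverified"
        for claim in claims
    }

detections = {"laptop": 0.91, "person": 0.88, "mouse": 0.45}
report = audit(["laptop", "red vase", "mouse"], detections)
print(report["red vase"])  # unverified: no visual evidence for the claim
```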




&lt;h2&gt;
  
  
  Why This Approach Is Different
&lt;/h2&gt;

&lt;p&gt;Most hallucination research lives in the text domain. SENSE introduces a grounding signal from a completely separate modality, vision, which the LLM has no ability to fabricate or rationalize around.&lt;br&gt;
This is &lt;strong&gt;symbolic-neural grounding&lt;/strong&gt;: deterministic symbolic logic (the audit) constrains probabilistic neural output (the LLM). The LLM can generate whatever it wants. SENSE checks it against physical reality.&lt;br&gt;
The system is also optimized for NVIDIA Blackwell GPUs, so the vision pipeline runs fast enough for real-time use cases such as robotics, AR assistants, live captioning, and surveillance verification.&lt;/p&gt;




&lt;h2&gt;
  
  
  What the Visualizer Shows
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;Visualizer&lt;/code&gt; module draws results directly onto the video frame:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Green boxes&lt;/strong&gt;: LLM claims confirmed by vision&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Red boxes&lt;/strong&gt;: Objects detected but not claimed by LLM (model missed something)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Orange labels&lt;/strong&gt;: LLM claims with no visual evidence&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;FPS counter&lt;/strong&gt;: Because real-time means nothing if it’s running at 3 FPS&lt;/li&gt;
&lt;/ul&gt;
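&lt;p&gt;The color rule itself is simple to express. A sketch (hypothetical helper, not the real module), with BGR tuples as OpenCV expects:&lt;/p&gt;

```python
# Sketch of the Visualizer's color coding (hypothetical helper, not the real
# module). Colors are BGR tuples, the channel order OpenCV uses.
STATUS_COLORS = {
    "confirmed": (0, 255, 0),     # green box: claim confirmed by vision
    "unclaimed": (0, 0, 255),     # red box: detected but not claimed by the LLM
    "unverified": (0, 165, 255),  # orange label: claim with no visual evidence
}

def box_color(status):
    # White fallback for any status the table doesn't know about.
    return STATUS_COLORS.get(status, (255, 255, 255))
```

&lt;p&gt;In the real pipeline these tuples would feed calls like &lt;code&gt;cv2.rectangle&lt;/code&gt; and &lt;code&gt;cv2.putText&lt;/code&gt; on each frame.&lt;/p&gt;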




&lt;h2&gt;
  
  
  What’s Built vs. What’s Next
&lt;/h2&gt;

&lt;p&gt;The current framework has:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Real-time video loop with live audit&lt;/li&gt;
&lt;li&gt;Temporal buffering to reduce false negatives&lt;/li&gt;
&lt;li&gt;Threshold-tunable LogicGate (&lt;code&gt;threshold=0.3&lt;/code&gt; by default)&lt;/li&gt;
&lt;li&gt;GPU-accelerated vision probe&lt;/li&gt;
&lt;/ul&gt;
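&lt;p&gt;The temporal buffering above can be sketched in a few lines. This is a hypothetical class; the real TemporalTracker is richer (it also carries confidences and boxes across frames):&lt;/p&gt;

```python
from collections import deque

# Sketch of temporal buffering (hypothetical class, not the real
# TemporalTracker). A label counts as "seen" if it appeared in any of the
# last N frames, so a brief occlusion doesn't immediately contradict the LLM.

class TemporalBuffer:
    def __init__(self, window=5):
        self.frames = deque(maxlen=window)  # each entry: labels seen in one frame

    def update(self, detected_labels):
        self.frames.append(set(detected_labels))

    def seen_recently(self, label):
        return any(label in frame for frame in self.frames)

buf = TemporalBuffer(window=3)
buf.update({"laptop", "vase"})
buf.update({"laptop"})            # vase briefly occluded this frame
print(buf.seen_recently("vase"))  # True: still inside the 3-frame window
```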

&lt;p&gt;What’s coming:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Spatial reasoning&lt;/strong&gt; - not just &lt;em&gt;what&lt;/em&gt; is in the scene but &lt;em&gt;where&lt;/em&gt;. “Vase on desk” requires positional verification, not just object detection.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Relational claims&lt;/strong&gt; - handling claims like “the laptop is next to the mouse” which require understanding object relationships&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LLM feedback loop&lt;/strong&gt; - sending audit results back to the LLM so it can self-correct, completing the loop&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Benchmarking&lt;/strong&gt; - running against standard hallucination datasets to get quantitative grounding accuracy&lt;/li&gt;
&lt;/ul&gt;
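&lt;p&gt;To show what the spatial reasoning item would involve, here’s a sketch of the positional check “vase on desk” needs. This is future work, not something SENSE implements today, and the helper is hypothetical:&lt;/p&gt;

```python
# Sketch of the positional check "vase on desk" would require (future work,
# hypothetical helper). Boxes are (x1, y1, x2, y2) in image coordinates,
# with y growing downward.

def is_on(top_box, surface_box, tolerance=20):
    """True if top_box rests on surface_box: the two overlap horizontally
    and the top box's bottom edge sits near the surface's top edge."""
    tx1, _, tx2, t_bottom = top_box
    sx1, s_top, sx2, _ = surface_box
    overlaps_x = tx1 < sx2 and sx1 < tx2
    return overlaps_x and abs(t_bottom - s_top) <= tolerance

vase = (100, 50, 140, 120)
desk = (0, 110, 400, 300)
print(is_on(vase, desk))  # True: the vase's bottom edge sits near the desk's top
```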




&lt;h2&gt;
  
  
  Why This Has Research Value
&lt;/h2&gt;

&lt;p&gt;Multimodal hallucination detection, specifically using live visual grounding as an audit mechanism, is a relatively unexplored niche. Most published work focuses on post-hoc text evaluation or retrieval-augmented generation. A real-time vision-based audit layer is architecturally novel and has concrete applications in robotics, autonomous systems, and AR.&lt;br&gt;
This is heading toward an IEEE paper. If you’re working in this space, I’d genuinely like to connect.&lt;/p&gt;




&lt;h2&gt;
  
  
  Try It
&lt;/h2&gt;

&lt;p&gt;Code is on GitHub: &lt;strong&gt;&lt;a href="https://github.com/zosob/SENSE" rel="noopener noreferrer"&gt;github.com/zosob/SENSE&lt;/a&gt;&lt;/strong&gt;&lt;br&gt;
Requirements: Python, PyTorch, and OpenCV; a GPU helps. Webcam or video file as input.&lt;br&gt;
Drop a comment if you have thoughts on the spatial reasoning problem; it’s the hardest part, and I’m still working through it.&lt;/p&gt;

</description>
      <category>python</category>
      <category>ai</category>
      <category>computervision</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>I Built a Multi-Agent Coding System From Scratch in Python (No Frameworks)</title>
      <dc:creator>Dr. B </dc:creator>
      <pubDate>Wed, 06 May 2026 14:31:51 +0000</pubDate>
      <link>https://forem.com/codewithbg/i-built-a-multi-agent-coding-system-from-scratch-in-python-no-frameworks-44l9</link>
      <guid>https://forem.com/codewithbg/i-built-a-multi-agent-coding-system-from-scratch-in-python-no-frameworks-44l9</guid>
      <description>&lt;p&gt;Most multi-agent AI tutorials hand you LangChain, AutoGen, or CrewAI and say “here you go.” You wire a few abstractions together, get something running, and never really understand what’s happening under the hood.&lt;br&gt;
I wanted to understand what’s actually happening under the hood. So I built one from scratch.&lt;br&gt;
This is the story of &lt;strong&gt;multi-agent-coder&lt;/strong&gt;, a system where a Planner decides at runtime which AI agent to call next, and a team of specialized agents (Architect, Engineer, Critic, TestRunner, Refactorer) collaborates to turn a plain-English request into working, tested Python code.&lt;/p&gt;

&lt;p&gt;No LangChain. No AutoGen. Pure Python.&lt;/p&gt;


&lt;h2&gt;
  
  
  The Core Idea
&lt;/h2&gt;

&lt;p&gt;The central insight is simple: one LLM trying to do everything is worse than multiple LLMs each doing one thing well.&lt;br&gt;
A single prompt asking an AI to “plan the architecture, write the code, review it for bugs, and refactor it” produces mediocre results across all four. But if you give each job to a separate agent with a focused system prompt and its own memory, the output quality goes up dramatically.&lt;br&gt;
The challenge: who decides which agent runs next?&lt;/p&gt;

&lt;p&gt;That’s where the Planner comes in.&lt;/p&gt;


&lt;h2&gt;
  
  
  Architecture Overview
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fog02y0bzojhde01ck2gc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fog02y0bzojhde01ck2gc.png" alt="A flowchart of the architecture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Planner&lt;/strong&gt; receives the full current state (user request + each agent’s memory) as a JSON blob and responds with a JSON decision:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
 &lt;/span&gt;&lt;span class="nl"&gt;"next_agent"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Engineer"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
 &lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Implement the file structure from the Architect's plan"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
 &lt;/span&gt;&lt;span class="nl"&gt;"reason"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Architecture is complete, time to write code"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the architectural heart of the system. Instead of hardcoding a sequence, the Planner &lt;em&gt;reasons&lt;/em&gt; about what needs to happen next. It can send work back to the Engineer after the Critic finds a bug. It can skip the Refactorer if the code is already clean. It decides when to stop.&lt;/p&gt;
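&lt;p&gt;The dispatch loop that consumes these decisions can be sketched as follows. The names are hypothetical and this is simplified; the real controller also handles invalid JSON and retries:&lt;/p&gt;

```python
import json

# Sketch of Planner-driven routing (hypothetical names, simplified from the
# idea in the post). The Planner returns a JSON decision; the controller
# dispatches to the named agent until the Planner decides to stop.

def run_loop(planner, agents, state, max_steps=10):
    for _ in range(max_steps):
        decision = json.loads(planner(state))
        name = decision["next_agent"]
        if name == "DONE":  # hypothetical stop token
            break
        # Store the agent's output under its memory namespace.
        state[f"{name.lower()}_memory"] = agents[name](decision["message"], state)
    return state

decisions = iter([
    '{"next_agent": "Engineer", "message": "implement the plan", "reason": "plan ready"}',
    '{"next_agent": "DONE", "message": "", "reason": "work complete"}',
])
fake_planner = lambda state: next(decisions)
fake_agents = {"Engineer": lambda msg, state: f"did: {msg}"}

final = run_loop(fake_planner, fake_agents, {"user_request": "demo"})
print(final["engineer_memory"])  # did: implement the plan
```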




&lt;h2&gt;
  
  
  The Agents
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Architect
&lt;/h3&gt;

&lt;p&gt;Takes the user’s request and produces a concrete plan: file structure, module breakdown, and step-by-step engineering approach. It writes the blueprint.&lt;/p&gt;

&lt;h3&gt;
  
  
  Engineer
&lt;/h3&gt;

&lt;p&gt;Reads the plan (and any Critic feedback) and writes actual Python files. It outputs code in fenced blocks labeled with filenames, which the controller parses and writes to disk automatically.&lt;/p&gt;
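&lt;p&gt;That parse-and-write step can be sketched with a regex. The &lt;code&gt;filename=&lt;/code&gt; label format here is hypothetical; the real controller may use a different convention:&lt;/p&gt;

```python
import re
from pathlib import Path

# Sketch of parsing Engineer output into files (hypothetical block format:
# a code fence labeled `filename=...`; the real controller may differ).
FENCE = chr(96) * 3  # three backticks, built indirectly to keep this snippet fence-safe
FILE_RE = re.compile(FENCE + r"\w*[ \t]+filename=(\S+)\n(.*?)" + FENCE, re.DOTALL)

def extract_files(llm_output):
    """Return {filename: source} for every labeled fenced block."""
    return {name: body for name, body in FILE_RE.findall(llm_output)}

def write_files(llm_output, workspace="workspace"):
    # Write each extracted block to disk under the workspace directory.
    for name, body in extract_files(llm_output).items():
        path = Path(workspace) / name
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(body)

sample = FENCE + "python filename=app.py\nprint('hi')\n" + FENCE
print(extract_files(sample))  # {'app.py': "print('hi')\n"}
```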

&lt;h3&gt;
  
  
  Critic
&lt;/h3&gt;

&lt;p&gt;Reviews the generated code against the original plan, checking for correctness, edge cases, and consistency. It doesn’t rewrite anything itself; it gives structured feedback for the Engineer to act on.&lt;/p&gt;

&lt;h3&gt;
  
  
  TestRunner
&lt;/h3&gt;

&lt;p&gt;Runs &lt;code&gt;pytest&lt;/code&gt; on the workspace and reports results.&lt;/p&gt;

&lt;h3&gt;
  
  
  Refactorer
&lt;/h3&gt;

&lt;p&gt;Once the code passes tests and gets Critic approval, the Refactorer makes a final pass for code quality: naming, structure, clarity, removing redundancy.&lt;/p&gt;




&lt;h2&gt;
  
  
  Memory Management
&lt;/h2&gt;

&lt;p&gt;Each agent has its own memory namespace. The &lt;code&gt;MemoryManager&lt;/code&gt; tracks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Agent memory&lt;/strong&gt; - each agent’s last output (plan, code, review, etc.)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Loop memory&lt;/strong&gt; - shared state like last test output and last review&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Project memory&lt;/strong&gt; - file list and overall project context&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Every time the Planner makes a decision, it sees the full current state. This is what allows it to reason across multiple loops: it knows the Critic flagged a bug last round, and it knows the Engineer already tried to fix it once.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;state&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user_request&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;user_request&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
   &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;project_memory&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;memory&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_project&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
   &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;loop_memory&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;memory&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_loop&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
   &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;architect_memory&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;memory&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Architect&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
   &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;engineer_memory&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;memory&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Engineer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
   &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;critic_memory&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;memory&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Critic&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
   &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;refactorer_memory&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;memory&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Refactorer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Why “From Scratch”?
&lt;/h2&gt;

&lt;p&gt;Frameworks like LangChain are powerful, but they hide the orchestration logic behind layers of abstraction. When something breaks, debugging is painful. When you want to customize, you’re fighting the framework.&lt;br&gt;
Building from scratch means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Every line is intentional&lt;/strong&gt; - you understand why it’s there&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The orchestration logic is transparent&lt;/strong&gt; - &lt;code&gt;controller.py&lt;/code&gt; is ~170 lines and readable top to bottom&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;It’s easy to extend&lt;/strong&gt; - adding a new agent is just writing a new class and teaching the Planner about it&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;It’s a great learning tool&lt;/strong&gt; - if you’re teaching agentic AI patterns, this is the codebase you want&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  What I Learned
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. The Planner is the hardest part.&lt;/strong&gt;&lt;br&gt;
Getting the Planner’s system prompt right took the most iteration. It needs to reliably return valid JSON, reason about state correctly, and know when to stop. Handling &lt;code&gt;json.JSONDecodeError&lt;/code&gt; and retrying is essential.&lt;br&gt;
&lt;strong&gt;2. Shared memory is both the strength and the risk.&lt;/strong&gt;&lt;br&gt;
Agents produce more context each loop, which is useful, but if memory grows unbounded you hit token limits fast. Thoughtful memory summarization is a real engineering challenge.&lt;br&gt;
&lt;strong&gt;3. The Critic loop is where quality emerges.&lt;/strong&gt;&lt;br&gt;
The Engineer → Critic → Engineer loop is where the magic happens. A single Engineer pass produces okay code. Three loops produce something genuinely good. This mirrors how human code review works.&lt;br&gt;
&lt;strong&gt;4. TestRunner grounds the whole system.&lt;/strong&gt;&lt;br&gt;
Without real test execution, agents can convince themselves the code works when it doesn’t. Plugging pytest into the loop and feeding actual failure output back to the Engineer is what makes the system reliable.&lt;/p&gt;
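&lt;p&gt;The retry pattern from lesson 1 can be sketched as follows (hypothetical helper; the real controller’s version may differ):&lt;/p&gt;

```python
import json

# Sketch of the JSON-retry pattern from lesson 1 (hypothetical helper):
# re-ask the model until it returns valid JSON, up to a retry budget.

def ask_for_json(call_llm, prompt, retries=3):
    last_err = None
    for _ in range(retries):
        raw = call_llm(prompt)
        try:
            return json.loads(raw)
        except json.JSONDecodeError as err:
            last_err = err
            prompt += "\nYour last reply was not valid JSON. Reply with JSON only."
    raise ValueError(f"no valid JSON after {retries} attempts: {last_err}")

replies = iter(["sure, here's the plan!", '{"next_agent": "Critic"}'])
decision = ask_for_json(lambda p: next(replies), "decide the next agent")
print(decision)  # {'next_agent': 'Critic'}
```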




&lt;h2&gt;
  
  
  What’s Next
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Benchmarking against HumanEval&lt;/strong&gt; to get quantitative results&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Smarter memory summarization&lt;/strong&gt; to handle longer projects&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A web UI&lt;/strong&gt; to visualize the agent loop in real time&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Potential IEEE paper&lt;/strong&gt; on the Planner-driven dynamic routing approach&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Try It
&lt;/h2&gt;

&lt;p&gt;The full code is on GitHub: &lt;strong&gt;&lt;a href="https://github.com/zosob/multi-agent-coder" rel="noopener noreferrer"&gt;github.com/zosob/multi-agent-coder&lt;/a&gt;&lt;/strong&gt;&lt;br&gt;
It’s intentionally minimal and transparent. Fork it, add your own agent, break things, understand them. That’s the point.&lt;br&gt;
If you have questions, thoughts, or want to collaborate on the benchmarking work, please drop a comment or open an issue. Be kind!&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Built with pure Python and curiosity. No frameworks harmed in the making of this system.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>ai</category>
      <category>machinelearning</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
