<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Amit Maraj</title>
    <description>The latest articles on Forem by Amit Maraj (@agenticamit).</description>
    <link>https://forem.com/agenticamit</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3652060%2Fe9c989f4-c531-4cc8-9d61-eb552f2f60d8.png</url>
      <title>Forem: Amit Maraj</title>
      <link>https://forem.com/agenticamit</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/agenticamit"/>
    <language>en</language>
    <item>
      <title>Agent Factory Recap: Cracking Open an Open Model</title>
      <dc:creator>Amit Maraj</dc:creator>
      <pubDate>Fri, 06 Feb 2026 19:10:28 +0000</pubDate>
      <link>https://forem.com/googleai/agent-factory-recap-cracking-open-an-open-model-42e6</link>
      <guid>https://forem.com/googleai/agent-factory-recap-cracking-open-an-open-model-42e6</guid>
      <description>&lt;p&gt;Welcome back to &lt;a href="https://www.youtube.com/playlist?list=PLIivdWyY5sqLXR1eSkiM5bE6pFlXC-OSs" rel="noopener noreferrer"&gt;The Agent Factory&lt;/a&gt;! In this episode, we’re joined by Ravin Kumar, a Research Engineer at DeepMind, to tackle one of the biggest topics in AI right now: building and training open-source agentic models. We wanted to go beyond just using agents and understand what it takes to build the entire factory line—from gathering data and supervised fine-tuning to reinforcement learning and evaluations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Agent Industry Pulse
&lt;/h2&gt;

&lt;p&gt;Timestamp: &lt;a href="https://www.youtube.com/watch?v=7YgUDf_JXN8&amp;amp;t=54s" rel="noopener noreferrer"&gt;2:00&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fql8gfwe2x33lom1ydlpd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fql8gfwe2x33lom1ydlpd.png" width="800" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before diving into the deep research, we looked at the latest developments in the fast-moving world of AI agents.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://blog.google/technology/google-deepmind/gemini-computer-use-model/" rel="noopener noreferrer"&gt;Gemini 2.5 Computer Use&lt;/a&gt;: Google's new model can act as a virtual user, interacting with computer screens, clicking buttons, typing in forms, and scrolling. It’s a shift from agents that just know things to agents that can do tasks directly in a browser.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://blog.google/technology/developers/introducing-vibe-coding-in-google-ai-studio/" rel="noopener noreferrer"&gt;Vibe Coding in AI Studio&lt;/a&gt;: A new approach to app building where you describe the "vibe" of the application you want, and the AI handles the boilerplate. It includes an Annotation Mode to refine specific UI elements with simple instructions like "Change this to green."&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://arxiv.org/abs/2510.18234" rel="noopener noreferrer"&gt;DeepSeek-OCR and Context Compression&lt;/a&gt;: DeepSeek introduced a method that treats documents like images to understand layout, compressing 10-20 text tokens into a single visual token. This drastically improves speed and reduces cost for long-context tasks.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://blog.google/technology/ai/veo-updates-flow/" rel="noopener noreferrer"&gt;Google Veo 3.1 and Flow&lt;/a&gt;: The new update to the AI video model adds rich audio generation and powerful editing features. You can now use "Insert" to add characters or "Remove" to erase objects from existing video footage, giving creators iterative control.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Ravin Kumar on Building Open Models
&lt;/h2&gt;

&lt;p&gt;We sat down with Ravin to break down the end-to-end process of creating an open model with agent capabilities. It turns out the process mirrors a traditional ML lifecycle but with significantly more complex components.&lt;/p&gt;

&lt;h3&gt;
  
  
  Defining Agent Data
&lt;/h3&gt;

&lt;p&gt;Timestamp: &lt;a href="https://youtu.be/7YgUDf_JXN8?si=r8PP24GP0o--DmQc&amp;amp;t=895" rel="noopener noreferrer"&gt;14:55&lt;/a&gt;&lt;br&gt;
Ravin explained that training data for agents looks vastly different from standard text datasets. It starts with identifying what users actually need. The data itself is a collection of trajectories, complex examples of the model making decisions and using tools. Ravin noted that they use a mix of human-curated data and synthetic data generated by their own internal "teacher" models and APIs to create a playground for the open models to learn in.&lt;/p&gt;

&lt;h3&gt;
  
  
  Training Techniques: SFT and Reinforcement Learning
&lt;/h3&gt;

&lt;p&gt;Timestamp: &lt;a href="https://youtu.be/7YgUDf_JXN8?si=lGRLwhn00IBx5Vj0&amp;amp;t=1034" rel="noopener noreferrer"&gt;17:14&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Once the data is ready, the training process involves a two-phase approach. First comes Supervised Fine-Tuning (SFT), where frameworks update the model's weights to nudge it into new behaviors based on the examples. However, to handle generalization—new situations not in the original training data—they rely on Reinforcement Learning (RL). Ravin highlighted the difficulty of setting rewards in RL, warning that models are prone to "reward hacking," where they might collect intermediate rewards without ever completing the final task.&lt;/p&gt;
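&lt;p&gt;To make the reward-hacking risk concrete, here is a toy Python sketch (our illustration, not DeepMind's actual setup): a per-step "shaped" reward lets an agent rack up points on pointless tool calls, while an outcome-only reward pays nothing until the task is actually done.&lt;/p&gt;

```python
# Toy illustration of reward hacking: two reward schemes for an agent
# trajectory (a list of tool calls) plus a flag for task completion.

def shaped_reward(trajectory, task_done):
    # Pays 0.1 per tool call, plus a bonus for finishing the task.
    return 0.1 * len(trajectory) + (1.0 if task_done else 0.0)

def outcome_reward(trajectory, task_done):
    # Pays only when the final task is actually completed.
    return 1.0 if task_done else 0.0

# A "hacking" agent loops on cheap tool calls without ever finishing;
# an honest agent finishes in two calls.
hacked = ["search"] * 20
honest = ["search", "write"]

print(shaped_reward(hacked, False))   # 2.0 -- hacking beats...
print(shaped_reward(honest, True))    # 1.2 -- ...honest completion
print(outcome_reward(hacked, False))  # 0.0
print(outcome_reward(honest, True))   # 1.0
```

Under the shaped scheme the hacking trajectory scores higher than the honest one, which is exactly the failure mode Ravin warns about; the outcome-only scheme removes the incentive, at the cost of a sparser training signal.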

&lt;h3&gt;
  
  
  The Stakes of Evaluation
&lt;/h3&gt;

&lt;p&gt;Timestamp: &lt;a href="https://youtu.be/7YgUDf_JXN8?si=CiWVnqgYaDPV3MV7&amp;amp;t=1211" rel="noopener noreferrer"&gt;20:10&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Ravin emphasized that evaluation is the most critical and high-stakes part of the process. You can't just trust the training process; you need a rigorous "final exam." They use a combination of broad public benchmarks to measure general capability and specific, custom evaluations to ensure the model is safe and effective for its intended use case.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This conversation with Ravin Kumar really illuminated that building open agentic models is a highly structured, rigorous process. It requires creating high-quality trajectories for data, a careful combination of supervised and reinforcement learning, and, crucially, intense evaluation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Your turn to build
&lt;/h2&gt;

&lt;p&gt;As Ravin advised, the best place to start is at the end. Before you write a single line of training code, define what success looks like by building a small, 50-example final exam for your agent. If you can't measure it, you can't improve it. We also encourage you to try mixing different approaches; for example, using a powerful API model like Gemini as a router and a specialized open-source model for specific tasks.&lt;/p&gt;
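&lt;p&gt;The "final exam" idea can be sketched in a few lines of Python. The agent callable and the exact-match grader here are placeholder assumptions; a real exam would use your own 50 examples and a grader appropriate to the task.&lt;/p&gt;

```python
# Hypothetical sketch of a tiny eval harness: score an agent callable
# against a fixed exam of (prompt, expected) pairs.

def run_exam(agent, exam):
    """Return the fraction of exam questions the agent answers exactly."""
    passed = sum(1 for prompt, expected in exam
                 if agent(prompt).strip() == expected)
    return passed / len(exam)

# A stand-in agent for demonstration only:
toy_agent = lambda prompt: "4" if prompt == "2+2?" else "unsure"

exam = [("2+2?", "4"), ("capital of France?", "Paris")]
print(f"pass rate: {run_exam(toy_agent, exam):.0%}")  # pass rate: 50%
```

Freezing the exam before training starts gives you the measurable baseline the advice above calls for.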

&lt;p&gt;Check out the full episode for more details, and catch us next time!&lt;/p&gt;

&lt;h2&gt;
  
  
  Connect with us
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Ivan Nardini → &lt;a href="https://www.linkedin.com/in/ivan-nardini/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, &lt;a href="https://x.com/ivnardini" rel="noopener noreferrer"&gt;X&lt;/a&gt;, &lt;a href="https://bsky.app/profile/ivnardini.bsky.social" rel="noopener noreferrer"&gt;Bsky&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Amit Maraj → &lt;a href="https://www.linkedin.com/in/amit-maraj/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, &lt;a href="https://x.com/agenticamit" rel="noopener noreferrer"&gt;X&lt;/a&gt;, &lt;a href="https://www.tiktok.com/@agenticamit" rel="noopener noreferrer"&gt;TikTok&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Ravin Kumar → &lt;a href="https://www.linkedin.com/in/ravinakumar/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>gemini</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Building a Multi-Agent Deep Research Tool with Google ADK, A2A, &amp; Cloud Run</title>
      <dc:creator>Amit Maraj</dc:creator>
      <pubDate>Tue, 30 Dec 2025 03:29:32 +0000</pubDate>
      <link>https://forem.com/googleai/building-a-multi-agent-deep-research-tool-with-google-adk-a2a-cloud-run-2ldj</link>
      <guid>https://forem.com/googleai/building-a-multi-agent-deep-research-tool-with-google-adk-a2a-cloud-run-2ldj</guid>
      <description>&lt;p&gt;"Research" is a loaded word. It’s not just Googling a keyword. It’s reading papers, verifying facts, finding that &lt;em&gt;one&lt;/em&gt; perfect diagram, and synthesizing it all into something coherent.&lt;/p&gt;

&lt;p&gt;Asking a single AI agent to do all of that sequentially is not very efficient. They’ll hallucinate, they’ll get stuck, and they’ll definitely be slow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0x1rl0mywjtyugavrtuq.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0x1rl0mywjtyugavrtuq.gif" alt="Deep Researcher Tool" width="600" height="444"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;(TL;DR: Want the code? Check out the &lt;a href="https://github.com/amitkmaraj/deep-research-agentic-architecture" rel="noopener noreferrer"&gt;&lt;strong&gt;Deep Research Agent code&lt;/strong&gt; on GitHub&lt;/a&gt;.)&lt;/p&gt;

&lt;p&gt;I wanted a system that could take a topic—say, "The History of Recurrent Neural Networks"—and produce a comprehensive, illustrated report. Additionally, I wanted to learn how to build a Deep Research Tool from scratch.&lt;/p&gt;

&lt;p&gt;The first attempt? A single loop. It researched, then it looked for images, then it checked its work. It took forever.&lt;/p&gt;

&lt;p&gt;So I asked: &lt;strong&gt;Can I make this faster?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this post, we’re going to build a &lt;strong&gt;Parallel Research Squad&lt;/strong&gt;. Instead of one agent doing everything, we’ll spin up three specialized agents that run &lt;em&gt;simultaneously&lt;/em&gt;, coordinated by a central Orchestrator. We’ll use &lt;a href="https://github.com/google/adk-python" rel="noopener noreferrer"&gt;&lt;strong&gt;Google’s Agent Development Kit (ADK)&lt;/strong&gt;&lt;/a&gt; for the brains, the &lt;a href="https://google.github.io/adk-docs/a2a/" rel="noopener noreferrer"&gt;&lt;strong&gt;Agent-to-Agent (A2A) Protocol&lt;/strong&gt;&lt;/a&gt; for communication, and &lt;a href="https://cloud.google.com/run?utm_campaign=CDR_0x5f9e213a_default_b472372936&amp;amp;utm_medium=external&amp;amp;utm_source=blog" rel="noopener noreferrer"&gt;&lt;strong&gt;Google's Cloud Run&lt;/strong&gt;&lt;/a&gt; to let them scale on demand.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmoep0obqflw5dk2m73zy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmoep0obqflw5dk2m73zy.png" alt="Architecture" width="800" height="696"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Part 1: Agentic Design Patterns
&lt;/h2&gt;

&lt;p&gt;We aren't just writing prompts anymore; we are doing &lt;strong&gt;System Engineering&lt;/strong&gt;. To build a robust system, we leverage three key design patterns:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. The Orchestrator Pattern
&lt;/h3&gt;

&lt;p&gt;Instead of a "God Agent" that decides everything, we have a central &lt;strong&gt;Orchestrator&lt;/strong&gt;. Think of it as the Editor-in-Chief. It doesn't write the articles; it assigns stories to reporters. It manages the state, handles errors, and ensures the final product meets the deadline.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Parallelization
&lt;/h3&gt;

&lt;p&gt;This is our speed hack. Most agent frameworks run sequentially (Step A -&amp;gt; Step B -&amp;gt; Step C). But "Reading Arxiv Papers" and "Searching for Images" are independent tasks. By running them in parallel, we reduce the total latency to the duration of the slowest task, not the sum of all tasks.&lt;/p&gt;
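&lt;p&gt;A minimal sketch of this effect with plain &lt;code&gt;asyncio&lt;/code&gt; (not ADK): three independent stand-in tasks run concurrently, and the total wall time is roughly that of the slowest task, not the sum.&lt;/p&gt;

```python
# Sketch with plain asyncio of why parallel fan-out costs only as much
# as the slowest task. The sleeps stand in for slow agent calls.
import asyncio
import time

async def task(name, seconds):
    await asyncio.sleep(seconds)  # stand-in for a slow agent call
    return name

async def main():
    start = time.perf_counter()
    results = await asyncio.gather(
        task("researcher", 0.3),
        task("academic_scholar", 0.5),  # the slowest task
        task("asset_gatherer", 0.2),
    )
    elapsed = time.perf_counter() - start
    print(results, f"~{elapsed:.1f}s")  # ~0.5s total, not 1.0s
    return results, elapsed

if __name__ == "__main__":
    asyncio.run(main())
```

Run sequentially, the three sleeps would take 1.0s; gathered, the wall time collapses to roughly the 0.5s of the slowest call.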

&lt;h3&gt;
  
  
  3. The Evaluator-Optimizer
&lt;/h3&gt;

&lt;p&gt;We don't trust the first draft. Our system includes a &lt;strong&gt;Judge&lt;/strong&gt; agent. The Orchestrator sends the research to the Judge, who returns a strict Pass/Fail grade with feedback. If it fails, the Orchestrator loops back (Optimizer) to fix the gaps.&lt;/p&gt;
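&lt;p&gt;The loop can be sketched like this; &lt;code&gt;judge&lt;/code&gt; and &lt;code&gt;revise&lt;/code&gt; are hypothetical stand-ins for the LLM calls in the real system.&lt;/p&gt;

```python
# Minimal sketch of the Evaluator-Optimizer loop: the Orchestrator keeps
# sending the draft to a Judge until it passes or a retry budget runs out.

def judge(draft):
    # Stand-in grader: Pass/Fail plus feedback; here, fail any draft
    # that is missing a citation marker.
    if "[source]" in draft:
        return True, "ok"
    return False, "add a citation"

def revise(draft, feedback):
    # Stand-in for the model rewriting the draft using the feedback.
    return draft + " [source]"

def research_with_review(draft, max_rounds=3):
    for _ in range(max_rounds):
        ok, feedback = judge(draft)
        if ok:
            return draft
        draft = revise(draft, feedback)
    return draft  # give up after max_rounds and return the best attempt

print(research_with_review("RNNs were introduced in the 1980s."))
```

The retry budget matters: without `max_rounds`, a Judge that can never be satisfied would loop the Orchestrator forever.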




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4kdfh5qb8m8fq8szqz1h.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4kdfh5qb8m8fq8szqz1h.jpeg" alt="Sequential Processing" width="800" height="812"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Part 2: The Need for Speed (Parallel Execution)
&lt;/h2&gt;

&lt;p&gt;The biggest bottleneck in AI agents is latency. Waiting for a model to "think" and browse the web takes time.&lt;/p&gt;

&lt;p&gt;With ADK, we implement a &lt;code&gt;ParallelAgent&lt;/code&gt;. This isn't just a concept; it's a primitive in the framework that handles the async complexity for us. Its sub-agents run concurrently, and the Orchestrator waits for all of them to finish before moving on. This is a simple way to parallelize agents that don't depend on each other and cut overall latency.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# orchestrator/app/agent.py
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;google.adk.agents&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ParallelAgent&lt;/span&gt;

&lt;span class="c1"&gt;# The "Squad" runs together
&lt;/span&gt;&lt;span class="n"&gt;research_squad&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ParallelAgent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;research_squad&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Runs the researcher, academic scholar, and asset gatherer in parallel.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;sub_agents&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;researcher&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;academic_scholar&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;asset_gatherer&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This one change cut our total processing time by &lt;strong&gt;60%&lt;/strong&gt;. While the Scholar is reading a dense PDF, the Asset Gatherer is already validating image URLs.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Favhhaz1eamg01ts2jgtw.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Favhhaz1eamg01ts2jgtw.jpeg" alt="A2A Handshake" width="800" height="812"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Part 3: The Universal Language (A2A Protocol)
&lt;/h2&gt;

&lt;p&gt;How do these agents talk? They are separate microservices. The Researcher might be on a high-memory instance, while the Orchestrator is on a tiny one.&lt;/p&gt;

&lt;p&gt;We use the &lt;strong&gt;Agent-to-Agent (A2A) Protocol&lt;/strong&gt;. It’s like a standardized API for AI agents, built on top of JSON-RPC.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why A2A?
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Decoupling&lt;/strong&gt;: The Orchestrator doesn't need to know &lt;em&gt;how&lt;/em&gt; the Researcher works, just &lt;em&gt;where&lt;/em&gt; it is.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Interoperability&lt;/strong&gt;: You could write the Researcher in Python and the Judge in Go. As long as they speak A2A, they can collaborate.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Service Discovery&lt;/strong&gt;: In development, we map agents to &lt;code&gt;localhost&lt;/code&gt; ports. In production, we map them to Cloud Run URLs.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# orchestrator/app/agent.py
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;google.adk.agents.remote_a2a_agent&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;RemoteA2aAgent&lt;/span&gt;

&lt;span class="c1"&gt;# The Orchestrator calls the remote Scholar service
&lt;/span&gt;&lt;span class="n"&gt;academic_scholar&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;RemoteA2aAgent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;academic_scholar&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="c1"&gt;# In prod, this is an internal Cloud Run URL
&lt;/span&gt;    &lt;span class="n"&gt;agent_card&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http://scholar-service:8000/.well-known/agent.json&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Searches for academic papers.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
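&lt;p&gt;One way to sketch the dev/prod switch from point 3: resolve each agent card URL from an environment variable, falling back to &lt;code&gt;localhost&lt;/code&gt; for local runs. The variable name and port here are illustrative assumptions, not the repo's actual configuration.&lt;/p&gt;

```python
# Illustrative service-discovery helper: pick the agent card base URL
# from the environment (e.g. a Cloud Run URL in prod), defaulting to
# localhost for local development.
import os

def agent_card_url(service, default_port):
    base = os.environ.get(f"{service.upper()}_URL",
                          f"http://localhost:{default_port}")
    return f"{base}/.well-known/agent.json"

print(agent_card_url("scholar", 8001))
# locally  : http://localhost:8001/.well-known/agent.json
# in prod  : set SCHOLAR_URL to the deployed Cloud Run service URL
```

The Orchestrator code stays identical across environments; only the environment variables change at deploy time.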






&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsfbbymggjjdyk3pn8h2c.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsfbbymggjjdyk3pn8h2c.jpeg" alt="Scaling Graph" width="800" height="812"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Part 4: Infrastructure as a Superpower (Cloud Run)
&lt;/h2&gt;

&lt;p&gt;We deploy this system on &lt;strong&gt;Google Cloud Run&lt;/strong&gt;. This gives us the "Grocery Store" scaling model.&lt;/p&gt;

&lt;h3&gt;
  
  
  The "Grocery Store" Model
&lt;/h3&gt;

&lt;p&gt;Imagine a grocery store with one checkout lane. If 50 people show up, the line goes out the door.&lt;br&gt;
In our system, each agent is a checkout lane.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Monolith&lt;/strong&gt;: One lane. 50 requests = 50x wait time.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Microservices on Cloud Run&lt;/strong&gt;: 50 requests = Cloud Run automatically spins up &lt;strong&gt;50 instances&lt;/strong&gt; of the Researcher. Everyone gets checked out at once.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Scale to Zero
&lt;/h3&gt;

&lt;p&gt;When no one is using the app, we have &lt;strong&gt;0 instances&lt;/strong&gt; running. We pay &lt;strong&gt;$0&lt;/strong&gt;. This is crucial for cost-effective AI applications. Note that a Cloud Run service with no traffic automatically scales to zero, so the next request incurs a cold start. You can keep a service warm with a periodic health-check ping or a minimum instance count.&lt;/p&gt;




&lt;h2&gt;
  
  
  Part 5: The Frontend (Next.js + Real-Time)
&lt;/h2&gt;

&lt;p&gt;We didn't want a CLI tool. We wanted a product.&lt;/p&gt;

&lt;p&gt;We built a &lt;strong&gt;Next.js&lt;/strong&gt; frontend that connects to the Orchestrator. Because we know the architecture, we can visualize it. When the &lt;code&gt;research_squad&lt;/code&gt; starts, our frontend shows three pulsing indicators side-by-side. You actually &lt;em&gt;see&lt;/em&gt; the parallelism happening.&lt;/p&gt;

&lt;p&gt;It creates a sense of "liveness" and transparency that builds user trust.&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;By breaking our monolith into a &lt;strong&gt;Parallel Research Squad&lt;/strong&gt;, we built a system that is:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Faster&lt;/strong&gt;: Parallel execution cuts wait times by &amp;gt;50%.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Better&lt;/strong&gt;: Specialized agents (Scholar, Gatherer) do deeper work than one generalist.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Scalable&lt;/strong&gt;: Microservices on Cloud Run scale out automatically with load.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Want to build this yourself?&lt;/strong&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/amitkmaraj/deep-research-agentic-architecture" rel="noopener noreferrer"&gt;&lt;strong&gt;Deep Research Agent code&lt;/strong&gt;&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/google/adk" rel="noopener noreferrer"&gt;&lt;strong&gt;Google ADK documentation&lt;/strong&gt;&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://google.github.io/adk-docs/a2a/" rel="noopener noreferrer"&gt;&lt;strong&gt;Agent-to-Agent (A2A) Protocol&lt;/strong&gt;&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://cloud.google.com/run?utm_campaign=CDR_0x5f9e213a_default_b472372936&amp;amp;utm_medium=external&amp;amp;utm_source=blog" rel="noopener noreferrer"&gt;&lt;strong&gt;Google's Cloud Run&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>adk</category>
      <category>cloud</category>
    </item>
  </channel>
</rss>
