<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Victor Iglesias</title>
    <description>The latest articles on Forem by Victor Iglesias (@theorchestrator).</description>
    <link>https://forem.com/theorchestrator</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3776876%2F56448011-4188-419a-998c-fc30461642ef.png</url>
      <title>Forem: Victor Iglesias</title>
      <link>https://forem.com/theorchestrator</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/theorchestrator"/>
    <language>en</language>
    <item>
      <title>The Orchestrator — Issue #002</title>
      <dc:creator>Victor Iglesias</dc:creator>
      <pubDate>Tue, 17 Feb 2026 14:01:52 +0000</pubDate>
      <link>https://forem.com/theorchestrator/the-orchestrator-issue-002-5dmj</link>
      <guid>https://forem.com/theorchestrator/the-orchestrator-issue-002-5dmj</guid>
      <description>&lt;h1&gt;The Orchestrator — Issue #002&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;February 17, 2026&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;The Signal&lt;/h2&gt;

&lt;h3&gt;Google Ships WebMCP — and the Web Just Got an API for Agents&lt;/h3&gt;

&lt;p&gt;The biggest shift in how AI agents interact with the web dropped last week, and it wasn't a new model. It was plumbing.&lt;/p&gt;

&lt;p&gt;Google unveiled &lt;a href="https://developer.chrome.com/blog/webmcp-epp" rel="noopener noreferrer"&gt;WebMCP&lt;/a&gt; (Web Model Context Protocol), a proposed standard that lets websites expose structured tools directly to AI agents running in Chrome. Co-authored by engineers at Google and Microsoft, and building on Anthropic's existing Model Context Protocol (MCP), it gives websites a first-class way to declare what actions they support — search products, book flights, submit forms — instead of forcing agents to scrape and click their way through UIs like caffeinated interns.&lt;/p&gt;

&lt;p&gt;The protocol ships in Chrome's early preview with two complementary APIs: a JavaScript imperative API for registering dynamic tools, and a declarative HTML approach for simpler cases. If you've built MCP tools for Claude or ChatGPT, you're 90% of the way to WebMCP compatibility — the input schema uses the same JSON Schema v7 format.&lt;/p&gt;

&lt;p&gt;Why this matters more than another model release: right now, every agent framework has its own brittle approach to web interaction. Puppeteer scripts break when a site updates a CSS class. WebMCP replaces that entire fragile layer with a standard interface. The W3C is already reviewing it for formal standardization.&lt;/p&gt;

&lt;p&gt;I've been saying the real bottleneck for agents isn't intelligence — it's integration. WebMCP is the clearest signal yet that the infrastructure layer is catching up. If you build anything that touches the web, pay attention to this one.&lt;/p&gt;

&lt;p&gt;(&lt;a href="https://developer.chrome.com/blog/webmcp-epp" rel="noopener noreferrer"&gt;Google blog post&lt;/a&gt; | &lt;a href="https://www.adweek.com/media/google-takes-a-step-toward-an-internet-built-for-ai-agents/" rel="noopener noreferrer"&gt;Adweek coverage&lt;/a&gt;)&lt;/p&gt;




&lt;h2&gt;Agent Drops&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Apple Xcode 26.3&lt;/strong&gt; — Apple opened Xcode to agentic coding. Claude Agent and OpenAI Codex can now operate autonomously inside Xcode: searching docs, modifying project settings, capturing Previews, and iterating through builds. Not a copilot sidebar — full agent access to the IDE. (&lt;a href="https://www.apple.com/newsroom/2026/02/xcode-26-point-3-unlocks-the-power-of-agentic-coding/" rel="noopener noreferrer"&gt;Apple Newsroom&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cline CLI 2.0&lt;/strong&gt; — Cline rebuilt its coding agent for the terminal from the ground up. Parallel agent execution across tmux panes, native CI/CD automation, piped I/O, and free trial models via Kimi K2.5 and MiniMax M2.5. Open source under Apache 2.0. If you've been managing agents in VS Code and hitting walls, this is worth a look. (&lt;a href="https://cline.ghost.io/introducing-cline-cli-2-0/" rel="noopener noreferrer"&gt;Cline blog&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Coinbase Agentic Wallets&lt;/strong&gt; — Coinbase launched wallet infrastructure for AI agents on Feb 11. Agents can now spend, earn, and trade crypto autonomously using the AgentKit framework. Their x402 payment protocol has already processed over 50 million transactions. Agents with wallets — the future nobody asked for but everyone's building. (&lt;a href="https://coinpaprika.com/news/coinbase-agentic-wallets-ai-agents-february-2026-launch/" rel="noopener noreferrer"&gt;Coinpaprika&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PicoClaw&lt;/strong&gt; — An OpenClaw-inspired AI assistant rebuilt in Go that runs on $10 hardware with under 10MB of RAM. About 95% agent-generated code. Boots in under a second on a single low-power core. Interesting proof of concept for edge agents on truly constrained devices. (&lt;a href="https://ledgerlife.io/picoclaw-brings-personal-ai-agents-to-10-devices-with-under-10mb-of-memory/" rel="noopener noreferrer"&gt;LedgerLife&lt;/a&gt;)&lt;/p&gt;




&lt;h2&gt;Build This&lt;/h2&gt;

&lt;h3&gt;Weekend Project: Add WebMCP Tools to Your Side Project&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Complexity:&lt;/strong&gt; Beginner-Intermediate | &lt;strong&gt;Stack:&lt;/strong&gt; Any web app + JavaScript | &lt;strong&gt;Cost:&lt;/strong&gt; Free&lt;/p&gt;

&lt;p&gt;If you have any web app — even a static site with a search feature — you can start exposing WebMCP tools today using Chrome's early preview.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Enable the flag&lt;/strong&gt; in Chrome 146+ (check &lt;code&gt;chrome://flags&lt;/code&gt; for WebMCP)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Register a tool&lt;/strong&gt; using the imperative JavaScript API:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nb"&gt;navigator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;modelContext&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;registerTool&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;search_posts&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Search blog posts by keyword&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;inputSchema&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;object&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;properties&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;query&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;string&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="na"&gt;required&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;query&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;query&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;yourSearchFunction&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;query&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;text&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;results&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;}]&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;
&lt;strong&gt;Test it&lt;/strong&gt; with any MCP-compatible agent running in Chrome&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The schema format is identical to OpenAI function calling and MCP server tools. If you've written either, this is a 20-minute port. The interesting part: once a few major sites adopt this, agents won't need screen scraping anymore. Getting in early means understanding the pattern before it's everywhere.&lt;/p&gt;
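&lt;p&gt;To see how small the port really is, here's the same &lt;code&gt;search_posts&lt;/code&gt; schema wrapped in the other two envelopes. Treat this as an illustrative sketch: the input schema mirrors the &lt;code&gt;registerTool&lt;/code&gt; snippet above, and the MCP and OpenAI shapes follow those platforms' standard tool-definition formats.&lt;/p&gt;

```python
# One JSON Schema, three wrappers. The schema itself is what you already
# wrote for WebMCP's registerTool() above.
input_schema = {
    "type": "object",
    "properties": {"query": {"type": "string"}},
    "required": ["query"],
}

# MCP server tool definition (the shape a server returns from tools/list)
mcp_tool = {
    "name": "search_posts",
    "description": "Search blog posts by keyword",
    "inputSchema": input_schema,
}

# OpenAI Chat Completions function-calling tool definition
openai_tool = {
    "type": "function",
    "function": {
        "name": "search_posts",
        "description": "Search blog posts by keyword",
        "parameters": input_schema,
    },
}
```

&lt;p&gt;Only the wrapper changes; the JSON Schema in the middle is identical across all three, which is why porting existing tools is a 20-minute job, not a rewrite.&lt;/p&gt;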




&lt;h2&gt;One Link&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://cline.ghost.io/introducing-cline-cli-2-0/" rel="noopener noreferrer"&gt;Cline CLI 2.0 blog post: "From Sidebar to Terminal"&lt;/a&gt;&lt;/strong&gt; — The best articulation I've read of why coding agents are moving from IDE sidebars to terminals. The key insight: when you're orchestrating systems rather than editing files, you need an interface built for orchestration. Long-running processes, parallel sessions, piped I/O — these are terminal primitives, not IDE ones. Worth reading even if you never use Cline.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;The Orchestrator is a weekly newsletter about AI agents — frameworks, deployments, and what's actually shipping vs. what's just a demo. Written by Victor Iglesias (&lt;a href="https://x.com/peakydevs" rel="noopener noreferrer"&gt;@peakydevs&lt;/a&gt;).&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>automation</category>
      <category>opensource</category>
    </item>
    <item>
      <title>The Orchestrator — Issue #0: The Browser Wars Have Gone Agentic</title>
      <dc:creator>Victor Iglesias</dc:creator>
      <pubDate>Tue, 17 Feb 2026 05:54:37 +0000</pubDate>
      <link>https://forem.com/theorchestrator/the-orchestrator-issue-0-the-browser-wars-have-gone-agentic-1k6i</link>
      <guid>https://forem.com/theorchestrator/the-orchestrator-issue-0-the-browser-wars-have-gone-agentic-1k6i</guid>
      <description>&lt;h1&gt;The Orchestrator — Issue #0 (Pilot)&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;February 17, 2025&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;The Signal&lt;/h2&gt;

&lt;h3&gt;The Browser Wars Have Gone Agentic&lt;/h3&gt;

&lt;p&gt;Every major AI lab is now shipping agents that can use a computer. And I mean &lt;em&gt;actually&lt;/em&gt; use it — clicking buttons, filling forms, navigating websites. OpenAI's &lt;a href="https://openai.com/index/introducing-operator/" rel="noopener noreferrer"&gt;Operator&lt;/a&gt;, powered by their Computer-Using Agent (CUA) model, launched to Pro subscribers last month and has been the talk of the agent community ever since. It combines GPT-4o's vision with reinforcement learning to interpret screenshots and interact with GUIs like a human would.&lt;/p&gt;

&lt;p&gt;But here's what made this week interesting: the open-source response arrived fast and loud. &lt;a href="https://github.com/browser-use/browser-use" rel="noopener noreferrer"&gt;Browser Use&lt;/a&gt;, an open-source alternative, has been blowing up across Reddit, YouTube, and dev Twitter. It does roughly what Operator does — autonomous browser control — but you self-host it, use any LLM you want, and pay nothing. The pitch is simple: why pay $200/month for ChatGPT Pro when you can run browser automation locally?&lt;/p&gt;

&lt;p&gt;I've been testing both. Operator is polished but constrained — it runs in OpenAI's sandbox, not your actual browser. Browser Use is rougher but far more flexible. For developers building agent workflows, Browser Use is the more interesting primitive. For end users who just want "book me a flight," Operator wins on UX.&lt;/p&gt;

&lt;p&gt;The bigger picture: we're watching the "agent interface layer" get commoditized in real time. Anthropic has &lt;a href="https://docs.anthropic.com/en/docs/build-with-claude/computer-use" rel="noopener noreferrer"&gt;Computer Use&lt;/a&gt; in beta. Google is reportedly working on its own. Within six months, every frontier model will ship with browser control as a standard capability. The question isn't &lt;em&gt;if&lt;/em&gt; agents will use our computers — it's who controls the session.&lt;/p&gt;




&lt;h2&gt;Agent Drops&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://techcrunch.com/2025/02/13/anthropics-next-major-ai-model-could-arrive-within-weeks/" rel="noopener noreferrer"&gt;Anthropic's Next Model Incoming&lt;/a&gt;&lt;/strong&gt; — Anthropic is reportedly weeks away from releasing a new Claude model that combines standard language capabilities with deep reasoning, featuring a "sliding scale" for cost control. Early reports say it beats o3-mini-high on some coding benchmarks. This could be the hybrid reasoning model we've been waiting for.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.axios.com/2025/02/10/anthropic-economic-index-ai-use-data" rel="noopener noreferrer"&gt;Anthropic Economic Index&lt;/a&gt;&lt;/strong&gt; — Anthropic analyzed 4 million+ Claude conversations to map how AI is actually being used in the economy. The headline finding: most usage is "augmentation" not "automation" — people working &lt;em&gt;with&lt;/em&gt; AI, not replacing themselves. Software development dominates. &lt;a href="https://arxiv.org/abs/2503.04761" rel="noopener noreferrer"&gt;Full paper on arXiv&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://fortune.com/2025/02/14/sam-altman-openai-plans-gpt-5-release-timelines/" rel="noopener noreferrer"&gt;OpenAI's GPT-4.5 Roadmap&lt;/a&gt;&lt;/strong&gt; — Sam Altman laid out the path forward: GPT-4.5 (codename "Orion") ships in weeks as the final non-chain-of-thought model. After that, o-series and GPT-series merge into one unified model that "knows when to think." The model picker is dying. Good riddance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://siliconangle.com/2025/02/14/legal-ai-startup-eudia-launches-105m-bring-ai-agents-legal-teams/" rel="noopener noreferrer"&gt;Eudia Launches with $105M&lt;/a&gt;&lt;/strong&gt; — Legal AI startup Eudia came out of stealth with $105M to build AI agents for corporate legal teams. That's a massive seed-stage raise, and it signals that vertical agent plays (agents built for one specific domain) are where the money is flowing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.reuters.com/technology/artificial-intelligence/robotics-startup-figure-ai-talks-new-funding-395-billion-valuation-bloomberg-2025-02-14/" rel="noopener noreferrer"&gt;Figure AI at $39.5B&lt;/a&gt;&lt;/strong&gt; — The humanoid robotics company is in talks to raise $1.5B at a $39.5B valuation. Not purely an "agent" company, but their robots run on agent architectures. The embodied agent space is getting absurdly well-funded.&lt;/p&gt;




&lt;h2&gt;Build This&lt;/h2&gt;

&lt;h3&gt;A Personal Research Agent with Browser Use&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Stack:&lt;/strong&gt; Browser Use + Claude 3.5 Sonnet (or any LLM via API) + Python&lt;br&gt;
&lt;strong&gt;Complexity:&lt;/strong&gt; Intermediate&lt;br&gt;
&lt;strong&gt;Cost:&lt;/strong&gt; ~$0.10/run (API costs only)&lt;/p&gt;

&lt;p&gt;The idea: build an agent that takes a research question, opens a browser, searches across multiple sources, extracts key findings, and returns a structured summary. Think OpenAI's Deep Research, but yours.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install &lt;a href="https://github.com/browser-use/browser-use" rel="noopener noreferrer"&gt;Browser Use&lt;/a&gt; (&lt;code&gt;pip install browser-use&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Configure it with your preferred LLM (Claude, GPT-4o, or even a local model via Ollama)&lt;/li&gt;
&lt;li&gt;Write a task prompt: "Research [topic]. Visit at least 3 sources. Extract key claims with URLs. Summarize in 500 words."&lt;/li&gt;
&lt;li&gt;Add a simple output parser that structures the results into markdown&lt;/li&gt;
&lt;/ol&gt;
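
&lt;p&gt;Steps 2 through 4 can be sketched in a few lines. This is a rough outline, not a verified recipe: the &lt;code&gt;Agent(task=..., llm=...)&lt;/code&gt; constructor and &lt;code&gt;agent.run()&lt;/code&gt; call follow Browser Use's documented quickstart pattern, but &lt;code&gt;build_research_task&lt;/code&gt;, the model name, and the &lt;code&gt;final_result()&lt;/code&gt; call are my own fill-ins; check them against the current README.&lt;/p&gt;

```python
def build_research_task(topic: str, min_sources: int = 3, word_limit: int = 500) -> str:
    """Step 3: the task prompt the agent decomposes on its own."""
    return (
        f"Research {topic}. Visit at least {min_sources} sources. "
        f"Extract key claims with URLs. Summarize in {word_limit} words as markdown."
    )


async def run_research(topic: str) -> str:
    """Steps 2 and 4: wire the task to an LLM and collect structured output."""
    # Third-party imports kept local so the prompt builder above stays
    # importable (and testable) without browser-use installed.
    from browser_use import Agent                   # pip install browser-use
    from langchain_anthropic import ChatAnthropic   # pip install langchain-anthropic

    agent = Agent(
        task=build_research_task(topic),
        llm=ChatAnthropic(model="claude-3-5-sonnet-latest"),  # or GPT-4o, or Ollama
    )
    history = await agent.run()
    # Assumption: final_result() returns the agent's closing text output.
    return history.final_result()
```

&lt;p&gt;Kick it off with &lt;code&gt;asyncio.run(run_research("your topic"))&lt;/code&gt;. The markdown structure is requested in the prompt itself, which is usually enough; add a real output parser once you see what the model actually returns.&lt;/p&gt;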

&lt;p&gt;The magic is in the task decomposition — Browser Use handles the clicking and scrolling, your LLM handles the reasoning. Chain them together and you've got a research agent that costs pennies per query instead of $200/month.&lt;/p&gt;

&lt;p&gt;Start simple. Get it working on one search engine. Then add multi-source routing. Then add fact-checking between sources. That's how you build agents that actually work — incrementally, not all at once.&lt;/p&gt;




&lt;h2&gt;One Link&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://arxiv.org/abs/2503.04761" rel="noopener noreferrer"&gt;Which Economic Tasks are Performed with AI?&lt;/a&gt;&lt;/strong&gt; — Anthropic's full paper behind the Economic Index. It's the most rigorous look at real-world AI usage I've seen. Not vibes, not surveys — actual conversation data from millions of users mapped to occupational tasks. If you care about where agents are headed, this paper tells you where they already are. The finding that 37% of occupations use AI for at least a quarter of their tasks is both exciting and sobering. Read it.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;The Orchestrator is a weekly newsletter about AI agents and autonomous AI. Written by Victor Iglesias.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>automation</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
