<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Glendel Joubert Fyne Acosta</title>
    <description>The latest articles on Forem by Glendel Joubert Fyne Acosta (@glendel).</description>
    <link>https://forem.com/glendel</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3918728%2F90f00067-0577-445e-a5de-ad13fbc2b695.png</url>
      <title>Forem: Glendel Joubert Fyne Acosta</title>
      <link>https://forem.com/glendel</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/glendel"/>
    <language>en</language>
    <item>
      <title>The AI FOMO Trap: Why your Multi-Agent System is brittle (and how to fix it)</title>
      <dc:creator>Glendel Joubert Fyne Acosta</dc:creator>
      <pubDate>Thu, 14 May 2026 00:45:30 +0000</pubDate>
      <link>https://forem.com/glendel/the-ai-fomo-trap-why-your-multi-agent-system-is-brittle-and-how-to-fix-it-20o7</link>
      <guid>https://forem.com/glendel/the-ai-fomo-trap-why-your-multi-agent-system-is-brittle-and-how-to-fix-it-20o7</guid>
      <description>&lt;p&gt;A developer on Reddit recently told me: "&lt;em&gt;Companies right now are risking the LLM-led parts of their architecture due to FOMO. We'll see how far they get&lt;/em&gt;".&lt;/p&gt;

&lt;p&gt;He is absolutely right. Fear Of Missing Out is driving engineering teams to ship "Autonomous Agents" at breakneck speed. But in the rush to production, we are abandoning 20 years of established software engineering principles.&lt;/p&gt;

&lt;p&gt;We are letting probabilistic models control deterministic runtimes.&lt;/p&gt;

&lt;p&gt;If you are routing network traffic, validating data schemas, or checking user permissions using an LLM prompt, you are not building a resilient system. You are building a fragile prompt-chain wrapped in hope. When it fails (and it will), it will be slow, expensive, and completely un-auditable. InfoSec won't accept "the model hallucinated the auth check" as a valid incident report.&lt;/p&gt;

&lt;h4&gt;&lt;strong&gt;The Cure: The Manager-Executor Pattern&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;To build enterprise-grade Multi-Agent Systems, we must separate the &lt;em&gt;Cognitive&lt;/em&gt; from the &lt;em&gt;Deterministic&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. The Manager (Probabilistic).&lt;/strong&gt; This is the LLM. Its only job is to reason, plan, and analyze context. It decides &lt;em&gt;what needs to be done&lt;/em&gt;. It does not execute code. It does not manage its own memory. It requests actions via strict JSON schemas.&lt;br&gt;
&lt;strong&gt;2. The Executor (Deterministic).&lt;/strong&gt; This is your runtime framework. It acts as the boundary. When the Manager requests an action, the Executor:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Verifies the agent's permissions.&lt;/li&gt;
&lt;li&gt;Validates the payload against a strict schema.&lt;/li&gt;
&lt;li&gt;Checks the token/cost budget.&lt;/li&gt;
&lt;li&gt;Executes the code (API call, DB write).&lt;/li&gt;
&lt;li&gt;Returns the exact result to the Manager.&lt;/li&gt;
&lt;/ul&gt;
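&lt;p&gt;The checklist above can be sketched as a single deterministic boundary function. This is a minimal, illustrative sketch: the &lt;code&gt;ACTION_SCHEMAS&lt;/code&gt; table and the &lt;code&gt;budget&lt;/code&gt; shape are hypothetical names, not the API of any specific framework.&lt;/p&gt;

```javascript
// Sketch of a deterministic Executor boundary.
// ACTION_SCHEMAS, budget, and the request shape are illustrative.
const ACTION_SCHEMAS = {
  send_email: { required: ["to", "subject", "body"] },
  db_write:   { required: ["table", "record"] },
};

function executeAction(agent, request, budget) {
  // 1. Verify the agent's permissions -- plain code, not a prompt.
  if (!agent.permissions.includes(request.action)) {
    throw new Error(`Unauthorized action: ${request.action}`);
  }
  // 2. Validate the payload against a strict schema.
  const schema = ACTION_SCHEMAS[request.action];
  if (!schema) {
    throw new Error(`Unknown action: ${request.action}`);
  }
  for (const field of schema.required) {
    if (!(field in request.payload)) {
      throw new Error(`Missing field: ${field}`);
    }
  }
  // 3. Check the token/cost budget before spending anything.
  if (budget.spentTokens + request.estimatedTokens > budget.maxTokens) {
    throw new Error("Budget exceeded");
  }
  // 4. Execute the real code (stubbed here) and
  // 5. return the exact result to the Manager.
  return { ok: true, action: request.action };
}
```

&lt;p&gt;Every rejection is a thrown error with an exact, auditable reason: something you can log, alert on, and hand to InfoSec, which no prompt-based check gives you.&lt;/p&gt;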

&lt;h4&gt;&lt;strong&gt;The Framework Controls the AI&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;The fundamental shift required in MAS architecture is understanding that &lt;strong&gt;the framework must control the LLM; the LLM must never control the framework&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Right now, developers have to build custom state machines and validation layers from scratch because popular frameworks default to LLM-based routing. It's time we standardized this. We need "&lt;strong&gt;A Real Framework&lt;/strong&gt;" for Multi-Agent Systems—a framework that enforces the Manager-Executor pattern by default.&lt;/p&gt;

&lt;p&gt;Stop relying on vibes-based engineering. Let's get back to rigorous software architecture.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>architecture</category>
      <category>softwareengineering</category>
      <category>programming</category>
    </item>
    <item>
      <title>The Token Waste Problem: Why your AI Agents shouldn't evaluate permissions</title>
      <dc:creator>Glendel Joubert Fyne Acosta</dc:creator>
      <pubDate>Sat, 09 May 2026 00:47:02 +0000</pubDate>
      <link>https://forem.com/glendel/the-token-waste-problem-why-your-ai-agents-shouldnt-evaluate-permissions-2a2c</link>
      <guid>https://forem.com/glendel/the-token-waste-problem-why-your-ai-agents-shouldnt-evaluate-permissions-2a2c</guid>
      <description>&lt;p&gt;We are burning millions of API tokens on problems that &lt;code&gt;if&lt;/code&gt; statements solved 20 years ago.&lt;/p&gt;

&lt;p&gt;I speak with developers building Multi-Agent Systems (MAS) every day, and I keep seeing the same massive architectural anti-pattern: &lt;strong&gt;Routing everything through the AI model.&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Need to check an agent's permissions? "Ask the LLM."&lt;/li&gt;
&lt;li&gt;  Need to route a message? "Ask the LLM."&lt;/li&gt;
&lt;li&gt;  Need to validate a data schema? "Ask the LLM."&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Language models are extraordinary reasoning engines. But they are also expensive, probabilistic, and relatively slow. If a problem has a deterministic, correct answer (like checking an access policy), it should be evaluated by runtime code, not guessed by a neural network.&lt;/p&gt;

&lt;h3&gt;The Anti-Pattern&lt;/h3&gt;

&lt;p&gt;Instead of doing this (Probabilistic):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// BAD: Asking the LLM to check permissions&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`You are an agent. The user wants to delete a file. 
Here are their permissions: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;permissions&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;. 
Should you allow it?`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;decision&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;generate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;The Solution&lt;/h3&gt;

&lt;p&gt;We need to get back to doing this (Deterministic):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// GOOD: Let code handle policy, let AI handle reasoning&lt;/span&gt;
&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;hasPermission&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;delete_file&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Unauthorized&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; 
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// Only call the LLM for actual cognitive tasks&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;plan&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;reasonAboutFile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;AI should decide &lt;em&gt;what&lt;/em&gt; to do. Deterministic code should execute it and enforce the boundaries.&lt;/p&gt;
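&lt;p&gt;One concrete way to hold that boundary is a whitelisted dispatch table: treat the model's output as untrusted data, look it up, and only then let deterministic code run it. A minimal sketch, assuming a parsed-JSON &lt;code&gt;plan&lt;/code&gt; shape and a &lt;code&gt;TOOLS&lt;/code&gt; table that are purely illustrative:&lt;/p&gt;

```javascript
// Sketch: the dispatch table, not the model, decides what can run.
// TOOLS, plan, and allowedTools are hypothetical names for illustration.
const TOOLS = {
  summarize_file: (args) => `summary of ${args.path}`,
  delete_file:    (args) => `deleted ${args.path}`,
};

function dispatch(user, plan) {
  // `plan` is the JSON the LLM produced, already parsed, e.g.
  // { tool: "delete_file", args: { path: "/tmp/x" } }
  const tool = TOOLS[plan.tool];
  if (!tool) {
    throw new Error(`Unknown tool: ${plan.tool}`);
  }
  if (!user.allowedTools.includes(plan.tool)) {
    throw new Error(`Unauthorized: ${plan.tool}`);
  }
  return tool(plan.args); // deterministic execution
}
```

&lt;p&gt;However creative the model's output, only actions that exist in the table and pass the permission check ever execute.&lt;/p&gt;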

&lt;p&gt;Are we forgetting basic software engineering principles just because AI is exciting? The MAS space doesn't need more wrappers; we need standardized frameworks that enforce these boundaries. Let's get back to building solid infrastructure.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>architecture</category>
      <category>softwaredevelopment</category>
      <category>webdev</category>
    </item>
  </channel>
</rss>
