<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Shunsuke Hayashi</title>
    <description>The latest articles on Forem by Shunsuke Hayashi (@shunsukehayashi).</description>
    <link>https://forem.com/shunsukehayashi</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3836103%2Fd77eb416-dada-4afd-a55d-82fa267d635e.png</url>
      <title>Forem: Shunsuke Hayashi</title>
      <link>https://forem.com/shunsukehayashi</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/shunsukehayashi"/>
    <language>en</language>
    <item>
      <title>Your AI Agent Skills Are Silently Breaking Right Now (And Nobody's Watching)</title>
      <dc:creator>Shunsuke Hayashi</dc:creator>
      <pubDate>Fri, 20 Mar 2026 21:07:16 +0000</pubDate>
      <link>https://forem.com/shunsukehayashi/your-ai-agent-skills-are-silently-breaking-right-now-and-nobodys-watching-174j</link>
      <guid>https://forem.com/shunsukehayashi/your-ai-agent-skills-are-silently-breaking-right-now-and-nobodys-watching-174j</guid>
      <description>&lt;p&gt;&lt;strong&gt;91% of ML models degrade over time. Only 5% of production agents have real monitoring. Here's how to fix it.&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;Your AI agent looked great on demo day.&lt;/p&gt;

&lt;p&gt;Three months later, it's quietly returning wrong answers. Not crashing. Not throwing errors. Just... subtly worse. Your API caller skill scored 0.95 in January. By March, it's 0.42. Nobody changed your code. Nobody noticed.&lt;/p&gt;

&lt;p&gt;This is the biggest unsolved problem in AI agent operations — and it's happening to everyone.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Data Is Terrifying
&lt;/h2&gt;

&lt;p&gt;Let's start with the numbers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;91% of ML models experience degradation over time&lt;/strong&gt; — MIT research across 32 datasets and 4 industries&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;67% of enterprises report measurable AI degradation within 12 months&lt;/strong&gt; — Gartner&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Only 5% of production AI agents have mature monitoring&lt;/strong&gt; — Cleanlab 2025 survey&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;90-95% of AI initiatives fail to reach sustained production value&lt;/strong&gt; — Industry reports&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But here's the number that should keep you up at night:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If an AI agent achieves 85% accuracy per action — which sounds great — a 10-step workflow only succeeds &lt;strong&gt;20% of the time&lt;/strong&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That's not a bug. That's math. And it gets worse every week your skills go unmonitored.&lt;/p&gt;
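&lt;p&gt;You can check the arithmetic yourself; a quick Python sketch:&lt;/p&gt;

```python
# Per-step success probabilities compound multiplicatively across a workflow:
# a 10-step chain succeeds only if every single step does.
def workflow_success(per_step_accuracy: float, steps: int) -> float:
    return per_step_accuracy ** steps

print(round(workflow_success(0.85, 10), 3))  # 0.197
```

At 0.85 per step, ten steps leave you under 20%; even 0.99 per step drops a 100-step workflow to roughly 37%.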

&lt;h2&gt;
  
  
  The Four Horsemen of Skill Degradation
&lt;/h2&gt;

&lt;p&gt;Agent skills don't fail catastrophically. They rot gradually. Here are the four ways it happens:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. API Changes
&lt;/h3&gt;

&lt;p&gt;External APIs change without warning. A field gets renamed. A response format shifts. Your agent was calling &lt;code&gt;v2/users&lt;/code&gt; but the provider silently redirected to &lt;code&gt;v3/users&lt;/code&gt; with a different schema.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Week 1: API returns { "name": "Alice" }           → skill score: 1.0
Week 2: API returns { "full_name": "Alice" }       → skill score: 0.6
Week 3: API returns { "user": { "name": "Alice" }} → skill score: 0.3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Nobody changed your code. The score just... dropped.&lt;/p&gt;
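&lt;p&gt;One line of defense is a parser that tolerates every schema variant you have seen. A minimal Python sketch over the three payload shapes above (the fallback order is an assumption, not part of any particular API):&lt;/p&gt;

```python
def extract_name(payload: dict):
    """Try known schema variants for the user's name, newest first."""
    # v3-style nested object: { "user": { "name": ... } }
    user = payload.get("user")
    if isinstance(user, dict) and "name" in user:
        return user["name"]
    # renamed flat field: { "full_name": ... }
    if "full_name" in payload:
        return payload["full_name"]
    # original flat field: { "name": ... }
    return payload.get("name")

for payload in ({"name": "Alice"},
                {"full_name": "Alice"},
                {"user": {"name": "Alice"}}):
    assert extract_name(payload) == "Alice"
```

Tolerant parsing buys you time; it doesn't replace monitoring, because the next variant you haven't seen still lands silently.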

&lt;h3&gt;
  
  
  2. Model Updates
&lt;/h3&gt;

&lt;p&gt;Your LLM provider ships an update. The model is "better" overall, but your carefully tuned prompts now produce slightly different outputs. Your parsing logic breaks on 15% of responses. No error. Just worse results.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Auth Expiry
&lt;/h3&gt;

&lt;p&gt;OAuth tokens expire. API keys get rotated. Service accounts lose permissions. Your skill still runs — it just gets 401s that it handles as "empty results" instead of errors.&lt;/p&gt;
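&lt;p&gt;The fix is to make auth failures loud. A Python sketch (the &lt;code&gt;SkillAuthError&lt;/code&gt; name and the status-code plumbing are illustrative, not part of any framework):&lt;/p&gt;

```python
class SkillAuthError(Exception):
    """Raised so an expired credential surfaces as a failure, not as []."""

def parse_results(status_code: int, body: list) -> list:
    # An auth failure must never masquerade as "no results".
    if status_code in (401, 403):
        raise SkillAuthError(f"auth rejected with HTTP {status_code}")
    return body

assert parse_results(200, [{"id": 1}]) == [{"id": 1}]
try:
    parse_results(401, [])
except SkillAuthError:
    pass  # the degradation is now visible instead of silent
```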

&lt;h3&gt;
  
  
  4. Prompt Drift
&lt;/h3&gt;

&lt;p&gt;Over time, small modifications accumulate. Someone adds "be more concise." Someone else adds "include all details." The prompt contradicts itself. The skill still works, but its quality oscillates unpredictably.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Traditional Monitoring Fails
&lt;/h2&gt;

&lt;p&gt;Here's the gap: most agent frameworks handle &lt;strong&gt;execution&lt;/strong&gt;. None handle &lt;strong&gt;operational health&lt;/strong&gt;.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Framework&lt;/th&gt;
&lt;th&gt;Executes tasks&lt;/th&gt;
&lt;th&gt;Monitors skill quality over time&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;LangGraph&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CrewAI&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AutoGen&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Mastra&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;VoltAgent&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Your APM tool tells you the agent ran. It doesn't tell you the agent's output quality dropped 40% compared to last week.&lt;/p&gt;

&lt;p&gt;Your error tracker catches crashes. It doesn't catch the skill that returns "technically valid but wrong" results.&lt;/p&gt;

&lt;p&gt;Your observability stack monitors latency and error rates. It's completely blind to &lt;strong&gt;semantic degradation&lt;/strong&gt; — the silent drift from 95% quality to 60% quality with zero errors.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Industry Is Waking Up
&lt;/h2&gt;

&lt;p&gt;2026 is the year the industry acknowledged this problem:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;OpenAI&lt;/strong&gt; shipped &lt;a href="https://developers.openai.com/blog/eval-skills" rel="noopener noreferrer"&gt;Skill Eval&lt;/a&gt; — unit tests for agent skills.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Google's Minko Gechev&lt;/strong&gt; published &lt;a href="https://blog.mgechev.com/2026/02/26/skill-eval/" rel="noopener noreferrer"&gt;Skill Eval&lt;/a&gt;, arguing that skills deserve the same rigor as code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Spring AI&lt;/strong&gt; &lt;a href="https://spring.io/blog/2026/01/13/spring-ai-generic-agent-skills/" rel="noopener noreferrer"&gt;acknowledged&lt;/a&gt; there's no built-in versioning system for skills — if you update a skill, all applications immediately use the new version with no rollback path.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CNBC&lt;/strong&gt; ran a feature on &lt;a href="https://www.cnbc.com/2026/03/01/ai-artificial-intelligence-economy-business-risks.html" rel="noopener noreferrer"&gt;"Silent failure at scale"&lt;/a&gt; — calling it the AI risk nobody sees coming.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;EU AI Act&lt;/strong&gt; mandates continuous monitoring for high-risk AI systems by August 2, 2026. If your agents make decisions that affect people, monitoring isn't optional anymore.&lt;/p&gt;

&lt;p&gt;The consensus is clear: &lt;strong&gt;agent skills need the same operational rigor as production software.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;But detecting the problem is only half the battle.&lt;/p&gt;

&lt;h2&gt;
  
  
  Detection Alone Isn't Enough
&lt;/h2&gt;

&lt;p&gt;OpenAI's Skill Eval and Google's Skill Eval are valuable tools. They tell you &lt;strong&gt;that&lt;/strong&gt; a skill degraded.&lt;/p&gt;

&lt;p&gt;They don't tell you &lt;strong&gt;why&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;They don't tell you &lt;strong&gt;what to do about it&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;And they certainly don't &lt;strong&gt;fix it automatically&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;That's the gap we built &lt;a href="https://github.com/ShunsukeHayashi/agent-skill-bus" rel="noopener noreferrer"&gt;Agent Skill Bus&lt;/a&gt; to fill.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introducing the Self-Improving Loop
&lt;/h2&gt;

&lt;p&gt;Agent Skill Bus is a framework-agnostic runtime that adds three capabilities no existing framework provides:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;External Changes ──→ Knowledge Watcher ──→ Prompt Request Bus ──→ Execute
                                                ↑                    │
                                                │                    ↓
                                          Self-Improving ←── Skill Runs Log
                                             Skills
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Module 1: Self-Improving Skills
&lt;/h3&gt;

&lt;p&gt;A 7-step quality loop that runs continuously:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;OBSERVE → ANALYZE → DIAGNOSE → PROPOSE → EVALUATE → APPLY → RECORD
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every skill execution gets scored (0.0 to 1.0). The system watches for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Score drops&lt;/strong&gt; — Moving average falls below threshold&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Drift&lt;/strong&gt; — 15%+ score decrease week-over-week&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consecutive failures&lt;/strong&gt; — 3+ failures in a row triggers immediate alert&lt;/li&gt;
&lt;/ul&gt;
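&lt;p&gt;All three checks are easy to express over a score history. A Python sketch (the window sizes and the 0.5 failure cutoff are illustrative defaults, not necessarily what Agent Skill Bus ships with):&lt;/p&gt;

```python
def check_health(scores, threshold=0.7, drift_pct=0.15, fail_streak=3):
    """Return alert names for a chronological list of run scores (0.0 to 1.0)."""
    alerts = []
    recent = scores[-5:]                 # moving window over the last 5 runs
    if threshold > sum(recent) / len(recent):
        alerts.append("score_drop")
    if len(scores) >= 10:                # compare this window to the previous one
        prev = sum(scores[-10:-5]) / 5
        curr = sum(recent) / 5
        if prev > 0 and (prev - curr) / prev >= drift_pct:
            alerts.append("drift")
    streak = 0
    for s in reversed(scores):           # count trailing failed runs
        if 0.5 > s:
            streak += 1
        else:
            break
    if streak >= fail_streak:
        alerts.append("consecutive_failures")
    return alerts
```

A skill that slid from 0.95 to 0.4 trips all three alerts at once; a healthy one trips none.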

&lt;p&gt;When degradation is detected, an LLM reads the failing skill definition + error logs, diagnoses the root cause, and proposes a fix. Low-risk fixes (like updating an API endpoint URL) are applied automatically. High-risk fixes get routed to a human for approval.&lt;/p&gt;

&lt;p&gt;This is the key differentiator: &lt;strong&gt;not just detection, but diagnosis and repair&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Module 2: Knowledge Watcher
&lt;/h3&gt;

&lt;p&gt;Instead of waiting for skills to break, Knowledge Watcher proactively monitors for changes that &lt;strong&gt;will&lt;/strong&gt; break them:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Tier 1&lt;/strong&gt; (every 6 hours): Dependency versions, API endpoint health, config drift&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tier 2&lt;/strong&gt; (daily): GitHub issue patterns, user feedback, platform changelogs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tier 3&lt;/strong&gt; (weekly): Industry trends, competitor releases, best practice updates&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When a breaking change is detected upstream, the system generates a task to update affected skills &lt;strong&gt;before&lt;/strong&gt; they fail in production.&lt;/p&gt;

&lt;h3&gt;
  
  
  Module 3: Prompt Request Bus
&lt;/h3&gt;

&lt;p&gt;A DAG-based task queue that coordinates multi-agent workflows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tasks declare dependencies (&lt;code&gt;dependsOn&lt;/code&gt;) for automatic ordering&lt;/li&gt;
&lt;li&gt;File-level locking prevents two agents from editing the same resource&lt;/li&gt;
&lt;li&gt;Priority routing (&lt;code&gt;critical &amp;gt; high &amp;gt; medium &amp;gt; low&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Deduplication prevents the same task from being queued twice&lt;/li&gt;
&lt;/ul&gt;
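&lt;p&gt;The &lt;code&gt;dependsOn&lt;/code&gt; ordering is a topological sort over the task graph. A sketch using Python's standard-library &lt;code&gt;graphlib&lt;/code&gt; (the task names are made up):&lt;/p&gt;

```python
from graphlib import TopologicalSorter

# dependsOn relationships: each task maps to the set of tasks it waits for.
tasks = {
    "deploy-fix":  {"run-evals"},
    "run-evals":   {"patch-skill"},
    "patch-skill": {"diagnose"},
    "diagnose":    set(),
}
order = list(TopologicalSorter(tasks).static_order())
print(order)  # ['diagnose', 'patch-skill', 'run-evals', 'deploy-fix']
```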

&lt;h2&gt;
  
  
  Zero Dependencies. Just JSONL Files.
&lt;/h2&gt;

&lt;p&gt;This is the part that surprises people.&lt;/p&gt;

&lt;p&gt;Agent Skill Bus doesn't use a database. It doesn't need Redis or RabbitMQ. It doesn't lock you into a specific framework.&lt;/p&gt;

&lt;p&gt;Everything is stored in plain JSONL files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;.agent-skill-bus/
├── skill-runs.jsonl         &lt;span class="c"&gt;# Execution history&lt;/span&gt;
├── queue.jsonl              &lt;span class="c"&gt;# Task queue&lt;/span&gt;
├── knowledge-diffs.jsonl    &lt;span class="c"&gt;# Detected changes&lt;/span&gt;
└── active-locks.jsonl       &lt;span class="c"&gt;# File locks&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Any language can read these files. Any CI pipeline can process them. Any framework can integrate by simply appending a line of JSON.&lt;/p&gt;
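&lt;p&gt;For example, a custom agent could log a run without the CLI by appending one JSON line (the field names mirror the &lt;code&gt;record-run&lt;/code&gt; flags; treat the exact schema, including the timestamp key, as an assumption):&lt;/p&gt;

```python
import json
import os
import time

os.makedirs(".agent-skill-bus", exist_ok=True)
run = {
    "agent": "my-agent",
    "skill": "api-caller",
    "task": "fetch user data",
    "result": "success",
    "score": 0.95,
    "ts": int(time.time()),
}
# JSONL: one self-contained JSON object per line, append-only.
with open(".agent-skill-bus/skill-runs.jsonl", "a") as f:
    f.write(json.dumps(run) + "\n")
```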

&lt;p&gt;This means you can add Agent Skill Bus to your existing setup in 30 seconds:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx agent-skill-bus init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And start recording skill executions immediately:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx agent-skill-bus record-run &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--agent&lt;/span&gt; my-agent &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--skill&lt;/span&gt; api-caller &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--task&lt;/span&gt; &lt;span class="s2"&gt;"fetch user data"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--result&lt;/span&gt; success &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--score&lt;/span&gt; 0.95
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Works With Everything
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Framework&lt;/th&gt;
&lt;th&gt;How to integrate&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Claude Code&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Drop a SKILL.md into &lt;code&gt;.claude/skills/&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Codex&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Drop into &lt;code&gt;.codex/skills/&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;LangGraph&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Call &lt;code&gt;record-run&lt;/code&gt; in your tool functions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;CrewAI&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Add a task completion callback&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Custom&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Append to JSONL files directly&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;For Claude Code users, just add one line to your &lt;code&gt;AGENTS.md&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;After completing any task, log the result:
npx agent-skill-bus record-run &lt;span class="nt"&gt;--agent&lt;/span&gt; claude &lt;span class="nt"&gt;--skill&lt;/span&gt; &amp;lt;skill-name&amp;gt; &lt;span class="nt"&gt;--task&lt;/span&gt; &lt;span class="s2"&gt;"&amp;lt;task&amp;gt;"&lt;/span&gt; &lt;span class="nt"&gt;--result&lt;/span&gt; &amp;lt;success|fail|partial&amp;gt; &lt;span class="nt"&gt;--score&lt;/span&gt; &amp;lt;0.0-1.0&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. The self-improving loop runs automatically from there.&lt;/p&gt;

&lt;h2&gt;
  
  
  Production Results
&lt;/h2&gt;

&lt;p&gt;We run Agent Skill Bus in production at &lt;a href="https://miyabi-ai.jp" rel="noopener noreferrer"&gt;LLC Miyabi&lt;/a&gt;, coordinating 42 AI agents daily:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;27 tasks/day&lt;/strong&gt; average throughput&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;44 cron jobs&lt;/strong&gt; feeding the bus&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;57% reduction in skill failures&lt;/strong&gt; after enabling the self-improvement loop&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Fastest security incident response: 7 minutes&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The 57% number is the one that matters. More than half of skill failures were preventable — they were caused by silent degradation that the loop caught and fixed before users noticed.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Bigger Picture
&lt;/h2&gt;

&lt;p&gt;We're at an inflection point. AI agents are moving from demos to production. Gartner predicts 40% of enterprise applications will embed task-specific AI agents by end of 2026, up from less than 5% in 2025.&lt;/p&gt;

&lt;p&gt;That's an 8x increase in agents — without a corresponding increase in monitoring infrastructure.&lt;/p&gt;

&lt;p&gt;The software industry solved this decades ago. We don't ship code without tests, CI/CD, and monitoring. Agent skills deserve the same treatment.&lt;/p&gt;

&lt;p&gt;Agent Skill Bus is our answer: &lt;strong&gt;the missing runtime that keeps agent skills healthy in production&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Get Started
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install and initialize&lt;/span&gt;
npx agent-skill-bus init

&lt;span class="c"&gt;# Record your first skill execution&lt;/span&gt;
npx agent-skill-bus record-run &lt;span class="nt"&gt;--agent&lt;/span&gt; my-agent &lt;span class="nt"&gt;--skill&lt;/span&gt; api-caller &lt;span class="nt"&gt;--task&lt;/span&gt; &lt;span class="s2"&gt;"test"&lt;/span&gt; &lt;span class="nt"&gt;--result&lt;/span&gt; success &lt;span class="nt"&gt;--score&lt;/span&gt; 1.0

&lt;span class="c"&gt;# Check what needs attention&lt;/span&gt;
npx agent-skill-bus flagged

&lt;span class="c"&gt;# See the dashboard&lt;/span&gt;
npx agent-skill-bus dashboard
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/ShunsukeHayashi/agent-skill-bus" rel="noopener noreferrer"&gt;github.com/ShunsukeHayashi/agent-skill-bus&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Full ecosystem (110+ skills)&lt;/strong&gt;: &lt;a href="https://agentskills.bath.me" rel="noopener noreferrer"&gt;agentskills.bath.me&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Zero dependencies. MIT licensed. Framework-agnostic.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Built by &lt;a href="https://miyabi-ai.jp" rel="noopener noreferrer"&gt;LLC Miyabi&lt;/a&gt; — running 42 AI agents in production daily.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Follow &lt;a href="https://x.com/The_AGI_WAY" rel="noopener noreferrer"&gt;@The_AGI_WAY&lt;/a&gt; for updates.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>monitoring</category>
      <category>agents</category>
    </item>
  </channel>
</rss>
