<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Ken W Alger</title>
    <description>The latest articles on Forem by Ken W Alger (@kenwalger).</description>
    <link>https://forem.com/kenwalger</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F15734%2F22d0195e-9fce-4d80-9ae2-3bb416bf8d6f.jpg</url>
      <title>Forem: Ken W Alger</title>
      <link>https://forem.com/kenwalger</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/kenwalger"/>
    <language>en</language>
    <item>
      <title>The Inference Renaissance</title>
      <dc:creator>Ken W Alger</dc:creator>
      <pubDate>Fri, 08 May 2026 16:28:00 +0000</pubDate>
      <link>https://forem.com/kenwalger/the-inference-renaissance-374n</link>
      <guid>https://forem.com/kenwalger/the-inference-renaissance-374n</guid>
      <description>&lt;h2&gt;
  
  
  Pattern Defined
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Precise Definition:&lt;/strong&gt; Inference Patterns are repeatable architectural frameworks that govern how an LLM processes, retrieves, and acts upon information to ensure deterministic reliability and cost-efficiency.&lt;/p&gt;

&lt;h2&gt;
  
  
  Problem Being Solved
&lt;/h2&gt;

&lt;p&gt;We are currently in the “Vibe-Coding” era of AI development. While prompt engineering got us through the door, it fails at the enterprise level because it lacks structural integrity. Without patterns, prompt engineering simply doesn’t scale.&lt;/p&gt;

&lt;p&gt;For those who have followed my &lt;em&gt;Forensics&lt;/em&gt; work, the stakes are higher than just “bad answers”. When context windows carry irrelevant or sensitive materials through to inference, such as with the &lt;a href="https://www.kenwalger.com/blog/ai/the-sovereign-vault-mcp-case-study-high-integrity-ai/" rel="noopener noreferrer"&gt;Sovereign Vault&lt;/a&gt;, privacy airlocks fail. Expensively. The &lt;a href="https://www.kenwalger.com/blog/ai/the-sovereign-redactor-a-precision-guided-privacy-airlock" rel="noopener noreferrer"&gt;Sovereign Redactor&lt;/a&gt; only works if the architecture around it is as disciplined as the model itself.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use Case
&lt;/h2&gt;

&lt;p&gt;Consider a &lt;a href="https://dev.to/kenwalger/archival-intelligence-a-forensic-rare-book-auditor-448"&gt;Forensic Rare Book Auditor&lt;/a&gt; attempting to validate a 19th-century shipping ledger. If the system simply “searches” for a record, it may find it, but it cannot verify the provenance or manage the cost of the high-reasoning required to interpret handwritten data. Without a pattern, the system is just a digital lucky dip.&lt;/p&gt;

&lt;h2&gt;
  
  
  Solution
&lt;/h2&gt;

&lt;p&gt;Over the coming weeks, I am applying the same rigor I used for the &lt;a href="https://www.mongodb.com/company/blog/building-with-patterns-a-summary" rel="noopener noreferrer"&gt;MongoDB Building with Patterns&lt;/a&gt; series to the AI stack. I will explore patterns across three domains, covering five architectural primitives:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Efficiency Patterns:&lt;/strong&gt; Speculative Decoding, Context Compression&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Structural Retrieval:&lt;/strong&gt; Hybrid Retrieval&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Agentic Reliability:&lt;/strong&gt; Agent Tool-Calling, Multi-Model Routing&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Trade-Offs
&lt;/h2&gt;

&lt;p&gt;There is a specific unit of pain associated with this transition. Your first pattern-governed system will take longer to ship than a prompt-engineered equivalent. Expect at least two additional sprint cycles for schema design and handoff contracts. For &lt;strong&gt;Technical Leaders&lt;/strong&gt;, the trade-off is front-loading the engineering labor to eliminate the downstream volatility of &lt;em&gt;hallucination-hunting&lt;/em&gt;. You are trading “quick-start” speed for long-term governance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;The era of the “Black Box” is ending. By applying these patterns, we can move from accidental success to engineered reliability.&lt;/p&gt;

&lt;h3&gt;
  
  
  Next Up
&lt;/h3&gt;

&lt;p&gt;In two weeks, we go deep on &lt;em&gt;Speculative Decoding&lt;/em&gt; and why you should stop paying for high-reasoning tokens you don’t actually need.&lt;/p&gt;

&lt;h3&gt;
  
  
  Inference Pattern Series
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.kenwalger.com/blog/uncategorized/inference-patterns-renaissance-vibe-coding-to-engineering" rel="noopener noreferrer"&gt;Inference Renaissance&lt;/a&gt; – &lt;em&gt;This Post&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Speculative Decoding – &lt;em&gt;May 21&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Context Compression Pattern – &lt;em&gt;June 4&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Hybrid Retrieval – &lt;em&gt;June 18&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Agent Tool-Calling – &lt;em&gt;July 2&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Multi-Model Routing – &lt;em&gt;July 16&lt;/em&gt;
&lt;a href="https://www.facebook.com/sharer.php?u=https%3A%2F%2Fwww.kenwalger.com%2Fblog%2Fai-engineering%2Finference-patterns-renaissance-vibe-coding-to-engineering%2F&amp;amp;t=The%20Inference%20Renaissance&amp;amp;s=100&amp;amp;p[url]=https%3A%2F%2Fwww.kenwalger.com%2Fblog%2Fai-engineering%2Finference-patterns-renaissance-vibe-coding-to-engineering%2F&amp;amp;p[images][0]=&amp;amp;p[title]=The%20Inference%20Renaissance" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.kenwalger.com%2Fblog%2Fwp-content%2Fplugins%2Fsocial-media-feather%2Fsynved-social%2Fimage%2Fsocial%2Fregular%2F96x96%2Ffacebook.png" title="Share on Facebook" alt="Facebook" width="96" height="96"&gt;&lt;/a&gt;&lt;a href="https://twitter.com/intent/tweet?url=https%3A%2F%2Fwww.kenwalger.com%2Fblog%2Fai-engineering%2Finference-patterns-renaissance-vibe-coding-to-engineering%2F&amp;amp;text=Hey%20check%20this%20out" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.kenwalger.com%2Fblog%2Fwp-content%2Fplugins%2Fsocial-media-feather%2Fsynved-social%2Fimage%2Fsocial%2Fregular%2F96x96%2Ftwitter.png" title="Share on Twitter" alt="twitter" width="128" height="128"&gt;&lt;/a&gt;&lt;a href="https://www.reddit.com/submit?url=https%3A%2F%2Fwww.kenwalger.com%2Fblog%2Fai-engineering%2Finference-patterns-renaissance-vibe-coding-to-engineering%2F&amp;amp;title=The%20Inference%20Renaissance" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.kenwalger.com%2Fblog%2Fwp-content%2Fplugins%2Fsocial-media-feather%2Fsynved-social%2Fimage%2Fsocial%2Fregular%2F96x96%2Freddit.png" title="Share on Reddit" alt="reddit" width="96" height="96"&gt;&lt;/a&gt;&lt;a 
href="https://www.linkedin.com/shareArticle?mini=true&amp;amp;url=https%3A%2F%2Fwww.kenwalger.com%2Fblog%2Fai-engineering%2Finference-patterns-renaissance-vibe-coding-to-engineering%2F&amp;amp;title=The%20Inference%20Renaissance" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.kenwalger.com%2Fblog%2Fwp-content%2Fplugins%2Fsocial-media-feather%2Fsynved-social%2Fimage%2Fsocial%2Fregular%2F96x96%2Flinkedin.png" title="Share on Linkedin" alt="linkedin" width="96" height="96"&gt;&lt;/a&gt;&lt;a href="mailto:?subject=The%20Inference%20Renaissance&amp;amp;body=Hey%20check%20this%20out:%20https%3A%2F%2Fwww.kenwalger.com%2Fblog%2Fai-engineering%2Finference-patterns-renaissance-vibe-coding-to-engineering%2F"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.kenwalger.com%2Fblog%2Fwp-content%2Fplugins%2Fsocial-media-feather%2Fsynved-social%2Fimage%2Fsocial%2Fregular%2F96x96%2Fmail.png" title="Share by email" alt="mail" width="96" height="96"&gt;&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The post &lt;a href="https://www.kenwalger.com/blog/ai-engineering/inference-patterns-renaissance-vibe-coding-to-engineering/" rel="noopener noreferrer"&gt;The Inference Renaissance&lt;/a&gt; appeared first on &lt;a href="https://www.kenwalger.com/blog" rel="noopener noreferrer"&gt;Blog of Ken W. Alger&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aiengineering</category>
      <category>architecturalstrateg</category>
      <category>digitalforensics</category>
      <category>inferencepatterns</category>
    </item>
    <item>
      <title>The Local Eye (Sovereign Vision)</title>
      <dc:creator>Ken W Alger</dc:creator>
      <pubDate>Thu, 07 May 2026 16:34:41 +0000</pubDate>
      <link>https://forem.com/kenwalger/the-local-eye-sovereign-vision-2h4h</link>
      <guid>https://forem.com/kenwalger/the-local-eye-sovereign-vision-2h4h</guid>
      <description>&lt;p&gt;We’ve built a system that is &lt;a href="https://www.kenwalger.com/blog/ai/ai-agent-reliability-llm-as-a-judge" rel="noopener noreferrer"&gt;Reliable&lt;/a&gt;, &lt;a href="https://www.kenwalger.com/blog/ai/the-accountant-optimizing-ai-costs-with-semantic-routing/" rel="noopener noreferrer"&gt;Affordable&lt;/a&gt;, and &lt;a href="https://www.kenwalger.com/blog/ai/ai-agent-governance-human-in-the-loop-hitl/" rel="noopener noreferrer"&gt;Governed&lt;/a&gt;. But until now, our Forensic Team has been "blind." It could only reconcile text-based metadata.&lt;/p&gt;

&lt;p&gt;In the world of rare book forensics, the text is only half the story. The typography, paper grain, and binding texture are the true "fingerprints." However, sending high-resolution, proprietary scans of a $50,000 asset to a cloud-based LLM is a Data Sovereignty nightmare.&lt;/p&gt;

&lt;p&gt;Today, we introduce &lt;strong&gt;The Local Eye&lt;/strong&gt;: Edge-based Multimodal Vision that processes pixels without letting them leak into the cloud.&lt;/p&gt;

&lt;h2&gt;The Sovereignty Gap in Multimodal AI&lt;/h2&gt;

&lt;p&gt;Most multimodal implementations send raw images directly to frontier models (like GPT-4o). For an enterprise, this is a liability.&lt;/p&gt;

&lt;ol&gt;
    &lt;li&gt;
&lt;strong&gt;Intellectual Property:&lt;/strong&gt; Who owns the training data rights to the scan?&lt;/li&gt;
    &lt;li&gt;
&lt;strong&gt;Privacy:&lt;/strong&gt; Does the image contain metadata or background information that violates NDAs?&lt;/li&gt;
    &lt;li&gt;
&lt;strong&gt;Cost:&lt;/strong&gt; Sending 10MB 4K images for every query is an "Accountant's" nightmare.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;Implementing "Feature Extraction" at the Edge&lt;/h2&gt;

&lt;p&gt;Instead of sending the image to the cloud, we use &lt;a href="https://ollama.com/library/llama3.2-vision" rel="noopener noreferrer"&gt;Llama 3.2 Vision&lt;/a&gt; running locally via &lt;a href="https://ollama.com/" rel="noopener noreferrer"&gt;Ollama&lt;/a&gt;. Our MCP server acts as an "Airlock."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Handshake:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Normalization:&lt;/strong&gt; The &lt;code&gt;sharp&lt;/code&gt; library resizes and standardizes the forensic scan locally.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Local Inference:&lt;/strong&gt; The Vision SLM analyzes the image and generates a text-based "Feature Map."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Metadata Egress:&lt;/strong&gt; Only the textual description is passed to the reasoning agents. Even if The Accountant routes the task to a Cloud model for deep analysis, the cloud only sees our description, never the pixels.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.kenwalger.com%2Fblog%2Fwp-content%2Fuploads%2F2026%2F03%2Fsovereign-ai-local-vision-mcp-architecture-1024x123.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.kenwalger.com%2Fblog%2Fwp-content%2Fuploads%2F2026%2F03%2Fsovereign-ai-local-vision-mcp-architecture-1024x123.png" alt="Architectural diagram of the 'Local Eye' workflow. An artifact image is processed locally using the Sharp library and Llama 3.2 Vision. Only the resulting text metadata is allowed to pass through the security airlock to cloud-based reasoning models, ensuring the original pixels never leave the local environment." width="800" height="96"&gt;&lt;/a&gt; &lt;/p&gt;
The Sovereign Vision Workflow—Extracting intelligence at the edge to prevent data leakage. 



&lt;p&gt;In code, it might look something like this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// From src/index.ts: The Vision Airlock
import sharp from 'sharp';
import ollama from 'ollama';

async function analyzeArtifactVision(imagePath: string, focus: string) {
  // Normalize locally so a consistent, bounded image reaches the model
  const processedImage = await sharp(imagePath).resize(512, 512).toBuffer();

  // Local-only call to Ollama
  const result = await ollama.generate({
    model: 'llama3.2-vision',
    prompt: `Analyze the ${focus} of this artifact.`,
    images: [processedImage.toString('base64')]
  });

  return result.response; // Pixels stay here. Only text leaves.
}
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;The "Zero-Pixel" Policy&lt;/h2&gt;

&lt;p&gt;The goal is to maximize &lt;strong&gt;Intelligence&lt;/strong&gt; while minimizing &lt;strong&gt;Exposure&lt;/strong&gt;. By implementing Local Vision, we treat the cloud as a "Reasoning Utility," not a "Data Store." We send it the logic puzzle, but we never give it the raw forensic evidence. We gain the power of frontier-model reasoning without the risk of data harvesting.&lt;/p&gt;

&lt;h3&gt;Developer Lessons: The "Latency of Locality"&lt;/h3&gt;

&lt;p&gt;In building the Sovereign Vault, we learned that 'Data Sovereignty' has a physical cost: &lt;strong&gt;Time&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;While a cloud-based API might analyze a 4K image in seconds, running a deep-dive OCR and visual analysis on local consumer hardware using Llama 3.2-Vision takes significantly longer. We had to tune our "Airlock" timeouts—raising the ceiling from &lt;strong&gt;120 seconds&lt;/strong&gt; to &lt;strong&gt;300 seconds&lt;/strong&gt;—to give the local "Eye" enough time to process complex handwriting on a standard CPU.&lt;/p&gt;

&lt;p&gt;Additionally, we realized that our error logs were a potential privacy leak. We implemented &lt;em&gt;Log Truncation&lt;/em&gt; to ensure that even our failures respect the Sovereign Vault's privacy mandate.&lt;/p&gt;
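&lt;p&gt;A minimal sketch of that truncation, with illustrative names and limits (the real Vault code may differ):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Illustrative log sanitizer: persist the shape of a failure, never the
// full (potentially sensitive) payload. Names and limits are examples.
const MAX_LOG_CHARS = 80;

function truncateForLog(payload: string, max = MAX_LOG_CHARS): string {
  if (payload.length &lt;= max) return payload;
  // Keep a short prefix for debugging and mark the elision explicitly.
  return payload.slice(0, max) + `… [truncated ${payload.length - max} chars]`;
}

// Usage: log that the vision tool failed, not what it saw.
console.error('vision tool failed:', truncateForLog('x'.repeat(500)));
&lt;/code&gt;&lt;/pre&gt;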

&lt;h2&gt;The "Zero-Glue" Discovery&lt;/h2&gt;

&lt;p&gt;In a traditional setup, adding vision would require rewriting the orchestrator's core logic. Because we use the &lt;strong&gt;Model Context Protocol&lt;/strong&gt;, the orchestrator simply asked the server: "What can you do?". The server replied with the &lt;code&gt;analyze_artifact_vision&lt;/code&gt; manifest. The agent then dynamically decided to use this new "Eye" to investigate the Gatsby image. No new glue code was written to connect the vision model to the reasoning brain.&lt;/p&gt;
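&lt;p&gt;Conceptually, the discovery step looks like this. The manifest below is a sketch in the spirit of MCP's tool listing, not the exact protocol schema:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Illustrative tool manifest, loosely following MCP's tools/list shape.
const visionTool = {
  name: 'analyze_artifact_vision',
  description: 'Locally analyze an artifact scan; returns text only.',
  inputSchema: {
    type: 'object',
    properties: {
      imagePath: { type: 'string' },
      focus: { type: 'string' }
    },
    required: ['imagePath', 'focus']
  }
};

// The orchestrator discovers capabilities instead of hard-coding them.
function serverSupports(tools: Array&lt;{ name: string }&gt;, name: string): boolean {
  return tools.some((t) =&gt; t.name === name);
}
&lt;/code&gt;&lt;/pre&gt;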

&lt;h2&gt;Case Study: The Gatsby Inscription&lt;/h2&gt;

&lt;p&gt;To test our &lt;em&gt;Sovereign Vault&lt;/em&gt;, we ran a forensic audit on a high-value first edition of &lt;em&gt;The Great Gatsby&lt;/em&gt;. Our local Vision Agent detected something anomalous on the title page: a cursive, multi-line inscription.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.kenwalger.com%2Fblog%2Fwp-content%2Fuploads%2F2026%2F03%2Fgreat_gatsby.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.kenwalger.com%2Fblog%2Fwp-content%2Fuploads%2F2026%2F03%2Fgreat_gatsby.jpg" alt="An image of The Great Gatsby copyright page" width="700" height="504"&gt;&lt;/a&gt; &lt;/p&gt;
Image credit: &lt;a href="https://lib.usm.edu/spcol/exhibitions/item_of_the_month/iotm_june_2021.html" rel="noopener noreferrer"&gt;University of Southern Mississippi Special Collections&lt;/a&gt; (June 2021 Item of the Month)



&lt;h3&gt;The Sovereign Trace&lt;/h3&gt;

&lt;p&gt;When we ran the &lt;code&gt;analyze_artifact_vision&lt;/code&gt; tool, the local Llama 3.2 Vision model performed a deep scan and returned a fascinating finding:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;**Visual Findings: Handwritten Inscription**
* Location: Right-hand margin of title page
* Medium: Faint pencil, cursive script
* Transcribed Content: "Then we are not alone at all when we remember that we have in our hearts that something so precious..."
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Why this matters:&lt;/strong&gt; Notice that the model didn't just see "scribbles." It attempted to transcribe a 40-word passage. Crucially, the &lt;strong&gt;Forensic Analyst&lt;/strong&gt; (Claude) recognized that this text does not exist in any canonical version of &lt;em&gt;The Great Gatsby&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;This is a massive forensic win. The "Eye" identified a potential &lt;strong&gt;fabricated provenance&lt;/strong&gt; or a non-standard owner intervention. Because this happened inside our "&lt;strong&gt;Airlock&lt;/strong&gt;," the specific handwriting and the non-canonical text were captured without ever touching a cloud API.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Architect’s Trade-off: The Reasoning Gap&lt;/strong&gt;&lt;br&gt;
While our local Llama 3.2-Vision is an incredible "Eye," it occasionally faces a &lt;strong&gt;Reasoning Gap&lt;/strong&gt;. In certain runs, it may identify a note as "illegible" or produce repetitive output due to CPU thermal throttling or model constraints.&lt;/p&gt;

&lt;p&gt;Instead of hallucinating a "clean" signature, our system is designed to &lt;strong&gt;Safe-Fail&lt;/strong&gt;. It flags the finding as &lt;strong&gt;"Indeterminate"&lt;/strong&gt; and triggers a &lt;strong&gt;High-Severity Human Authorization&lt;/strong&gt; request.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Governance Challenge:&lt;/strong&gt; We now have a transcribed inscription that might contain a previous owner's private thoughts or names. If we simply passed this output to an LLM for summarization, we would have leaked a private message to a third-party server. This discovery sets the stage for our next architectural layer: &lt;strong&gt;The Redactor&lt;/strong&gt;.&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>architecture</category>
      <category>privacy</category>
    </item>
    <item>
      <title>Why Your Tech Stack Doesn’t Matter</title>
      <dc:creator>Ken W Alger</dc:creator>
      <pubDate>Wed, 06 May 2026 22:36:18 +0000</pubDate>
      <link>https://forem.com/kenwalger/why-your-tech-stack-doesnt-matter-4e59</link>
      <guid>https://forem.com/kenwalger/why-your-tech-stack-doesnt-matter-4e59</guid>
      <description>&lt;h2&gt;
  
  
  Architecting for Reliability in the Age of Multi-Agent Systems
&lt;/h2&gt;

&lt;p&gt;We are currently over-indexing on "Model Orchestration." &lt;/p&gt;

&lt;p&gt;Every week, a new library, a new vector database, or a new framework tops the GitHub trending charts. &lt;/p&gt;

&lt;p&gt;This week it might be &lt;a href="https://info.langchain.com/" rel="noopener noreferrer"&gt;LangGraph&lt;/a&gt;. Next week, &lt;a href="https://crewai.com/" rel="noopener noreferrer"&gt;CrewAI&lt;/a&gt;. Something else right behind it.&lt;/p&gt;

&lt;p&gt;Every week the same question shows up:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Which stack should I use to build a reliable multi-agent system?"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It's the wrong question.&lt;/p&gt;

&lt;p&gt;Because I've yet to see a system fail because of the wrong framework, language, or database.&lt;/p&gt;

&lt;p&gt;I've seen them fail because they couldn't recover state, couldn't control context, and couldn't explain what they just did. &lt;/p&gt;

&lt;p&gt;There’s a persistent belief that the logo on the documentation is the secret sauce for a production-ready system.&lt;/p&gt;

&lt;p&gt;It isn’t. In fact, if you’re spending the majority of your time debating the stack, you’re missing the architectural patterns that actually determine whether your agents will succeed or hallucinate into oblivion.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Illusion of the Framework
&lt;/h2&gt;

&lt;p&gt;A Multi-Agent System (MAS) is &lt;strong&gt;not&lt;/strong&gt; a library problem. It is a &lt;strong&gt;State Management&lt;/strong&gt; problem disguised as an AI problem. Whether you use a graph-based logic or a role-based queue, the fundamental challenges and failure modes remain identical:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;lost state&lt;/li&gt;
&lt;li&gt;bloated context&lt;/li&gt;
&lt;li&gt;untraceable decisions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The stack you choose is merely the syntax you use to solve universal engineering constraints.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;The Core Thesis:&lt;/strong&gt; Reliability in agentic workflows is derived from &lt;em&gt;patterns&lt;/em&gt;, not &lt;em&gt;packages&lt;/em&gt;. A secure, scalable system built in Python looks fundamentally the same as one built in Rust if the underlying system primitives are respected.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  The Three Constants of Reliable Agents
&lt;/h2&gt;

&lt;p&gt;Regardless of your tools, your architecture must solve for these three pillars to move from a "cool demo" to a production asset:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;State is Sovereign
If an agentic loop fails at step 7 of 12, does your system restart from scratch? If so, your stack doesn't matter because your architecture is broken. A resilient system requires &lt;strong&gt;Deterministic Checkpointing&lt;/strong&gt;: 

&lt;ul&gt;
&lt;li&gt;Capture the full &lt;strong&gt;thread state&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Preserve &lt;strong&gt;intent&lt;/strong&gt;, not just data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resume&lt;/strong&gt; execution without replaying the entire workflow.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Without this, your system is just a loop with amnesia.&lt;/p&gt;
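&lt;p&gt;The shape of that primitive is framework-agnostic. A minimal in-memory sketch (a production store would persist to disk or a database):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Minimal sketch of deterministic checkpointing: persist thread state and
// intent at each step so a failure at step 7 resumes at step 7, not step 1.
interface Checkpoint {
  threadId: string;
  step: number;
  intent: string;                 // why the workflow is running, not just its data
  state: Record&lt;string, unknown&gt;; // full thread state at this step
}

class CheckpointStore {
  private latest = new Map&lt;string, Checkpoint&gt;();

  save(cp: Checkpoint): void {
    this.latest.set(cp.threadId, cp);
  }

  // Resume from the last durable step instead of replaying the workflow.
  resume(threadId: string): Checkpoint | undefined {
    return this.latest.get(threadId);
  }
}
&lt;/code&gt;&lt;/pre&gt;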

&lt;ol start="2"&gt;
&lt;li&gt;The Context Tax
Context windows are not infinite. In reality, every token you give an agent is a tax on its reasoning. The "how" isn't about which LLM you use; it's about the &lt;strong&gt;Routing Layer&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Classify intent&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Expose&lt;/strong&gt; only relevant tools&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Minimize&lt;/strong&gt; context surface area&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Less context doesn’t limit the system—it sharpens it.&lt;/p&gt;
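&lt;p&gt;A toy sketch of such a routing layer; the intents, keywords, and tool names are invented for illustration, and a real classifier would be a model, not a regex:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Sketch of a routing layer: classify intent first, then expose only the
// tools that intent needs, shrinking the context surface area.
type Intent = 'search' | 'summarize' | 'transcribe';

const toolsByIntent: Record&lt;Intent, string[]&gt; = {
  search: ['vector_search'],
  summarize: ['summarizer'],
  transcribe: ['ocr_reader']
};

// Toy keyword classifier standing in for a real intent model.
function classifyIntent(query: string): Intent {
  if (/transcribe|handwriting/i.test(query)) return 'transcribe';
  if (/summar/i.test(query)) return 'summarize';
  return 'search';
}

function exposeTools(query: string): string[] {
  return toolsByIntent[classifyIntent(query)];
}
&lt;/code&gt;&lt;/pre&gt;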

&lt;ol start="3"&gt;
&lt;li&gt;Governance as a First-Class Citizen
An agent is a service principal. If it cannot be audited, revoked, or sandboxed at the identity level, it shouldn't have access to your data or exist in production.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A reliable system enforces:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Least-Privilege Authorization&lt;/strong&gt;, ensuring agents operate within a cryptographic "box" regardless of whether they are running in a Docker container or a serverless function.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scoped&lt;/strong&gt; tool usage&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Traceable execution&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Example
&lt;/h2&gt;

&lt;p&gt;Consider a simple multi-agent workflow:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1robv9b3amelp9pgo1ox.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1robv9b3amelp9pgo1ox.png" alt=" " width="800" height="527"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If your system can't resume from that point with the same context and intent, you don't have a system.&lt;/p&gt;

&lt;p&gt;You have a demo.&lt;/p&gt;

&lt;p&gt;A reliable system looks different.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F26gbdns2kzpaxc11sz8y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F26gbdns2kzpaxc11sz8y.png" alt=" " width="800" height="496"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Framework-Agnostic Checklist
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Pillar&lt;/th&gt;
&lt;th&gt;The Real Question&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Coordination&lt;/td&gt;
&lt;td&gt;How do agents hand off work without bloating context or losing intent?&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Observability&lt;/td&gt;
&lt;td&gt;Can we trace every decision back to inputs and reasoning steps?&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Resilience&lt;/td&gt;
&lt;td&gt;What happens when a model fails mid-workflow? Can we resume without replaying?&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Sovereignty&lt;/td&gt;
&lt;td&gt;Who owns the data and execution environment—us or the platform?&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Closing Thoughts
&lt;/h2&gt;

&lt;p&gt;These are not new problems. They're just showing up in a new layer.&lt;/p&gt;

&lt;p&gt;Stop chasing the framework. A system built in Python and one built in Rust will fail in exactly the same ways if the architecture is wrong.&lt;/p&gt;

&lt;p&gt;The difference isn't the stack. It's whether you've designed for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;State&lt;/li&gt;
&lt;li&gt;Context&lt;/li&gt;
&lt;li&gt;Control&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The tools are interchangeable. The architecture is not.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This is the foundation for the upcoming &lt;strong&gt;Sovereign Synapse&lt;/strong&gt; series—where we move from theory to a local-first system that treats memory, context, and ownership as first-class concerns.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>architecture</category>
      <category>distributedsystems</category>
      <category>devops</category>
    </item>
    <item>
      <title>When Your AI Pipeline Grows Up: Infrastructure Thinking for Real-Time Inference at Scale</title>
      <dc:creator>Ken W Alger</dc:creator>
      <pubDate>Wed, 06 May 2026 15:45:00 +0000</pubDate>
      <link>https://forem.com/kenwalger/when-your-ai-pipeline-grows-up-infrastructure-thinking-for-real-time-inference-at-scale-1g7d</link>
      <guid>https://forem.com/kenwalger/when-your-ai-pipeline-grows-up-infrastructure-thinking-for-real-time-inference-at-scale-1g7d</guid>
      <description>&lt;p&gt;There’s a familiar arc in AI development. A team builds a model, wires up a pipeline, and ships it. It works. In the demo, it’s fast. Features arrive cleanly, predictions feel fresh, vector search returns sensible results. Everyone is happy.&lt;/p&gt;

&lt;p&gt;Then production happens.&lt;/p&gt;

&lt;p&gt;Latencies spike unpredictably. Features arrive stale. The vector index that performed beautifully at 100K records starts degrading at 10M. The system that hummed in development begins to wheeze under real load. The model hasn’t changed. The accuracy metrics still look fine. But the &lt;em&gt;system&lt;/em&gt; is struggling — and accuracy is no longer the only thing that matters.&lt;/p&gt;

&lt;p&gt;This post is about what comes after model accuracy: the infrastructure concerns that determine whether your real-time AI actually works in production at scale.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Gap Between Dev and Prod
&lt;/h2&gt;

&lt;p&gt;Most ML pipelines are designed around a happy-path assumption: data is clean, features are fresh, requests arrive at a manageable pace, and the compute resources you provisioned are enough. These assumptions hold in development. They rarely hold at scale.&lt;/p&gt;

&lt;p&gt;The production environment introduces three categories of pressure that expose architectural weaknesses:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Load variability.&lt;/strong&gt; Traffic is never flat. Real-world AI workloads spike — product launches, viral events, end-of-quarter reporting rushes, user behavior patterns tied to time zones. A pipeline that performs at P50 doesn’t guarantee acceptable behavior at P99. And P99 is where your users live when things go wrong.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Data velocity.&lt;/strong&gt; Features go stale. The world changes faster than batch refresh cycles. For recommendation systems, fraud detection, personalization engines, and anything that depends on recent behavioral signals, a feature value that’s 15 minutes old can be meaningfully worse than one that’s 15 seconds old. The gap between feature generation and model consumption is a direct contributor to prediction quality degradation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Index drift.&lt;/strong&gt; Vector search is not a set-it-and-forget-it operation. As your embedding space grows and evolves — new documents, updated products, revised knowledge bases — the indices that power semantic search require continuous maintenance. Approximate Nearest Neighbor (ANN) indices in particular degrade in relevance and response time as the data distribution shifts underneath them.&lt;/p&gt;

&lt;p&gt;Understanding these three pressures is the first step toward designing systems that survive them.&lt;/p&gt;




&lt;h2&gt;
  
  
  What “Real-Time” Actually Requires
&lt;/h2&gt;

&lt;p&gt;“Real-time AI” is an overloaded term. Before you can design for it, you need to be precise about what it means in your context. There are at least three meaningful tiers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Near-real-time (seconds to minutes):&lt;/strong&gt; Acceptable for many analytics, batch recommendation refreshes, and reporting use cases.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Low-latency (sub-second):&lt;/strong&gt; Required for interactive recommendation, search, and user-facing personalization.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Streaming real-time (milliseconds):&lt;/strong&gt; Required for fraud detection, financial trading signals, and reactive safety systems.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each tier demands different architectural choices. A feature store that works beautifully for near-real-time refreshes may be completely unsuited for millisecond-latency inference. The first architectural question to ask isn’t &lt;em&gt;“how do we get features?”&lt;/em&gt; — it’s &lt;em&gt;“what does real-time actually mean for this workload?”&lt;/em&gt;&lt;/p&gt;
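&lt;p&gt;One way to make that answer concrete is to attach an explicit latency budget to each tier up front. The numbers below are illustrative, not benchmarks:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Illustrative latency budgets for the three tiers of "real-time".
const latencyBudgetMs = {
  nearRealTime: 60_000, // seconds to minutes
  lowLatency: 500,      // sub-second, interactive
  streaming: 50         // milliseconds, reactive systems
} as const;

// Given a workload's required response time, name the tier it implies.
function tierFor(budgetMs: number): keyof typeof latencyBudgetMs {
  if (budgetMs &lt;= latencyBudgetMs.streaming) return 'streaming';
  if (budgetMs &lt;= latencyBudgetMs.lowLatency) return 'lowLatency';
  return 'nearRealTime';
}
&lt;/code&gt;&lt;/pre&gt;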

&lt;p&gt;Once you’ve answered that, you can reason about the pipeline design.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Three Architectural Pillars
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Feature Freshness: Designing for the Speed of Your Signal
&lt;/h3&gt;

&lt;p&gt;The feature pipeline is where most latency and staleness problems originate. There are two broad architectures:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Batch feature pipelines&lt;/strong&gt; compute features on a schedule — hourly, daily, or on-demand — and write them to a feature store. They’re operationally simple and cost-efficient. They’re also structurally incapable of delivering fresh signals for low-latency workloads.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Streaming feature pipelines&lt;/strong&gt; compute features continuously as events arrive, using frameworks like Apache Kafka, Apache Flink, or Spark Structured Streaming. They’re more complex to build and operate, but they’re the only viable path when your model needs to reason about what happened in the last 30 seconds.&lt;/p&gt;

&lt;p&gt;The practical reality is that most production systems need both. A &lt;em&gt;Lambda architecture&lt;/em&gt; pattern — combining batch for historical aggregates with streaming for real-time signals — gives you the freshness of streaming where it matters without abandoning the reliability and richness of batch-computed features.&lt;/p&gt;
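&lt;p&gt;As a minimal sketch of the streaming half (framework-free Python rather than Flink or Kafka Streams; the class and names here are invented for illustration), a trailing-window counter captures the kind of per-entity stateful computation a streaming feature pipeline maintains:&lt;/p&gt;

```python
from collections import deque
import time

class SlidingWindowCounter:
    """Streaming-feature sketch: count of events in the trailing window."""

    def __init__(self, window_seconds: float):
        self.window = window_seconds
        self.events = deque()  # event timestamps, oldest first

    def observe(self, ts=None):
        self.events.append(time.time() if ts is None else ts)

    def value(self, now=None):
        now = time.time() if now is None else now
        # Evict anything that has aged out of the window.
        while self.events and self.events[0] <= now - self.window:
            self.events.popleft()
        return len(self.events)
```

&lt;p&gt;A real pipeline would keep this state inside a stream processor and emit updates to the online feature store; the continuous eviction is exactly what a scheduled batch job cannot provide.&lt;/p&gt;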

&lt;p&gt;Key design decisions in feature pipelines:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Point-in-time correctness:&lt;/strong&gt; Features used for training must reflect what the system would have known at the moment of prediction — not values computed with hindsight. Failure to enforce this introduces training-serving skew, one of the most insidious sources of silent model degradation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backfill capability:&lt;/strong&gt; Can your streaming pipeline reconstruct historical features when you retrain? Architectures that can’t backfill trade away long-term flexibility for short-term simplicity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Feature reuse:&lt;/strong&gt; The same feature — a user’s 7-day purchase count, for example — is often needed by multiple models. Centralizing feature computation prevents redundant infrastructure and inconsistent definitions across teams.&lt;/li&gt;
&lt;/ul&gt;
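&lt;p&gt;Point-in-time correctness is easiest to see in code. A hedged sketch (pure Python; in practice this is an "as-of" join in your feature store or warehouse): for each training example at time &lt;code&gt;t&lt;/code&gt;, take the latest feature value recorded at or before &lt;code&gt;t&lt;/code&gt;, never a later one:&lt;/p&gt;

```python
import bisect

def point_in_time_join(feature_history, label_times):
    """
    feature_history: [(timestamp, value), ...] for one entity, sorted by timestamp.
    label_times: timestamps at which predictions/labels occurred.
    Returns the feature value the system would have known at each label time.
    """
    times = [ts for ts, _ in feature_history]
    joined = []
    for t in label_times:
        i = bisect.bisect_right(times, t) - 1  # rightmost update at or before t
        joined.append(feature_history[i][1] if i >= 0 else None)  # None = no history yet
    return joined
```

&lt;p&gt;Using a later value instead (the hindsight bug) is how training-serving skew slips into otherwise correct pipelines.&lt;/p&gt;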




&lt;h3&gt;
  
  
  2. The Feature Store: Consistency and Latency at Scale
&lt;/h3&gt;

&lt;p&gt;A feature store is the operational hub of a real-time ML system. It serves as the bridge between feature computation (where data scientists live) and model inference (where production systems live). Getting its design right has outsized consequences.&lt;/p&gt;

&lt;p&gt;The central tension in feature store design is between &lt;strong&gt;consistency&lt;/strong&gt; and &lt;strong&gt;latency&lt;/strong&gt;. Achieving both simultaneously at scale is genuinely hard.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The dual-store pattern&lt;/strong&gt; is the most widely adopted solution. It separates storage into two layers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An &lt;strong&gt;online store&lt;/strong&gt; — typically an in-memory or low-latency key-value store — serves features at inference time. Reads must be fast, often sub-millisecond. The tradeoff is cost: fast storage is expensive, so online stores typically hold only the most recent feature values.&lt;/li&gt;
&lt;li&gt;An &lt;strong&gt;offline store&lt;/strong&gt; — typically a columnar data warehouse — serves training pipelines, batch scoring, and historical analysis. Reads are slower but the storage cost is orders of magnitude lower.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A write path synchronizes values between the two stores as new features are computed.&lt;/p&gt;
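&lt;p&gt;The write path is worth sketching, because it is where skew is prevented or introduced. In this hedged illustration (plain dictionaries stand in for a key-value store and a warehouse), a single write call derives both stores from the same computed value:&lt;/p&gt;

```python
class DualFeatureStore:
    """Dual-store sketch: latest values online, append-only history offline."""

    def __init__(self):
        self.online = {}    # (entity, feature) -> latest value, for inference
        self.offline = []   # (entity, feature, value, ts) rows, for training

    def write(self, entity, feature, value, ts):
        # One write path, one transformation: both stores see the same value,
        # which is the main structural defense against training-serving skew.
        self.online[(entity, feature)] = value
        self.offline.append((entity, feature, value, ts))

    def get_online(self, entity, feature):
        return self.online.get((entity, feature))
```

&lt;p&gt;The moment online and offline values are computed by two separate code paths, you have two definitions of the same feature, and they will eventually diverge.&lt;/p&gt;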

&lt;p&gt;&lt;strong&gt;Consistency pitfalls to design against:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Training-serving skew:&lt;/strong&gt; If the offline store and online store derive features differently — even slightly — your model is trained on data that doesn’t match what it sees in production. This is silent and difficult to detect.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Schema drift:&lt;/strong&gt; Features evolve. Adding a new feature, changing a transformation, or retiring a deprecated one all require careful version management. Feature stores without explicit schema governance accumulate technical debt that eventually manifests as production incidents.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cold start:&lt;/strong&gt; When a new entity (a new user, a new product) arrives with no feature history, what does the model see? Null-handling and default value strategy belong in the feature store design, not as afterthoughts.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Access pattern design:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Feature retrieval for inference often involves batch point lookups — fetching dozens of feature values for a single entity across multiple feature groups simultaneously. The data model and indexing strategy of your online store must be optimized for this access pattern, not for the range scans and aggregations that suit an offline analytical store.&lt;/p&gt;
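&lt;p&gt;A hedged sketch of that access pattern (the feature names and defaults are invented; a production online store would serve this as a single batched network call): one point lookup per request, with explicit defaults covering the cold-start case:&lt;/p&gt;

```python
# Invented feature names; defaults make cold-start behavior explicit.
DEFAULTS = {"purchase_count_7d": 0, "avg_session_minutes": 0.0}

def fetch_features(online_store, entity, feature_names):
    """Fetch all features for one entity in a single point lookup."""
    row = online_store.get(entity, {})  # {} = entity with no history yet
    return {name: row.get(name, DEFAULTS.get(name)) for name in feature_names}
```

&lt;p&gt;The key property is that a brand-new entity returns a complete, well-typed feature vector rather than an error, so the cold-start policy lives in one place.&lt;/p&gt;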




&lt;h3&gt;
  
  
  3. Vector Search at Scale: Maintaining Performance Under Continuous Change
&lt;/h3&gt;

&lt;p&gt;Vector databases and ANN search have moved from research curiosity to production infrastructure in a remarkably short time. They’re now central to RAG (Retrieval-Augmented Generation) pipelines, semantic search, recommendation systems, and multimodal applications. And they introduce a class of operational problems that most teams underestimate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The index degradation problem&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;ANN indices — HNSW, IVF, and their variants — are built for approximate search speed, not for correctness under mutation. They’re typically optimized at build time for a specific data distribution. As you add, update, and delete vectors continuously, several things happen:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Recall degrades:&lt;/strong&gt; The approximation quality drops as the index structure diverges from the actual data distribution.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Latency increases:&lt;/strong&gt; More nodes are traversed during search as the graph structure becomes less optimal.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tombstone accumulation:&lt;/strong&gt; Deleted vectors that aren’t fully purged create phantom results and slow index traversal.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The naive solution — periodic full index rebuilds — introduces its own problems: rebuild latency, resource contention during the rebuild window, and the risk of serving stale or inconsistent results during transitions.&lt;/p&gt;

&lt;p&gt;More sophisticated approaches include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Incremental indexing:&lt;/strong&gt; Adding new vectors to the live index rather than rebuilding from scratch, trading some approximation quality for operational continuity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Segment-based architectures:&lt;/strong&gt; Maintaining multiple smaller index segments that are merged periodically, similar to how LSM-tree databases manage compaction. Fresh vectors land in small, easily rebuilt segments; cold vectors live in stable, large segments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Recall monitoring:&lt;/strong&gt; Treating recall as an operational metric — not just a benchmark number — and triggering maintenance operations when it drops below acceptable thresholds.&lt;/li&gt;
&lt;/ul&gt;
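&lt;p&gt;Recall monitoring is mechanically simple; the discipline is in running it continuously. A hedged sketch (function names and the threshold are illustrative): compare the ANN index's answers against a brute-force scan on a sample of live queries, and trigger maintenance when the rolling average drops:&lt;/p&gt;

```python
def recall_at_k(ann_ids, exact_ids, k):
    """Fraction of the true top-k neighbors the ANN index actually returned."""
    return len(set(ann_ids[:k]) & set(exact_ids[:k])) / k

def needs_maintenance(recent_recalls, threshold=0.95):
    # Treat recall as an operational metric: flag for rebuild below threshold.
    return bool(recent_recalls) and sum(recent_recalls) / len(recent_recalls) < threshold
```

&lt;p&gt;The brute-force pass is expensive, which is why it runs on a sample of queries rather than all traffic.&lt;/p&gt;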

&lt;p&gt;&lt;strong&gt;Filtering and hybrid search&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Production vector search is rarely pure semantic similarity. Real workloads layer metadata filters on top of vector similarity: find the most relevant product &lt;em&gt;in a user’s country&lt;/em&gt;, find the most similar document &lt;em&gt;within a specific category&lt;/em&gt;, find semantically related customers &lt;em&gt;above a revenue threshold&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Pre-filtering and post-filtering strategies have meaningfully different performance and correctness profiles. Pre-filtering (restricting the candidate set before ANN search) is faster but can miss relevant results if the filter is highly selective. Post-filtering (running ANN search broadly, then applying filters) is more complete but wastes compute. The right approach depends on your data distribution and selectivity characteristics — and it needs to be a deliberate architectural choice, not a default.&lt;/p&gt;
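&lt;p&gt;The difference is easy to make concrete with a brute-force sketch (cosine scoring over plain lists; real engines apply this inside the index, and the &lt;code&gt;overfetch&lt;/code&gt; factor is an illustrative guess): pre-filtering shrinks the candidate set before ranking, while post-filtering ranks broadly and discards afterward:&lt;/p&gt;

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def pre_filter_search(items, query, predicate, k):
    # Restrict candidates first, then rank only the survivors.
    survivors = [it for it in items if predicate(it[2])]
    survivors.sort(key=lambda it: cosine(it[1], query), reverse=True)
    return [it[0] for it in survivors[:k]]

def post_filter_search(items, query, predicate, k, overfetch=4):
    # Rank broadly, then filter; overfetch compensates for discarded hits.
    ranked = sorted(items, key=lambda it: cosine(it[1], query), reverse=True)
    hits = [it[0] for it in ranked[: k * overfetch] if predicate(it[2])]
    return hits[:k]
```

&lt;p&gt;In a real ANN engine the tradeoffs described above sit on top of this logic: pre-filtering can degrade graph traversal, and post-filtering spends compute on candidates the filter will discard.&lt;/p&gt;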




&lt;h2&gt;
  
  
  A Framework for Evaluating Your Pipeline
&lt;/h2&gt;

&lt;p&gt;Before committing to architectural decisions, it’s worth stress-testing your current or planned design against these questions:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;On feature freshness:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What is the maximum acceptable age of each feature at inference time?&lt;/li&gt;
&lt;li&gt;Do you have a streaming path for high-velocity signals?&lt;/li&gt;
&lt;li&gt;Is training-serving skew actively monitored?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;On the feature store:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Can you retrieve all features for a single inference request in a single round-trip?&lt;/li&gt;
&lt;li&gt;Is your schema versioned and your transformation logic reproducible?&lt;/li&gt;
&lt;li&gt;What happens when a new entity arrives with no feature history?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;On vector search:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Do you track recall as a production metric?&lt;/li&gt;
&lt;li&gt;How do you handle index updates without full rebuilds?&lt;/li&gt;
&lt;li&gt;Is your filtering strategy validated against your actual query distribution?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;On the system as a whole:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What is your P99 latency SLA, and have you load-tested to it?&lt;/li&gt;
&lt;li&gt;Where are your single points of failure?&lt;/li&gt;
&lt;li&gt;Can you replay or backfill features and embeddings if a component fails?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These aren’t hypothetical questions. Each one corresponds to a category of production incident that real teams have encountered when real-time AI systems scaled beyond their original design envelope.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Shift in Mindset
&lt;/h2&gt;

&lt;p&gt;Scaling real-time AI infrastructure requires a shift in how engineering teams think about the problem.&lt;/p&gt;

&lt;p&gt;In early development, the model is the system. Accuracy is the primary metric. Everything else is scaffolding.&lt;/p&gt;

&lt;p&gt;At scale, the &lt;em&gt;pipeline&lt;/em&gt; is the system. The model is one component — important, but dependent on everything that surrounds it. Latency, freshness, consistency, and recall become first-class engineering concerns, tracked with the same rigor as model performance metrics.&lt;/p&gt;

&lt;p&gt;The teams that make this transition successfully are the ones that start treating their feature pipelines, feature stores, and vector indices not as data infrastructure afterthoughts, but as the production systems they actually are — with SLAs, observability, capacity planning, and failure modes worth designing against from the start.&lt;/p&gt;

&lt;p&gt;Real-time AI at scale is harder than it looks. But it’s not mysterious. The problems are identifiable, the architectural patterns are well-understood, and the path forward is clear once you’re asking the right questions.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This post is part of an ongoing series on building production-grade AI systems. If you found this useful, consider sharing it with a teammate who’s hitting these problems for the first time.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  When Your AI Pipeline Grows Up
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Real-Time AI at Scale – &lt;em&gt;This Post&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Feature Freshness – &lt;em&gt;Coming 13 May 2026&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Feature Store – &lt;em&gt;Coming 20 May 2026&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Vector Search – &lt;em&gt;Coming 27 May 2026&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Operations – &lt;em&gt;Coming 3 June 2026&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The post &lt;a href="https://www.kenwalger.com/blog/ai/when-your-ai-pipeline-grows-up-infrastructure-thinking-for-real-time-inference-at-scale/" rel="noopener noreferrer"&gt;When Your AI Pipeline Grows Up: Infrastructure Thinking for Real-Time Inference at Scale&lt;/a&gt; appeared first on &lt;a href="https://www.kenwalger.com/blog" rel="noopener noreferrer"&gt;Blog of Ken W. Alger&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
    </item>
    <item>
      <title>The Backyard Quarry, Part 8: From Rocks to Reality</title>
      <dc:creator>Ken W Alger</dc:creator>
      <pubDate>Tue, 05 May 2026 14:18:53 +0000</pubDate>
      <link>https://forem.com/kenwalger/the-backyard-quarry-part-8-from-rocks-to-reality-1dbc</link>
      <guid>https://forem.com/kenwalger/the-backyard-quarry-part-8-from-rocks-to-reality-1dbc</guid>
      <description>&lt;p&gt;At the beginning of this series, the problem seemed simple.&lt;/p&gt;

&lt;p&gt;There were a lot of rocks in the yard.&lt;/p&gt;

&lt;p&gt;Some were small.&lt;/p&gt;

&lt;p&gt;Some were large.&lt;/p&gt;

&lt;p&gt;A few were firmly in what I’ve been calling Engine Block Class.&lt;/p&gt;

&lt;p&gt;The original idea was straightforward: catalog them, maybe sell a few, and build a small system around the process.&lt;/p&gt;

&lt;p&gt;Along the way, the project grew.&lt;/p&gt;

&lt;h2&gt;What We Built&lt;/h2&gt;

&lt;p&gt;Across the previous posts, the Backyard Quarry gradually evolved into something more structured.&lt;/p&gt;

&lt;p&gt;We explored:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;designing a schema for physical objects&lt;/li&gt;
    &lt;li&gt;capturing images and measurements&lt;/li&gt;
    &lt;li&gt;building ingestion pipelines&lt;/li&gt;
    &lt;li&gt;indexing and searching the dataset&lt;/li&gt;
    &lt;li&gt;representing objects as digital twins&lt;/li&gt;
    &lt;li&gt;scaling the system as the dataset grows&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;None of these ideas are particularly new on their own.&lt;/p&gt;

&lt;p&gt;But when combined, they form a recognizable structure.&lt;/p&gt;

&lt;h2&gt;The Pattern Behind the Project&lt;/h2&gt;

&lt;p&gt;What the Quarry experiment revealed is that many modern systems share the same underlying architecture.&lt;/p&gt;

&lt;p&gt;It doesn’t matter whether the input is:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;rocks in a backyard&lt;/li&gt;
    &lt;li&gt;industrial machine parts&lt;/li&gt;
    &lt;li&gt;museum artifacts&lt;/li&gt;
    &lt;li&gt;scanned environments&lt;/li&gt;
    &lt;li&gt;sensor data&lt;/li&gt;
    &lt;li&gt;documents or images&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The pattern remains surprisingly consistent.&lt;/p&gt;

&lt;p&gt;We start with the physical world.&lt;/p&gt;

&lt;p&gt;We capture information from it.&lt;/p&gt;

&lt;p&gt;We transform that information into structured data.&lt;/p&gt;

&lt;p&gt;Then we build systems on top of that structure.&lt;/p&gt;

&lt;h2&gt;The Signature Architecture&lt;/h2&gt;

&lt;p&gt;At a high level, the pattern looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.kenwalger.com%2Fblog%2Fwp-content%2Fuploads%2F2026%2F05%2Fphysical-world-to-data-platform-architecture-897x1024.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.kenwalger.com%2Fblog%2Fwp-content%2Fuploads%2F2026%2F05%2Fphysical-world-to-data-platform-architecture-897x1024.png" alt="Diagram showing a system architecture where physical world inputs flow through capture, ingestion, processing, storage, indexing, and application layers." width="800" height="913"&gt;&lt;/a&gt; &lt;/p&gt;
&lt;p&gt;&lt;em&gt;A common architecture pattern for systems that transform real-world inputs into usable digital platforms.&lt;/em&gt;&lt;/p&gt;



&lt;p&gt;Each layer has a role:&lt;/p&gt;

&lt;h3&gt;Capture Layer&lt;/h3&gt;

&lt;p&gt;The interface between the real world and the system.&lt;/p&gt;

&lt;p&gt;Examples:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;cameras&lt;/li&gt;
    &lt;li&gt;sensors&lt;/li&gt;
    &lt;li&gt;manual input&lt;/li&gt;
    &lt;li&gt;scanning systems&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Ingestion Pipeline&lt;/h3&gt;

&lt;p&gt;Raw inputs enter the system.&lt;/p&gt;

&lt;p&gt;Queues and ingestion services buffer incoming data.&lt;/p&gt;

&lt;p&gt;This stage provides resilience and scalability.&lt;/p&gt;

&lt;h3&gt;Processing &amp;amp; Transformation&lt;/h3&gt;

&lt;p&gt;Raw inputs are converted into usable forms.&lt;/p&gt;

&lt;p&gt;Examples:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;metadata extraction&lt;/li&gt;
    &lt;li&gt;photogrammetry&lt;/li&gt;
    &lt;li&gt;feature generation&lt;/li&gt;
    &lt;li&gt;classification&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Structured Data + Assets&lt;/h3&gt;

&lt;p&gt;The system stores both:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;structured records&lt;/li&gt;
    &lt;li&gt;unstructured assets&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is where digital twins live.&lt;/p&gt;

&lt;h3&gt;Indexing &amp;amp; Search&lt;/h3&gt;

&lt;p&gt;Data becomes usable.&lt;/p&gt;

&lt;p&gt;Indexes, embeddings, and search systems allow retrieval and exploration.&lt;/p&gt;

&lt;h3&gt;Applications&lt;/h3&gt;

&lt;p&gt;Finally, systems are built on top of the data:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;dashboards&lt;/li&gt;
    &lt;li&gt;analytics&lt;/li&gt;
    &lt;li&gt;automation&lt;/li&gt;
    &lt;li&gt;AI systems&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Recognizing Systems&lt;/h2&gt;

&lt;p&gt;One of the more interesting outcomes of the Quarry project is how quickly the pattern became recognizable.&lt;/p&gt;

&lt;p&gt;Once you see it, it’s hard to miss.&lt;/p&gt;

&lt;p&gt;Manufacturing systems follow this structure.&lt;/p&gt;

&lt;p&gt;Archival systems follow this structure.&lt;/p&gt;

&lt;p&gt;Many modern AI systems follow this structure.&lt;/p&gt;

&lt;p&gt;Even systems designed to analyze motion or sensor data follow this structure.&lt;/p&gt;

&lt;p&gt;Different inputs.&lt;/p&gt;

&lt;p&gt;Same architecture.&lt;/p&gt;

&lt;h2&gt;Systems Thinking&lt;/h2&gt;

&lt;p&gt;The biggest shift in perspective comes when you stop thinking about individual objects and start thinking about the system as a whole.&lt;/p&gt;

&lt;p&gt;Instead of asking:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;&lt;em&gt;How do we catalog this rock?&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You start asking:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;&lt;em&gt;How does the system handle many objects over time?&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This change in perspective leads to different kinds of decisions:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;how pipelines are structured&lt;/li&gt;
    &lt;li&gt;how data flows through the system&lt;/li&gt;
    &lt;li&gt;how failures are handled&lt;/li&gt;
    &lt;li&gt;how the system evolves&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At that point, the problem is no longer about objects.&lt;/p&gt;

&lt;p&gt;It’s about systems.&lt;/p&gt;

&lt;h2&gt;A Small Experiment&lt;/h2&gt;

&lt;p&gt;The &lt;strong&gt;Backyard Quarry&lt;/strong&gt; began as a small experiment.&lt;/p&gt;

&lt;p&gt;A dataset that happened to be available.&lt;/p&gt;

&lt;p&gt;A problem that seemed simple.&lt;/p&gt;

&lt;p&gt;But small experiments are often useful.&lt;/p&gt;

&lt;p&gt;They allow ideas to emerge in a manageable setting.&lt;/p&gt;

&lt;p&gt;The same architectural questions that appear in large organizations also appear here — just at a smaller scale.&lt;/p&gt;

&lt;h2&gt;The Real Takeaway&lt;/h2&gt;

&lt;p&gt;The real lesson from the Quarry isn’t about rocks.&lt;/p&gt;

&lt;p&gt;It’s about recognizing patterns.&lt;/p&gt;

&lt;p&gt;Modern systems often share common structures.&lt;/p&gt;

&lt;p&gt;Once you understand those structures, it becomes easier to design new systems.&lt;/p&gt;

&lt;p&gt;You start to see the same ideas appearing in different places.&lt;/p&gt;

&lt;p&gt;And that recognition becomes a powerful tool.&lt;/p&gt;

&lt;h2&gt;One Last Observation&lt;/h2&gt;

&lt;p&gt;Some engineering lessons come from large projects.&lt;/p&gt;

&lt;p&gt;Others come from experiments.&lt;/p&gt;

&lt;p&gt;Occasionally, they come from a pile of rocks in the backyard.&lt;/p&gt;

&lt;p&gt;And if you happen to need a carefully documented specimen from the &lt;strong&gt;Backyard Quarry&lt;/strong&gt;, inventory may still be available.&lt;/p&gt;

&lt;p&gt;Shipping, however, remains an unsolved optimization problem.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Rock Quarry Series
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.kenwalger.com/blog/software-engineering/the-backyard-quarry-turning-rocks-into-data" rel="noopener noreferrer"&gt;Turning Rocks into Data&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.kenwalger.com/blog/data-engineering/designing-a-schema-for-physical-objects" rel="noopener noreferrer"&gt;Designing a Schema for Physical Objects&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.kenwalger.com/blog/data-engineering/capturing-physical-objects-data-pipeline" rel="noopener noreferrer"&gt;Capturing the Physical World&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.kenwalger.com/blog/data-engineering/searching-physical-objects-data-indexing" rel="noopener noreferrer"&gt;Searching a Pile of Rocks&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.kenwalger.com/blog/data-engineering/digital-twins-physical-objects-explained" rel="noopener noreferrer"&gt;Digital Twins for Physical Objects&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.kenwalger.com/blog/data-engineering/scaling-data-pipelines-physical-objects" rel="noopener noreferrer"&gt;Scaling the Quarry&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.kenwalger.com/blog/data-engineering/system-design-patterns-real-world-data-platforms" rel="noopener noreferrer"&gt;Systems Beyond the Backyard&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.kenwalger.com/blog/data-engineering/from-rocks-to-reality-system-design-patterns" rel="noopener noreferrer"&gt;From Rocks to Reality&lt;/a&gt; - &lt;em&gt;This Post&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>architecture</category>
      <category>dataengineering</category>
      <category>sideprojects</category>
      <category>systemdesign</category>
    </item>
    <item>
      <title>The Guardian: Human-in-the-Loop AI Governance</title>
      <dc:creator>Ken W Alger</dc:creator>
      <pubDate>Thu, 30 Apr 2026 16:24:47 +0000</pubDate>
      <link>https://forem.com/kenwalger/the-guardian-human-in-the-loop-ai-governance-jgf</link>
      <guid>https://forem.com/kenwalger/the-guardian-human-in-the-loop-ai-governance-jgf</guid>
      <description>&lt;h1&gt;The Guardian: Human-in-the-Loop AI Governance&lt;/h1&gt;

&lt;p&gt;We’ve built a system that is &lt;a href="https://www.kenwalger.com/blog/ai/ai-agent-reliability-llm-as-a-judge" rel="noopener noreferrer"&gt;Reliable&lt;/a&gt; and &lt;a href="https://www.kenwalger.com/blog/ai/the-accountant-optimizing-ai-costs-with-semantic-routing" rel="noopener noreferrer"&gt;Affordable&lt;/a&gt;. Our Forensic Team is accurate, and &lt;a href="https://www.kenwalger.com/blog/ai/the-accountant-optimizing-ai-costs-with-semantic-routing" rel="noopener noreferrer"&gt;The Accountant&lt;/a&gt; ensures we aren't wasting our cognitive budget.&lt;/p&gt;

&lt;p&gt;But in the enterprise, "capable" is not enough. For high-stakes decisions—like a $50k rare book audit or a compliance check—fully autonomous AI is a &lt;strong&gt;Liability&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Today, we introduce &lt;strong&gt;The Guardian&lt;/strong&gt;: The final phase of our Production-Grade AI trilogy. We are implementing a standardized Human-in-the-Loop (HITL) checkpoint, moving from "Autonomous Agents" to "Augmented Intelligence."&lt;/p&gt;

&lt;h2&gt;1. &lt;strong&gt;The Autonomous Trap: Confident Hallucination&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In the first post of this series, &lt;a href="https://www.kenwalger.com/blog/ai/ai-agent-reliability-llm-as-a-judge" rel="noopener noreferrer"&gt;The Judge&lt;/a&gt; proved that even the best models can confidently hallucinate. In a forensic audit, an agent might identify a water damage pattern and declare: &lt;strong&gt;"CRITICAL: High probability of modern forgery."&lt;/strong&gt; If that finding is wrong, the reputational and financial damage is severe. The problem isn't the AI’s capability; it’s the &lt;em&gt;lack of authorization&lt;/em&gt;. The agent is a &lt;em&gt;worker&lt;/em&gt;, not a &lt;em&gt;partner&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;2. &lt;strong&gt;Implementing the "Governance Gate"&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;We need a way to "brake" the agent’s flow when it finds a high-severity issue. We’ve added the &lt;code&gt;request_human_signature&lt;/code&gt; tool to our &lt;a href="https://github.com/kenwalger/mcp-forensic-analyzer" rel="noopener noreferrer"&gt;Forensic Analyzer MCP server project&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In &lt;code&gt;orchestrator.py&lt;/code&gt;, we updated the logic. When the Analyst flags a "HIGH" severity discrepancy, the system performs a specialized handshake:&lt;/p&gt;

&lt;ol&gt;
    &lt;li&gt;
&lt;strong&gt;Stateful Pause:&lt;/strong&gt; The Python orchestrator interrupts the agent workflow.&lt;/li&gt;
    &lt;li&gt;
&lt;strong&gt;Authorization Prompt:&lt;/strong&gt; It presents the evidence to the user via a CLI prompt.&lt;/li&gt;
    &lt;li&gt;
&lt;strong&gt;Cryptographic Signature:&lt;/strong&gt; The user must authorize the finding before it’s committed to the final report.&lt;/li&gt;
&lt;/ol&gt;

&lt;pre&gt;&lt;code&gt;# The Guardian's "Nuclear Key" moment in orchestrator.py
def _apply_guardian_handshake(analyst_result: dict) -&amp;gt; tuple[dict, list[dict]]:
    """
    Human-in-the-Loop: if Analyst has HIGH discrepancies, prompt for authorization.
    """
    disputed: list[dict] = []
    data = analyst_result.get("data") or {}
    disc = data.get("discrepancies", [])

    # Filter for the "High Stakes" findings
    high_disc = [d for d in disc if (d.get("severity") or "").upper() == "HIGH"]

    for d in high_disc:
        summary = f"[{d.get('severity')}] {d.get('field')}: {d.get('expected')} vs {d.get('observed')}"
        print(f"\n  Guardian: HIGH severity finding — {summary}")

        # THE STATEFUL PAUSE: The orchestrator stops and waits for a human
        answer = input("  Do you authorize this forensic finding? (yes/no): ").strip().lower()

        if answer != "yes":
            # Escalation: If not authorized, it's flagged as 'DISPUTED_BY_HUMAN'
            disputed.append({**d, "status": "DISPUTED_BY_HUMAN"})

    return analyst_result, disputed
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;By requiring a human to type 'yes', we are moving from Autonomous Assumption to Authorized Augmentation in the following ways:&lt;/p&gt;

&lt;ol&gt;
    &lt;li&gt;
&lt;strong&gt;Severity-Based Intervention:&lt;/strong&gt; We don't interrupt the user for every "Low" or "Medium" variance. We only trigger the Guardian for high-severity findings—those that carry legal or financial liability. This preserves the UX flow while maintaining safety.&lt;/li&gt;
    &lt;li&gt;
&lt;strong&gt;The 'Disputed' State:&lt;/strong&gt; Notice that a "No" from the human doesn't just delete the finding. It moves it to a specialized "Requires Further Investigation" section of the report. This ensures that the AI’s observation is preserved but clearly labeled as unauthorized.&lt;/li&gt;
    &lt;li&gt;
&lt;strong&gt;Non-Interactive Fallback:&lt;/strong&gt; The code includes a check for &lt;code&gt;EOFError&lt;/code&gt; (line 507). If the system is running in a non-interactive environment like a CI/CD pipeline, it defaults to "No" (Dispute) for safety. Never default to "Yes" for a high-risk authorization.&lt;/li&gt;
&lt;/ol&gt;
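&lt;p&gt;That fallback deserves a closer look. Here is a hedged, self-contained sketch of the pattern (simplified from the orchestrator logic; the function name is illustrative): wrap the prompt so a missing TTY is treated as an explicit refusal, never an implicit approval:&lt;/p&gt;

```python
def ask_authorization(summary: str) -> bool:
    """Prompt a human for sign-off; fail safe when no interactive input exists."""
    try:
        answer = input(f"Authorize this finding? {summary} (yes/no): ")
    except EOFError:
        # Non-interactive environment (CI/CD, cron): default to 'no'.
        return False
    return answer.strip().lower() == "yes"
```

&lt;p&gt;Anything other than an explicit "yes" is a dispute, which is the only sane default for a high-risk gate.&lt;/p&gt;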

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.kenwalger.com%2Fblog%2Fwp-content%2Fuploads%2F2026%2F04%2Fai-governance-human-in-the-loop-hitl-handshake-logic-520x1024.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.kenwalger.com%2Fblog%2Fwp-content%2Fuploads%2F2026%2F04%2Fai-governance-human-in-the-loop-hitl-handshake-logic-520x1024.png" alt="Architectural diagram of a human-in-the-loop AI governance system called The Guardian. An agent workflow processes a task. When it detects a high-severity finding, it pauses and performs a stateful 'Authorization Handshake' with a Human Guardian. The human must sign or reject the finding before it proceeds to finalize the output report." width="520" height="1024"&gt;&lt;/a&gt; &lt;/p&gt;
&lt;p&gt;&lt;em&gt;The Guardian Architecture—Moving from Autonomous Agents to Stateful, Authorized Human-AI Augmentation.&lt;/em&gt;&lt;/p&gt;



&lt;h2&gt;3. &lt;strong&gt;Beyond the CLI: The Enterprise Handshake&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This reference implementation uses a CLI &lt;code&gt;input()&lt;/code&gt; prompt for simplicity. However, the MCP tool is &lt;em&gt;standardized&lt;/em&gt;. In a production environment, this tool wouldn't pause a Python script; it would:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;Trigger a Slack/Teams alert to a senior auditor.&lt;/li&gt;
    &lt;li&gt;Open a Jira ticket for manual review.&lt;/li&gt;
    &lt;li&gt;Request a WebAuthn (biometric) signature in a web dashboard.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Summary: Building the Sovereign AI Stack&lt;/h2&gt;

&lt;p&gt;Across this series, we’ve moved from basic orchestration to a &lt;strong&gt;Production-Grade AI Mesh&lt;/strong&gt;. We’ve proven that we can build systems that are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Reliable:&lt;/strong&gt; Audited by &lt;a href="https://www.kenwalger.com/blog/ai/ai-agent-reliability-llm-as-a-judge/" rel="noopener noreferrer"&gt;The Judge&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sustainable:&lt;/strong&gt; Optimized by &lt;a href="https://www.kenwalger.com/blog/ai/the-accountant-optimizing-ai-costs-with-semantic-routing/" rel="noopener noreferrer"&gt;The Accountant&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Safe:&lt;/strong&gt; Governed by &lt;strong&gt;The Guardian&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;The road to autonomous agents isn't paved with more tokens; it's paved with better guardrails.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;What’s Next?&lt;/h3&gt;

&lt;p&gt;The code for the entire trilogy is available in the &lt;strong&gt;&lt;a href="https://github.com/kenwalger/mcp-forensic-analyzer" rel="noopener noreferrer"&gt;MCP Forensic Analyzer repository&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;I'm currently working on &lt;strong&gt;Phase 3: The Sovereign Vault&lt;/strong&gt;, where we will explore &lt;strong&gt;Local Multimodal Vision&lt;/strong&gt; (processing artifact images without cloud egress) and &lt;strong&gt;PII Redaction&lt;/strong&gt; to protect proprietary "Golden Data."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Have questions about implementing these patterns in your own enterprise?&lt;/strong&gt; Connect with me on &lt;a href="https://www.linkedin.com/in/kenwalger/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; or follow the blog for the next series.&lt;/p&gt;

&lt;h3&gt;The Production-Grade AI Series (Complete)&lt;/h3&gt;

&lt;ul&gt;
    &lt;li&gt;Post 1: &lt;a href="https://www.kenwalger.com/blog/ai/ai-agent-reliability-llm-as-a-judge/" rel="noopener noreferrer"&gt;The Judge Agent: Who Audits the Auditors?&lt;/a&gt; (Reliability)&lt;/li&gt;
    &lt;li&gt;Post 2: &lt;a href="https://www.kenwalger.com/blog/ai/the-accountant-optimizing-ai-costs-with-semantic-routing/" rel="noopener noreferrer"&gt;The Accountant: Cognitive Budgeting &amp;amp; Model Routing&lt;/a&gt; (Sustainability)&lt;/li&gt;
    &lt;li&gt;Post 3: The Guardian: Human-in-the-Loop Governance (Safety)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Looking for the foundation? Check out my previous series: &lt;a href="https://www.kenwalger.com/blog/ai/mcp-usb-c-moment-ai-architecture/" rel="noopener noreferrer"&gt;The Zero-Glue AI Mesh with MCP&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>governance</category>
      <category>safety</category>
      <category>python</category>
    </item>
    <item>
      <title>What I’ve Been Building: Systems, AI, and Real-World Data</title>
      <dc:creator>Ken W Alger</dc:creator>
      <pubDate>Wed, 29 Apr 2026 15:24:06 +0000</pubDate>
      <link>https://forem.com/kenwalger/what-ive-been-building-systems-ai-and-real-world-data-426a</link>
      <guid>https://forem.com/kenwalger/what-ive-been-building-systems-ai-and-real-world-data-426a</guid>
      <description>&lt;p&gt;Over the past several weeks, I’ve been spending a lot of time thinking about systems.&lt;/p&gt;

&lt;p&gt;Some of that thinking has taken the form of writing.&lt;/p&gt;

&lt;p&gt;If you’ve come across any of my recent posts, they might seem like they cover very different topics:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;cataloging rocks in a backyard&lt;/li&gt;
    &lt;li&gt;building AI systems using MCP&lt;/li&gt;
    &lt;li&gt;working with documents, images, and real-world data&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At first glance, they don’t appear to have much in common.&lt;/p&gt;

&lt;p&gt;But they’re all exploring the same underlying idea.&lt;/p&gt;

&lt;h2&gt;The Common Thread&lt;/h2&gt;

&lt;p&gt;Across all of these posts, the focus has been on a specific kind of problem:&lt;/p&gt;

&lt;blockquote&gt;How do we turn messy, real-world inputs into structured, usable systems?&lt;/blockquote&gt;

&lt;p&gt;That problem shows up in many different forms.&lt;/p&gt;

&lt;p&gt;Sometimes the input is physical:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;objects&lt;/li&gt;
    &lt;li&gt;artifacts&lt;/li&gt;
    &lt;li&gt;environments&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Sometimes it’s digital:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;documents&lt;/li&gt;
    &lt;li&gt;images&lt;/li&gt;
    &lt;li&gt;logs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Sometimes it’s dynamic:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;motion&lt;/li&gt;
    &lt;li&gt;behavior&lt;/li&gt;
    &lt;li&gt;sensor data&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But the challenge is the same.&lt;/p&gt;

&lt;p&gt;The input is unstructured.&lt;/p&gt;

&lt;p&gt;The system needs structure.&lt;/p&gt;

&lt;h2&gt;The Backyard Quarry&lt;/h2&gt;

&lt;p&gt;One way I explored this idea was through a small project I called the &lt;strong&gt;Backyard Quarry&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It started with a simple observation:&lt;/p&gt;

&lt;p&gt;There are a lot of rocks in the yard.&lt;/p&gt;

&lt;p&gt;From there, the problem evolved into something more interesting:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;how to represent physical objects as data&lt;/li&gt;
    &lt;li&gt;how to capture images and measurements&lt;/li&gt;
    &lt;li&gt;how to build pipelines around that data&lt;/li&gt;
    &lt;li&gt;how to search and organize it&lt;/li&gt;
    &lt;li&gt;how to think about digital twins&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What began as a small experiment became a way to explore system design in a constrained, tangible setting.&lt;/p&gt;

&lt;h2&gt;MCP and AI Systems&lt;/h2&gt;

&lt;p&gt;In parallel, I’ve been writing about building AI systems using MCP.&lt;/p&gt;

&lt;p&gt;On the surface, this looks very different.&lt;/p&gt;

&lt;p&gt;Instead of rocks, the inputs are:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;documents&lt;/li&gt;
    &lt;li&gt;APIs&lt;/li&gt;
    &lt;li&gt;models&lt;/li&gt;
    &lt;li&gt;agent workflows&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But the structure is familiar.&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;inputs are ingested&lt;/li&gt;
    &lt;li&gt;processed&lt;/li&gt;
    &lt;li&gt;transformed&lt;/li&gt;
    &lt;li&gt;routed&lt;/li&gt;
    &lt;li&gt;used by applications&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The system still needs to handle:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;variability&lt;/li&gt;
    &lt;li&gt;scale&lt;/li&gt;
    &lt;li&gt;imperfect data&lt;/li&gt;
    &lt;li&gt;orchestration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Different inputs.&lt;/p&gt;

&lt;p&gt;Same patterns.&lt;/p&gt;

&lt;h2&gt;From Objects to Systems&lt;/h2&gt;

&lt;p&gt;One of the more useful realizations in working through these ideas is this:&lt;/p&gt;

&lt;blockquote&gt;The problem is rarely about the individual object.
It’s about the system that handles many objects over time.&lt;/blockquote&gt;

&lt;p&gt;Whether the object is:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;a rock&lt;/li&gt;
    &lt;li&gt;a document&lt;/li&gt;
    &lt;li&gt;a sensor reading&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The questions become:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;how is it represented?&lt;/li&gt;
    &lt;li&gt;how does it enter the system?&lt;/li&gt;
    &lt;li&gt;how is it transformed?&lt;/li&gt;
    &lt;li&gt;how is it stored?&lt;/li&gt;
    &lt;li&gt;how is it retrieved?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are system-level questions.&lt;/p&gt;

&lt;h2&gt;A Shared Architecture&lt;/h2&gt;

&lt;p&gt;Across these different domains, a common architecture begins to emerge.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.kenwalger.com%2Fblog%2Fwp-content%2Fuploads%2F2026%2F04%2Freal-world-data-to-system-architecture-diagram-435x1024.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.kenwalger.com%2Fblog%2Fwp-content%2Fuploads%2F2026%2F04%2Freal-world-data-to-system-architecture-diagram-435x1024.png" alt="Diagram showing how raw inputs are captured, processed, structured, indexed, and used by applications in a data system." width="435" height="1024"&gt;&lt;/a&gt; &lt;/p&gt;
&lt;p&gt;&lt;em&gt;A common pattern for transforming real-world inputs into usable systems.&lt;/em&gt;&lt;/p&gt;



&lt;p&gt;The labels change depending on the domain.&lt;/p&gt;

&lt;p&gt;But the structure remains consistent.&lt;/p&gt;

&lt;h2&gt;Why This Matters&lt;/h2&gt;

&lt;p&gt;Understanding this pattern makes it easier to approach new problems.&lt;/p&gt;

&lt;p&gt;Instead of starting from scratch each time, you can ask:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;Where does the data come from?&lt;/li&gt;
    &lt;li&gt;How does it enter the system?&lt;/li&gt;
    &lt;li&gt;What transformations are required?&lt;/li&gt;
    &lt;li&gt;How will it be used?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This reduces complexity.&lt;/p&gt;

&lt;p&gt;It also makes systems more predictable.&lt;/p&gt;

&lt;h2&gt;What I’m Interested In&lt;/h2&gt;

&lt;p&gt;Going forward, I’m particularly interested in systems that sit at the boundary between:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;the physical world and digital systems&lt;/li&gt;
    &lt;li&gt;unstructured inputs and structured data&lt;/li&gt;
    &lt;li&gt;human workflows and automated processes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That includes areas like:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;digital archiving&lt;/li&gt;
    &lt;li&gt;photogrammetry and 3D capture&lt;/li&gt;
    &lt;li&gt;AI-assisted analysis&lt;/li&gt;
    &lt;li&gt;systems that track objects or behavior over time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These problems are messy.&lt;/p&gt;

&lt;p&gt;Which is part of what makes them interesting.&lt;/p&gt;

&lt;h2&gt;A Continuing Exploration&lt;/h2&gt;

&lt;p&gt;The posts I’ve been writing are not meant to be definitive.&lt;/p&gt;

&lt;p&gt;They’re part of an ongoing exploration.&lt;/p&gt;

&lt;p&gt;A way to think through problems in public.&lt;/p&gt;

&lt;p&gt;And occasionally, a way to use a slightly unusual example — like a pile of rocks — to make broader ideas easier to see.&lt;/p&gt;

&lt;h2&gt;If You’re Interested&lt;/h2&gt;

&lt;p&gt;If any of this resonates, you might find these useful:&lt;/p&gt;

&lt;h3&gt;The Backyard Quarry Series&lt;/h3&gt;

&lt;p&gt;A systems-focused look at modeling and working with physical objects starting with &lt;a href="https://www.kenwalger.com/blog/software-engineering/the-backyard-quarry-turning-rocks-into-data" rel="noopener noreferrer"&gt;Turning Rocks Into Data&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;MCP and AI Systems&lt;/h3&gt;

&lt;p&gt;A technical exploration of building agent-based systems and data pipelines. I'd suggest starting with &lt;a href="https://www.kenwalger.com/blog/ai/mcp-usb-c-moment-ai-architecture" rel="noopener noreferrer"&gt;The End of Glue Code: Why MCP is the USB-C Moment for AI Systems&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;More to come.&lt;/p&gt;

&lt;p&gt;And if nothing else, it turns out that even a backyard can be a good place to think about system design.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>data</category>
      <category>mcp</category>
      <category>systemdesign</category>
    </item>
    <item>
      <title>The Backyard Quarry, Part 7: Systems Beyond the Backyard</title>
      <dc:creator>Ken W Alger</dc:creator>
      <pubDate>Tue, 28 Apr 2026 16:43:12 +0000</pubDate>
      <link>https://forem.com/kenwalger/the-backyard-quarry-part-7-systems-beyond-the-backyard-4en0</link>
      <guid>https://forem.com/kenwalger/the-backyard-quarry-part-7-systems-beyond-the-backyard-4en0</guid>
      <description>&lt;p&gt;By now, the Backyard Quarry system has grown beyond its original intent.&lt;/p&gt;

&lt;p&gt;We started with a pile of rocks.&lt;/p&gt;

&lt;p&gt;We ended up with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a schema&lt;/li&gt;
&lt;li&gt;a capture process&lt;/li&gt;
&lt;li&gt;a processing pipeline&lt;/li&gt;
&lt;li&gt;storage and indexing&lt;/li&gt;
&lt;li&gt;digital representations of physical objects&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Along the way, something interesting happened.&lt;/p&gt;

&lt;p&gt;The problems stopped feeling unique.&lt;/p&gt;

&lt;h2&gt;Recognizing the Pattern&lt;/h2&gt;

&lt;p&gt;At first, the Quarry felt like a small, slightly absurd project.&lt;/p&gt;

&lt;p&gt;But the more pieces came together, the more familiar it became.&lt;/p&gt;

&lt;p&gt;The same structure appeared again and again:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;capture data from the physical world&lt;/li&gt;
&lt;li&gt;transform it into structured representations&lt;/li&gt;
&lt;li&gt;store it&lt;/li&gt;
&lt;li&gt;index it&lt;/li&gt;
&lt;li&gt;build systems on top of it&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This isn’t a rock problem.&lt;/p&gt;

&lt;p&gt;It’s a pattern.&lt;/p&gt;

&lt;h2&gt;Where the Pattern Appears&lt;/h2&gt;

&lt;p&gt;Once you start looking for it, you see it everywhere.&lt;/p&gt;

&lt;h3&gt;Manufacturing Systems&lt;/h3&gt;

&lt;p&gt;Physical parts become digital records.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;components are tracked&lt;/li&gt;
&lt;li&gt;condition is monitored&lt;/li&gt;
&lt;li&gt;systems are modeled&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each part has a digital twin.&lt;/p&gt;

&lt;p&gt;The system keeps everything connected.&lt;/p&gt;

&lt;h3&gt;Museums and Archives&lt;/h3&gt;

&lt;p&gt;Artifacts are cataloged and preserved.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;metadata describes objects&lt;/li&gt;
&lt;li&gt;images and scans capture detail&lt;/li&gt;
&lt;li&gt;provenance tracks history&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The goal is the same:&lt;/p&gt;

&lt;p&gt;Turn physical objects into structured, searchable systems.&lt;/p&gt;

&lt;h3&gt;Photogrammetry and 3D Capture&lt;/h3&gt;

&lt;p&gt;Entire environments can be captured and reconstructed.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;objects become meshes&lt;/li&gt;
&lt;li&gt;scenes become models&lt;/li&gt;
&lt;li&gt;real-world geometry becomes data&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is the Quarry pipeline, scaled up.&lt;/p&gt;

&lt;h3&gt;AI and Document Systems&lt;/h3&gt;

&lt;p&gt;Even text-based systems follow the same pattern.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;raw documents are ingested&lt;/li&gt;
&lt;li&gt;processed into structured formats&lt;/li&gt;
&lt;li&gt;indexed for retrieval&lt;/li&gt;
&lt;li&gt;used by applications&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The inputs are different.&lt;/p&gt;

&lt;p&gt;The structure is familiar.&lt;/p&gt;

&lt;h3&gt;Healthcare and Motion&lt;/h3&gt;

&lt;p&gt;Human movement becomes data.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;sensors capture motion&lt;/li&gt;
&lt;li&gt;signals are processed&lt;/li&gt;
&lt;li&gt;patterns are analyzed&lt;/li&gt;
&lt;li&gt;systems track change over time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is where the idea of digital twins becomes more dynamic.&lt;/p&gt;

&lt;p&gt;Not just objects.&lt;/p&gt;

&lt;p&gt;But behavior.&lt;/p&gt;

&lt;h2&gt;The Common Structure&lt;/h2&gt;

&lt;p&gt;Across all of these domains, the same core system emerges.&lt;/p&gt;

&lt;p&gt;It doesn’t matter whether the input is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a rock&lt;/li&gt;
&lt;li&gt;a machine part&lt;/li&gt;
&lt;li&gt;an artifact&lt;/li&gt;
&lt;li&gt;a document&lt;/li&gt;
&lt;li&gt;a human movement pattern&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The architecture is remarkably consistent.&lt;/p&gt;

&lt;p&gt;Capture.&lt;/p&gt;

&lt;p&gt;Process.&lt;/p&gt;

&lt;p&gt;Store.&lt;/p&gt;

&lt;p&gt;Index.&lt;/p&gt;

&lt;p&gt;Use.&lt;/p&gt;
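&lt;p&gt;Those five stages compose naturally. A minimal sketch (the stage functions and data shapes here are illustrative, not from any of the Quarry code):&lt;/p&gt;

```python
# The five-stage pattern sketched as composable steps: capture, process,
# store, index, use. Names and data shapes are illustrative.

def capture(source):
    # Raw input from the world, kept verbatim.
    return {"raw": source}

def process(record):
    # Transform into a structured, normalized representation.
    record["structured"] = record["raw"].strip().lower()
    return record

STORE, INDEX = [], {}

def store(record):
    # Persist the full record.
    STORE.append(record)
    return record

def index(record):
    # Make the record findable by its structured form.
    INDEX[record["structured"]] = record
    return record

def use(key):
    # Applications query the index, never the raw input.
    return INDEX.get(key)

index(store(process(capture("  Basalt  "))))
print(use("basalt")["structured"])  # -> basalt
```

&lt;p&gt;Swap in a rock, a machine part, or a document for the input and the skeleton doesn't change; only the bodies of the stage functions do.&lt;/p&gt;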

&lt;h2&gt;The Value of Abstraction&lt;/h2&gt;

&lt;p&gt;One of the more useful realizations from the Quarry project is this:&lt;/p&gt;

&lt;blockquote&gt;
  The value isn’t in the specific object.
  It’s in the system that handles it.
&lt;/blockquote&gt;

&lt;p&gt;Once you understand the pattern, you can apply it in different contexts.&lt;/p&gt;

&lt;p&gt;The details change.&lt;/p&gt;

&lt;p&gt;The structure remains.&lt;/p&gt;

&lt;h2&gt;Systems, Not Features&lt;/h2&gt;

&lt;p&gt;At a certain point, it becomes less useful to think in terms of features.&lt;/p&gt;

&lt;p&gt;Instead, the focus shifts to systems.&lt;/p&gt;

&lt;p&gt;Questions change.&lt;/p&gt;

&lt;p&gt;Instead of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;How do we store this object?&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;How do we search this dataset?&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You start asking:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;How does data move through the system?&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Where are the bottlenecks?&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;How do we handle growth?&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;How do we handle imperfect inputs?&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are system-level questions.&lt;/p&gt;

&lt;h2&gt;The Real Takeaway&lt;/h2&gt;

&lt;p&gt;The &lt;strong&gt;Backyard Quarry&lt;/strong&gt; started as a simple, somewhat comical experiment.&lt;/p&gt;

&lt;p&gt;But it revealed something broader.&lt;/p&gt;

&lt;p&gt;Many modern systems are built on the same foundation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;transforming real-world inputs into structured data&lt;/li&gt;
&lt;li&gt;building pipelines around that transformation&lt;/li&gt;
&lt;li&gt;enabling search, analysis, and interaction&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The objects change.&lt;/p&gt;

&lt;p&gt;The pattern doesn’t.&lt;/p&gt;

&lt;h2&gt;Looking Back&lt;/h2&gt;

&lt;p&gt;It’s a little surprising how far the idea traveled.&lt;/p&gt;

&lt;p&gt;From:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a pile of rocks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;data modeling&lt;/li&gt;
&lt;li&gt;ingestion pipelines&lt;/li&gt;
&lt;li&gt;search systems&lt;/li&gt;
&lt;li&gt;digital twins&lt;/li&gt;
&lt;li&gt;scalable architectures&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And now:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;recognizing patterns across industries&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Not bad for something that started in the backyard.&lt;/p&gt;

&lt;h2&gt;What Comes Next&lt;/h2&gt;

&lt;p&gt;There’s one final step.&lt;/p&gt;

&lt;p&gt;So far, we’ve explored:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;how to model objects&lt;/li&gt;
&lt;li&gt;how to capture them&lt;/li&gt;
&lt;li&gt;how to store and search them&lt;/li&gt;
&lt;li&gt;how systems scale&lt;/li&gt;
&lt;li&gt;how patterns repeat&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the final post, we’ll bring everything together.&lt;/p&gt;

&lt;p&gt;A single view of the system.&lt;/p&gt;

&lt;p&gt;A way to think about it as a whole.&lt;/p&gt;

&lt;p&gt;Because once you can see the full structure, the pattern becomes difficult to miss.&lt;/p&gt;

&lt;p&gt;And at that point, it becomes clear that the Quarry was never really about rocks.&lt;/p&gt;

&lt;p&gt;It was about learning to recognize systems.&lt;/p&gt;

&lt;h3&gt;The Backyard Quarry Series&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.kenwalger.com/blog/software-engineering/the-backyard-quarry-turning-rocks-into-data" rel="noopener noreferrer"&gt;Turning Rocks into Data&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.kenwalger.com/blog/data-engineering/designing-a-schema-for-physical-objects" rel="noopener noreferrer"&gt;Designing a Schema for Physical Objects&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.kenwalger.com/blog/data-engineering/capturing-physical-objects-data-pipeline" rel="noopener noreferrer"&gt;Capturing the Physical World&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.kenwalger.com/blog/data-engineering/searching-physical-objects-data-indexing" rel="noopener noreferrer"&gt;Searching a Pile of Rocks&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.kenwalger.com/blog/data-engineering/digital-twins-physical-objects-explained" rel="noopener noreferrer"&gt;Digital Twins for Physical Objects&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.kenwalger.com/blog/data-engineering/scaling-data-pipelines-physical-objects" rel="noopener noreferrer"&gt;Scaling the Quarry&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.kenwalger.com/blog/data-engineering/system-design-patterns-real-world-data-platforms" rel="noopener noreferrer"&gt;Systems Beyond the Backyard&lt;/a&gt; - &lt;em&gt;This Post&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.kenwalger.com/blog/data-engineering/from-rocks-to-reality-system-design-patterns" rel="noopener noreferrer"&gt;From Rocks to Reality&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>architecture</category>
      <category>database</category>
      <category>dataengineering</category>
      <category>systemdesign</category>
    </item>
    <item>
      <title>The Accountant: Optimizing AI Costs with Semantic Routing</title>
      <dc:creator>Ken W Alger</dc:creator>
      <pubDate>Thu, 23 Apr 2026 16:25:32 +0000</pubDate>
      <link>https://forem.com/kenwalger/the-accountant-optimizing-ai-costs-with-semantic-routing-mi2</link>
      <guid>https://forem.com/kenwalger/the-accountant-optimizing-ai-costs-with-semantic-routing-mi2</guid>
      <description>&lt;p&gt;We’ve solved the Reliability problem with &lt;a href="https://www.kenwalger.com/blog/ai/ai-agent-reliability-llm-as-a-judge" rel="noopener noreferrer"&gt;The Judge&lt;/a&gt;. We have a system that can scientifically prove whether our Forensic Team is accurate. But there’s a new problem that keeps Directors and CFOs up at night: &lt;strong&gt;Sustainability&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In an enterprise environment, using a massive, high-reasoning model (like Claude 3.5 or GPT-4o) for every single bibliography lookup is a "Cognitive Budget" disaster. It’s like hiring a Senior Architect to fix a broken link.&lt;/p&gt;

&lt;p&gt;Today, we introduce &lt;strong&gt;The Accountant&lt;/strong&gt;: A Semantic Router that classifies task complexity and routes requests to the cheapest model capable of passing the Judge's rubric.&lt;/p&gt;

&lt;h2&gt;1. &lt;strong&gt;The Concept of "Tiered Intelligence"&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Not all forensic tasks require the same level of "gray matter." To scale effectively, we must categorize our workload:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;
&lt;strong&gt;LEVEL 1 (Operational):&lt;/strong&gt; "Find the standard page count for the 1925 edition of Gatsby." This is a lookup and retrieval task. Local SLMs (Small Language Models) like Phi-4 or Llama 3.2 excel here.&lt;/li&gt;
    &lt;li&gt;
&lt;strong&gt;LEVEL 2 (Forensic):&lt;/strong&gt; "Compare the binding grain and typography inconsistencies between two suspected forgeries." This requires high-dimensional analysis and deep reasoning. This is a job for the Cloud.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.kenwalger.com%2Fblog%2Fwp-content%2Fuploads%2F2026%2F04%2Fai-agent-semantic-routing-tiered-intelligence-architecture-scaled.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.kenwalger.com%2Fblog%2Fwp-content%2Fuploads%2F2026%2F04%2Fai-agent-semantic-routing-tiered-intelligence-architecture-scaled.png" alt="Architectural diagram of a Semantic Router called The Accountant. A user request enters the router, which classifies it into Level 1 (Simple/Metadata) or Level 2 (Complex Forensic). Level 1 is routed to a local Tier 1 SLM like Phi-4 or Llama 3.2, while Level 2 is routed to a Tier 2 Frontier Cloud model like Claude 3.5. Both paths converge to produce a final Audit Report." width="800" height="199"&gt;&lt;/a&gt; &lt;/p&gt;
&lt;p&gt;&lt;em&gt;The Semantic Router architecture: implementing Tiered Intelligence to optimize cognitive budget and reduce inference costs.&lt;/em&gt;&lt;/p&gt;



&lt;h2&gt;2. &lt;strong&gt;Implementing the Router (The Gatekeeper Pattern)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;We've added &lt;code&gt;router.py&lt;/code&gt; to our &lt;a href="https://github.com/kenwalger/mcp-forensic-analyzer" rel="noopener noreferrer"&gt;repository&lt;/a&gt;. The logic acts as a gatekeeper.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Classification:&lt;/strong&gt; A lightweight model (the Accountant) reviews the user's query against our &lt;code&gt;config/prompts.yaml&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Economic Decision:&lt;/strong&gt; If the query is "Level 1", we trigger the &lt;code&gt;ollama&lt;/code&gt; provider. If it's "Level 2," we escalate to the &lt;code&gt;anthropic&lt;/code&gt; provider.&lt;/li&gt;
&lt;/ol&gt;

&lt;pre&gt;&lt;code&gt;# The Accountant's Decision Engine in router.py
level = await classify_query(query)
provider = get_provider_for_level(level)

if level == "LEVEL_1":
    print("Accountant Decision: LEVEL_1 - Routing to Local SLM to save budget")
else:
    print("Accountant Decision: LEVEL_2 - Routing to High-Reasoning Cloud Model")
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;By defaulting to &lt;strong&gt;LEVEL_2&lt;/strong&gt; if classification fails, we ensure that we never sacrifice accuracy for cost: we only save money when we are certain a task is simple.&lt;/p&gt;
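&lt;p&gt;That fail-safe default can be sketched as follows. This is a minimal illustration, not the repository's implementation: &lt;code&gt;classify_query&lt;/code&gt; here is a toy stand-in for the real classifier in &lt;code&gt;router.py&lt;/code&gt;, and the keyword list is invented for the example.&lt;/p&gt;

```python
# Minimal sketch of the Accountant's gatekeeper with a fail-safe default.
# Names are illustrative; the real logic lives in router.py.

PROVIDERS = {"LEVEL_1": "ollama", "LEVEL_2": "anthropic"}

def classify_query(query: str) -> str:
    """Toy classifier: lookup-style queries are LEVEL_1, everything else LEVEL_2."""
    lookup_words = ("find", "lookup", "format", "page count")
    if any(word in query.lower() for word in lookup_words):
        return "LEVEL_1"
    return "LEVEL_2"

def route(query: str) -> str:
    # If classification fails for any reason, default to the high-reasoning
    # tier: we never trade accuracy for cost on an uncertain call.
    try:
        level = classify_query(query)
    except Exception:
        level = "LEVEL_2"
    return PROVIDERS.get(level, "anthropic")

print(route("Find the standard page count for the 1925 edition of Gatsby"))
# -> ollama
```

&lt;p&gt;The important property is that every failure path converges on the expensive-but-reliable provider, so the router can only ever err on the side of quality.&lt;/p&gt;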

&lt;h2&gt;3. &lt;strong&gt;Projecting the ROI with The Judge&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;While we built the Accountant (the router), we haven't yet run a full-scale economic audit in this repository. However, the architecture is designed to scientifically measure this trade-off using the Judge Agent (from our last post).&lt;/p&gt;

&lt;p&gt;In an enterprise environment, a Director would use this framework to benchmark a representative sample of historical queries. A typical analysis for tiered intelligence systems shows that the vast majority of "forensic" requests are actually simple metadata lookups. By routing those to a local SLM (Phi-4 or Llama 3.2), we can achieve comparable reliability scores to a frontier cloud model while zeroing out the marginal cost of those specific tokens.&lt;/p&gt;

&lt;h3&gt;The Theoretical Savings (100k Calls/Month):&lt;/h3&gt;

&lt;ul&gt;
    &lt;li&gt;Current Cost (Frontier Cloud for 100% of tasks): &lt;strong&gt;~$7,600/month&lt;/strong&gt;
&lt;/li&gt;
    &lt;li&gt;Projected Cost (90/10 Routed Split): &lt;strong&gt;~$1,800/month&lt;/strong&gt;
&lt;/li&gt;
    &lt;li&gt;
&lt;strong&gt;Total Savings:&lt;/strong&gt; ~76% reduction in inference costs.&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Task Category&lt;/th&gt;
&lt;th&gt;Estimated Volume&lt;/th&gt;
&lt;th&gt;"Status Quo" Cost (Frontier Cloud)&lt;/th&gt;
&lt;th&gt;"Routed" Cost (Accountant/SLM)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Level 1 (Standard Lookup/Formatting)&lt;/td&gt;
&lt;td&gt;90% (90k calls)&lt;/td&gt;
&lt;td&gt;~$4,500&lt;/td&gt;
&lt;td&gt;~$0 (Local/Self-Hosted)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Level 2 (Deep Forensic Analysis)&lt;/td&gt;
&lt;td&gt;10% (10k calls)&lt;/td&gt;
&lt;td&gt;~$3,100&lt;/td&gt;
&lt;td&gt;~$1,800*&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Total Cognitive Budget&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;100%&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;~$7,600&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;~$1,800&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;em&gt;* Note: Level 2 "Routed" costs are lower here because the Accountant ensures only the most complex 10% of tokens hit the high-cost provider, whereas the "Status Quo" assumes a higher average cost across all 100k calls due to the lack of optimization.&lt;/em&gt;&lt;/p&gt;
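&lt;p&gt;The headline number is easy to verify from the table's own estimates:&lt;/p&gt;

```python
# Sanity check on the projected savings, using the table's estimates.
status_quo = 4500 + 3100   # frontier cloud handles all 100k calls
routed = 0 + 1800          # Level 1 goes local (~$0); Level 2 stays in the cloud
savings = 1 - routed / status_quo
print(f"${status_quo} -> ${routed}: {savings:.0%} reduction")
# -> $7600 -> $1800: 76% reduction
```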

&lt;h3&gt;Cognitive Budgeting Insights&lt;/h3&gt;

&lt;p&gt;As a Director, the responsibility is to build Sustainable Intelligence. If 90% of an AI workload can be moved to local infrastructure or cheaper "Flash" models without dropping our reliability score, I'm not just a developer; I'm a profit center. Semantic routing allows us to scale AI horizontally without the cloud bill scaling vertically.&lt;/p&gt;

&lt;h2&gt;🛠️ Step into the Clean-Room&lt;/h2&gt;

&lt;p&gt;The &lt;strong&gt;Accountant&lt;/strong&gt; logic is now live in the repository. You can test the routing logic yourself by running the local orchestrator with the &lt;code&gt;--use-accountant&lt;/code&gt; flag.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Explore the Code:&lt;/strong&gt; &lt;a href="https://github.com/kenwalger/mcp-forensic-analyzer" rel="noopener noreferrer"&gt;MCP Forensic Analyzer on GitHub&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;(If this architecture helps your team justify their AI spend, consider dropping a ⭐ on the repo!)&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;The Production-Grade AI Series&lt;/h3&gt;

&lt;ul&gt;
    &lt;li&gt;
&lt;strong&gt;Post 1:&lt;/strong&gt; &lt;a href="https://www.kenwalger.com/blog/ai/ai-agent-reliability-llm-as-a-judge" rel="noopener noreferrer"&gt;The Judge Agent: Who Audits the Auditors?&lt;/a&gt; (Reliability)&lt;/li&gt;
    &lt;li&gt;
&lt;strong&gt;Post 2: &lt;/strong&gt;The Accountant: Optimizing AI Costs with Semantic Routing (Sustainability) - &lt;em&gt;You're Here&lt;/em&gt;
&lt;/li&gt;
    &lt;li&gt;
&lt;strong&gt;Post 3: &lt;/strong&gt;The Guardian: Human-in-the-Loop Governance (Safety) - &lt;em&gt;Coming Soon&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Looking for the foundation? Check out my previous series: &lt;a href="https://www.kenwalger.com/blog/ai/mcp-usb-c-moment-ai-architecture/" rel="noopener noreferrer"&gt;The Zero-Glue AI Mesh with MCP&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>systemdesign</category>
      <category>ai</category>
      <category>llmrouting</category>
      <category>costoptimization</category>
    </item>
    <item>
      <title>The Backyard Quarry, Part 6: Scaling the Quarry</title>
      <dc:creator>Ken W Alger</dc:creator>
      <pubDate>Tue, 21 Apr 2026 16:59:30 +0000</pubDate>
      <link>https://forem.com/kenwalger/the-backyard-quarry-part-6-scaling-the-quarry-44i2</link>
      <guid>https://forem.com/kenwalger/the-backyard-quarry-part-6-scaling-the-quarry-44i2</guid>
      <description>&lt;p&gt;So far, the Backyard Quarry system has worked well.&lt;/p&gt;

&lt;p&gt;We have:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;a schema&lt;/li&gt;
    &lt;li&gt;a capture process&lt;/li&gt;
    &lt;li&gt;stored assets&lt;/li&gt;
    &lt;li&gt;searchable data&lt;/li&gt;
    &lt;li&gt;digital twins&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For a small dataset, everything feels manageable.&lt;/p&gt;

&lt;p&gt;A few rocks here and there.&lt;/p&gt;

&lt;p&gt;A handful of records.&lt;/p&gt;

&lt;p&gt;It’s easy to reason about the system.&lt;/p&gt;

&lt;h2&gt;When the Dataset Grows&lt;/h2&gt;

&lt;p&gt;The moment the dataset starts to grow, the assumptions change.&lt;/p&gt;

&lt;p&gt;Instead of a few rocks, imagine:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;hundreds&lt;/li&gt;
    &lt;li&gt;thousands&lt;/li&gt;
    &lt;li&gt;eventually, many thousands&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At that point, a few new questions appear:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;How do we process incoming data efficiently?&lt;/li&gt;
    &lt;li&gt;Where do we store large assets?&lt;/li&gt;
    &lt;li&gt;How do we keep queries fast?&lt;/li&gt;
    &lt;li&gt;What happens when processing takes longer than capture?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are the same questions that show up in any system dealing with real-world data.&lt;/p&gt;

&lt;h2&gt;The Pipeline Becomes the System&lt;/h2&gt;

&lt;p&gt;At small scale, the pipeline is implicit.&lt;/p&gt;

&lt;p&gt;You take a photo.&lt;/p&gt;

&lt;p&gt;You upload it.&lt;/p&gt;

&lt;p&gt;You update a record.&lt;/p&gt;

&lt;p&gt;At larger scale, that approach breaks down.&lt;/p&gt;

&lt;p&gt;The pipeline becomes explicit.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.kenwalger.com%2Fblog%2Fwp-content%2Fuploads%2F2026%2F04%2Fphysical-object-data-pipeline-scalable-architecture.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.kenwalger.com%2Fblog%2Fwp-content%2Fuploads%2F2026%2F04%2Fphysical-object-data-pipeline-scalable-architecture.png" alt="Diagram showing a scalable data pipeline for physical objects including capture, ingestion queue, processing workings, storage, and indexing." width="680" height="763"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;At scale, simple data flows evolve into multi-stage pipelines with decoupled processing and storage.&lt;/em&gt;&lt;/p&gt;



&lt;p&gt;Each stage now has a role:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;capture generates raw input&lt;/li&gt;
    &lt;li&gt;ingestion buffers incoming data&lt;/li&gt;
    &lt;li&gt;processing transforms it&lt;/li&gt;
    &lt;li&gt;storage persists it&lt;/li&gt;
    &lt;li&gt;indexing makes it usable&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What used to be a simple flow becomes a system of components.&lt;/p&gt;

&lt;h2&gt;Decoupling the System&lt;/h2&gt;

&lt;p&gt;One of the first things that happens at scale is decoupling.&lt;/p&gt;

&lt;p&gt;Instead of doing everything at once, we separate concerns:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;capture does not block processing&lt;/li&gt;
    &lt;li&gt;processing does not block storage&lt;/li&gt;
    &lt;li&gt;storage does not block indexing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This introduces queues and asynchronous work.&lt;/p&gt;

&lt;p&gt;Instead of:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;take photo → process → store → done
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;we now have:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;take photo → enqueue → process later → update system
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This improves resilience.&lt;/p&gt;

&lt;p&gt;It also introduces complexity.&lt;/p&gt;
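&lt;p&gt;The decoupled flow above can be sketched with a queue and a background worker. This is a minimal illustration using Python's standard library, not the Quarry's actual ingestion code:&lt;/p&gt;

```python
import queue
import threading

# Minimal sketch of "take photo -> enqueue -> process later".
jobs = queue.Queue()
results = []

def worker():
    while True:
        photo = jobs.get()
        if photo is None:          # sentinel: shut the worker down
            break
        # The slow processing step happens here, off the capture path.
        results.append(f"processed:{photo}")
        jobs.task_done()

t = threading.Thread(target=worker)
t.start()

# Capture never blocks on processing; it just enqueues and moves on.
for photo in ["rock_001.jpg", "rock_002.jpg"]:
    jobs.put(photo)

jobs.put(None)  # signal shutdown after the backlog drains
t.join()
print(results)
# -> ['processed:rock_001.jpg', 'processed:rock_002.jpg']
```

&lt;p&gt;The resilience comes from the buffer: if processing stalls, capture keeps working and the backlog simply grows. The complexity comes from the same place: someone now has to monitor that backlog.&lt;/p&gt;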

&lt;h2&gt;Storage Starts to Matter&lt;/h2&gt;

&lt;p&gt;At small scale, storage decisions are easy.&lt;/p&gt;

&lt;p&gt;At larger scale, they matter.&lt;/p&gt;

&lt;p&gt;We now have different types of data:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;metadata (small, structured)&lt;/li&gt;
    &lt;li&gt;images (large, unstructured)&lt;/li&gt;
    &lt;li&gt;3D models (larger, computationally expensive to generate)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These tend to be stored differently:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;database for structured data&lt;/li&gt;
    &lt;li&gt;object storage for assets&lt;/li&gt;
    &lt;li&gt;references connecting the two&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This separation becomes critical for performance and cost.&lt;/p&gt;
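&lt;p&gt;Concretely, a record might look like this. The bucket paths and field names are made up for illustration; the point is that the database row stays small while the heavy bytes live behind references:&lt;/p&gt;

```python
# Sketch: structured metadata in the database, large assets in object
# storage, connected by references. Paths and field names are illustrative.
rock_record = {
    "id": "rock-0042",
    "metadata": {"color": "gray", "mass_g": 312},           # small, structured
    "image_ref": "s3://quarry-assets/rock-0042/photo.jpg",  # large, unstructured
    "model_ref": "s3://quarry-assets/rock-0042/mesh.glb",   # largest, expensive
}

def asset_refs(record):
    """The database row stays small; heavy assets live behind these refs."""
    return [v for k, v in record.items() if k.endswith("_ref")]

print(asset_refs(rock_record))
```

&lt;p&gt;Queries and index maintenance touch only the small record; the object store is hit only when an application actually needs the pixels or the mesh.&lt;/p&gt;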

&lt;h2&gt;Processing Becomes a Bottleneck&lt;/h2&gt;

&lt;p&gt;Not all steps in the pipeline are equal.&lt;/p&gt;

&lt;p&gt;Some are fast:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;inserting metadata&lt;/li&gt;
    &lt;li&gt;updating records&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Others are slow:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;generating 3D models&lt;/li&gt;
    &lt;li&gt;running image processing&lt;/li&gt;
    &lt;li&gt;extracting features&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As the dataset grows, these slower steps become bottlenecks.&lt;/p&gt;

&lt;p&gt;Which leads to another pattern:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Parallelization&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Instead of one process handling everything, we distribute the work.&lt;/p&gt;

&lt;p&gt;Multiple workers.&lt;/p&gt;

&lt;p&gt;Multiple jobs.&lt;/p&gt;

&lt;p&gt;Multiple stages running simultaneously.&lt;/p&gt;
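&lt;p&gt;In Python, the slow stage can be fanned out across workers with &lt;code&gt;concurrent.futures&lt;/code&gt;. The &lt;code&gt;generate_model&lt;/code&gt; function below is a toy stand-in for the expensive 3D-model step:&lt;/p&gt;

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative slow step: stand-in for 3D model generation.
def generate_model(rock_id):
    return {"rock_id": rock_id, "model": f"mesh-{rock_id}"}

rock_ids = list(range(8))

# Multiple workers run the slow stage concurrently instead of one by one;
# pool.map preserves input order in the results.
with ThreadPoolExecutor(max_workers=4) as pool:
    models = list(pool.map(generate_model, rock_ids))
```

&lt;p&gt;For CPU-bound work like real mesh generation you would reach for &lt;code&gt;ProcessPoolExecutor&lt;/code&gt; or a distributed job queue instead, but the shape of the code is the same.&lt;/p&gt;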

&lt;h2&gt;Indexing at Scale&lt;/h2&gt;

&lt;p&gt;Search also changes at scale.&lt;/p&gt;

&lt;p&gt;At small scale:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;simple queries are fast&lt;/li&gt;
    &lt;li&gt;no special indexing required&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At larger scale:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;indexes must be built and maintained&lt;/li&gt;
    &lt;li&gt;similarity search requires preprocessing&lt;/li&gt;
    &lt;li&gt;updates must propagate through the system&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Search becomes an active part of the pipeline, not just a query on top of it.&lt;/p&gt;
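&lt;p&gt;The "preprocessing" half of similarity search can be sketched in a few lines: normalize feature vectors once at index time, so each query is just dot products. The rock names and feature values are made up for illustration:&lt;/p&gt;

```python
import math

# Toy feature vectors per rock (in practice: color histograms, embeddings, ...).
features = {
    "granite": [0.9, 0.1, 0.3],
    "basalt": [0.2, 0.8, 0.5],
    "quartz": [0.85, 0.15, 0.35],
}

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

# The index-time preprocessing step: normalize once, not per query.
index = {name: normalize(v) for name, v in features.items()}

def most_similar(query):
    q = normalize(query)
    # Cosine similarity reduces to a dot product over pre-normalized vectors.
    return max(index, key=lambda name: sum(a * b for a, b in zip(q, index[name])))
```

&lt;p&gt;The maintenance burden the post describes lives in that &lt;code&gt;index&lt;/code&gt; dict: every insert, update, or delete has to propagate into it, which is exactly what makes search an active pipeline stage.&lt;/p&gt;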

&lt;h2&gt;Failure Becomes Normal&lt;/h2&gt;

&lt;p&gt;At small scale, failures are rare and easy to fix.&lt;/p&gt;

&lt;p&gt;At larger scale, failures are expected.&lt;/p&gt;

&lt;p&gt;Examples:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;missing images&lt;/li&gt;
    &lt;li&gt;failed processing jobs&lt;/li&gt;
    &lt;li&gt;incomplete models&lt;/li&gt;
    &lt;li&gt;inconsistent metadata&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The system must tolerate these failures.&lt;/p&gt;

&lt;p&gt;Not eliminate them.&lt;/p&gt;

&lt;p&gt;This leads to:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;retries&lt;/li&gt;
    &lt;li&gt;partial results&lt;/li&gt;
    &lt;li&gt;eventual consistency&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In other words, the system becomes more realistic.&lt;/p&gt;
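&lt;p&gt;The retry piece, for instance, can be as small as this: a wrapper with exponential backoff around a flaky job. The &lt;code&gt;flaky_job&lt;/code&gt; below simulates a step that fails twice before succeeding:&lt;/p&gt;

```python
import time

def with_retries(task, attempts=3, base_delay=0.01):
    """Run task(), retrying on failure; give up after `attempts` tries."""
    for attempt in range(attempts):
        try:
            return task()
        except Exception:
            if attempt == attempts - 1:
                raise
            # Exponential backoff before the next try.
            time.sleep(base_delay * (2 ** attempt))

# A flaky stand-in job that fails twice, then succeeds.
calls = {"n": 0}
def flaky_job():
    calls["n"] += 1
    if calls["n"] in (1, 2):
        raise RuntimeError("transient failure")
    return "model-built"

result = with_retries(flaky_job)
```

&lt;p&gt;Partial results and eventual consistency follow the same spirit: record what succeeded, mark what didn't, and let a later pass reconcile.&lt;/p&gt;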

&lt;h2&gt;A Familiar Architecture&lt;/h2&gt;

&lt;p&gt;At this point, the &lt;strong&gt;Backyard Quarry&lt;/strong&gt; starts to resemble a typical data platform.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.kenwalger.com%2Fblog%2Fwp-content%2Fuploads%2F2026%2F04%2Fpyhsical-to-digital-system-architecture-layers-286x1024.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.kenwalger.com%2Fblog%2Fwp-content%2Fuploads%2F2026%2F04%2Fpyhsical-to-digital-system-architecture-layers-286x1024.png" alt="Layered architecture diagram showing physical world input flowing through capture, ingestion, processing, storage, indexing, and application layers." width="286" height="1024"&gt;&lt;/a&gt; &lt;/p&gt;
&lt;p&gt;&lt;em&gt;A common architectural pattern for systems that transform physical inputs into digital data.&lt;/em&gt;&lt;/p&gt;



&lt;p&gt;Different domains implement this differently.&lt;/p&gt;

&lt;p&gt;But the structure is remarkably consistent.&lt;/p&gt;

&lt;h2&gt;The Tradeoff&lt;/h2&gt;

&lt;p&gt;Scaling introduces tradeoffs.&lt;/p&gt;

&lt;p&gt;We gain:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;throughput&lt;/li&gt;
    &lt;li&gt;flexibility&lt;/li&gt;
    &lt;li&gt;resilience&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We lose:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;simplicity&lt;/li&gt;
    &lt;li&gt;immediacy&lt;/li&gt;
    &lt;li&gt;ease of reasoning&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What was once a straightforward system becomes a collection of interacting parts.&lt;/p&gt;

&lt;h2&gt;The Real Shift&lt;/h2&gt;

&lt;p&gt;The most important change isn’t technical.&lt;/p&gt;

&lt;p&gt;It’s conceptual.&lt;/p&gt;

&lt;p&gt;At small scale, you think about individual objects.&lt;/p&gt;

&lt;p&gt;At larger scale, you think about systems.&lt;/p&gt;

&lt;p&gt;You stop asking:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;How do I store this rock?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;And start asking:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;How does the system handle many rocks over time?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;That shift is what turns a project into a platform.&lt;/p&gt;

&lt;h2&gt;What Comes Next&lt;/h2&gt;

&lt;p&gt;At this point, the &lt;strong&gt;Backyard Quarry&lt;/strong&gt; is no longer just a small experiment.&lt;/p&gt;

&lt;p&gt;It’s a miniature version of a data platform.&lt;/p&gt;

&lt;p&gt;And the patterns we’ve seen — schema design, pipelines, indexing, scaling — show up in many places.&lt;/p&gt;

&lt;p&gt;In the next post, we’ll zoom out even further.&lt;/p&gt;

&lt;p&gt;Because once you start recognizing these patterns, you begin to see them everywhere.&lt;/p&gt;

&lt;p&gt;Not just in rock piles.&lt;/p&gt;

&lt;p&gt;But in systems across industries.&lt;/p&gt;

&lt;p&gt;And somewhere along the way, the Quarry stopped being about rocks.&lt;/p&gt;

&lt;p&gt;It became about how systems grow.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Rock Quarry Series
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.kenwalger.com/blog/software-engineering/the-backyard-quarry-turning-rocks-into-data" rel="noopener noreferrer"&gt;Turning Rocks into Data&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.kenwalger.com/blog/data-engineering/designing-a-schema-for-physical-objects" rel="noopener noreferrer"&gt;Designing a Schema for Physical Objects&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.kenwalger.com/blog/data-engineering/capturing-physical-objects-data-pipeline" rel="noopener noreferrer"&gt;Capturing the Physical World&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.kenwalger.com/blog/data-engineering/searching-physical-objects-data-indexing" rel="noopener noreferrer"&gt;Searching a Pile of Rocks&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.kenwalger.com/blog/data-engineering/digital-twins-physical-objects-explained" rel="noopener noreferrer"&gt;Digital Twins for Physical Objects&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.kenwalger.com/blog/data-engineering/scaling-data-pipelines-physical-objects" rel="noopener noreferrer"&gt;Scaling the Quarry&lt;/a&gt; - &lt;em&gt;This Post&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.kenwalger.com/blog/data-engineering/system-design-patterns-real-world-data-platforms" rel="noopener noreferrer"&gt;Systems Beyond the Backyard&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.kenwalger.com/blog/data-engineering/from-rocks-to-reality-system-design-patterns" rel="noopener noreferrer"&gt;From Rocks to Reality&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>systemarchitecture</category>
    </item>
    <item>
      <title>Who Audits the Auditors? Building an LLM-as-a-Judge for Agentic Reliability</title>
      <dc:creator>Ken W Alger</dc:creator>
      <pubDate>Thu, 16 Apr 2026 16:12:31 +0000</pubDate>
      <link>https://forem.com/kenwalger/who-audits-the-auditors-building-an-llm-as-a-judge-for-agentic-reliability-5d7</link>
      <guid>https://forem.com/kenwalger/who-audits-the-auditors-building-an-llm-as-a-judge-for-agentic-reliability-5d7</guid>
      <description>&lt;p&gt;We’ve built a powerful Forensic Team. They can find books, analyze metadata, and spot discrepancies using MCP.&lt;/p&gt;

&lt;p&gt;But in the enterprise, 'it seems to work' isn't a metric. If an agent misidentifies a $50,000 first edition, the liability is real.&lt;/p&gt;

&lt;p&gt;Today, we move from &lt;em&gt;Subjective Trust&lt;/em&gt; to &lt;em&gt;Quantitative Reliability&lt;/em&gt;. We are building &lt;strong&gt;The Judge&lt;/strong&gt;—a high-reasoning evaluator that audits our Forensic Team against a 'Golden Dataset' of ground-truth facts.&lt;/p&gt;

&lt;h3&gt;Before you Begin&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites:&lt;/strong&gt; You should have an existing agentic workflow (see my &lt;a href="https://www.kenwalger.com/blog/ai/mcp-usb-c-moment-ai-architecture/" rel="noopener noreferrer"&gt;MCP Forensic Series&lt;/a&gt;) and a high-reasoning model (Claude 3.5 Opus/GPT-4o) to act as the Judge.&lt;/p&gt;

&lt;h2&gt;1. The "Golden Dataset"&lt;/h2&gt;

&lt;p&gt;Before we can grade the agents, we need an Answer Key. We’re creating &lt;code&gt;tests/golden_dataset.json&lt;/code&gt;. This file contains the "Ground Truth"—scenarios where we know there are errors.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example Entry:&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{
  "test_id": "TC-001",
  "input": "The Great Gatsby, 1925",
  "expected_finding": "Page count mismatch: Observed 218, Standard 210",
  "severity": "high"
}
&lt;/code&gt;&lt;/pre&gt;
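&lt;p&gt;A loader for that file should validate entries before any grading happens; the Judge is only as good as its answer key. The sketch below inlines the JSON to stay self-contained, and the required-field set mirrors the example entry above (the real &lt;code&gt;tests/golden_dataset.json&lt;/code&gt; may carry more fields):&lt;/p&gt;

```python
import json

# Inline stand-in for tests/golden_dataset.json; in practice, read the file.
GOLDEN = """
[
  {"test_id": "TC-001",
   "input": "The Great Gatsby, 1925",
   "expected_finding": "Page count mismatch: Observed 218, Standard 210",
   "severity": "high"}
]
"""

REQUIRED = {"test_id", "input", "expected_finding", "severity"}

def load_golden(raw):
    cases = json.loads(raw)
    # Fail fast on malformed entries so the Judge never grades
    # against bad ground truth.
    for case in cases:
        missing = REQUIRED - set(case)
        if missing:
            raise ValueError(f"{case.get('test_id', '?')} missing {sorted(missing)}")
    return cases

cases = load_golden(GOLDEN)
```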

&lt;blockquote&gt;
  &lt;strong&gt;Director's Note&lt;/strong&gt;: In an enterprise setting, "Reliability" is the precursor to "Permission". You will not get the budget to scale agents until you can prove they won't hallucinate $50k errors. This framework provides the data you need for that internal sell.
&lt;/blockquote&gt;

&lt;h2&gt;2. The Judge's Rubric&lt;/h2&gt;

&lt;p&gt;A good Judge needs a rubric. We aren't just looking for "Yes/No." We want to grade on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Precision:&lt;/strong&gt; Did it find only the real errors?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Recall:&lt;/strong&gt; Did it find all the real errors?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reasoning:&lt;/strong&gt; Did it explain why it flagged the record?&lt;/li&gt;
&lt;/ul&gt;
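&lt;p&gt;The first two rubric axes are mechanical to compute once findings are normalized to comparable labels. A minimal scoring helper, with hypothetical finding labels:&lt;/p&gt;

```python
def score_findings(reported, expected):
    """Precision/recall of an agent's findings against the golden answer key."""
    reported, expected = set(reported), set(expected)
    true_positives = reported.intersection(expected)
    precision = len(true_positives) / len(reported) if reported else 0.0
    recall = len(true_positives) / len(expected) if expected else 0.0
    return {"precision": precision, "recall": recall}

# The agent found one real error plus one false alarm, and missed one.
scores = score_findings(
    reported={"page-count-mismatch", "wrong-publisher"},
    expected={"page-count-mismatch", "missing-colophon"},
)
```

&lt;p&gt;The third axis, Reasoning, is the part that genuinely needs an LLM Judge: it is a qualitative grade, not a set operation.&lt;/p&gt;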

&lt;h2&gt;3. Refactoring for Resilience&lt;/h2&gt;

&lt;p&gt;Before building the Judge, we had to address a common "Senior-level" trap: hardcoding agent logic. Based on architectural reviews, we moved our system prompts from the Python client into a dedicated &lt;code&gt;config/prompts.yaml&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This isn't just about clean code; it’s about Observability. By decoupling the "Instructions" from the "Execution," we can now A/B test different prompt versions against the Judge to see which one yields the highest accuracy for specific models.&lt;/p&gt;
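&lt;p&gt;As a sketch, a decoupled prompt file might look like the following. The keys and wording here are hypothetical, not the repo's actual &lt;code&gt;config/prompts.yaml&lt;/code&gt;:&lt;/p&gt;

```yaml
# config/prompts.yaml (structure is illustrative, not the repo's actual file)
forensic_analyst:
  version: 2
  system: |
    You are a forensic book analyst. Flag metadata discrepancies
    and explain your reasoning for every flag.
judge:
  version: 1
  system: |
    Grade the analyst's findings against the golden dataset rubric:
    precision, recall, and quality of reasoning.
```

&lt;p&gt;Versioning each prompt is what makes the A/B test concrete: the Judge can score version 2 against version 1 with zero code changes.&lt;/p&gt;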

&lt;h2&gt;4. The Implementation: The Evaluation Loop&lt;/h2&gt;

&lt;p&gt;We’ve added &lt;code&gt;evaluator.py&lt;/code&gt; to &lt;a href="https://github.com/kenwalger/mcp-forensic-analyzer" rel="noopener noreferrer"&gt;the repo&lt;/a&gt;. It doesn't just run the agents; it monitors their "vital signs."&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Error Transparency:&lt;/strong&gt; We replaced "swallowed" exceptions with structured logging. If a provider fails, the system logs the incident for diagnosis instead of failing silently.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Handshake:&lt;/strong&gt; The loop runs the Forensic Team, collects their logs, and submits the whole package to a high-reasoning Judge Agent.&lt;/li&gt;
&lt;/ul&gt;
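&lt;p&gt;The shape of that loop, with the error-transparency piece made explicit, looks roughly like this. The provider names and &lt;code&gt;run_agent&lt;/code&gt; stub are stand-ins for the real LLM calls in &lt;code&gt;evaluator.py&lt;/code&gt;:&lt;/p&gt;

```python
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("evaluator")

# Illustrative provider call; the real one would hit an LLM API.
def run_agent(provider, case):
    if provider == "flaky-provider":
        raise ConnectionError("provider timed out")
    return {"finding": case["expected_finding"]}

def evaluate(cases, providers):
    results = []
    for case in cases:
        for provider in providers:
            try:
                output = run_agent(provider, case)
            except Exception as exc:
                # No swallowed exceptions: log a structured incident, move on.
                log.warning("provider=%s test_id=%s error=%s",
                            provider, case["test_id"], exc)
                results.append({"test_id": case["test_id"],
                                "provider": provider, "status": "error"})
                continue
            results.append({"test_id": case["test_id"], "provider": provider,
                            "status": "ok", "output": output})
    return results

results = evaluate(
    [{"test_id": "TC-001", "expected_finding": "Page count mismatch"}],
    ["good-provider", "flaky-provider"],
)
```

&lt;p&gt;Everything that comes out of this loop, including the failure records, is part of the package the Judge receives; a provider outage is itself a data point about reliability.&lt;/p&gt;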

&lt;h3&gt;The Evaluator-Optimizer Blueprint&lt;/h3&gt;

&lt;p&gt;This diagram represents our move from &lt;strong&gt;"Does the code run?"&lt;/strong&gt; to &lt;strong&gt;"Does the intelligence meet the quality bar?"&lt;/strong&gt; This closed-loop system is required before we can start the fiscal optimization of routing simpler tasks to smaller models.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.kenwalger.com%2Fblog%2Fwp-content%2Fuploads%2F2026%2F03%2Fai-agent-reliability-evaluator-optimizer-loop-diagram-822x1024.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.kenwalger.com%2Fblog%2Fwp-content%2Fuploads%2F2026%2F03%2Fai-agent-reliability-evaluator-optimizer-loop-diagram-822x1024.png" alt="Architectural diagram of an AI Evaluator-Optimizer loop. It shows a Golden Dataset feeding into an Agent Execution layer, which then passes outputs and logs to a Judge Agent for scoring against a rubric. The final Reliability Report provides a feedback loop for prompt tuning and iterative improvement." width="800" height="997"&gt;&lt;/a&gt; &lt;/p&gt;
&lt;p&gt;&lt;em&gt;The Evaluator-Optimizer Loop: moving from manual vibe-checks to automated, quantitative reliability scoring.&lt;/em&gt;&lt;/p&gt;



&lt;h2&gt;Director-Level Insight: The "Accuracy vs. Cost" Curve&lt;/h2&gt;

&lt;blockquote&gt;
  As a Director, I don't just care about "cost per token." I care about Defensibility. If a forensic audit is challenged, I need to show a historical accuracy rating. By implementing this Evaluator, we move from "Vibe-checking" to a Quantitative Reliability Score. This allows us to set a "Minimum Quality Bar" for deployment. If a model update or a prompt change drops our accuracy by 2%, the Judge blocks the deployment.
&lt;/blockquote&gt;

&lt;h2&gt;The Production-Grade AI Series&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Post 1:&lt;/strong&gt; The Judge Agent — &lt;em&gt;You are here&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Post 2:&lt;/strong&gt; The Accountant (Cognitive Budgeting &amp;amp; Model Routing) — &lt;em&gt;Coming Soon&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Post 3:&lt;/strong&gt; The Guardian (Human-in-the-Loop Handshakes) — &lt;em&gt;Coming Soon&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Looking for the foundation? Check out my previous series: &lt;a href="https://www.kenwalger.com/blog/ai/mcp-usb-c-moment-ai-architecture/" rel="noopener noreferrer"&gt;The Zero-Glue AI Mesh with MCP&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>architecture</category>
      <category>testing</category>
      <category>python</category>
    </item>
    <item>
      <title>5.4-Cyber and the Death of the Static CI/CD Pipeline</title>
      <dc:creator>Ken W Alger</dc:creator>
      <pubDate>Wed, 15 Apr 2026 17:17:27 +0000</pubDate>
      <link>https://forem.com/kenwalger/54-cyber-and-the-death-of-the-static-cicd-pipeline-ma</link>
      <guid>https://forem.com/kenwalger/54-cyber-and-the-death-of-the-static-cicd-pipeline-ma</guid>
      <description>&lt;p&gt;Today’s &lt;a href="https://openai.com/index/scaling-trusted-access-for-cyber-defense/" rel="noopener noreferrer"&gt;announcement&lt;/a&gt; of &lt;strong&gt;OpenAI 5.4-Cyber&lt;/strong&gt; isn’t just another incremental model update. It is the sounding of the death knell for the "Static" CI/CD pipeline. If you are still relying on a sequence of YAML-defined steps and basic SAST/DAST scans, your security posture just became an open door.&lt;/p&gt;

&lt;h2&gt;
  
  
  The End of Security Through Obscurity
&lt;/h2&gt;

&lt;p&gt;The headline feature of 5.4-Cyber is its unprecedented capability in &lt;strong&gt;binary reverse engineering&lt;/strong&gt;. Historically, compiled code offered a "speed bump" for attackers. By lowering the refusal boundaries for authorized defenders, OpenAI has effectively weaponized the defense. But there’s a catch: the delta between a patch and an exploit has now shrunk to near-zero.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why "Shift Left" is No Longer Enough
&lt;/h2&gt;

&lt;p&gt;We’ve been told to "shift left" for a decade. But 5.4-Cyber proves that static analysis is a knife in a gunfight. When an AI can deconstruct your build in seconds, you need more than a linter. You need &lt;strong&gt;Continuous Hardening&lt;/strong&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"The 'Senior Developer' of 2026 isn't the one who writes the most secure code; it's the one who orchestrates the most aggressive AI red-teaming agent in their deployment pipeline."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Introducing Agentic Gatekeeping
&lt;/h2&gt;

&lt;p&gt;The future isn't CI/CD; it’s &lt;strong&gt;AI/AD (Autonomous Defense)&lt;/strong&gt;. Your pipeline should no longer be a series of "if/then" statements. It must become a battleground. Every Pull Request should be met by an adversarial agent powered by models like 5.4-Cyber that actively attempts to exploit the new code before the 'Merge' button is even enabled.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Trusted Access Paradox
&lt;/h2&gt;

&lt;p&gt;OpenAI is gating these tools behind "Trusted Access for Cyber." While well-intentioned, this creates a Red Queen’s Race. As defenders get smarter AI, attackers will use leaked or uncensored "Shadow Models." If your pipeline doesn't evolve to be as dynamic as the threats, you're building on sand.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Takeaway
&lt;/h2&gt;

&lt;p&gt;Stop refining your YAML files. Start building &lt;strong&gt;Agentic Workflows&lt;/strong&gt;. If your 2026 roadmap doesn’t include an autonomous red-team agent sitting inside your pipeline, you aren't doing DevSecOps—you're just waiting for a breach you can't predict.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>ai</category>
      <category>security</category>
      <category>cybersecurity</category>
    </item>
  </channel>
</rss>
