<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Mani G</title>
    <description>The latest articles on Forem by Mani G (@manigaaa27).</description>
    <link>https://forem.com/manigaaa27</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3837602%2Feb56ee8c-30da-49cc-b2a5-d82423405859.jpeg</url>
      <title>Forem: Mani G</title>
      <link>https://forem.com/manigaaa27</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/manigaaa27"/>
    <language>en</language>
    <item>
      <title>How (A) I Built Open-Source LLM Guardrails with FastAPI</title>
      <dc:creator>Mani G</dc:creator>
      <pubDate>Sat, 21 Mar 2026 20:44:35 +0000</pubDate>
      <link>https://forem.com/manigaaa27/how-a-i-built-open-source-llm-guardrails-with-fastapi-51p5</link>
      <guid>https://forem.com/manigaaa27/how-a-i-built-open-source-llm-guardrails-with-fastapi-51p5</guid>
      <description>&lt;p&gt;Building production AI applications means dealing with prompt injection, PII leakage, hallucinated outputs, and agents that go rogue. We (me and AI) built AgentGuard — an open-source FastAPI service that sits between your app and any LLM provider to handle all of this in one place.&lt;/p&gt;

&lt;p&gt;What it does&lt;br&gt;
AgentGuard runs seven parallel input safety checks on every request before it reaches your LLM, including prompt injection heuristics, jailbreak pattern detection, PII and secret detection, restricted topic filtering, and data exfiltration detection. On the output side, it validates schema conformance, citation presence, grounding coverage, and policy compliance, and computes a composite quality score (internally called the "slop score") that ranges from 0.0 (clean) to 1.0 (reject).&lt;br&gt;
Beyond checks, it also compiles versioned prompt packages, replacing ad-hoc prompt strings with auditable YAML configs, and governs agent actions through a risk-scoring and human-in-the-loop approval layer.&lt;/p&gt;
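&lt;p&gt;To make the input side concrete, here is a minimal, hypothetical sketch of a prompt-injection heuristic of the kind such a check might run. The regex patterns and scoring rule are illustrative assumptions, not AgentGuard's actual rule set:&lt;/p&gt;

```python
import re

# Illustrative injection patterns (assumed, not AgentGuard's real rules).
INJECTION_PATTERNS = [
    re.compile(r"ignore (all |any )?(previous|prior) instructions", re.I),
    re.compile(r"you are now in developer mode", re.I),
    re.compile(r"reveal (your )?(system prompt|hidden instructions)", re.I),
]

def injection_score(prompt: str) -> float:
    """Fraction of heuristic patterns the prompt trips (0.0 = clean)."""
    hits = sum(1 for p in INJECTION_PATTERNS if p.search(prompt))
    return hits / len(INJECTION_PATTERNS)
```

&lt;p&gt;A real deployment would combine many such heuristics with model-based classifiers; this only shows the shape of a single check.&lt;/p&gt;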

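&lt;p&gt;The composite quality score on the output side can likewise be sketched as a weighted aggregate of per-check results. The check names, weights, and aggregation rule below are assumptions for illustration, not AgentGuard's actual implementation:&lt;/p&gt;

```python
# Hypothetical weights for combining per-check failure scores
# (0.0 = clean, 1.0 = worst) into one composite "slop score".
WEIGHTS = {
    "schema_conformance": 0.3,
    "grounding_coverage": 0.3,
    "citation_presence": 0.2,
    "policy_compliance": 0.2,
}

def slop_score(check_scores: dict) -> float:
    """Weighted average of per-check scores, clamped to [0.0, 1.0]."""
    total = sum(w * check_scores.get(name, 0.0) for name, w in WEIGHTS.items())
    return max(0.0, min(1.0, total))
```

&lt;p&gt;A caller would then reject any output whose score crosses a configured threshold.&lt;/p&gt;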
&lt;p&gt;GitHub: &lt;a href="https://github.com/MANIGAAA27/agentguard" rel="noopener noreferrer"&gt;https://github.com/MANIGAAA27/agentguard&lt;/a&gt;&lt;br&gt;
Docs site: &lt;a href="https://manigaaa27.github.io/agentguard/" rel="noopener noreferrer"&gt;https://manigaaa27.github.io/agentguard/&lt;/a&gt;&lt;br&gt;
Comparison with Guardrails AI, NeMo Guardrails, and LlamaGuard: &lt;a href="https://github.com/MANIGAAA27/agentguard/blob/main/docs/comparison.md" rel="noopener noreferrer"&gt;https://github.com/MANIGAAA27/agentguard/blob/main/docs/comparison.md&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>ai</category>
      <category>fastapi</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
