<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Miroslav Šotek</title>
    <description>The latest articles on Forem by Miroslav Šotek (@anulum).</description>
    <link>https://forem.com/anulum</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3848490%2F36416691-0a81-4276-9be0-1a32852ed99a.jpeg</url>
      <title>Forem: Miroslav Šotek</title>
      <link>https://forem.com/anulum</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/anulum"/>
    <language>en</language>
    <item>
      <title>I built an open-source real-time LLM hallucination guardrail — here are the benchmarks</title>
      <dc:creator>Miroslav Šotek</dc:creator>
      <pubDate>Sun, 29 Mar 2026 01:17:47 +0000</pubDate>
      <link>https://forem.com/anulum/i-built-an-open-source-real-time-llm-hallucination-guardrail-here-are-the-benchmarks-22em</link>
      <guid>https://forem.com/anulum/i-built-an-open-source-real-time-llm-hallucination-guardrail-here-are-the-benchmarks-22em</guid>
      <description>&lt;h2&gt;What is Director-Class AI?&lt;/h2&gt;

&lt;p&gt;Director-Class AI is an open-source Python library that guards LLM output in real time. It watches tokens as they stream and halts generation the moment it detects a hallucination.&lt;/p&gt;

&lt;p&gt;It uses &lt;strong&gt;NLI&lt;/strong&gt; (Natural Language Inference via DeBERTa/FactCG) and optional &lt;strong&gt;RAG knowledge grounding&lt;/strong&gt; to score each claim against source documents.&lt;br&gt;
&lt;/p&gt;
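
&lt;p&gt;To make the NLI step concrete, here is a minimal sketch of entailment scoring with an off-the-shelf DeBERTa MNLI checkpoint from Hugging Face (the model name and example text are mine, not the library's internals):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Minimal NLI claim-scoring sketch -- illustration only, not director-ai internals.
# Assumes the public "microsoft/deberta-large-mnli" checkpoint; any MNLI model works.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "microsoft/deberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL).eval()

source = "The Eiffel Tower was completed in 1889 and stands in Paris."
claim = "The Eiffel Tower was completed in 1920."

# Premise = source document, hypothesis = generated claim.
inputs = tokenizer(source, claim, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1).squeeze()

# Print contradiction / neutral / entailment probabilities; a low entailment
# score is the signal a guardrail would act on.
for idx, p in enumerate(probs.tolist()):
    print(f"{model.config.id2label[idx]}: {p:.3f}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;A streaming guardrail applies this kind of check claim by claim as tokens arrive, rather than once after the full response has been generated.&lt;/p&gt;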

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;director-ai
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Two-line integration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;director_ai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;guard&lt;/span&gt;
&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;guard&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;openai&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;OpenAI&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;  &lt;span class="c1"&gt;# wraps any OpenAI/Anthropic client
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
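
&lt;p&gt;Once wrapped, the client is meant to be used like the underlying SDK. A sketch of what a standard streaming call could look like (whether the wrapper passes the usual chat.completions surface through unchanged, and how it reports a halted generation, are my assumptions, not verified against the docs):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Usage sketch. guard() comes from the snippet above; the pass-through call
# surface and the halt behaviour are assumptions I have not verified.
import openai
from director_ai import guard

client = guard(openai.OpenAI())

stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarise the attached report."}],
    stream=True,
)
for chunk in stream:
    # Per the project description, generation is halted as soon as a claim
    # fails the check, so this loop may end early.
    print(chunk.choices[0].delta.content or "", end="")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;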



&lt;h2&gt;Benchmarks (measured, not aspirational)&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;th&gt;Conditions&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Balanced accuracy&lt;/td&gt;
&lt;td&gt;75.8%&lt;/td&gt;
&lt;td&gt;FactCG on LLM-AggreFact (29,320 samples)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GPU latency (GTX 1060)&lt;/td&gt;
&lt;td&gt;14.6 ms/pair&lt;/td&gt;
&lt;td&gt;ONNX, batch=16&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GPU latency (L40S)&lt;/td&gt;
&lt;td&gt;0.5 ms/pair&lt;/td&gt;
&lt;td&gt;FP16, batch=32&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;E2E catch rate&lt;/td&gt;
&lt;td&gt;90.7%&lt;/td&gt;
&lt;td&gt;Hybrid mode, 600 HaluEval traces&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Rust BM25 speedup&lt;/td&gt;
&lt;td&gt;10.2x&lt;/td&gt;
&lt;td&gt;Over pure Python implementation&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
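
&lt;p&gt;For the BM25 row: BM25 is the lexical ranking function used on the knowledge-base retrieval side, and the 10.2x figure compares the Rust implementation against a pure-Python baseline. A minimal pure-Python scorer, using the standard Okapi BM25 formula (not the library's code), looks roughly like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Plain-Python Okapi BM25 scorer -- the kind of baseline a Rust port speeds up.
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each whitespace-tokenised doc against the query terms."""
    tokenised = [d.lower().split() for d in docs]
    avgdl = sum(len(d) for d in tokenised) / len(tokenised)
    n = len(tokenised)
    df = Counter()                      # document frequency per term
    for d in tokenised:
        df.update(set(d))
    scores = []
    for d in tokenised:
        tf = Counter(d)
        score = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            idf = math.log(1 + (n - df[term] + 0.5) / (df[term] + 0.5))
            norm = tf[term] + k1 * (1 - b + b * len(d) / avgdl)
            score += idf * tf[term] * (k1 + 1) / norm
        scores.append(score)
    return scores

print(bm25_scores("eiffel tower height", [
    "The Eiffel Tower is 330 metres tall.",
    "BM25 is a bag-of-words ranking function.",
]))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;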

&lt;h2&gt;Framework Integrations&lt;/h2&gt;

&lt;p&gt;Integrations are available for LangChain, LlamaIndex, LangGraph, CrewAI, Haystack, DSPy, and Semantic Kernel, plus an SDK Guard that wraps OpenAI, Anthropic, Bedrock, Gemini, and Cohere clients.&lt;/p&gt;
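
&lt;p&gt;The SDK Guard path reuses the same guard() wrapper shown earlier. By way of illustration with the Anthropic SDK (that the wrapper exposes the usual messages interface unchanged is my assumption):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# SDK Guard sketch. guard() is from the README; the pass-through .messages
# call is an assumption, not something checked against the docs.
import anthropic
from director_ai import guard

client = guard(anthropic.Anthropic())

reply = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=256,
    messages=[{"role": "user", "content": "List the capitals of the Nordic countries."}],
)
print(reply.content[0].text)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;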

&lt;h2&gt;Honest Limitations&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;NLI-only scoring needs knowledge-base grounding for domain-specific use (medical false-positive rate is 100% without a KB)&lt;/li&gt;
&lt;li&gt;ONNX inference on CPU is slow (383 ms/pair); a GPU is recommended&lt;/li&gt;
&lt;li&gt;Long documents need &amp;gt;=16 GB of VRAM&lt;/li&gt;
&lt;li&gt;Summarisation accuracy is the weakest (68.8% on AggreFact-CNN)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Quality&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;3,545 tests, 91% coverage&lt;/li&gt;
&lt;li&gt;Sigstore-signed releases, SLSA provenance&lt;/li&gt;
&lt;li&gt;OpenSSF Best Practices: 100%&lt;/li&gt;
&lt;li&gt;19 CI and security health badges&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Links&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/anulum/director-ai" rel="noopener noreferrer"&gt;github.com/anulum/director-ai&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docs&lt;/strong&gt;: &lt;a href="https://anulum.github.io/director-ai" rel="noopener noreferrer"&gt;anulum.github.io/director-ai&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PyPI&lt;/strong&gt;: &lt;a href="https://pypi.org/project/director-ai/" rel="noopener noreferrer"&gt;pypi.org/project/director-ai&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Licensed under AGPL-3.0, with commercial licensing available.&lt;/p&gt;

&lt;p&gt;Would love feedback from anyone working on LLM reliability, RAG pipelines, or AI safety!&lt;/p&gt;

</description>
      <category>machinelearning</category>
    </item>
  </channel>
</rss>
