<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: EnDevSols</title>
    <description>The latest articles on Forem by EnDevSols (@endevsols).</description>
    <link>https://forem.com/endevsols</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F13168%2F5a679e13-2080-46e5-b5fa-e42311e50439.png</url>
      <title>Forem: EnDevSols</title>
      <link>https://forem.com/endevsols</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/endevsols"/>
    <language>en</language>
    <item>
      <title>How We Automated Hallucination Detection in Enterprise RAG Pipelines</title>
      <dc:creator>Muhammad Muzammil</dc:creator>
      <pubDate>Wed, 29 Apr 2026 05:34:16 +0000</pubDate>
      <link>https://forem.com/endevsols/how-we-automated-hallucination-detection-in-enterprise-rag-pipelines-42ca</link>
      <guid>https://forem.com/endevsols/how-we-automated-hallucination-detection-in-enterprise-rag-pipelines-42ca</guid>
      <description>&lt;p&gt;Your RAG isn't broken. It's just lying quietly.&lt;/p&gt;

&lt;p&gt;Retrieval works. The LLM sounds confident. Your users get an answer.&lt;/p&gt;

&lt;p&gt;But somewhere in that response, a claim contradicts the source document it was supposed to be grounded in. No error thrown. No flag raised. Just a confident, wrong answer, delivered at scale.&lt;/p&gt;

&lt;p&gt;This is the hallucination problem that doesn't get talked about enough. Not the obvious failures. The subtle ones.&lt;/p&gt;

&lt;p&gt;We've seen it across enterprise RAG deployments: in legal tools, internal knowledge bases, and customer-facing assistants. The retrieval pipeline performs. The LLM performs. And still, trust erodes the moment a user catches one bad answer.&lt;/p&gt;

&lt;p&gt;We're open-sourcing &lt;a href="https://endevsols.com/open-source/longtracer" rel="noopener noreferrer"&gt;LongTracer&lt;/a&gt;, our answer to this problem.&lt;/p&gt;

&lt;p&gt;LongTracer sits at the output layer of any RAG pipeline and verifies every claim in an LLM response against your source documents. It uses a hybrid STS (semantic textual similarity) + NLI (natural language inference) approach: first finding the most semantically relevant source sentence for each claim, then classifying whether that source actually supports, contradicts, or is neutral to what the LLM said.&lt;/p&gt;
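
&lt;p&gt;As a rough illustration of that STS + NLI flow (a sketch, not the LongTracer API; the model choices and the &lt;code&gt;check_claim&lt;/code&gt; helper below are assumptions), a per-claim check built on sentence-transformers might look like this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Illustrative sketch of the STS + NLI idea, not the LongTracer API.
# Model names and the check_claim helper are assumptions for this example.
from sentence_transformers import SentenceTransformer, CrossEncoder, util

sts_model = SentenceTransformer("all-MiniLM-L6-v2")            # sentence similarity
nli_model = CrossEncoder("cross-encoder/nli-deberta-v3-base")  # entailment classifier
NLI_LABELS = ["contradiction", "entailment", "neutral"]

def check_claim(claim, source_sentences):
    # STS step: pick the source sentence most semantically similar to the claim.
    claim_emb = sts_model.encode(claim, convert_to_tensor=True)
    source_embs = sts_model.encode(source_sentences, convert_to_tensor=True)
    best_idx = int(util.cos_sim(claim_emb, source_embs)[0].argmax())
    best_source = source_sentences[best_idx]

    # NLI step: does that source support, contradict, or stay neutral to the claim?
    logits = nli_model.predict([(best_source, claim)])[0]
    return best_source, NLI_LABELS[int(logits.argmax())]
&lt;/code&gt;&lt;/pre&gt;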

&lt;p&gt;The result: a trust score, a verdict, and a clear list of exactly which claims hallucinated and why.&lt;/p&gt;

&lt;p&gt;No LLM calls. No vector store required. No new infrastructure. It works with LangChain, LlamaIndex, Haystack, LangGraph, or any pipeline that gives you a response and source chunks.&lt;/p&gt;

&lt;p&gt;MIT licensed. Built from real implementation experience.&lt;/p&gt;

&lt;p&gt;If you're running RAG in production, your users deserve answers you can actually stand behind.&lt;/p&gt;

&lt;p&gt;Try:&lt;br&gt;
&lt;code&gt;pip install longtracer&lt;/code&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>rag</category>
      <category>opensource</category>
      <category>python</category>
    </item>
  </channel>
</rss>
