<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: MCnad</title>
    <description>The latest articles on Forem by MCnad (@nad_mc).</description>
    <link>https://forem.com/nad_mc</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2814690%2Fafd54ce1-cdb4-4a05-bb34-37e7591040c1.jpeg</url>
      <title>Forem: MCnad</title>
      <link>https://forem.com/nad_mc</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/nad_mc"/>
    <language>en</language>
    <item>
      <title>Generic RAG Frameworks: Why They Can’t Catch On</title>
      <dc:creator>MCnad</dc:creator>
      <pubDate>Sun, 23 Mar 2025 13:24:49 +0000</pubDate>
      <link>https://forem.com/nad_mc/generic-rag-frameworks-why-they-cant-catch-on-83n</link>
      <guid>https://forem.com/nad_mc/generic-rag-frameworks-why-they-cant-catch-on-83n</guid>
      <description>&lt;p&gt;In the market for generic RAG frameworks, the different providers are fighting over who can provide 67% accuracy versus 65%. And when you run an off-the-shelf RAG framework on your use case, it will end up closer to 50% accuracy. Is this the best that the industry can do? &lt;/p&gt;

&lt;p&gt;Actually, yes, it is. The problem is that these frameworks are meant to be plug-and-play; you are supposed to be able to build the product once and then sell it to thousands of customers in different verticals. But it just doesn’t work well enough, and it’s unclear whether generic RAGs will ever be able to deliver on this promise. Let’s see why.&lt;/p&gt;

&lt;h2&gt;DIY&lt;/h2&gt;

&lt;p&gt;The problem with generic RAG frameworks is just that—they are generic and don’t incorporate specific know-how about the vertical or the industry from domain experts. When it comes to information retrieval, the more specific and informed the key/query design of your index, the better accuracy you’ll get. To do that, you need domain experts with continuous refinement processes.&lt;/p&gt;
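
&lt;p&gt;To make the idea concrete, here is a minimal, hypothetical sketch of expert-informed key design: a generic index matches raw words, while a domain-informed one expands them with vertical-specific synonyms curated by experts. &lt;code&gt;DOMAIN_SYNONYMS&lt;/code&gt;, &lt;code&gt;build_keys&lt;/code&gt;, and &lt;code&gt;matches&lt;/code&gt; are illustrative names, not any real framework's API.&lt;/p&gt;

```python
# Hypothetical sketch: domain experts encode vertical-specific synonyms
# into the index keys, so queries phrased differently still hit the
# right documents. Continuously refining this mapping is the expert loop.

DOMAIN_SYNONYMS = {
    # curated and continuously refined by domain experts
    "apr": {"interest", "rate"},
    "kyc": {"identity", "verification"},
}

def build_keys(doc_text):
    """Index a document under its own words plus expert-curated synonyms."""
    words = set(doc_text.lower().split())
    keys = set(words)
    for term, synonyms in DOMAIN_SYNONYMS.items():
        if term in words:
            keys.update(synonyms)
    return keys

def matches(query, doc_keys):
    """A query hits a document if any of its words appears in the doc's keys."""
    return any(w in doc_keys for w in query.lower().split())
```

&lt;p&gt;A document mentioning only "APR" now also answers queries about "interest", which is exactly the kind of mapping a generic framework cannot know in advance.&lt;/p&gt;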

&lt;p&gt;While these generic frameworks don’t deliver, RAG is not that hard to build in-house—and custom implementations are easier to optimize and perform much better than their generic counterparts. At the same time, engineers have every incentive to build RAG frameworks in-house: It doesn’t require extraordinary expertise to achieve good results, and it allows them to play around with interesting technology and improve their skills. &lt;/p&gt;

&lt;p&gt;Custom RAGs typically perform dramatically better than generic ones for a simple reason: the more case-specific your RAG’s design is, the more accurate it will be. &lt;/p&gt;

&lt;h2&gt;Are We Building Just For Fun?&lt;/h2&gt;

&lt;p&gt;If we want to build just for fun, that’s fine! But if we’re looking for production in real-world cases, accuracy must go higher.&lt;/p&gt;

&lt;p&gt;For example, if you were to build a generic RAG framework for the financial vertical, you might get to 90% accuracy based on this narrowing-down of the use case. But is that good enough for a big bank serving millions of customers—can they afford to get 1 answer wrong out of every 10? Wouldn’t it make more sense for them to build something in-house quickly and with more control, that could get close to 100% accuracy? In fact, they’d be crazy not to. &lt;/p&gt;

&lt;h2&gt;What It Takes to Reach 99.999% Accuracy&lt;/h2&gt;

&lt;p&gt;It’s possible to build a RAG that approaches 100% accuracy if you take the right approach and are willing to put in the effort rather than trying to avoid it. We’ve actually done so (and open-sourced it) with &lt;a href="https://github.com/emcie-co/parlant-qna" rel="noopener noreferrer"&gt;parlant-qna&lt;/a&gt;. Doing so, for us, involved making some strategic trade-offs worth elaborating on:&lt;/p&gt;

&lt;h2&gt;No Chunked Upstream Information&lt;/h2&gt;

&lt;p&gt;First, unlike generic plug-and-play RAG frameworks, we don't chunk upstream information at all; documents are sent whole to the LLM. This alone prevents many errors where context is lost across chunk boundaries.&lt;/p&gt;
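
&lt;p&gt;The shape of this choice can be sketched in a few lines. This is a minimal illustration, not parlant-qna's actual implementation, and &lt;code&gt;call_llm&lt;/code&gt; is a stand-in for whatever LLM client you use:&lt;/p&gt;

```python
# Sketch of the "no chunking" choice: each document enters the prompt whole,
# trading tokens (latency and cost) for context integrity. `call_llm` is a
# hypothetical stand-in, not a real API.

def answer(question, documents, call_llm):
    # Whole documents go into the prompt, so there are no chunk
    # boundaries for context to get lost across.
    context = "\n\n---\n\n".join(documents)
    prompt = (
        "Answer strictly based on the documents below.\n\n"
        f"{context}\n\n"
        f"Question: {question}"
    )
    return call_llm(prompt)
```

&lt;p&gt;The gain in accuracy comes at the price of larger context windows—the latency and cost trade-off acknowledged in the disclaimer below.&lt;/p&gt;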

&lt;h2&gt;Independent Q&amp;amp;A&lt;/h2&gt;

&lt;p&gt;Our second strategic choice was to manage questions and answers independently rather than heuristically parsing upstream knowledge bases (KBs). Think about it: you don’t write KB docs the way you speak in a conversation. If you want your RAG framework to be more accurate, or at least want to adjust conversational responses without conflicting with upstream KBs, you have to provide information in the same conversational register you expect your AI agent to respond in. We break from convention by having you roll up your sleeves and do exactly that as the fundamental methodology. It pays off in the long run.&lt;/p&gt;
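
&lt;p&gt;As a minimal sketch of what "independently managed" means in practice, curated pairs become first-class records rather than artifacts parsed out of KB docs. &lt;code&gt;QAEntry&lt;/code&gt; and &lt;code&gt;add_qa&lt;/code&gt; are illustrative names, not parlant-qna's actual schema:&lt;/p&gt;

```python
# Sketch: question/answer pairs are authored directly, in the conversational
# register the agent should speak in, and the store itself is the source of
# truth rather than a heuristically parsed knowledge base.
from dataclasses import dataclass, field

@dataclass
class QAEntry:
    question: str
    answer: str                              # written conversationally, by a human
    tags: list = field(default_factory=list)

qa_store = []

def add_qa(question, answer, tags=None):
    """Register a curated pair in the store."""
    entry = QAEntry(question, answer, list(tags or []))
    qa_store.append(entry)
    return entry
```
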

&lt;h2&gt;Dynamic Manual Tagging&lt;/h2&gt;

&lt;p&gt;The next decision we made was to support dynamic, manual tagging for every question. One of the big problems with RAG frameworks is what to do when they get answers wrong. With plug-and-play RAG frameworks, updating the framework usually means rerunning the entire system to re-parse your KBs, and often you’ll end up breaking something else. So we added a marker that lets you pin a particular response to specific queries. This helps you deploy a fix to production quickly while buying time to iterate on that question’s results in your automatic, predictive retrieval mechanisms. &lt;/p&gt;
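
&lt;p&gt;Here's a toy sketch of the pinning idea, with hypothetical names (&lt;code&gt;pin&lt;/code&gt;, &lt;code&gt;respond&lt;/code&gt;) that do not reflect parlant-qna's real API:&lt;/p&gt;

```python
# Sketch of manual tagging as a production hot-fix: a hand-approved answer
# pinned to a query takes precedence over the predictive retriever, so a bad
# retrieval can be fixed immediately without re-indexing anything.

pinned = {}  # normalized query -> hand-approved answer

def pin(query, answer):
    """Pin an exact answer to a query (the manual hot-fix)."""
    pinned[query.strip().lower()] = answer

def respond(query, retrieve):
    """Manual pins win; the automatic retriever is the fallback."""
    key = query.strip().lower()
    if key in pinned:
        return pinned[key]
    return retrieve(query)
```

&lt;p&gt;The pin ships the fix now; you then iterate on the retriever at leisure and remove the pin once it answers correctly on its own.&lt;/p&gt;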

&lt;h2&gt;Knowledge-Base Optimization (rather than a perfected RAG)&lt;/h2&gt;

&lt;p&gt;Lastly, we advise our users to spend time curating and optimizing their knowledge-base. When discussing creating RAG frameworks and AI agents in general, we talk a lot—perhaps too much—about creating the perfect algorithms that make them work. &lt;/p&gt;

&lt;p&gt;But the key to creating RAG that actually returns highly accurate answers lies in curating the knowledge base, not in creating a perfect generic algorithm. Engineering teams will often spend months trying to create the ideal RAG framework when, in reality, dedicating a tenth of the time to manual knowledge base curation would get them better, more easily maintainable results—much faster. Alas, while some engineers may find this approach uninspiring, from a resource management and time-to-market perspective it’s often the smarter choice.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Disclaimer: It Does Come At A Cost&lt;/strong&gt;&lt;br&gt;
Some of the practices we’ve mentioned above do make RAG more accurate, but they can also increase latency and cost. That said, new hardware such as Cerebras and newer LLMs such as Gemini 2.0 Flash are making these trade-offs increasingly practical.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fza3qealzgqjm5mp6cwxr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fza3qealzgqjm5mp6cwxr.png" alt="Image description" width="629" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here’s a framework for assessing whether an approach like Parlant QnA’s is right for your use case:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Risk assessment:&lt;/strong&gt; How severe are the consequences of an incorrect response?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Volume considerations:&lt;/strong&gt; How many queries will your system handle daily? (As noted above, recent hardware and newer LLMs are making this approach more practical anyway.)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Expertise availability:&lt;/strong&gt; Do you have domain experts ready to curate and edit the QnAs properly?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Time constraints:&lt;/strong&gt; How quickly do you need to deploy?&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Is 70% 'Good Enough'?&lt;/h2&gt;

&lt;p&gt;Perhaps the most frustrating aspect of the conversation about RAG frameworks is the industry's expectation that 70% accuracy is 'good enough'. That expectation is flat-out absurd.&lt;/p&gt;

&lt;p&gt;I've heard many say, "70% is great because human representatives score even lower." While that could well be true for some use cases (note that the industry averages for CSAT and FCR are around 80%), there are two important points to consider for GenAI agents.&lt;br&gt;
First, if a human agent makes an error, various types of insurance can cover the business's losses.&lt;br&gt;
Second, even when a human agent is only 70% accurate, the remaining 30% of errors are typically limited in impact. It's not often that a human representative will agree to sell a customer a truck for $1, but low-accuracy AI agents have been known to do so. Businesses are therefore much more apprehensive about GenAI agents getting it wrong.&lt;/p&gt;

&lt;p&gt;For real-life use cases, we need to start talking about AI accuracy the same way we talk about most service-level agreements (SLAs): in terms of the number of nines. Instead of 67% versus 65%, we should start building systems that compete between 99.99% and 99.999%, and be willing to make the relative sacrifices in the meantime—such as in response latency—while AI hardware catches up to match the accuracy with the desired response speed.&lt;/p&gt;
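
&lt;p&gt;The nines framing is easy to put into numbers. A quick illustration of the expected faulty-response counts per 100 million monthly conversations:&lt;/p&gt;

```python
# Expected faulty responses per 100 million conversations
# at each accuracy level: each added "nine" cuts errors tenfold.

def faulty(accuracy, volume=100_000_000):
    return round((1 - accuracy) * volume)

for acc in (0.70, 0.90, 0.9999, 0.99999):
    print(f"accuracy {acc:.3%} -> {faulty(acc):,} faulty responses")
```

&lt;p&gt;At 70% accuracy that's 30 million faulty responses; at 99.999%, only a thousand.&lt;/p&gt;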

&lt;p&gt;As I illustrated before, a large bank running 100 million conversations every month can’t afford 10 million faulty responses. Given that each faulty response can create reputational and legal risk (especially in regulated industries), such low-accuracy approaches are simply not good enough for real-world applications.&lt;/p&gt;

&lt;h2&gt;Retrieving RAG to Its Full Potential&lt;/h2&gt;

&lt;p&gt;If accuracy is mission-critical, enterprises can't afford to gamble on generic, plug-and-play RAG to power their customer-facing AI agents. A best-effort attempt at retrieval doesn’t cut it—especially when subtle errors can mean financial, legal, or reputational risk.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>showdev</category>
      <category>rag</category>
      <category>genai</category>
    </item>
    <item>
      <title>AI Agents vs. Real Customers: What Could Possibly Go Wrong?</title>
      <dc:creator>MCnad</dc:creator>
      <pubDate>Fri, 14 Feb 2025 10:29:38 +0000</pubDate>
      <link>https://forem.com/nad_mc/llm-misalignment-can-companies-really-let-ai-agents-face-their-customers-416o</link>
      <guid>https://forem.com/nad_mc/llm-misalignment-can-companies-really-let-ai-agents-face-their-customers-416o</guid>
      <description>&lt;p&gt;&lt;em&gt;Today, a 70% accuracy rate is often considered a success for large language models (LLMs) in human interactions. But, for well-established brands, this threshold introduces serious reputational and legal risks. So, how close are we to truly reliable, AI-driven customer interactions?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2jdjbrrere1l57bw24pl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2jdjbrrere1l57bw24pl.png" alt="A conversation between LLM and a human user showing the user asks " width="800" height="258"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AI alignment isn’t just about tuning a model; it’s about ensuring that an AI chat agent, for example, consistently conforms to a company’s needs (and rules) and, just as importantly, serves its customers effectively. Unlike human support agents, who can quickly adapt and clarify misunderstandings, LLMs can spiral into frustrating loops, expose sensitive information, or completely misinterpret user intent if they aren’t properly aligned.&lt;/p&gt;

&lt;h2&gt;Is the Technology Ready?&lt;/h2&gt;

&lt;p&gt;The LLM revolution accelerated with the introduction of Transformer models in the seminal paper &lt;em&gt;Attention Is All You Need&lt;/em&gt; (2017), paving the way for advanced AI applications like ChatGPT and Claude.&lt;/p&gt;

&lt;p&gt;But is this technology truly ready to autonomously handle customer interactions? The short answer: Yes. But it’s not just about AI capabilities; it’s about how LLMs are utilized as part of a larger solution architecture. Even when LLMs are given explicit context and instructions tailored to one’s needs, this data must be meticulously structured in a way that’s adapted to a model’s inherent attributes and tendencies. Without a robust framework, misalignment and hallucinations are inevitable.&lt;/p&gt;

&lt;p&gt;Humans navigate conversations seamlessly because we intuitively understand context, intent, and social norms. LLMs, however, struggle with consistency even when given extensive instructions and guidelines (and sometimes struggle more because of them).&lt;/p&gt;

&lt;h2&gt;More Reasons Why Current AI Methodologies Fall Short When Facing Customers&lt;/h2&gt;

&lt;p&gt;Most LLM applications today prioritize efficiency and response speed over implementing mechanisms for accuracy and consistency. This creates several challenges in complex use cases, such as customer service:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;LLMs struggle with complex reasoning.&lt;/strong&gt; Without additional alignment and real-time evaluation mechanisms, they easily lose focus when given more than a few explicit instructions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Conflicting instructions lead to inconsistencies.&lt;/strong&gt; LLMs aren’t inherently good or consistent at resolving priority conflicts in instructions within prompts.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;One common approach to handling these challenges is to break an LLM application's architecture into structured flowcharts. Each stage in a customer service interaction is then guided by a specialized prompt. However, this approach ironically degrades the customer experience, making LLMs feel no different from older flow-based chatbots.&lt;br&gt;
The reason lies in the inherent limitations of flowchart-based structures:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Intent detection is unreliable.&lt;/strong&gt; Customers often have multiple, evolving intents that require dynamic handling rather than rigid, singular classification.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Guiding the conversation is impractical.&lt;/strong&gt; Many businesses would want AI agents to proactively shape conversations and guide users, such as countering with questions rather than immediately satisfying requests, but intent-based models aren’t well-suited for this level of experience.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Context-switching is unnatural.&lt;/strong&gt; Flow-based execution struggles to maintain coherent conversations when users switch between different topics or tasks. This can lead to interactions that feel disjointed and out of touch and, consequently, to poor customer experience.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is why even AI customer service solutions with a 70% accuracy rate are often considered “successful” today by AI vendors. But in real-world deployments, that standard is far too low.&lt;/p&gt;

&lt;h2&gt;We Can’t Settle for 70% Accuracy&lt;/h2&gt;

&lt;p&gt;For enterprises managing one million conversations daily, 70% accuracy means that 300,000 conversations are not handled reliably! Within this misaligned 30%, some mistakes can be critical: violating company policy, stating false facts (like quoting a bank client the wrong account balance), or even breaching regulations.&lt;/p&gt;

&lt;p&gt;As previously described, an LLM is not a human brain, and we need to understand how to better feed it, utilize it, and optimize it.&lt;/p&gt;

&lt;h2&gt;Treating LLMs as One Component of a Larger System&lt;/h2&gt;

&lt;p&gt;At &lt;a href="https://www.parlant.io/" rel="noopener noreferrer"&gt;Parlant&lt;/a&gt;, an open-source guidance framework for customer-facing LLM agents, we approach the problem differently. Instead of relying solely on LLMs to process and generate responses, we integrate them into a broader AI system with multiple moving parts. This methodology enables:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Dynamic instruction filtering.&lt;/strong&gt; In real-world deployments, AI agents must handle dozens to hundreds of instructions. Our system sorts and prioritizes only the most relevant ones for each conversation, keeping the model focused on what it really needs to do at any given point.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Self-critique and prioritization mechanisms.&lt;/strong&gt; Standard LLMs prioritize instructions and information placed later in a prompt, often ignoring earlier context. Our approach introduces Attentive Reasoning Queries (stay tuned for the upcoming research paper publication), which dynamically refocus the model’s attention, ensuring all critical guidelines are applied consistently.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Feedback loop for continuous improvement.&lt;/strong&gt; Instead of only evaluating model outputs in retrospect, our system analyzes in real-time how well the model adhered to each and every instruction, not only giving operators crucial feedback and insights on the model’s interpretations but also improving the model’s ability to bounce back from what could have been misguided responses to the customer.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
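
&lt;p&gt;The first item above—dynamic instruction filtering—can be sketched in miniature. The word-overlap scorer here is a toy stand-in for Parlant's actual matching logic, purely to illustrate the shape of the mechanism:&lt;/p&gt;

```python
# Toy sketch of dynamic instruction filtering: score each guideline against
# the current conversation and keep only the relevant ones in the prompt,
# so the model stays focused on what it actually needs to do right now.

def relevance(guideline, conversation):
    """Toy relevance score: word overlap between guideline and conversation."""
    g = set(guideline.lower().split())
    c = set(conversation.lower().split())
    return len(g.intersection(c))

def select_guidelines(guidelines, conversation, top_k=3):
    """Return the top_k guidelines with nonzero relevance to the conversation."""
    scored = sorted(guidelines, key=lambda g: relevance(g, conversation), reverse=True)
    return [g for g in scored[:top_k] if relevance(g, conversation) > 0]
```

&lt;p&gt;With dozens or hundreds of guidelines in play, this kind of pre-selection is what keeps the prompt from drowning the model in instructions it can't all follow at once.&lt;/p&gt;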

&lt;p&gt;By treating LLMs as part of a larger engine rather than the engine itself, &lt;a href="https://www.parlant.io/" rel="noopener noreferrer"&gt;Parlant&lt;/a&gt; is able to significantly improve reliability and accuracy in customer-facing scenarios.&lt;/p&gt;

&lt;h2&gt;Maximizing Alignment to Minimize Risk&lt;/h2&gt;

&lt;p&gt;If companies want truly autonomous, reliable AI-driven customer interactions, they can’t settle for a 70% success rate. At Parlant, we’re pushing the boundaries of AI alignment to set a new benchmark for accuracy and trustworthiness.&lt;br&gt;
Want to learn more? Feel free to explore our open-source project at &lt;a href="https://github.com/emcie-co/parlant" rel="noopener noreferrer"&gt;https://github.com/emcie-co/parlant&lt;/a&gt;. &lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>ai</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Are LLMs Really Doomed?</title>
      <dc:creator>MCnad</dc:creator>
      <pubDate>Mon, 10 Feb 2025 15:14:45 +0000</pubDate>
      <link>https://forem.com/nad_mc/are-llms-really-doomed-j77</link>
      <guid>https://forem.com/nad_mc/are-llms-really-doomed-j77</guid>
      <description>&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/Yann_LeCun" rel="noopener noreferrer"&gt;Yann LeCun&lt;/a&gt;, Chief AI Scientist at Meta and respected pioneer in AI research, recently stated that autoregressive LLMs (Large Language Models) are doomed because the probability of generating a sequence of tokens that represents a satisfying answer decreases exponentially by the token. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl2qsge8gi9kh82mgimrd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl2qsge8gi9kh82mgimrd.png" alt="Image description" width="800" height="459"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While I hold LeCun in especially high regard, and resonate with many of the insights he shared at the summit, I disagree with him on this particular point.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Yann LeCun giving a keynote at the AI Action Summit&lt;/em&gt;&lt;br&gt;
Although he qualified his statement with "assuming independence of errors" (in each token generation), this, precisely, was the wrong turn in his analysis. Autoregressive LLMs do not actually diverge in the way he implied, and we can demonstrate it.&lt;/p&gt;

&lt;h2&gt;What is Autoregression?&lt;/h2&gt;

&lt;p&gt;Under the hood, an LLM is a statistical prediction model that is trained to generate a completion for a given text of any (practical) length. We can say that an LLM is a function that accepts text up to a pre-defined length (a context) and outputs a single token out of a pre-defined vocabulary. Once it has generated a new token, it feeds it back into its input context, and generates the next one, and so on and so forth, until something tells it to stop, thus generating (hopefully) coherent sentences, paragraphs, and pages of text.&lt;/p&gt;
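
&lt;p&gt;The loop just described can be sketched in a few lines. &lt;code&gt;toy_model&lt;/code&gt; here is a fabricated stand-in for a real next-token predictor; only the generation loop itself mirrors how autoregression works:&lt;/p&gt;

```python
# Minimal autoregressive loop: the model is a function from context to next
# token; each generated token is fed back into the context until a stop
# signal. `toy_model` is a fake predictor for illustration only.

def toy_model(context):
    """Pretend next-token predictor that completes a fixed sentence."""
    canned = ["the", "sky", "is", "blue", "[STOP]"]
    return canned[min(len(context), len(canned) - 1)]

def generate(model, context=None, max_tokens=10):
    context = list(context or [])
    for _ in range(max_tokens):
        token = model(context)
        if token == "[STOP]":       # "something tells it to stop"
            break
        context.append(token)       # feed the new token back into the input
    return context
```
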

&lt;p&gt;For a deeper walkthrough of this process, &lt;a href="https://www.parlant.io/blog/what-is-autoregression" rel="noopener noreferrer"&gt;see our recent post on autoregression&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;Convergent or Divergent?&lt;/h2&gt;

&lt;p&gt;What LeCun is saying, then, can be unpacked as follows.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Given the set C of all completions of length N (tokens),&lt;/li&gt;
&lt;li&gt;Given the subset A ⊂ C of all "acceptable" completions within C (A = C - U, where U ⊂ C is the subset of unacceptable completions),&lt;/li&gt;
&lt;li&gt;Let Ci be the completion we are now generating, token by token. Assume that Ci currently contains K&amp;lt;N completed tokens such that Ci is (still) an acceptable completion (Ci ∈ A),&lt;/li&gt;
&lt;li&gt;Suppose some independent constant E (for error) as the probability of generating the next token such that it causes Ci to diverge and become unacceptable (Ci ∈ U),&lt;/li&gt;
&lt;li&gt;Then, generating the next token of Ci at K+1 is (1-E) likely to maintain the acceptability of Ci as a valid and correct completion,&lt;/li&gt;
&lt;li&gt;Likewise, generating all remaining tokens R = N - K such that Ci stays acceptable has the probability of (1-E)^R.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;In Simpler Terms&lt;/strong&gt;&lt;br&gt;
If we always have, say, a 99% chance to generate a single next token such that the completion stays acceptable, then generating 100 next tokens brings our chance down to 0.99^100, or roughly 36%. If we generate 1,000 tokens, then by this logic there is only a roughly 0.004% chance that our final completion is acceptable!&lt;/p&gt;

&lt;p&gt;Do you see the problem here? Many of us have generated 1k-token completions that have been perfectly fine. Could we all have landed on the lucky side of 0.004%, or is something else going on? Moreover, what about techniques like Chain-of-Thought (CoT) and reasoning models? Notice how they generate hundreds if not thousands of tokens before converging to a response that is often more correct.&lt;/p&gt;
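
&lt;p&gt;A quick sanity check of the constant-error arithmetic, computing the survival probability (1-E)^R for E = 0.01:&lt;/p&gt;

```python
# Survival probability of a completion under the constant-error assumption:
# each token independently stays acceptable with probability (1 - E).

def survival(error_rate, tokens):
    return (1 - error_rate) ** tokens

print(f"{survival(0.01, 100):.2%}")    # 100 tokens: roughly 36.6%
print(f"{survival(0.01, 1000):.4%}")   # 1,000 tokens: roughly 0.0043%
```

&lt;p&gt;The numbers collapse exactly as LeCun's argument predicts—but only if E really is constant, which is the assumption the rest of this post challenges.&lt;/p&gt;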

&lt;p&gt;The problem here is precisely with assuming that E is constant. It is not.&lt;/p&gt;

&lt;p&gt;LLMs, due to their attention mechanism, have a way to bounce back even from initial completions that we would find unacceptable. This is exactly what techniques like CoT or CoV (Chain-of-Verification) do—they lead the model to generate new tokens that will actually increase the completion's likelihood to converge and ultimately be acceptable.&lt;/p&gt;

&lt;p&gt;We know it first hand from developing the Attentive Reasoning Queries (ARQs) technique which we use in Parlant. We get the model to generate, on its own, a structured thinking process of our design, which keeps it convergent throughout the generation process.&lt;/p&gt;

&lt;p&gt;Depending on your prompting technique and completion schema, not only do you not have to drop to a 0.004% acceptance rate; you can actually stay quite close to 100%.&lt;/p&gt;

&lt;p&gt;By &lt;a href="https://www.linkedin.com/in/yam-marcovic/" rel="noopener noreferrer"&gt;Yam Marcovitz&lt;/a&gt;, Tech Lead at Parlant.io&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>showdev</category>
      <category>rag</category>
    </item>
  </channel>
</rss>
