<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Shrinivas Nadager</title>
    <description>The latest articles on Forem by Shrinivas Nadager (@shrinivas_nadager_4afb107).</description>
    <link>https://forem.com/shrinivas_nadager_4afb107</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3658820%2Fe8975f19-92a7-49ad-af46-d9b9559e9c9b.png</url>
      <title>Forem: Shrinivas Nadager</title>
      <link>https://forem.com/shrinivas_nadager_4afb107</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/shrinivas_nadager_4afb107"/>
    <language>en</language>
    <item>
      <title>LLMs Hallucinate. RAG Fixes That — Here’s How We Built a Reliable Healthcare AI</title>
      <dc:creator>Shrinivas Nadager</dc:creator>
      <pubDate>Fri, 12 Dec 2025 18:39:24 +0000</pubDate>
      <link>https://forem.com/shrinivas_nadager_4afb107/how-rag-is-transforming-the-power-of-llms-for-real-world-healthcare-5c7h</link>
      <guid>https://forem.com/shrinivas_nadager_4afb107/how-rag-is-transforming-the-power-of-llms-for-real-world-healthcare-5c7h</guid>
      <description>&lt;p&gt;Large Language Models (LLMs) changed the world — but Retrieval-Augmented Generation (RAG) is what makes them truly useful in real-world applications.&lt;/p&gt;

&lt;p&gt;Today, I'm excited to introduce Sanjeevani AI, our RAG-powered intelligent chat system designed to deliver accurate, context-aware, Ayurvedic-backed health insights. It’s fast, reliable, domain-specialized, and most importantly — built for real end-users who need clarity, not hallucinations.&lt;/p&gt;

&lt;p&gt;In this article, I’ll break down:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Why RAG is becoming the backbone of modern AI systems&lt;/li&gt;
&lt;li&gt;How RAG boosts accuracy, reliability, and trust&lt;/li&gt;
&lt;li&gt;How we built and optimized Sanjeevani AI&lt;/li&gt;
&lt;li&gt;The real-world impact on users&lt;/li&gt;
&lt;li&gt;Why RAG-based systems are the future&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  The Problem with Standard LLMs: Hallucinations &amp;amp; Inconsistency
&lt;/h2&gt;

&lt;p&gt;LLMs like GPT, Claude, and LLaMA are incredibly powerful — but they have one big flaw:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;They don’t know what they don’t know.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;When an LLM lacks domain-specific information (health, finance, law, agriculture, etc.), it tries to “guess.”&lt;br&gt;
And that guess often results in hallucinations — wrong answers delivered with total confidence.&lt;/p&gt;

&lt;p&gt;In a domain like healthcare, hallucinations are unacceptable.&lt;/p&gt;

&lt;p&gt;This is where Retrieval-Augmented Generation (RAG) becomes a game-changer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7qu6af2yj9m91t0rt76f.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7qu6af2yj9m91t0rt76f.jpg" alt=" " width="738" height="1600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What RAG Actually Does
&lt;/h2&gt;

&lt;p&gt;RAG makes LLMs smarter by connecting them to an external knowledge base.&lt;/p&gt;

&lt;p&gt;Here’s the simple workflow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;User asks a question →&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;System retrieves relevant documents from a verified dataset →&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The LLM uses those documents to produce an answer →&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The result is factual, grounded, and context-accurate&lt;br&gt;
No guessing. No hallucinating. No generic responses.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
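
&lt;p&gt;The four steps above can be sketched in a few lines of Python. Everything here is a toy stand-in — the bag-of-words &lt;code&gt;embed&lt;/code&gt; is hypothetical, not a real embedding model — but the retrieve-then-ground shape is the same:&lt;/p&gt;

```python
import math

# Toy verified "knowledge base" (step 2 retrieves from here).
DOCS = [
    "Ginger tea and smaller meals can ease mild acidity.",
    "Turmeric is traditionally used to support joint health.",
    "Regular sleep timing supports overall digestion.",
]

VOCAB = ["acidity", "ginger", "turmeric", "sleep", "digestion", "joint"]

def embed(text):
    # Hypothetical stand-in embedding: bag-of-words over a tiny vocabulary.
    # A real system would call a sentence-transformer model here.
    words = text.lower().replace("?", " ").replace(".", " ").split()
    return [float(words.count(w)) for w in VOCAB]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, k=2):
    # Steps 1-2: rank every document against the question, keep the top-k.
    # Steps 3-4 would pass these documents to the LLM as grounding context.
    q = embed(question)
    ranked = sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

print(retrieve("What helps with acidity?")[0])
```

&lt;p&gt;Swap &lt;code&gt;embed&lt;/code&gt; for a real sentence-transformer and &lt;code&gt;DOCS&lt;/code&gt; for a vector database and you have the production shape of the pipeline.&lt;/p&gt;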

&lt;p&gt;RAG turns an LLM into a domain expert, even if it wasn’t trained on that domain originally.&lt;/p&gt;

&lt;p&gt;This idea is so powerful that almost every modern AI company — from OpenAI to Meta — is now pushing RAG-based systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introducing Sanjeevani AI — A RAG-Powered Health Companion
&lt;/h2&gt;

&lt;p&gt;Sanjeevani AI is our AI system built to empower users with safe, reliable, and personalized health information rooted in Ayurveda and modern wellness science.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What makes Sanjeevani AI unique?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Uses RAG for domain-accurate responses&lt;/li&gt;
&lt;li&gt;Powered by vector embeddings + semantic search&lt;/li&gt;
&lt;li&gt;Integrates LLMs for natural conversation&lt;/li&gt;
&lt;li&gt;Built with a curated Ayurvedic knowledge base&lt;/li&gt;
&lt;li&gt;Supports symptom-based queries&lt;/li&gt;
&lt;li&gt;Provides lifestyle tips, remedies, herbs, and diet suggestions&lt;/li&gt;
&lt;li&gt;Built on a full-stack setup using Python, Flask, Supabase, and LLaMA&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The result?&lt;/p&gt;

&lt;p&gt;Users get precise, trustworthy answers, backed by real medical text—not random LLM predictions.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Our RAG Pipeline Works
&lt;/h2&gt;

&lt;p&gt;Here’s the simplified architecture Sanjeevani AI uses:&lt;/p&gt;

&lt;p&gt;User Question → Text Preprocessing → Vector Search in Ayurvedic Database → Top-k Relevant Chunks Retrieved → LLM Generates Context-Aware Response → Final Answer&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Vector Database&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We store Ayurvedic texts, symptom guides, food recommendations, herb details, and lifestyle protocols as embedding vectors.&lt;/p&gt;
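
&lt;p&gt;Before any of that is stored, long texts get split into retrievable chunks. Here is a rough character-window sketch — the sizes are illustrative, not our production settings:&lt;/p&gt;

```python
def chunk(text, size=80, overlap=20):
    """Split a document into overlapping character windows.

    The overlap keeps a sentence that straddles a boundary retrievable
    from at least one chunk. Real pipelines often split on sentences or
    tokens instead of raw characters.
    """
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

doc = "Triphala is a classical Ayurvedic formulation of three fruits. " * 4
pieces = chunk(doc)
print(len(pieces), "chunks")  # prints: 4 chunks
```

&lt;p&gt;Each chunk is then embedded and written to the vector store alongside its source metadata.&lt;/p&gt;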

&lt;p&gt;&lt;strong&gt;Semantic Search&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When the user asks something, the system retrieves the most relevant knowledge chunks instantly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;LLM integration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The LLM (LLaMA-based) reads both the question and retrieved context → then produces a grounded, accurate response.&lt;/p&gt;
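
&lt;p&gt;The context-injection step itself can be as simple as a prompt template. The wording below is illustrative, and the actual LLaMA call is left out:&lt;/p&gt;

```python
def build_prompt(question, chunks):
    # Inject the retrieved chunks above the question so the model answers
    # from the supplied context rather than from parametric memory.
    context = "\n".join(f"- {c}" for c in chunks)
    return (
        "You are a careful Ayurvedic health assistant.\n"
        "Answer ONLY from the context below. If the context does not\n"
        "cover the question, say so instead of guessing.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

prompt = build_prompt(
    "What should I take for mild acidity?",
    ["Ginger tea and smaller, earlier meals can ease mild acidity."],
)
print(prompt)
```

&lt;p&gt;That explicit instruction to refuse when the context is silent is what turns retrieval into a hallucination guard.&lt;/p&gt;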

&lt;p&gt;This solves hallucinations while still keeping the natural fluency of LLMs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Real-World Use Cases (Where RAG Truly Shines)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Symptom-based suggestions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Users can ask:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“I have acidity and mild headache. What should I do?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Sanjeevani AI retrieves remedies, herbs, and lifestyle recommendations backed by texts — not guesses.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Dietary and lifestyle planning&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Users can ask:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“What foods reduce inflammation naturally?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;RAG ensures the response is pulled from credible knowledge sources.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tech Stack (For Devs Who Love Details)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Backend: Python + Flask&lt;/li&gt;
&lt;li&gt;Database: Supabase&lt;/li&gt;
&lt;li&gt;Vector Search: Chroma &amp;amp; Pinecone&lt;/li&gt;
&lt;li&gt;Embeddings: Sentence Transformers / LLaMA-based&lt;/li&gt;
&lt;li&gt;LLM: LLaMA 4 (20B parameters)&lt;/li&gt;
&lt;li&gt;Frontend: React Native (app and web)&lt;/li&gt;
&lt;li&gt;RAG Pipeline: Custom-built retrieval + context injection&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Everything is modular, scalable, and production-ready.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh2iosth0v8r52w5dcojt.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh2iosth0v8r52w5dcojt.jpg" alt=" " width="540" height="1170"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Impact on End Users: Reliability, Safety &amp;amp; Trust
&lt;/h2&gt;

&lt;p&gt;End users don’t care about embeddings or vector stores.&lt;br&gt;
They care about one thing:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Can I trust the answer?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Sanjeevani AI ensures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Accurate health information&lt;/li&gt;
&lt;li&gt;Clear explanations&lt;/li&gt;
&lt;li&gt;Personalized, actionable recommendations&lt;/li&gt;
&lt;li&gt;Zero hallucinations&lt;/li&gt;
&lt;li&gt;Fast responses&lt;/li&gt;
&lt;li&gt;Easy-to-use interface&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When technology becomes reliable, users feel empowered — and that’s the true purpose of AI.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts: RAG Isn’t Just an Add-On — It’s a Breakthrough
&lt;/h2&gt;

&lt;p&gt;Sanjeevani AI is proof that when you combine LLMs + RAG + domain knowledge:&lt;/p&gt;

&lt;p&gt;You unlock smart, safe, and specialized AI systems that deliver real value to real people.&lt;/p&gt;

&lt;p&gt;AI is evolving fast, but RAG is what makes it practical.&lt;/p&gt;

&lt;p&gt;If you’re building anything with LLMs — chatbots, assistants, automation, knowledge tools — start with RAG first.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It changes everything.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>rag</category>
      <category>ai</category>
      <category>llm</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Zed vs VS Code: Which Editor Should You Use in 2026?</title>
      <dc:creator>Shrinivas Nadager</dc:creator>
      <pubDate>Fri, 12 Dec 2025 15:34:32 +0000</pubDate>
      <link>https://forem.com/shrinivas_nadager_4afb107/zed-vs-vs-code-which-editor-should-you-use-in-2026-3e6b</link>
      <guid>https://forem.com/shrinivas_nadager_4afb107/zed-vs-vs-code-which-editor-should-you-use-in-2026-3e6b</guid>
      <description>&lt;p&gt;Developers today need more than syntax highlighting and extensions — they want speed, focus, collaboration, and built-in AI support. VS Code has been the de facto standard for years, but Zed is emerging as a compelling alternative with a fresh vision for modern development.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. What’s the Core Difference?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;VS Code — feature-rich, extensible, massive ecosystem, and deeply integrated into many development workflows. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Zed — built from scratch in Rust with GPU-accelerated rendering, focusing on speed, responsiveness, and native support for collaboration and AI &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  2. Performance &amp;amp; Resource Usage
&lt;/h2&gt;

&lt;p&gt;Zed launches faster, consumes less memory, and delivers near-instant typing and navigation even in large projects — a stark contrast to VS Code, which can slow down with heavy extensions. &lt;/p&gt;

&lt;h2&gt;
  
  
  3. Built-In Collaboration &amp;amp; AI
&lt;/h2&gt;

&lt;p&gt;Zed’s multiplayer editing and integrated AI assistance are part of the core experience — no extra plugins required. VS Code still relies on extensions like Live Share and Copilot for these capabilities. &lt;/p&gt;

&lt;h2&gt;
  
  
  4. Ecosystem &amp;amp; Maturity
&lt;/h2&gt;

&lt;p&gt;VS Code’s huge extension marketplace is hard to beat. Zed is younger with a smaller ecosystem, though it continues evolving rapidly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnrvj8nrryydqz1hbk5or.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnrvj8nrryydqz1hbk5or.png" alt=" " width="800" height="530"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Want the full detailed comparison with benchmarks, screenshots, and decision framework?&lt;br&gt;
Read the complete article on my blog:&lt;br&gt;
🔗&lt;a href="https://www.thesgn.blog/blog/vscode_zed" rel="noopener noreferrer"&gt; https://www.thesgn.blog/blog/vscode_zed&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>programming</category>
      <category>webdev</category>
      <category>productivity</category>
      <category>vscode</category>
    </item>
  </channel>
</rss>
