<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Prateek Mangalgi</title>
    <description>The latest articles on Forem by Prateek Mangalgi (@prateek_mangalgi).</description>
    <link>https://forem.com/prateek_mangalgi</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3764737%2F0c39ee4e-8cdb-493b-895b-0b7253422cc9.png</url>
      <title>Forem: Prateek Mangalgi</title>
      <link>https://forem.com/prateek_mangalgi</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/prateek_mangalgi"/>
    <language>en</language>
    <item>
      <title>I Built a Real-Time Video Calling App Using WebRTC in React Native, And It Was Harder Than I Expected</title>
      <dc:creator>Prateek Mangalgi</dc:creator>
      <pubDate>Sat, 21 Feb 2026 17:01:56 +0000</pubDate>
      <link>https://forem.com/prateek_mangalgi/i-built-a-real-time-video-calling-app-using-webrtc-in-react-native-and-it-was-harder-than-i-lph</link>
      <guid>https://forem.com/prateek_mangalgi/i-built-a-real-time-video-calling-app-using-webrtc-in-react-native-and-it-was-harder-than-i-lph</guid>
      <description>&lt;p&gt;Most developers have used apps like Zoom, Google Meet, or WhatsApp calls.&lt;/p&gt;

&lt;p&gt;But building one?&lt;/p&gt;

&lt;p&gt;That’s a completely different story.&lt;/p&gt;

&lt;p&gt;When I started working on my React Native WebRTC app, I thought:&lt;/p&gt;

&lt;p&gt;“It’s just video streaming, right?”&lt;/p&gt;

&lt;p&gt;I was wrong.&lt;/p&gt;

&lt;p&gt;Very wrong.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Moment I Realized This Isn’t Just Another App&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In a normal app:&lt;/p&gt;

&lt;p&gt;You send a request → server responds&lt;/p&gt;

&lt;p&gt;In WebRTC:&lt;/p&gt;

&lt;p&gt;Two devices talk directly&lt;/p&gt;

&lt;p&gt;No middleman.&lt;br&gt;
No API response cycle.&lt;br&gt;
No “simple backend”.&lt;/p&gt;

&lt;p&gt;Just two peers trying to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Discover each other&lt;/li&gt;
&lt;li&gt;Negotiate a connection&lt;/li&gt;
&lt;li&gt;Exchange network details&lt;/li&gt;
&lt;li&gt;Stream audio/video in real time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That’s when it hit me:&lt;/p&gt;

&lt;p&gt;This isn’t frontend or backend.&lt;br&gt;
This is network-level engineering.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What WebRTC Actually Does&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;WebRTC (Web Real-Time Communication) allows devices to communicate peer-to-peer with ultra-low latency.&lt;/p&gt;

&lt;p&gt;Which means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your phone can directly stream video to another device&lt;/li&gt;
&lt;li&gt;Without routing media through a server&lt;/li&gt;
&lt;li&gt;With built-in encryption&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And that’s powerful.&lt;/p&gt;

&lt;p&gt;But also complex.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Architecture Behind My WebRTC App&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;My app follows a classic but important structure:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Client Layer (React Native)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is where everything starts.&lt;/p&gt;

&lt;p&gt;The app:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Captures camera &amp;amp; microphone&lt;/li&gt;
&lt;li&gt;Displays local &amp;amp; remote video&lt;/li&gt;
&lt;li&gt;Handles UI for calling&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;React Native made it easier to build cross-platform apps for iOS and Android while still getting native WebRTC performance.&lt;/p&gt;

&lt;p&gt;But UI was the easy part.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Signaling Server (The Unsung Hero)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here’s the biggest misconception:&lt;/p&gt;

&lt;p&gt;“WebRTC is peer-to-peer, so no server needed.”&lt;/p&gt;

&lt;p&gt;Wrong.&lt;/p&gt;

&lt;p&gt;You do need a signaling server.&lt;/p&gt;

&lt;p&gt;Its job:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Help users find each other&lt;/li&gt;
&lt;li&gt;Exchange connection data (SDP)&lt;/li&gt;
&lt;li&gt;Share ICE candidates&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without signaling, peers can’t even start talking.&lt;/p&gt;

&lt;p&gt;WebRTC does NOT define how signaling works; you have to build it yourself.&lt;/p&gt;

&lt;p&gt;In my project, this became the coordination layer.&lt;/p&gt;
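
&lt;p&gt;The signaling job above can be sketched as a tiny in-memory relay. This is a toy sketch in plain Node.js: the name &lt;code&gt;SignalingHub&lt;/code&gt; is mine, and a real server would run this over WebSockets (e.g. Socket.IO) rather than direct function calls:&lt;/p&gt;

```javascript
// Minimal signaling hub: peers register a handler, and the hub
// relays SDP offers/answers and ICE candidates between them.
// WebRTC itself never sees this layer -- it only consumes the
// messages this layer carries.
class SignalingHub {
  constructor() {
    this.peers = new Map(); // peerId -> message handler
  }

  register(peerId, onMessage) {
    this.peers.set(peerId, onMessage);
  }

  // Relay a signaling message (offer, answer, or ICE candidate)
  // from one peer to another. Returns false if the target is unknown.
  relay(fromId, toId, message) {
    const handler = this.peers.get(toId);
    if (!handler) return false;
    handler({ from: fromId, ...message });
    return true;
  }
}

// Usage: peer A sends an offer, peer B receives it.
const hub = new SignalingHub();
const inboxB = [];
hub.register("peerB", (msg) => inboxB.push(msg));
hub.register("peerA", () => {});
hub.relay("peerA", "peerB", { type: "offer", sdp: "v=0 ..." });
```

&lt;p&gt;The key point: the hub forwards opaque messages and keeps no media state, which is exactly why signaling stays cheap even when calls are heavy.&lt;/p&gt;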

&lt;p&gt;&lt;strong&gt;3. Peer Connection (The Core Engine)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once signaling is done:&lt;/p&gt;

&lt;p&gt;A peer connection is created&lt;/p&gt;

&lt;p&gt;Devices exchange:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Offer&lt;/li&gt;
&lt;li&gt;Answer&lt;/li&gt;
&lt;li&gt;ICE candidates&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is where the magic happens.&lt;/p&gt;

&lt;p&gt;The connection shifts from:&lt;/p&gt;

&lt;p&gt;Server-mediated → direct device-to-device communication&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. NAT Traversal (The Hidden Complexity)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Real-world networks are messy.&lt;/p&gt;

&lt;p&gt;Devices sit behind:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Routers&lt;/li&gt;
&lt;li&gt;Firewalls&lt;/li&gt;
&lt;li&gt;NATs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So WebRTC uses:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;STUN&lt;/strong&gt; servers → Find your public IP&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;TURN&lt;/strong&gt; servers → Relay data if direct connection fails&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without these, many connections simply wouldn’t work.&lt;/p&gt;
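
&lt;p&gt;In code, both server types go into the configuration passed to &lt;code&gt;RTCPeerConnection&lt;/code&gt;. The STUN entry below is Google’s well-known public server; the TURN entry is a placeholder you’d swap for your own deployment:&lt;/p&gt;

```javascript
// ICE server configuration, passed as: new RTCPeerConnection(config).
// STUN is cheap (address discovery only); TURN relays actual media,
// so it needs credentials and real infrastructure behind it.
const config = {
  iceServers: [
    // Public STUN server: lets a peer learn its public IP and port.
    { urls: "stun:stun.l.google.com:19302" },
    // TURN relay fallback -- placeholder values, replace with your own.
    {
      urls: "turn:turn.example.com:3478",
      username: "demo-user",
      credential: "demo-pass",
    },
  ],
};
```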

&lt;p&gt;&lt;strong&gt;5. Real-Time Media Flow&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once connected:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Audio &amp;amp; video streams flow directly between peers&lt;/li&gt;
&lt;li&gt;No backend involved in media transfer&lt;/li&gt;
&lt;li&gt;Latency stays extremely low&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is why WebRTC is used in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Video calls&lt;/li&gt;
&lt;li&gt;Live collaboration&lt;/li&gt;
&lt;li&gt;Telemedicine apps&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The Full Flow:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here’s how a call actually happens:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;User A starts a call&lt;/li&gt;
&lt;li&gt;Signaling server sends “offer” to User B&lt;/li&gt;
&lt;li&gt;User B responds with “answer”&lt;/li&gt;
&lt;li&gt;Both exchange ICE candidates&lt;/li&gt;
&lt;li&gt;Peer connection is established&lt;/li&gt;
&lt;li&gt;Media streams directly between devices&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Not simple.&lt;/p&gt;

&lt;p&gt;But beautiful.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgcsz4ye7bxilq5n83bmw.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgcsz4ye7bxilq5n83bmw.jpg" alt=" " width="800" height="510"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Made This Project Challenging&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This wasn’t just coding.&lt;/p&gt;

&lt;p&gt;It was understanding systems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Debugging is painful&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You’re not debugging functions.&lt;/p&gt;

&lt;p&gt;You’re debugging:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Network states&lt;/li&gt;
&lt;li&gt;ICE failures&lt;/li&gt;
&lt;li&gt;Connection negotiation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. It’s asynchronous chaos&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Everything happens in events:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Offers&lt;/li&gt;
&lt;li&gt;Answers&lt;/li&gt;
&lt;li&gt;Candidates&lt;/li&gt;
&lt;li&gt;Streams&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Miss one step → connection fails silently.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Documentation is scattered&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;WebRTC isn’t beginner-friendly.&lt;/p&gt;

&lt;p&gt;You don’t “learn it once”.&lt;/p&gt;

&lt;p&gt;You experience it over time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What This Project Taught Me&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before this project, I thought:&lt;/p&gt;

&lt;p&gt;“Apps are about APIs and UI.”&lt;/p&gt;

&lt;p&gt;Now I know:&lt;/p&gt;

&lt;p&gt;Some systems live below that layer.&lt;/p&gt;

&lt;p&gt;WebRTC taught me:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Real-time systems are fundamentally different&lt;/li&gt;
&lt;li&gt;Architecture matters more than code&lt;/li&gt;
&lt;li&gt;Networking knowledge is underrated&lt;/li&gt;
&lt;li&gt;Peer-to-peer is powerful but complex&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;From App Developer to System Thinker&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This project changed how I think.&lt;/p&gt;

&lt;p&gt;I stopped asking:&lt;/p&gt;

&lt;p&gt;“How do I build this feature?”&lt;/p&gt;

&lt;p&gt;And started asking:&lt;/p&gt;

&lt;p&gt;“How does communication actually happen?”&lt;/p&gt;

&lt;p&gt;That shift is what separates:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Developers&lt;/strong&gt; from &lt;strong&gt;Engineers&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Final Thought&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Building a WebRTC app isn’t about video calling.&lt;/p&gt;

&lt;p&gt;It’s about understanding:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Communication protocols&lt;/li&gt;
&lt;li&gt;Network behavior&lt;/li&gt;
&lt;li&gt;Real-time systems&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And once you understand that…&lt;/p&gt;

&lt;p&gt;You stop seeing apps as screens.&lt;/p&gt;

&lt;p&gt;You start seeing them as systems.&lt;/p&gt;

&lt;p&gt;GitHub Link: &lt;a href="https://github.com/prateek-mangalgi-dev18/WebRTC-react-native-app" rel="noopener noreferrer"&gt;https://github.com/prateek-mangalgi-dev18/WebRTC-react-native-app&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Portfolio Link: &lt;a href="https://prateek-mangalgi.vercel.app/" rel="noopener noreferrer"&gt;https://prateek-mangalgi.vercel.app/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webrtc</category>
      <category>mobile</category>
      <category>reactnative</category>
      <category>networking</category>
    </item>
    <item>
      <title>From Chatbot to Medical AI: How I Used RAG, FAISS &amp; Mistral to Ground AI in Reality</title>
      <dc:creator>Prateek Mangalgi</dc:creator>
      <pubDate>Mon, 16 Feb 2026 17:17:39 +0000</pubDate>
      <link>https://forem.com/prateek_mangalgi/from-chatbot-to-medical-ai-how-i-used-rag-faiss-mistral-to-ground-ai-in-reality-5f0d</link>
      <guid>https://forem.com/prateek_mangalgi/from-chatbot-to-medical-ai-how-i-used-rag-faiss-mistral-to-ground-ai-in-reality-5f0d</guid>
      <description>&lt;p&gt;Most AI demos look impressive.&lt;/p&gt;

&lt;p&gt;They answer anything.&lt;br&gt;
They speak confidently.&lt;br&gt;
They sound intelligent.&lt;/p&gt;

&lt;p&gt;But confidence is not accuracy.&lt;/p&gt;

&lt;p&gt;And when you’re dealing with medical reports, accuracy isn’t optional; it’s a responsibility.&lt;/p&gt;

&lt;p&gt;When I built Medibotix, I didn’t want another chatbot that guesses.&lt;/p&gt;

&lt;p&gt;I wanted an AI that reads your medical report first, and only then speaks.&lt;/p&gt;

&lt;p&gt;That decision led me deep into Retrieval-Augmented Generation (RAG), vector search with FAISS, and the power of the Mistral AI API.&lt;/p&gt;

&lt;p&gt;And it completely changed how I think about building AI systems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Problem With Vanilla AI Chat&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Large language models are powerful.&lt;/p&gt;

&lt;p&gt;But they have a fundamental limitation:&lt;/p&gt;

&lt;p&gt;They generate answers based on training data, not your uploaded document.&lt;/p&gt;

&lt;p&gt;Which means:&lt;/p&gt;

&lt;p&gt;They might hallucinate.&lt;/p&gt;

&lt;p&gt;They might generalize.&lt;/p&gt;

&lt;p&gt;They might sound right but be wrong.&lt;/p&gt;

&lt;p&gt;They don’t truly see your specific report unless designed to.&lt;/p&gt;

&lt;p&gt;In healthcare, that’s dangerous.&lt;/p&gt;

&lt;p&gt;So I asked myself:&lt;/p&gt;

&lt;p&gt;How do I make an AI answer strictly from the patient’s medical report, and nothing else?&lt;/p&gt;

&lt;p&gt;The answer wasn’t prompt engineering.&lt;/p&gt;

&lt;p&gt;It was architecture.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Architecture Behind Medibotix&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Medibotix is built around a simple but powerful principle:&lt;/p&gt;

&lt;p&gt;The AI must retrieve relevant context before generating an answer.&lt;/p&gt;

&lt;p&gt;Here’s the system design.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Document Upload (FastAPI Layer)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A user uploads a medical report (PDF or text).&lt;/p&gt;

&lt;p&gt;The backend (built with FastAPI) does not immediately send it to the language model.&lt;/p&gt;

&lt;p&gt;Instead, it:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Extracts text from the file&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cleans and prepares it&lt;/li&gt;
&lt;li&gt;Breaks it into overlapping chunks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Why chunking?&lt;/p&gt;

&lt;p&gt;Because medical reports are long.&lt;br&gt;
And language models work best with structured context, not raw documents.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Intelligent Chunking&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The document is split into overlapping segments.&lt;/p&gt;

&lt;p&gt;Overlap matters because medical explanations often span multiple lines.&lt;br&gt;
Without overlap, meaning breaks.&lt;/p&gt;

&lt;p&gt;Each chunk becomes a knowledge unit.&lt;/p&gt;

&lt;p&gt;Think of it as turning a document into searchable memory fragments.&lt;/p&gt;
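
&lt;p&gt;A minimal sketch of an overlapping chunker (the sizes are illustrative, not the exact values Medibotix uses):&lt;/p&gt;

```javascript
// Split text into fixed-size chunks that overlap, so a sentence
// cut at a chunk boundary still appears whole in the next chunk.
function chunkText(text, size = 500, overlap = 100) {
  const step = size - overlap;
  const chunks = [];
  let start = 0;
  while (true) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // last chunk reached
    start += step;
  }
  return chunks;
}
```

&lt;p&gt;Because each chunk re-includes the tail of the previous one, an explanation that spans a boundary survives intact in at least one chunk.&lt;/p&gt;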

&lt;p&gt;&lt;strong&gt;3. Embeddings via Mistral API&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is where Mistral AI enters the architecture.&lt;/p&gt;

&lt;p&gt;Each chunk is converted into a vector embedding using the Mistral embedding model.&lt;/p&gt;

&lt;p&gt;Embeddings don’t store words.&lt;/p&gt;

&lt;p&gt;They store meaning.&lt;/p&gt;

&lt;p&gt;Now every chunk of the medical report becomes a coordinate in semantic space.&lt;/p&gt;

&lt;p&gt;Not keyword searchable.&lt;/p&gt;

&lt;p&gt;Meaning searchable.&lt;/p&gt;

&lt;p&gt;That distinction is everything.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. FAISS: The Memory Engine&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Those embeddings are stored inside FAISS.&lt;/p&gt;

&lt;p&gt;FAISS (Facebook AI Similarity Search) is a library optimized for ultra-fast similarity search.&lt;/p&gt;

&lt;p&gt;When a user asks:&lt;/p&gt;

&lt;p&gt;“Is my hemoglobin level low?”&lt;/p&gt;

&lt;p&gt;The system:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Converts the question into an embedding (via the Mistral API)&lt;/li&gt;
&lt;li&gt;Compares it against the stored document embeddings&lt;/li&gt;
&lt;li&gt;Retrieves the top semantically similar chunks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Not based on keyword matching.&lt;/p&gt;

&lt;p&gt;Based on contextual similarity.&lt;/p&gt;

&lt;p&gt;That’s the heart of RAG.&lt;/p&gt;
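
&lt;p&gt;FAISS does this at scale over millions of vectors; the underlying idea is just nearest-neighbour search over embeddings. Here is a toy version in plain JavaScript, with made-up 3-dimensional vectors standing in for real embeddings of roughly a thousand dimensions:&lt;/p&gt;

```javascript
// Cosine similarity: how aligned two embedding vectors are,
// independent of their magnitudes.
const dot = (a, b) => a.reduce((sum, v, i) => sum + v * b[i], 0);
const cosine = (a, b) =>
  dot(a, b) / (Math.sqrt(dot(a, a)) * Math.sqrt(dot(b, b)));

// Retrieve the k chunks whose embeddings are most similar to the
// query embedding -- this is the search a FAISS index accelerates.
function retrieveTopK(queryVec, store, k = 2) {
  return store
    .map(({ chunk, vec }) => ({ chunk, score: cosine(queryVec, vec) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}

// Toy store: tiny made-up "embeddings" standing in for real ones.
const store = [
  { chunk: "Hemoglobin: 10.2 g/dL (low)", vec: [0.9, 0.1, 0.0] },
  { chunk: "Cholesterol within range", vec: [0.1, 0.9, 0.1] },
  { chunk: "Hemoglobin reference: 13-17", vec: [0.8, 0.2, 0.1] },
];
const hits = retrieveTopK([1, 0, 0], store, 2);
```

&lt;p&gt;Both retrieved chunks are the hemoglobin ones, even though the query never keyword-matched them; that is contextual similarity in action.&lt;/p&gt;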

&lt;p&gt;&lt;strong&gt;5. Retrieval-Augmented Generation (RAG)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now comes the critical orchestration.&lt;/p&gt;

&lt;p&gt;Instead of asking the language model to answer blindly, we:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Inject only the retrieved chunks&lt;/li&gt;
&lt;li&gt;Provide strict system instructions&lt;/li&gt;
&lt;li&gt;Ask it to answer using that context alone&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The final answer is generated using a Mistral chat model, but grounded in retrieved evidence.&lt;/p&gt;

&lt;p&gt;That’s Retrieval-Augmented Generation.&lt;/p&gt;

&lt;p&gt;The model doesn’t guess.&lt;/p&gt;

&lt;p&gt;It reasons from evidence.&lt;/p&gt;
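
&lt;p&gt;Much of that orchestration is careful prompt assembly. A hedged sketch of the idea follows; the real Medibotix instructions are stricter, and the actual Mistral chat call is omitted:&lt;/p&gt;

```javascript
// Build a grounded prompt: strict system rules plus only the
// retrieved chunks, so the model answers from evidence, not memory.
function buildGroundedPrompt(retrievedChunks, question) {
  const context = retrievedChunks
    .map((chunk, i) => `[${i + 1}] ${chunk}`)
    .join("\n");
  const system = [
    "You are a medical report assistant.",
    "Answer ONLY from the context below.",
    "If the context does not contain the answer, say you cannot tell.",
    "Use simple, non-technical language.",
  ].join(" ");
  return `${system}\n\nContext:\n${context}\n\nQuestion: ${question}`;
}

// Usage: the retrieved chunks become the model's only evidence.
const prompt = buildGroundedPrompt(
  ["Hemoglobin: 10.2 g/dL (reference 13-17)"],
  "Is my hemoglobin level low?"
);
```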

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmqtv0pqwyun3ns6r889z.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmqtv0pqwyun3ns6r889z.jpg" alt=" " width="800" height="1202"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Guardrails: Designing Responsible Medical AI&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In healthcare, hallucination isn’t funny.&lt;/p&gt;

&lt;p&gt;It’s harmful.&lt;/p&gt;

&lt;p&gt;So Medibotix enforces strict constraints:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Only health-related questions&lt;/li&gt;
&lt;li&gt;No political or unrelated topics&lt;/li&gt;
&lt;li&gt;No billing or administrative details&lt;/li&gt;
&lt;li&gt;Simple-language explanations&lt;/li&gt;
&lt;li&gt;No unnecessary medical jargon&lt;/li&gt;
&lt;li&gt;Clear refusal for off-topic queries&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The AI doesn’t replace doctors.&lt;/p&gt;

&lt;p&gt;It translates reports into human language.&lt;/p&gt;

&lt;p&gt;That difference matters.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Complete Flow (End-to-End Architecture)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here’s how everything connects:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;User uploads medical report&lt;/li&gt;
&lt;li&gt;FastAPI extracts and chunks text&lt;/li&gt;
&lt;li&gt;Mistral API generates embeddings&lt;/li&gt;
&lt;li&gt;Embeddings stored in FAISS index&lt;/li&gt;
&lt;li&gt;User asks a question&lt;/li&gt;
&lt;li&gt;Question embedded via Mistral&lt;/li&gt;
&lt;li&gt;FAISS retrieves top relevant chunks&lt;/li&gt;
&lt;li&gt;Retrieved context passed to Mistral chat model&lt;/li&gt;
&lt;li&gt;AI responds strictly from document evidence&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It’s not just AI.&lt;/p&gt;

&lt;p&gt;It’s a controlled intelligence pipeline.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What This Project Changed for Me&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before Medibotix, I thought AI products were about models.&lt;/p&gt;

&lt;p&gt;Now I know they’re about orchestration.&lt;/p&gt;

&lt;p&gt;A powerful model without retrieval is like:&lt;/p&gt;

&lt;p&gt;A brilliant doctor who hasn’t read your test results.&lt;/p&gt;

&lt;p&gt;RAG ensures the AI reads first.&lt;/p&gt;

&lt;p&gt;Then answers.&lt;/p&gt;

&lt;p&gt;And that small architectural shift makes the difference between:&lt;/p&gt;

&lt;p&gt;Impressive.&lt;/p&gt;

&lt;p&gt;And dependable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;From Chatbot to Cognitive System&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Medibotix isn’t just a chat interface.&lt;/p&gt;

&lt;p&gt;It’s a layered system:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Embeddings for understanding&lt;/li&gt;
&lt;li&gt;FAISS for memory&lt;/li&gt;
&lt;li&gt;Retrieval for grounding&lt;/li&gt;
&lt;li&gt;Mistral for reasoning&lt;/li&gt;
&lt;li&gt;Guardrails for safety&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That’s modern AI engineering.&lt;/p&gt;

&lt;p&gt;And that’s where real differentiation lies.&lt;/p&gt;

&lt;p&gt;Not in making models talk louder.&lt;/p&gt;

&lt;p&gt;But in making them accountable to context.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Final Thought&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I didn’t want to build a chatbot that sounds intelligent.&lt;/p&gt;

&lt;p&gt;I wanted to build an AI that reads before it speaks.&lt;/p&gt;

&lt;p&gt;RAG gave it structure.&lt;br&gt;
FAISS gave it speed.&lt;br&gt;
Mistral API gave it reasoning power.&lt;/p&gt;

&lt;p&gt;And architecture gave it discipline.&lt;/p&gt;

&lt;p&gt;That’s how Medibotix went from a simple AI idea…&lt;/p&gt;

&lt;p&gt;To a document-grounded medical assistant built for responsibility.&lt;/p&gt;

&lt;p&gt;Demo link: &lt;a href="https://medibotix.vercel.app/" rel="noopener noreferrer"&gt;https://medibotix.vercel.app/&lt;/a&gt;&lt;br&gt;
GitHub link: &lt;a href="https://github.com/prateek-mangalgi-dev18/Medibotix" rel="noopener noreferrer"&gt;https://github.com/prateek-mangalgi-dev18/Medibotix&lt;/a&gt;&lt;br&gt;
Portfolio link: &lt;a href="https://prateek-mangalgi.vercel.app/" rel="noopener noreferrer"&gt;https://prateek-mangalgi.vercel.app/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>rag</category>
      <category>api</category>
      <category>python</category>
    </item>
    <item>
      <title>Most MERN Tutorials Ignore This, Until Production Breaks</title>
      <dc:creator>Prateek Mangalgi</dc:creator>
      <pubDate>Fri, 13 Feb 2026 07:38:42 +0000</pubDate>
      <link>https://forem.com/prateek_mangalgi/most-mern-tutorials-ignore-this-until-production-breaks-3c2l</link>
      <guid>https://forem.com/prateek_mangalgi/most-mern-tutorials-ignore-this-until-production-breaks-3c2l</guid>
      <description>&lt;p&gt;Media storage isn’t a feature, it’s infrastructure.&lt;br&gt;
Here’s how one decision transformed my MERN music app from fragile to scalable.&lt;br&gt;
When you build your first full-stack product, you think the hard part is logic.&lt;/p&gt;

&lt;p&gt;Authentication.&lt;br&gt;
Playlists.&lt;br&gt;
Search algorithms.&lt;br&gt;
UI polish.&lt;/p&gt;

&lt;p&gt;But no one tells you about the invisible monster waiting in production:&lt;/p&gt;

&lt;p&gt;Media storage.&lt;/p&gt;

&lt;p&gt;When I built my MERN music streaming platform, MusicHub, I was proud. It had an admin panel to upload songs, a clean UI, playlist creation, and search by artist and movie. Everything felt solid.&lt;/p&gt;

&lt;p&gt;Until I deployed it.&lt;/p&gt;

&lt;p&gt;And the illusion cracked.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Day It Slowed Down&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I uploaded a handful of songs.&lt;/p&gt;

&lt;p&gt;Just ten.&lt;/p&gt;

&lt;p&gt;That’s all it took.&lt;/p&gt;

&lt;p&gt;The server felt heavier.&lt;br&gt;
Streaming wasn’t as smooth.&lt;br&gt;
Deployment limits started whispering warnings.&lt;br&gt;
Storage felt fragile.&lt;/p&gt;

&lt;p&gt;That’s when I realized something most beginner tutorials never mention:&lt;/p&gt;

&lt;p&gt;Building features makes you feel like a developer.&lt;br&gt;
Handling infrastructure makes you become one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Moment I Stopped Thinking Like a Student&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Up until that point, I was storing audio files the “easy way.”&lt;br&gt;
Local storage. File paths in the database. Simple.&lt;/p&gt;

&lt;p&gt;But production is not impressed by simplicity.&lt;/p&gt;

&lt;p&gt;Production asks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What happens when 1,000 users stream simultaneously?&lt;/li&gt;
&lt;li&gt;What happens after 10 redeploys?&lt;/li&gt;
&lt;li&gt;What happens when storage resets?&lt;/li&gt;
&lt;li&gt;What happens when your backend becomes a file delivery service?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And suddenly, I saw the flaw.&lt;/p&gt;

&lt;p&gt;My backend wasn’t meant to carry audio weight.&lt;br&gt;
It was meant to carry logic.&lt;/p&gt;

&lt;p&gt;That’s when I discovered Cloudinary and, more importantly, understood why media delivery is its own engineering discipline.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Architectural Awakening&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Most people think Cloudinary is just for images.&lt;/p&gt;

&lt;p&gt;But it’s much more than that.&lt;/p&gt;

&lt;p&gt;It’s infrastructure for creators.&lt;/p&gt;

&lt;p&gt;When I integrated Cloudinary into MusicHub, something subtle but powerful changed:&lt;/p&gt;

&lt;p&gt;My server stopped being responsible for heavy media.&lt;/p&gt;

&lt;p&gt;It started focusing on what it should do:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Authentication&lt;/li&gt;
&lt;li&gt;Business logic&lt;/li&gt;
&lt;li&gt;Database coordination&lt;/li&gt;
&lt;li&gt;User experience&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And audio?&lt;br&gt;
It traveled through a global CDN, optimized, distributed, resilient.&lt;/p&gt;
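
&lt;p&gt;Concretely, the backend now stores only metadata plus the URL Cloudinary returns. The &lt;code&gt;secure_url&lt;/code&gt; and &lt;code&gt;duration&lt;/code&gt; fields below follow Cloudinary’s upload response; the record shape itself is my own sketch:&lt;/p&gt;

```javascript
// After Cloudinary handles the upload, the backend keeps only
// lightweight metadata. The client streams audio straight from
// the CDN URL -- the Node server never touches the media bytes.
function makeSongRecord(meta, uploadResult) {
  return {
    title: meta.title,
    artist: meta.artist,
    // secure_url points at Cloudinary's CDN, not our server.
    audioUrl: uploadResult.secure_url,
    durationSec: uploadResult.duration,
  };
}

// Shape of (part of) a Cloudinary upload response.
const uploadResult = {
  secure_url: "https://res.cloudinary.com/demo/video/upload/song.mp3",
  duration: 215.4,
};
const song = makeSongRecord({ title: "Song", artist: "Artist" }, uploadResult);
```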

&lt;p&gt;Streaming became smoother.&lt;/p&gt;

&lt;p&gt;Deployment became peaceful.&lt;/p&gt;

&lt;p&gt;And the anxiety around “what if this breaks in production?” started fading.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Psychological Shift&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There’s something no one talks about in software engineering:&lt;/p&gt;

&lt;p&gt;Confidence.&lt;/p&gt;

&lt;p&gt;When your architecture is fragile, you feel it.&lt;br&gt;
When your infrastructure is messy, you know it.&lt;/p&gt;

&lt;p&gt;After moving my media handling to Cloudinary, MusicHub stopped feeling like:&lt;/p&gt;

&lt;p&gt;“A project I built.”&lt;/p&gt;

&lt;p&gt;It started feeling like:&lt;/p&gt;

&lt;p&gt;“A product I could ship.”&lt;/p&gt;

&lt;p&gt;That’s a powerful shift.&lt;/p&gt;

&lt;p&gt;Because good architecture doesn’t just improve performance,&lt;br&gt;
it improves how you think about yourself as a builder.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Final Thought&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every developer reaches a moment where they must choose:&lt;/p&gt;

&lt;p&gt;Keep building features…&lt;/p&gt;

&lt;p&gt;Or start building systems.&lt;/p&gt;

&lt;p&gt;For me, that moment came when storage nearly broke my music app.&lt;/p&gt;

&lt;p&gt;And choosing the right media infrastructure turned out to be the difference between a demo project and a scalable product.&lt;/p&gt;

&lt;p&gt;Sometimes growth as a developer doesn’t come from writing more code.&lt;/p&gt;

&lt;p&gt;It comes from understanding what not to handle yourself.&lt;/p&gt;

&lt;p&gt;And that realization changed how I build, permanently.&lt;/p&gt;

&lt;p&gt;GitHub link: &lt;a href="https://github.com/prateek-mangalgi-dev18/MusicHub" rel="noopener noreferrer"&gt;https://github.com/prateek-mangalgi-dev18/MusicHub&lt;/a&gt;&lt;br&gt;
Portfolio link: &lt;a href="https://prateek-mangalgi.vercel.app/" rel="noopener noreferrer"&gt;https://prateek-mangalgi.vercel.app/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>cloudinary</category>
      <category>mern</category>
      <category>cdn</category>
    </item>
  </channel>
</rss>
