<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Shravya K</title>
    <description>The latest articles on Forem by Shravya K (@shravya_k).</description>
    <link>https://forem.com/shravya_k</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3706139%2Fef661b10-90c3-47f8-8856-1c6c3a5ff36a.jpeg</url>
      <title>Forem: Shravya K</title>
      <link>https://forem.com/shravya_k</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/shravya_k"/>
    <language>en</language>
    <item>
      <title>How RAG Changed the Way We Use Large Language Models</title>
      <dc:creator>Shravya K</dc:creator>
      <pubDate>Tue, 13 Jan 2026 08:22:22 +0000</pubDate>
      <link>https://forem.com/shravya_k/how-rag-changed-the-way-we-use-large-language-models-1iih</link>
      <guid>https://forem.com/shravya_k/how-rag-changed-the-way-we-use-large-language-models-1iih</guid>
      <description>&lt;p&gt;&lt;strong&gt;From guessing to grounded answers&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We've been asking AI the wrong questions.&lt;/p&gt;

&lt;p&gt;Or rather, we've been asking the right questions to the wrong system.&lt;/p&gt;

&lt;p&gt;Large Language Models are incredible at reasoning, connecting ideas, and explaining complex concepts. But there's something they're fundamentally not designed to do: remember specific facts with perfect accuracy.&lt;/p&gt;

&lt;p&gt;For years, we tried to make LLMs better at memorization. Bigger models, more training data, longer context windows.&lt;/p&gt;

&lt;p&gt;Then someone had a different idea: What if we stopped trying to make them remember everything and instead taught them how to look things up?&lt;/p&gt;

&lt;p&gt;That shift from memory to retrieval is what RAG is all about.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;LLMs Don't Know. They Predict.&lt;/strong&gt;&lt;br&gt;
Here's the fundamental thing about Large Language Models: they don't look things up. They predict.&lt;/p&gt;

&lt;p&gt;When you type a message, it gets broken into &lt;strong&gt;tokens&lt;/strong&gt; (pieces of words), which flow through a neural network asking one question billions of times: &lt;strong&gt;"What comes next?"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is called &lt;strong&gt;autoregressive generation&lt;/strong&gt; - each new word makes sense because of all the previous ones. It's like autocomplete on steroids, predicting one token at a time based on patterns learned during training.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Prompt: 

The weather today is

What the model does
Predicts the most likely next word:

The weather today is → "sunny"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
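&lt;p&gt;That predict-then-sample step can be sketched in a few lines of Python. The vocabulary and probabilities below are invented for illustration; a real model scores tens of thousands of tokens at every step.&lt;/p&gt;

```python
import random

# Toy next-token distribution for the prompt "The weather today is".
# The vocabulary and probabilities are invented for illustration.
next_token_probs = {"sunny": 0.55, "cloudy": 0.25, "rainy": 0.15, "cold": 0.05}

def sample_next_token(probs, rng):
    """Pick one token at random, weighted by its probability."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random()  # unseeded, so repeated runs can differ
print("The weather today is", sample_next_token(next_token_probs, rng))
```

&lt;p&gt;Because the next token is sampled from a distribution rather than looked up, two runs of the same prompt can end differently.&lt;/p&gt;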



&lt;p&gt;Run the same prompt twice and you may get a different answer. The model is making probabilistic choices, not retrieving stored facts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Models You're Actually Using&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Different LLMs have different strengths:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;ChatGPT (GPT-4)&lt;/strong&gt;: Best for conversations, coding, and following complex instructions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Claude&lt;/strong&gt;: Excels at long documents and detailed analysis&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gemini&lt;/strong&gt;: Strong with multimodal tasks (text, images, video)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Llama&lt;/strong&gt;: Open-source, customizable for specialized applications&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;They all work the same way: predicting tokens based on context. And they all share the same core limitation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Limitation That Was Always There&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;LLMs can't look things up. When you ask about a specific product, the model doesn't search any files. It generates an answer from patterns in its training data: data that's months old and has never seen your internal documents.&lt;/p&gt;

&lt;p&gt;If it doesn't know something, it doesn't say "I don't know." It predicts what a reasonable answer might sound like.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The problem:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Training data becomes outdated immediately&lt;/li&gt;
&lt;li&gt;No access to private or recent information&lt;/li&gt;
&lt;li&gt;No verification mechanism&lt;/li&gt;
&lt;li&gt;Confident responses regardless of accuracy&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This works fine for creative tasks or explaining concepts. But for anything requiring factual accuracy? It's a dealbreaker.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Breakthrough: Stop Memorizing, Start Retrieving&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The solution came from rethinking the problem: What if LLMs don't need to remember everything?&lt;/p&gt;

&lt;p&gt;That's &lt;strong&gt;RAG - Retrieval-Augmented Generation.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The process is simple:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;User asks a question&lt;/li&gt;
&lt;li&gt;System searches a knowledge base&lt;/li&gt;
&lt;li&gt;Retrieved information gets added to the prompt&lt;/li&gt;
&lt;li&gt;LLM reads and responds&lt;/li&gt;
&lt;/ol&gt;
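&lt;p&gt;Those four steps fit in a short sketch. The knowledge-base entries are invented, step 2 uses plain word overlap as a stand-in for real vector search, and the final LLM call in step 4 is omitted.&lt;/p&gt;

```python
# Step 2 here is a toy retriever (plain word overlap) standing in for a
# real vector search, and the final LLM call (step 4) is omitted.
# The knowledge-base entries are invented examples.

KNOWLEDGE_BASE = [
    "Tickets cancelled within 24 hours are fully refundable.",
    "Refunds for failed bookings are processed within 5 to 7 days.",
    "Pending bookings disappear from the dashboard after 30 minutes.",
]

def retrieve(question, docs, k=2):
    """Rank docs by word overlap with the question and keep the top k."""
    q_words = set(question.lower().split())
    scored = [(len(q_words.intersection(d.lower().split())), d) for d in docs]
    scored.sort(reverse=True)
    return [d for score, d in scored[:k] if score]

def build_prompt(question, chunks):
    """Step 3: paste the retrieved chunks into the prompt."""
    context = "\n".join("- " + c for c in chunks)
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer from the context."

question = "when are refunds for failed bookings processed"
prompt = build_prompt(question, retrieve(question, KNOWLEDGE_BASE))
```

&lt;p&gt;Whatever replaces the toy retriever, the shape stays the same: search first, then generate with the results in the prompt.&lt;/p&gt;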

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fobuknji21tnkjz4z02tc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fobuknji21tnkjz4z02tc.png" alt=" " width="800" height="257"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;RAG = LLM + Retrieved Context&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Just as humans don't rely purely on memory when writing reports (we gather sources first), RAG lets AI read before responding. The LLM handles reasoning; retrieval handles facts.&lt;/p&gt;

&lt;h2&gt;
  
  
  How RAG Works: The Complete Flow
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Setup Phase (happens once):&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Ingestion&lt;/strong&gt;: Load your documents (PDFs, web pages, databases)
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; Gather 600 help articles from a travel booking app, 150 PDF guides about cancellations and refunds, and fare rules from the internal system.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;
&lt;strong&gt;Chunking&lt;/strong&gt;: Break them into smaller pieces (usually a few paragraphs)
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;A long document explaining ticket cancellations is split into smaller sections—one for refunds, one for rescheduling, one for penalties.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
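&lt;p&gt;One common chunking approach, sketched below, is fixed-size pieces with some overlap so a sentence cut at a boundary still appears whole in the next chunk. Production pipelines often split on headings or sentences instead.&lt;/p&gt;

```python
# One common approach: fixed-size chunks with overlap, so text cut at a
# boundary still appears whole in the next chunk. Real pipelines often
# split on sentences or headings instead.

def chunk_text(text, chunk_size=200, overlap=50):
    """Split text into chunk_size-character pieces; each new chunk
    repeats the last `overlap` characters of the previous one."""
    step = chunk_size - overlap
    return [text[start : start + chunk_size]
            for start in range(0, len(text), step)]

chunks = chunk_text("x" * 500)   # 500 chars becomes 4 overlapping chunks
```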



&lt;ol start="3"&gt;
&lt;li&gt;
&lt;strong&gt;Embedding&lt;/strong&gt;: Convert each chunk into a mathematical representation
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;A line like "Tickets cancelled within 24 hours are fully refundable" is turned into a format that captures its meaning.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
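&lt;p&gt;To make "a format that captures its meaning" concrete, here is a deliberately crude embedding: a word-count vector compared with cosine similarity. Real systems use learned embedding models, but the principle, similarity as a number, is the same.&lt;/p&gt;

```python
import math
from collections import Counter

# A deliberately crude "embedding": a bag-of-words count vector.
# Real systems use learned embedding models instead.
def embed(text):
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    """Similarity of two count vectors: 1.0 means identical direction."""
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

chunk = embed("tickets cancelled within 24 hours are fully refundable")
query = embed("are cancelled tickets refundable")
score = cosine_similarity(chunk, query)  # high, since most query words overlap
```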



&lt;ol start="4"&gt;
&lt;li&gt;
&lt;strong&gt;Storage&lt;/strong&gt;: Save in a vector database optimized for similarity search
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;All sections are stored with tags like category="refunds", travel_type="flight", and last_updated="2024".

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Query Phase (every time someone asks):&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;User question: "My flight booking disappeared even though I paid for it"&lt;/code&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Convert the question into an embedding (the same mathematical form used for the stored chunks)
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[booking_issue, payment_done, booking_missing, flight]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;Search for chunks with similar meanings
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"booking not confirmed after payment"
"payment successful but ticket not issued"
"flight booking disappeared"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;Retrieve the top matches (usually 3-10 chunks)
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Chunk 1:
"If payment is successful but ticket is not issued within 15 minutes,
the booking may remain in pending state."

Chunk 2:
"Bookings are auto-cancelled if airline confirmation is not received."

Chunk 3:
"Some banks show payment success even when airline rejects the transaction."

Chunk 4:
"Pending bookings disappear from user dashboard after 30 minutes."

Chunk 5:
"Refunds for failed bookings are processed within 5–7 days."

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="4"&gt;
&lt;li&gt;Insert these chunks into the prompt sent to the LLM
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Context:
- Payment success does not always mean ticket issued
- Pending bookings may disappear after 30 minutes
- Airline confirmation failure causes auto-cancellation
- Refund timeline: 5–7 days

Answer the user clearly.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="5"&gt;
&lt;li&gt;LLM reads and generates an answer based on retrieved context
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FInal answer:
“Your booking disappeared because the airline did not confirm the ticket after payment. The system auto-cancelled it, and your payment will be refunded within 5–7 days.”
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The LLM never sees your entire document collection — just the most relevant pieces for each specific question.&lt;/p&gt;

&lt;h2&gt;
  
  
  Three Ways to Find Information
&lt;/h2&gt;

&lt;p&gt;Retrieval isn't just "search" — there are three fundamentally different approaches:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Keyword Search: Exact Matching
&lt;/h3&gt;

&lt;p&gt;Finds documents with specific words or phrases. Fast and predictable, but misses synonyms.&lt;br&gt;
If the words don’t match, it pretends the information doesn’t exist.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Example:
Search: “pizza menu”
 Finds: “Pizza Menu – Downtown Branch”
 Misses: “Our Italian Dishes” or “What We Serve”
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. Semantic Search: Understanding Meaning
&lt;/h3&gt;

&lt;p&gt;Uses mathematical representations (embeddings) to find conceptually similar content, even with different wording.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Search: “I’m hungry but don’t want to cook”
Finds: “Nearby restaurants,” “Food delivery options,” “Quick meals”

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3. Metadata Filtering: Structured Criteria
&lt;/h3&gt;

&lt;p&gt;Applies hard rules: metadata filtering narrows results using explicit labels such as category, status, or date.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Search: “Show my assignments”
Filters: subject = “AI” AND status = “pending” AND due_date = “this week”
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Why Use All Three (Hybrid Search)
&lt;/h3&gt;

&lt;p&gt;Modern RAG systems combine these approaches:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Semantic search&lt;/strong&gt; finds conceptually relevant content&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Keyword search&lt;/strong&gt; ensures exact matches aren't missed&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Metadata filtering&lt;/strong&gt; applies necessary constraints&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each catches what the others miss.&lt;/p&gt;
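&lt;p&gt;A hybrid retriever can be sketched by filtering on metadata first, then blending keyword and semantic scores. The documents, the 50/50 weights, and the placeholder semantic score below are all illustrative assumptions.&lt;/p&gt;

```python
# Hybrid retrieval sketch: metadata filter first, then blend keyword and
# "semantic" scores. The documents, 50/50 weights, and the placeholder
# semantic score are illustrative assumptions.

DOCS = [
    {"text": "Refunds for cancelled flights take 5 to 7 days",
     "category": "refunds", "status": "current"},
    {"text": "Our Italian dishes and what we serve",
     "category": "menu", "status": "current"},
    {"text": "Old refund policy, superseded in 2019",
     "category": "refunds", "status": "archived"},
]

def keyword_score(query, text):
    q = set(query.lower().split())
    return len(q.intersection(text.lower().split()))

def semantic_score(query, text):
    # Placeholder: a real system would compare embedding vectors here.
    return keyword_score(query, text)

def hybrid_search(query, docs, category):
    current = [d for d in docs
               if d["category"] == category and d["status"] == "current"]
    scored = [(0.5 * keyword_score(query, d["text"])
               + 0.5 * semantic_score(query, d["text"]), d["text"])
              for d in current]
    return [text for score, text in sorted(scored, reverse=True)]

results = hybrid_search("refund for cancelled flight", DOCS, "refunds")
```

&lt;p&gt;Note the order: the metadata filter removes ineligible documents outright before any scoring happens, which is why the archived policy never appears in the results.&lt;/p&gt;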

&lt;h2&gt;
  
  
  Real-World Example
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Scenario:&lt;/strong&gt; Food Ordering App Help Chatbot&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Setup:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Knowledge base: &lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;300 FAQs (orders, payments, delivery)&lt;/li&gt;
&lt;li&gt;150 issue guides (refunds, late delivery, app bugs)&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Process:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;Break articles into small pieces&lt;/li&gt;
&lt;li&gt;Turn them into searchable representations&lt;/li&gt;
&lt;li&gt;Store them so the system can quickly look them up&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;When user asks: "Why does my food order get cancelled every time I add desserts?"&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Converts the question into a form it can compare with stored information&lt;/p&gt;

&lt;p&gt;Looks for related content using:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;Meaning (order issues, add-ons, desserts)&lt;/li&gt;
&lt;li&gt;Keywords (“cancelled,” “desserts,” “order”)&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Applies filters:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;category = “order problems”&lt;/li&gt;
&lt;li&gt;status = “current”&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Picks the top 5 most relevant explanations and adds them to the prompt&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Result:&lt;/strong&gt; LLM response is based on actual app rules, not guesswork.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Before RAG:&lt;/strong&gt; "Please try restarting the app or placing the order again." (generic)&lt;br&gt;&lt;br&gt;
&lt;strong&gt;With RAG:&lt;/strong&gt; "Orders with desserts get cancelled if the restaurant is marked ‘no cold storage’. Try removing desserts or choosing a different outlet." (specific, accurate)&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Changes Everything
&lt;/h2&gt;

&lt;p&gt;RAG enables use cases that weren't reliably possible before:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Internal company AI&lt;/strong&gt;: Answer from your specific docs, policies, and codebase&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Medical assistance&lt;/strong&gt;: Reference current treatment guidelines and research&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Legal research&lt;/strong&gt;: Search case law with specific citations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Customer support&lt;/strong&gt;: Know exact product features and current policies&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Research tools&lt;/strong&gt;: Find and connect recent publications&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All of these need one thing: &lt;strong&gt;answers grounded in specific, current, trusted information.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Core Insight
&lt;/h2&gt;

&lt;p&gt;Large Language Models are reasoning engines that happen to be trained on data. We mistook that training data for a feature when it was really a limitation.&lt;/p&gt;

&lt;p&gt;RAG separates reasoning from knowledge:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;LLM&lt;/strong&gt;: Understanding context, connecting ideas, generating responses&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Retrieval&lt;/strong&gt;: Storing information, keeping it current, finding relevant pieces&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This division of labor makes AI systems reliable.&lt;/p&gt;

&lt;p&gt;The future isn't about models that know everything. It's about systems that know how to find what they need, when they need it, from trusted sources.&lt;/p&gt;

&lt;p&gt;That's what RAG represents. And it's just the beginning.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>rag</category>
      <category>llm</category>
    </item>
    <item>
      <title>I Thought I Knew How To Talk To AI: I Didn't</title>
      <dc:creator>Shravya K</dc:creator>
      <pubDate>Mon, 12 Jan 2026 13:01:42 +0000</pubDate>
      <link>https://forem.com/shravya_k/i-thought-i-knew-how-to-talk-to-ai-i-didnt-5aph</link>
      <guid>https://forem.com/shravya_k/i-thought-i-knew-how-to-talk-to-ai-i-didnt-5aph</guid>
      <description>&lt;p&gt;The first time I asked ChatGPT for help, I typed: "Write me a product description for noise-canceling headphones."&lt;br&gt;
It gave me garbage.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqap8rg39omxu8979awgc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqap8rg39omxu8979awgc.png" alt=" " width="800" height="171"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I blamed the AI. But the problem was me.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"Be Clear" Doesn't Mean "Be Long"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For months, I wrote these rambling prompts explaining every little detail.&lt;br&gt;
Results were still hit-or-miss.&lt;/p&gt;

&lt;p&gt;Then I learned something simple:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DELIMITERS&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;They're the lines you draw in your prompt to say, "This part is my instruction. This part is the content I want you to work with. Don't confuse the two".&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Think of it like writing on a whiteboard during a meeting. You don't just scribble everything together in one giant blob. You draw boxes. You underline headers. You separate the problem from the solution.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Delimiters do the same thing, they keep your instructions from bleeding into your content, so the AI knows exactly what's what.&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;E.g. triple quotes ("""), XML tags, or even simple dashes (---): pick whatever feels natural.&lt;/p&gt;

&lt;p&gt;Suddenly, the AI knew exactly what I wanted.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Role:
You are an AI writing assistant for students.
Task:
Improve clarity; keep it short; avoid complex words.
Rules:
- Do not change meaning
- Output only the improved text
- No extra explanations
Input | Metadata:
topic: AI prompting | level: beginner | tone: casual
---
Original text:
"""
AI works good but sometimes it gives wrong answers because the question is not clear.
"""
---
Output format:
improved_text: &amp;lt;one improved sentence&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
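&lt;p&gt;In an application, a delimited prompt like the one above is usually assembled in code rather than typed by hand. A sketch (the section names are just a convention, not a required format):&lt;/p&gt;

```python
# Building a delimited prompt in code: each section is labelled, and the
# user's text is fenced in triple quotes so instructions never bleed
# into content. The section names are a convention, not a rule.

def build_prompt(role, task, rules, original_text):
    rules_block = "\n".join("- " + r for r in rules)
    return (
        f"Role:\n{role}\n"
        f"Task:\n{task}\n"
        f"Rules:\n{rules_block}\n"
        "---\n"
        'Original text:\n"""\n'
        f"{original_text}\n"
        '"""'
    )

prompt = build_prompt(
    role="You are an AI writing assistant for students.",
    task="Improve clarity; keep it short; avoid complex words.",
    rules=["Do not change meaning", "Output only the improved text"],
    original_text="AI works good but sometimes it gives wrong answers.",
)
```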



&lt;p&gt;&lt;strong&gt;Stop Rushing the AI (And Yourself)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I used to treat AI like a vending machine.&lt;br&gt;
Input -&amp;gt; enter -&amp;gt; expect perfection.&lt;/p&gt;

&lt;p&gt;Then someone told me: give the model time to think.&lt;br&gt;
Not literal time. &lt;strong&gt;Structure&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Instead of "Solve this," I learned to say:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"First, analyze the problem" - Understand what you're dealing with&lt;br&gt;
&lt;/p&gt;


&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;A train travels 60 km in 1.5 hours.
The speed of the train is constant.
We are asked to find the time required to travel 100 km at the same speed.

So this is a speed–distance–time problem.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;"Then, identify approaches"  -  Explore different ways forward&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;We can solve this by:
Calculating the speed of the train, then
Using the same speed to find the time for 100 km

Formulae involved:
Speed = Distance ÷ Time
Time = Distance ÷ Speed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;"Finally, provide your solution" - Deliver the answer that fits best&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Calculate speed:

Speed = 60 km ÷ 1.5 hours
Speed = 40 km/h
Now calculate time for 100 km:
Time = 100 km ÷ 40 km/h
Time = 2.5 hours
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The difference was massive.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Four Ways to Ask (That Actually Work)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Zero-shot:&lt;/strong&gt; Just the instruction. No examples.&lt;br&gt;
Works when the task is straightforward.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Prompt: 
Roast my coding skills.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;Output: Your code works, but even it looks surprised by that.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Few-shot:&lt;/strong&gt; Multiple examples showing a clear pattern.&lt;br&gt;
My go-to for consistency.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Prompt:
Roast my coding skills.

Examples:
"Your variable names are so creative, even you forget what they mean."
"Your code has comments because future-you will be very confused."

Now roast me.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;Output: Your code is like a mystery novel - long, confusing, and the ending barely makes sense&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Context-based:&lt;/strong&gt; Background + constraints + question.&lt;br&gt;
Changed everything.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Prompt:
Context:
I’m a student who codes regularly.
This roast can be harsh.
Tone should be playful and relatable.

Task:
Roast my coding skills in one line.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;Output: Your code works, but only because the bugs are too tired to fight back&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Same question. Completely different answer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Finally Made It Click&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I stopped thinking of prompting as "giving commands."&lt;br&gt;
I started thinking of it as talking to someone who wants to help but needs to know what you actually need.&lt;br&gt;
You wouldn't tell a colleague "Do the thing."&lt;br&gt;
You'd give context, constraints, examples.&lt;br&gt;
That's all a good prompt is.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Start Here&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Use delimiters.&lt;/li&gt;
&lt;li&gt;Separate your instructions from your content.&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;That alone will transform your results.&lt;/li&gt;
&lt;li&gt;The rest will come naturally once you realize AI isn't a tool you command, it's a collaborator you communicate with.&lt;/li&gt;
&lt;li&gt;And the quality of what you build together depends on how clearly you communicate what you're trying to build.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>tutorial</category>
      <category>gpt3</category>
    </item>
  </channel>
</rss>
