<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Raunak ALI</title>
    <description>The latest articles on Forem by Raunak ALI (@raunaklallala).</description>
    <link>https://forem.com/raunaklallala</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1575621%2F91a84415-3489-432f-b64e-be47a0c926b1.jpg</url>
      <title>Forem: Raunak ALI</title>
      <link>https://forem.com/raunaklallala</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/raunaklallala"/>
    <language>en</language>
    <item>
      <title>LangGraph vs. Chains: Building Smarter AI Workflows With State, Branching, and Memory</title>
      <dc:creator>Raunak ALI</dc:creator>
      <pubDate>Tue, 16 Sep 2025 03:30:00 +0000</pubDate>
      <link>https://forem.com/raunaklallala/langgraph-vs-chains-building-smarter-ai-workflows-with-state-branching-and-memory-56dd</link>
      <guid>https://forem.com/raunaklallala/langgraph-vs-chains-building-smarter-ai-workflows-with-state-branching-and-memory-56dd</guid>
      <description>&lt;h3&gt;
  
  
  Introduction: From Foundations to Real-World Agent Workflows
&lt;/h3&gt;

&lt;p&gt;If you’ve followed along with the first series (&lt;a href="https://dev.to/raunaklallala/article-1-intro-to-gen-ai-llms-and-langchain-frameworkspart-a-4o66"&gt;Article 1&lt;/a&gt;), you’ve built a solid foundation in Generative AI, the inner workings of LLMs, and how frameworks like LangChain empower you to chain models and tools in practical applications.&lt;br&gt;&lt;br&gt;
We wrapped up by getting &lt;a href="https://dev.to/raunaklallala/article-1-chapter-f-practical-langchain-demo-with-google-gemini-duckduckgo-1a58"&gt;hands-on&lt;/a&gt; with Google Gemini and DuckDuckGo, showcasing how to connect language models to real-time web search for richer, up-to-date insights.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;But this is only the beginning.&lt;/em&gt;&lt;br&gt;&lt;br&gt;
&lt;strong&gt;What comes next?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
The real power of modern AI frameworks lies not just in connecting LLMs to tools, but in designing autonomous, agentic workflows: systems that can reason, act, and learn with minimal supervision. This is where we move from simple pipelines to intelligent agents capable of carrying out sophisticated tasks.&lt;/p&gt;

&lt;p&gt;In this new article series, we’ll chart a path:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;From basic LLM interactions to the architecture behind single-agent workflows.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;From writing prompts and chaining APIs, to building autonomous agents using LangGraph.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We’ll explore not only how these agents &lt;em&gt;think&lt;/em&gt; , but also how to structure, deploy, and scale them, bringing theory into actionable code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Whether you’re aiming to automate research, streamline business processes, or create next-gen AI products, this journey will take you from the core ideas to hands-on implementation.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  1. What is LangGraph?
&lt;/h3&gt;

&lt;p&gt;At its core, &lt;strong&gt;LangGraph&lt;/strong&gt; is a framework built on top of &lt;strong&gt;LangChain&lt;/strong&gt; that allows you to design &lt;strong&gt;stateful, graph-based workflows&lt;/strong&gt; for LLM-powered systems. Think of it as moving from a straight line to a map of possible paths.&lt;/p&gt;

&lt;h4&gt;
  
  
  Why This Matters
&lt;/h4&gt;

&lt;p&gt;Most of us start with simple LLM calls:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You send a prompt → the model replies.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That works for toy problems, but breaks quickly when you need &lt;strong&gt;multi-step reasoning, memory, or dynamic decisions&lt;/strong&gt;.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;LangChain&lt;/strong&gt; improved on this by giving us &lt;strong&gt;LLM chains:&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A sequence of steps where one output becomes the input to the next.
&lt;/li&gt;
&lt;li&gt;Great for linear workflows (summarize → translate → format).
&lt;/li&gt;
&lt;li&gt;But still stateless and rigid — if you need &lt;strong&gt;branching or state tracking&lt;/strong&gt;, it becomes clunky.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Enter LangGraph:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwobd1pxdc1ftuwxcji4s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwobd1pxdc1ftuwxcji4s.png" alt=" " width="550" height="461"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Here, your workflow is expressed as a &lt;strong&gt;graph&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Nodes&lt;/strong&gt; = steps (e.g., call an LLM, fetch from a DB, apply logic).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Edges&lt;/strong&gt; = the possible paths between steps.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;State&lt;/strong&gt; = context that persists across nodes.
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;This means agents don’t just follow a checklist; they can &lt;strong&gt;branch, revisit steps, remember context, and adapt.&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;In other words:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Chains = pipelines.&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Graphs = decision systems.&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;
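&lt;p&gt;&lt;em&gt;The contrast can be sketched in plain Python (no LangGraph installed), with stub functions standing in for real LLM calls; the node names and routing logic below are illustrative, not LangGraph’s actual API:&lt;/em&gt;&lt;/p&gt;

```python
# A chain: a fixed pipeline where each output feeds the next step.
def summarize(text):
    return text[:40]  # stand-in for an LLM summarization call

def translate(text):
    return "FR: " + text  # stand-in for an LLM translation call

def run_chain(text):
    for step in (summarize, translate):
        text = step(text)
    return text

# A graph: nodes share mutable state, and edges choose the next node.
def classify(state):
    state["route"] = "billing" if "invoice" in state["question"] else "tech"
    return "billing_node" if state["route"] == "billing" else "tech_node"

def billing_node(state):
    state["answer"] = "Routing you to billing support."
    return None  # None means the graph is done

def tech_node(state):
    state["answer"] = "Routing you to technical support."
    return None

NODES = {"classify": classify, "billing_node": billing_node, "tech_node": tech_node}

def run_graph(question):
    state = {"question": question}  # state persists across every node
    current = "classify"
    while current is not None:
        current = NODES[current](state)
    return state

print(run_chain("LangGraph adds state and branching."))
print(run_graph("Where is my invoice?")["answer"])
```

&lt;p&gt;The chain always runs the same two steps in the same order; the graph inspects shared state and picks its next node, which is exactly the branching that plain chains struggle with.&lt;/p&gt;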




&lt;h3&gt;
  
  
  2. Why LangGraph is a Step Towards Agentic AI
&lt;/h3&gt;

&lt;p&gt;So why is this shift so important? Because it’s the foundation for &lt;strong&gt;Agentic AI&lt;/strong&gt;.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agents are more than LLMs. They’re systems that can:&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reason across steps (not just one-off answers).
&lt;/li&gt;
&lt;li&gt;Use tools (search engines, databases, APIs).
&lt;/li&gt;
&lt;li&gt;Maintain memory (short-term context + long-term knowledge).
&lt;/li&gt;
&lt;li&gt;Adapt dynamically (change course depending on inputs).
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;LangGraph gives you the scaffolding to build these systems.&lt;/strong&gt; Instead of relying on “prompt magic” or fragile chains, you now have a way to:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Break tasks into smaller sub-steps.
&lt;/li&gt;
&lt;li&gt;Manage state and memory across interactions.
&lt;/li&gt;
&lt;li&gt;Introduce branching logic (if this, then that).
&lt;/li&gt;
&lt;li&gt;Build persistent workflows that survive beyond a single request.
&lt;/li&gt;
&lt;/ul&gt;
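&lt;p&gt;&lt;em&gt;The list above can be sketched as a minimal agent loop in plain Python. The “tools” here are stub functions, not real APIs, and the routing rule is deliberately naive; the point is the shape: sub-steps, a branching decision per step, and state carried across iterations.&lt;/em&gt;&lt;/p&gt;

```python
# A minimal agent loop: break a task into sub-steps, pick a tool per
# step (if this, then that), and carry memory across iterations.
def search_web(query):
    return "stub result for: " + query  # stand-in for a real search tool

def calculate(expression):
    # Evaluate simple arithmetic with builtins disabled.
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"search": search_web, "calc": calculate}

def decide(task):
    # Naive branching logic: tasks containing digits go to the calculator.
    return "calc" if any(ch.isdigit() for ch in task) else "search"

def run_agent(tasks):
    state = {"memory": []}  # persists across all sub-steps
    for task in tasks:
        tool = decide(task)
        result = TOOLS[tool](task)
        state["memory"].append((task, tool, result))
    return state

state = run_agent(["2 + 3", "latest LangGraph release"])
for task, tool, result in state["memory"]:
    print(tool, ":", result)
```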

&lt;p&gt;This is what makes LangGraph a real step forward: it moves us closer to building AI that behaves like &lt;strong&gt;autonomous agents, capable of reasoning, acting, and learning — rather than just predicting the next word.&lt;/strong&gt;&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;Up next in &lt;a href=""&gt;&lt;strong&gt;Understanding Core Concepts of LangGraph (Deep Dive)&lt;/strong&gt;&lt;/a&gt;, we’ll dive into the core building blocks that make LangGraph tick: Nodes, Edges, and State. If you think of this chapter as the “why,” the next part is the “how”: a closer look at the execution units, the pathways between them, and the memory that makes workflows adaptive and alive.&lt;/p&gt;
&lt;/blockquote&gt;




</description>
      <category>agenticai</category>
      <category>llm</category>
      <category>langgraph</category>
      <category>genai</category>
    </item>
    <item>
      <title>Understanding Core Concepts of LangGraph (Deep Dive)</title>
      <dc:creator>Raunak ALI</dc:creator>
      <pubDate>Tue, 16 Sep 2025 03:30:00 +0000</pubDate>
      <link>https://forem.com/raunaklallala/understanding-core-concepts-of-langgraph-deep-dive-1d7h</link>
      <guid>https://forem.com/raunaklallala/understanding-core-concepts-of-langgraph-deep-dive-1d7h</guid>
      <description>

&lt;h1&gt;
  
  
  Understanding Core Concepts of LangGraph (Deep Dive)
&lt;/h1&gt;

&lt;p&gt;In the &lt;a href="https://dev.to/raunaklallala/langgraph-vs-chains-building-smarter-ai-workflows-with-state-branching-and-memory-56dd?preview=a3b0cc9225d9424df3488f81a91f12cdb0aa5491fd423d314307a209669dfaae1724d8416dd9b88431ddd3575e383685253544f0b5b84d04116904f6"&gt;last chapter&lt;/a&gt;, we talked about why LangGraph feels like a shift compared to traditional “linear chains.” Now, let’s slow down and zoom into its &lt;em&gt;DNA&lt;/em&gt;. At the core, LangGraph has three simple but powerful building blocks: &lt;strong&gt;Nodes, Edges, and State&lt;/strong&gt;.  &lt;/p&gt;

&lt;p&gt;If those names sound abstract, don’t worry: by the end of this chapter, you’ll see them the same way you see apps on your phone or stops on a subway map. They’re pieces you already know, just arranged in a smarter way.  &lt;/p&gt;




&lt;h3&gt;
  
  
  1. Nodes: The Execution Units
&lt;/h3&gt;

&lt;p&gt;A Node is basically “a single action.” Imagine breaking your workday into steps: checking email, making coffee, writing code, or scheduling a meeting. Each of those is a &lt;em&gt;Node&lt;/em&gt;.  &lt;/p&gt;

&lt;p&gt;In LangGraph, a Node can be many things:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A large language model call (like GPT, Gemini, or LLaMA).
&lt;/li&gt;
&lt;li&gt;A tool (search engine lookup, calculator, weather API, database query).
&lt;/li&gt;
&lt;li&gt;A custom function (Python function, regex cleaner, summarizer).
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each Node is like a worker with a simple contract: it takes an input, does its piece of the job, and pushes out an output.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Everyday example:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Think about ordering food on a delivery app.  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;One Node takes your food order.
&lt;/li&gt;
&lt;li&gt;Another Node calls the restaurant’s system.
&lt;/li&gt;
&lt;li&gt;Another Node calculates delivery time.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each does one thing, and together they create your experience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Analogy:&lt;/strong&gt; Nodes are like “stations” on a metro map. The passenger (your data) steps off at every station, something happens to them, and then they move along.  &lt;/p&gt;
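&lt;p&gt;&lt;em&gt;That contract (input in, work done, output out) can be sketched as plain Python functions. The delivery-app node names and the 10-minutes-per-item estimate are made up for illustration:&lt;/em&gt;&lt;/p&gt;

```python
# Each node is a small function with one job: take state in, do its
# piece of the work, and hand state back out.
def take_order(state):
    state["order"] = state["request"].title()
    return state

def call_restaurant(state):
    state["confirmed"] = True  # stand-in for a real restaurant API call
    return state

def estimate_delivery(state):
    # Toy estimate: 10 minutes per item in the order.
    items = state["order"].split(" And ")
    state["eta_minutes"] = 10 * len(items)
    return state

# Running the nodes in sequence: each worker honors the same contract.
state = {"request": "paneer wrap and mango lassi"}
for node in (take_order, call_restaurant, estimate_delivery):
    state = node(state)

print(state["order"], "in", state["eta_minutes"], "minutes")
```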




&lt;h3&gt;
  
  
  2. Edges: The Flow of Control
&lt;/h3&gt;

&lt;p&gt;Nodes mean nothing without connections. That’s where Edges come in—they define how data flows between steps.  &lt;/p&gt;

&lt;p&gt;Think of Edges as “decision pathways.” Sometimes they’re simple, sometimes they’re smart.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Types of edges:&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deterministic: Always go from Node A → Node B.
&lt;/li&gt;
&lt;li&gt;Conditional: Choose the next Node depending on logic (like: if temperature &amp;gt; 30°C → recommend ice cream, else → recommend coffee).
&lt;/li&gt;
&lt;li&gt;Looping: Keep retrying until a condition is satisfied (like refreshing a page until tickets become available).
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Everyday example:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Picture a customer support chatbot:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If you say “My internet is down,” the Edge routes you to technical troubleshooting.
&lt;/li&gt;
&lt;li&gt;If you say “I want to upgrade my plan,” the Edge routes you to subscription details.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Same system, different path, depending on your input.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Analogy:&lt;/strong&gt; If Nodes are stations, Edges are the railway tracks. They decide where the train (data) should head next.  &lt;/p&gt;
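&lt;p&gt;&lt;em&gt;All three edge types can be sketched as plain routing functions that return the name of the next node. The node names and conditions below are illustrative stand-ins, not a real chatbot:&lt;/em&gt;&lt;/p&gt;

```python
# Edges decide which node runs next, based on (or regardless of) state.

def deterministic_edge(state):
    # Always go from Node A to Node B.
    return "node_b"

def conditional_edge(state):
    # Choose the next node depending on logic in the state.
    if "internet" in state["message"]:
        return "troubleshoot"
    return "subscriptions"

def looping_edge(state):
    # Keep retrying until a condition is satisfied (capped at 3 tries,
    # like refreshing a page until tickets become available).
    state["attempts"] = state.get("attempts", 0) + 1
    if state["tickets_available"] or state["attempts"] == 3:
        return "done"
    return "refresh"

print(conditional_edge({"message": "My internet is down"}))
print(conditional_edge({"message": "I want to upgrade my plan"}))
```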




&lt;h3&gt;
  
  
  3. State: Memory That Persists
&lt;/h3&gt;

&lt;p&gt;This is where LangGraph flexes its muscles. State is memory. It’s what makes the system feel alive, instead of robotic.  &lt;/p&gt;

&lt;p&gt;In traditional chains (LangChain’s default way), once the step is done, the memory is gone. LangGraph changes that by carrying context across all steps—even across &lt;em&gt;different&lt;/em&gt; runs of the workflow.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Types of state:&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Short-term state → like scratch notes (e.g., storing your last reply).
&lt;/li&gt;
&lt;li&gt;Long-term state → like preferences you always want remembered (favorite language, tone, or settings).
&lt;/li&gt;
&lt;li&gt;Shared state → memory that’s accessible to &lt;em&gt;all&lt;/em&gt; nodes during execution.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Everyday example:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Think of Netflix. It remembers what you watched last night, even if you close the app. That’s State. Without it, every time you logged in it would say: “Hello, stranger. Want to start from Season 1 again?”  &lt;/p&gt;

&lt;p&gt;In LLM workflows, this is a game changer. Suddenly your AI assistant can remember your name, your last three queries, and the fact that you hate super-long formal emails.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Analogy:&lt;/strong&gt; State is your personal notebook. Instead of starting from scratch each time, the AI flips back a few pages and says: &lt;em&gt;“Ah, I remember what we were doing.”&lt;/em&gt;  &lt;/p&gt;
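&lt;p&gt;&lt;em&gt;Here’s a minimal sketch of that notebook in plain Python: short-term state lives for one run, while long-term state survives between runs. A module-level dict stands in for a real checkpoint store (a database, or LangGraph’s checkpointer); the assistant logic is a toy:&lt;/em&gt;&lt;/p&gt;

```python
# Long-term state: survives across runs. A dict stands in for a real
# persistent store here.
LONG_TERM = {}

def run_assistant(user_id, message):
    # Short-term state: scratch memory for this run only.
    state = {"message": message, "history": LONG_TERM.get(user_id, [])}

    # A node can flip back a few pages of the notebook...
    if state["history"]:
        reply = "Welcome back! Last time you said: " + state["history"][-1]
    else:
        reply = "Hello, stranger."

    # ...and write a new page for the next run.
    LONG_TERM.setdefault(user_id, []).append(message)
    return reply

print(run_assistant("u1", "I prefer bullet points"))
print(run_assistant("u1", "Draft an email"))
```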




&lt;h3&gt;
  
  
  4. Putting It All Together (Mini Example)
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fupq10ay11lsuobfz6i6q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fupq10ay11lsuobfz6i6q.png" alt=" " width="505" height="516"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s imagine a Translator Graph:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Node 1:&lt;/strong&gt; Input Capture → User enters “Hello World.”
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Node 2:&lt;/strong&gt; Translator LLM → Converts to French (“Bonjour le monde”).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Edge:&lt;/strong&gt; Pass result forward.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Node 3:&lt;/strong&gt; Output Formatter → Returns polished, styled output.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;State:&lt;/strong&gt; Stores both “Hello World” and “Bonjour le monde” for reference.
&lt;/li&gt;
&lt;/ul&gt;
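&lt;p&gt;&lt;em&gt;Under the hood, that Translator Graph can be sketched in a few lines of plain Python. The phrase table is a hard-coded stand-in for the Translator LLM node, and the node/edge wiring is illustrative rather than LangGraph’s real API:&lt;/em&gt;&lt;/p&gt;

```python
# A tiny lookup stands in for the Translator LLM node.
PHRASES = {"Hello World": "Bonjour le monde"}

def input_capture(state):
    state["source"] = state["raw"].strip()
    return "translate"            # edge: pass result forward

def translate(state):
    state["translated"] = PHRASES.get(state["source"], state["source"])
    return "format_output"        # edge: pass result forward

def format_output(state):
    state["output"] = state["source"] + " => " + state["translated"]
    return None                   # end of the graph

NODES = {"input_capture": input_capture, "translate": translate,
         "format_output": format_output}

def run(text):
    state = {"raw": text}         # state persists across all nodes
    node = "input_capture"
    while node is not None:
        node = NODES[node](state)
    return state

result = run("  Hello World ")
print(result["output"])
```

&lt;p&gt;Notice that after the run, the state still holds both “Hello World” and “Bonjour le monde”, exactly the reference memory described above.&lt;/p&gt;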

&lt;p&gt;Now layer this with something real: imagine building a &lt;strong&gt;job application assistant&lt;/strong&gt;:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Node 1: Parse candidate resume.
&lt;/li&gt;
&lt;li&gt;Node 2: Summarize applicant strengths.
&lt;/li&gt;
&lt;li&gt;Node 3: Match with a job description.
&lt;/li&gt;
&lt;li&gt;Edges: If skills gap detected → route to skill-gap explainer node.
&lt;/li&gt;
&lt;li&gt;State: Stores both the resume context and job descriptions for continuity.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That’s not just a chatbot; in fact, it’s a real system that &lt;strong&gt;“remembers”&lt;/strong&gt; and adapts.  &lt;/p&gt;




&lt;h2&gt;
  
  
  Why These Concepts Matter
&lt;/h2&gt;

&lt;p&gt;Once you start thinking in terms of Nodes, Edges, and State, you realize this is less about AI “chats” and more about &lt;em&gt;AI workflows&lt;/em&gt;.  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Nodes let you modularize logic (build small, reusable steps).
&lt;/li&gt;
&lt;li&gt;Edges let you branch and adapt (no more rigid scripts).
&lt;/li&gt;
&lt;li&gt;State gives your system context, continuity, and memory—making it feel closer to an intelligent colleague than a forgetful bot.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For me personally, the &lt;strong&gt;State&lt;/strong&gt; part was what made pieces click. The first time I built a system that could remember user preferences &lt;em&gt;across sessions&lt;/em&gt;—like “always reply in bullet points unless I say otherwise”—it felt like a leap. Suddenly, it wasn’t just a chatbot anymore. It was my &lt;em&gt;teammate&lt;/em&gt;.  &lt;/p&gt;

&lt;p&gt;This trio (Nodes, Edges, State) is why LangGraph isn’t just a library, but a framework for &lt;em&gt;persistent, adaptive, multi-step systems&lt;/em&gt; you can actually trust.  &lt;/p&gt;




</description>
      <category>langgraph</category>
      <category>genai</category>
      <category>llm</category>
      <category>python</category>
    </item>
    <item>
      <title>Gen AI-Powered HR Candidate &amp; Role Insights — A Practical LangChain Demo</title>
      <dc:creator>Raunak ALI</dc:creator>
      <pubDate>Sun, 14 Sep 2025 16:12:24 +0000</pubDate>
      <link>https://forem.com/raunaklallala/article-1-chapter-f-practical-langchain-demo-with-google-gemini-duckduckgo-1a58</link>
      <guid>https://forem.com/raunaklallala/article-1-chapter-f-practical-langchain-demo-with-google-gemini-duckduckgo-1a58</guid>
      <description>&lt;h1&gt;
  
  
  &lt;strong&gt;Gen AI-Powered HR Candidate &amp;amp; Role Insights — A Practical LangChain Demo&lt;/strong&gt;
&lt;/h1&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Intro: Why Automate HR Research?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Buzzwords aside, the day-to-day reality of HR means endless Googling, comparing candidate profiles and job requirements, and manually building up shortlists.&lt;br&gt;&lt;br&gt;
Instead, what if you could ask a question like &lt;em&gt;“Best fit for a Product Manager role in Mumbai?”&lt;/em&gt; and let an AI tool scour the web, summarize strengths, skills, and job-market needs, and instantly give you an actionable verdict?&lt;/p&gt;

&lt;p&gt;This is exactly what we build in this tutorial: a real, working pipeline using LangChain, Google Gemini (free API), and DuckDuckGo search—all glued together with proper chains so you can clearly see each step. Perfect for HR professionals, GenAI learners, and anyone wanting to explore practical LLM app design.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://colab.research.google.com/drive/1x3FXno8bOgQoFwXEPiPbTESJFMSGMPjQ?usp=sharing" rel="noopener noreferrer"&gt;&lt;strong&gt;Google collab notebook&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Project Flow Diagram (Text Description)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz5dgrnjgsf5l5nileop1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz5dgrnjgsf5l5nileop1.png" alt=" " width="421" height="181"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Jupyter Notebook: Step-by-Step with Explanations&lt;/strong&gt;
&lt;/h2&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;Cell 1: Install All Required Packages&lt;/strong&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="err"&gt;!&lt;/span&gt;&lt;span class="n"&gt;pip&lt;/span&gt; &lt;span class="n"&gt;install&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;U&lt;/span&gt; &lt;span class="n"&gt;langchain&lt;/span&gt; &lt;span class="n"&gt;langchain&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;google&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;genai&lt;/span&gt; &lt;span class="n"&gt;langchain&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;community&lt;/span&gt; &lt;span class="n"&gt;duckduckgo&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;search&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
We install the latest versions of the core libraries for our pipeline:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;langchain&lt;/strong&gt;: Orchestration of all LLM chains, agents, and tools.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;langchain-google-genai&lt;/strong&gt;: Seamlessly integrates Google’s Gemini LLM with LangChain.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;langchain-community&lt;/strong&gt;: Extra integrations, including tools and data connectors.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;duckduckgo-search&lt;/strong&gt;: Provides free web search functionality, crucial for fetching real-world, up-to-date info on candidates or job skills.&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;Cell 2: Set up Gemini API Key&lt;/strong&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;
&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;GOOGLE_API_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;""&lt;/span&gt; &lt;span class="c1"&gt;# &amp;lt;--- YOUR KEY GOES HERE
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Here you save your Gemini API key in your Python environment so the LLM can authenticate and run.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Pro Tip:&lt;/strong&gt; When sharing notebooks, leave this key blank or load it from an environment variable; never hard-code the real value publicly!&lt;/p&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;Cell 3: Import LangChain Libraries and Helpers&lt;/strong&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain.llms.base&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;LLM&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain.callbacks.manager&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;CallbackManagerForLLMRun&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_google_genai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ChatGoogleGenerativeAI&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_community.tools&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;DuckDuckGoSearchResults&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain.prompts&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;PromptTemplate&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ChatPromptTemplate&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain.chains&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;LLMChain&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;SimpleSequentialChain&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The core LangChain classes/components.&lt;/li&gt;
&lt;li&gt;Gemini’s LLM connector.&lt;/li&gt;
&lt;li&gt;DuckDuckGo tool for free search.&lt;/li&gt;
&lt;li&gt;Prompt template classes to instruct both tools and LLMs.&lt;/li&gt;
&lt;li&gt;LLMChain and SimpleSequentialChain for composable pipeline steps.&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;Cell 4: Build a Custom LLM Wrapper for DuckDuckGo&lt;/strong&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;DuckDuckGoLLM&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;LLM&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;_call&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;stop&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;run_manager&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;CallbackManagerForLLMRun&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="n"&gt;kwargs&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;search&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;DuckDuckGoSearchResults&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;search&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nd"&gt;@property&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;_llm_type&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;custom_duckduckgo&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;LangChain encourages each “chain” to look like an LLM (even if it’s a tool).&lt;/li&gt;
&lt;li&gt;We make a minimal subclass that “pretends” to be an LLM but simply calls DuckDuckGo, returning a string of search results.&lt;/li&gt;
&lt;li&gt;This allows us to wire the search step right into a &lt;code&gt;SimpleSequentialChain&lt;/code&gt; with no hacks!&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;Cell 5: Get Your HR User Input&lt;/strong&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;
&lt;span class="n"&gt;target_role&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Product Manager&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;location&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Mumbai&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;


&lt;span class="n"&gt;search_query&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;'"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;location&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt; (&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;target_role&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt; OR &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Senior &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;target_role&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;) site:linkedin.com/in &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
    &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;resume projects skills 2025&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The HR user enters a Location and Target job role.&lt;/li&gt;
&lt;li&gt;Both are merged into a single, smart query for richer, contextual web results (location plus the latest skill demands).&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;Cell 6: Define the Search Chain&lt;/strong&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;search_prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;PromptTemplate&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_template&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{query}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;duckduck_llm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;DuckDuckGoLLM&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;search_chain&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;LLMChain&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;duckduck_llm&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;search_prompt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;output_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;search_results&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sets up the first LangChain chain—using your custom LLM and a simple prompt template that inserts your full HR query.&lt;/li&gt;
&lt;li&gt;The chain will take the HR query, perform the DuckDuckGo search, and output free-text search results as &lt;code&gt;'search_results'&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;Cell 7: Define the Gemini Summarizer Chain&lt;/strong&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;
&lt;span class="n"&gt;gemini_llm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ChatGoogleGenerativeAI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gemini-1.5-flash&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;summarize_prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ChatPromptTemplate&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_template&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;You&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;re an HR assistant. &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Given these web search results (candidate info and top skills):&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;{search_results}&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="s"&gt; For all the different Candidiates &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;TARGET ROLE IS &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;target_role&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Summarize in:&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;- List all the candidates with thier links from the info &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;- A list of each candidate&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;s main strengths&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;- Key candidates highlights matching the role&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;- A ready-to-copy shortlist or not recommendation for busy HR.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Use clear, up-to-date professional language.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;summarize_chain&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;LLMChain&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;gemini_llm&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;summarize_prompt&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sets up step two, using Gemini to process the fetched search results.&lt;/li&gt;
&lt;li&gt;The prompt is engineered (drawing from good prompt techniques) to give:

&lt;ul&gt;
&lt;li&gt;Bullet points of strengths&lt;/li&gt;
&lt;li&gt;Match to role&lt;/li&gt;
&lt;li&gt;Quick “shortlist or not” recommendation&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Designed to appeal to HR and to demo chained, composable LLM integration.
&lt;/li&gt;

&lt;li&gt;To learn more about the &lt;em&gt;Prompt Engineering&lt;/em&gt; used in the above code snippet and craft your own accurate custom prompts, refer to this &lt;a href="https://dev.to/raunaklallala/article-1-intro-to-gen-aillms-and-langchain-frameworkspart-c-48ij"&gt;&lt;strong&gt;Guide&lt;/strong&gt;&lt;/a&gt;.&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Cell 8: Combine Both Chains and Run!&lt;/strong&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;overall_chain&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;SimpleSequentialChain&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;chains&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;search_chain&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;summarize_chain&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;verbose&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;overall_chain&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;search_query&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;SimpleSequentialChain&lt;/code&gt; connects both steps so the output of &lt;code&gt;search_chain&lt;/code&gt; becomes the input to &lt;code&gt;summarize_chain&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The process is fully modular: each chain is defined, testable, and reusable on its own.&lt;/li&gt;
&lt;li&gt;You pass your overall query and get back Gemini’s HR-friendly recommendation—just like a real hiring assistant would do.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Wrap-Up&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;What did we build?&lt;/strong&gt;
An honest, simple LangChain pipeline:
&lt;em&gt;User input&lt;/em&gt; → &lt;em&gt;Search tool&lt;/em&gt; → &lt;em&gt;LLM summary&lt;/em&gt; → &lt;em&gt;Instant, actionable candidate verdict!&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;What does this show off?&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;True, composable multi-step processing in LangChain&lt;/li&gt;
&lt;li&gt;Prompt engineering best practices for HR&lt;/li&gt;
&lt;li&gt;How to blend web tools and LLMs, even without complicated agent code&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Where to go next?&lt;/strong&gt;
Try running with more candidate/role inputs, extending with another Gemini step, or producing a PDF/CSV for your HR team!&lt;/li&gt;

&lt;/ul&gt;




</description>
      <category>genai</category>
      <category>langchain</category>
      <category>duckduckgo</category>
      <category>llm</category>
    </item>
    <item>
      <title>LangChain &amp; LangGraph: What You Need to Know Before You Build Agents and GenAI Applications</title>
      <dc:creator>Raunak ALI</dc:creator>
      <pubDate>Sun, 14 Sep 2025 16:11:31 +0000</pubDate>
      <link>https://forem.com/raunaklallala/article-1-chapter-e-introduction-to-langchain-langgraph-agn</link>
      <guid>https://forem.com/raunaklallala/article-1-chapter-e-introduction-to-langchain-langgraph-agn</guid>
      <description>&lt;h2&gt;
  
  
  LangChain &amp;amp; LangGraph: What You Need to Know Before You Build Agents and GenAI Applications
&lt;/h2&gt;




&lt;h3&gt;
  
  
  1. Why LangChain?
&lt;/h3&gt;

&lt;p&gt;LangChain is one of the most widely adopted frameworks for building &lt;strong&gt;LLM-powered applications&lt;/strong&gt;. While LLMs on their own can generate text, LangChain connects them to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Tools&lt;/strong&gt; (like APIs, calculators, databases).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Memory&lt;/strong&gt; (short-term &amp;amp; long-term).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Chains &amp;amp; Agents&lt;/strong&gt; (structured multi-step reasoning).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Think of it as the &lt;strong&gt;glue layer&lt;/strong&gt; between an LLM and the real world.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Official Docs&lt;/em&gt;: &lt;a href="https://python.langchain.com/docs/introduction/" rel="noopener noreferrer"&gt;LangChain Documentation&lt;/a&gt;&lt;/p&gt;
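&lt;p&gt;The “glue layer” idea can be sketched without any framework at all: a tool call, a prompt template, and a model call composed into one pipeline. The &lt;code&gt;fake_llm&lt;/code&gt; and &lt;code&gt;web_lookup&lt;/code&gt; helpers below are illustrative stand-ins, not real LangChain APIs:&lt;/p&gt;

```python
def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call
    return f"LLM answer for: {prompt}"

def web_lookup(query: str) -> str:
    # Stand-in for a search tool (e.g., DuckDuckGo)
    return f"search results for '{query}'"

def chain(user_input: str) -> str:
    context = web_lookup(user_input)                    # tool step
    prompt = f"Using {context}, answer: {user_input}"   # prompt template
    return fake_llm(prompt)                             # model step

print(chain("latest EV prices"))
```

&lt;p&gt;LangChain packages exactly this kind of composition, plus memory and agents, behind reusable abstractions.&lt;/p&gt;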




&lt;h3&gt;
  
  
  2. What About LangGraph?
&lt;/h3&gt;

&lt;p&gt;LangGraph is an extension built by the same team, focused on &lt;strong&gt;agent workflows&lt;/strong&gt;.&lt;br&gt;
Where LangChain gives you chains and tool usage, &lt;strong&gt;LangGraph adds state machines and graphs&lt;/strong&gt; — ideal for orchestrating &lt;strong&gt;multi-step or multi-agent systems&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In short:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;LangChain&lt;/strong&gt; → Great for prototyping and connecting models.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LangGraph&lt;/strong&gt; → Great for scaling to real-world, reliable agent systems.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Official Docs&lt;/em&gt;: &lt;a href="https://python.langchain.com/docs/langgraph" rel="noopener noreferrer"&gt;LangGraph Documentation&lt;/a&gt;&lt;/p&gt;
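&lt;p&gt;The graph idea is easy to hand-roll in plain Python: nodes are functions that update a shared state dict, and each node returns the name of the next node to run. All names here are illustrative, not LangGraph’s actual API:&lt;/p&gt;

```python
def research(state):
    # Node: gather information, then hand off to the next node
    state["notes"] = "found 3 sources"
    return "summarize"

def summarize(state):
    # Node: produce a summary from the shared state
    state["summary"] = f"summary of {state['notes']}"
    return "END"

NODES = {"research": research, "summarize": summarize}

def run_graph(start, state):
    node = start
    while node != "END":
        node = NODES[node](state)   # each node returns the next edge
    return state

print(run_graph("research", {})["summary"])
```

&lt;p&gt;LangGraph formalizes this pattern with typed state, conditional edges, and persistence, which is what makes it suitable for production agent workflows.&lt;/p&gt;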




&lt;h3&gt;
  
  
  3. Why This Tutorial Uses LangChain and LangGraph
&lt;/h3&gt;

&lt;p&gt;In this series, our focus is &lt;strong&gt;Agentic AI&lt;/strong&gt;. LangChain &amp;amp; LangGraph give us:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt; &lt;strong&gt;Rapid prototyping&lt;/strong&gt; (easy chains, tools, prompts).&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Production patterns&lt;/strong&gt; (memory, state management, graphs).&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Community support&lt;/strong&gt; (huge ecosystem, tutorials, integrations).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;They’re not the only option, but they balance &lt;strong&gt;beginner-friendliness&lt;/strong&gt; and &lt;strong&gt;professional depth&lt;/strong&gt;.&lt;/p&gt;




&lt;h3&gt;
  
  
  4. Alternatives You Should Know
&lt;/h3&gt;

&lt;p&gt;Depending on your use case, you might explore:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;LlamaIndex (GPT Index)&lt;/strong&gt; → Strong for &lt;strong&gt;RAG (Retrieval-Augmented Generation)&lt;/strong&gt; use cases.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Haystack&lt;/strong&gt; → Powerful for &lt;strong&gt;search + retrieval pipelines&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Guidance&lt;/strong&gt; → Focused on &lt;strong&gt;controlling LLM output formats&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AutoGPT / BabyAGI&lt;/strong&gt; → More experimental “autonomous” frameworks.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each has trade-offs, but LangChain remains the most versatile entry point.&lt;/p&gt;




&lt;h3&gt;
  
  
  5. Limitations of LangChain / LangGraph
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt; &lt;strong&gt;Complexity&lt;/strong&gt; → Steep learning curve for beginners beyond simple chains.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Performance&lt;/strong&gt; → Agents can be slow/expensive if not optimized (tool overuse, excessive prompts).&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Still evolving&lt;/strong&gt; → APIs change quickly; keeping up with updates is necessary.&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  6. Prerequisites
&lt;/h3&gt;

&lt;p&gt;Before we jump into our practicals in the next chapter, let’s talk &lt;strong&gt;prerequisites&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;To follow along with LangChain implementations, you should ideally be comfortable with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt; &lt;strong&gt;Python basics&lt;/strong&gt; → writing/understanding simple scripts.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;pip&lt;/strong&gt; → installing Python packages.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Virtual environments (venv / conda)&lt;/strong&gt; → optional but useful to keep projects isolated.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Jupyter / Google Colab&lt;/strong&gt; → running Python code in notebooks.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you’re new, don’t worry — this module series will provide &lt;strong&gt;ready-to-use Google Colab notebooks&lt;/strong&gt; alongside the explanations. That way, you don’t need to set up a full dev environment on your laptop right away.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Next: &lt;strong&gt;Build your own GenAI-powered HR research assistant!&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In this hands-on guide, you’ll see exactly how to connect Google Gemini and DuckDuckGo with LangChain to fetch real-time news, summarize topics, and create modular AI chains in minutes. &lt;/li&gt;
&lt;li&gt;Curious how it works? Don’t miss this practical walkthrough!
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/raunaklallala/article-1-chapter-f-practical-langchain-demo-with-google-gemini-duckduckgo-1a58"&gt;&lt;strong&gt;Gen AI-Powered HR Candidate &amp;amp; Role Insights — A Practical LangChain Demo&lt;/strong&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

</description>
      <category>genai</category>
      <category>llm</category>
      <category>langchain</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Agentic AI Explained: A Transition from Gen AI to Agentic AI</title>
      <dc:creator>Raunak ALI</dc:creator>
      <pubDate>Sun, 14 Sep 2025 16:10:55 +0000</pubDate>
      <link>https://forem.com/raunaklallala/article-1-chapter-d-transition-from-gen-ai-to-agentic-ai-1k68</link>
      <guid>https://forem.com/raunaklallala/article-1-chapter-d-transition-from-gen-ai-to-agentic-ai-1k68</guid>
      <description>&lt;h2&gt;
  
  
  Agentic AI Explained: How Smart Agents Turn Language Models Into Real-World Problem Solvers
&lt;/h2&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;1. From Prompting to Agents&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Think of &lt;strong&gt;Generative AI&lt;/strong&gt; today like a really smart intern: you ask it a question (prompt), it gives you an answer. That’s prompting: simple and powerful.&lt;/p&gt;

&lt;p&gt;But here’s the catch: LLMs don’t “know” everything. They generate based on their training data and the words you give them. So, if you ask an LLM &lt;em&gt;“What’s the current stock price of Tesla?”&lt;/em&gt;, it can’t fetch that for you. It’ll guess.&lt;/p&gt;

&lt;p&gt;That’s where &lt;strong&gt;Agents&lt;/strong&gt; come in.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Prompting&lt;/strong&gt; = “Ask and answer” model.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Agents&lt;/strong&gt; = Models + Tools + Memory + Autonomy.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In short: while prompts are static queries, &lt;strong&gt;agents are dynamic systems&lt;/strong&gt; that decide what steps to take, which tools to use, and how to carry out multi-step goals.&lt;/p&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;2. What Is Agentic AI?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Agentic AI = LLMs upgraded into orchestrators.&lt;br&gt;
Instead of just generating text, they:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Access tools&lt;/strong&gt; (APIs, databases, calculators, search engines).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use memory&lt;/strong&gt; (short-term for conversation flow, long-term for knowledge retention).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Plan and reason&lt;/strong&gt; (break a task into smaller steps, execute in sequence).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Self-reflect&lt;/strong&gt; (evaluate outputs, retry or improve).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A simple analogy:&lt;/p&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Gen AI&lt;/strong&gt; = You asking ChatGPT for an essay draft.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Agentic AI&lt;/strong&gt; = A virtual assistant that researches, cites sources, cross-checks facts, and then drafts the essay all without you having to prompt it for each step.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
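
&lt;p&gt;The four capabilities above boil down to a plan → act → reflect loop. Here is a toy sketch of that skeleton, with &lt;code&gt;plan&lt;/code&gt;, &lt;code&gt;act&lt;/code&gt;, and &lt;code&gt;reflect&lt;/code&gt; as illustrative stand-ins for LLM calls and tool invocations:&lt;/p&gt;

```python
def plan(goal):
    # 1. Decompose the goal into steps (an LLM would do this for real)
    return [f"step {i} of {goal}" for i in (1, 2)]

def act(step):
    # 2. Execute a step with a tool or model call
    return f"result({step})"

def reflect(result):
    # 3. Self-check the output; a retry policy would go here
    return result is not None

def run_agent(goal):
    outputs = []
    for step in plan(goal):
        result = act(step)
        if reflect(result):
            outputs.append(result)
    return outputs

print(run_agent("write report"))
```

&lt;p&gt;Real frameworks add tool routing, memory, and error handling around this same loop.&lt;/p&gt;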


&lt;h3&gt;
  
  
  &lt;strong&gt;3. Examples in Action&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;LangChain + LangGraph&lt;/strong&gt; → Frameworks to build multi-step reasoning agents. LangGraph adds &lt;em&gt;state machines&lt;/em&gt; so agents can decide “what to do next.”&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AutoGPT&lt;/strong&gt; → A famous early experiment where you just give a high-level goal (“research electric cars and make a report”), and the agent loops through tasks autonomously.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real-world&lt;/strong&gt; → Customer service bots that not only answer FAQs but also pull data from your CRM, check stock inventory, and send an email confirmation.&lt;/li&gt;
&lt;/ul&gt;


&lt;h3&gt;
  
  
  &lt;strong&gt;4. Architecture Breakdown&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Here’s how most &lt;strong&gt;Agentic AI systems&lt;/strong&gt; are structured:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fypvb78snt9rdus8kcsvj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fypvb78snt9rdus8kcsvj.png" alt=" " width="648" height="648"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;LLM Core&lt;/strong&gt; → Handles reasoning and natural language. (Gemini, GPT-4, Claude, LLaMA-based models)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Toolset&lt;/strong&gt; → Plug-ins like web search, SQL database connectors, calculators.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Controller/Orchestrator&lt;/strong&gt; → Decides when to call which tool.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Memory Layer&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Short-term memory&lt;/em&gt; (conversation so far).&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Long-term memory&lt;/em&gt; (knowledge storage, embeddings, vector databases like Pinecone, Weaviate).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Feedback/Evaluation Loop&lt;/strong&gt; → The agent checks whether outputs make sense, retries if needed.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Think of it like a &lt;strong&gt;team&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;LLM = the brain&lt;/li&gt;
&lt;li&gt;Tools = the hands&lt;/li&gt;
&lt;li&gt;Memory = the notebook&lt;/li&gt;
&lt;li&gt;Orchestrator = the project manager&lt;/li&gt;
&lt;/ul&gt;


&lt;h3&gt;
  
  
  &lt;strong&gt;5. Strengths &amp;amp; Opportunities&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Autonomy&lt;/strong&gt; → Agents can run for hours or days on tasks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Efficiency&lt;/strong&gt; → Offload repetitive workflows (data entry, monitoring, research).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt; → Businesses can replace complex manual processes with AI pipelines.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Customization&lt;/strong&gt; → Agents can be fine-tuned to specific industries (finance, healthcare, e-commerce).&lt;/li&gt;
&lt;/ul&gt;


&lt;h3&gt;
  
  
  &lt;strong&gt;6. Limitations &amp;amp; Warnings&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Like any shiny new tech, &lt;strong&gt;Agentic AI isn’t flawless&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Hallucinations&lt;/strong&gt;: Agents may “act confidently wrong” (e.g., citing non-existent papers).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost Explosion&lt;/strong&gt;: Running loops or external calls repeatedly can burn API credits.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Control Risk&lt;/strong&gt;: Fully autonomous systems can spiral if not sandboxed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Latency&lt;/strong&gt;: Multi-step reasoning = longer response times.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security&lt;/strong&gt;: Allowing agents tool access (e.g., your email or database) can open doors for misuse.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Example&lt;/strong&gt;: Early versions of AutoGPT would happily Google random things for hours, running up costs, with no guarantee of useful output.&lt;/p&gt;


&lt;h3&gt;
  
  
  &lt;strong&gt;7. Where Things Are Headed&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Agentic AI is shaping the next wave of applications:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AI Researchers&lt;/strong&gt; → Systems that design and test hypotheses.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI DevOps&lt;/strong&gt; → Agents fixing code, testing, and deploying autonomously.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enterprise Assistants&lt;/strong&gt; → Not just answering queries but executing workflows end-to-end.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Frameworks like &lt;strong&gt;LangGraph, CrewAI, and Microsoft’s AutoGen&lt;/strong&gt; are already bridging this gap. The vision? Moving from &lt;em&gt;single-shot Q&amp;amp;A&lt;/em&gt; to &lt;strong&gt;AI teammates&lt;/strong&gt; that collaborate with humans on real tasks.&lt;/p&gt;


&lt;h3&gt;
  
  
  &lt;strong&gt;8. Key Design Patterns &amp;amp; Implementation Strategies&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;When moving from single-shot Gen AI to &lt;strong&gt;Agentic AI systems&lt;/strong&gt;, engineers often rely on well-tested design patterns. These patterns help balance autonomy with control, ensuring agents don’t just “guess” but actually execute structured workflows.&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;A. Task Decomposition&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Agents break down complex goals into smaller, actionable steps.&lt;/p&gt;

&lt;p&gt;💡 Example:&lt;br&gt;
Instead of asking an agent directly: &lt;em&gt;“Write a business plan for an EV startup”&lt;/em&gt;, the agent may decompose it into:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Research EV market trends.&lt;/li&gt;
&lt;li&gt;Analyze competitors.&lt;/li&gt;
&lt;li&gt;Estimate costs and revenue.&lt;/li&gt;
&lt;li&gt;Draft business plan sections.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;🔹 In &lt;strong&gt;LangChain&lt;/strong&gt;, this often uses a “planner → executor” pattern.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Pseudo-pattern: Decompose into subtasks
&lt;/span&gt;&lt;span class="n"&gt;goal&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Write a business plan for an EV startup&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;subtasks&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;plan&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;goal&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# e.g., planner LLM decides steps
&lt;/span&gt;&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;task&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;subtasks&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;&lt;strong&gt;B. Tool Selection Heuristics&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Agents must know &lt;em&gt;which tool to use&lt;/em&gt; at the right time (e.g., calculator, database, search engine).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Heuristic-based&lt;/strong&gt;: Simple if-else logic.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Model-driven&lt;/strong&gt;: LLM itself decides via prompt engineering.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;calculate&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;user_query&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nf"&gt;use&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;calculator_tool&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;elif&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;search&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;user_query&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nf"&gt;use&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;google_search_tool&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nf"&gt;use&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;LLM&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This avoids unnecessary tool calls and keeps costs down.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;C. Memory Management&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Memory is what makes agents “context-aware.”&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Short-term memory&lt;/strong&gt; → Keeps track of conversation/session (like a chatbot remembering what you just said).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Long-term memory&lt;/strong&gt; → Uses vector databases (like Pinecone, Weaviate, FAISS) to store embeddings for retrieval later.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Example: retrieve memory from vector store
&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;What did the customer say about pricing last week?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;context&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;vector_db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;similarity_search&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;generate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;&lt;strong&gt;D. State Management&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Agents don’t just move linearly; they often branch, loop, or run in parallel.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;State tracking&lt;/strong&gt; ensures the agent knows &lt;em&gt;where it is&lt;/em&gt; in the workflow.&lt;/li&gt;
&lt;li&gt;Frameworks like &lt;strong&gt;LangGraph&lt;/strong&gt; use a &lt;strong&gt;state machine&lt;/strong&gt; approach (nodes = steps, edges = transitions).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;📌 Example:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyrues7vtx3nyyd6slvzq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyrues7vtx3nyyd6slvzq.png" alt=" " width="800" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This keeps the workflow predictable and debuggable.&lt;/p&gt;
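&lt;p&gt;Branching and looping can be modeled the same way: a node inspects the state and returns a different next node depending on a condition. The quality threshold below is a toy stand-in for a real evaluation step:&lt;/p&gt;

```python
def draft(state):
    # Node: each attempt raises the (toy) quality score
    state["attempts"] += 1
    state["quality"] = state["attempts"] * 40
    return "review"

def review(state):
    # Conditional edge: loop back to "draft" until quality passes
    return "END" if state["quality"] >= 80 else "draft"

NODES = {"draft": draft, "review": review}

def run(state):
    node = "draft"
    while node != "END":
        node = NODES[node](state)
    return state

print(run({"attempts": 0}))   # finishes after two drafting passes
```

&lt;p&gt;Because every transition is explicit, you can log the state at each hop and see exactly where a workflow went wrong.&lt;/p&gt;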




&lt;p&gt;Together, these four patterns, &lt;strong&gt;decomposition, tool heuristics, memory, and state management&lt;/strong&gt;, are what elevate LLMs from simple “word predictors” into &lt;strong&gt;reliable autonomous systems&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Simple takeaway&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Gen AI&lt;/strong&gt; is great for one-off answers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Agentic AI&lt;/strong&gt; is about orchestrating steps, tools, and memory for &lt;strong&gt;autonomous workflows&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This transition is not just technical; it’s a &lt;strong&gt;paradigm shift&lt;/strong&gt; in how businesses, students, and professionals will work with AI.&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Want to learn in depth on how to Talk to LLMs?&lt;/strong&gt;&lt;br&gt;
Discover Prompt Engineering from basic to Expert level&lt;br&gt;
Understand how to unlock the power of Prompt Engineering to get the best out of LLM Models &lt;br&gt;
Read:-&lt;a href="https://dev.to/raunaklallala/article-1-intro-to-gen-aillms-and-langchain-frameworkspart-c-48ij"&gt;&lt;strong&gt;Prompt Engineering Made Simple: Real-World Techniques, Mistakes to Avoid, and Hands-On for Everyone&lt;/strong&gt;&lt;/a&gt;  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Want to Build Real AI Apps, Not Just Test Prompts?&lt;/strong&gt;&lt;br&gt;
Discover why LangChain and LangGraph are the go-to frameworks for anyone looking to turn LLMs into powerful, tool-using apps—and even autonomous agents.&lt;br&gt;
This chapter breaks down what makes them essential, when to use each, their strengths and limitations, plus practical prerequisites for getting started. Perfect for beginners and future builders!&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Read &lt;a href="https://dev.to/raunaklallala/article-1-chapter-e-introduction-to-langchain-langgraph-agn"&gt;&lt;strong&gt;LangChain &amp;amp; LangGraph: What You Need to Know Before You Build Agents and GenAI Applications&lt;/strong&gt;&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;




&lt;p&gt;&lt;strong&gt;Got questions or ideas?&lt;/strong&gt; Drop a comment below — I’d love to hear your thoughts.&lt;br&gt;
&lt;strong&gt;Let’s connect:&lt;/strong&gt; &lt;a href="https://www.linkedin.com/in/your-profile/" rel="noopener noreferrer"&gt;🔗 My LinkedIn&lt;/a&gt;  &lt;/p&gt;

</description>
      <category>genai</category>
      <category>agents</category>
      <category>langchain</category>
      <category>llm</category>
    </item>
    <item>
      <title>Prompt Engineering Made Simple: Real-World Techniques, Mistakes to Avoid, and Hands-On for Everyone</title>
      <dc:creator>Raunak ALI</dc:creator>
      <pubDate>Wed, 10 Sep 2025 05:37:02 +0000</pubDate>
      <link>https://forem.com/raunaklallala/article-1-intro-to-gen-aillms-and-langchain-frameworkspart-c-48ij</link>
      <guid>https://forem.com/raunaklallala/article-1-intro-to-gen-aillms-and-langchain-frameworkspart-c-48ij</guid>
      <description>&lt;h2&gt;
  
  
  Prompt Engineering Made Simple: Real-World Techniques, Mistakes to Avoid, and Hands-On for Everyone
&lt;/h2&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;1. What Is Prompt Engineering?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Prompt Engineering is basically how you “&lt;strong&gt;speak AI&lt;/strong&gt;” so it actually understands you.  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;As a student or fresher, think of it like framing your questions to a teacher clearly so you get the best answer.
&lt;/li&gt;
&lt;li&gt;As a business leader, it’s about giving your AI “&lt;strong&gt;assistant&lt;/strong&gt;” the exact instructions it needs, so there’s no guesswork, just action.&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;2. Why Prompt Engineering Is Important&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Accuracy&lt;/strong&gt; — You get relevant, sharp responses, not hallucinations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consistency&lt;/strong&gt; — The AI understands what you want, every time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Efficiency&lt;/strong&gt; — Fewer rewrites, faster results.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Control&lt;/strong&gt; — You guide tone, structure, and style.&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  3. Core Prompting Techniques
&lt;/h3&gt;

&lt;h4&gt;
  
  
  (a) Zero-Shot Prompting
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Ask directly; no setup.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Example:&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;What is the capital of Brazil?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;Great for simple questions.&lt;/li&gt;
&lt;li&gt;Not always reliable if the prompt is vague.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  (b) Few-Shot Prompting
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Show a couple of demonstrations, then ask.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Example:&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Translate to French:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;I love programming. → J'adore la programmation.
&lt;/li&gt;
&lt;li&gt;This food is delicious. → Cette nourriture est délicieuse.
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now translate: “Where is the nearest train station?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;Excellent for establishing format.&lt;/li&gt;
&lt;li&gt;Builds pattern understanding quickly.&lt;/li&gt;
&lt;/ul&gt;
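&lt;p&gt;The few-shot pattern above can also be assembled programmatically. Here is a minimal sketch in plain Python (the helper name is our own, not a library API); the resulting string can be passed to any LLM:&lt;/p&gt;

```python
# Minimal sketch: assembling a few-shot prompt as a plain string.
# The examples establish the input → output pattern before the real query.
examples = [
    ("I love programming.", "J'adore la programmation."),
    ("This food is delicious.", "Cette nourriture est délicieuse."),
]

def build_few_shot_prompt(query: str) -> str:
    lines = ["Translate to French:"]
    for i, (source, target) in enumerate(examples, start=1):
        lines.append(f"{i}. {source} → {target}")
    lines.append(f'Now translate: "{query}"')
    return "\n".join(lines)

prompt = build_few_shot_prompt("Where is the nearest train station?")
print(prompt)
```

&lt;p&gt;Swapping the example pairs changes the pattern the model imitates, which is exactly why few-shot prompting is so good at establishing format.&lt;/p&gt;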

&lt;h4&gt;
  
  
  (c) Role Prompting
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Assign a persona or expertise.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Example:&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;You are a professional business consultant.&lt;br&gt;
 Suggest three cost-saving strategies for a small retail store.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;Directs tone and domain; powerful for business output.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  (d) Chain-of-Thought (CoT) Prompting
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Encourage step-by-step reasoning.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Example:&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;A train leaves at 3 PM traveling 60 km/h. Another leaves at 4 PM traveling 80 km/h.&lt;br&gt;
When does the second catch up? &lt;em&gt;Think step by step.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;Great for math and logic tasks.&lt;/li&gt;
&lt;li&gt;Small models may struggle to maintain coherence.&lt;/li&gt;
&lt;/ul&gt;
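&lt;p&gt;Applying the CoT cue is mechanical enough to wrap in a tiny helper. A sketch in plain Python (the function name is our own, not a library API):&lt;/p&gt;

```python
# Tiny helper (our own, not a library function): wrap any question in a
# chain-of-thought cue so the model is nudged to reason step by step.
def with_cot(question: str) -> str:
    return f"{question.strip()}\nLet's think step by step."

prompt = with_cot(
    "A train leaves at 3 PM traveling 60 km/h. "
    "Another leaves at 4 PM traveling 80 km/h. When does the second catch up?"
)
print(prompt)
```

&lt;p&gt;(For reference, the second train catches up at 7 PM: the 60 km head start closes at 20 km/h, which takes 3 hours.)&lt;/p&gt;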

&lt;h4&gt;
  
  
  (e) Instruction-Tuning vs. Prompting
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Instruction-tuned models (e.g., Mistral-Instruct, Falcon-Instruct, Gemini Flash) follow concise prompts better.&lt;/li&gt;
&lt;li&gt;How you ask still matters; even tuned models can misinterpret sloppy instructions.&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;4. The PROMPT Method — A Practical Framework&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp97ntytex2gwsxyi8win.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp97ntytex2gwsxyi8win.png" alt=" " width="672" height="281"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;P&lt;/strong&gt; — Provide context (who, what, why)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;R&lt;/strong&gt; — Role (persona or expertise level)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;O&lt;/strong&gt; — Output format (bullet list, essay, JSON, etc.)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;M&lt;/strong&gt; — Models/examples (few-shot if needed)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;P&lt;/strong&gt; — Point out constraints (length, style, tone)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;T&lt;/strong&gt; — Test &amp;amp; tweak iteratively&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Sample Business Prompt (using PROMPT):&lt;/em&gt;  &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;You are a marketing strategist. Write a LinkedIn post (100 words max, professional tone) about why small businesses should start using AI-powered chatbots. Include 3 bullet points with key benefits at the end.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That covers every element: context, role, format, and constraints.&lt;/p&gt;
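&lt;p&gt;The PROMPT components can also be captured in a small reusable template. A minimal sketch in plain Python using the standard library (the template text and field names are our own, not a library API):&lt;/p&gt;

```python
# Illustrative sketch of the PROMPT framework as a reusable template.
# All field names here are our own labels, not a library API.
from string import Template

PROMPT_TEMPLATE = Template(
    "$context\n"
    "You are $role.\n"
    "$task\n"
    "Format the output as: $output_format.\n"
    "Constraints: $constraints."
)

linkedin_post_prompt = PROMPT_TEMPLATE.substitute(
    context="A small business wants to promote AI-powered chatbots.",
    role="a marketing strategist",
    task="Write a LinkedIn post about why small businesses should adopt AI chatbots.",
    output_format="a short post followed by 3 bullet points with key benefits",
    constraints="100 words max, professional tone",
)
print(linkedin_post_prompt)
```

&lt;p&gt;Filling the same template with different roles, formats, and constraints is a quick way to “Test &amp;amp; tweak iteratively,” the T in PROMPT.&lt;/p&gt;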




&lt;h3&gt;
  
  
  &lt;strong&gt;5. Use Cases — Who Benefits and How?&lt;/strong&gt;
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Audience&lt;/th&gt;
&lt;th&gt;Use Cases&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Business&lt;/td&gt;
&lt;td&gt;Client proposals, marketing copy, market summaries, brainstorming ideas&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Students&lt;/td&gt;
&lt;td&gt;Summarizing lectures, generating practice questions, debugging code, translating and simplifying concepts&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;6. Common Prompting Mistakes (and Fixes)&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Too vague:&lt;/strong&gt;&lt;br&gt;
“Write something about AI.”&lt;br&gt;
→ Better: “Write a 200-word introduction to AI for high-school students, with three real-world examples.”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;No structure:&lt;/strong&gt;&lt;br&gt;
“Summarize this article.”&lt;br&gt;
→ Better: “Summarize this article in five bullet points, each under 15 words.”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Overloaded prompt:&lt;/strong&gt;&lt;br&gt;
Asking multiple unrelated tasks at once.&lt;br&gt;
→ Better: Break them into separate prompts to stay clear.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Limitations of Prompt Engineering (With Models)
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Issue&lt;/th&gt;
&lt;th&gt;Examples&lt;/th&gt;
&lt;th&gt;What Happens&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Hallucinations&lt;/td&gt;
&lt;td&gt;LLaMA-2-7B, Mistral-7B&lt;/td&gt;
&lt;td&gt;AI confidently states incorrect “facts.”&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Weak reasoning&lt;/td&gt;
&lt;td&gt;DistilGPT-2, GPT4All&lt;/td&gt;
&lt;td&gt;Chain-of-Thought fails; logic falls apart.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Poor instruction follow&lt;/td&gt;
&lt;td&gt;Falcon-7B, LLaMA-2-Base&lt;/td&gt;
&lt;td&gt;Ignores role or tone instructions.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Small context window&lt;/td&gt;
&lt;td&gt;GPT-NeoX-20B, DistilGPT2&lt;/td&gt;
&lt;td&gt;Cannot summarize long documents.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Bias / tone issues&lt;/td&gt;
&lt;td&gt;RedPajama-INCITE, Pythia&lt;/td&gt;
&lt;td&gt;Unfiltered models may produce off-color responses.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Free-tier limitations&lt;/td&gt;
&lt;td&gt;Google AI Studio free tier, Hugging Face Spaces&lt;/td&gt;
&lt;td&gt;Lower rate limits or slow response times during peak usage.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;This is why prompt engineering isn’t optional — it helps you get more out of limited or smaller models.&lt;/p&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;7. Prompt Chaining — Breaking Down Complex Tasks&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Sometimes a single prompt isn’t enough. That’s where Prompt Chaining comes in—breaking a complex request into smaller steps, feeding outputs from one step into the next.&lt;br&gt;&lt;br&gt;
Think of it as building a pipeline: Prompt → Response → Refined Prompt → Final Output.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Example 1 — Business Case&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
  &lt;em&gt;Task:&lt;/em&gt; “Write a business strategy for launching an eco-friendly fashion brand.”&lt;br&gt;&lt;br&gt;
  &lt;strong&gt;Chained Approach:&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prompt 1 → “List 5 challenges eco-fashion startups face.”
&lt;/li&gt;
&lt;li&gt;Prompt 2 → “Suggest 3 strategies to overcome each challenge.”
&lt;/li&gt;
&lt;li&gt;Prompt 3 → “Combine into a polished 500-word strategy report.”
&lt;em&gt;Instead of dumping one big prompt, you steer the model step-by-step, ensuring quality at each stage.&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Example 2 — Student Case&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
 &lt;em&gt;Task:&lt;/em&gt; “Write an essay on climate change.”&lt;br&gt;&lt;br&gt;
 &lt;strong&gt;Chained Approach:&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prompt 1 → “List the top 5 causes of climate change.”
&lt;/li&gt;
&lt;li&gt;Prompt 2 → “Explain each cause in detail.”
&lt;/li&gt;
&lt;li&gt;Prompt 3 → “Summarize into a 1000-word essay with intro and conclusion.”
This modular approach reduces hallucination, improves structure, and gives you checkpoints to verify accuracy.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;When to Use Prompt Chaining&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Long research reports&lt;/li&gt;
&lt;li&gt;Multi-step reasoning (financial forecasts, legal summaries)&lt;/li&gt;
&lt;li&gt;Structured workflows (customer journey mapping, educational guides)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Limitation&lt;/strong&gt;: Models with short context windows (like DistilGPT2, GPT-NeoX-20B) may “forget” earlier steps if the chain grows too long. Larger models (like GPT-4, Gemini 1.5 Pro, Claude 3.5) handle this far better.&lt;/p&gt;




&lt;h4&gt;
  
  
  Simple Example (LangChain Style):
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.llms import OpenAI

llm = OpenAI()

# Step 1: Generate outline
outline_prompt = PromptTemplate.from_template("Give me 3 bullet points on climate change.")
outline_chain = LLMChain(llm=llm, prompt=outline_prompt)

# Step 2: Expand outline into essay
essay_prompt = PromptTemplate.from_template("Expand this into a 300-word essay:\n{outline}")
essay_chain = LLMChain(llm=llm, prompt=essay_prompt)

outline = outline_chain.run({})
essay = essay_chain.run({"outline": outline})
print(essay)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;The first prompt creates an outline; the second expands it into an essay.
This is Prompt Chaining — and it’s especially powerful for business workflows (e.g., first create meeting notes → then turn into action items → then draft emails).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Demo notebook Link with Gemini model:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://shorturl.at/HDljm" rel="noopener noreferrer"&gt;Google Colab Demo Notebook&lt;/a&gt;&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Curious About the Brains Behind Generative AI?&lt;/strong&gt;&lt;br&gt;
Now that you know what Generative AI is,&lt;br&gt;
Are you ready to discover the engines driving this revolution?&lt;br&gt;
In the next chapter, explore:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What are LLMs (Large Language Models)—and how do they really work?&lt;/li&gt;
&lt;li&gt;How can anyone (student, business owner, or dev) try LLMs for free, right now?&lt;/li&gt;
&lt;li&gt;Hands-on tools and instant access to models and platforms!
&lt;/li&gt;
&lt;li&gt;👉 Unlock  &lt;a href="https://dev.to/raunaklallala/article-1-intro-to-gen-ai-llms-and-langchain-frameworkspart-b-1f84"&gt;&lt;strong&gt;How Do Large Language Models (LLMs) Work? Free Tools and Hands-On Tips for Beginners and Business Owners&lt;/strong&gt; &lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Ever Wondered How AI Can Go Beyond Simple Answers?&lt;/strong&gt;&lt;br&gt;
Discover the leap from prompts to real autonomy!&lt;br&gt;
See how “Agentic AI” combines LLMs, memory, and tools to build smart assistants that plan, reason, and act for you.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;👉 Unlock &lt;a href="https://dev.to/raunaklallala/article-1-chapter-d-transition-from-gen-ai-to-agentic-ai-1k68"&gt;&lt;strong&gt;Agentic AI Explained: A Transition from Gen AI to Agentic AI&lt;/strong&gt;&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;




&lt;p&gt;&lt;strong&gt;Got questions or ideas?&lt;/strong&gt; Drop a comment below — I’d love to hear your thoughts.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Let’s connect:&lt;/strong&gt; &lt;a href="https://www.linkedin.com/in/your-profile/" rel="noopener noreferrer"&gt;🔗 My LinkedIn&lt;/a&gt;&lt;/p&gt;




</description>
      <category>genai</category>
      <category>promptengineering</category>
      <category>langchain</category>
      <category>python</category>
    </item>
    <item>
      <title>What is Generative AI? Practical Insights, Real-World Impact, and Why It’s Easier Than Ever to Begin</title>
      <dc:creator>Raunak ALI</dc:creator>
      <pubDate>Wed, 10 Sep 2025 05:36:15 +0000</pubDate>
      <link>https://forem.com/raunaklallala/article-1-intro-to-gen-ai-llms-and-langchain-frameworkspart-a-4o66</link>
      <guid>https://forem.com/raunaklallala/article-1-intro-to-gen-ai-llms-and-langchain-frameworkspart-a-4o66</guid>
      <description>&lt;h2&gt;
  
  
  What is Generative AI? Practical Insights, Real-World Impact, and Why It’s Easier Than Ever to Begin
&lt;/h2&gt;




&lt;h3&gt;
  
  
&lt;strong&gt;1. Setting the Context&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Artificial Intelligence has been around for decades, powering everything from fraud detection in banks to recommendation systems in e-commerce. But until recently, its role was mostly predictive: recognizing patterns and making decisions within fixed boundaries.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Generative AI changes that equation.&lt;/strong&gt; Instead of just classifying or predicting, these systems can produce entirely new outputs: text, code, designs, even audio and video.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For a Fresher: You can now build applications that interact more naturally with users, generate code, or draft content without mastering complex ML pipelines.&lt;/li&gt;
&lt;li&gt;For Businesses: Unlock automation in creative, analytical, and customer-facing processes that previously required human effort.&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
&lt;strong&gt;2. Traditional AI vs Generative AI&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The distinction comes down to &lt;strong&gt;prediction vs creation&lt;/strong&gt;:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Traditional AI&lt;/th&gt;
&lt;th&gt;Generative AI&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Purpose&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Recognizes patterns &amp;amp; returns structured outputs&lt;/td&gt;
&lt;td&gt;Generates novel data (text, code, images, etc.)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Example&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Predicting house prices, image classification&lt;/td&gt;
&lt;td&gt;Writing stories, generating SQL, creating art&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Tech Stack&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Logistic regression, Decision Trees, CNNs&lt;/td&gt;
&lt;td&gt;Transformers (LLMs), Diffusion Models, GANs&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;In short:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Traditional AI answers “What is this?”&lt;/li&gt;
&lt;li&gt;Generative AI answers “What could this be?”&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
&lt;strong&gt;3. Generative AI Basics (As a Cooking Analogy)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Generative AI isn’t one single “magic brain.”&lt;br&gt;
It’s more like a kitchen where many tools, ingredients, and steps come together to cook something new.&lt;br&gt;
Let’s break the kitchen down into its parts:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmrvpesdzghudome4gtsi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmrvpesdzghudome4gtsi.png" alt=" " width="559" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Data Processing Layer → Preparing the Ingredients.&lt;br&gt;&lt;br&gt;
This stage involves data collection, cleaning, transformation, and augmentation so that raw inputs can be effectively used by models.&lt;br&gt;
&lt;em&gt;Before you can cook, you need clean ingredients.&lt;/em&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ingestion: Bringing in raw multimodal data like words, pictures, videos, sounds, or code. (Bringing in the raw stuff.)
&lt;/li&gt;
&lt;li&gt;Tokenization: Converting data into numerical tokens, the discrete units models understand. (Cutting food into bite-sized pieces so the AI can chew.)
&lt;/li&gt;
&lt;li&gt;Normalization: Standardizing formats (e.g., lowercase words, resized images).
&lt;/li&gt;
&lt;li&gt;Augmentation: Expanding datasets with variations, like flipping an image or rephrasing a sentence, so the model generalizes better.
&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Think&lt;/em&gt;: washing, chopping, and seasoning raw food so a recipe can happen.
&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Note&lt;/em&gt;: Without clean, structured inputs, models cannot learn or generate useful outputs.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Model Layer → The Chefs.&lt;br&gt;
The core AI models learn patterns from data and generate new outputs.&lt;br&gt;
This is where the actual “cooking” happens: the models take the ingredients and prepare something new.&lt;br&gt;&lt;br&gt;
Different architectures act like different chefs:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Transformers/LLMs: Handle sequential context, powering chatbots, summarizers, code assistants.(&lt;em&gt;Writers that understand context and can write essays or code.&lt;/em&gt;)
&lt;/li&gt;
&lt;li&gt;GANs: Generator creates outputs, discriminator critiques them → improves realism.
&lt;/li&gt;
&lt;li&gt;VAEs(Variational Autoencoders) : Encode input into latent space, then decode → useful for compression &amp;amp; generation.(&lt;em&gt;A chef who first compresses the recipe and then reconstructs it.&lt;/em&gt;)
&lt;/li&gt;
&lt;li&gt;Diffusion Models: Start with noise (like TV static) and refine step by step into a clear image.
&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Think&lt;/em&gt;: Each of these chefs has their own cooking style.
&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Note&lt;/em&gt; : This is where the actual “generation” magic happens.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Frameworks &amp;amp; Libraries → The Cooking Appliances.&lt;br&gt;
These are developer toolkits and orchestration frameworks to build, train, and deploy models.&lt;br&gt;&lt;br&gt;
You don’t cook over an open fire; instead you have stoves, ovens, and mixers.  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://docs.pytorch.org/docs/stable/index.html" rel="noopener noreferrer"&gt;PyTorch&lt;/a&gt;/&lt;a href="https://www.tensorflow.org/tutorials" rel="noopener noreferrer"&gt;TensorFlow&lt;/a&gt;: Core ML libraries for model training &amp;amp; inference.&lt;em&gt;The main stoves and ovens for cooking AI models.&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://huggingface.co/docs/hub/en/index" rel="noopener noreferrer"&gt;Hugging Face Hub&lt;/a&gt;: Central repository of pretrained models, datasets, and tools.&lt;em&gt;The recipe library tons of ready-to-use AI models.&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://python.langchain.com/docs/introduction/" rel="noopener noreferrer"&gt;LangChain&lt;/a&gt; / &lt;a href="https://langchain-ai.github.io/langgraph/?_gl=1*pkfe5f*_gcl_au*MTg5MTQ3NjQ0NS4xNzU2MjM4OTcx*_ga*MTkyNjE3NDIxLjE3NTU5Mzg2NjU.*_ga_47WX3HKKY2*czE3NTc1MjM1NTQkbzQkZzAkdDE3NTc1MjM1NTQkajYwJGwwJGgw" rel="noopener noreferrer"&gt;LangGraph&lt;/a&gt;: Frameworks to connect LLMs with external APIs, memory, and workflows.&lt;em&gt;The kitchen manager that tells different chefs and tools how to work together.&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Note&lt;/em&gt;: They provide the “infrastructure code” to operationalize models efficiently.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Infrastructure Layer → The Kitchen Setup.&lt;br&gt;
The hardware and systems infrastructure that enables large-scale AI computation.&lt;br&gt;&lt;br&gt;
All those chefs and tools need a functional kitchen to actually work.  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Compute (GPUs/TPUs): High-performance accelerators for training/inference. &lt;em&gt;High-powered burners (fast processors).&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Docker/Kubernetes: Containerization and orchestration for scalable deployments.&lt;em&gt;Meal prep stations to handle many dishes at once.&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Cloud (AWS, GCP, Azure): On-demand resources to train/serve models without local hardware limits.&lt;em&gt;Renting a huge industrial kitchen instead of cooking at home.&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Note&lt;/em&gt;: Without scalable infrastructure, even the best models can’t run at production scale.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Memory &amp;amp; Databases → The Cookbook. &lt;br&gt;
Persistent and semantic memory layers that allow models to store and retrieve context beyond one interaction.&lt;br&gt;&lt;br&gt;
Chefs need memory to keep track of past recipes and conversations.  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Vector databases(e.g., &lt;a href="https://docs.pinecone.io/guides/get-started/overview" rel="noopener noreferrer"&gt;Pinecone&lt;/a&gt;, &lt;a href="https://docs.weaviate.io/weaviate" rel="noopener noreferrer"&gt;Weaviate&lt;/a&gt;, &lt;a href="https://medium.com/@mrcoffeeai/faiss-vector-database-be3a9725172f" rel="noopener noreferrer"&gt;FAISS&lt;/a&gt;): Store embeddings for semantic search.&lt;em&gt;Special books where AI organizes flavors/meanings instead of exact words.&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Note&lt;/em&gt;: Memory allows AI to provide continuity, personalization, and reasoning across sessions. &lt;em&gt;Helps AI "remember" context, like what you ordered last time.&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Model Tuning &amp;amp; Safety → The Head Chef &amp;amp; Inspector.&lt;br&gt;
Processes to adapt models to tasks and ensure safe usage.&lt;br&gt;&lt;br&gt;
Cooking isn’t done without quality control.  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fine-tuning/LoRA (Low-Rank Adaptation): Refine pretrained models for domain-specific tasks with minimal resources. &lt;em&gt;Retraining a chef to specialize, like an Italian pasta expert.&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Safety Layers (Guardrails, RAG filters, toxicity checkers): Prevent harmful, biased, or unsafe outputs. &lt;em&gt;Check that the dish isn’t toxic, biased, or misleading.&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Note&lt;/em&gt;: This ensures the AI is both effective and trustworthy.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Interface Layer → The Waiter.&lt;br&gt;
User-facing and developer-facing interfaces that make AI usable.&lt;br&gt;&lt;br&gt;
You don’t talk to the chef directly; you talk to the waiter (the interface).    &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;APIs/SDKs: Programmatic access to models. &lt;em&gt;Menus that let developers order from the kitchen.&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;UI Tools(chatbots, dashboards, apps): End-user interaction layers. &lt;em&gt;Chatbots or dashboards where normal users can ask for something.&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Note&lt;/em&gt;: Interfaces bridge technical AI systems with human users.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Synthetic Data &amp;amp; Labeling → Practice Ingredients.&lt;br&gt;
Techniques to augment or validate training datasets where real data is limited.&lt;br&gt;&lt;br&gt;
Sometimes there isn’t enough real data to train chefs.  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Synthetic Data: Artificially generated inputs for model training. &lt;em&gt;Fake but useful practice ingredients.&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Human-in-the-loop: Human experts provide oversight, corrections, and labels. &lt;em&gt;A senior chef taste-tests and corrects mistakes.&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Note&lt;/em&gt;: Ensures robust, diverse training while maintaining accuracy.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Put simply:&lt;br&gt;
Generative AI = A kitchen where data is prepped → models (chefs) cook → infra (kitchen) supports → safety ensures no poison → waiter serves the meal to you.&lt;/strong&gt;&lt;/p&gt;
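&lt;p&gt;To make the tokenization step from the Data Processing Layer concrete, here is a toy sketch in Python. Real tokenizers use subword schemes such as BPE or WordPiece; this only illustrates the text → tokens → IDs idea:&lt;/p&gt;

```python
# Toy tokenizer: split text into word tokens, then map each token to an ID.
# Real systems use subword schemes (BPE, WordPiece); this only shows the idea.
def tokenize(text: str) -> list[str]:
    return text.lower().split()

def build_vocab(tokens: list[str]) -> dict[str, int]:
    # Assign IDs in order of first appearance.
    vocab: dict[str, int] = {}
    for token in tokens:
        vocab.setdefault(token, len(vocab))
    return vocab

tokens = tokenize("The chef chops the ingredients")
vocab = build_vocab(tokens)
ids = [vocab[t] for t in tokens]
print(tokens)  # ['the', 'chef', 'chops', 'the', 'ingredients']
print(ids)     # [0, 1, 2, 0, 3]
```

&lt;p&gt;Those integer IDs are what the model layer actually consumes: the “bite-sized pieces” from the analogy.&lt;/p&gt;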

&lt;h3&gt;
  
  
  4. Why Now? Why It’s Practical
&lt;/h3&gt;

&lt;p&gt;Ten years ago, building such models required specialized ML skills and costly infrastructure. Now:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Open-source LLMs&lt;/strong&gt; (Falcon, Mistral, LLaMA, GPT4All) are readily available&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pretrained pipelines&lt;/strong&gt;: Use a model in Python with just a few lines of code&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Frameworks&lt;/strong&gt; like LangChain &amp;amp; LangGraph handle orchestration and visualization&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cloud compute &amp;amp; community tools&lt;/strong&gt;: Prototype in hours, not months&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;For freshers: This is the best time to start&lt;/em&gt;&lt;br&gt;
&lt;em&gt;For businesses: Rapid innovation, low upfront costs&lt;/em&gt;&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Curious About the Brains Behind Generative AI?&lt;/strong&gt;&lt;br&gt;
Now that you know what Generative AI is,&lt;br&gt;
Are you ready to discover the engines driving this revolution?&lt;br&gt;
In the next chapter, explore:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What are LLMs (Large Language Models)—and how do they really work?&lt;/li&gt;
&lt;li&gt;How can anyone (student, business owner, or dev) try LLMs for free, right now?&lt;/li&gt;
&lt;li&gt;Hands-on tools and instant access to models and platforms!
&lt;/li&gt;
&lt;li&gt;👉 Unlock  &lt;a href="https://dev.to/raunaklallala/article-1-intro-to-gen-ai-llms-and-langchain-frameworkspart-b-1f84"&gt;&lt;strong&gt;How Do Large Language Models (LLMs) Work? Free Tools and Hands-On Tips for Beginners and Business Owners&lt;/strong&gt; &lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;




&lt;p&gt;&lt;strong&gt;Got questions or ideas?&lt;/strong&gt; Drop a comment below — I’d love to hear your thoughts.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Let’s connect:&lt;/strong&gt; &lt;a href="https://www.linkedin.com/in/your-profile/" rel="noopener noreferrer"&gt;🔗 My LinkedIn&lt;/a&gt;&lt;/p&gt;




</description>
      <category>genai</category>
      <category>langchain</category>
      <category>webdev</category>
      <category>programming</category>
    </item>
    <item>
      <title>How Do Large Language Models (LLMs) Work? Free Tools and Hands-On Tips for Beginners and Business Owners</title>
      <dc:creator>Raunak ALI</dc:creator>
      <pubDate>Sun, 07 Sep 2025 00:01:50 +0000</pubDate>
      <link>https://forem.com/raunaklallala/article-1-intro-to-gen-ai-llms-and-langchain-frameworkspart-b-1f84</link>
      <guid>https://forem.com/raunaklallala/article-1-intro-to-gen-ai-llms-and-langchain-frameworkspart-b-1f84</guid>
      <description>&lt;h2&gt;
  
  
  How Do Large Language Models (LLMs) Work? Free Tools and Hands-On Tips for Beginners and Business Owners
&lt;/h2&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;1. What Are LLMs (Large Language Models)?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Imagine a system that doesn’t just store information like a database, but can converse, summarize, translate, write code, and even reason through problems. That’s what an LLM (Large Language Model) does.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To a &lt;strong&gt;business owner&lt;/strong&gt;: You can think of them as engines that can draft reports, analyze long documents, summarize meetings, or even generate marketing content at scale, cutting both cost and time.&lt;/li&gt;
&lt;li&gt;To a &lt;strong&gt;student or fresher&lt;/strong&gt;: It’s helpful to imagine them as a much smarter autocomplete. They’ve been trained on massive datasets, so they can predict the “next word” in a way that feels surprisingly natural, whether you’re writing code, a paragraph, or even a story.&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;2. How Do LLMs Work (Architecture Basics)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxbgsdtegd10k98yekor1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxbgsdtegd10k98yekor1.png" alt=" " width="672" height="348"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At the core of most modern LLMs is the &lt;strong&gt;Transformer architecture&lt;/strong&gt; (Vaswani et al., 2017). &lt;br&gt;
Unlike older models that processed text one word at a time, transformers look at whole sequences in parallel and figure out which words matter most to each other. Here are the essentials: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Embeddings&lt;/strong&gt; – Words (or tokens) are turned into numerical vectors that capture meaning.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Positional Encoding&lt;/strong&gt; – Adds information about word order (since transformers don’t read sequentially by default).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Self-Attention&lt;/strong&gt; – Each word decides which other words in the sentence it should &lt;em&gt;pay attention&lt;/em&gt; to.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-Head Attention&lt;/strong&gt; – Multiple attention mechanisms run in parallel, capturing different patterns (syntax, context, semantics).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Feed-Forward Layers + Residuals&lt;/strong&gt; – Nonlinear layers stacked deep, with shortcut connections to keep training stable.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Output Layer&lt;/strong&gt; – Predicts the most likely next token, repeating the process to generate full sentences.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That’s the backbone: a stack of transformer blocks working together, with &lt;strong&gt;more layers = more power&lt;/strong&gt;.&lt;/p&gt;
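&lt;p&gt;The self-attention step above can be sketched in a few lines of pure Python. This is a toy illustration of scaled dot-product attention; production models use batched matrix multiplications on GPUs, plus learned query/key/value projection matrices that we omit here:&lt;/p&gt;

```python
# Minimal scaled dot-product attention for a tiny sequence, in pure Python.
# Each row of Q/K/V is one token's vector; this is the core of self-attention:
#   Attention(Q, K, V) = softmax(Q K^T / sqrt(d)) V
import math

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    d = len(K[0])
    out = []
    for q in Q:
        # Similarity of this token's query with every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)  # how much attention to pay to each token
        # Weighted mix of the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, V)) for j in range(len(V[0]))])
    return out

# Three "tokens", each a 2-d vector (toy numbers, not trained embeddings).
Q = K = V = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
result = attention(Q, K, V)
print(result)
```

&lt;p&gt;Each output row is a weighted blend of all the value vectors, which is exactly how a token “pays attention” to the rest of the sentence.&lt;/p&gt;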

&lt;p&gt;&lt;em&gt;Want to dig deeper? Microsoft has a great &lt;a href="https://learn.microsoft.com/en-us/training/modules/introduction-large-language-models/" rel="noopener noreferrer"&gt;course&lt;/a&gt; on the same.&lt;/em&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;3. Types of LLMs&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Decoder-only (GPT-style)&lt;/strong&gt; → Text generation, chat, coding.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Encoder-only (BERT-style)&lt;/strong&gt; → Text classification, embeddings, search.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Encoder-Decoder (T5/FLAN-style)&lt;/strong&gt; → Translation, summarization, Q&amp;amp;A.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Instruction-tuned models&lt;/strong&gt; → Optimized for following natural-language instructions (e.g., Mistral-Instruct, Falcon-Instruct, Gemini).&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;4. Accessing Open-Source LLMs on Hugging Face&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Hugging Face hosts 100,000+ models. Some are fully open, others are &lt;strong&gt;gated&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To use &lt;strong&gt;gated models&lt;/strong&gt; like Mistral or LLaMA:&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Visit the model’s page (e.g., &lt;a href="https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1" rel="noopener noreferrer"&gt;Mistral-7B-Instruct&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;“Access repository”&lt;/strong&gt; and accept the license.&lt;/li&gt;
&lt;li&gt;Generate a &lt;strong&gt;Read token&lt;/strong&gt; here → &lt;a href="https://huggingface.co/settings/tokens" rel="noopener noreferrer"&gt;HF Tokens&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Authenticate in notebook:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;   &lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;huggingface_hub&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;login&lt;/span&gt;
   &lt;span class="nf"&gt;login&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;YOUR_HF_TOKEN&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
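&lt;p&gt;&lt;em&gt;Once authenticated, loading a gated model with the transformers library looks roughly like this. The HF_TOKEN environment variable name is my own convention (any name works), and the weights are several GB, so the sketch only attempts the download when a token is actually set:&lt;/em&gt;&lt;/p&gt;

```python
import os

# Gated model from the example above; you must accept its license on the model page first.
MODEL_ID = "mistralai/Mistral-7B-Instruct-v0.1"

token = os.environ.get("HF_TOKEN")  # assumed convention for storing your Read token
if token:
    from huggingface_hub import login
    from transformers import AutoModelForCausalLM, AutoTokenizer

    login(token)
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
else:
    print("Set HF_TOKEN to download the gated weights.")
```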






&lt;h3&gt;
  
  
  &lt;strong&gt;5. Running a Free LLM (Google AI Studio)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Instead of downloading heavy Hugging Face models, you can start quickly with &lt;strong&gt;Google AI Studio&lt;/strong&gt; → free API keys and fast responses.&lt;/p&gt;

&lt;p&gt;👉 Try it here: &lt;a href="https://aistudio.google.com/" rel="noopener noreferrer"&gt;Google AI Studio&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 1: Get API Key&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Go to &lt;a href="https://aistudio.google.com/app/apikey" rel="noopener noreferrer"&gt;Google AI Studio Keys&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Generate a free API key.&lt;/li&gt;
&lt;li&gt;Copy it.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 2: Use in Notebook&lt;/strong&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="err"&gt;!&lt;/span&gt;&lt;span class="n"&gt;pip&lt;/span&gt; &lt;span class="n"&gt;install&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;U&lt;/span&gt; &lt;span class="n"&gt;google&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;genai&lt;/span&gt;

&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;google&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;genai&lt;/span&gt;

&lt;span class="c1"&gt;# The client gets the API key from the environment variable `GEMINI_API_KEY`.
&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;genai&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your_api_key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;models&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;generate_content&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gemini-2.5-flash&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;contents&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Explain All about LLMS&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
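&lt;p&gt;&lt;em&gt;Hard-coding the key works for a quick test, but a safer notebook pattern is to read it from the GEMINI_API_KEY environment variable, which the google-genai client picks up automatically when no api_key argument is passed. A small sketch (the helper name is mine, not part of the SDK):&lt;/em&gt;&lt;/p&gt;

```python
import os

def make_gemini_client():
    """Return a genai.Client if GEMINI_API_KEY is set, else None."""
    if not os.environ.get("GEMINI_API_KEY"):
        return None
    from google import genai
    return genai.Client()  # picks up GEMINI_API_KEY automatically

client = make_gemini_client()
if client is None:
    print("Set GEMINI_API_KEY to run this example.")
else:
    response = client.models.generate_content(
        model="gemini-2.5-flash", contents="Summarize what an LLM is in one line."
    )
    print(response.text)
```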



&lt;p&gt;👉 Example Notebooks: &lt;br&gt;
1) &lt;a href="https://colab.research.google.com/drive/1rOm-FZ1WoS60U8vJ97-DZGa_6z4-RzKd?usp=sharing" rel="noopener noreferrer"&gt;Using Hugging Face free Models -&amp;gt; Colab Quickstart&lt;/a&gt;&lt;br&gt;
1) &lt;a href="https://colab.research.google.com/drive/1Sv1aD2MCNU06IiVoFWzcupFbp0EIN8-n?usp=sharing" rel="noopener noreferrer"&gt;Using Google AI Model -&amp;gt; Colab Quickstart&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;6. Free LLM Resources Table&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Free &amp;amp; Fun LLM Access for Students&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Platform&lt;/th&gt;
&lt;th&gt;Official Page&lt;/th&gt;
&lt;th&gt;Tutorial/Setup Guide&lt;/th&gt;
&lt;th&gt;Quick Notes&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Hugging Face&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://huggingface.co/models" rel="noopener noreferrer"&gt;Hugging Face Models&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://www.freecodecamp.org/news/get-started-with-hugging-face/" rel="noopener noreferrer"&gt;FreeCodeCamp: How To Start&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Use online demos, Spaces, no install needed. Colab works too!&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;ChatGPT (OpenAI, web)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://chat.openai.com" rel="noopener noreferrer"&gt;ChatGPT&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://www.whytryai.com/p/best-free-llms" rel="noopener noreferrer"&gt;WhyTryAI Guide&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Just sign up and use it; no local resources required.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Google Gemini AI Studio&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://ai.google.dev/gemini-api/" rel="noopener noreferrer"&gt;Gemini Studio&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://ai.google.dev/gemini-api/docs/quickstart" rel="noopener noreferrer"&gt;Gemini API Quickstart&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Run directly in the browser or with minimal code; free quota!&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Meta AI (Llama 3, web demo)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://www.meta.ai" rel="noopener noreferrer"&gt;Meta.ai&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://www.whytryai.com/p/best-free-llms" rel="noopener noreferrer"&gt;WhyTryAI Guide&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Llama 3 demo free in supported regions.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Free LLM Tools for Business Owners&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Platform&lt;/th&gt;
&lt;th&gt;Official Page&lt;/th&gt;
&lt;th&gt;Setup &amp;amp; Docs&lt;/th&gt;
&lt;th&gt;Quick Notes&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Google Gemini API&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://ai.google.dev/gemini-api/" rel="noopener noreferrer"&gt;Gemini API Main&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;
&lt;a href="https://ai.google.dev/gemini-api/docs/quickstart" rel="noopener noreferrer"&gt;Gemini Quickstart&lt;/a&gt; &lt;br&gt; &lt;a href="https://ai.google.dev/gemini-api/docs/ai-studio-quickstart" rel="noopener noreferrer"&gt;AI Studio Guide&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;Generous free tier, ready for business use.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Vercel AI Gateway&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://vercel.com/ai/gateway" rel="noopener noreferrer"&gt;AI Gateway&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;
&lt;a href="https://vercel.com/docs/ai-gateway/getting-started" rel="noopener noreferrer"&gt;Getting Started&lt;/a&gt; &lt;br&gt; &lt;a href="https://vercel.com/docs/ai-gateway/authentication" rel="noopener noreferrer"&gt;API Authentication&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;One-stop API hub for many models.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Groq API&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://console.groq.com" rel="noopener noreferrer"&gt;Groq Console&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;
&lt;a href="https://github.com/groq/groq-python" rel="noopener noreferrer"&gt;Groq Python SDK&lt;/a&gt; &lt;br&gt; &lt;a href="https://console.groq.com/docs/libraries" rel="noopener noreferrer"&gt;Client Libraries&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;Lightning-fast, monthly free tokens.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Hugging Face (commercial ok)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://huggingface.co/models" rel="noopener noreferrer"&gt;Hugging Face Models&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;
&lt;a href="https://www.freecodecamp.org/news/get-started-with-hugging-face/" rel="noopener noreferrer"&gt;FreeCodeCamp Setup&lt;/a&gt; &lt;br&gt; &lt;a href="https://github.com/eugeneyan/open-llms" rel="noopener noreferrer"&gt;Commercial Model List&lt;/a&gt;
&lt;/td&gt;
&lt;td&gt;Many models with permissive licenses.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Hands-On &amp;amp; Learning (For All)&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Resource&lt;/th&gt;
&lt;th&gt;Main Page&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Free LLM &amp;amp; Gen AI Courses&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;a href="https://www.evidentlyai.com/blog/llm-genai-courses" rel="noopener noreferrer"&gt;Evidently AI LLM Courses&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Curated list for free learning.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;p&gt;&lt;em&gt;Just pick a platform, follow the quickstart, and you can chat or code with an LLM in minutes!&lt;/em&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;7. Limitations of Free LLMs&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Rate limits&lt;/strong&gt; → Free APIs (Google AI, Hugging Face) restrict daily usage.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Model size&lt;/strong&gt; → Smaller free/open models may give weaker answers vs GPT-4/Gemini Pro.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Latency&lt;/strong&gt; → Free cloud GPUs can be slow (Colab queues, Hugging Face load times).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Privacy&lt;/strong&gt; → Using free APIs means your inputs may be logged. For sensitive use cases, local/offline models are safer.&lt;/li&gt;
&lt;/ul&gt;
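&lt;p&gt;&lt;em&gt;Rate limits are the one you hit first in practice. A common workaround is retrying with exponential backoff; here is a minimal, library-free sketch (real SDKs often ship their own retry options, so treat this as illustrative):&lt;/em&gt;&lt;/p&gt;

```python
import random
import time

def with_backoff(fn, max_tries=5, base=1.0):
    """Call fn(), retrying with exponential backoff whenever it raises."""
    for attempt in range(max_tries):
        try:
            return fn()
        except Exception:
            if attempt == max_tries - 1:
                raise  # out of retries, surface the error
            # Wait base*1, base*2, base*4, ... seconds, plus a little jitter.
            time.sleep(base * (2 ** attempt) + random.random() * 0.1)

# Demo: a fake API call that fails twice with a rate-limit error, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] >= 3:
        return "ok"
    raise RuntimeError("429 Too Many Requests")

print(with_backoff(flaky, base=0.01))  # ok
```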




&lt;p&gt;&lt;strong&gt;Want Generative AI Explained Simply?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Start here! This article makes it easy for beginners, using clear analogies, simple language, and a step-by-step look at all the main components.&lt;/p&gt;

&lt;p&gt;👉 Read &lt;a href="https://dev.to/raunaklallala/article-1-intro-to-gen-ai-llms-and-langchain-frameworkspart-a-4o66"&gt;&lt;strong&gt;What is Generative AI? Practical Insights, Real-World Impact, and Why It’s Easier Than Ever to Begin&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that you know what LLMs are, how they work, and how to get free access, the next step is learning how to &lt;strong&gt;talk to them effectively&lt;/strong&gt; — that’s where &lt;strong&gt;Prompt Engineering&lt;/strong&gt; comes in.&lt;/p&gt;

&lt;p&gt;👉 Read &lt;a href="https://dev.to/raunaklallala/article-1-intro-to-gen-aillms-and-langchain-frameworkspart-c-48ij"&gt;&lt;strong&gt;Prompt Engineering Made Simple: Real-World Techniques, Mistakes to Avoid, and Hands-On for Everyone&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Got questions or ideas?&lt;/strong&gt; Drop a comment below — I’d love to hear your thoughts.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Let’s connect:&lt;/strong&gt; &lt;a href="https://www.linkedin.com/in/your-profile/" rel="noopener noreferrer"&gt;🔗 My LinkedIn&lt;/a&gt;&lt;/p&gt;




</description>
      <category>genai</category>
      <category>llm</category>
      <category>langchain</category>
      <category>python</category>
    </item>
  </channel>
</rss>
