<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Sowjanya Sankara</title>
    <description>The latest articles on Forem by Sowjanya Sankara (@_sowjanyasankara_).</description>
    <link>https://forem.com/_sowjanyasankara_</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3877990%2F2430f157-9060-45b9-8146-027356a77d64.png</url>
      <title>Forem: Sowjanya Sankara</title>
      <link>https://forem.com/_sowjanyasankara_</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/_sowjanyasankara_"/>
    <language>en</language>
    <item>
      <title>Guardrails in AI: Keeping LLMs Safe</title>
      <dc:creator>Sowjanya Sankara</dc:creator>
      <pubDate>Mon, 27 Apr 2026 17:11:09 +0000</pubDate>
      <link>https://forem.com/_sowjanyasankara_/guardrails-in-ai-keeping-llms-safe-37p5</link>
      <guid>https://forem.com/_sowjanyasankara_/guardrails-in-ai-keeping-llms-safe-37p5</guid>
      <description>&lt;p&gt;🤔 Imagine asking an AI agent to generate a database query…&lt;br&gt;
and it returns something wrong — or worse, unsafe.&lt;/p&gt;

&lt;p&gt;The problem isn’t just intelligence.&lt;br&gt;
It’s control.&lt;/p&gt;

&lt;p&gt;That’s where &lt;strong&gt;&lt;em&gt;guardrails&lt;/em&gt;&lt;/strong&gt; come in.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;⚡ What are Guardrails in AI?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Guardrails are checks and controls added around an AI system to ensure it behaves correctly, safely, and reliably.&lt;/p&gt;

&lt;p&gt;They don’t make the model smarter.&lt;br&gt;
They make the system trustworthy.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Guardrails don’t change what the model knows — they control how it behaves.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Think of guardrails as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Filters before the model runs&lt;/li&gt;
&lt;li&gt;Validators after the model responds&lt;/li&gt;
&lt;li&gt;Rules that guide system behavior&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;strong&gt;⚙️ Where Do Guardrails Fit?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AI systems are not just:&lt;br&gt;
User → Model → Response ❌&lt;/p&gt;

&lt;p&gt;They actually work like this:&lt;br&gt;
User → Input Guardrails → Model → Output Guardrails → Final Response ✅&lt;/p&gt;

&lt;p&gt;Before the model → validate input&lt;br&gt;
After the model → validate output&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Guardrails sit outside the model, not inside it.&lt;/p&gt;
&lt;/blockquote&gt;
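
&lt;p&gt;The flow above can be sketched as plain function composition, with the guardrails wrapping the model call from the outside. A minimal Python sketch (the &lt;code&gt;call_model&lt;/code&gt; stub stands in for a real LLM call; all names here are illustrative):&lt;/p&gt;

```python
# Guardrails sit outside the model: validate input, call the model, validate output.
def input_guardrail(user_input: str) -> str:
    if not user_input.strip():
        raise ValueError("empty input rejected")
    return user_input.strip()

def call_model(prompt: str) -> str:
    # Placeholder for a real LLM call.
    return f"answer to: {prompt}"

def output_guardrail(output: str) -> str:
    if not output:
        raise ValueError("empty output rejected")
    return output

def respond(user_input: str) -> str:
    # User -> Input Guardrails -> Model -> Output Guardrails -> Final Response
    return output_guardrail(call_model(input_guardrail(user_input)))
```

&lt;p&gt;The point of the structure is that swapping the model does not change the guardrails, and tightening the guardrails does not touch the model.&lt;/p&gt;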




&lt;p&gt;&lt;strong&gt;🧩 Types of Guardrails&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;🔹 Input Guardrails&lt;br&gt;
Ensure the user input is safe and valid.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Block harmful or malicious prompts&lt;/li&gt;
&lt;li&gt;Prevent prompt injection attempts&lt;/li&gt;
&lt;li&gt;Validate structure of input&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;👉 Example:&lt;br&gt;
User tries to override system instructions → blocked&lt;/p&gt;
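
&lt;p&gt;A minimal sketch of that input check in Python (the regex patterns are illustrative assumptions, not a complete injection filter):&lt;/p&gt;

```python
import re

# Hypothetical patterns that suggest an attempt to override system instructions.
OVERRIDE_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"you are now", re.I),
    re.compile(r"reveal .*system prompt", re.I),
]

def check_input(user_input: str) -> bool:
    """Input guardrail: return True if the input passes, False if blocked."""
    return not any(p.search(user_input) for p in OVERRIDE_PATTERNS)
```

&lt;p&gt;Real systems layer this with classifiers and allow-lists; the pattern list here only shows where the check lives in the pipeline.&lt;/p&gt;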

&lt;p&gt;🔹 Output Guardrails&lt;br&gt;
Ensure the model output is usable and correct.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Validate format (JSON, query, etc.)&lt;/li&gt;
&lt;li&gt;Filter unsafe or irrelevant content&lt;/li&gt;
&lt;li&gt;Check for missing or incorrect fields&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;👉 Example:&lt;br&gt;
LLM generates an invalid MongoDB query → rejected or retried&lt;/p&gt;
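
&lt;p&gt;One way to sketch the "rejected or retried" behaviour, assuming the model is asked to return a MongoDB-style filter as JSON (&lt;code&gt;generate_query&lt;/code&gt; is a stand-in for the real LLM call, with the first attempt deliberately malformed):&lt;/p&gt;

```python
import json

def generate_query(request: str, attempt: int) -> str:
    # Stand-in for an LLM call; pretend the first attempt is malformed.
    if attempt == 0:
        return "{'hours': 24"  # invalid JSON
    return '{"transfer": true, "layover_hours": {"$lt": 24}}'

def validated_query(request: str, max_retries: int = 2) -> dict:
    """Output guardrail: parse the generated query, retrying if it is invalid."""
    for attempt in range(max_retries + 1):
        raw = generate_query(request, attempt)
        try:
            query = json.loads(raw)
        except json.JSONDecodeError:
            continue  # reject this attempt and retry
        if isinstance(query, dict):
            return query
    raise ValueError("no valid query after retries")
```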

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ns1key0kc9nznawscy1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ns1key0kc9nznawscy1.png" alt="guardrails" width="800" height="184"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;🔄 Guardrails in AI Agents&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In agent systems, guardrails are applied at multiple steps:&lt;/p&gt;

&lt;p&gt;Before understanding the query&lt;br&gt;
Before calling a tool&lt;br&gt;
After generating a response&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Guardrails are not a single step — they are layered across the system.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;&lt;strong&gt;⚠️ Why Guardrails Matter&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Without guardrails:&lt;/p&gt;

&lt;p&gt;Models can hallucinate&lt;br&gt;
Outputs can be incorrect&lt;br&gt;
Systems can behave unpredictably&lt;/p&gt;

&lt;p&gt;With guardrails:&lt;/p&gt;

&lt;p&gt;Responses become reliable&lt;br&gt;
Systems become safer&lt;br&gt;
Results become consistent&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡An AI system without guardrails is not ready for real-world use.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;&lt;strong&gt;🔍 Real-World Example&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;User asks:&lt;br&gt;
“Find transfer passengers under 24 hours”&lt;/p&gt;

&lt;p&gt;🔹 Before the model (Input Guardrails)&lt;br&gt;
Check if the request is valid&lt;br&gt;
Ensure required conditions are present (like time constraint)&lt;br&gt;
Prevent unsafe or irrelevant instructions&lt;/p&gt;

&lt;p&gt;👉 Input is cleaned and structured before reaching the model&lt;/p&gt;

&lt;p&gt;🔹 After the model (Output Guardrails)&lt;br&gt;
Validate the generated query format&lt;br&gt;
Ensure required filters (like “under 24 hours”) are applied&lt;br&gt;
Check logic before execution&lt;/p&gt;

&lt;p&gt;👉 Output is verified before being used&lt;/p&gt;
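
&lt;p&gt;The "ensure required filters are applied" step can be sketched as a small validator. Assuming the generated query arrives as a MongoDB-style filter dict (the field and operator names are illustrative):&lt;/p&gt;

```python
def has_time_constraint(query: dict, field: str = "layover_hours",
                        limit: int = 24) -> bool:
    """Output guardrail: verify the required 'under 24 hours' filter is present."""
    constraint = query.get(field)
    return isinstance(constraint, dict) and constraint.get("$lt") == limit
```

&lt;p&gt;If the check fails, the system rejects the query or sends it back for regeneration instead of executing it.&lt;/p&gt;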




&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Building AI isn’t just about generating outputs.&lt;br&gt;
It’s about making sure those outputs are correct, safe, and usable.&lt;/p&gt;

&lt;p&gt;That’s what &lt;em&gt;guardrails&lt;/em&gt; enable.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>security</category>
      <category>backend</category>
      <category>generativeai</category>
    </item>
    <item>
      <title>How AI Agents Use Short-Term and Long-Term Memory</title>
      <dc:creator>Sowjanya Sankara</dc:creator>
      <pubDate>Tue, 14 Apr 2026 07:22:18 +0000</pubDate>
      <link>https://forem.com/_sowjanyasankara_/how-ai-agents-use-short-term-and-long-term-memory-stm-vs-ltm-439</link>
      <guid>https://forem.com/_sowjanyasankara_/how-ai-agents-use-short-term-and-long-term-memory-stm-vs-ltm-439</guid>
      <description>&lt;p&gt;&lt;strong&gt;Have you ever wondered why you forget a phone number in seconds but remember your childhood memories forever? 🤔&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That’s not random — it’s how our brain is designed.&lt;br&gt;
We rely on two powerful memory systems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Short-Term Memory (STM) handles what’s happening right now&lt;/li&gt;
&lt;li&gt;Long-Term Memory (LTM) stores what matters over time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Interestingly, modern AI agents work in a very similar way.&lt;/p&gt;

&lt;p&gt;In this blog, we’ll explore how AI agents use STM and LTM—and how they orchestrate both to make intelligent decisions.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;What is Short-Term Memory (STM) in AI Agents?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Short-Term Memory in AI agents is the temporary memory used during an ongoing conversation or task.&lt;/p&gt;

&lt;p&gt;Think of STM as:&lt;br&gt;
🧠 What the agent is currently thinking about.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Current user question&lt;/li&gt;
&lt;li&gt;Conversation history &lt;/li&gt;
&lt;li&gt;Temporary variables during execution&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In everyday terms:&lt;br&gt;
When a chatbot responds to you, it remembers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What you just asked&lt;/li&gt;
&lt;li&gt;What it replied back&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But this memory is not permanent. Once the session ends, boom! Its memory is gone (like Ghajini 🫠).&lt;/p&gt;
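
&lt;p&gt;A toy STM can be sketched as a session-scoped buffer that keeps only the most recent turns. This is a stand-in for a real context window, not any specific framework's API:&lt;/p&gt;

```python
# Short-term memory: a per-session conversation buffer with a bounded size.
class ShortTermMemory:
    def __init__(self, max_turns: int = 10):
        self.turns: list[tuple[str, str]] = []
        self.max_turns = max_turns

    def add(self, user_msg: str, agent_msg: str) -> None:
        self.turns.append((user_msg, agent_msg))
        # Keep only the most recent turns, like a context window.
        self.turns = self.turns[-self.max_turns:]

    def context(self) -> str:
        return "\n".join(f"User: {u}\nAgent: {a}" for u, a in self.turns)
```

&lt;p&gt;When the session object is discarded, the buffer goes with it, which is exactly the "boom, it's gone" behaviour described above.&lt;/p&gt;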




&lt;p&gt;&lt;strong&gt;What is Long-Term Memory (LTM) in AI Agents?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Long-Term Memory stores information that persists beyond a single interaction.&lt;/p&gt;

&lt;p&gt;Think of LTM as:&lt;br&gt;
🫀 What the agent has learned over time.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Stored Documents (vector databases)&lt;/li&gt;
&lt;li&gt;Knowledge bases&lt;/li&gt;
&lt;li&gt;Past interactions when saved&lt;/li&gt;
&lt;li&gt;RAG systems&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In everyday terms:&lt;br&gt;
When a chatbot answers based on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Company documents&lt;/li&gt;
&lt;li&gt;Previously stored knowledge&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;…it is using Long-Term Memory.&lt;/p&gt;
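
&lt;p&gt;The retrieval idea behind vector databases can be sketched with a toy in-memory store. Here a bag-of-words count stands in for a learned embedding; real systems use embedding models and a proper vector database:&lt;/p&gt;

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": word counts (real systems use learned dense vectors).
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class LongTermMemory:
    def __init__(self):
        self.docs: list[str] = []

    def store(self, doc: str) -> None:
        self.docs.append(doc)

    def retrieve(self, query: str) -> str:
        # Return the stored document most similar to the query.
        return max(self.docs, key=lambda d: cosine(embed(query), embed(d)))
```

&lt;p&gt;Store once, retrieve across any number of future sessions: that persistence is what makes it long-term.&lt;/p&gt;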




&lt;p&gt;&lt;strong&gt;⚙️ How Agents Orchestrate STM and LTM&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is where things get interesting...&lt;/p&gt;

&lt;p&gt;AI agents don’t just use memory—they coordinate (orchestrate) between STM and LTM.&lt;/p&gt;

&lt;p&gt;Let's take a real-world example.&lt;/p&gt;

&lt;p&gt;👉 User asks:&lt;br&gt;
“Find infant passengers at DEL within 24 hours.”&lt;/p&gt;

&lt;p&gt;What happens:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;STM:

&lt;ul&gt;
&lt;li&gt;Understands the current request&lt;/li&gt;
&lt;li&gt;Keeps the conversation context&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;LTM:

&lt;ul&gt;
&lt;li&gt;Provides stored logic and rules&lt;/li&gt;
&lt;li&gt;Knows how to identify infant passengers based on stored memory&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Orchestrator:

&lt;ul&gt;
&lt;li&gt;Picks the right data&lt;/li&gt;
&lt;li&gt;Applies the logic&lt;/li&gt;
&lt;li&gt;Builds and runs the query&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;💡 In one line:&lt;br&gt;
STM = current thinking, LTM = stored knowledge, orchestration = connecting both&lt;/p&gt;
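
&lt;p&gt;Putting the three roles together, a toy orchestrator might look like this (the topic-keyed rule lookup is a deliberately simplified stand-in for real retrieval; all names are illustrative):&lt;/p&gt;

```python
# Orchestration sketch: the agent combines STM context with LTM knowledge.
class Orchestrator:
    def __init__(self, stm: list[str], ltm: dict[str, str]):
        self.stm = stm  # recent requests (current thinking)
        self.ltm = ltm  # stored rules keyed by topic (stored knowledge)

    def handle(self, query: str) -> str:
        self.stm.append(query)  # remember the current request
        # Retrieve the stored rules relevant to this query.
        rules = [rule for topic, rule in self.ltm.items()
                 if topic in query.lower()]
        # Combine current context with retrieved long-term knowledge.
        return f"query={query!r}; applied_rules={rules}"

ltm = {"infant": "age under 2 years", "transfer": "connecting itinerary"}
agent = Orchestrator(stm=[], ltm=ltm)
```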

&lt;p&gt;&lt;strong&gt;🚀 Final Thoughts&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AI agents are becoming more powerful not just because of better models—but because of better memory systems.&lt;/p&gt;

&lt;p&gt;Understanding how STM and LTM work together helps us:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build smarter systems&lt;/li&gt;
&lt;li&gt;Design better orchestrators&lt;/li&gt;
&lt;li&gt;Improve user experience&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>beginners</category>
      <category>machinelearning</category>
    </item>
  </channel>
</rss>
