<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Darren</title>
    <description>The latest articles on Forem by Darren (@realmrmemory).</description>
    <link>https://forem.com/realmrmemory</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3861738%2F11601367-248c-444c-b6b3-5fb5ba455f3b.png</url>
      <title>Forem: Darren</title>
      <link>https://forem.com/realmrmemory</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/realmrmemory"/>
    <language>en</language>
    <item>
      <title>Why Your AI Agent Needs a Memory That Sticks</title>
      <dc:creator>Darren</dc:creator>
      <pubDate>Sat, 11 Apr 2026 04:22:03 +0000</pubDate>
      <link>https://forem.com/realmrmemory/why-your-ai-agent-needs-a-memory-that-sticks-1o6i</link>
      <guid>https://forem.com/realmrmemory/why-your-ai-agent-needs-a-memory-that-sticks-1o6i</guid>
      <description>&lt;h3&gt;
  
  
  The Amnesia Problem
&lt;/h3&gt;

&lt;p&gt;Your AI agent has no memory. Every session starts from scratch, forgetting context, user preferences, and learned facts — it's like trying to solve a puzzle blindfolded every time you restart.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is AI Agent Memory?
&lt;/h3&gt;

&lt;p&gt;AI agent memory stores, retrieves, and reasons over information across interactions, sessions, and tasks. This transforms how agents interact with users, making them more personalized, effective, and efficient.&lt;/p&gt;
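&lt;p&gt;At its simplest, this store-and-query loop can be sketched in a few lines of plain Python. This is a toy illustration, not any particular framework – real systems use embeddings and vector search rather than keyword overlap:&lt;/p&gt;

```python
# A toy illustration of agent memory: a store you write facts into and later
# query. Keyword overlap stands in for semantic similarity in this sketch.
memories = []

def remember(text, tags=None):
    memories.append({"text": text, "tags": tags or []})

def recall(query):
    words = set(query.lower().split())
    return [m for m in memories if words.intersection(m["text"].split())]

remember("user prefers dark mode", tags=["preferences"])
print(recall("which mode does the user prefer?"))
```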

&lt;h3&gt;
  
  
  Framework Showdown
&lt;/h3&gt;

&lt;p&gt;Here's a comparison of popular AI agent memory frameworks:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Framework&lt;/th&gt;
&lt;th&gt;Memory Class&lt;/th&gt;
&lt;th&gt;Architecture&lt;/th&gt;
&lt;th&gt;Open Source&lt;/th&gt;
&lt;th&gt;Stars&lt;/th&gt;
&lt;th&gt;Lock-in&lt;/th&gt;
&lt;th&gt;Managed Cloud&lt;/th&gt;
&lt;th&gt;Self-Host&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Mem0&lt;/td&gt;
&lt;td&gt;Personalization + institutional&lt;/td&gt;
&lt;td&gt;Vector + Graph&lt;/td&gt;
&lt;td&gt;Apache 2.0&lt;/td&gt;
&lt;td&gt;~48K&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Letta&lt;/td&gt;
&lt;td&gt;Both (OS-inspired)&lt;/td&gt;
&lt;td&gt;Tiered&lt;/td&gt;
&lt;td&gt;Apache 2.0&lt;/td&gt;
&lt;td&gt;~21K&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Zep / Graphiti&lt;/td&gt;
&lt;td&gt;Both (strongest on temporal)&lt;/td&gt;
&lt;td&gt;Temporal KG&lt;/td&gt;
&lt;td&gt;Graphiti: open&lt;/td&gt;
&lt;td&gt;~24K&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Via Graphiti only&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Choosing the Right Framework
&lt;/h3&gt;

&lt;p&gt;Your project's requirements determine the best framework. Need personalization, temporal reasoning, or long-running agents? Each framework has its strengths and weaknesses.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Mem0&lt;/strong&gt;: Ideal for personalization and institutional memory. It offers a managed cloud service with automatic compliance and scaling.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Zep / Graphiti&lt;/strong&gt;: Strongest on temporal knowledge-graph reasoning. Note that self-hosting is available only via the open-source Graphiti engine.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Letta&lt;/strong&gt;: Offers an OS-inspired architecture with tiered memory management. It's ideal for long-running agents.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Alternatives to Mem0
&lt;/h3&gt;

&lt;p&gt;If you're looking beyond Mem0:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Letta&lt;/strong&gt;: Unique OS-inspired architecture and self-editing memory make it a compelling choice.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Zep / Graphiti&lt;/strong&gt;: Temporal knowledge graph architecture sets it apart.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MrMemory&lt;/strong&gt;: A managed memory API with semantic recall, auto-remember, and memory compression. Try the following code:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;pip&lt;/span&gt; &lt;span class="n"&gt;install&lt;/span&gt; &lt;span class="n"&gt;mrmemory&lt;/span&gt;

&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;mrmemory&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;MrMemory&lt;/span&gt;
&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MrMemory&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your-key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;remember&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user prefers dark mode&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tags&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;preferences&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="n"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;recall&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;what theme does the user like?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Choosing an AI agent memory framework can be daunting. Consider your project's needs and choose a framework that fits. If you're looking for a managed memory API with semantic recall, try MrMemory today!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Further reading:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://mrmemory.dev/docs" rel="noopener noreferrer"&gt;MrMemory Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://vectorize.io/best-mem0-alternatives-for-ai-agent-memory-in-2026/" rel="noopener noreferrer"&gt;Mem0 Alternatives&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aiagentmemory</category>
      <category>mem0</category>
      <category>zep</category>
      <category>letta</category>
    </item>
    <item>
      <title>New post</title>
      <dc:creator>Darren</dc:creator>
      <pubDate>Wed, 08 Apr 2026 04:13:01 +0000</pubDate>
      <link>https://forem.com/realmrmemory/new-post-2l85</link>
      <guid>https://forem.com/realmrmemory/new-post-2l85</guid>
      <description>&lt;h1&gt;
  
  
  &lt;strong&gt;The Dark Side of Multi-Agent Systems: When Memory Fails&lt;/strong&gt;
&lt;/h1&gt;

&lt;p&gt;As we push the boundaries of AI collaboration, one critical aspect is often overlooked: memory. In multi-agent systems, agents need to recall knowledge, preferences, and outcomes over time – but their memory requirements are more complex than you think.&lt;/p&gt;

&lt;p&gt;Take, for example, an e-commerce platform with thousands of concurrent users. When a user logs in, they expect personalized recommendations based on past purchases. But if the agent has forgotten that history, the entire user experience falls apart.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Pitfalls of Short-Term Memory (STM)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;STM is great for maintaining recent context within an active session. However, it is ephemeral by design: it does not persist across sessions, and it scales poorly. Imagine multiple agents updating STM concurrently – you need a robust system to serialize those updates while keeping recall fast.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;mrmemory&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;MrMemory&lt;/span&gt;

&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MrMemory&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your-key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;remember&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user prefers dark mode&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tags&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;preferences&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is where MrMemory's managed memory API shines. It provides compression, self-edit tools, and three-layer governance to ensure data consistency and scalability.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Long-Term Memory (LTM) Conundrum&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;LTMs provide persistence of information across sessions. But designing an LTM that ensures data consistency and scalability is no easy feat. You need to consider factors like ownership, privacy, and concurrent updates.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;recall&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;what theme does the user like?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Team Memory: The Unsung Hero&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Effective team memory enables agents to share knowledge and collaborate effectively. But designing a robust team memory requires careful consideration of data consistency, ownership, and privacy.&lt;/p&gt;
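&lt;p&gt;One way to picture team memory is a single shared store where every entry records its owning agent, so knowledge is shared but ownership stays explicit. The sketch below is illustrative plain Python, not the MrMemory API:&lt;/p&gt;

```python
# Toy sketch of team memory: one shared store, each entry tagged with its
# owning agent, so any agent can read teammates' facts.
team_memory = []

def share(owner, text):
    team_memory.append({"owner": owner, "text": text})

def recall_shared(keyword):
    return [m for m in team_memory if keyword in m["text"]]

share("planner-agent", "customer prefers weekly summaries")
share("billing-agent", "invoice cycle is monthly")
print(recall_shared("weekly"))  # the billing agent can read the planner's fact
```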

&lt;h2&gt;
  
  
  &lt;strong&gt;A Comparison with Alternatives&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;While Mem0 offers some features similar to MrMemory, it lacks compression, self-edit tools, and three-layer governance. Zep is a self-hosted solution that requires significant infrastructure investment. MemGPT (whose team now builds Letta) is not a model but an open-source framework that gives an LLM an OS-inspired, tiered memory hierarchy.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;MrMemory&lt;/th&gt;
&lt;th&gt;Mem0&lt;/th&gt;
&lt;th&gt;Zep&lt;/th&gt;
&lt;th&gt;MemGPT&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Compression&lt;/td&gt;
&lt;td&gt;40-60% token savings&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Self-edit tools&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Three-layer governance&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Anti-pollution&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Designing memory schemas for multi-agent systems requires careful consideration of factors like synchronization, ownership, privacy, and data consistency. MrMemory's managed memory API provides a solution to these challenges, enabling agents to recall knowledge, preferences, and outcomes over time.&lt;/p&gt;

&lt;p&gt;Try MrMemory today and discover how its managed memory API can improve your agent collaboration and decision-making capabilities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Recommended Reading&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://mrmemory.dev/docs/memory" rel="noopener noreferrer"&gt;Memory - Multi-agent Reference Architecture&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://mrmemory.dev/blog/why-multi-agent-systems-need-memory-engineering" rel="noopener noreferrer"&gt;Why Multi-Agent Systems Need Memory Engineering&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Tags&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;multi-agent systems&lt;/li&gt;
&lt;li&gt;memory schemas&lt;/li&gt;
&lt;li&gt;short-term memory (STM)&lt;/li&gt;
&lt;li&gt;long-term memory (LTM)&lt;/li&gt;
&lt;li&gt;team memory&lt;/li&gt;
&lt;li&gt;MrMemory&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>Apply decay function to fade old embeddings</title>
      <dc:creator>Darren</dc:creator>
      <pubDate>Tue, 07 Apr 2026 04:13:22 +0000</pubDate>
      <link>https://forem.com/realmrmemory/apply-decay-function-to-fade-old-embeddings-mb3</link>
      <guid>https://forem.com/realmrmemory/apply-decay-function-to-fade-old-embeddings-mb3</guid>
      <description>&lt;h1&gt;
  
  
  &lt;strong&gt;Antipollution Patterns for AI Agent Memory&lt;/strong&gt;
&lt;/h1&gt;

&lt;h3&gt;
  
  
  The Context Pollution Problem
&lt;/h3&gt;

&lt;p&gt;Context pollution is a real issue that can tank the performance of your AI agents. I've seen it happen: you throw more memory at the problem and, instead of solving it, you make it worse. The model starts spewing garbage responses because it's drowning in irrelevant context.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Causes Context Pollution?
&lt;/h3&gt;

&lt;p&gt;It all comes down to how your model handles context. If it can't tell what's relevant and what's not, you're doomed. It's like trying to have a conversation with someone who just repeats everything they've ever heard without any filter.&lt;/p&gt;

&lt;h3&gt;
  
  
  Effective Forgetting
&lt;/h3&gt;

&lt;p&gt;So, how do you prevent this? Well, one approach is to implement effective forgetting mechanisms that let your model discard unnecessary info. We use decay functions for this in MrMemory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;mrmemory&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;MrMemory&lt;/span&gt;
&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MrMemory&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your-key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;remember&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user prefers dark mode&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tags&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;preferences&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="c1"&gt;# Apply decay function to fade old embeddings
&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;decay&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;results&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.5&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By applying a decay function, we can make old and unreferenced embeddings fade from the agent's memory, preventing context pollution.&lt;/p&gt;
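&lt;p&gt;For intuition on the decay itself, an exponential half-life curve is the classic choice. The sketch below is a toy, and &lt;code&gt;half_life&lt;/code&gt; is an assumed tuning knob, not a documented MrMemory parameter:&lt;/p&gt;

```python
# Toy exponential decay: a memory's retrieval weight halves every `half_life`
# seconds unless it is re-referenced.
def decay_score(age_seconds, half_life=86400.0):
    return 0.5 ** (age_seconds / half_life)

print(decay_score(0))      # a brand-new memory has full weight: 1.0
print(decay_score(86400))  # after one half-life, the weight drops to 0.5
```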

&lt;h3&gt;
  
  
  Weighting Recent Memories
&lt;/h3&gt;

&lt;p&gt;Another approach is to weight recent memories higher during retrieval scoring. This way, your model prioritizes more relevant and up-to-date info when making decisions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;mrmemory&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;MrMemory&lt;/span&gt;
&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MrMemory&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your-key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;remember&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user prefers dark mode&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tags&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;preferences&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="c1"&gt;# Weight recent memories higher during retrieval
&lt;/span&gt;&lt;span class="n"&gt;weighted_results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;weight_recent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;results&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.8&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Comparison with Alternatives
&lt;/h3&gt;

&lt;p&gt;We've compared our solution to others like Mem0 and Zep. While they offer similar functionality, MrMemory's got some key advantages:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Solution&lt;/th&gt;
&lt;th&gt;Compression&lt;/th&gt;
&lt;th&gt;Self-Edit Tools&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;MrMemory&lt;/td&gt;
&lt;td&gt;40-60% token savings&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Mem0&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Zep (self-host only)&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Preventing context pollution is crucial for building effective AI agents. By using decay functions or weighting recent memories, you can keep your model's memory clean and efficient. Try MrMemory today and see the difference.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Further reading:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://mrmemory.dev/docs" rel="noopener noreferrer"&gt;MrMemory Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/masterdarren23/mrmemory" rel="noopener noreferrer"&gt;MrMemory GitHub Repository&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Tags:&lt;/strong&gt; ai, memory, antipollution, context pollution, mrmemory&lt;/p&gt;

</description>
      <category>ai</category>
      <category>mrmemory</category>
    </item>
    <item>
      <title>Building a Chatbot That Remembers: Leveraging MrMemory for AI-Powered Conversations</title>
      <dc:creator>Darren</dc:creator>
      <pubDate>Tue, 07 Apr 2026 02:47:44 +0000</pubDate>
      <link>https://forem.com/realmrmemory/building-a-chatbot-that-remembers-leveraging-mrmemory-for-ai-powered-conversations-4aig</link>
      <guid>https://forem.com/realmrmemory/building-a-chatbot-that-remembers-leveraging-mrmemory-for-ai-powered-conversations-4aig</guid>
      <description>&lt;p&gt;Building a Chatbot That Remembers: Leveraging MrMemory for AI-Powered Conversations&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    2026-04-05
    2 min read

  MrMemoryStreamlitLangChainAI



  **Introduction**
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;===============&lt;/p&gt;

&lt;p&gt;As AI agents become increasingly sophisticated, the need to create chatbots that can remember user preferences and past conversations has never been more pressing. In this article, we'll explore how you can build a chatbot that remembers using MrMemory, Streamlit, and LangChain.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What's the Problem?&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;Imagine building a chatbot that can recall previous conversations and provide personalized responses based on a user's preferences. Sounds like science fiction, right? Unfortunately, most AI agents lack this crucial feature, leading to frustrating experiences for users.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;MrMemory: The Solution&lt;/strong&gt;&lt;/p&gt;


&lt;p&gt;Enter MrMemory, the managed memory API designed specifically for AI agents. With MrMemory, you can easily integrate memory recall into your chatbot, allowing it to remember user preferences and past conversations.&lt;/p&gt;

&lt;p&gt;Here's an example of how to use MrMemory in Python:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Install first: pip install mrmemory
from mrmemory import MrMemory

client = MrMemory(api_key="your-key")
client.remember("user prefers dark mode", tags=["preferences"])
results = client.recall("what theme does the user like?")
print(results)  # Output: "dark mode"
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Streamlit and LangChain Integration&lt;/strong&gt;&lt;/p&gt;


&lt;p&gt;To create a chatbot that remembers, you'll need to integrate MrMemory with Streamlit and LangChain. Streamlit provides a simple way to build web applications using Python, while LangChain is a powerful library for building AI agents.&lt;/p&gt;

&lt;p&gt;Here's an example of how to use Streamlit and LangChain together:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import streamlit as st

# Sketch only: `chain` stands in for a LangChain runnable you build yourself
# (for example, a prompt piped into a chat model).
st.title("Chatbot That Remembers")

user_input = st.text_input("User Input")
if user_input:
    response = chain.invoke(user_input)  # your LangChain chain or agent
    st.write(response)
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Alternatives and Comparison&lt;/strong&gt;&lt;/p&gt;


&lt;p&gt;While MrMemory is an excellent choice for building a chatbot that remembers, there are alternative solutions available. Mem0, Zep, and MemGPT are some popular options, but they lack the compression features and self-edit tools offered by MrMemory.&lt;/p&gt;

&lt;p&gt;Here's a comparison of these alternatives:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;MrMemory&lt;/th&gt;
&lt;th&gt;Mem0&lt;/th&gt;
&lt;th&gt;Zep&lt;/th&gt;
&lt;th&gt;MemGPT&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Compression&lt;/td&gt;
&lt;td&gt;40-60% token savings&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Self-edit tools&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Governance&lt;/td&gt;
&lt;td&gt;Three-layer&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;


&lt;p&gt;In conclusion, building a chatbot that remembers user preferences and past conversations requires the right combination of tools. MrMemory, Streamlit, and LangChain offer a powerful solution for creating AI-powered conversations that remember.&lt;/p&gt;

&lt;p&gt;Try MrMemory today and experience the benefits of a managed memory API designed specifically for AI agents. Sign up for a free trial or explore our documentation to learn more.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Suggested Internal Links&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://dev.to/getting-started-with-mrmemory"&gt;Getting Started with MrMemory&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/streamlit-tutorial-building-a-chatbot-that-remembers"&gt;Streamlit Tutorial: Building a Chatbot That Remembers&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/langchain-documentation-integrating-memory-recall"&gt;LangChain Documentation: Integrating Memory Recall&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Ready to give your AI agents memory?
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  Install in one line. Remember forever. Start with a 7-day free trial.

  [Start Free Trial →](https://buy.stripe.com/00w4gB2REex4daHeP38g001)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>mrmemory</category>
      <category>ai</category>
      <category>chatbot</category>
      <category>langchain</category>
    </item>
    <item>
      <title>How to Add Memory to Your Python AI Agent in 3 Lines of Code</title>
      <dc:creator>Darren</dc:creator>
      <pubDate>Mon, 06 Apr 2026 05:42:23 +0000</pubDate>
      <link>https://forem.com/realmrmemory/how-to-add-memory-to-your-python-ai-agent-in-3-lines-of-code-48ia</link>
      <guid>https://forem.com/realmrmemory/how-to-add-memory-to-your-python-ai-agent-in-3-lines-of-code-48ia</guid>
      <description>&lt;p&gt;Here is the article:&lt;/p&gt;




&lt;p&gt;title: "How to Add Memory to Your Python AI Agent in 3 Lines of Code"&lt;br&gt;
description: "Learn how to add persistent, searchable memory to your Python AI agent using MrMemory's Managed Memory API."&lt;br&gt;
tags: ["AI", "Python", "MrMemory"]&lt;/p&gt;
&lt;h2&gt;
  
  
  date: 2026-04-05
&lt;/h2&gt;
&lt;h3&gt;
  
  
  How to Add Memory to Your Python AI Agent in 3 Lines of Code
&lt;/h3&gt;


&lt;p&gt;As AI developers, we've all experienced the frustration of trying to build a stateful conversational AI without a proper memory management system. Without memory, our AI agents are like goldfish swimming in circles – impressive for thirty seconds, then utterly useless.&lt;/p&gt;
&lt;h3&gt;
  
  
  What is Long-term Memory for AI Agents?
&lt;/h3&gt;

&lt;p&gt;Long-term memory for AI agents is the ability to store, retrieve, and reference past interactions across multiple sessions, enabling contextual awareness and personalized responses based on historical data. This fundamental aspect of human intelligence allows us to recall memories from our past, build upon previous experiences, and respond accordingly.&lt;/p&gt;
&lt;h3&gt;
  
  
  Why Adding Memory to Your AI Agent Actually Matters
&lt;/h3&gt;

&lt;p&gt;Here's the brutal truth: stateless agents are party tricks. They answer questions brilliantly but can't maintain a coherent conversation beyond a single exchange. Memory transforms your agent from a fancy autocomplete tool into something genuinely useful:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Contextual Continuity&lt;/strong&gt;: Your agent tracks conversation threads, remembers user preferences, and builds on previous interactions instead of starting from zero every time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Personalization at Scale&lt;/strong&gt;: Store user-specific details (project names, coding preferences, domain context) and deliver tailored responses that feel custom-built.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Complex Task Handling&lt;/strong&gt;: Break down multi-step workflows where each step builds on the last—project management, workflow automation, or even chatbot-based customer service.&lt;/li&gt;
&lt;/ul&gt;
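&lt;p&gt;The contextual-continuity point can be made concrete: what separates a stateful agent from a stateless one is simply a store that outlives the process. Here is a minimal file-based sketch (illustrative only – a managed API replaces the file in practice):&lt;/p&gt;

```python
import json
import os
import tempfile

# Memory survives restarts because it lives outside the process:
# a JSON file here, a managed memory API in practice.
path = os.path.join(tempfile.gettempdir(), "agent_memory.json")

def load():
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return []

def save(memories):
    with open(path, "w") as f:
        json.dump(memories, f)

# Session 1: the agent learns something and persists it.
session1 = load()
session1.append("user prefers dark mode")
save(session1)

# Session 2 (a fresh process later): the fact is still there.
print("user prefers dark mode" in load())  # True
```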
&lt;h3&gt;
  
  
  Adding Memory to Your AI Agent with MrMemory
&lt;/h3&gt;

&lt;p&gt;To add memory to your Python AI agent, you can use MrMemory's Managed Memory API. Here's an example of how to do it in just 3 lines of code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;pip&lt;/span&gt; &lt;span class="n"&gt;install&lt;/span&gt; &lt;span class="n"&gt;mrmemory&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;mrmemory&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;MrMemory&lt;/span&gt;
&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MrMemory&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your-key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this simple setup, you can store and recall memories using the &lt;code&gt;remember&lt;/code&gt; and &lt;code&gt;recall&lt;/code&gt; methods:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;remember&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user prefers dark mode&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tags&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;preferences&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="n"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;recall&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;what theme does the user like?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;results&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# Output: "dark mode"
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Comparison to Alternative Solutions
&lt;/h3&gt;

&lt;p&gt;While there are other solutions available, such as Mem0, Zep, and MemGPT, MrMemory's Managed Memory API stands out for its ease of use, scalability, and compression capabilities. For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Mem0 lacks memory compression, making it less efficient for large datasets.&lt;/li&gt;
&lt;li&gt;Zep is a self-hosted solution that requires significant infrastructure setup and maintenance.&lt;/li&gt;
&lt;li&gt;MemGPT is also self-hosted, which can limit its applicability in certain scenarios.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Adding memory to your Python AI agent is no longer a daunting task. With MrMemory's Managed Memory API, you can create stateful conversational AIs that remember conversations, build context, and respond intelligently. Try MrMemory today and take the first step towards building more advanced AI applications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Try MrMemory:&lt;/strong&gt; &lt;a href="//buy.stripe.com/00w4gB2REex4daHeP38g001"&gt;Sign up for a 7-day free trial&lt;/a&gt; or &lt;a href="//mrmemory.dev/docs"&gt;visit our documentation&lt;/a&gt; to learn more about how MrMemory can help you build better AI agents.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>python</category>
      <category>mrmemory</category>
    </item>
    <item>
      <title>Adding Persistent Memory to LangChain Agents: A Guide</title>
      <dc:creator>Darren</dc:creator>
      <pubDate>Mon, 06 Apr 2026 05:34:53 +0000</pubDate>
      <link>https://forem.com/realmrmemory/adding-persistent-memory-to-langchain-agents-a-guide-3235</link>
      <guid>https://forem.com/realmrmemory/adding-persistent-memory-to-langchain-agents-a-guide-3235</guid>
      <description>&lt;p&gt;Here is the article:&lt;/p&gt;




&lt;p&gt;title: "Adding Persistent Memory to LangChain Agents: A Guide"&lt;br&gt;
description: "Learn how to add persistent memory to your LangChain agents and improve their conversational capabilities."&lt;br&gt;
tags: ["LangChain", "persistent memory", "AI development"]&lt;/p&gt;
&lt;h2&gt;
  
  
  date: 2026-04-04
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;How to Add Persistent Memory to LangChain Agents&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As AI developers, we're well aware of the importance of memory in building conversational agents that can learn from user interactions and adapt accordingly. However, adding persistent memory to your LangChain agents can be a daunting task, especially if you're new to agent development.&lt;/p&gt;

&lt;p&gt;In this article, we'll explore the different approaches to adding long-term memory to LangChain agents, including the use of composite backends, background processes, and integration with external databases. We'll also discuss some best practices for configuring memory in your agents and provide code examples to get you started.&lt;/p&gt;
&lt;h3&gt;
  
  
  Configuring Memory in LangChain Agents
&lt;/h3&gt;

&lt;p&gt;According to LangChain's documentation, one way to add long-term memory to your agents is by using a composite backend that routes the &lt;code&gt;/memories/&lt;/code&gt; path to a StoreBackend. This approach allows you to store memories persistently across interactions and threads.&lt;/p&gt;

&lt;p&gt;Here's an example of how you can configure memory in your LangChain agent:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;deepagents&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;create_deep_agent&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;deepagents.backends&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;CompositeBackend&lt;/span&gt;

&lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;create_deep_agent&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;composite_backend&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;CompositeBackend&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;StoreBackend&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Using Background Processes to Update Memory
&lt;/h3&gt;

&lt;p&gt;Another way to update memory is by running a background process that updates memories either during or after the conversation. This approach allows you to decouple memory updates from the main agent thread and improve overall performance.&lt;/p&gt;

&lt;p&gt;Here's an example of how you can use a background process to update memory in your LangChain agent:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;update_memory&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# Update memory logic goes here
&lt;/span&gt;    &lt;span class="k"&gt;pass&lt;/span&gt;

&lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;create_deep_agent&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;loop&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_event_loop&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;loop&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create_task&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;update_memory&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Integrating with External Databases
&lt;/h3&gt;

&lt;p&gt;To add persistent memory to your LangChain agents, you can also integrate them with external databases such as PostgreSQL. This approach allows you to store memories persistently across interactions and threads.&lt;/p&gt;

&lt;p&gt;Here's an example of how you can integrate your LangChain agent with a PostgreSQL database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;psycopg2&lt;/span&gt;

&lt;span class="n"&gt;conn&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;psycopg2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;host&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your_host&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;database&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your_database&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your_user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;password&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your_password&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;cursor&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;cursor&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
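&lt;p&gt;The snippet above only opens a connection and a cursor. To show the rest of the pattern end to end, here is a minimal self-contained sketch; it swaps in Python's built-in &lt;code&gt;sqlite3&lt;/code&gt; so it runs without a database server, and the &lt;code&gt;memories&lt;/code&gt; table name and columns are illustrative assumptions, not a fixed schema:&lt;/p&gt;

```python
import sqlite3

# In-memory SQLite stands in for the PostgreSQL connection above;
# with psycopg2 the SQL is the same apart from the "?" placeholders.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE IF NOT EXISTS memories ("
    "id INTEGER PRIMARY KEY, text TEXT NOT NULL, tags TEXT)"
)

def save_memory(text, tags=""):
    # Parameterized query: never interpolate user text into SQL directly
    conn.execute("INSERT INTO memories (text, tags) VALUES (?, ?)", (text, tags))
    conn.commit()

def load_memories(tag):
    rows = conn.execute(
        "SELECT text FROM memories WHERE tags LIKE ?", (f"%{tag}%",)
    ).fetchall()
    return [r[0] for r in rows]

save_memory("user prefers dark mode", tags="preferences")
print(load_memories("preferences"))  # ['user prefers dark mode']
```

&lt;p&gt;Because the rows live in an ordinary table, memories survive agent restarts and can be shared across threads.&lt;/p&gt;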



&lt;h3&gt;
  
  
  Best Practices for Configuring Memory in LangChain Agents
&lt;/h3&gt;

&lt;p&gt;When configuring memory in your LangChain agents, there are a few best practices to keep in mind:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use a composite backend to route memory paths to a persistent store, so memories survive across interactions and threads.&lt;/li&gt;
&lt;li&gt;Update memories in the background to decouple memory updates from the main agent thread.&lt;/li&gt;
&lt;li&gt;Integrate with external databases such as PostgreSQL to store memories persistently.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By following these best practices, you can ensure that your LangChain agents have persistent memory and can learn from user interactions and adapt accordingly.&lt;/p&gt;

&lt;h3&gt;
  
  
  Alternatives to MrMemory
&lt;/h3&gt;

&lt;p&gt;If you're looking for alternatives to MrMemory, there are a few options available:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Mem0: A managed memory API that allows you to add persistent memory to your AI agents.&lt;/li&gt;
&lt;li&gt;Zep: A self-hosted alternative to MrMemory that allows you to manage memory persistently across interactions and threads.&lt;/li&gt;
&lt;li&gt;MemGPT: A self-hosted framework in which the agent manages its own memory hierarchy, paging context in and out of the prompt itself.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While these alternatives may offer some similar features to MrMemory, they have their own limitations and use cases. When choosing an alternative, be sure to consider your specific needs and requirements.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Adding persistent memory to your LangChain agents can be a powerful way to improve their conversational capabilities and adaptability. By following the best practices outlined in this article, you can ensure that your agents have persistent memory and can learn from user interactions and adapt accordingly.&lt;/p&gt;

&lt;p&gt;To get started with adding persistent memory to your LangChain agents, try MrMemory today! With its managed memory API and seamless integration with external databases, MrMemory is the perfect solution for AI developers looking to add persistent memory to their agents.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Try MrMemory Today!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://mrmemory.dev/docs" rel="noopener noreferrer"&gt;CTA link&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Further reading:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://blog.mrmemory.dev/adding-long-term-memory-to-langgraph-and-langchain-agents" rel="noopener noreferrer"&gt;Adding Long-Term Memory to LangGraph and LangChain Agents&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.langchain.com/oss/deepagents/memory" rel="noopener noreferrer"&gt;Configuring Memory in LangChain Agents&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


</description>
      <category>langchain</category>
      <category>persistentmemory</category>
      <category>aidevelopment</category>
    </item>
    <item>
      <title>Three-Layer Memory Governance: Core, Provisional, Private</title>
      <dc:creator>Darren</dc:creator>
      <pubDate>Mon, 06 Apr 2026 05:34:52 +0000</pubDate>
      <link>https://forem.com/realmrmemory/three-layer-memory-governance-core-provisional-private-4cl0</link>
      <guid>https://forem.com/realmrmemory/three-layer-memory-governance-core-provisional-private-4cl0</guid>
      <description>&lt;p&gt;Here is the article:&lt;/p&gt;




&lt;p&gt;title: "Three-Layer Memory Governance: Core, Provisional, Private"&lt;br&gt;
description: "Discover how MrMemory's three-layer memory governance framework ensures secure and efficient management of AI agent memories."&lt;br&gt;
tags: ["AI", "memory governance", "MrMemory"]&lt;/p&gt;
&lt;h2&gt;
  
  
  date: 2026-04-06
&lt;/h2&gt;
&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In the world of AI agents, memory is a critical component that enables learning, adaptation, and decision-making. However, managing memory effectively can be a daunting task, especially as AI systems become more complex and interconnected. The lack of robust memory governance can lead to issues such as data pollution, incorrect recall, and poor performance. In this article, we'll explore the concept of three-layer memory governance and how MrMemory's API can help you implement it.&lt;/p&gt;
&lt;h2&gt;
  
  
  What is Three-Layer Memory Governance?
&lt;/h2&gt;

&lt;p&gt;The idea of three-layer memory governance was first proposed by Haichang Li in his paper "Memory as a Service (MaaS): Rethinking Contextual Memory as Service-Oriented Modules for Collaborative Agents". The concept revolves around creating a framework that separates memories into three layers: Core, Provisional, and Private. Each layer serves a specific purpose:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Core&lt;/strong&gt;: The foundation of governance. It holds stable, validated facts that your agent relies on across sessions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Provisional&lt;/strong&gt;: This layer is for temporary or provisional memories that may need to be updated or revised.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Private&lt;/strong&gt;: This is where you keep personal or private information that requires additional security measures.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;By separating memories into these three layers, you can ensure that sensitive data is properly protected and that your AI agents operate efficiently.&lt;/p&gt;
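&lt;p&gt;As a toy illustration of the separation itself, here is a minimal in-process sketch; the class and method names are invented for this example and are not part of any library:&lt;/p&gt;

```python
# Toy sketch of three-layer memory governance (Core / Provisional / Private).
# Names and structure are illustrative, not MrMemory's actual API.
class LayeredMemory:
    LAYERS = ("core", "provisional", "private")

    def __init__(self):
        self._store = {layer: [] for layer in self.LAYERS}

    def remember(self, text, layer="provisional"):
        # New facts default to the Provisional layer until reviewed
        if layer not in self.LAYERS:
            raise ValueError(f"unknown layer: {layer}")
        self._store[layer].append(text)

    def recall(self, layer):
        # A real system would gate the Private layer behind an auth check here
        return list(self._store[layer])

    def promote(self, text, src="provisional", dst="core"):
        # Provisional facts that survive review graduate to the Core layer
        self._store[src].remove(text)
        self._store[dst].append(text)

mem = LayeredMemory()
mem.remember("user prefers dark mode")
mem.promote("user prefers dark mode")
print(mem.recall("core"))  # ['user prefers dark mode']
```

&lt;p&gt;The point of the sketch is the routing discipline: writes land in Provisional by default, promotion to Core is an explicit step, and Private reads go through a single chokepoint where access control can live.&lt;/p&gt;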
&lt;h2&gt;
  
  
  How does MrMemory's API Implement Three-Layer Memory Governance?
&lt;/h2&gt;

&lt;p&gt;MrMemory's API provides a simple and efficient way to implement the three-layer memory governance framework. Here are some code examples:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;mrmemory&lt;/span&gt;

&lt;span class="c1"&gt;# Create a client instance
&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MrMemory&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your-key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Store a piece of information in the Core layer
&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;remember&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user prefers dark mode&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tags&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;preferences&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

&lt;span class="c1"&gt;# Retrieve information from the Core layer
&lt;/span&gt;&lt;span class="n"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;recall&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;what theme does the user like?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, using MrMemory's API is straightforward. You can store and retrieve memories with ease, while also benefiting from the three-layer memory governance framework.&lt;/p&gt;

&lt;h2&gt;
  
  
  Alternatives: What's Out There?
&lt;/h2&gt;

&lt;p&gt;If you're looking for alternatives to MrMemory's API, there are a few options available:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Mem0&lt;/strong&gt;: Mem0 provides a similar service-oriented memory architecture, but lacks compression and self-edit tools.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Zep&lt;/strong&gt;: Zep is a self-hosted solution that requires manual curation and lacks the benefits of MrMemory's three-layer governance framework.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Letta/MemGPT&lt;/strong&gt;: Letta and MemGPT are also self-hosted solutions that require manual curation and lack the compression and self-edit tools found in MrMemory.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;While these alternatives may provide some similar features, they don't offer the same level of efficiency, security, and scalability as MrMemory's API.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, we've explored the concept of three-layer memory governance and how MrMemory's API can help you implement it. By separating memories into Core, Provisional, and Private layers, you can ensure that your AI agents operate efficiently and securely. Try MrMemory today and experience the benefits of a robust memory governance framework.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Try MrMemory&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://mrmemory.dev/docs" rel="noopener noreferrer"&gt;Link to MrMemory documentation&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Get Started&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://pypi.org/project/mrmemory/" rel="noopener noreferrer"&gt;Install MrMemory with pip: &lt;code&gt;pip install mrmemory&lt;/code&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.npmjs.com/package/memorymr" rel="noopener noreferrer"&gt;Install MrMemory with npm: &lt;code&gt;npm install memorymr&lt;/code&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>memorygovernance</category>
      <category>mrmemory</category>
    </item>
    <item>
      <title>Why AI Agents Need Long-Term Memory to Be Truly Useful</title>
      <dc:creator>Darren</dc:creator>
      <pubDate>Sun, 05 Apr 2026 16:24:22 +0000</pubDate>
      <link>https://forem.com/realmrmemory/why-ai-agents-need-long-term-memory-to-be-truly-useful-4fed</link>
      <guid>https://forem.com/realmrmemory/why-ai-agents-need-long-term-memory-to-be-truly-useful-4fed</guid>
      <description>&lt;h1&gt;
  
  
  Why AI Agents Need Long-Term Memory to Be Truly Useful
&lt;/h1&gt;

&lt;p&gt;Every AI agent you've built has the same fatal flaw: &lt;strong&gt;amnesia&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Your chatbot nails the first conversation. The user says they prefer dark mode, work in fintech, and hate verbose responses. Perfect — the agent adapts. Then the session ends, and it's all gone. Next conversation? "Hi! How can I help you today?" Like you never met.&lt;/p&gt;

&lt;p&gt;This isn't a minor UX issue. It's the single biggest gap between AI agents that feel like tools and AI agents that feel like teammates.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Cost of Forgetting
&lt;/h2&gt;

&lt;p&gt;Think about what happens when your agent forgets:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Users repeat themselves&lt;/strong&gt; — "I already told you I use TypeScript, not Python"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Personalization resets&lt;/strong&gt; — every session starts from zero&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Context is lost&lt;/strong&gt; — multi-day workflows fall apart&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Trust erodes&lt;/strong&gt; — users stop investing in the relationship&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For enterprise use cases, the stakes are even higher. An AI sales assistant that forgets a client's preferences? A support bot that can't recall the ticket from yesterday? These aren't edge cases — they're dealbreakers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why In-Memory Solutions Don't Scale
&lt;/h2&gt;

&lt;p&gt;The naive fix is stuffing conversation history into the context window. But this hits walls fast:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Token limits&lt;/strong&gt; — GPT-4 gives you 128K tokens. Sounds like a lot until you're 50 conversations deep.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost&lt;/strong&gt; — Every token in context costs money. Replaying full history on every call gets expensive.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Relevance&lt;/strong&gt; — Not everything matters. A conversation from 3 weeks ago about API keys isn't relevant to today's debugging session.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No semantic search&lt;/strong&gt; — You can't ask "what does this user prefer?" across flat conversation logs.&lt;/li&gt;
&lt;/ol&gt;
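&lt;p&gt;To put rough numbers on points 1 and 2, here is a back-of-envelope sketch; the average message size and per-token price are illustrative assumptions, not published figures:&lt;/p&gt;

```python
# Rough cost of replaying full conversation history on every call.
# Both constants below are illustrative assumptions.
TOKENS_PER_MESSAGE = 200          # assumed average message size
PRICE_PER_1K_INPUT_TOKENS = 0.01  # assumed input price in USD

def replay_cost(num_messages):
    """Input cost of one call that resends the entire history so far."""
    return num_messages * TOKENS_PER_MESSAGE * PRICE_PER_1K_INPUT_TOKENS / 1000

def cumulative_cost(num_messages):
    """Total input cost after num_messages calls, each replaying history."""
    return sum(replay_cost(n) for n in range(1, num_messages + 1))

# The total grows quadratically: doubling the conversation length
# roughly quadruples the cumulative spend.
print(round(cumulative_cost(100), 2))
print(round(cumulative_cost(200), 2))
```

&lt;p&gt;A memory layer that recalls only the handful of relevant facts keeps the per-call cost flat instead of growing with conversation length.&lt;/p&gt;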

&lt;h2&gt;
  
  
  The Solution: Persistent Semantic Memory
&lt;/h2&gt;

&lt;p&gt;What agents actually need is a memory layer that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Persists&lt;/strong&gt; across sessions, restarts, and deployments&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Searches semantically&lt;/strong&gt; — find relevant memories by meaning, not keywords&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compresses&lt;/strong&gt; automatically — keep what matters, forget what doesn't&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scales&lt;/strong&gt; without blowing up your token budget&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is exactly what &lt;a href="https://mrmemory.dev" rel="noopener noreferrer"&gt;MrMemory&lt;/a&gt; provides. It's a managed API that gives your agent persistent, searchable memory in 3 lines of code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;mrmemory&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;MrMemory&lt;/span&gt;

&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MrMemory&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your-key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Store a memory
&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;remember&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;User prefers dark mode and concise responses&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tags&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;preferences&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

&lt;span class="c1"&gt;# Later, in a new session...
&lt;/span&gt;&lt;span class="n"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;recall&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;what are the user&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;s preferences?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;# Returns: [Memory(text="User prefers dark mode and concise responses", ...)]
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No database setup. No embedding pipeline. No vector store configuration. Just &lt;code&gt;remember&lt;/code&gt; and &lt;code&gt;recall&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  How It Works Under the Hood
&lt;/h2&gt;

&lt;p&gt;MrMemory uses a three-layer architecture:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Storage&lt;/strong&gt; — PostgreSQL for durable, structured memory storage&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Embeddings&lt;/strong&gt; — OpenAI embeddings convert text to semantic vectors&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Search&lt;/strong&gt; — Qdrant vector database enables cosine similarity search&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When you call &lt;code&gt;remember()&lt;/code&gt;, the text is embedded and stored. When you call &lt;code&gt;recall()&lt;/code&gt;, your query is embedded and matched against stored memories by semantic similarity. The most relevant memories come back, ready to inject into your agent's context.&lt;/p&gt;
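&lt;p&gt;The matching step is plain cosine similarity over embedding vectors. A minimal sketch, with toy 3-dimensional vectors standing in for real embedding output:&lt;/p&gt;

```python
import math

def cosine_similarity(a, b):
    # cos(theta) = (a . b) / (|a| * |b|); 1.0 means identical direction
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-d vectors standing in for real embedding output
memory_vec = [0.9, 0.1, 0.0]  # "User prefers dark mode and concise responses"
query_vec = [0.8, 0.2, 0.1]   # "what are the user's preferences?"
unrelated = [0.0, 0.1, 0.9]   # "billing error on Pro plan"

# The query lands much closer to the preference memory than to the
# unrelated one, so the preference memory is what recall() returns first.
print(cosine_similarity(memory_vec, query_vec) > cosine_similarity(memory_vec, unrelated))  # True
```

&lt;p&gt;Real embeddings have hundreds or thousands of dimensions, and the vector database does this comparison approximately over millions of stored memories, but the ranking principle is the same.&lt;/p&gt;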

&lt;h2&gt;
  
  
  Real-World Patterns
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Personal Assistant
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# After learning user preferences
&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;remember&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;User&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;s name is Alex, works at Stripe, prefers Python&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;remember&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;User is building a payment integration for their SaaS&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Next session — agent instantly has context
&lt;/span&gt;&lt;span class="n"&gt;memories&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;recall&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;what is the user working on?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Customer Support Bot
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Store ticket context
&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;remember&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Ticket #&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;ticket_id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;: User reported billing error on Pro plan&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;tags&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;support&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;billing&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

&lt;span class="c1"&gt;# When user returns
&lt;/span&gt;&lt;span class="n"&gt;memories&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;recall&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;previous billing issues&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tags&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;billing&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  LangChain Integration
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;mrmemory.langchain&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;MrMemoryCheckpointer&lt;/span&gt;

&lt;span class="n"&gt;checkpointer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MrMemoryCheckpointer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your-key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;graph&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;workflow&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;compile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;checkpointer&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;checkpointer&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;# All state automatically persisted across sessions
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Alternatives Comparison
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;MrMemory&lt;/th&gt;
&lt;th&gt;Mem0&lt;/th&gt;
&lt;th&gt;Zep&lt;/th&gt;
&lt;th&gt;MemGPT&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Managed API&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Self-host option&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Semantic search&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Auto-compression&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Memory governance&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;LangChain plugin&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Price&lt;/td&gt;
&lt;td&gt;$5/mo&lt;/td&gt;
&lt;td&gt;$49/mo&lt;/td&gt;
&lt;td&gt;$29/mo&lt;/td&gt;
&lt;td&gt;Free (DIY)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;p&gt;Install the SDK and start remembering:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;mrmemory
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Get your API key at &lt;a href="https://mrmemory.dev" rel="noopener noreferrer"&gt;mrmemory.dev&lt;/a&gt; — there's a 7-day free trial, no credit card mind games.&lt;/p&gt;

&lt;p&gt;Your agents deserve to remember. Your users deserve to be remembered.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;MrMemory is an open-source managed memory API for AI agents. &lt;a href="https://github.com/masterdarren23/mrmemory" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; | &lt;a href="https://mrmemory.dev/docs" rel="noopener noreferrer"&gt;Docs&lt;/a&gt; | &lt;a href="https://buy.stripe.com/00w4gB2REex4daHeP38g001" rel="noopener noreferrer"&gt;Try Free&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>memory</category>
      <category>agents</category>
      <category>mrmemory</category>
    </item>
  </channel>
</rss>
