<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Hazel</title>
    <description>The latest articles on Forem by Hazel (@hazelme).</description>
    <link>https://forem.com/hazelme</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3854281%2F639f0a55-0f15-47c4-ac3d-33eafdad2b63.jpg</url>
      <title>Forem: Hazel</title>
      <link>https://forem.com/hazelme</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/hazelme"/>
    <language>en</language>
    <item>
      <title>I built a memory system for AI agents that stores everything as readable Git records</title>
      <dc:creator>Hazel</dc:creator>
      <pubDate>Tue, 31 Mar 2026 22:23:09 +0000</pubDate>
      <link>https://forem.com/hazelme/i-built-a-memory-system-for-ai-agents-that-stores-everything-as-readable-git-records-5f5i</link>
      <guid>https://forem.com/hazelme/i-built-a-memory-system-for-ai-agents-that-stores-everything-as-readable-git-records-5f5i</guid>
      <description>&lt;p&gt;Every time I start a new session with my AI agent, it forgets everything. My preferences, the decisions we made yesterday, the project structure we spent an hour discussing — all gone.&lt;/p&gt;

&lt;p&gt;I got frustrated enough to build something to fix it.&lt;/p&gt;

&lt;h2&gt;The problem&lt;/h2&gt;

&lt;p&gt;I use OpenClaw agents daily. The standard workaround is a MEMORY.md file — a big text file loaded at the start of each session. It works until it doesn't:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It grows until it eats your context window&lt;/li&gt;
&lt;li&gt;You have to maintain it manually&lt;/li&gt;
&lt;li&gt;It doesn't scale to teams. If my colleague's agent learns something, mine has no idea&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I tried vector databases, SQLite + FTS5, various combinations. They all share the same flaw: memories get injected into the context window once at session start, and then they're just tokens. When context compaction kicks in during a long session, those memories can get summarized away or dropped entirely.&lt;/p&gt;

&lt;h2&gt;What I built&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://clawmem.ai" rel="noopener noreferrer"&gt;ClawMem&lt;/a&gt; is a memory plugin for OpenClaw that takes a different approach.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The core loop:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You talk to your agent normally — it doesn't know the memory system exists&lt;/li&gt;
&lt;li&gt;After each session, an LLM subagent analyzes the conversation and extracts durable facts, decisions, and lessons&lt;/li&gt;
&lt;li&gt;Each memory is stored as a structured, labeled record on a GitHub-compatible Git server&lt;/li&gt;
&lt;li&gt;Before the next session, relevant memories are searched and injected into context&lt;/li&gt;
&lt;/ol&gt;
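&lt;p&gt;As a rough sketch of that loop (the class and function names here are illustrative stand-ins, not the plugin's actual API):&lt;/p&gt;

```python
# Minimal sketch of the ClawMem session loop. All names are hypothetical;
# the real plugin hooks into OpenClaw's lifecycle events instead.

class MemoryStore:
    """Toy in-memory stand-in for the Git-backed record store."""
    def __init__(self):
        self.records = []

    def search(self, keyword):
        # Recall step: return records mentioning the keyword.
        return [r for r in self.records if keyword in r]

    def create_record(self, text):
        # Store step: append a labeled record.
        self.records.append(text)

def extract_durable_facts(transcript):
    # Extraction step: in ClawMem this is an LLM subagent; here we
    # just keep lines flagged as decisions.
    return [line for line in transcript if line.startswith("decision:")]

def run_session(store, user_turns):
    context = store.search("api")       # 1. recall before the session
    transcript = context + user_turns   # 2. converse (stubbed out here)
    for fact in extract_durable_facts(transcript):
        store.create_record(fact)       # 3-4. extract and store
    return transcript
```

&lt;p&gt;The point of the shape: the agent itself only ever sees &lt;code&gt;context&lt;/code&gt;; everything else happens outside the conversation.&lt;/p&gt;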

&lt;p&gt;Here's what a stored memory looks like:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#23 [Open] kind:decision topic:api status:active
API rate limiting uses sliding window policy.
Source: session #42, turn 3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Human-readable. You can browse them, search them, edit them, trace each one back to the exact conversation that created it.&lt;/p&gt;
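&lt;p&gt;Because the records are plain text, tooling around them stays trivial. Here is a small parser for the header line, assuming the &lt;code&gt;#ID [State] key:value ...&lt;/code&gt; format shown in the example above (the real schema may have more fields):&lt;/p&gt;

```python
import re

# Matches headers like "#23 [Open] kind:decision topic:api status:active".
# The format is inferred from the example record; labels beyond kind/topic/
# status would parse the same way.
HEADER = re.compile(r"^#(\d+) \[(\w+)\] (.*)$")

def parse_header(line):
    m = HEADER.match(line)
    if m is None:
        raise ValueError("not a memory record header: " + line)
    num, state, rest = m.groups()
    # Remaining tokens are key:value labels separated by spaces.
    labels = dict(pair.split(":", 1) for pair in rest.split())
    return {"id": int(num), "state": state, "labels": labels}
```

&lt;p&gt;Feeding it the header above yields the id, the state, and a label dict you can filter on.&lt;/p&gt;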

&lt;h2&gt;The team memory part&lt;/h2&gt;

&lt;p&gt;This is what I'm most excited about. I recorded a demo of it here:&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/vOxyhoFCdfs"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Multiple agents share the same memory repository. When one team member's agent learns something — "the client meeting moved to Thursday, they want the proposal to focus on sustainability" — every other team member's agent can recall that in their next session.&lt;/p&gt;

&lt;p&gt;No Slack archaeology. No "did anyone tell the agent about the deadline change?" Just shared, inspectable, structured memory.&lt;/p&gt;

&lt;h2&gt;How it works under the hood&lt;/h2&gt;

&lt;p&gt;The backend is a GitHub-compatible API server written in Go:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;REST API endpoints (GitHub API v3 compatible)&lt;/li&gt;
&lt;li&gt;GraphQL API (v4 compatible)&lt;/li&gt;
&lt;li&gt;Real Git HTTP protocol — &lt;code&gt;git clone&lt;/code&gt; actually works&lt;/li&gt;
&lt;li&gt;Hybrid search: SQL LIKE for exact matches + OpenAI embedding vectors for semantic similarity&lt;/li&gt;
&lt;li&gt;Multi-tenant with per-agent database isolation&lt;/li&gt;
&lt;li&gt;Tested with real teamwork scenarios&lt;/li&gt;
&lt;/ul&gt;
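&lt;p&gt;The hybrid search idea is simple enough to sketch: blend an exact-match signal (SQL &lt;code&gt;LIKE&lt;/code&gt; in the real backend) with embedding cosine similarity. The blend weight and the stand-in scoring below are my illustration, not ClawMem's actual ranking:&lt;/p&gt;

```python
import math

def cosine(a, b):
    # Cosine similarity of two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def hybrid_score(query, record_text, query_vec, record_vec, alpha=0.5):
    # Lexical part: stand-in for SQL LIKE '%query%' (1.0 on substring hit).
    lexical = 1.0 if query.lower() in record_text.lower() else 0.0
    # Semantic part: the real backend uses OpenAI embeddings; toy vectors here.
    semantic = cosine(query_vec, record_vec)
    # alpha is a made-up blend weight for illustration.
    return alpha * lexical + (1 - alpha) * semantic
```

&lt;p&gt;Exact matches stay reliable for identifiers and jargon, while the embedding term still surfaces paraphrases.&lt;/p&gt;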

&lt;p&gt;The plugin (&lt;code&gt;@clawmem-ai/clawmem&lt;/code&gt;) hooks into OpenClaw's lifecycle:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;before_agent_start&lt;/code&gt; → recall relevant memories, inject into context&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;session_end&lt;/code&gt; / &lt;code&gt;before_reset&lt;/code&gt; → extract memories via LLM subagent&lt;/li&gt;
&lt;li&gt;SHA256 deduplication so the same fact doesn't get stored twice&lt;/li&gt;
&lt;li&gt;8 memory tools available to agents (&lt;code&gt;memory_recall&lt;/code&gt;, &lt;code&gt;memory_store&lt;/code&gt;, &lt;code&gt;memory_list&lt;/code&gt;, &lt;code&gt;memory_update&lt;/code&gt;, &lt;code&gt;memory_forget&lt;/code&gt;, etc.)&lt;/li&gt;
&lt;/ul&gt;
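&lt;p&gt;The SHA256 deduplication step looks roughly like this (the normalization choice here is mine; the plugin may normalize differently):&lt;/p&gt;

```python
import hashlib

def fingerprint(text):
    # Collapse whitespace and case so trivially different phrasings of the
    # same fact hash identically.
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def store_unique(memories, seen=None):
    # Keep only memories whose SHA-256 fingerprint has not been stored yet.
    seen = set() if seen is None else seen
    kept = []
    for m in memories:
        h = fingerprint(m)
        if h not in seen:
            seen.add(h)
            kept.append(m)
    return kept
```

&lt;p&gt;Persisting the &lt;code&gt;seen&lt;/code&gt; set across sessions is what keeps the same fact from being extracted and stored twice.&lt;/p&gt;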

&lt;h2&gt;Install&lt;/h2&gt;

&lt;p&gt;Paste this into your OpenClaw chat and let the agent handle it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;Read https://clawmem.ai/SKILL.md and follow the instructions to install ClawMem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The plugin provisions your memory repository automatically on first run. No separate account needed.&lt;/p&gt;

&lt;h2&gt;What it doesn't do (yet)&lt;/h2&gt;

&lt;p&gt;I want to be upfront about the current limitations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Only works with OpenClaw&lt;/strong&gt; — MCP server is next on the roadmap, which would enable Claude Code, Cursor, and other MCP-compatible tools&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Search quality is a work in progress&lt;/strong&gt; — the backend supports vector search, but the plugin's fallback lexical scoring isn't great yet&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The web console is minimal&lt;/strong&gt; — it offers only basic visualization and management for now, and API integration is in progress&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Why Git-based storage?&lt;/h2&gt;

&lt;p&gt;People ask me this a lot. Most memory systems use vector databases. They're great for retrieval but terrible for inspection. You query through an API and hope for the best.&lt;/p&gt;

&lt;p&gt;With ClawMem, every memory is a record you can read. You can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open a list and see everything your agent knows&lt;/li&gt;
&lt;li&gt;Edit or delete any memory before it shapes the next answer&lt;/li&gt;
&lt;li&gt;Trace any memory back to the session and turn that created it&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;git clone&lt;/code&gt; to download all your data — zero lock-in&lt;/li&gt;
&lt;li&gt;And AI agents already know &lt;code&gt;git&lt;/code&gt; very well&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When you're trusting an AI system to remember things about your work, your team, and your decisions, being able to see exactly what it remembers matters.&lt;/p&gt;

&lt;h2&gt;Links&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Website &amp;amp; docs: &lt;a href="https://clawmem.ai" rel="noopener noreferrer"&gt;clawmem.ai&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Demo video: &lt;a href="https://www.youtube.com/watch?v=vOxyhoFCdfs" rel="noopener noreferrer"&gt;YouTube&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;X: &lt;a href="https://x.com/ClawmemAI" rel="noopener noreferrer"&gt;Clawmem&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Feedback &amp;amp; issues: &lt;a href="https://github.com/clawmem-ai/landing-page/issues/new/choose" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Would love to hear your thoughts — especially if you've dealt with agent memory in your own projects. What approaches have worked for you?&lt;/p&gt;

</description>
      <category>openclaw</category>
      <category>agentskills</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
