<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Lax</title>
    <description>The latest articles on Forem by Lax (@lax_cc17386).</description>
    <link>https://forem.com/lax_cc17386</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3879085%2F40be92c7-6e2a-40cf-8cb8-f9528ddd7cd2.png</url>
      <title>Forem: Lax</title>
      <link>https://forem.com/lax_cc17386</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/lax_cc17386"/>
    <language>en</language>
    <item>
      <title>Why we ditched the knowledge graph approach for agent memory</title>
      <dc:creator>Lax</dc:creator>
      <pubDate>Tue, 14 Apr 2026 17:37:18 +0000</pubDate>
      <link>https://forem.com/lax_cc17386/why-we-ditched-the-knowledge-graph-approach-for-agent-memory-4gp9</link>
      <guid>https://forem.com/lax_cc17386/why-we-ditched-the-knowledge-graph-approach-for-agent-memory-4gp9</guid>
      <description>&lt;p&gt;Every other week someone drops a new memory layer for AI agents. Most of them do the same thing-&amp;gt; take conversation history, extract entities and relationships, compress it into a knowledge graph.&lt;/p&gt;

&lt;p&gt;The problem is that&#39;s lossy compression. You are making irreversible decisions about what matters at ingestion time, before you know what the agent will actually need. Information that doesn&#39;t fit the graph schema gets dropped. Nuance gets flattened into edges.&lt;/p&gt;

&lt;p&gt;We ran into this building Vektori and ended up going a different direction.&lt;/p&gt;

&lt;p&gt;Instead of compressing conversations into a graph, we keep three layers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;L0: extracted facts - high signal, quality filtered, your fast search surface&lt;/li&gt;
&lt;li&gt;L1: episodes - auto-discovered across conversations, not hand-written schemas&lt;/li&gt;
&lt;li&gt;L2: raw sentences - never loaded by default, only fetched when you need to trace something back&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The raw sentence layer is the key difference. Nothing gets thrown away at ingestion. If the agent needs to reconstruct exactly what was said in session 47, it can. The graph structure lives above the raw layer, not instead of it.&lt;/p&gt;
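&lt;p&gt;The layering can be sketched in a few lines of Python. This is a hypothetical illustration, not Vektori&#39;s actual API: the names (MemoryStore, add_turn, recall, trace) are made up, and episode discovery is reduced to naive per-session grouping.&lt;/p&gt;

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the three-layer store described above.
# Class and method names are illustrative, not Vektori's real API.
@dataclass
class MemoryStore:
    facts: list = field(default_factory=list)      # L0: high-signal facts, fast search surface
    episodes: dict = field(default_factory=dict)   # L1: stand-in grouping; real episode discovery is automatic
    raw: list = field(default_factory=list)        # L2: every raw sentence, cold by default

    def add_turn(self, session_id, sentence, fact=None):
        # L2 keeps everything: nothing is thrown away at ingestion.
        self.raw.append((session_id, sentence))
        if fact:
            # Only quality-filtered facts reach L0.
            self.facts.append(fact)
            self.episodes.setdefault(session_id, []).append(fact)

    def recall(self, keyword):
        # Fast path: search L0 only; L2 is never loaded here.
        return [f for f in self.facts if keyword in f]

    def trace(self, session_id):
        # Deep path: reconstruct exactly what was said in a session.
        return [s for (sid, s) in self.raw if sid == session_id]
```

&lt;p&gt;The point of the shape: recall and trace are separate paths, so the fast surface stays small while full reconstruction remains possible.&lt;/p&gt;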

&lt;p&gt;Early benchmarks: 73% on LongMemEval-S.&lt;/p&gt;

&lt;p&gt;Free and open source: github.com/vektori-ai/vektori (give it a star if you find it useful :)&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>opensource</category>
      <category>showdev</category>
    </item>
  </channel>
</rss>
