<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: offbyonce</title>
    <description>The latest articles on Forem by offbyonce (@offbyonce).</description>
    <link>https://forem.com/offbyonce</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3911126%2F2382ec7b-fc96-4490-85c3-c4262c739b62.png</url>
      <title>Forem: offbyonce</title>
      <link>https://forem.com/offbyonce</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/offbyonce"/>
    <language>en</language>
    <item>
      <title>Stop Preloading Everything: How We Cut AI Agent Context by 50–87% with Lazy Discovery</title>
      <dc:creator>offbyonce</dc:creator>
      <pubDate>Mon, 04 May 2026 21:27:37 +0000</pubDate>
      <link>https://forem.com/offbyonce/stop-preloading-everything-how-we-cut-ai-agent-context-by-50-87-with-lazy-discovery-36ag</link>
      <guid>https://forem.com/offbyonce/stop-preloading-everything-how-we-cut-ai-agent-context-by-50-87-with-lazy-discovery-36ag</guid>
      <description>&lt;p&gt;This is the second post in our stigmem production series. If you are new to stigmem, the &lt;a href="https://dev.to/offbyonce/stigmem-v10-a-federated-knowledge-fabric-for-ai-agents-open-source-lf8"&gt;first post covers the foundation&lt;/a&gt;: what stigmem is, the federated fact model &lt;code&gt;(entity, relation, value, source, timestamp, confidence, scope)&lt;/code&gt;, how two nodes federate via Ed25519-signed peer handshakes, and MCP integration. This post assumes that foundation and focuses on what we built on top of it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; We shadow-audited lazy instruction discovery across two production AI agents and cut per-heartbeat context token use by 50-87% with zero regressions, using a manifest + recall architecture built on stigmem. Read on to find out why seven added keywords unlocked 100% coverage, and what that tells you about designing instruction sets for production agents.&lt;/p&gt;

&lt;p&gt;The problem with eager instruction loading is that your agent burns the same context tokens every heartbeat regardless of what it's actually doing. Here's how we fixed it across two production agents, what the data showed, and what we learned about keyword design along the way.&lt;/p&gt;




&lt;h2&gt;
  
  
  The problem we were solving
&lt;/h2&gt;

&lt;p&gt;If you run a long-lived AI agent that operates in a loop, waking up on events, executing tasks, then sleeping, you're probably loading the same instruction files into context every single time. AGENTS.md, HEARTBEAT.md, SOUL.md, TOOLS.md: it all goes in whether the agent is doing a routine check or executing a deep engineering task.&lt;/p&gt;

&lt;p&gt;For our CEO agent, that eager preload cost &lt;strong&gt;3,190 tokens per heartbeat&lt;/strong&gt;. For our CTO agent, &lt;strong&gt;1,129 tokens per heartbeat&lt;/strong&gt;. Both numbers, every heartbeat, regardless of what either agent was actually doing.&lt;/p&gt;

&lt;p&gt;That's not a crisis today. But multiply it across agents, across roles, across dozens of heartbeats per hour, and you're paying for a lot of tokens that contribute nothing to the task. There's also a subtler cost: instruction boilerplate crowds out working context, leaving less room for the actual task data that matters.&lt;/p&gt;




&lt;h2&gt;
  
  
  The architecture: manifest + recall
&lt;/h2&gt;

&lt;p&gt;The solution we built into &lt;a href="https://docs.stigmem.dev" rel="noopener noreferrer"&gt;stigmem&lt;/a&gt; is lazy instruction discovery. This builds directly on stigmem's fact store: instruction chunks are stored as typed facts with relation &lt;code&gt;instruction:content&lt;/code&gt;, and the manifest is a fact that indexes them. The same federation and MCP infrastructure &lt;a href="https://dev.to/offbyonce/stigmem-v10-a-federated-knowledge-fabric-for-ai-agents-open-source-lf8"&gt;we described in the v1.0 post&lt;/a&gt; carries these instruction facts across nodes. Instead of loading all instructions at boot, the agent loads a small &lt;strong&gt;boot stub&lt;/strong&gt; (under 400 tokens) that describes the manifest, then calls &lt;code&gt;recall_instruction&lt;/code&gt; at the start of each heartbeat to fetch only the chunks it actually needs.&lt;/p&gt;

&lt;p&gt;Here's how it works:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;boot stub (&amp;lt;400t)
    |
    v
recall_instruction(intent="...", max_chunks=4)
    |
    v
manifest keyword match -&amp;gt; top 4 chunks scored + returned
    |
    v
agent loads only the targeted instructions for this heartbeat
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Three components:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Instruction chunks.&lt;/strong&gt; Your markdown instruction files are parsed into atomic units, one per H1/H2/H3 heading. Each unit gets a URI, a token estimate, and a list of &lt;code&gt;load_triggers&lt;/code&gt;: the keywords and phrases that should cause this chunk to be recalled.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. The manifest.&lt;/strong&gt; A lightweight JSON index of all chunks with their load triggers. The CEO agent has 23 chunks totaling ~3,172 tokens; the CTO agent has 11 chunks totaling 1,129 tokens. Neither agent loads the whole manifest at runtime; each simply knows it exists and calls &lt;code&gt;recall_instruction&lt;/code&gt; to retrieve what it needs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. &lt;code&gt;recall_instruction&lt;/code&gt;.&lt;/strong&gt; An MCP tool that takes an intent string, matches it against manifest keywords, and returns the top-N scoring chunks. This is the only instruction-loading call the agent makes each heartbeat. It sits alongside the &lt;code&gt;assert_fact&lt;/code&gt;, &lt;code&gt;query_facts&lt;/code&gt;, and &lt;code&gt;synthesize_scope&lt;/code&gt; tools &lt;a href="https://dev.to/offbyonce/stigmem-v10-a-federated-knowledge-fabric-for-ai-agents-open-source-lf8"&gt;described in the v1.0 launch post&lt;/a&gt; and uses the same MCP server configuration.&lt;/p&gt;
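&lt;p&gt;To make the flow concrete, here is a minimal Python sketch of a manifest and the recall step. The field names, the second chunk, and the bag-of-words scoring rule are illustrative assumptions, not stigmem's exact schema or ranking algorithm:&lt;/p&gt;

```python
# Minimal sketch of a manifest and the recall step. Field names and the
# bag-of-words scoring rule are illustrative, not stigmem's exact design.
manifest = [
    {
        "uri": "stigmem://ceo/instruction/HEARTBEAT-9-exit",
        "token_estimate": 32,
        "load_triggers": ["wrap", "done", "complete", "finish", "final"],
    },
    {
        # Hypothetical second chunk, for illustration only.
        "uri": "stigmem://ceo/instruction/task-standards",
        "token_estimate": 162,
        "load_triggers": ["task", "implement", "review", "code"],
    },
]

def recall_instruction(intent, manifest, max_chunks=4):
    # Score each chunk by how many of its load_triggers appear in the
    # intent, then return the URIs of the top-N scoring chunks.
    words = set(intent.lower().split())
    scored = []
    for chunk in manifest:
        hits = words.intersection(chunk["load_triggers"])
        if hits:
            scored.append((len(hits), chunk["uri"]))
    scored.sort(reverse=True)
    return [uri for _, uri in scored[:max_chunks]]
```

&lt;p&gt;Under this sketch, an intent like "post the final comment" surfaces the exit chunk because &lt;code&gt;final&lt;/code&gt; is among its load triggers, while an intent with no trigger overlap returns nothing.&lt;/p&gt;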

&lt;p&gt;The migration CLI converts existing instruction files automatically:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;stigmem instruction migrate ./instructions/ &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--role&lt;/span&gt; cto &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--agent-id&lt;/span&gt; &lt;span class="nv"&gt;$AGENT_ID&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--dry-run&lt;/span&gt;   &lt;span class="c"&gt;# preview first, no writes&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  The shadow audit: 14 heartbeats each, two agents
&lt;/h2&gt;

&lt;p&gt;We didn't flip either production agent on faith. We ran a 14-heartbeat shadow audit for each: the agent ran with both eager loading and lazy discovery in parallel, logging what each would have loaded. No behavior change, just measurement.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pass criteria (CEO):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Coverage &amp;gt;= 95%: lazy candidates must include &amp;gt;= 95% of chunks the agent's output actually referenced&lt;/li&gt;
&lt;li&gt;Token budget &amp;lt; 25% of the 3,190t eager baseline (&amp;lt; 798t)&lt;/li&gt;
&lt;li&gt;Zero regressions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Pass criteria (CTO):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Coverage &amp;gt;= 95% on critical chunks&lt;/li&gt;
&lt;li&gt;Token budget &amp;lt;= 50% of the 1,129t eager baseline (&amp;lt;= 565t), calibrated for the smaller instruction set where a 50% cut is still substantial&lt;/li&gt;
&lt;li&gt;Zero regressions post-fix&lt;/li&gt;
&lt;/ul&gt;
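&lt;p&gt;The two token budgets follow directly from the eager baselines quoted above; a quick sanity check:&lt;/p&gt;

```python
import math

# Pass-criteria budgets derived from the eager baselines in the post.
ceo_budget = math.ceil(3190 * 0.25)  # CEO: 25% of 3,190t -> 798
cto_budget = math.ceil(1129 * 0.50)  # CTO: 50% of 1,129t -> 565
```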

&lt;h3&gt;
  
  
  CEO audit results (14 HBs; the CEO run came first)
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;HB&lt;/th&gt;
&lt;th&gt;Intent class&lt;/th&gt;
&lt;th&gt;max_chunks&lt;/th&gt;
&lt;th&gt;Lazy tokens&lt;/th&gt;
&lt;th&gt;Token %&lt;/th&gt;
&lt;th&gt;Coverage&lt;/th&gt;
&lt;th&gt;Regressions&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;Setup/migration&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;769t&lt;/td&gt;
&lt;td&gt;24.1%&lt;/td&gt;
&lt;td&gt;60%&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;Task-execution&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;383t&lt;/td&gt;
&lt;td&gt;12.0%&lt;/td&gt;
&lt;td&gt;67%&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;Audit-review&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;548t&lt;/td&gt;
&lt;td&gt;17.2%&lt;/td&gt;
&lt;td&gt;67%&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;4&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Progress-review&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;4&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;415t&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;13.0%&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;100%&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;Task-execution&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;415t&lt;/td&gt;
&lt;td&gt;13.0%&lt;/td&gt;
&lt;td&gt;100%&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;6-14&lt;/td&gt;
&lt;td&gt;Mixed&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;415t&lt;/td&gt;
&lt;td&gt;13.0%&lt;/td&gt;
&lt;td&gt;100%&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;CEO final result:&lt;/strong&gt; 415t median, 13.0% of eager baseline, 100% coverage across HBs 4-14, 0 regressions.&lt;/p&gt;

&lt;h3&gt;
  
  
  CTO audit results
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;HB&lt;/th&gt;
&lt;th&gt;Intent type&lt;/th&gt;
&lt;th&gt;Lazy tokens&lt;/th&gt;
&lt;th&gt;Budget %&lt;/th&gt;
&lt;th&gt;Critical coverage&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;Setup/migration&lt;/td&gt;
&lt;td&gt;310t&lt;/td&gt;
&lt;td&gt;27.5%&lt;/td&gt;
&lt;td&gt;100% (simulated)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;Task-execution&lt;/td&gt;
&lt;td&gt;525t&lt;/td&gt;
&lt;td&gt;46.5%&lt;/td&gt;
&lt;td&gt;100%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;Code review&lt;/td&gt;
&lt;td&gt;565t&lt;/td&gt;
&lt;td&gt;50.0%&lt;/td&gt;
&lt;td&gt;100% (post-fix)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;Bug fix/incident&lt;/td&gt;
&lt;td&gt;457t&lt;/td&gt;
&lt;td&gt;40.5%&lt;/td&gt;
&lt;td&gt;100%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;Architecture decision&lt;/td&gt;
&lt;td&gt;565t&lt;/td&gt;
&lt;td&gt;50.0%&lt;/td&gt;
&lt;td&gt;100%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;td&gt;CI/CD pipeline&lt;/td&gt;
&lt;td&gt;457t&lt;/td&gt;
&lt;td&gt;40.5%&lt;/td&gt;
&lt;td&gt;100%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;td&gt;Security audit&lt;/td&gt;
&lt;td&gt;457t&lt;/td&gt;
&lt;td&gt;40.5%&lt;/td&gt;
&lt;td&gt;100%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;Data model&lt;/td&gt;
&lt;td&gt;565t&lt;/td&gt;
&lt;td&gt;50.0%&lt;/td&gt;
&lt;td&gt;100%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;td&gt;Performance&lt;/td&gt;
&lt;td&gt;565t&lt;/td&gt;
&lt;td&gt;50.0%&lt;/td&gt;
&lt;td&gt;100%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;Hiring/team&lt;/td&gt;
&lt;td&gt;493t&lt;/td&gt;
&lt;td&gt;43.7%&lt;/td&gt;
&lt;td&gt;100%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;11&lt;/td&gt;
&lt;td&gt;Spec/planning&lt;/td&gt;
&lt;td&gt;565t&lt;/td&gt;
&lt;td&gt;50.0%&lt;/td&gt;
&lt;td&gt;100%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;12&lt;/td&gt;
&lt;td&gt;Refactoring&lt;/td&gt;
&lt;td&gt;457t&lt;/td&gt;
&lt;td&gt;40.5%&lt;/td&gt;
&lt;td&gt;100%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;13&lt;/td&gt;
&lt;td&gt;Documentation&lt;/td&gt;
&lt;td&gt;565t&lt;/td&gt;
&lt;td&gt;50.0%&lt;/td&gt;
&lt;td&gt;100%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;14&lt;/td&gt;
&lt;td&gt;Flip HB&lt;/td&gt;
&lt;td&gt;356t&lt;/td&gt;
&lt;td&gt;31.5%&lt;/td&gt;
&lt;td&gt;100%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;CTO final result:&lt;/strong&gt; 356-565t range (bimodal: typically 457t or 565t), 31.5-50.0% of eager baseline, 100% critical coverage HBs 4-14, 0 regressions post-fix.&lt;/p&gt;




&lt;h2&gt;
  
  
  The keyword-design lesson: it showed up in both runs
&lt;/h2&gt;

&lt;p&gt;This is the finding we didn't fully anticipate, and seeing it replicate across two independent agents made it undeniable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In the CEO run:&lt;/strong&gt; The &lt;code&gt;HEARTBEAT-9-exit&lt;/code&gt; chunk (32 tokens, critical for the wrap-up phase of every heartbeat) missed recall@3 on HB1, HB2, and HB3. A quick naming note: &lt;code&gt;HEARTBEAT-9-exit&lt;/code&gt; is the chunk's identifier, derived from section 9 of the HEARTBEAT.md instruction file. The "9" refers to that file section, not to shadow audit heartbeat number 9. The miss was present from the start; we diagnosed and fixed the root cause during HB3, which is why you will see HB3 referenced as the fix point in the audit log. Its keywords were &lt;code&gt;exit&lt;/code&gt;, &lt;code&gt;comment&lt;/code&gt;, &lt;code&gt;before&lt;/code&gt;, &lt;code&gt;exiting&lt;/code&gt;, &lt;code&gt;assignments&lt;/code&gt;, &lt;code&gt;valid&lt;/code&gt;, &lt;code&gt;mention&lt;/code&gt;: all content descriptors that describe what the chunk covers, not the intents that require it. None of them matched the agent's actual wrap-up phrasing like "wrap up this task" or "post a comment and finish."&lt;/p&gt;

&lt;p&gt;Fix applied in HB3: added 7 natural-language synonyms: &lt;code&gt;wrap, done, complete, finish, final, closing, conclude&lt;/code&gt;. Coverage jumped to 100% at HB4. We also bumped &lt;code&gt;max_chunks&lt;/code&gt; from 3 to 4 to handle simultaneous chunk needs, adding only 32 tokens (under 1% of eager baseline).&lt;/p&gt;
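&lt;p&gt;The failure mode is easy to reproduce with a toy matcher. Assuming simple word-level matching (the actual stigmem matcher may differ), the original content-derived keywords never fire on wrap-up phrasing, while the intent-derived synonyms do:&lt;/p&gt;

```python
# Toy reproduction of the HEARTBEAT-9-exit miss. Word-level matching
# is an assumption about how triggers are compared against intents.
content_triggers = {"exit", "comment", "before", "exiting",
                    "assignments", "valid", "mention"}
fixed_triggers = content_triggers.union(
    {"wrap", "done", "complete", "finish", "final", "closing", "conclude"})

def matches(intent, triggers):
    return bool(set(intent.lower().split()).intersection(triggers))

print(matches("wrap up this task", content_triggers))  # False: no trigger fires
print(matches("wrap up this task", fixed_triggers))    # True: "wrap" fires
```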

&lt;p&gt;&lt;strong&gt;In the CTO run (HB3):&lt;/strong&gt; The &lt;code&gt;work-standards&lt;/code&gt; chunk (162 tokens, critical for any engineering task) had zero hits on code-review intents. Its keywords covered &lt;code&gt;commit&lt;/code&gt;, &lt;code&gt;bug&lt;/code&gt;, &lt;code&gt;done&lt;/code&gt;, and &lt;code&gt;task&lt;/code&gt;. A pure code-review intent produced no matches.&lt;/p&gt;

&lt;p&gt;Fix: added &lt;code&gt;review, code, implement, test, bug, fix&lt;/code&gt; to load triggers. Critical coverage went to 100% at HB4.&lt;/p&gt;

&lt;p&gt;Same root cause both times. Same fix pattern both times. Different chunks, different agents, same design error.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The lesson:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Load triggers should describe the &lt;em&gt;intents and situations&lt;/em&gt; that require a chunk, not the content of the chunk itself.&lt;/p&gt;

&lt;p&gt;Completion-phase chunks, safety constraints, and task-execution standards are all systematically under-served by keyword lists derived from their content. The fix is to add the natural-language verbs and phrases your agents actually use when they need this information.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We're encoding this directly into the migration tool. When generating &lt;code&gt;load_triggers&lt;/code&gt;, it now flags "wrap-up" and "constraint" style sections and suggests completion-phase synonyms as part of the preview output.&lt;/p&gt;

&lt;p&gt;The CEO run also let the CTO team apply lessons from day one: the CTO agent started with &lt;code&gt;max_chunks=4&lt;/code&gt; and exit keywords pre-tuned, which is why its HB1 critical coverage was already 100% on the simulated task-execution intent. Replication made the pattern learnable and transferable.&lt;/p&gt;




&lt;h2&gt;
  
  
  Results compared
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;CEO (before)&lt;/th&gt;
&lt;th&gt;CEO (after)&lt;/th&gt;
&lt;th&gt;CTO (before)&lt;/th&gt;
&lt;th&gt;CTO (after)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Eager load per HB&lt;/td&gt;
&lt;td&gt;3,190t&lt;/td&gt;
&lt;td&gt;415t&lt;/td&gt;
&lt;td&gt;1,129t&lt;/td&gt;
&lt;td&gt;356-565t&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Reduction&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;87%&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;50-69%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Token profile&lt;/td&gt;
&lt;td&gt;fixed&lt;/td&gt;
&lt;td&gt;flat (13%)&lt;/td&gt;
&lt;td&gt;fixed&lt;/td&gt;
&lt;td&gt;bimodal (40-50%)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Coverage (eval phase)&lt;/td&gt;
&lt;td&gt;100% by definition&lt;/td&gt;
&lt;td&gt;100%&lt;/td&gt;
&lt;td&gt;100% by definition&lt;/td&gt;
&lt;td&gt;100%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Regressions post-fix&lt;/td&gt;
&lt;td&gt;n/a&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;n/a&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Consecutive green HBs&lt;/td&gt;
&lt;td&gt;n/a&lt;/td&gt;
&lt;td&gt;11 (HB4-14)&lt;/td&gt;
&lt;td&gt;n/a&lt;/td&gt;
&lt;td&gt;11 (HB4-14)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The bimodal profile in the CTO run (457t or 565t) is structural: the top-4 recall slots either pull architecture and technical chunks (565t ceiling) or fill with soul and heartbeat (457t floor). Both pass the budget criterion, and neither is a problem.&lt;/p&gt;
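&lt;p&gt;Both modes check out against the 1,129t eager baseline:&lt;/p&gt;

```python
# Verifying the bimodal CTO percentages against the eager baseline.
baseline = 1129
ceiling_pct = round(100 * 565 / baseline, 1)  # architecture/technical mix
floor_pct = round(100 * 457 / baseline, 1)    # soul/heartbeat mix
print(ceiling_pct, floor_pct)  # 50.0 40.5
```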




&lt;h2&gt;
  
  
  Running this in production: the dogfood story
&lt;/h2&gt;

&lt;p&gt;These are not benchmarks run against synthetic workloads. Both agents operate in production: the CEO agent manages task delegation, board-level review, and company governance; the CTO agent handles architecture decisions, code review, CI/CD, and engineering hiring. The shadow audits ran across real heartbeats, real tasks, real agent outputs.&lt;/p&gt;

&lt;p&gt;Coverage numbers reflect actual references in agent outputs. Regression counts reflect real task execution. Neither agent was flipped until 14 heartbeats confirmed all criteria. The CEO flip informed the CTO setup, which is why the CTO run started in a better position.&lt;/p&gt;

&lt;p&gt;Running it on yourself first is the only honest way to know it works.&lt;/p&gt;




&lt;h2&gt;
  
  
  Try it yourself
&lt;/h2&gt;

&lt;p&gt;Stigmem is open source. The instruction migration CLI ships as part of the reference node. Clone the repo and install from source:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/Eidetic-Labs/stigmem
&lt;span class="nb"&gt;cd &lt;/span&gt;stigmem

&lt;span class="c"&gt;# install the node package (includes the stigmem CLI)&lt;/span&gt;
pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; node/
&lt;span class="c"&gt;# or, if you use uv:&lt;/span&gt;
&lt;span class="c"&gt;# uv sync&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then start a local node and run the migration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# start the node (Docker is the easiest path)&lt;/span&gt;
docker compose up &lt;span class="nt"&gt;--build&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt;

&lt;span class="c"&gt;# 1. Preview your instruction set as lazy-loadable chunks&lt;/span&gt;
stigmem instruction migrate ./instructions/ &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--role&lt;/span&gt; myagent &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--agent-id&lt;/span&gt; &lt;span class="nv"&gt;$YOUR_AGENT_ID&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--dry-run&lt;/span&gt;

&lt;span class="c"&gt;# 2. Run the migration&lt;/span&gt;
stigmem instruction migrate ./instructions/ &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--role&lt;/span&gt; myagent &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--agent-id&lt;/span&gt; &lt;span class="nv"&gt;$YOUR_AGENT_ID&lt;/span&gt;

&lt;span class="c"&gt;# 3. Verify recall works for your agent's typical intents&lt;/span&gt;
curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST http://localhost:8765/v1/agents/&lt;span class="nv"&gt;$AGENT_ID&lt;/span&gt;/recall-instruction &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Authorization: Bearer &lt;/span&gt;&lt;span class="nv"&gt;$STIGMEM_API_KEY&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{"intent": "wrap up this task", "max_chunks": 4}'&lt;/span&gt;

&lt;span class="c"&gt;# 4. Run your own shadow audit before flipping&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The migration tool handles idempotency (NOOP on unchanged chunks), tombstoning (removed sections cleaned from the manifest, underlying facts preserved in DB for audit history), and CI integration via the &lt;code&gt;--yes&lt;/code&gt; flag.&lt;/p&gt;
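&lt;p&gt;The NOOP behavior can be pictured as a content-hash check. This is a sketch of the idea only; the hashing scheme and store layout are assumptions, not stigmem's actual implementation:&lt;/p&gt;

```python
import hashlib

def migrate_chunk(store, chunk):
    # Sketch of idempotent migration: hash the chunk content and skip
    # the write when nothing changed. (Assumed scheme, for illustration.)
    digest = hashlib.sha256(chunk["content"].encode()).hexdigest()
    if store.get(chunk["uri"]) == digest:
        return "NOOP"
    store[chunk["uri"]] = digest
    return "WRITTEN"
```

&lt;p&gt;Running the same migration twice then leaves the manifest untouched on the second pass.&lt;/p&gt;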




&lt;h2&gt;
  
  
  What's next: stigmem v2.0 and how you can help
&lt;/h2&gt;

&lt;p&gt;Phase 10 proved that lazy instruction discovery works in production. The &lt;a href="https://dev.to/offbyonce/stigmem-v10-a-federated-knowledge-fabric-for-ai-agents-open-source-lf8"&gt;v1.0 launch&lt;/a&gt; established the federated fact substrate: immutable typed facts, HLC timestamps, Ed25519-signed peer replication, and MCP tooling. Lazy discovery is the first major capability we've built on top of that substrate in a production setting. But getting stigmem to a point where any team can run it securely and robustly at scale is larger than what we can finish alone.&lt;/p&gt;

&lt;p&gt;Here's what v2.0 needs to solve, and where we're looking for collaborators:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multi-organization trust and federation.&lt;/strong&gt; The v1.0 federation model &lt;a href="https://dev.to/offbyonce/stigmem-v10-a-federated-knowledge-fabric-for-ai-agents-open-source-lf8"&gt;covers two-node peering&lt;/a&gt; with scope-gated replication. Scaling that to organizations that don't already trust each other requires preventing memory poisoning and containing malicious agent behavior. We have design sketches but not a finished protocol.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Persistent memory across reboots.&lt;/strong&gt; The current local SQLite store disappears when the host system reboots. We're evaluating Turso and other durable SQLite-compatible backends for persistence without sacrificing the local-first deployment model.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Graph-based memory relationships.&lt;/strong&gt; Agents should be able to navigate related memories, not just retrieve isolated facts. We want agents to touch a memory and discover connected context through graph traversal rather than pulling the full memory table on every query.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Composite intent for parallel workloads.&lt;/strong&gt; The single &lt;code&gt;intent&lt;/code&gt; string works well when an agent handles one task per heartbeat. Multi-threaded agents need a way to express composite intent so recall surfaces instruction chunks for all concurrent task threads.&lt;/p&gt;

&lt;p&gt;If any of these problems are interesting to you, we want to hear from you. The project is at &lt;a href="https://docs.stigmem.dev" rel="noopener noreferrer"&gt;docs.stigmem.dev&lt;/a&gt; and the source is on GitHub. v2.0 is being designed collaboratively, and early contributors will have real influence over the architecture.&lt;/p&gt;




&lt;h2&gt;
  
  
  What we learned, summarized
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Eager instruction loading is a hidden tax. Most teams don't notice until they're running many agents or many heartbeats per hour.&lt;/li&gt;
&lt;li&gt;The manifest and recall architecture is sound. Both audits ran zero regressions post-fix across 11 consecutive heartbeats.&lt;/li&gt;
&lt;li&gt;Keyword design is the tuning lever, not the recall algorithm. Content-derived keywords systematically miss real usage patterns. Use intent-derived synonyms instead.&lt;/li&gt;
&lt;li&gt;Lessons transfer across agents. The CEO run directly improved the CTO setup. Multi-agent rollouts get cheaper over time.&lt;/li&gt;
&lt;li&gt;Shadow auditing before flipping is worth the overhead. It gives you real coverage data and catches manifest gaps before they become production regressions.&lt;/li&gt;
&lt;/ol&gt;




&lt;p&gt;&lt;em&gt;Stigmem is an open protocol for federated AI agent memory. Docs and source at &lt;a href="https://docs.stigmem.dev" rel="noopener noreferrer"&gt;docs.stigmem.dev&lt;/a&gt;. New to stigmem? Start with the &lt;a href="https://dev.to/offbyonce/stigmem-v10-a-federated-knowledge-fabric-for-ai-agents-open-source-lf8"&gt;v1.0 launch post&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>llm</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Stigmem v1.0: A federated knowledge fabric for AI agents (open-source)</title>
      <dc:creator>offbyonce</dc:creator>
      <pubDate>Mon, 04 May 2026 02:39:35 +0000</pubDate>
      <link>https://forem.com/offbyonce/stigmem-v10-a-federated-knowledge-fabric-for-ai-agents-open-source-lf8</link>
      <guid>https://forem.com/offbyonce/stigmem-v10-a-federated-knowledge-fabric-for-ai-agents-open-source-lf8</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Cross-posted from the &lt;a href="https://docs.stigmem.dev/blog/stigmem-v1-launch" rel="noopener noreferrer"&gt;Stigmem blog&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Today we're releasing stigmem v1.0: a stable, open-source specification and reference implementation of a federated knowledge fabric for AI agents.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why "stigmem"?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Stigmem&lt;/strong&gt; = &lt;strong&gt;Stigmergy&lt;/strong&gt; + &lt;strong&gt;Memory&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/Stigmergy" rel="noopener noreferrer"&gt;Stigmergy&lt;/a&gt; (Greek &lt;em&gt;stigma&lt;/em&gt; — mark; &lt;em&gt;ergon&lt;/em&gt; — work) is the coordination mechanism you see in ant colonies and termite mounds: agents don't communicate directly with each other. Instead, they leave traces in a shared environment: a pheromone trail, a soil deposit; and those traces guide the behavior of future agents. The colony's intelligence emerges from the environment itself, not from any central controller.&lt;/p&gt;

&lt;p&gt;Stigmem applies the same principle to multi-agent AI systems. Agents write typed, provenance-tagged facts into a shared substrate. Other agents running later, on different platforms, inside different organizations read those facts and act on them. No central coordinator, no point-to-point protocol overhead. The knowledge environment carries the coordination signal.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Memory&lt;/strong&gt; half reflects persistence and decay: facts have &lt;code&gt;valid_until&lt;/code&gt; expiries and confidence scores, so the substrate stays fresh rather than accumulating stale state, just as pheromone trails fade when they're no longer reinforced.&lt;/p&gt;
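&lt;p&gt;The decay curve isn't specified here; exponential decay with a half-life is one natural shape. A hypothetical sketch, where the 30-day half-life is an arbitrary illustration:&lt;/p&gt;

```python
import math

def decayed_confidence(confidence, age_days, half_life_days=30.0):
    # Sketch only: stigmem's actual decay function is not specified in
    # this post, and the 30-day half-life is an arbitrary choice.
    return confidence * math.exp(-math.log(2) * age_days / half_life_days)
```

&lt;p&gt;With a 30-day half-life, a confidence of 1.0 falls to 0.5 after 30 days unless the fact is re-asserted.&lt;/p&gt;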

&lt;h2&gt;
  
  
  The problem
&lt;/h2&gt;

&lt;p&gt;Multi-agent systems accumulate knowledge in isolated silos. One agent knows a user prefers dark mode. Another inferred which projects are high priority. A third discovered a bug in the payment flow. None of them can see what the others know, because there's no shared place to put typed, provenance-tagged facts that travel across tool boundaries.&lt;/p&gt;

&lt;p&gt;Stigmem gives agents that shared layer.&lt;/p&gt;

&lt;h2&gt;
  
  
  A fact
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;(entity, relation, value, source, timestamp, confidence, scope)&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;entity&lt;/strong&gt; — what the fact is about (&lt;code&gt;user:alice&lt;/code&gt;, &lt;code&gt;project:payments&lt;/code&gt;, &lt;code&gt;stigmem://org/task/42&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;relation&lt;/strong&gt; — the predicate (&lt;code&gt;memory:prefers&lt;/code&gt;, &lt;code&gt;status:blocked&lt;/code&gt;, &lt;code&gt;infers:next_action&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;value&lt;/strong&gt; — typed payload: string, number, boolean, or JSON&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;source&lt;/strong&gt; — who asserted it (an agent ID, tool name, or user session)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;timestamp&lt;/strong&gt; — Hybrid Logical Clock, monotonic across distributed nodes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;confidence&lt;/strong&gt; — 0.0–1.0, decays over time if not re-asserted&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;scope&lt;/strong&gt; — access boundary (&lt;code&gt;public&lt;/code&gt;, &lt;code&gt;company&lt;/code&gt;, &lt;code&gt;team&lt;/code&gt;, &lt;code&gt;private&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Facts are immutable. Contradictions between nodes are surfaced as first-class conflict records, not silently overwritten.&lt;/p&gt;
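&lt;p&gt;In Python terms, a fact is a seven-field record. The types and the timestamp string below are illustrative, not normative:&lt;/p&gt;

```python
from typing import Any, NamedTuple

class Fact(NamedTuple):
    # The seven-field fact shape from the spec. Python types and the
    # timestamp format used below are illustrative only.
    entity: str
    relation: str
    value: Any
    source: str
    timestamp: str   # real facts carry Hybrid Logical Clock timestamps
    confidence: float
    scope: str

fact = Fact(
    entity="user:alice",
    relation="memory:prefers",
    value="dark mode",
    source="agent:settings",
    timestamp="2026-05-04T02:39:35Z",  # placeholder, not HLC format
    confidence=1.0,
    scope="company",
)
```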

&lt;h2&gt;
  
  
  Federation in 2 minutes
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/Eidetic-Labs/stigmem &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cd &lt;/span&gt;stigmem
docker compose up &lt;span class="nt"&gt;--build&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt;

&lt;span class="c"&gt;# Register peers&lt;/span&gt;
docker &lt;span class="nb"&gt;exec &lt;/span&gt;stigmem-node-a-1 stigmem federation register-peer &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--local-url&lt;/span&gt; http://node-a:8765 &lt;span class="nt"&gt;--remote-url&lt;/span&gt; http://node-b:8765 &lt;span class="nt"&gt;--scopes&lt;/span&gt; company,public

&lt;span class="c"&gt;# Assert a fact&lt;/span&gt;
curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST http://localhost:8765/v1/facts &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s1"&gt;'Content-Type: application/json'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{"entity":"user:alice","relation":"memory:prefers","value":{"type":"string","v":"dark mode"},"source":"agent:settings","confidence":1.0,"scope":"company"}'&lt;/span&gt;

&lt;span class="c"&gt;# 30s later — fact replicated to node-b&lt;/span&gt;
curl &lt;span class="s1"&gt;'http://localhost:8766/v1/facts?entity=user:alice&amp;amp;scope=company'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Two nodes start immediately. The federation handshake uses Ed25519 signatures; facts replicate under the scopes you declare. Scope enforcement is strict: &lt;code&gt;private&lt;/code&gt;-scope facts never leave the node that created them.&lt;/p&gt;
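&lt;p&gt;The scope gate amounts to a filter on the replication path. A minimal sketch, assuming facts are plain dicts with a &lt;code&gt;scope&lt;/code&gt; field:&lt;/p&gt;

```python
def replicable(facts, shared_scopes):
    # Sketch of scope-gated replication: only facts whose scope was
    # declared at peering time leave the node; private never replicates.
    allowed = set(shared_scopes).difference({"private"})
    return [f for f in facts if f["scope"] in allowed]
```

&lt;p&gt;Even if a peer registration mistakenly lists &lt;code&gt;private&lt;/code&gt;, private-scope facts stay local under this rule.&lt;/p&gt;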

&lt;h2&gt;
  
  
  MCP integration
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"stigmem"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"npx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"-y"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"@stigmem/mcp-server"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"env"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"STIGMEM_NODE_URL"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"http://localhost:8765"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Any MCP-compatible agent runtime (Claude Code, Cursor, Zed, Codex CLI) gets five tools: &lt;code&gt;assert_fact&lt;/code&gt;, &lt;code&gt;query_facts&lt;/code&gt;, &lt;code&gt;retract_fact&lt;/code&gt;, &lt;code&gt;synthesize_scope&lt;/code&gt;, and &lt;code&gt;lint_scope&lt;/code&gt;. The &lt;code&gt;synthesize_scope&lt;/code&gt; tool aggregates recent facts into a structured summary that slots directly into a context window so agents get fresh, scoped knowledge without managing embeddings or retrieval pipelines.&lt;/p&gt;
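&lt;p&gt;For a concrete sense of the shape, here is what a raw MCP &lt;code&gt;tools/call&lt;/code&gt; request for &lt;code&gt;synthesize_scope&lt;/code&gt; could look like. The argument names are illustrative rather than taken from the server's published tool schema, and in practice your runtime constructs this envelope for you:&lt;/p&gt;

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "synthesize_scope",
    "arguments": {
      "scope": "company",
      "since": "2026-05-01T00:00:00Z"
    }
  }
}
```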

&lt;h2&gt;
  
  
  What stigmem is not
&lt;/h2&gt;

&lt;p&gt;Stigmem does not replace:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Agent runtimes:&lt;/strong&gt; it's the substrate those runtimes reason over, not a runtime itself&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Orchestration platforms:&lt;/strong&gt; the Paperclip and OpenClaw adapters emit events &lt;em&gt;into&lt;/em&gt; stigmem; they compose, not compete&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tool protocols like MCP:&lt;/strong&gt; MCP is the transport; the stigmem MCP adapter rides on top&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It fills the gap none of them fill: typed, provenance-traceable, expiry-aware, federated shared knowledge that travels across tool and organizational boundaries.&lt;/p&gt;

&lt;h2&gt;
  
  
  Get involved
&lt;/h2&gt;

&lt;p&gt;Stigmem is Apache 2.0 and genuinely needs contributors. A few areas where help would make the biggest difference right now:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Spec feedback:&lt;/strong&gt; the &lt;a href="https://github.com/Eidetic-Labs/stigmem/blob/main/spec/stigmem-spec-v1.0.md" rel="noopener noreferrer"&gt;Intent envelope&lt;/a&gt; (§4: goal, constraint, preference, handoff) is still in draft, and we're actively looking for real use-case feedback before stabilizing it&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Adapter authors:&lt;/strong&gt; there are stubs for additional agent runtimes and we'd love to see community-maintained adapters&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Node implementations:&lt;/strong&gt; the reference node is FastAPI + SQLite; alternative implementations in other languages are explicitly encouraged by the spec&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real-world federation topologies:&lt;/strong&gt; if you run an interesting multi-node setup, &lt;a href="https://github.com/Eidetic-Labs/stigmem/discussions" rel="noopener noreferrer"&gt;open a discussion&lt;/a&gt; — we'd like to document it&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The contribution process uses an RFC model: open a GitHub issue with the RFC template, discuss, then PR against the spec. New spec sections start as draft blocks and need ≥2 approvals from active contributors to merge.&lt;/p&gt;

&lt;p&gt;If you're building something on stigmem or have questions about the federation protocol, the HLC implementation, or how to write an adapter, drop a comment or open a &lt;a href="https://github.com/Eidetic-Labs/stigmem/discussions" rel="noopener noreferrer"&gt;GitHub discussion&lt;/a&gt;; we read everything.&lt;/p&gt;

&lt;h2&gt;
  
  
  Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Repo (Apache 2.0):&lt;/strong&gt; &lt;a href="https://github.com/Eidetic-Labs/stigmem" rel="noopener noreferrer"&gt;https://github.com/Eidetic-Labs/stigmem&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docs:&lt;/strong&gt; &lt;a href="https://docs.stigmem.dev" rel="noopener noreferrer"&gt;https://docs.stigmem.dev&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Spec:&lt;/strong&gt; &lt;a href="https://github.com/Eidetic-Labs/stigmem/blob/main/spec/stigmem-spec-v1.0.md" rel="noopener noreferrer"&gt;https://github.com/Eidetic-Labs/stigmem/blob/main/spec/stigmem-spec-v1.0.md&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Quickstart:&lt;/strong&gt; &lt;a href="https://docs.stigmem.dev/en/latest/docs/getting-started/quickstart" rel="noopener noreferrer"&gt;https://docs.stigmem.dev/en/latest/docs/getting-started/quickstart&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Contributing:&lt;/strong&gt; &lt;a href="https://github.com/Eidetic-Labs/stigmem/blob/main/CONTRIBUTING.md" rel="noopener noreferrer"&gt;https://github.com/Eidetic-Labs/stigmem/blob/main/CONTRIBUTING.md&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Discussions:&lt;/strong&gt; &lt;a href="https://github.com/Eidetic-Labs/stigmem/discussions" rel="noopener noreferrer"&gt;https://github.com/Eidetic-Labs/stigmem/discussions&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>devtools</category>
      <category>agentprotocol</category>
    </item>
  </channel>
</rss>
