<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Acontext</title>
    <description>The latest articles on Forem by Acontext (@acontext_4dc5ced58dc515fd).</description>
    <link>https://forem.com/acontext_4dc5ced58dc515fd</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3658427%2F9b000707-2bd4-447e-bc0a-a657cb136016.png</url>
      <title>Forem: Acontext</title>
      <link>https://forem.com/acontext_4dc5ced58dc515fd</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/acontext_4dc5ced58dc515fd"/>
    <language>en</language>
    <item>
      <title>The Missing Infrastructure for AI Agents: Unified Context</title>
      <dc:creator>Acontext</dc:creator>
      <pubDate>Fri, 06 Feb 2026 17:11:00 +0000</pubDate>
      <link>https://forem.com/acontext_4dc5ced58dc515fd/the-missing-infrastructure-for-ai-agents-unified-context-5905</link>
      <guid>https://forem.com/acontext_4dc5ced58dc515fd/the-missing-infrastructure-for-ai-agents-unified-context-5905</guid>
      <description>&lt;p&gt;Acontext is a data platform designed to store multimodal context data, monitor agent success, and simplify context engineering.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbjnu8e4vfd25urs67gde.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbjnu8e4vfd25urs67gde.png" alt=" " width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For all the talk about 'agents', the word itself has become surprisingly fuzzy. In research circles, startup decks, and engineering teams, people often refer to very different things using the same term. And that confusion hides an important truth: &lt;strong&gt;most systems we casually call agents today are not actually agents in any meaningful sense.&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A tool-calling LLM is not automatically an agent.&lt;/li&gt;
&lt;li&gt;A model wired to two predefined tools, even if it chooses when to call them, rarely feels like an agent.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And many projects that appear agent-like are, in practice, deterministic workflows disguised behind prompting.&lt;/p&gt;

&lt;p&gt;The distinction matters, not for semantics, but because &lt;strong&gt;we are finally seeing real AI agents emerge&lt;/strong&gt;, and if we want to build the next generation of systems, &lt;strong&gt;we must understand why some models behave like agents… and why most don't.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Gap Between Tools and True Agents&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The industry is full of examples where people script a multi-step prompt:&lt;/p&gt;

&lt;p&gt;"First, call this tool. Then call that tool. Then summarize the result."&lt;/p&gt;

&lt;p&gt;This is not an AI agent. It's simply a workflow encoded through natural language.&lt;/p&gt;

&lt;p&gt;The intuition people hold about agents points to something much more profound:&lt;br&gt;
&lt;strong&gt;An agent should be capable of outcomes far beyond what its tools explicitly encode.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That is why Claude Code or Codex feels magical. Their tool implementations are trivial: any computer-science intern can write a &lt;code&gt;read_file&lt;/code&gt;, &lt;code&gt;write_file&lt;/code&gt;, or &lt;code&gt;exec&lt;/code&gt; wrapper. Yet no intern can outperform the model when actually coding.&lt;/p&gt;

&lt;p&gt;Tools are mundane.&lt;/p&gt;

&lt;p&gt;Behavior that emerges from them is not.&lt;/p&gt;

&lt;p&gt;This gap between the simplicity of tools and the sophistication of behavior is the first real clue to what a true agent actually is.&lt;/p&gt;
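&lt;p&gt;To make "trivial" concrete, here is a minimal Python sketch of such tool wrappers (the names mirror the &lt;code&gt;read_file&lt;/code&gt;, &lt;code&gt;write_file&lt;/code&gt;, and &lt;code&gt;exec&lt;/code&gt; tools mentioned above; the details are illustrative, not any product's actual implementation):&lt;/p&gt;

```python
import subprocess

# Minimal tool wrappers of the kind an agent loop exposes to a model.
# The sophistication lives in how the model uses them, not in this code.

def read_file(path: str) -> str:
    """Return the contents of a file."""
    with open(path, "r", encoding="utf-8") as f:
        return f.read()

def write_file(path: str, content: str) -> None:
    """Overwrite a file with the given content."""
    with open(path, "w", encoding="utf-8") as f:
        f.write(content)

def exec_shell(command: str) -> str:
    """Run a shell command and return its combined output."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout + result.stderr
```

&lt;p&gt;Each wrapper is a few lines; the emergent behavior comes entirely from the model deciding when and how to compose them.&lt;/p&gt;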

&lt;h2&gt;
  
  
  &lt;strong&gt;Workflow Builders vs Agent Builders&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Once you recognize this gap, the real question becomes: &lt;strong&gt;What separates building workflows from building agents?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It comes down to mindset.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Workflow builders think in sequences:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What should be executed first?&lt;/li&gt;
&lt;li&gt;What comes next?&lt;/li&gt;
&lt;li&gt;What conditions trigger which branch?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Agent builders think in environments:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;What environment does my agent operate in?&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;What are the atomic actions available in this scope?&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Does the combination of those actions theoretically cover the action space of a human operator?&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;What guidelines shape behavior in this environment?&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To build agents, you must shift from scripting procedures to &lt;strong&gt;designing environments&lt;/strong&gt;.&lt;br&gt;
Agents emerge not from control, but from &lt;strong&gt;well-constructed uncertainty&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Heart of It All: Context Engineering&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Once you step into the environment mindset, a new challenge appears: &lt;strong&gt;your agent is only as capable as the context it can see, retrieve, and use.&lt;/strong&gt; Context, not tools, not prompts, is the real substrate of intelligence. To engineer it, you must:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Define a complete and accurate tool set&lt;/li&gt;
&lt;li&gt;Define human guidelines and behavioral constraints&lt;/li&gt;
&lt;li&gt;Determine the agent's real-time context state&lt;/li&gt;
&lt;li&gt;Decide what to load into context and what to offload&lt;/li&gt;
&lt;li&gt;Enable the agent to discover new context when needed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Today's discussion around 'context' is dominated by RAG and MCP, but search and tool schemas cover only a tiny slice of the problem. &lt;strong&gt;Context Engineering is about managing the entire universe of information that an agent can act upon.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A helpful way to think about it is through three types of context:&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;In-Session Context&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The live state of an ongoing interaction.&lt;/p&gt;

&lt;p&gt;Most of the engineering work today focuses only here.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;External Context&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Everything the agent can discover or load: &lt;strong&gt;skills, files, knowledge bases, artifacts, and tool descriptions&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Claude Skills is the first industry example of the builder community truly embracing this idea. Claude Skills isn't a protocol; it's a way of thinking. It frames &lt;strong&gt;context as experience&lt;/strong&gt;, not data, and encourages selective loading rather than building a human-like search index.&lt;/p&gt;

&lt;p&gt;Manus follows a similar pattern in its sandbox: its terminal use exposes discoverable tools and lets agents dynamically uncover new capabilities.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Cross-Session Context&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This is what the agent has done before. It is traditionally called 'memory', but memory is too weak a word.&lt;br&gt;
What agents really need is &lt;strong&gt;experience&lt;/strong&gt;. Read more about how it &lt;a href="https://acontext.io/blog/acontext-vs-memory-layer" rel="noopener noreferrer"&gt;differs from a memory layer↗&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Understanding and engineering these layers is the real work of agent design.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Why the Agent Era Needs a Context Data Platform&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Look at the current landscape.&lt;/p&gt;

&lt;p&gt;Developers have frameworks such as LangGraph, Agno, and n8n to orchestrate agent workflows. These tools help with execution, but not with context.&lt;/p&gt;

&lt;p&gt;The intelligence of an agent no longer sits in the workflow.&lt;br&gt;
&lt;strong&gt;It sits in context.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Yet we still lack a platform dedicated to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;storing all context data&lt;/li&gt;
&lt;li&gt;engineering its structure&lt;/li&gt;
&lt;li&gt;observing agent behavior&lt;/li&gt;
&lt;li&gt;capturing reusable experience&lt;/li&gt;
&lt;li&gt;providing continuity across tasks and sessions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If context is now the source of complexity and intelligence, we need infrastructure built for it.&lt;/p&gt;

&lt;p&gt;We need a new category: a &lt;strong&gt;Context Data Platform&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft0oi2dkmnanmx1h85eg5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft0oi2dkmnanmx1h85eg5.png" alt=" " width="800" height="274"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We call it &lt;a href="https://acontext.io/" rel="noopener noreferrer"&gt;Acontext↗&lt;/a&gt;, a platform designed to store &lt;a href="https://acontext.io/blog/how-acontext-stores-ai-messages" rel="noopener noreferrer"&gt;multi-modal context data↗&lt;/a&gt;, to monitor agent success, and to provide a layer of certainty amid the inherent uncertainty of agent behavior.&lt;/p&gt;

&lt;p&gt;It focuses on two core problems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How to move the storage and observability of context data into the cloud&lt;/li&gt;
&lt;li&gt;How to ensure that when a powerful agent gets a complex task right once, it keeps getting it right every time thereafter&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What Lives Inside Context Data?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Acontext is built around this new category and manages:&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Multi-Modal Messages&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Messages from OpenAI, Anthropic, Gemini, and future providers are all normalized, stored, indexed, and accessible across sessions. Text, code, PDFs, images, and upcoming modalities: Acontext handles them all seamlessly.&lt;/p&gt;

&lt;p&gt;No more gluing Postgres, S3, and Redis together by hand: &lt;a href="https://acontext.io/blog/how-acontext-stores-ai-messages" rel="noopener noreferrer"&gt;How Acontext Stores AI Messages↗&lt;/a&gt;&lt;/p&gt;
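&lt;p&gt;As an illustration of what "normalized" means here (a hypothetical sketch of the idea only; this is not Acontext's actual schema), provider-specific message shapes can be mapped into one common record:&lt;/p&gt;

```python
# Hypothetical normalizer: maps provider-specific chat-message shapes into
# one common record. Illustrates the normalization idea only; field names
# are assumptions, not Acontext's real storage format.

def normalize_message(provider: str, message: dict) -> dict:
    if provider == "openai":
        # OpenAI chat format: content is a plain string
        return {
            "role": message["role"],
            "parts": [{"type": "text", "text": message["content"]}],
        }
    if provider == "anthropic":
        # Anthropic format: content is a list of typed blocks
        parts = [
            {"type": block["type"], "text": block.get("text", "")}
            for block in message["content"]
        ]
        return {"role": message["role"], "parts": parts}
    raise ValueError(f"unknown provider: {provider}")
```

&lt;p&gt;Once every provider's messages land in one shape, indexing and cross-session retrieval can ignore where a message came from.&lt;/p&gt;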

&lt;h3&gt;
  
  
  &lt;strong&gt;Artifacts&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Agents need a place to store what they produce.&lt;/p&gt;

&lt;p&gt;Acontext's &lt;strong&gt;Artifact Disk&lt;/strong&gt; offers a cloud-hosted filesystem abstraction (based on Linux file paths): intuitive for agents, scalable for developers.&lt;/p&gt;
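&lt;p&gt;The interface idea behind a path-based artifact store can be sketched in a few lines (a toy in-memory model for illustration; the class and method names are assumptions, not Acontext's API):&lt;/p&gt;

```python
# Toy sketch of a filesystem-style artifact store keyed by Linux-like paths.
# It mimics the *interface idea* of an artifact disk, not a real implementation.
from typing import Dict, List

class ArtifactDisk:
    def __init__(self) -> None:
        self._files: Dict[str, bytes] = {}

    def write(self, path: str, data: bytes) -> None:
        """Store an artifact under a path."""
        self._files[path] = data

    def read(self, path: str) -> bytes:
        """Fetch an artifact by path."""
        return self._files[path]

    def ls(self, prefix: str) -> List[str]:
        """List artifact paths under a directory prefix."""
        return sorted(p for p in self._files if p.startswith(prefix))
```

&lt;p&gt;A path-shaped interface like this is why the abstraction feels intuitive to agents: it matches the file operations they already know from coding tasks.&lt;/p&gt;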

&lt;h3&gt;
  
  
  &lt;strong&gt;Plan &amp;amp; Task Observability&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;What did the agent promise?&lt;/p&gt;

&lt;p&gt;What did it do?&lt;/p&gt;

&lt;p&gt;Was it successful?&lt;/p&gt;

&lt;p&gt;Acontext includes background observers that track tasks, gather user feedback, and help quantify agent success.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;How to Improve Agents' Success Rates?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The logic is simple: &lt;strong&gt;when an agent gets something right, it should keep getting it right; when it gets something wrong, it should avoid repeating the mistake.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Acontext builds a Skill Space for capturing these execution patterns. An internal &lt;strong&gt;Experience Agent&lt;/strong&gt; detects meaningful tasks, extracts the successful workflow, and stores it in a structured space so future agents can reuse it automatically.&lt;/p&gt;

&lt;p&gt;This is not memory in the traditional sense.&lt;br&gt;
&lt;strong&gt;It is the accumulation of SOPs: the practical know-how generated through real agent–human collaboration.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;From context to skill, a further breakdown is in this blog post: &lt;a href="https://acontext.io/blog/acontext-architecture-explained" rel="noopener noreferrer"&gt;Inside Acontext: How AI Agents Learn from Experience↗&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;For the Visionaries Building What Comes Next&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Acontext is open source.&lt;/p&gt;

&lt;p&gt;We're actively developing it with the community and learning from builders pushing the boundaries of agent capability.&lt;/p&gt;

&lt;p&gt;If you see the future of agents the way we do, we'd love for you to explore Acontext.&lt;/p&gt;

&lt;p&gt;GitHub: &lt;a href="https://github.com/memodb-io/Acontext" rel="noopener noreferrer"&gt;https://github.com/memodb-io/Acontext&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One line of code launches the full stack: &lt;code&gt;curl -fsSL &lt;a href="https://install.acontext.io/" rel="noopener noreferrer"&gt;https://install.acontext.io&lt;/a&gt; | sh&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The agent era has entered the age of context. Let's build the infrastructure it deserves.&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>architecture</category>
      <category>llm</category>
    </item>
    <item>
      <title>Native Per-User Resource Management for Multi-Tenant AI Apps</title>
      <dc:creator>Acontext</dc:creator>
      <pubDate>Thu, 05 Feb 2026 17:37:00 +0000</pubDate>
      <link>https://forem.com/acontext_4dc5ced58dc515fd/native-per-user-resource-management-for-multi-tenant-ai-apps-1iip</link>
      <guid>https://forem.com/acontext_4dc5ced58dc515fd/native-per-user-resource-management-for-multi-tenant-ai-apps-1iip</guid>
      <description>&lt;p&gt;Associate resources with users, filter by user, and clean up with cascade deletion - all in a few lines of code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe33her51t7b8sjl4cbhi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe33her51t7b8sjl4cbhi.png" alt=" " width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Building multi-tenant AI applications has always required custom infrastructure for user isolation and resource management. With this release, &lt;strong&gt;Acontext now supports per-user resource management&lt;/strong&gt; natively.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What's New&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;You can now associate any Acontext resource (Spaces, Sessions, Disks, Skills) with a user identifier. This enables:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Multi-tenant isolation&lt;/strong&gt; without separate API keys per user&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;User-scoped queries&lt;/strong&gt; to retrieve only a specific user's resources&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cascade deletion&lt;/strong&gt; to clean up all resources when a user leaves&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;How It Works&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Associate Resources with Users&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Pass a &lt;code&gt;user&lt;/code&gt; parameter when creating resources. Users are created automatically when first referenced.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os

from acontext import AcontextClient

client = AcontextClient(api_key=os.getenv("ACONTEXT_API_KEY"))

# Create a space for a user
space = client.spaces.create(
    user="alice@example.com",
    configs={"name": "Alice's Workspace"}
)

# Create a session for the same user
session = client.sessions.create(
    user="alice@example.com",
    space_id=space.id
)

# Create a disk for the user
disk = client.disks.create(user="alice@example.com")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { AcontextClient } from '@acontext/acontext';

const client = new AcontextClient({ apiKey: process.env.ACONTEXT_API_KEY });

// Create a space for a user
const space = await client.spaces.create({
  user: 'alice@example.com',
  configs: { name: "Alice's Workspace" }
});

// Create a session for the same user
const session = await client.sessions.create({
  user: 'alice@example.com',
  spaceId: space.id
});

// Create a disk for the user
const disk = await client.disks.create({ user: 'alice@example.com' });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Filter Resources by User&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;All list operations now support user filtering:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# List only Alice's resources
spaces = client.spaces.list(user="alice@example.com")
sessions = client.sessions.list(user="alice@example.com")
disks = client.disks.list(user="alice@example.com")
skills = client.skills.list_catalog(user="alice@example.com")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// List only Alice's resources
const spaces = await client.spaces.list({ user: 'alice@example.com' });
const sessions = await client.sessions.list({ user: 'alice@example.com' });
const disks = await client.disks.list({ user: 'alice@example.com' });
const skills = await client.skills.listCatalog({ user: 'alice@example.com' });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Cascade Delete Users&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;When a user leaves, clean up everything in one call:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Delete user and ALL associated resources
client.users.delete("alice@example.com")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Delete user and ALL associated resources
await client.users.delete('alice@example.com');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Use Cases&lt;/strong&gt;
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Scenario&lt;/th&gt;
&lt;th&gt;Before&lt;/th&gt;
&lt;th&gt;After&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Multi-tenant apps&lt;/td&gt;
&lt;td&gt;Self-managed user-resource mapping&lt;/td&gt;
&lt;td&gt;Native user parameter&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;User offboarding&lt;/td&gt;
&lt;td&gt;Manual cleanup across resources&lt;/td&gt;
&lt;td&gt;One users.delete() call&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Per-user queries&lt;/td&gt;
&lt;td&gt;Self-maintained filtering logic&lt;/td&gt;
&lt;td&gt;Native user filter parameter&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The feature is available now in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Python SDK&lt;/strong&gt;: &lt;code&gt;pip install acontext --upgrade&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;TypeScript SDK&lt;/strong&gt;: &lt;code&gt;npm install @acontext/acontext@latest&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Have feedback or questions? Join us on &lt;a href="https://discord.acontext.io/" rel="noopener noreferrer"&gt;Discord↗&lt;/a&gt; or open an issue on &lt;a href="https://github.com/memodb-io/Acontext" rel="noopener noreferrer"&gt;GitHub↗&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agentaichallenge</category>
      <category>openai</category>
      <category>programming</category>
    </item>
    <item>
      <title>How to Handle Growing AI Context Without Endless Scripts</title>
      <dc:creator>Acontext</dc:creator>
      <pubDate>Wed, 04 Feb 2026 17:18:00 +0000</pubDate>
      <link>https://forem.com/acontext_4dc5ced58dc515fd/new-features-in-acontext-context-engineering-in-a-few-lines-of-code-8n5</link>
      <guid>https://forem.com/acontext_4dc5ced58dc515fd/new-features-in-acontext-context-engineering-in-a-few-lines-of-code-8n5</guid>
      <description>&lt;p&gt;Learn how Acontext simplifies context engineering from days of manual work to just a few hours，and why a context data platform matters for production agents.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs9a9wygz7sbxzvbhc4cw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs9a9wygz7sbxzvbhc4cw.png" alt="Image6-1" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When building AI agents, the real source of complexity is rarely the model itself. It's the &lt;strong&gt;context&lt;/strong&gt; around it.&lt;/p&gt;

&lt;p&gt;Model capabilities have improved rapidly, but anyone who has built a production-grade agent knows the pattern: as soon as the system runs longer, handles tools, or spans multiple turns, the hard work shifts away from prompting and into &lt;strong&gt;context engineering&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Why does context engineering become the real bottleneck?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In practice, 'context engineering' is not an abstract idea. It is a collection of very concrete engineering tasks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Storing text, images, memories, artifacts, tool calls...&lt;/li&gt;
&lt;li&gt;Managing sessions that grow unbounded over time&lt;/li&gt;
&lt;li&gt;Switching message formats across OpenAI, Anthropic, and Gemini&lt;/li&gt;
&lt;li&gt;Truncating or filtering context without breaking semantics&lt;/li&gt;
&lt;li&gt;Answering: 'What exactly did the LLM see at that moment?'&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These tasks share a few traits: almost every production agent runs into them; they are ongoing costs, not one-time work; they tend to be implemented as ad hoc scripts and glue code.&lt;/p&gt;

&lt;p&gt;As a result, &lt;strong&gt;teams often spend more time maintaining context logic than improving the agent's actual reasoning or behavior.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;A Simpler Pattern: Context Data as a Unified Layer&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The problem is not that context engineering is inherently complex, but that it usually lives in the wrong place.&lt;/p&gt;

&lt;p&gt;A more useful mental model is to treat context as &lt;strong&gt;a system-level data concern&lt;/strong&gt; rather than as &lt;em&gt;prompt logic&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;This model has three core ideas:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Context data stays complete.&lt;/strong&gt; Raw messages and artifacts should be stored intact, not rewritten or mutated as the session evolves.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Editing is a visible view, not a mutation.&lt;/strong&gt; Truncation, filtering, and token control should happen at retrieval, while the original session remains unchanged. Developers can inspect context-window usage to decide whether the context needs editing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rules over scripts.&lt;/strong&gt; Instead of scattering heuristics throughout business logic, developers can describe how context should be shaped using declarative rules.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;With this shift, context engineering becomes predictable and reusable rather than fragile and bespoke.&lt;/p&gt;
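&lt;p&gt;The second idea, editing at retrieval while storage stays intact, can be sketched in plain Python (a hypothetical token-limit strategy with a crude token estimate; names and behavior here are illustrative, not Acontext's implementation):&lt;/p&gt;

```python
# Sketch of "editing at retrieval": the stored session is never mutated;
# a token-limit strategy computes an edited VIEW at read time.
# The ~4-chars-per-token estimate is a deliberate simplification.

def estimate_tokens(message: dict) -> int:
    """Crude stand-in for a real tokenizer."""
    return max(1, len(message["content"]) // 4)

def apply_token_limit(messages: list, limit_tokens: int) -> list:
    """Keep the newest messages that fit the token budget."""
    kept: list = []
    budget = limit_tokens
    for msg in reversed(messages):  # walk from newest to oldest
        cost = estimate_tokens(msg)
        if cost > budget:
            break
        kept.append(msg)
        budget -= cost
    return list(reversed(kept))    # restore chronological order
```

&lt;p&gt;Because the function returns a new list, the same stored session can yield different views under different strategies, which is exactly what makes the rules reusable.&lt;/p&gt;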

&lt;h2&gt;
  
  
  &lt;strong&gt;Acontext Simplifies Context Engineering in Two APIs&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Seen this way, 'a few lines of code' does not mean doing less work. It means &lt;strong&gt;moving complexity into the system layer&lt;/strong&gt;, where it can be handled consistently.&lt;/p&gt;

&lt;p&gt;Acontext follows this approach with two APIs: &lt;code&gt;store_message()&lt;/code&gt; and &lt;code&gt;get_messages()&lt;/code&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Raw context is stored once&lt;/li&gt;
&lt;li&gt;Context editing happens only when messages are retrieved&lt;/li&gt;
&lt;li&gt;Editing behavior is defined through explicit strategies&lt;/li&gt;
&lt;li&gt;The same rules apply across agents and sessions&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Code in Action: Context Editing On-the-fly&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The example below shows how context editing is moved from application logic into a declarative read step.&lt;/p&gt;

&lt;p&gt;Instead of pulling messages, counting tokens, trimming history, and rebuilding model inputs in code, the application describes its editing rules once and passes them to &lt;code&gt;get_messages&lt;/code&gt;. Acontext applies those rules consistently and returns an edited view of the session.&lt;/p&gt;

&lt;p&gt;Because &lt;strong&gt;editing happens at read time&lt;/strong&gt;, changing context behavior does not require rewriting storage logic or reprocessing historical data. &lt;strong&gt;You can adjust strategies, compare different views, or reuse the same rules across agents without duplicating code.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Editing Strategies Ready-to-use:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;get_token_counts()&lt;/strong&gt;: inspect the current token size of a session before editing.
&lt;strong&gt;&lt;em&gt;Benefit:&lt;/em&gt;&lt;/strong&gt; &lt;em&gt;context editing becomes data-driven instead of heuristic-based.&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;token_limit&lt;/strong&gt;: truncates the session by removing the oldest messages until the total token count is within a specified limit. Tool-call and tool-result pairs are preserved correctly.
&lt;strong&gt;&lt;em&gt;Benefit:&lt;/em&gt;&lt;/strong&gt; &lt;em&gt;no custom truncation logic, no broken tool histories.&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;remove_tool_result&lt;/strong&gt;: replaces older tool-result contents with a placeholder, while keeping recent results intact.
&lt;strong&gt;&lt;em&gt;Benefit:&lt;/em&gt;&lt;/strong&gt; &lt;em&gt;large, verbose tool outputs no longer consume context, without losing execution structure.&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;remove_tool_call_params&lt;/strong&gt;: removes arguments from older tool calls, keeping only IDs and names so tool results can still reference them.
&lt;strong&gt;&lt;em&gt;Benefit:&lt;/em&gt;&lt;/strong&gt; &lt;em&gt;reduces token usage while preserving causal links between tool calls and results.&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;middle_out&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;offload_to_log&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;remove_by_completed_tasks&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;...&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from acontext import AcontextClient

client = AcontextClient(
    base_url="http://localhost:8029/api/v1",
    api_key="sk-ac-your-root-api-bearer-token"
)

edited_session = client.sessions.get_messages(
    session_id="session-uuid",
    edit_strategies=[
        {"type": "token_limit", "params": {"limit_tokens": 20000}},
        {"type": "remove_tool_result", "params": {"keep_recent_n_tool_result": 3}},
        ...
    ],
)

original_session = client.sessions.get_messages(
  session_id="session-uuid"
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;&lt;em&gt;In practice, this reduces the amount of glue code developers maintain, shortens iteration cycles when tuning context behavior, and makes context engineering more predictable rather than ad hoc.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Why a Context Data Platform Matters for Production Agents&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This unified data layer abstraction is not about making AI agents look better in a demo. It is about making them &lt;strong&gt;behave reliably in real production environments&lt;/strong&gt;, where context grows, tools run repeatedly, and failures are expensive.&lt;/p&gt;

&lt;p&gt;In practice, teams begin to see clear, concrete improvements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Long-running tasks fail less often because context growth is handled safely and consistently&lt;/li&gt;
&lt;li&gt;Agent behavior becomes predictable and reproducible across runs, environments, and model upgrades&lt;/li&gt;
&lt;li&gt;Debugging no longer depends on guessing what the model may have seen at a given moment&lt;/li&gt;
&lt;li&gt;Context engineering recedes into the background, treated as infrastructure rather than application logic&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When context is treated as a first-class system concern through a dedicated data platform, production agents become easier to reason about, easier to debug, and significantly easier to maintain as systems evolve.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Way Forward&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Context engineering does not have to take days of manual work.&lt;/p&gt;

&lt;p&gt;With Acontext, tasks that once required days of custom wiring &lt;strong&gt;can be done in hours&lt;/strong&gt;: safely, consistently, and without rewriting the same glue code.&lt;/p&gt;

&lt;p&gt;That is what &lt;strong&gt;simplifying context engineering&lt;/strong&gt; means in practice.&lt;/p&gt;

&lt;p&gt;If this aligns with how you build agents, please do try Acontext in a real workflow.&lt;/p&gt;

&lt;p&gt;Share what works, what breaks, and what you wish existed.&lt;/p&gt;

&lt;p&gt;Join the community, give feedback, and help agent builders shape the open-source roadmap.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GitHub: &lt;a href="https://github.com/memodb-io/Acontext" rel="noopener noreferrer"&gt;https://github.com/memodb-io/Acontext↗&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Discord: &lt;a href="https://discord.gg/SG9xJcqVBu" rel="noopener noreferrer"&gt;https://discord.gg/SG9xJcqVBu↗&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>programming</category>
      <category>ai</category>
      <category>opensource</category>
      <category>agentaichallenge</category>
    </item>
    <item>
      <title>Self-Learning Agents: From Prompt Evolving to Experience Learning</title>
      <dc:creator>Acontext</dc:creator>
      <pubDate>Tue, 03 Feb 2026 17:32:00 +0000</pubDate>
      <link>https://forem.com/acontext_4dc5ced58dc515fd/self-learning-agents-from-prompt-evolving-to-experience-learning-5d0d</link>
      <guid>https://forem.com/acontext_4dc5ced58dc515fd/self-learning-agents-from-prompt-evolving-to-experience-learning-5d0d</guid>
      <description>&lt;p&gt;Acontext vs. DSPy: Why self-learning AI agents require user-specific experience, not global prompt evolution.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffnli11e5sbc9m8mubxf7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffnli11e5sbc9m8mubxf7.png" alt=" " width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A self-evolving agent framework like DSPy works beautifully when you can define a clear eval for your entire product. With a stable metric, it can evolve prompts and steadily improve overall performance. But agent self-learning in real applications is not 'global' at all; it should be &lt;strong&gt;per user&lt;/strong&gt; and &lt;strong&gt;per task&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Users bring different goals: one uses Manus to build a website, another for trip planning, and another for code and architecture docs. These tasks don't share a single success definition. And without a unified eval, DSPy has nothing to optimize toward.&lt;/p&gt;

&lt;p&gt;Some might imagine running a separate DSPy loop for each user. But as soon as you try, two structural problems appear.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The system prompt fragments into countless user-specific variants that are long, unstructured, and impossible to maintain.&lt;/li&gt;
&lt;li&gt;Each user would need a specific evaluation, but &lt;strong&gt;user tasks vary too widely to define and maintain at scale.&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The result is clear: prompt evolution cannot deliver accurate per-user self-learning. It's built for overall intelligence improvement, not task-driven adaptation.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Acontext: User-level Experience Learning&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Acontext takes a different path. It never rewrites your prompts or asks you to design evals. It learns directly from real execution and real user confirmation in a Notion-style Skill Space:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;No prompt changes:&lt;/strong&gt; Your system prompt stays clean. The learned skill lives in the user's workspace, not in the prompt.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No eval design required:&lt;/strong&gt; Acontext uses real user feedback as the signal: confirmation equals success; negative feedback equals failure; silence does not count.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Developers also get direct visibility into what's happening: per-user success rates, day-over-day changes, and task volume, so the learning process is &lt;strong&gt;transparent&lt;/strong&gt; rather than a black box.&lt;/p&gt;

&lt;p&gt;Acontext is intentionally conservative about what to learn. Only complex, multi-step experiences become skills; trivial tasks are ignored. And when needed, you can require &lt;strong&gt;explicit user approval&lt;/strong&gt; before a new skill is added: &lt;a href="https://docs.acontext.io/learn/advance/wait-user" rel="noopener noreferrer"&gt;https://docs.acontext.io/learn/advance/wait-user&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Try Acontext&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;If you're building agents and want them to learn from real experiences, Acontext is ready for you. It's open-source, and we're improving it rapidly with the help of the community.&lt;/p&gt;

&lt;p&gt;👉 &lt;strong&gt;GitHub:&lt;/strong&gt; &lt;a href="https://github.com/memodb-io/Acontext" rel="noopener noreferrer"&gt;https://github.com/memodb-io/Acontext&lt;/a&gt; ⭐️ Give it a star, explore the examples, and try it in your agent stack.&lt;/p&gt;

&lt;p&gt;🤟 &lt;strong&gt;Discord:&lt;/strong&gt; &lt;a href="https://discord.acontext.io/" rel="noopener noreferrer"&gt;https://discord.acontext.io&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you try Acontext, please let us know how it works for you. We'd love to hear your feedback and see how we can make Acontext more helpful.&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>llm</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>How Acontext Stores AI Messages</title>
      <dc:creator>Acontext</dc:creator>
      <pubDate>Thu, 29 Jan 2026 17:11:00 +0000</pubDate>
      <link>https://forem.com/acontext_4dc5ced58dc515fd/how-acontext-stores-ai-messages-4k5i</link>
      <guid>https://forem.com/acontext_4dc5ced58dc515fd/how-acontext-stores-ai-messages-4k5i</guid>
      <description>&lt;p&gt;Agent developers shouldn't be writing custom message converters every time providers change their APIs. You need to work with messages from OpenAI, Anthropic, and other providers—but each one structures messages differently. Without a unified approach, you end up with fragmented, inconsistent data that's hard to store, query, and learn from.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqp23bkp0cvzs33q6yyhb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqp23bkp0cvzs33q6yyhb.png" alt=" " width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Building reliable agent systems requires treating message data as first-class infrastructure. Acontext provides a unified and durable message layer that makes context observable, persistent, and reusable. You send messages in any format—OpenAI, Anthropic, or Acontext's unified format—and they just work.&lt;/p&gt;

&lt;p&gt;This post explores how Acontext stores and normalizes multi-format messages, the architecture that makes it possible, and what this means for building production-ready agents.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Problem: Format Fragmentation&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Today, agent context is scattered across different formats, storage systems, and provider-specific structures. The result? Fragmented, transient, and nearly impossible to analyze over time.&lt;/p&gt;

&lt;p&gt;Consider what happens when you build an agent that needs to work with multiple providers:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Format Fragmentation&lt;/strong&gt;: Each provider uses different message structures. OpenAI uses &lt;code&gt;tool_calls[]&lt;/code&gt; with nested &lt;code&gt;function.name&lt;/code&gt; and &lt;code&gt;function.arguments&lt;/code&gt;, while Anthropic uses &lt;code&gt;tool_use[]&lt;/code&gt; with &lt;code&gt;name&lt;/code&gt; and &lt;code&gt;input&lt;/code&gt;. Without normalization, you can't query or analyze messages uniformly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Information Loss&lt;/strong&gt;: During format conversion, important metadata (tool call IDs, message names, source format) often gets lost or becomes inconsistent. You end up with incomplete context that can't be reliably used for learning or debugging.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Storage Inefficiency&lt;/strong&gt;: Storing large message payloads (images, files, long text) directly in PostgreSQL bloats the database and slows queries. But storing everything in object storage makes querying complex and loses transactional guarantees.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Provider Lock-in&lt;/strong&gt;: Systems become tightly coupled to one provider's format. When a new provider emerges or an existing one updates their API, you're rewriting converters and risking breaking changes.&lt;/p&gt;

&lt;p&gt;All these challenges stem from one root cause: &lt;strong&gt;message data is not treated as first-class infrastructure.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We don't need another database or storage system. We need a unified layer that accepts messages in any format, normalizes them to a consistent representation, and stores them efficiently—all while preserving every piece of context. That's what Acontext does.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Solution: Unified Message Processing&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Acontext's message processing system provides a single interface through which your agents can flexibly receive, normalize, and store messages from any provider. Here's how it works:&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Multi-Format Support&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Instead of requiring clients to convert messages to a single format, Acontext accepts messages in their native format (OpenAI, Anthropic, or Acontext's unified format) and normalizes them internally.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This means:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Clients can use their existing SDKs without modification&lt;/li&gt;
&lt;li&gt;The system validates messages using official provider SDKs, ensuring spec compliance&lt;/li&gt;
&lt;li&gt;Format conversion happens once at ingestion, not repeatedly during processing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Acontext uses &lt;strong&gt;normalizers as adapters&lt;/strong&gt; that translate provider-specific formats into a unified internal representation, so your downstream systems only need to understand one format.&lt;/p&gt;
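
&lt;p&gt;As a rough sketch of the adapter idea (a hypothetical Python illustration: the field names follow the JSON examples in this post, while the real normalizers run server-side and validate with official provider SDKs), an OpenAI-style message maps to unified parts like this:&lt;/p&gt;

```python
# Hypothetical normalizer sketch: maps an OpenAI-style assistant message
# to the unified "parts" representation shown in this post. Not Acontext's
# actual implementation.
def normalize_openai(message: dict) -> dict:
    parts = []
    content = message.get("content")
    if isinstance(content, str):
        # OpenAI also allows plain-string content.
        parts.append({"type": "text", "text": content})
    else:
        for block in content or []:
            if block.get("type") == "text":
                parts.append({"type": "text", "text": block["text"]})
    # tool_calls[] with nested function.name/arguments become tool-call parts.
    for call in message.get("tool_calls") or []:
        parts.append({
            "type": "tool-call",
            "meta": {
                "name": call["function"]["name"],
                "arguments": call["function"]["arguments"],
                "id": call["id"],
            },
        })
    # Record where the message came from so nothing is lost in conversion.
    return {"role": message["role"], "parts": parts,
            "meta": {"source_format": "openai"}}
```

&lt;p&gt;An Anthropic normalizer would do the same translation from &lt;code&gt;tool_use&lt;/code&gt;/&lt;code&gt;input&lt;/code&gt; into the identical part shape, which is what lets downstream code stay provider-agnostic.&lt;/p&gt;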

&lt;h3&gt;
  
  
  &lt;strong&gt;Complete Context Preservation&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Different providers structure messages differently. Without careful normalization, this information gets lost.&lt;/p&gt;

&lt;p&gt;Acontext preserves all metadata by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Storing the original source format in message metadata&lt;/li&gt;
&lt;li&gt;Unifying field names (e.g., &lt;code&gt;tool_use&lt;/code&gt; → &lt;code&gt;tool-call&lt;/code&gt;, &lt;code&gt;input&lt;/code&gt; → &lt;code&gt;arguments&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Preserving provider-specific metadata in a flexible &lt;code&gt;meta&lt;/code&gt; field&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This ensures downstream systems can handle tool calls uniformly while still accessing original provider-specific information when needed.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Efficient Hybrid Storage&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Messages can contain large payloads—images, files, or long text content. Storing these directly in PostgreSQL would bloat the database and slow queries.&lt;/p&gt;

&lt;p&gt;Acontext uses a &lt;strong&gt;three-tier storage strategy&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;PostgreSQL&lt;/strong&gt; stores lightweight metadata (message ID, session ID, role, timestamps)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;S3&lt;/strong&gt; stores the actual message parts as JSON files, referenced by the database&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Redis&lt;/strong&gt; provides hot cache for frequently accessed message parts, reducing S3 reads&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Content-addressed storage&lt;/strong&gt;, spanning all three tiers, uses SHA256 hashes to enable automatic deduplication&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This separation allows the database to remain fast for queries while S3 handles large payloads efficiently. Redis caching ensures hot data is served instantly without hitting S3, dramatically improving read performance for active sessions. It's like having a filesystem with intelligent caching for your agent's context data.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Provider Flexibility&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;As new providers emerge or existing providers update their formats, the system needs to adapt without breaking existing functionality.&lt;/p&gt;

&lt;p&gt;Acontext achieves this through:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Pluggable normalizers&lt;/strong&gt;: Each provider has its own normalizer that can be updated independently&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Unified internal format&lt;/strong&gt;: Downstream systems only need to understand one format&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Format versioning&lt;/strong&gt;: Message metadata tracks the source format, enabling format-specific handling when needed&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Architecture: Layered Design&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Acontext's message processing follows a clean layered architecture that separates concerns, making it easy to add new providers or modify existing behavior:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdhsfqiqeer5em5bn77sb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdhsfqiqeer5em5bn77sb.png" alt=" " width="800" height="1081"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Router Layer&lt;/strong&gt;: Handles HTTP routing, authentication, and extracts URL parameters. It's format-agnostic and focuses on request routing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Handler Layer&lt;/strong&gt;: Detects content type (JSON vs multipart), parses the request, and validates the format parameter. It orchestrates the normalization process but doesn't perform format-specific parsing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Normalizer Layer&lt;/strong&gt;: This is where format-specific logic lives. Each provider has its own normalizer that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Uses the official provider SDK for validation (ensuring spec compliance)&lt;/li&gt;
&lt;li&gt;Converts provider-specific structures to the unified internal format&lt;/li&gt;
&lt;li&gt;Preserves metadata and handles edge cases&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Service Layer&lt;/strong&gt;: Handles business logic—file uploads, S3 storage, Redis caching, reference counting. It works with the unified format, so it doesn't need to know about provider differences. The caching layer ensures hot data is served instantly while maintaining S3 as the source of truth.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Repository Layer&lt;/strong&gt;: Manages database operations and transactions. It stores metadata in PostgreSQL and references to S3 objects.&lt;/p&gt;

&lt;p&gt;This separation means adding a new provider only requires implementing a new normalizer—the rest of the system remains unchanged. It's designed to scale with your needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Data Flow: From Request to Storage&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Here's how a message flows through the system:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1zftvhvvitzmpknxgnlv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1zftvhvvitzmpknxgnlv.png" alt=" " width="800" height="990"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Ingestion&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Client sends message in native format (OpenAI, Anthropic, or Acontext's unified format)&lt;/li&gt;
&lt;li&gt;Router extracts format parameter and routes to handler&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Normalization&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Handler detects content type and parses request&lt;/li&gt;
&lt;li&gt;Normalizer uses official provider SDK to validate and convert&lt;/li&gt;
&lt;li&gt;Message is transformed to unified internal format&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Storage (Write Path)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Service layer handles file uploads and calculates SHA256 hashes&lt;/li&gt;
&lt;li&gt;Content-addressed storage enables automatic deduplication&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;S3&lt;/strong&gt;: Parts JSON and files uploaded to S3, organized by SHA256 hash&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Redis&lt;/strong&gt;: After successful S3 upload, parts data cached in Redis with fixed TTL&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PostgreSQL&lt;/strong&gt;: Message metadata (ID, session ID, role, timestamps, source format) and S3 asset reference stored&lt;/li&gt;
&lt;li&gt;Reference counting ensures safe deletion when messages are removed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Retrieval (Read Path with Caching)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;System queries PostgreSQL for message metadata&lt;/li&gt;
&lt;li&gt;For each message, first checks Redis cache using SHA256 hash from metadata&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cache hit&lt;/strong&gt;: Parts returned instantly from Redis (sub-millisecond latency)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cache miss&lt;/strong&gt;: Parts loaded from S3, then cached in Redis for subsequent requests&lt;/li&gt;
&lt;li&gt;This ensures hot data (recently accessed messages) is served with minimal latency&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Format Comparison: Provider Differences&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Each provider structures messages differently. Here's how they compare:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4lh5dkeeq1hcp86pwqbr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4lh5dkeeq1hcp86pwqbr.png" alt="Image4" width="800" height="510"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Example: Same Message, Different Formats&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;OpenAI Format:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "role": "assistant",
  "content": [
    {
      "type": "text",
      "text": "I'll check the weather for you."
    }
  ],
  "tool_calls": [
    {
      "id": "call_123",
      "type": "function",
      "function": {
        "name": "get_weather",
        "arguments": "{\"location\":\"San Francisco\"}"
      }
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Anthropic Format:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "role": "assistant",
  "content": [
    {
      "type": "text",
      "text": "I'll check the weather for you."
    },
    {
      "type": "tool_use",
      "id": "toolu_123",
      "name": "get_weather",
      "input": {
        "location": "San Francisco"
      }
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Acontext Unified Format:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "role": "assistant",
  "parts": [
    {
      "type": "text",
      "text": "I'll check the weather for you."
    },
    {
      "type": "tool-call",
      "meta": {
        "name": "get_weather",
        "arguments": "{\"location\":\"San Francisco\"}",
        "id": "call_123"
      }
    }
  ],
  "meta": {
    "source_format": "openai"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The unified format means downstream systems only need to understand one structure, while original format information is preserved for debugging and auditing.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Storage: Three-Tier Architecture (PostgreSQL + S3 + Redis)&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Why Three-Tier Storage?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Storing everything in PostgreSQL leads to database bloat and slow queries. Storing everything in S3 makes querying complex and loses transactional guarantees. Reading from S3 on every request adds latency. Acontext uses &lt;strong&gt;PostgreSQL for metadata, S3 for durable storage, Redis for hot cache&lt;/strong&gt;—the best of all worlds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PostgreSQL stores:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Message identifiers (ID, session ID, parent ID)&lt;/li&gt;
&lt;li&gt;Lightweight fields (role, timestamps, status)&lt;/li&gt;
&lt;li&gt;S3 reference (asset metadata pointing to the parts JSON file)&lt;/li&gt;
&lt;li&gt;Message-level metadata (source format, name, etc.)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;S3 stores:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Message parts as JSON files (content-addressed by SHA256)&lt;/li&gt;
&lt;li&gt;Uploaded files (images, documents, etc.)&lt;/li&gt;
&lt;li&gt;Organized by project and date (format: &lt;code&gt;parts/{project_id}/{YYYY/MM/DD}/{sha256}.json&lt;/code&gt;) for efficient access&lt;/li&gt;
&lt;li&gt;Serves as the source of truth for all message content&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Redis caches:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Frequently accessed message parts (hot data)&lt;/li&gt;
&lt;li&gt;Keyed by SHA256 hash for content-based caching&lt;/li&gt;
&lt;li&gt;Fixed TTL strategy ensures automatic expiration and bounded memory usage&lt;/li&gt;
&lt;li&gt;Dramatically reduces S3 read operations for active sessions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This three-tier architecture allows fast SQL queries on metadata, efficient storage for large payloads, and instant access to hot data. It's designed like a filesystem with intelligent caching for your agent's context data.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Content-Addressed Storage and Deduplication&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;When the same content is uploaded multiple times (e.g., the same image used in different messages), Acontext automatically deduplicates it:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Calculate SHA256 hash of the content&lt;/li&gt;
&lt;li&gt;Check if an object with this hash already exists in S3&lt;/li&gt;
&lt;li&gt;If found, return the existing object metadata&lt;/li&gt;
&lt;li&gt;If not found, upload with a path like &lt;code&gt;parts/{project_id}/{YYYY/MM/DD}/{sha256}.json&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This means identical content is stored only once, saving storage costs and improving cache efficiency. The SHA256-based addressing also enables efficient Redis caching—the same hash is used as the cache key, so identical content benefits from cache hits regardless of which message references it. It's automatic—you don't need to think about it.&lt;/p&gt;
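
&lt;p&gt;The dedup flow above can be sketched in a few lines (an illustrative model only: the &lt;code&gt;store&lt;/code&gt; dict stands in for S3, and the path layout copies the format described earlier):&lt;/p&gt;

```python
import hashlib
import json

# Illustrative content-addressed store: keyed by SHA256, so identical
# payloads are written exactly once. A dict stands in for S3 here.
store: dict = {}

def put_parts(project_id: str, date: str, parts: list) -> str:
    payload = json.dumps(parts, sort_keys=True).encode()
    sha = hashlib.sha256(payload).hexdigest()
    key = f"parts/{project_id}/{date}/{sha}.json"
    if key not in store:  # dedup: same content, same key, skip the upload
        store[key] = payload
    return key
```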

&lt;h3&gt;
  
  
  &lt;strong&gt;Reference Counting for Safe Deletion&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;When a message is deleted, we can't immediately delete its S3 assets—other messages might reference the same content. Acontext uses reference counting:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each asset has a &lt;code&gt;ref_count&lt;/code&gt; in the database&lt;/li&gt;
&lt;li&gt;When a message references an asset, increment the count&lt;/li&gt;
&lt;li&gt;When a message is deleted, decrement the count&lt;/li&gt;
&lt;li&gt;Only delete from S3 when &lt;code&gt;ref_count&lt;/code&gt; reaches zero&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is handled atomically at the database level using PostgreSQL's row-level locking, eliminating race conditions without application-level synchronization. It's safe, automatic, and efficient.&lt;/p&gt;
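
&lt;p&gt;A toy model of the counting rule (hypothetical code using in-memory SQLite in place of PostgreSQL; the real system gets its atomicity from row-level locking rather than this single-threaded sketch):&lt;/p&gt;

```python
import sqlite3

# Toy reference-counting model. In-memory SQLite stands in for PostgreSQL;
# the point is the rule: delete the S3 object only when the count hits zero.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE assets (sha TEXT PRIMARY KEY, ref_count INTEGER)")

def add_reference(sha: str) -> None:
    # New asset starts at 1; an existing asset's count is incremented.
    db.execute(
        "INSERT INTO assets VALUES (?, 1) "
        "ON CONFLICT(sha) DO UPDATE SET ref_count = ref_count + 1",
        (sha,),
    )

def remove_reference(sha: str) -> bool:
    """Decrement; return True only when the S3 object may be deleted."""
    db.execute("UPDATE assets SET ref_count = ref_count - 1 WHERE sha = ?", (sha,))
    row = db.execute("SELECT ref_count FROM assets WHERE sha = ?", (sha,)).fetchone()
    if row and row[0] == 0:
        db.execute("DELETE FROM assets WHERE sha = ?", (sha,))
        return True
    return False
```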

&lt;h3&gt;
  
  
  &lt;strong&gt;Redis Hot Cache for Performance&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;To minimize latency when reading messages, Acontext uses Redis as a hot cache layer:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;On Write (SendMessage):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After successfully uploading parts to S3, the parts data is cached in Redis&lt;/li&gt;
&lt;li&gt;Cache key: &lt;code&gt;message:parts:{sha256}&lt;/code&gt; (using SHA256 for content-based caching)&lt;/li&gt;
&lt;li&gt;Uses a fixed TTL strategy: entries automatically expire after a set duration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;On Read (GetMessages):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;System first checks Redis cache using the SHA256 hash from message metadata&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cache hit&lt;/strong&gt;: Parts returned instantly from Redis (sub-millisecond latency)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cache miss&lt;/strong&gt;: Parts loaded from S3, then cached in Redis for subsequent requests&lt;/li&gt;
&lt;li&gt;This ensures recently accessed messages are served with minimal latency&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Benefits:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Reduced S3 reads&lt;/strong&gt;: Hot data (active sessions) rarely hits S3&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lower latency&lt;/strong&gt;: Cache hits are orders of magnitude faster than S3 reads&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Content-based caching&lt;/strong&gt;: Same content (same SHA256) benefits from cache regardless of which message references it&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automatic cache management&lt;/strong&gt;: Fixed TTL strategy ensures cache automatically expires stale entries without manual intervention or complex eviction policies&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Predictable behavior&lt;/strong&gt;: Simple, deterministic expiration makes cache behavior easy to reason about&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bounded memory&lt;/strong&gt;: Cache size naturally self-regulates as entries expire, preventing unbounded growth&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The cache is transparent—if Redis is unavailable, the system gracefully falls back to S3. This ensures reliability while maximizing performance for the common case.&lt;/p&gt;
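
&lt;p&gt;The read path is a standard cache-aside pattern. Here is a sketch with plain dicts standing in for Redis and S3 (an illustration of the flow, not Acontext's actual code):&lt;/p&gt;

```python
# Cache-aside read path: check the hot cache first, fall back to the
# durable store on a miss, then repopulate the cache. Dicts stand in
# for Redis and S3.
cache: dict = {}
s3 = {"sha-123": b'[{"type":"text","text":"hi"}]'}

def get_parts(sha: str) -> bytes:
    key = f"message:parts:{sha}"
    try:
        cached = cache.get(key)
        if cached is not None:
            return cached  # cache hit: no S3 round-trip
    except Exception:
        pass  # cache unavailable: degrade gracefully to S3
    data = s3[sha]  # S3 remains the source of truth
    try:
        cache[key] = data  # repopulate; the real cache sets a fixed TTL
    except Exception:
        pass
    return data
```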

&lt;h2&gt;
  
  
  &lt;strong&gt;Parsing: Multi-Format Normalization&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Using Official SDKs for Validation&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Rather than manually parsing JSON and hoping we got it right, Acontext uses official provider SDKs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;OpenAI&lt;/strong&gt;: Uses &lt;a href="https://github.com/openai/openai-go" rel="noopener noreferrer"&gt;openai-go&lt;/a&gt; types&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Anthropic&lt;/strong&gt;: Uses &lt;a href="https://github.com/anthropics/anthropic-sdk-go" rel="noopener noreferrer"&gt;anthropic-sdk-go&lt;/a&gt; types&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This ensures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Type safety&lt;/strong&gt;: Compile-time checking of message structure&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Spec compliance&lt;/strong&gt;: SDKs stay updated with provider changes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Error handling&lt;/strong&gt;: SDKs handle edge cases we might miss&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The normalizers act as thin adapters that use SDK types for parsing, then convert to the unified format. It's reliable, maintainable, and future-proof.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;The Unified Internal Format&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Acontext converts all formats to a unified structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "role": "user",
  "parts": [
    {
      "type": "text",
      "text": "Hello"
    },
    {
      "type": "tool-call",
      "meta": {
        "name": "get_weather",
        "arguments": "{\"location\":\"SF\"}"
      }
    }
  ],
  "meta": {
    "source_format": "openai"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This unified format means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Downstream systems only need to understand one structure&lt;/li&gt;
&lt;li&gt;Tool calls are handled uniformly regardless of source&lt;/li&gt;
&lt;li&gt;Original format is preserved in metadata for debugging/auditing&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Use Cases: Real-World Applications&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Multi-Provider Agent Systems&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;You're building an agent that needs to work with both OpenAI and Anthropic, depending on the task. Without Acontext, you'd need separate storage systems, different query logic, and custom converters for each provider.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;With Acontext:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from acontext import AcontextClient

client = AcontextClient(api_key="your-api-key")

# Send OpenAI format message
openai_message = {"role": "user", "content": "Hello"}
client.sessions.send_message(
    session_id=session_id,
    blob=openai_message,  # Native OpenAI format
    format="openai"
)

# Send Anthropic format message
anthropic_message = {"role": "user", "content": "Hello"}
client.sessions.send_message(
    session_id=session_id,
    blob=anthropic_message,  # Native Anthropic format
    format="anthropic"
)

# Query uniformly
messages = client.sessions.get_messages(session_id=session_id)
# All messages in unified format, regardless of source
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Context Learning and Analysis&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;You want to analyze tool usage patterns across all your agent interactions, but messages are stored in different formats with inconsistent structures.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;With Acontext:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;All messages normalized to unified format&lt;/li&gt;
&lt;li&gt;Tool calls queryable uniformly: &lt;code&gt;WHERE parts-&amp;gt;&amp;gt;'type' = 'tool-call'&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Source format preserved for debugging: &lt;code&gt;WHERE meta-&amp;gt;&amp;gt;'source_format' = 'openai'&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Fast queries on metadata, efficient storage for payloads&lt;/li&gt;
&lt;/ul&gt;
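
&lt;p&gt;In application code, the same uniform filter is a few lines over unified messages (illustrative Python mirroring the SQL fragments above):&lt;/p&gt;

```python
# Collect every tool call across unified messages, regardless of which
# provider produced them. Message shapes follow the unified-format
# examples in this post.
def tool_calls(messages: list) -> list:
    return [
        part
        for msg in messages
        for part in msg.get("parts", [])
        if part.get("type") == "tool-call"
    ]
```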

&lt;h3&gt;
  
  
  &lt;strong&gt;Large Payload Handling&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Your agent processes images, documents, and long-form content. Storing everything in PostgreSQL would bloat the database, but object storage makes querying difficult.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;With Acontext:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Metadata in PostgreSQL for fast queries&lt;/li&gt;
&lt;li&gt;Payloads in S3 for efficient storage&lt;/li&gt;
&lt;li&gt;Redis cache for instant access to hot data&lt;/li&gt;
&lt;li&gt;Automatic deduplication saves storage costs&lt;/li&gt;
&lt;li&gt;Reference counting ensures safe deletion&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Provider Migration&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;You're migrating from OpenAI to Anthropic, or need to support both during a transition period. Without normalization, you'd need separate code paths and risk data inconsistency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;With Acontext:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Send messages in either format to the same API&lt;/li&gt;
&lt;li&gt;Unified storage means consistent queries&lt;/li&gt;
&lt;li&gt;Source format tracked for auditing&lt;/li&gt;
&lt;li&gt;No code changes needed for downstream systems&lt;/li&gt;
&lt;/ul&gt;
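&lt;p&gt;Conceptually, the migration story rests on a single normalization step. A minimal sketch, assuming simplified text-only messages; the unified &lt;code&gt;role&lt;/code&gt;/&lt;code&gt;parts&lt;/code&gt; shape and the &lt;code&gt;source_format&lt;/code&gt; field are illustrative:&lt;/p&gt;

```python
# Sketch: one ingestion path for two provider formats, assuming simplified
# text-only messages. The unified "role"/"parts" shape and the
# "source_format" metadata field are illustrative, not the exact schema.
def normalize(blob, fmt):
    if fmt == "openai":
        # OpenAI chat messages carry plain string content
        parts = [{"type": "text", "text": blob["content"]}]
    elif fmt == "anthropic":
        # Anthropic messages carry a list of typed content blocks
        parts = [{"type": block["type"], "text": block.get("text", "")}
                 for block in blob["content"]]
    else:
        raise ValueError("unsupported format: " + fmt)
    return {"role": blob["role"], "parts": parts,
            "meta": {"source_format": fmt}}

openai_msg = {"role": "assistant", "content": "Hello"}
anthropic_msg = {"role": "assistant",
                 "content": [{"type": "text", "text": "Hello"}]}

# Both inputs normalize to the same unified parts, while the source
# format is preserved in metadata for auditing.
unified_a = normalize(openai_msg, "openai")
unified_b = normalize(anthropic_msg, "anthropic")
```

&lt;p&gt;Downstream systems only ever see the unified shape, which is why a provider switch does not ripple through the rest of the stack.&lt;/p&gt;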

&lt;h2&gt;
  
  
  &lt;strong&gt;Built to Scale&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Acontext's message processing is designed to grow with your needs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Add new providers&lt;/strong&gt;: Just implement a new normalizer&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Handle large payloads&lt;/strong&gt;: Hybrid storage scales automatically&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Support new formats&lt;/strong&gt;: The unified internal format remains stable&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Maintain reliability&lt;/strong&gt;: Official SDK validation ensures spec compliance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It's infrastructure that works, so you don't have to think about it.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Vision: Message Data as First-Class Infrastructure&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Building a multi-format message processing system is about treating message data as first-class infrastructure. By providing a unified layer that normalizes, stores, and preserves context data, Acontext enables agents to work with consistent, observable, and learnable context.&lt;/p&gt;

&lt;p&gt;This is what a Context Data Platform should do: make context data observable, persistent, and reusable, so you can focus on building agents that deliver real value.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You focus on building agents that solve real problems, not on babysitting message formats.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The system:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Accepts messages in any supported format (OpenAI, Anthropic, or Acontext's unified format)&lt;/li&gt;
&lt;li&gt;Normalizes them to a unified internal representation&lt;/li&gt;
&lt;li&gt;Stores them efficiently using hybrid storage&lt;/li&gt;
&lt;li&gt;Preserves all necessary metadata&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This enables Acontext to serve as a reliable foundation for context data management, allowing your agents to work with consistent, queryable, and learnable context data—the foundation for self-learning AI systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;References&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/openai/openai-go%22%20%5Ct%20%22https%3A//acontext.io/blog/_blank" rel="noopener noreferrer"&gt;OpenAI Go SDK↗&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/anthropics/anthropic-sdk-go%22%20%5Ct%20%22https%3A//acontext.io/blog/_blank" rel="noopener noreferrer"&gt;Anthropic Go SDK↗&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/memodb-io/Acontext/raw/main/assets/acontext_dataflow.png%22%20%5Ct%20%22https%3A//acontext.io/blog/_blank" rel="noopener noreferrer"&gt;Acontext Data Flow Diagram↗&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://acontext.io/blog/introducing-acontext-context-data-platform-for-self-learning-ai-agents%22%20%5Ct%20%22https%3A//acontext.io/blog/_blank" rel="noopener noreferrer"&gt;Introducing Acontext: Context Data Platform for Self-learning AI Agents↗&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>rag</category>
      <category>tooling</category>
    </item>
    <item>
      <title>Inside Acontext: How AI Agents Learn from Experience</title>
      <dc:creator>Acontext</dc:creator>
      <pubDate>Wed, 28 Jan 2026 17:11:00 +0000</pubDate>
      <link>https://forem.com/acontext_4dc5ced58dc515fd/inside-acontext-how-ai-agents-learn-from-experience-488j</link>
      <guid>https://forem.com/acontext_4dc5ced58dc515fd/inside-acontext-how-ai-agents-learn-from-experience-488j</guid>
      <description>&lt;p&gt;Acontext transforms raw agent execution into structured tasks and reusable skills. Explore how Store → Observe → Learn → Act enables self-improving AI agents.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1dlz370ealw3ahqlls5x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1dlz370ealw3ahqlls5x.png" alt="Image1" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;🧩 Acontext Architecture: From Context to Skill&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The diagram below captures the entire process: how Acontext takes raw LLM messages, turns them into structured tasks, distills knowledge, and builds reusable skills.&lt;/p&gt;

&lt;p&gt;Let's walk through it step by step.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm91rz226o8dws4thzpl5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm91rz226o8dws4thzpl5.png" alt="Image2" width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;context data → extract patterns → create &amp;amp; store skills → apply &amp;amp; refine → agent improves&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every message, tool call, artifact, and user feedback is captured as context. Patterns are extracted into reusable skills, applied to new tasks, and refined through feedback, enabling the agent to learn continuously from its own experience.&lt;/p&gt;

&lt;p&gt;Acontext organizes and manages this data flow, turning raw context into structured, self-improving behavior.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;1️⃣ Multi-Modal Context: Capturing the Raw Stream&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;On the left, we have everything that happens inside an agent run: user requests, plans, tool calls, memories, and feedback.&lt;/p&gt;

&lt;p&gt;A typical workflow looks like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;User: "Make me a landing page."&lt;/li&gt;
&lt;li&gt;Agent: "Here's the plan → Initialize → Build → Deploy."&lt;/li&gt;
&lt;li&gt;Agent calls tools (list_dir, find_file, etc.) and reports progress.&lt;/li&gt;
&lt;li&gt;The user intervenes: "Wrong stack, use Next.js."&lt;/li&gt;
&lt;li&gt;The agent continues, completes the task, or fails in the attempt.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All of these steps, including messages, tool traces, generated artifacts, and memory, are &lt;strong&gt;saved to Acontext&lt;/strong&gt; through a unified storage API.&lt;/p&gt;

&lt;p&gt;This is the &lt;strong&gt;Store&lt;/strong&gt; phase. It captures every relevant context, so nothing gets lost between runs.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;2️⃣ Extract Tasks and Feedback: Making Behavior Observable&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The middle section represents &lt;strong&gt;Acontext's Observer layer&lt;/strong&gt;. Once data is stored, this layer automatically extracts &lt;strong&gt;tasks&lt;/strong&gt; and &lt;strong&gt;feedback&lt;/strong&gt; from the raw message stream.&lt;/p&gt;

&lt;p&gt;Each task is represented as a structured record containing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Objective:&lt;/strong&gt; what the agent was trying to achieve&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Progress:&lt;/strong&gt; key execution steps&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;User Preferences:&lt;/strong&gt; hints or corrections from users&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;State:&lt;/strong&gt; pending / success / failed&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Feedback:&lt;/strong&gt; what worked and what didn't&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Task 1 — pending&lt;/em&gt;: located the source folder but didn't build.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Task 2 — success&lt;/em&gt;: built components; user prefers Next.js.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Task 3 — failed&lt;/em&gt;: deployment attempted but hit compile errors.&lt;/li&gt;
&lt;/ul&gt;
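&lt;p&gt;The task records above can be sketched as a small structured type. This is a hypothetical shape modeled on the fields listed, not Acontext's actual schema:&lt;/p&gt;

```python
# Hypothetical task record modeled on the fields listed above; the actual
# Acontext schema may differ.
from dataclasses import dataclass, field

@dataclass
class TaskRecord:
    objective: str                                    # what the agent tried to achieve
    progress: list = field(default_factory=list)      # key execution steps
    preferences: list = field(default_factory=list)   # user hints or corrections
    state: str = "pending"                            # pending / success / failed
    feedback: str = ""                                # what worked and what didn't

task = TaskRecord(objective="Build landing page components")
task.progress.append("scaffolded components")
task.preferences.append("use Next.js")
task.state = "success"
```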

&lt;p&gt;By organizing the execution trace into task-level records, Acontext gives you a clear, context-aware, and task-level view of &lt;strong&gt;what the agent promised, what it did, and why it succeeded or failed&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This isn't token-level observability. It's &lt;strong&gt;behavioral observability&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;3️⃣ Distill &amp;amp; Learn: Turning Execution into Experience&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This is where Acontext begins to learn on its own.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 1: Structured Data Extraction: Converting Executions into Training Signals&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Whenever a task is marked as &lt;em&gt;successful&lt;/em&gt;, either because the agent achieved the goal or the user explicitly confirmed it, Acontext's background learner retrieves the complete semantic trace related to that task, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The conversation segment that established the goal and constraints&lt;/li&gt;
&lt;li&gt;The agent's reasoning or planning (when available)&lt;/li&gt;
&lt;li&gt;Tool calls, parameters, outputs, and intermediate state&lt;/li&gt;
&lt;li&gt;User corrections, preferences, and approvals&lt;/li&gt;
&lt;li&gt;The final result and the success signal&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The key principle: &lt;strong&gt;Only validated successes are selected as learning samples.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Each sample is stored as a structured execution record that includes the goal, the reasoning chain, the operational steps, context conditions, and evidence of correctness.&lt;/p&gt;

&lt;p&gt;This is not log scraping; it is targeted data extraction for learning.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 2: Semantic Grouping to Identify Successful Patterns Across Runs&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Next comes &lt;strong&gt;semantic clustering&lt;/strong&gt;. Acontext automatically groups similar successful tasks by intent and outcome.&lt;/p&gt;

&lt;p&gt;It looks for recurring execution patterns, then identifies consistent behavioral patterns and filters out noise or partial runs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Typical examples include:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Repeated workflows for project initialization&lt;/li&gt;
&lt;li&gt;Recurring patterns in 'collect → analyze → report' tasks&lt;/li&gt;
&lt;li&gt;Consistent resolution strategies after user corrections&lt;/li&gt;
&lt;li&gt;Multi-step tool-call chains that reliably lead to success&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Clustering allows Acontext to:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Surface what &lt;em&gt;consistently&lt;/em&gt; worked&lt;/li&gt;
&lt;li&gt;Filter out partial executions, noise, or one-off behaviors&lt;/li&gt;
&lt;li&gt;Identify the underlying structure of a successful process&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The result: clusters of tasks that consistently led to success form the foundation for new skills.&lt;/p&gt;
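&lt;p&gt;The grouping-and-filtering step can be illustrated with a toy clusterer. Real semantic clustering would work on embeddings; here a plain &lt;code&gt;intent&lt;/code&gt; string stands in for semantic similarity, and all field names are hypothetical:&lt;/p&gt;

```python
# Toy clusterer: group validated successes by a coarse intent key, then keep
# only clusters large enough to count as a stable pattern. A plain "intent"
# string stands in for real semantic similarity; field names are hypothetical.
from collections import defaultdict

def cluster(records, min_size=2):
    groups = defaultdict(list)
    for rec in records:
        if rec["state"] != "success":   # only validated successes are samples
            continue
        groups[rec["intent"]].append(rec)
    # filter out noise, partial runs, and one-off behaviors
    return {intent: runs for intent, runs in groups.items()
            if len(runs) >= min_size}

records = [
    {"intent": "init-project", "state": "success"},
    {"intent": "init-project", "state": "success"},
    {"intent": "deploy", "state": "failed"},
    {"intent": "one-off-fix", "state": "success"},
]
stable = cluster(records)   # only "init-project" survives, with two runs
```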

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 3: Compress Successful Behavior into Reusable Skills&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;From each stable cluster, Acontext synthesizes a &lt;strong&gt;Skill&lt;/strong&gt;: a distilled blueprint describing how to accomplish a goal effectively. Four core elements are drawn directly from the execution data:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;• Procedures - the actionable steps&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The sequence of operations or tool calls that repeatedly produced the desired outcome.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;• Patterns - general strategies&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;How the agent approached the problem:&lt;/p&gt;

&lt;p&gt;e.g., "search → filter → summarize," or "update plan after user correction."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;• Context - when the skill applies&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The input conditions or environment under which the procedure is valid.&lt;/p&gt;

&lt;p&gt;e.g., "This workflow is more efficient when the user favors Next.js."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;• Preferences - user's implicit rules&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Individual user preferences extracted from interactions, such as:&lt;/p&gt;

&lt;p&gt;"Always report before executing," or "Use pnpm instead of npm."&lt;/p&gt;

&lt;p&gt;Acontext transforms these elements into a structured Skill object, which is ready to be stored and retrieved.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Skill Space: Organize, Retrieve, Reuse&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;On the right of the architecture diagram, these learned skills are saved in a &lt;strong&gt;Notion-style workspace&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Each Skill Space acts as a &lt;strong&gt;dynamic, structured memory layer&lt;/strong&gt; for your agents, organized by domain or capability, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;frontend_preferences&lt;/li&gt;
&lt;li&gt;github_operations&lt;/li&gt;
&lt;li&gt;linkedin_operations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Skill Space is dynamic and continuously maintained by Acontext, forming a &lt;strong&gt;living library of what consistently works&lt;/strong&gt;, tailored to each individual user or agent:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Skills are automatically created from new successful runs&lt;/li&gt;
&lt;li&gt;Overlapping skills are merged&lt;/li&gt;
&lt;li&gt;Stale or redundant skills are pruned&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Skill Retrieval
&lt;/h4&gt;

&lt;p&gt;When a new task arrives, Acontext searches the Skill Space to surface relevant past experience.&lt;/p&gt;

&lt;p&gt;It supports two modes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Semantic Search:&lt;/strong&gt; quickly find skills that match the task's intent&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Agentic Search:&lt;/strong&gt; combine multiple skills when the task is more complex&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Retrieved skills are provided back to the agent as context or guidance, allowing it to start with learned experience instead of a blank slate.&lt;/p&gt;
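&lt;p&gt;As a rough illustration of retrieval, here is a toy ranker that scores stored skills against a new task by word overlap. A production system would use embeddings; the &lt;code&gt;use_when&lt;/code&gt; field mirrors the SOP shape described elsewhere, and everything else is hypothetical:&lt;/p&gt;

```python
# Toy retrieval sketch: rank skills by naive word overlap with the task
# description. Real semantic search would use embeddings; "use_when" mirrors
# the SOP shape described in the text, everything else is hypothetical.
def retrieve(task, skills, top_k=1):
    """Return the top_k skills whose use_when best overlaps the task."""
    task_words = set(task.lower().split())
    def score(skill):
        return len(task_words.intersection(skill["use_when"].lower().split()))
    return sorted(skills, key=score, reverse=True)[:top_k]

skills = [
    {"use_when": "star a repo on github.com"},
    {"use_when": "build and deploy a landing page"},
]
best = retrieve("deploy the new landing page", skills)
print(best[0]["use_when"])   # build and deploy a landing page
```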

&lt;h2&gt;
  
  
  &lt;strong&gt;4️⃣ Act Smarter and Start the Next Loop&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;At the bottom, the arrow labeled &lt;strong&gt;"Improve the Agent next time"&lt;/strong&gt; represents Acontext's continuous learning loop.&lt;/p&gt;

&lt;p&gt;With learned skills in place, the agent begins each new run with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;better defaults&lt;/li&gt;
&lt;li&gt;clearer workflows&lt;/li&gt;
&lt;li&gt;personalized preferences&lt;/li&gt;
&lt;li&gt;fewer mistakes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In short:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Act = the agent behaving smarter because it has learned.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;And once the run finishes, its execution trace flows back into the same loop:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Store → Observe → Learn → Act → …&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This continuous cycle empowers a self-improving agent: each action generates new data, each success becomes a new skill, and every skill enables better actions in the next iteration.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Data Model in Acontext&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Under the hood, Acontext operates on these object types:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Object&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Session&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;A conversation thread that stores all messages with multi-modal support (messages, tool calls, artifacts). Acontext automatically tracks what tasks the agent plans and executes.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Trace&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;The granular execution timeline within a session: every plan, tool call, state transition, and response captured in sequence.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Task&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;A step of the agent's plan, extracted automatically from conversation. Tasks transition through pending → running → success/failed, representing intent, progress, and outcome.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Disk&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;File storage for agent-generated artifacts (e.g., code, images, documents). Used to preserve outputs that tasks produce.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Space&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;A knowledge repository (like a Notion workspace) where learned skills are stored. Connecting sessions to a Space enables automatic skill learning from completed tasks.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Skill Block&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;A learned SOP (Standard Operating Procedure) derived from complex tasks. Includes use_when conditions, user preferences, and tool_sops patterns. Only sufficiently complex and validated tasks become skills.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Skill / Skill Space&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;A distilled, reusable representation of successful behavior. Skills inside a Skill Space are searchable, versioned, and continuously refined. Each skill contains procedures (the step-by-step approach that worked), patterns (recurring strategies or action flows), context (where the skill applies), and preferences (user-specific requirements or constraints).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Experience Agents&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Background AIs that automatically extract tasks, group patterns, and synthesize Skill Blocks. They run continuously and require no direct interaction.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;This model turns context from "raw logs" into &lt;strong&gt;structured experience data&lt;/strong&gt; that's both searchable and reusable.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;⚙️ Key Design Principles&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Unified Context Storage:&lt;/strong&gt; Handle messages, sessions, plans, and artifacts through one API.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Task-Level Observability:&lt;/strong&gt; Focus on what the agent did, not just how long it took.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automatic Experience Learning:&lt;/strong&gt; Experience extraction runs continuously in the background.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Personalized Skill Library:&lt;/strong&gt; Each end user accumulates their own adaptive skill set.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Framework-Agnostic:&lt;/strong&gt; Works with OpenAI, Anthropic, LangChain, and more.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;🌍 Join the Community&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Acontext is open source, and we need your feedback to help it grow.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;💻 &lt;strong&gt;GitHub:&lt;/strong&gt; &lt;a href="https://github.com/memodb-io/Acontext" rel="noopener noreferrer"&gt;github.com/memodb-io/Acontext↗&lt;/a&gt; If you find it useful, give it a ⭐️; it helps others discover it.&lt;/li&gt;
&lt;li&gt;💬 &lt;strong&gt;Discord:&lt;/strong&gt; Join the &lt;a href="https://discord.gg/SG9xJcqVBu" rel="noopener noreferrer"&gt;Acontext Community↗&lt;/a&gt; to share integrations, feedback, or ideas for what to build next.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Further reading&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;If you want a more opinionated, developer-centric take on why agents need something beyond simple memory layers, check out &lt;a href="https://gpt.gekko.de/acontent-the-memory-implant-ai-agents-deserve/" rel="noopener noreferrer"&gt;Acontext: The Memory Implant Your AI Agents Have Been Dreaming About↗&lt;/a&gt; by gekko.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
    </item>
    <item>
      <title>Why Self-Learning Agents Need More Than Memory</title>
      <dc:creator>Acontext</dc:creator>
      <pubDate>Tue, 27 Jan 2026 17:37:00 +0000</pubDate>
      <link>https://forem.com/acontext_4dc5ced58dc515fd/why-self-learning-agent-needs-more-than-memory-4lgh</link>
      <guid>https://forem.com/acontext_4dc5ced58dc515fd/why-self-learning-agent-needs-more-than-memory-4lgh</guid>
      <description>&lt;p&gt;Why Memory layer like Mem0 and Zep can recall conversations but cannot help agents improve, and how Acontext enables true self-learning through workflows and tool usage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzm2ss1fckfb01vwkv4us.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzm2ss1fckfb01vwkv4us.png" alt="Image1" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When developers see Acontext learning from past interactions, the first reaction is often:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;"Isn't this just another memory layer? Why not use Mem0, Zep, or&lt;a href="https://www.memobase.io/%22%20%5Ct%20%22https%3A//acontext.io/blog/_blank" rel="noopener noreferrer"&gt;Memobase&lt;/a&gt;&lt;/em&gt;&lt;a href="https://www.memobase.io/%22%20%5Ct%20%22https%3A//acontext.io/blog/_blank" rel="noopener noreferrer"&gt;↗&lt;/a&gt;&lt;em&gt;for this?"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This is a common misunderstanding.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Memory and self-learning are not the same thing.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;And this difference is exactly why traditional memory systems can't make agents improve.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;em&gt;Memory Layers Remember 'What was Said', Not 'How Work was Done'&lt;/em&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Tools like Mem0, Zep, and Memobase were designed with a specific purpose: to help an agent recall information from past conversations.&lt;/p&gt;

&lt;p&gt;Each takes a slightly different approach (gist extraction, graph structures, user profile modeling), but they all optimize for the same capability: remembering facts and preferences.&lt;/p&gt;

&lt;p&gt;However, every useful agent relies on tools. Booking flights, running SQL, browsing the web: these are &lt;strong&gt;tool workflows&lt;/strong&gt;, not dialogue.&lt;/p&gt;

&lt;p&gt;If a system can't observe the thing the agent actually &lt;em&gt;did&lt;/em&gt;, it can't learn from it.&lt;/p&gt;

&lt;p&gt;This is the fundamental limitation of every memory solution today.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;em&gt;Self-Learning Requires Remembering Workflows, Not Sentences&lt;/em&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;If you ask a human to clean a database, they don't just store one or two lines in memory. They remember the procedure: the steps, the order, the tools, the checks, the edge cases. Over time, that procedure becomes refined into a predictable workflow.&lt;/p&gt;

&lt;p&gt;We call that an &lt;strong&gt;SOP (Standard Operating Procedure)&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It requires structure, context, execution history, and most importantly, awareness of the task itself.&lt;/p&gt;

&lt;p&gt;A self-learning agent must be able to review a full trail of actions, reflect on what happened, understand the user's corrections, and preserve the successful workflow for future use. Traditional memory systems cannot even represent this information, let alone learn from it.&lt;/p&gt;

&lt;p&gt;For agents, the most critical kind of memory is how to reach a goal using the available tools and the user's preferences. This requires memory that understands tool usage directly, but no existing memory system does this. They only store conversations, not the tools that actually complete the work.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;em&gt;Acontext Learns From Tasks, Not Text&lt;/em&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Acontext treats an entire task (its objective, steps, corrections, and final success) as the unit of learning.&lt;/p&gt;

&lt;p&gt;Instead of saving fragments of text, it distills workflows into structured blocks that say:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;when a workflow should be reused&lt;/li&gt;
&lt;li&gt;what user preferences shaped it&lt;/li&gt;
&lt;li&gt;which tools were combined and how&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is why Acontext can update workflows when tools change, and &lt;a href="https://acontext.io/blog/acontext-architecture-explained" rel="noopener noreferrer"&gt;why only confirmed successful executions become learnable skills↗&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "use_when": "star a repo on github.com",
    "preferences": "use personal account. star but not fork",
    "tool_sops": [
        {
            "tool_name": "goto",
            "action": "goto github.com"
        },
        {
            "tool_name": "click",
            "action": "find login button if any. login first"
        },
        {
            "...": "..."
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Acontext makes this very clear. &lt;strong&gt;It knows which task a piece of experience belongs to, what tools or procedures the user preferred during that task, and how the agent used its tools to get the job done.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We also know developers change their tools over time, so Acontext includes APIs that update all related SOPs in one go: &lt;a href="https://docs.acontext.io/learn/tool" rel="noopener noreferrer"&gt;https://docs.acontext.io/learn/tool&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you want an agent that truly gets better at doing work, today's memory systems will not get you there. Acontext is built specifically for that purpose. (And yes, we learned this the hard way when building Memobase.)&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;em&gt;The Difference at a Glance&lt;/em&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Dimension&lt;/th&gt;
&lt;th&gt;Memory&lt;/th&gt;
&lt;th&gt;Acontext&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Position&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Memory layer&lt;/td&gt;
&lt;td&gt;Context as experience layer&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Storage&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;User facts, preferences, conversation history&lt;/td&gt;
&lt;td&gt;Sessions, tool calls, tasks, SOP skills&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Memory unit&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Semantic facts&lt;/td&gt;
&lt;td&gt;SOP block (use_when, preferences, tool_sops)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;How it's used&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Added to prompts for personalization&lt;/td&gt;
&lt;td&gt;Guides planning &amp;amp; execution directly&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Applied Scenarios&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Chatbots, customer support agents&lt;/td&gt;
&lt;td&gt;Tool-using agents, workflows, automation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Main value&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Better continuity and user recall&lt;/td&gt;
&lt;td&gt;Higher task success rates &amp;amp; fewer running steps&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Not designed for&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Experience learning&lt;/td&gt;
&lt;td&gt;Long-term user memory&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;If you have questions or want to discuss Acontext with the team and agent developers, feel free to join our &lt;a href="https://discord.gg/SG9xJcqVBu" rel="noopener noreferrer"&gt;Discord↗&lt;/a&gt; or leave a message on &lt;a href="https://x.com/acontext_io" rel="noopener noreferrer"&gt;X↗&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Further reading&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;If you want a more opinionated, developer-centric take on why agents need something beyond simple memory layers, check out &lt;a href="https://gpt.gekko.de/acontent-the-memory-implant-ai-agents-deserve/" rel="noopener noreferrer"&gt;Acontext: The Memory Implant Your AI Agents Have Been Dreaming About↗&lt;/a&gt; by gekko.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agentaichallenge</category>
      <category>rag</category>
      <category>openai</category>
    </item>
    <item>
      <title>Acontext: Where Context Data Becomes the Foundation for Agent Learning</title>
      <dc:creator>Acontext</dc:creator>
      <pubDate>Thu, 18 Dec 2025 03:24:25 +0000</pubDate>
      <link>https://forem.com/acontext_4dc5ced58dc515fd/acontext-where-context-data-becomes-the-foundation-for-agent-learning-29e0</link>
      <guid>https://forem.com/acontext_4dc5ced58dc515fd/acontext-where-context-data-becomes-the-foundation-for-agent-learning-29e0</guid>
      <description>&lt;p&gt;Every AI agent developer has felt the same pain: your agent works beautifully in one run, and fails mysteriously in the next.&lt;/p&gt;

&lt;p&gt;It doesn't remember what worked before.&lt;br&gt;
It can't explain why it failed.&lt;br&gt;
And even when it succeeds, you can't capture why.&lt;/p&gt;

&lt;p&gt;We built Acontext to change that. It started with a simple question:&lt;br&gt;
“What if an agent could observe, remember, and learn from every interaction it has, just like a human learning skills from its own past?”&lt;/p&gt;

&lt;p&gt;Acontext's mission is to give agent developers the missing foundation for self-evolving agents: a context data platform that makes agents' execution observable, reusable, and learnable. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvjshekbsn3yadtfqwb75.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvjshekbsn3yadtfqwb75.png" alt=" " width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Do We Need Something like Acontext?
&lt;/h2&gt;

&lt;p&gt;Today, an agent's context is scattered across memory stores, RAG pipelines, logs, and user feedback.&lt;br&gt;
The result? Fragmented, transient, and nearly impossible to analyze over time.&lt;/p&gt;

&lt;p&gt;This creates critical challenges for agent developers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Integration Overhead and Local-to-Cloud Complexity: Agent developers spend excessive time merging data to maintain a consistent state. Local setups work fine, but when you move to production, your memory, files, and context systems don't scale seamlessly.&lt;/li&gt;
&lt;li&gt;Complexity in Context Engineering: Reduction, compression, offloading, and "Claude Skills"-style capability building often become ad-hoc, brittle layers scattered across your stack. &lt;/li&gt;
&lt;li&gt;Agent Stability: Getting an agent to work once is easy. Maintaining reliability over time is challenging. Developers struggle to track what the agent promised, whether users were satisfied, and why performance drifts.&lt;/li&gt;
&lt;li&gt;Limited Experience Learning: Agents today don't learn effectively from their successes. "Memory" solutions only store text summaries, but true experience learning requires capturing the full task context: actions, tool calls, and proven SOPs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All these challenges stem from one root cause: context data is not treated as a first-class infrastructure.&lt;/p&gt;

&lt;p&gt;We don't need another framework or database; we need a unified layer that makes context observable, persistent, and reusable.&lt;br&gt;
That's what Acontext does.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Acontext Brings to Your Agent Stack
&lt;/h2&gt;

&lt;p&gt;Acontext is a context data platform that gives your AI agents context management, observability, and experience learning. Here's how it works:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Multi-Modal Context Storage
&lt;/h3&gt;

&lt;p&gt;Acontext acts as your storage backend, offering a simple API for unified storage and persistent management of multimodal conversation data.&lt;/p&gt;
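&lt;p&gt;As a minimal sketch of the idea, a unified store keeps a task's messages and its artifacts under one session. Note: the class and method names below are illustrative placeholders for the concept, not the real Acontext SDK API.&lt;/p&gt;

```python
# Illustrative model of unified multimodal session storage.
# Names here are hypothetical sketches, NOT the Acontext SDK.
from dataclasses import dataclass, field
from typing import Any


@dataclass
class Session:
    """One agent task: chat messages of any modality plus attached artifacts."""
    messages: list[dict[str, Any]] = field(default_factory=list)
    artifacts: dict[str, bytes] = field(default_factory=dict)


class ContextStore:
    def __init__(self) -> None:
        self._sessions: dict[str, Session] = {}

    def append(self, session_id: str, message: dict[str, Any]) -> None:
        # Messages keep the same shape the agent loop already uses,
        # so the transcript persists without translation.
        self._sessions.setdefault(session_id, Session()).messages.append(message)

    def attach(self, session_id: str, path: str, data: bytes) -> None:
        # Artifacts (PDFs, images, code) live beside the transcript.
        self._sessions.setdefault(session_id, Session()).artifacts[path] = data

    def history(self, session_id: str) -> list[dict[str, Any]]:
        return self._sessions[session_id].messages


store = ContextStore()
store.append("task-1", {"role": "user", "content": "Summarize report.pdf"})
store.append("task-1", {"role": "assistant", "content": "Here is the summary..."})
store.attach("task-1", "/artifacts/report.pdf", b"%PDF-1.7 ...")
```

The point of the sketch is the grouping: one session ID keys both the conversation and its binary artifacts, which is what makes later analysis and reuse possible.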

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl13m2ikspy478wzs2jqt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl13m2ikspy478wzs2jqt.png" alt=" " width="800" height="467"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It is designed like a filesystem for agents' artifact storage, enabling easy context offloading and cross-task collaboration, and it is especially well suited to scenarios that don't require a sandbox environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxbm5ii4fumv5yhm624ke.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxbm5ii4fumv5yhm624ke.png" alt=" " width="800" height="445"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Real-Time Context Observability
&lt;/h3&gt;

&lt;p&gt;Acontext provides a built-in dashboard, giving you a clear view of your agent's execution process:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Track all context sessions, and monitor each task's objectives, execution process, and success or failure status.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftkm20qrzjqv6u4mag9av.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftkm20qrzjqv6u4mag9av.png" alt=" " width="800" height="1167"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fld9s7fisiri93ki11vkm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fld9s7fisiri93ki11vkm.png" alt=" " width="800" height="553"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Unlike LangSmith or Langfuse, which focus on latency and token usage, Acontext focuses on real-time, context-aware, task-level tracking. Every dynamic aspect of an agent's run is visible and actionable.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx9h461rqihfxwkiqjj4i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx9h461rqihfxwkiqjj4i.png" alt=" " width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Memory Layer for Experience Learning
&lt;/h3&gt;

&lt;p&gt;Acontext is the memory layer your AI agent actually needs. It captures what your agent does well and turns those wins into reusable skills:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Organized Workspace: Each agent has its own Notion-like workspace where skills are automatically organized and managed.&lt;/li&gt;
&lt;li&gt;Personalized Skill Library: Imagine you have 100,000 users, each with their own autonomous "Claude Skills"-style library built from past successful complex tasks. Every time your agent exceeds expectations and achieves a breakthrough, that success is captured and stored in Acontext. Your agent's success rate is no longer a matter of chance, and its experience-based learning keeps growing.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2iac9i7vtuf5r676kjig.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2iac9i7vtuf5r676kjig.png" alt=" " width="800" height="506"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Customized Control: Use the full CRUD API to add custom experiences or manage workspace content manually, whenever you need.&lt;/li&gt;
&lt;li&gt;Flexible and Precise Skill Retrieval: Acontext offers two search modes for quickly retrieving the skills and experiences you need:&lt;/li&gt;
&lt;/ul&gt;

&lt;ul&gt;
&lt;li&gt;Semantic Search: Fast and accurate retrieval of relevant skills.&lt;/li&gt;
&lt;li&gt;Agentic Search: A progressive, multi-step skill organization and retrieval system, suitable for both auto-learned and manually customized content.&lt;/li&gt;
&lt;/ul&gt;
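&lt;p&gt;A toy sketch can show the shape of the difference between the two modes. Everything below is a stand-in: the word-overlap scoring is a crude proxy for real embedding similarity, and none of this is Acontext's actual retrieval logic.&lt;/p&gt;

```python
# Toy illustration of one-shot semantic search vs. progressive agentic
# search over a skill library. The scoring is a crude stand-in for
# embeddings and is NOT Acontext's actual implementation.

SKILLS = {
    "export-csv": "Export a dataframe to CSV with the user's column order",
    "retry-api": "Retry a flaky HTTP API with exponential backoff",
    "summarize-pdf": "Summarize a long PDF into bullet points",
}


def overlap(query: str, description: str) -> float:
    """Crude relevance proxy: fraction of query words found in the skill."""
    qw = set(query.lower().split())
    dw = set(description.lower().split())
    return len(qw & dw) / max(len(qw), 1)


def semantic_search(query: str, k: int = 1) -> list[str]:
    # One-shot: rank every skill against the query and return the top k.
    ranked = sorted(SKILLS, key=lambda s: overlap(query, SKILLS[s]), reverse=True)
    return ranked[:k]


def agentic_search(query: str) -> list[str]:
    # Progressive: first gather candidates, then inspect each one and
    # keep only those whose content actually covers the query.
    candidates = semantic_search(query, k=2)
    return [s for s in candidates if overlap(query, SKILLS[s]) > 0.2]
```

Semantic search trades depth for speed (one ranking pass); agentic search spends extra steps filtering candidates, which pays off for auto-learned libraries where a single similarity score can be misleading.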

&lt;h2&gt;
  
  
  Built to Collaborate, Not Compete
&lt;/h2&gt;

&lt;p&gt;Acontext doesn't replace existing frameworks, databases, or observability tools. It's here to work alongside them, providing your agents with a shared layer where all their context data lives and evolves.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Acontext vs. Frameworks&lt;/strong&gt;&lt;br&gt;
Acontext is not an agent framework. Think of it as the place where your agent's data lives. Use OpenAI, Anthropic, LangChain, or any other stack. Acontext simply ensures agents' messages and artifacts are stored and reusable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Acontext vs. Databases&lt;/strong&gt;&lt;br&gt;
Acontext is not a new type of database either. It builds on Postgres, S3, Redis, and other proven infrastructure to store all the data your agent needs (text, code, PDFs, or images) in one place.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Acontext vs. Observability Tools&lt;/strong&gt;&lt;br&gt;
Acontext doesn't replace existing AI observability tools, but it reveals what they can't. Traditional tools can show errors, latency, and token usage, but they can't tell you whether your agent actually satisfied the user. &lt;br&gt;
Acontext tracks the full context: capturing what really happened, whether it worked, and why.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Acontext Means for Agent Developers
&lt;/h2&gt;

&lt;p&gt;Imagine having a Supabase-like data platform, but purpose-built for AI agents.&lt;br&gt;
That's what we want to create with Acontext.&lt;br&gt;
Instead of juggling storage, logging, and context engineering, you get a clean API tailored to your needs. No more wiring multiple systems together, no more tedious glue code.&lt;br&gt;
Your job is to build agents that solve real problems, not to babysit context loops.&lt;br&gt;
Let Acontext handle the memory, context, and skill learning underneath, so you can focus on delivering real value.&lt;/p&gt;

&lt;h3&gt;
  
  
  How to Use Acontext
&lt;/h3&gt;

&lt;p&gt;Acontext is still in its early stages, and what you see today is just the first version of what it will become.&lt;br&gt;
Currently, Acontext supports storing agents' context data using Postgres and S3, offers an intuitive local dashboard, and delivers one of the most effective agent self-learning experiences available today.&lt;br&gt;
But there's much more on the roadmap, and we'd love for you to try Acontext in your POC and help shape what comes next.&lt;br&gt;
So, how do you get started?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Open-Source Release&lt;/strong&gt;&lt;br&gt;
We're currently in open-source mode, moving fast and gathering feedback from the community. Download our acontext-cli for a quick test drive:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;curl -fsSL https://install.acontext.io | sh&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;With &lt;code&gt;acontext docker up&lt;/code&gt;, you can quickly launch an Acontext instance on your local machine.&lt;br&gt;
We provide Python and TypeScript SDKs, so you can easily push data in.&lt;br&gt;
Acontext also supports storing OpenAI and Anthropic message formats directly.&lt;br&gt;
You can explore a few examples to get a feel for how to use Acontext:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;acontext create&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;If you'd like to look through more examples first, check out our example repo.&lt;/p&gt;
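&lt;p&gt;The "OpenAI and Anthropic message formats" mentioned above are the two providers' standard chat-message shapes. The transcripts below are made-up examples of those shapes; the SDK call that would push them into Acontext is omitted here because this post doesn't show its signature.&lt;/p&gt;

```python
import json

# OpenAI-style chat messages: tool use is expressed via a "tool_calls"
# field on the assistant message, answered by a "tool" role message.
openai_messages = [
    {"role": "user", "content": "Find last quarter's revenue."},
    {"role": "assistant", "content": None,
     "tool_calls": [{"id": "call_1", "type": "function",
                     "function": {"name": "query_db",
                                  "arguments": json.dumps({"metric": "revenue"})}}]},
    {"role": "tool", "tool_call_id": "call_1", "content": "$4.2M"},
]

# Anthropic-style messages express the same exchange as typed content
# blocks: a "tool_use" block from the assistant, then a "tool_result"
# block inside the next user message.
anthropic_messages = [
    {"role": "user", "content": "Find last quarter's revenue."},
    {"role": "assistant", "content": [
        {"type": "tool_use", "id": "toolu_1", "name": "query_db",
         "input": {"metric": "revenue"}}]},
    {"role": "user", "content": [
        {"type": "tool_result", "tool_use_id": "toolu_1", "content": "$4.2M"}]},
]
```

Storing these formats directly means the agent loop's transcript can be persisted as-is, with no intermediate schema to maintain.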

&lt;h2&gt;
  
  
  Join the Journey
&lt;/h2&gt;

&lt;p&gt;We're building the Context Data Platform for AI agents together, through open source.&lt;br&gt;
This is a new territory. No one exactly knows what a Context Data Platform should look like yet. But that's what makes it exciting: we get to figure it out together with the developers who are building the next generation of AI Agents.&lt;br&gt;
Here's how you can get involved:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Join our Discord Community to connect with other builders&lt;/li&gt;
&lt;li&gt;Try Acontext locally and share what you learn&lt;/li&gt;
&lt;li&gt;Open issues, submit PRs, or tell us what's working (and what's not)&lt;/li&gt;
&lt;li&gt;GitHub: &lt;a href="http://github.com/memodb-io/Acontext" rel="noopener noreferrer"&gt;http://github.com/memodb-io/Acontext&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We're at the beginning of something big: building a Data Platform for AI Agents.&lt;br&gt;
Let's build it together!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>opensource</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
