<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Oracle Developers</title>
    <description>The latest articles on Forem by Oracle Developers (@oracledevs).</description>
    <link>https://forem.com/oracledevs</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F11587%2F7c934ee0-6aa6-42f9-b43f-91e6fa82ef41.png</url>
      <title>Forem: Oracle Developers</title>
      <link>https://forem.com/oracledevs</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/oracledevs"/>
    <language>en</language>
    <item>
      <title>Oracle AI Agent Memory: A Governed, Unified Memory Core for Enterprise AI Agents</title>
      <dc:creator>Wojtek Pluta</dc:creator>
      <pubDate>Mon, 04 May 2026 08:57:04 +0000</pubDate>
      <link>https://forem.com/oracledevs/oracle-ai-agent-memory-a-governed-unified-memory-core-for-enterprise-ai-agents-4ml8</link>
      <guid>https://forem.com/oracledevs/oracle-ai-agent-memory-a-governed-unified-memory-core-for-enterprise-ai-agents-4ml8</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;This article is syndicated from the original post on &lt;a href="https://blogs.oracle.com/developers/oracle-ai-agent-memory-a-governed-unified-memory-core-for-enterprise-ai-agents" rel="noopener noreferrer"&gt;blogs.oracle.com&lt;/a&gt;. Read the canonical version there for the latest updates.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Recently, Oracle introduced the &lt;a href="https://www.oracle.com/database/ai-agent-memory/" rel="noopener noreferrer"&gt;Oracle AI Agent Memory&lt;/a&gt; Python package, a model and framework-agnostic memory solution that gives enterprise AI teams a governed memory core on Oracle AI Database: short-term threads with summaries and context cards, long-term durable memories with vector search, automatic LLM-based memory extraction, and the governance and isolation production agents require.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F05%2FDiagram-1-transparent-1024x698.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F05%2FDiagram-1-transparent-1024x698.png" title="Oracle Agent Memory: a unified agent memory layer built on Oracle AI Database." alt="Diagram titled “Oracle Agent Memory” showing a layered architecture. At the top, a framework layer includes LangGraph, Claude Agent SDK, OpenAI Agent SDK, WayFlow, and custom integrations feeding into a unified Oracle Agent Memory client. The client manages working, semantic, episodic, and procedural memory types. Below, the system is powered by Oracle AI Database, which provides governed, isolated, audited, encrypted, and highly available infrastructure with vector search, graph traversal, and relational query capabilities." width="800" height="545"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Oracle Agent Memory: a unified agent memory layer built on Oracle AI Database.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Oracle AI Agent Memory is available now via &lt;a href="https://pypi.org/project/oracleagentmemory/" rel="noopener noreferrer"&gt;PyPI&lt;/a&gt; as &lt;code&gt;oracleagentmemory&lt;/code&gt; and documented in the &lt;a href="https://docs.oracle.com/en/database/oracle/agent-memory/26.4/" rel="noopener noreferrer"&gt;Oracle Help Center&lt;/a&gt;. It is designed to replace the patchwork memory stack most production agents inherit with a single governed memory substrate built on Oracle AI Database.&lt;/p&gt;

&lt;p&gt;This is the difference between an agent that is memory-augmented (handed a vector store to consult) and one that is memory-aware (responsible for reading from and writing to its own governed, durable state on one enterprise-grade backend).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"&lt;em&gt;Agent memory has shifted from a research curiosity to a production requirement in under two years. Teams shipping serious agentic systems need a backend that handles vectors, structured data, and transactional consistency in one place, not three stitched together. Oracle AI Database is one of the few platforms that delivers all of that natively, which is why we built Hindsight to run on it as a first-class backend.&lt;/em&gt;"&lt;br&gt;
&lt;em&gt;— Chris Latimer, Co-Founder &amp;amp; CEO of Vectorize&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Why Agent Memory, Why Now
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F05%2FDiagram-2-transparent-1024x914.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F05%2FDiagram-2-transparent-1024x914.png" title="The four types of agent memory: working, semantic, episodic, and procedural." alt="Diagram titled “The four types of agent memory” showing an Oracle Agent Memory taxonomy. A central “Agent memory” layer branches into four categories: working memory (active state like current conversation and in-flight tasks), semantic memory (durable facts such as user preferences and entity data), episodic memory (specific past experiences like prior sessions and resolved tasks), and procedural memory (behavioral rules and tool preferences). Each category includes a short description and example elements." width="800" height="714"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The four types of agent memory: working, semantic, episodic, and procedural.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Most agent implementations treat memory as a bolt-on: a vector store consulted at retrieval time, a chat-history table glued on beside it, and whatever hand-written extraction logic the team can maintain.&lt;/p&gt;

&lt;p&gt;That stack holds together for a demo. It falls apart the moment an enterprise AI team asks the questions that actually matter. Who owns the memory? Where is it governed? How do we isolate tenants? How do we audit what the agent learned, and how do we forget it on request?&lt;/p&gt;

&lt;p&gt;Context windows have grown over the years, but no context window is large enough to hold the full state of a long-running agent: weeks of user preferences, accumulated domain knowledge, prior tool outcomes, evolving task state, and the reasoning history that makes each decision defensible.&lt;/p&gt;

&lt;p&gt;Agents need memory for the same reasons people do: to hold an active state while working on a problem, to retain facts learned over time, to recall specific past experiences, and to encode behavioral rules and procedures.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A practical taxonomy commonly used in agent design covers four types of memory:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Working memory is the active state the agent is reasoning over right now: the running conversation and the scratchpad the model sees at inference time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Semantic memory is the durable facts and knowledge the agent accumulates about users, entities, and the world: preferences, canonical definitions, structured reference data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Episodic memory is specific past experiences the agent can recall: what happened in a prior session, what the user asked three weeks ago, how a similar task resolved last time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Procedural memory is the behavioral rules, guidelines, and learned procedures that shape how the agent acts: how to handle customers, which tools to prefer, what not to do.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
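&lt;p&gt;The four types above can be sketched as one record shape with a discriminator field rather than four separate stores. The model below is purely illustrative; it is not the &lt;code&gt;oracleagentmemory&lt;/code&gt; schema:&lt;/p&gt;

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

# Illustrative model of the taxonomy only, not the oracleagentmemory schema.
class MemoryType(Enum):
    WORKING = "working"        # active state: current conversation, scratchpad
    SEMANTIC = "semantic"      # durable facts: preferences, entity data
    EPISODIC = "episodic"      # specific past experiences: prior sessions
    PROCEDURAL = "procedural"  # behavioral rules: tool preferences, guidelines

@dataclass
class MemoryRecord:
    memory_type: MemoryType
    content: str
    user_id: str
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# All four types live in one store, distinguished by the discriminator field.
record = MemoryRecord(MemoryType.SEMANTIC, "Prefers vegan meals.", "user_123")
```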

&lt;p&gt;These are not four different systems. They are four access patterns over the same underlying state, which is what makes a unified memory core the right architectural answer rather than four bolted-together services.&lt;/p&gt;




&lt;h2&gt;
  
  
  Oracle AI Database as the Memory Core
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F05%2FDiagmar-3-transparent-775x1024.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F05%2FDiagmar-3-transparent-775x1024.png" title="Reference architecture for Oracle AI Agent Memory on Oracle AI Database." alt="Reference architecture diagram titled “Oracle AI Agent Memory on Oracle AI Database.” It shows the flow from a customer-owned application tier (end users interacting via natural language with an AI agent built using frameworks like LangGraph or OpenAI Agent SDK) to the Oracle-owned memory SDK and Oracle AI Database. The Oracle AI Agent Memory layer provides APIs for search, message handling, and memory extraction with tenant isolation and governance. It connects to Oracle AI Database, which supports vector search, relational queries, graph traversal, and JSON storage. The diagram also highlights enterprise capabilities like backup, replication, high availability, encryption, access control, and auditing, with arrows indicating request and response flow." width="775" height="1024"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Reference architecture for Oracle AI Agent Memory on Oracle AI Database.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Oracle AI Database combines vector similarity search, relational querying, and graph-aware data access in one governed engine, enabling semantic recall alongside precise transactional and relationship-centric retrieval. Combined with Oracle's operational story (backups, replication, high availability, encryption, fine-grained access control, and audit), teams get a path from notebook to regulated production without swapping storage layers, rewriting compliance reviews, or stitching together bespoke isolation logic along the way.&lt;/p&gt;

&lt;p&gt;Memory engineering, as a discipline, demands substrate choices that hold up under the access patterns a real enterprise agent actually has: concurrent writes, per-user and per-tenant scoping, full audit, and semantic retrieval at scale.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;“Enterprise agents need an agent memory solution with robust security guarantees, strong governance controls, sophisticated workload isolation, as well as deep integration within the enterprise data platform. Oracle AI Agent Memory greatly simplifies building agent memory solutions by consolidating what are usually multiple separate and fragmented services, within the converged database architecture that customers already trust for their most critical data.”&lt;/em&gt;&lt;br&gt;
&lt;em&gt;— Tirthankar Lahiri, SVP, Mission-Critical Data and AI Engines, Oracle Database&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Production agent memory carries two loads. Developers wire the stack together: vector store, chat log, extraction scripts, isolation logic, governance per piece. Agents reason over the fragments, deciding what to retrieve from where and fitting the relevant world into a finite context window each turn.&lt;/p&gt;

&lt;p&gt;Oracle AI Agent Memory lifts both. One governed client replaces the four-service stack, with one set of credentials, one compliance review, and one backup story. Working, semantic, episodic, and procedural memory share one substrate and one retrieval surface, so the model reasons over a coherent view of its state. Summarization and scoped retrieval put the right subset into context at the right moment, freeing the model to spend its reasoning budget on the task rather than memory bookkeeping.&lt;/p&gt;

&lt;p&gt;Automatic LLM-based extraction turns conversation into durable memories without hand-rolled prompt chains. Multi-tenant isolation is enforced at the store layer, so a single schema can host multiple deployments without cross-tenant leakage. And because the SDK is framework-agnostic, with integrations for &lt;a href="https://github.com/oracle-devrel/oracle-ai-developer-hub/tree/main/notebooks/agent_memory" rel="noopener noreferrer"&gt;LangGraph, Claude Agent SDK, OpenAI Agent SDK, WayFlow, and custom harnesses&lt;/a&gt;, teams aren't locked into a single runtime to get the substrate.&lt;/p&gt;




&lt;h2&gt;
  
  
  Key Benefits for AI Workloads
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F05%2FDiagram-4-transparent-1024x584.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F05%2FDiagram-4-transparent-1024x584.png" title="Configuration: gpt-5.5, reasoning effort xhigh, nomic-embed-v1.5 embeddings, local HNSW index, top-K = 200. X-axis truncated; all categories scored above 88%." alt="Bar chart showing LongMemEval results with 93.8% overall accuracy (469/500). Per-category scores: single-session assistant 100%, temporal reasoning 96.2%, knowledge update 94.9%, single-session user 94.3%, single-session preference 93.3%, and multi-session 88.0%. Configuration notes include GPT-5.5, nomic-embed-v1.5 embeddings, and HNSW index." width="800" height="456"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Configuration: gpt-5.5, reasoning effort xhigh, nomic-embed-v1.5 embeddings, local HNSW index, top-K = 200. X-axis truncated; all categories scored above 88%.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Oracle AI Agent Memory is built for the operational realities of running AI agents in production.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Production-grade recall on long-horizon memory benchmarks.&lt;/strong&gt; On &lt;a href="https://arxiv.org/abs/2410.10813" rel="noopener noreferrer"&gt;LongMemEval&lt;/a&gt;, the standard academic benchmark for long-context agent memory, Oracle AI Agent Memory scores &lt;strong&gt;93.8%&lt;/strong&gt; (469 of 500), with the strongest results on the categories that matter most for production agents: 100% on single-session assistant recall, 96% on temporal reasoning, and 95% on knowledge-update tasks. Multi-session recall, the hardest category in the benchmark, lands at 88%. Configuration: OpenAI gpt-5.5 (reasoning effort xhigh), nomic-embed-text-v1.5 embeddings, local HNSW index, top-K = 200.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bounded per-turn cost as sessions extend.&lt;/strong&gt; Periodic thread summarization, durable memory extraction, and prompt-time message compaction keep the working context bounded as conversations grow. In an 80-turn scripted conversation, Oracle AI Agent Memory held per-request input around 1,300 tokens for the full run while a flat-history baseline grew linearly past 13,900 — roughly 9.5× more tokens per request by the final turn, and a much steeper bill across the full conversation. Teams shipping long-running agents trade a linear-in-history cost curve for a flat one.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F05%2FDiagram-5-transparent-1024x552.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F05%2FDiagram-5-transparent-1024x552.png" title="80-turn ChromAtlas-ND scripted conversation · gpt-5.4 (raw OpenAI client, no framework). Token estimate: chars / 4 (notebook convention)" alt="Line chart comparing tokens per request over 80 conversation turns. A gray line (no memory management) rises steadily to ~13,900 tokens, while a red line (Oracle AI Agent Memory) stays flat around ~1,300 tokens. The chart highlights ~9.5× lower token usage with memory, showing stable context size as conversations grow." width="800" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;80-turn ChromAtlas-ND scripted conversation · gpt-5.4 (raw OpenAI client, no framework). Token estimate: chars / 4 (notebook convention)&lt;/em&gt;&lt;/p&gt;
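&lt;p&gt;The gap in the chart above can be reproduced with the notebook's chars / 4 token convention. The turn length, summary size, and window below are assumptions picked to land near the published curves, not measured values:&lt;/p&gt;

```python
# Token estimate convention from the notebook: tokens = chars / 4.
CHARS_PER_TURN = 700   # assumed average turn length, not a measured value

def estimate_tokens(chars):
    return chars // 4

# Flat history: the prompt carries every prior turn verbatim, so it grows linearly.
def flat_tokens_at_turn(n):
    return estimate_tokens(n * CHARS_PER_TURN)

# Bounded context: a fixed-size summary plus a small recent-message window.
SUMMARY_CHARS = 2000
WINDOW_TURNS = 5
bounded_tokens = estimate_tokens(SUMMARY_CHARS + WINDOW_TURNS * CHARS_PER_TURN)

print(flat_tokens_at_turn(80))  # 14000: near the ~13,900 flat-history endpoint
print(bounded_tokens)           # 1375: near the ~1,300 flat line
```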

&lt;p&gt;&lt;strong&gt;Better answers than a flat-history baseline.&lt;/strong&gt; A flat-history agent has the entire verbatim conversation in its prompt — every fact ever mentioned, in order. By rights it should be hard to beat on recall. Across the same 80-turn conversation, evaluated by an impartial gpt-5.4 judge on accuracy, completeness, relevance, and coherence, Oracle AI Agent Memory won &lt;strong&gt;48 turns to flat history's 13&lt;/strong&gt;, with 19 ties: &lt;strong&gt;3.7× more wins despite the baseline's information advantage&lt;/strong&gt;. A retrieved context card focuses the model on what matters; a sprawling transcript dilutes attention across noise.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F05%2FDiagram-6-transparent-1024x563.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F05%2FDiagram-6-transparent-1024x563.png" title="80-turn ChromAtlas-ND scripted conversation; judge: gpt-5.4; scored on accuracy, completeness, relevance, coherence" alt="Bar chart showing agent performance over 80 turns. Oracle AI Agent Memory wins 48 turns (60%), compared to 13 wins (16%) for a naive flat history" width="800" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;80-turn ChromAtlas-ND scripted conversation; judge: gpt-5.4; scored on accuracy, completeness, relevance, coherence&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cost is a tunable knob, not a fixed value.&lt;/strong&gt; The summarization trigger controls how aggressively the package compacts thread context, and it moves the cost-fidelity trade-off directly. In an 8-query demo conversation (five runs per threshold), a 10,000-token trigger landed at a mean of 121,268 total tokens, about 60% under the 306,823-token flat-history baseline. As the trigger rises, the package compacts less often and preserves more raw context per turn; by a 50–70k trigger, mean total tokens approach or exceed the baseline, and run-to-run variance widens. Teams pick the threshold that matches their answer-quality requirements and lock in the cost envelope they want, rather than accepting whatever curve a fragmented stack produces.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F05%2FDiagram-7-transparent-scaled.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F05%2FDiagram-7-transparent-scaled.png" title="Memory Agent Efficiency vs Summarization Threshold on Demo Conversation. Num queries = 8; num runs per threshold = 5." alt="Line chart titled “Oracle Agent Memory Threshold Sweep” showing total tokens vs. summarization trigger (10k–70k). Mean tokens rise as the threshold increases, with shaded min–max and standard deviation bands. The lowest mean (~121k tokens) occurs at 10k, while higher thresholds approach or exceed a dashed naive baseline (~306k tokens)." width="800" height="438"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Memory Agent Efficiency vs Summarization Threshold on Demo Conversation. Num queries = 8; num runs per threshold = 5.&lt;/em&gt;&lt;/p&gt;
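&lt;p&gt;The trade-off the trigger controls can be sketched as a toy simulation: the lower the trigger, the more often context is compacted and the smaller the total bill. Every constant below is an assumption, not the benchmark's configuration:&lt;/p&gt;

```python
# Toy model of the threshold sweep: context accumulates until it crosses the
# trigger, then is compacted to a fixed-size summary. All numbers are assumed.
def total_tokens(trigger_tokens, turns=80, chars_per_turn=700, summary_chars=2000):
    per_turn = chars_per_turn // 4   # chars / 4 token convention
    summary = summary_chars // 4
    context, total = 0, 0
    for _ in range(turns):
        context += per_turn
        overflow = max(0, context - trigger_tokens)
        if overflow:                 # crossed the trigger: compact to a summary
            context = summary
        total += context             # each turn's prompt pays for the live context
    return total

# A lower trigger compacts more often, trading raw context for a smaller bill.
print(total_tokens(1000), total_tokens(10000))
```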

&lt;p&gt;&lt;strong&gt;One backend, every Python runtime.&lt;/strong&gt; &lt;a href="https://github.com/oracle-devrel/oracle-ai-developer-hub/tree/main/notebooks/agent_memory" rel="noopener noreferrer"&gt;LangGraph, the Claude Agent SDK, the OpenAI Agents SDK, WayFlow, and custom Python harnesses&lt;/a&gt; all instantiate the same OracleAgentMemory client and read and write the same Oracle Database store. Teams running more than one framework no longer rebuild memory per runtime, and migrations between frameworks no longer mean migrating memory.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Primitives for audit and erasure on a single substrate.&lt;/strong&gt; Every record carries user, agent, thread, and timestamp scoping fields, and the SDK exposes search, list, and per-record delete operations across memories, threads, and messages, so callers can locate records for a subject and remove them on request. Oracle Database's native auditing covers the storage layer underneath. Compliance reviews land on a single substrate (one database with audit, retention, and access controls already in the data plane) rather than four services with four reviews.&lt;/p&gt;
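&lt;p&gt;An erasure flow over those primitives might look like the sketch below. The &lt;code&gt;search&lt;/code&gt; and &lt;code&gt;delete&lt;/code&gt; call shapes are assumptions that mirror the operations described above, with an in-memory stub standing in for the real client:&lt;/p&gt;

```python
from dataclasses import dataclass

# Hypothetical erasure helper. The search/delete call shapes mirror the
# operations described above but are assumptions, not the documented API;
# the in-memory stub stands in for the real client.
@dataclass
class Record:
    id: int
    user_id: str
    content: str

class StubMemoryStore:
    def __init__(self):
        self._records = {}
        self._next_id = 0

    def add(self, user_id, content):
        self._next_id += 1
        self._records[self._next_id] = Record(self._next_id, user_id, content)

    def search(self, user_id):
        return [r for r in self._records.values() if r.user_id == user_id]

    def delete(self, record_id):
        del self._records[record_id]

def erase_user(store, user_id):
    """Locate every record scoped to the subject, delete it, return the count."""
    records = store.search(user_id=user_id)
    for r in records:
        store.delete(r.id)
    return len(records)

store = StubMemoryStore()
store.add("user_123", "prefers vegan meals")
store.add("user_456", "prefers morning meetings")
assert erase_user(store, "user_123") == 1
assert store.search(user_id="user_123") == []
```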

&lt;p&gt;&lt;strong&gt;One vendor relationship for production agent memory.&lt;/strong&gt; A single Oracle AI Database instance carries vector search, structured state, JSON document retrieval, transactional consistency, and database-native audit. No second vector database to license, no third service to monitor and scale, no fourth backup pipeline to maintain.&lt;/p&gt;

&lt;h2&gt;
  
  
  Who Oracle AI Agent Memory Is For
&lt;/h2&gt;

&lt;p&gt;Oracle AI Agent Memory is designed for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AI developers and engineers&lt;/strong&gt; building production agents who need durable short-term and long-term memory in one place, with enterprise security and isolation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Teams already running Oracle AI Database&lt;/strong&gt; who want their agents to write to the same governed backend as the rest of the business.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Technical leaders&lt;/strong&gt; evaluating Oracle AI Database for agent memory infrastructure at scale, with compliance and audit requirements.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;p&gt;Install the Oracle AI Agent Memory package:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;oracleagentmemory
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A minimal end-to-end loop in Python looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;oracleagentmemory&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;AgentMemory&lt;/span&gt;

&lt;span class="n"&gt;memory&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;AgentMemory&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_connection&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;connection_string&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user_123&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Add conversation turns to a short-term thread
&lt;/span&gt;&lt;span class="n"&gt;thread_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;memory&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create_thread&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user_123&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;memory&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_messages&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;thread_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;I prefer vegan meals.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;assistant&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Noted.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;])&lt;/span&gt;

&lt;span class="c1"&gt;# Extract durable long-term memories from the thread
&lt;/span&gt;&lt;span class="n"&gt;memory&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;extract_memories&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;thread_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Scoped search over long-term memory, enforced per-user
&lt;/span&gt;&lt;span class="n"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;memory&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;search&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user_123&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;dietary preferences&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;limit&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Code samples are illustrative; the final API surface is documented in the &lt;a href="https://docs.oracle.com/en/database/oracle/agent-memory/26.4/" rel="noopener noreferrer"&gt;Oracle Help Center&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://github.com/oracle-devrel/oracle-ai-developer-hub/tree/main/notebooks/agent_memory" rel="noopener noreferrer"&gt;quickstart notebook and framework how-to guides&lt;/a&gt; are available in the Oracle AI Developer Hub, and the full API reference is available in the &lt;a href="https://docs.oracle.com/en/database/oracle/agent-memory/26.4/" rel="noopener noreferrer"&gt;Oracle Help Center&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Oracle AI Agent Memory is the first release in a broader commitment to the governed memory substrate enterprise agents need. Memory engineering is still an emerging discipline. The infrastructure behind it should not be.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>database</category>
      <category>oracle</category>
    </item>
    <item>
      <title>What Is Agent Memory? A Beginner’s Guide for AI Developers</title>
      <dc:creator>Anya Summers</dc:creator>
      <pubDate>Wed, 29 Apr 2026 09:11:11 +0000</pubDate>
      <link>https://forem.com/oracledevs/what-is-agent-memory-a-beginners-guide-for-ai-developers-5djd</link>
      <guid>https://forem.com/oracledevs/what-is-agent-memory-a-beginners-guide-for-ai-developers-5djd</guid>
      <description>&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; Agent memory is stored state an AI agent can retrieve across sessions to maintain continuity. A bigger context window does not fix the problem. Once memory has to persist, be scoped to the right user, and be retrieved reliably, it becomes a data problem, and is often best handled in a database such as Oracle AI Database.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Matters
&lt;/h2&gt;

&lt;p&gt;You can build a convincing AI agent surprisingly fast. Give it a model, wire up a few tools, and it can look sharp in the first session. Then the user comes back the next day. They ask a follow-up question. They refer to a failed attempt from yesterday. They expect the agent to remember that they prefer Python examples and concise answers. Instead, the agent starts from scratch.&lt;/p&gt;

&lt;p&gt;That is usually the moment when a demo stops feeling clever and starts feeling flimsy. A lot of beginner guides blur this point. They talk as if a larger context window solves the whole problem. It does not. A larger context window gives the model more room to work during one session. It does not give the system a memory of what happened last week. When the session ends, the context goes with it. Memory is the layer that preserves what matters.&lt;/p&gt;




&lt;h2&gt;
  
  
  What You'll Learn
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;What agent memory actually is, and why a bigger context window is not a substitute&lt;/li&gt;
&lt;li&gt;The four useful types of agent memory: working, procedural, semantic, and episodic&lt;/li&gt;
&lt;li&gt;When memory stops being a prompt trick and starts being infrastructure&lt;/li&gt;
&lt;li&gt;How to implement a persistent semantic memory store using LangChain and Oracle AI Database&lt;/li&gt;
&lt;li&gt;Common mistakes to avoid and a checklist for building your first memory layer&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What Is Agent Memory?
&lt;/h2&gt;

&lt;p&gt;Agent memory is the information an AI agent can carry from one interaction to the next. That information might be a user preference, a summary of an earlier conversation, a previous task result, or facts the system has learned and may need later.&lt;/p&gt;

&lt;p&gt;The key point is simple. It is not enough that the model saw the information once. The system needs to be able to bring it back when it matters.&lt;/p&gt;

&lt;p&gt;Imagine a user tells an assistant three things today:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;they prefer concise answers&lt;/li&gt;
&lt;li&gt;they are working in Python&lt;/li&gt;
&lt;li&gt;the last attempt failed because their API key had expired&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If the assistant can use that information tomorrow without being told again, it has memory. If the user has to repeat all three points, it does not. That is the difference.&lt;/p&gt;




&lt;h2&gt;
  
  
  Context Window vs Memory: What Is the Difference?
&lt;/h2&gt;

&lt;p&gt;This is the part that trips people up. A context window is the text the model can see right now. That includes the prompt, the recent messages, retrieved documents, tool outputs, and any system instructions passed into the current call. It is the model's live working space. Memory is different. Memory is stored state the system can recover later.&lt;/p&gt;

&lt;p&gt;The simplest analogy is this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the context window is the desk&lt;/li&gt;
&lt;li&gt;memory is the filing cabinet&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A bigger desk is useful. You can spread out more notes and hold more detail in front of the model. But the desk gets cleared. The filing cabinet is what lets you come back tomorrow, open the right folder, and pick up where you left off.&lt;/p&gt;

&lt;p&gt;It also helps to separate memory from Retrieval Augmented Generation (RAG). RAG brings in external knowledge (e.g., company PDFs) so the model can answer a question with better grounding. Memory, by contrast, preserves useful state from previous interactions. One helps the agent know more in the moment; the other helps it behave with continuity over time.&lt;/p&gt;

&lt;p&gt;In practice, the strongest systems usually use all three layers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;context window for active reasoning&lt;/li&gt;
&lt;li&gt;retrieval for outside knowledge&lt;/li&gt;
&lt;li&gt;memory for continuity across sessions&lt;/li&gt;
&lt;/ul&gt;
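
&lt;p&gt;The three layers above can be sketched as a single prompt-assembly step. This is an illustrative, framework-free sketch; the function and variable names are assumptions, not a real API:&lt;/p&gt;

```python
# Sketch: one model call assembled from the three layers.
# All names here are illustrative, not a real framework API.

def build_prompt(system_rules, retrieved_docs, memories, user_message):
    """Combine instructions, retrieved knowledge, and stored memories."""
    sections = [system_rules]
    if memories:  # memory layer: continuity across sessions
        sections.append("Known about this user:\n- " + "\n- ".join(memories))
    if retrieved_docs:  # retrieval layer: outside knowledge (RAG)
        sections.append("Reference material:\n" + "\n".join(retrieved_docs))
    sections.append("User: " + user_message)  # context window: the live turn
    return "\n\n".join(sections)

prompt = build_prompt(
    system_rules="You are a helpful assistant.",
    retrieved_docs=["Sorting: list.sort() sorts in place; sorted() returns a copy."],
    memories=["Prefers concise answers", "Works in Python"],
    user_message="Can you help me sort a list?",
)
print(prompt)
```

The model itself never changes; only the text assembled around the user's message does.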

&lt;p&gt;A simple scenario makes the difference concrete. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Monday:&lt;/strong&gt; The user says, "I am learning Python and I prefer short answers." The agent helps them debug a script and the session ends. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tuesday:&lt;/strong&gt; The user returns and asks, "Can you help me sort a list?" &lt;/p&gt;

&lt;p&gt;Without memory, the agent gives a long answer in whichever language it guesses. With memory, the agent retrieves the user's preference, responds in Python, and keeps the answer concise. Same model. Same prompt. Different system around it.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Four Types of Agent Memory
&lt;/h2&gt;

&lt;p&gt;A simple way to understand agent memory is to borrow a rough model from human memory. It's not perfect, but it's useful.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
  &lt;tr&gt;
    &lt;th&gt;Memory type&lt;/th&gt;
    &lt;th&gt;Simple meaning&lt;/th&gt;
&lt;th&gt;Example in an agent&lt;/th&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;Working memory&lt;/td&gt;
    &lt;td&gt;What the agent is handling right now&lt;/td&gt;
&lt;td&gt;Current messages, tool outputs, temporary reasoning state&lt;/td&gt;
  &lt;/tr&gt;
&lt;tr&gt;
    &lt;td&gt;Procedural memory&lt;/td&gt;
    &lt;td&gt;How the agent does things&lt;/td&gt;
&lt;td&gt;Instructions, workflows, and tool-use rules&lt;/td&gt;
  &lt;/tr&gt;
&lt;tr&gt;
    &lt;td&gt;Semantic memory&lt;/td&gt;
    &lt;td&gt;Facts the agent has learned&lt;/td&gt;
&lt;td&gt;User preferences, saved facts, product knowledge&lt;/td&gt;
  &lt;/tr&gt;
&lt;tr&gt;
    &lt;td&gt;Episodic memory&lt;/td&gt;
    &lt;td&gt;Specific past events&lt;/td&gt;
&lt;td&gt;Previous sessions, task history, and failed attempts&lt;/td&gt;
  &lt;/tr&gt;
&lt;/table&gt;&lt;/div&gt;
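
&lt;p&gt;One way to make the table concrete in code is to tag every stored memory with its type. The following is a minimal sketch with illustrative names, not any particular framework's schema:&lt;/p&gt;

```python
# Sketch: a typed memory record so retrieval can filter by kind.
# Names are illustrative, not a real library's schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class MemoryType(Enum):
    WORKING = "working"        # current messages, tool outputs
    PROCEDURAL = "procedural"  # instructions, workflows, tool-use rules
    SEMANTIC = "semantic"      # learned facts and preferences
    EPISODIC = "episodic"      # specific past events and failed attempts

@dataclass
class MemoryRecord:
    user_id: str
    memory_type: MemoryType
    text: str
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

record = MemoryRecord("user_123", MemoryType.SEMANTIC, "Prefers concise answers.")
```

A support agent might persist only SEMANTIC and EPISODIC records, while a coding agent leans on PROCEDURAL ones.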

&lt;p&gt;This framework matters because not every agent needs the same mix. A customer support agent may need semantic memory for customer preferences and episodic memory for past tickets. A coding agent may care more about procedural memory for workflows and semantic memory for project conventions. A one-shot Q&amp;amp;A bot may not need much memory at all.&lt;/p&gt;

&lt;p&gt;That's worth keeping in mind: "add memory" is not a universal requirement. It only makes sense when continuity actually improves the experience or the outcome.&lt;/p&gt;




&lt;h2&gt;
  
  
  When Does Agent Memory Become a Data Problem?
&lt;/h2&gt;

&lt;p&gt;The moment you want memory to persist, you're no longer just writing prompts. You are making storage and retrieval decisions.&lt;/p&gt;

&lt;p&gt;You need to decide:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;what is worth storing&lt;/li&gt;
&lt;li&gt;what should be ignored&lt;/li&gt;
&lt;li&gt;how memories are tied to the right user&lt;/li&gt;
&lt;li&gt;how old or stale memories get updated or removed&lt;/li&gt;
&lt;li&gt;how the system finds the right memory at the right time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is where many first agent builds get messy. Saving text is easy. Bringing back the right memory for the right user, in the right context, without pulling in noise, is the hard part. Once memory has to persist and be searchable, structure matters. You typically need metadata such as user_id, memory_type, timestamps, and maybe expiry rules. You also need a retrieval strategy that avoids surfacing irrelevant or outdated information.&lt;/p&gt;
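
&lt;p&gt;Before any similarity search runs, a memory store needs exactly this kind of scoping. Here is a minimal in-memory sketch, assuming plain dict records with user_id and created_at fields:&lt;/p&gt;

```python
# Sketch: scope memories to the right user and drop stale entries
# before similarity search even enters the picture. Record shape is assumed.
from datetime import datetime, timedelta, timezone

def eligible_memories(memories, user_id, max_age_days=90):
    """Return only memories owned by this user and younger than the cutoff."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)
    return [
        m for m in memories
        if m["user_id"] == user_id and m["created_at"] > cutoff
    ]

store = [
    {"user_id": "user_123", "memory_type": "preference",
     "text": "Prefers Python examples.",
     "created_at": datetime.now(timezone.utc)},
    {"user_id": "user_456", "memory_type": "preference",
     "text": "Prefers Java examples.",
     "created_at": datetime.now(timezone.utc)},
]
print(eligible_memories(store, "user_123"))
```

In a real system these two filters become a WHERE clause next to the vector search, rather than Python code.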

&lt;p&gt;Persistence also introduces governance concerns. As soon as an agent is storing anything that can be traced to a person, you are dealing with personally identifiable information, and every mature system needs answers to a small set of questions. What personal data is being stored? How long is it kept? How does a user request deletion, and can the system actually honour that request? &lt;/p&gt;

&lt;p&gt;Building those answers in from day one is much easier than retrofitting them after the first audit or data subject request. Governance lives best as code and schemas, not as a Confluence page somebody hopes to find later.&lt;/p&gt;

&lt;p&gt;This is why databases appear so quickly in serious agent systems. Prompts are temporary. Memory needs storage, filtering, and lifecycle rules. If you are storing embeddings, scoping memories to a user, and reusing them later, you are designing a small data system whether you planned to or not. &lt;/p&gt;

&lt;p&gt;That sounds heavier than it is. You do not need a giant memory platform on day one. But you do need to stop thinking of memory as "extra text for the prompt". It is a system component. Without structure and filtering, memory quickly turns into noisy context that reduces answer quality instead of improving it.&lt;/p&gt;




&lt;h2&gt;
  
  
  Architecture Overview
&lt;/h2&gt;

&lt;p&gt;At a high level, a production-ready agent memory system has three layers working together: a context window for active reasoning, a retrieval layer for outside knowledge (RAG), and a persistent memory store for continuity across sessions. The memory store is where platforms like Oracle AI Database provide the most value.&lt;/p&gt;

&lt;p&gt;Oracle AI Database is a strong fit for production memory systems for three reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Durability&lt;/strong&gt;: Memory is only valuable if it survives restarts, deployments, and the kind of quiet infrastructure changes that happen in any real engineering environment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Metadata-driven filtering&lt;/strong&gt;: Storing vectors next to structured columns like user_id, tenant_id, memory_type, and created_at means retrieval can be scoped cleanly without building a second database to hold the filters. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lifecycle control&lt;/strong&gt;: Expiry, archival, soft-delete, and audit trails are problems databases have been solving for decades, and memory needs all of them. Running vector search in the same database that already holds the relational and governance layer removes a whole category of synchronisation bugs that would otherwise appear on week three.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Python 3.9 or later&lt;/li&gt;
&lt;li&gt;Access to an Oracle AI Database 26ai instance (Autonomous Database, container, or local install)&lt;/li&gt;
&lt;li&gt;An embedding model configured and callable from your environment&lt;/li&gt;
&lt;li&gt;Basic familiarity with LangChain concepts (vector stores, retrievers)&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Step-by-Step Guide: A Simple Memory Layer with LangChain and Oracle
&lt;/h2&gt;

&lt;p&gt;The goal here is not to build a huge platform. It is to make the pattern concrete: save a useful memory, attach metadata, and retrieve it later when it becomes relevant.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Install the Packages
&lt;/h3&gt;

&lt;p&gt;Install the LangChain Oracle integration along with the Oracle Python driver and LangChain core.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;langchain-oracledb oracledb langchain-core
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2: Connect to a Persistent Store
&lt;/h3&gt;

&lt;p&gt;Open a connection to Oracle and wrap it in a LangChain OracleVS vector store. This example assumes you already have an embedding model configured. The broad idea matters more than the exact class names — the agent now has somewhere durable to store semantic memory outside the prompt.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;oracledb&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_oracledb.vectorstores&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;OracleVS&lt;/span&gt;

&lt;span class="n"&gt;connection&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;oracledb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;agent_user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;password&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;password&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;dsn&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;hostname:port/service&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;memory_store&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;OracleVS&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;connection&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;embedding_function&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;embeddings&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;table_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;AGENT_MEMORY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3: Store a Memory and Retrieve It Later
&lt;/h3&gt;

&lt;p&gt;Save something worth remembering, attach metadata so it stays scoped correctly, and retrieve it when the next interaction needs it. That is the core loop.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;memory_store&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_texts&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;texts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;User prefers concise answers and Python examples.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;metadatas&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user_123&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;memory_type&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;preference&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}]&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;memory_store&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;similarity_search&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;How should I answer this user?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nb"&gt;filter&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user_123&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you only remember one design lesson from this section, make it this: memory quality depends less on storing more information and more on retrieving the right information cleanly.&lt;/p&gt;




&lt;h2&gt;
  
  
  When Do You Actually Need Agent Memory?
&lt;/h2&gt;

&lt;p&gt;Not every agent needs memory. This is where it is easy to overbuild. You probably need memory when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the same user comes back repeatedly&lt;/li&gt;
&lt;li&gt;the agent needs to remember preferences or previous decisions&lt;/li&gt;
&lt;li&gt;tasks span multiple sessions&lt;/li&gt;
&lt;li&gt;the system improves when it learns from earlier outcomes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You may not need much memory when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the task is one-off question answering&lt;/li&gt;
&lt;li&gt;document retrieval is enough&lt;/li&gt;
&lt;li&gt;users are unlikely to return&lt;/li&gt;
&lt;li&gt;continuity adds more complexity than value&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A lot of first agent projects do not need a big memory layer. They need a clear use case and a small amount of well-scoped memory. That's usually a better place to start.&lt;/p&gt;




&lt;h2&gt;
  
  
  Validation &amp;amp; Troubleshooting
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Retrieval returns noise&lt;/strong&gt;: If results look bad with ten stored memories, they will be worse at ten thousand. Validate retrieval quality early by running a handful of realistic queries and inspecting what comes back.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Wrong user's memories appear&lt;/strong&gt;: Every similarity_search call should include a filter on user_id. If you see cross-user leakage, check that metadata is written on every add_texts call.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stale memories resurface&lt;/strong&gt;: Add a created_at timestamp and define lifecycle rules so that changed preferences and expired facts get updated or retired instead of being returned as stale context.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Connection errors to Oracle&lt;/strong&gt;: Verify your DSN string matches your service name, and that the oracledb driver can reach the host. Autonomous Database users should confirm their wallet configuration.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Everything gets saved as a "memory"&lt;/strong&gt;: Saving everything feels safe, but it floods retrieval with noise. Decide upfront what qualifies as a memory worth storing.&lt;/li&gt;
&lt;/ul&gt;
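
&lt;p&gt;A cheap way to catch the first two issues early is a retrieval smoke test: run a handful of realistic queries and assert that the expected memory comes back. The scorer below is a deliberate placeholder so the sketch runs standalone; swap in your real similarity_search call:&lt;/p&gt;

```python
# Sketch: a tiny retrieval smoke test. `search` is a stand-in scorer;
# the assertions, not the scorer, are the part worth keeping.
def search(query, memories, k=3):
    # Placeholder: rank by shared-word count (replace with vector search).
    q = set(query.lower().split())
    scored = sorted(
        memories,
        key=lambda m: -len(q.intersection(set(m.lower().split()))),
    )
    return scored[:k]

memories = [
    "User prefers concise answers and Python examples.",
    "Last attempt failed because the API key had expired.",
    "User is based in the UTC+1 timezone.",
]

checks = {
    "How should I format code answers?": "Python examples",
    "Why did the previous run fail?": "API key",
}
for query, expected in checks.items():
    top = search(query, memories)
    assert any(expected in m for m in top), f"missed: {query}"
print("retrieval smoke test passed")
```

Run this with ten memories now, and again when the store is larger; if it starts failing, retrieval noise is growing faster than recall.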




&lt;h2&gt;
  
  
  Common Mistakes to Avoid
&lt;/h2&gt;

&lt;p&gt;A few mistakes show up again and again. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;First&lt;/strong&gt;, people treat a larger context window as if it solves memory. It helps within a single session. It does not create continuity on its own.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Second&lt;/strong&gt;, they save everything. That sounds safe, but it usually creates noise. If every past detail becomes a "memory", retrieval quality drops fast. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Third&lt;/strong&gt;, they skip metadata. Without fields like user_id, memory_type, or timestamps, the system has no reliable way to determine which memory belongs to whom or whether it is still relevant.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fourth&lt;/strong&gt;, they forget memory lifecycle. Preferences change. Facts expire. Previous failures become irrelevant. If the system never updates or retires old memories, it will eventually return stale context.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Finally, some teams add memory before they have proved they need it. That is backwards. Start with the user problem. Then decide whether continuity genuinely improves the product.&lt;/p&gt;




&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Agent memory is stored state an agent retrieves across sessions to maintain continuity. A context window helps with the current interaction; memory enables continuity over time.&lt;/li&gt;
&lt;li&gt;Four useful memory types are working, procedural, semantic, and episodic. Not every agent needs the same mix.&lt;/li&gt;
&lt;li&gt;As soon as memory must persist, be scoped to the right user, and be retrieved reliably, you are dealing with a data problem.&lt;/li&gt;
&lt;li&gt;Start with one memory type. Semantic memory for user preferences is usually the highest-value entry point.&lt;/li&gt;
&lt;li&gt;Scope every memory by user_id, and include memory_type and a timestamp from the first write.&lt;/li&gt;
&lt;li&gt;Oracle AI Database 26ai fits well here by combining durable storage, metadata-driven filtering, and lifecycle control in the same system that already holds your relational and governance layer.&lt;/li&gt;
&lt;li&gt;Building an agent is easy. Keeping one alive in production is where memory stops being a prompt trick and becomes infrastructure.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Frequently Asked Questions
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What is the difference between a context window and agent memory?&lt;/strong&gt;&lt;br&gt;
A context window is the information the model can see during the current interaction. Agent memory is information the system stores and can bring back in future interactions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What are the main types of agent memory?&lt;/strong&gt;&lt;br&gt;
A simple framework uses four types: working, procedural, semantic, and episodic. In practice, most agent builds care most about semantic memory for facts and preferences, and episodic memory for past interactions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Do all AI agents need memory?&lt;/strong&gt;&lt;br&gt;
No. Some agents only answer one-off questions and do fine with a prompt plus retrieval. Memory becomes useful when continuity across sessions actually improves the result.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Can a vector database be used for agent memory?&lt;/strong&gt;&lt;br&gt;
Yes. A vector database or vector-capable store can work well for semantic memory, especially when you need similarity search. It still needs metadata and retrieval rules, otherwise it turns into a pile of loosely relevant text.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;When should I use Oracle AI Database 26ai for agent memory?&lt;/strong&gt;&lt;br&gt;
Use it when you need durable storage, vector similarity search, and metadata-driven filtering in the same system. It is especially valuable when your application already has a relational and governance layer, because running vector search alongside it removes a whole category of synchronisation bugs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What are the limitations?&lt;/strong&gt;&lt;br&gt;
Memory architectures are still largely bespoke. There is no clean default answer for when to summarise versus store verbatim, or how to balance recall against retrieval noise as the store grows. Eviction, forgetting, and evaluation of memory systems are all genuinely open problems. Start small, instrument retrieval, and treat the design as something you will revise.&lt;/p&gt;




&lt;h2&gt;
  
  
  Next Steps
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/oracle-devrel/oracle-ai-developer-hub" rel="noopener noreferrer"&gt;Oracle AI Developer Hub - GitHub&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://python.langchain.com/docs/integrations/vectorstores/oracle/" rel="noopener noreferrer"&gt;LangChain Oracle integration docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.oracle.com/en/database/oracle/oracle-database/23/vecse/" rel="noopener noreferrer"&gt;Oracle AI Vector Search documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.oracle.com/developer/resources/" rel="noopener noreferrer"&gt;Build with Oracle AI Database - Resources&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://livelabs.oracle.com/" rel="noopener noreferrer"&gt;Oracle LiveLabs&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>database</category>
      <category>oracle</category>
      <category>agents</category>
    </item>
    <item>
      <title>Unified Memory Core for AI Agents</title>
      <dc:creator>Anya Summers</dc:creator>
      <pubDate>Mon, 27 Apr 2026 16:05:40 +0000</pubDate>
      <link>https://forem.com/oracledevs/unified-memory-core-for-ai-agents-3da3</link>
      <guid>https://forem.com/oracledevs/unified-memory-core-for-ai-agents-3da3</guid>
      <description>&lt;p&gt;A practical guide to building episodic, lexical, vector, and graph memory workflows in Oracle AI Database&lt;/p&gt;

&lt;p&gt;Companion notebook: &lt;a href="https://github.com/oracle-devrel/oracle-ai-developer-hub/blob/main/notebooks/unified_agent_memory_oracle_ai_database.ipynb" rel="noopener noreferrer"&gt;Unified Agent Memory with Oracle AI Database&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Key takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A unified memory core combines episodic, lexical, semantic, and relationship-aware retrieval in one governed platform.&lt;/li&gt;
&lt;li&gt;Hybrid retrieval (Oracle Text + vector + metadata filters) improves reliability in enterprise queries.&lt;/li&gt;
&lt;li&gt;GRAPH_TABLE adds business relationship context beyond nearest-neighbor similarity.&lt;/li&gt;
&lt;li&gt;DBMS_SCHEDULER and VPD patterns make memory lifecycle and security operational.&lt;/li&gt;
&lt;li&gt;The companion notebook demonstrates all core patterns in a runnable workflow.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  What agent memory is and why it matters
&lt;/h2&gt;

&lt;p&gt;Agent memory is the stored context an AI system can access across steps, sessions, or workflows. In practice, it supports several critical functions:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;State persistence&lt;/strong&gt; – remembering what the agent is currently doing&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Context continuity&lt;/strong&gt; – carrying forward prior user goals and constraints&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Knowledge retrieval&lt;/strong&gt; – finding facts, documents, and learned abstractions&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Workflow resilience&lt;/strong&gt; – resuming long-running tasks after delays or failures&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Personalization&lt;/strong&gt; – adapting behavior based on historical interactions&lt;/p&gt;

&lt;p&gt;This matters because modern agents are not just answering isolated questions. They are coordinating tools, operating over enterprise systems, and producing outputs that depend on both immediate context and historical knowledge.&lt;/p&gt;

&lt;p&gt;At a high level, memory is what allows an agent to behave less like a stateless API and more like a system that can learn and adapt over time.&lt;/p&gt;




&lt;h2&gt;
  
  
  You'll learn how to
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;model episodic memory with JSON and query it using SQL/JSON.&lt;/li&gt;
&lt;li&gt;run lexical retrieval with Oracle Text and semantic retrieval with vectors.&lt;/li&gt;
&lt;li&gt;combine lexical, semantic, and metadata constraints into hybrid retrieval.&lt;/li&gt;
&lt;li&gt;traverse user-ticket-document context with SQL Property Graph and GRAPH_TABLE.&lt;/li&gt;
&lt;li&gt;apply lifecycle and tenant-aware security patterns for governed agent memory.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Architecture overview
&lt;/h2&gt;

&lt;p&gt;The unified memory flow keeps ingestion, storage, retrieval, and governance inside Oracle AI Database. Episodic events are stored in JSON, reusable knowledge is retrieved with Oracle Text and vectors, relationship context is traversed with SQL Property Graph, and lifecycle/security controls are enforced with scheduler and VPD patterns.&lt;/p&gt;




&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Python 3.10+&lt;/li&gt;
&lt;li&gt;Oracle AI Database 26ai (or compatible environment)&lt;/li&gt;
&lt;li&gt;Dependencies: oracledb, python-dotenv, pandas, optional langchain-core&lt;/li&gt;
&lt;li&gt;Privileges for tables, indexes, SQL/JSON, and queries&lt;/li&gt;
&lt;li&gt;Optional privileges for Oracle Text and SQL Property Graph features&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Types of agent memory and their storage needs
&lt;/h2&gt;

&lt;p&gt;Not all memory behaves the same way. Different memory types have different latency, durability, and retrieval requirements.&lt;/p&gt;

&lt;h3&gt;
  
  
  Short-term vs. long-term memory
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Short-term memory&lt;/strong&gt; is the agent's working context. It typically includes the current conversation window, recent tool outputs, temporary plans, and session variables. It requires very low latency but does not need to be persisted indefinitely.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Long-term memory&lt;/strong&gt; persists beyond a single interaction. It may include user preferences, completed tasks, conversation summaries, business objects, knowledge artifacts, and execution history. It should be durable, searchable, and governed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Episodic memory
&lt;/h3&gt;

&lt;p&gt;Episodic memory stores events and experiences. For agents, that can mean:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;previous conversations&lt;/li&gt;
&lt;li&gt;tool calls and outputs&lt;/li&gt;
&lt;li&gt;actions taken on behalf of a user&lt;/li&gt;
&lt;li&gt;workflow checkpoints&lt;/li&gt;
&lt;li&gt;timestamps, actors, and outcome metadata&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This memory is usually time-oriented and benefits from structured metadata, durable storage, and filtering by user, task, tenant, or date range.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example: episodic memory as JSON documents&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A practical pattern is to store each conversation turn, tool invocation, or workflow checkpoint as &lt;a href="https://docs.oracle.com/en/database/oracle/oracle-database/26/jsnvu/overview-json-relational-duality-views.html" rel="noopener noreferrer"&gt;a JSON document&lt;/a&gt; and query it with SQL/JSON functions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;agent_events&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;event_id&lt;/span&gt;    &lt;span class="n"&gt;NUMBER&lt;/span&gt; &lt;span class="k"&gt;GENERATED&lt;/span&gt; &lt;span class="n"&gt;ALWAYS&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="k"&gt;IDENTITY&lt;/span&gt; &lt;span class="k"&gt;PRIMARY&lt;/span&gt; &lt;span class="k"&gt;KEY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;session_id&lt;/span&gt;  &lt;span class="n"&gt;VARCHAR2&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;event_data&lt;/span&gt;  &lt;span class="n"&gt;JSON&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;created_at&lt;/span&gt;  &lt;span class="nb"&gt;TIMESTAMP&lt;/span&gt; &lt;span class="k"&gt;DEFAULT&lt;/span&gt; &lt;span class="n"&gt;SYSTIMESTAMP&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;session_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
       &lt;span class="n"&gt;JSON_VALUE&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;event_data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'$.type'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;event_type&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
       &lt;span class="n"&gt;jt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;tool_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
       &lt;span class="n"&gt;jt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;latency_ms&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;   &lt;span class="n"&gt;agent_events&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
       &lt;span class="n"&gt;JSON_TABLE&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;event_data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'$'&lt;/span&gt;
         &lt;span class="n"&gt;COLUMNS&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
           &lt;span class="n"&gt;tool_name&lt;/span&gt;  &lt;span class="n"&gt;VARCHAR2&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;PATH&lt;/span&gt; &lt;span class="s1"&gt;'$.tool.name'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
           &lt;span class="n"&gt;latency_ms&lt;/span&gt; &lt;span class="n"&gt;NUMBER&lt;/span&gt;        &lt;span class="n"&gt;PATH&lt;/span&gt; &lt;span class="s1"&gt;'$.tool.latencyMs'&lt;/span&gt;
         &lt;span class="p"&gt;)&lt;/span&gt;
       &lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;jt&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt;  &lt;span class="n"&gt;JSON_VALUE&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;event_data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'$.type'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'tool_call'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This pattern is especially useful when an agent needs durable session history without flattening every attribute into separate columns on day one.&lt;/p&gt;

&lt;h3&gt;
  
  
  Semantic memory
&lt;/h3&gt;

&lt;p&gt;Semantic memory stores generalized knowledge rather than a raw event log. It includes facts, policies, product information, documentation, ontologies, embeddings, and derived knowledge the agent can reuse across tasks.&lt;/p&gt;

&lt;p&gt;This memory often benefits from a combination of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://docs.oracle.com/en/database/oracle/oracle-database/26/vecse/vector-search-pl-sql-packages-node.html" rel="noopener noreferrer"&gt;vector search&lt;/a&gt; for semantic similarity&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.oracle.com/en/database/oracle/oracle-database/26/ccapp/overview-getting-started-oracle-text.html" rel="noopener noreferrer"&gt;keyword/text search&lt;/a&gt; for exact terminology and domain-specific phrases&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;relational filters&lt;/strong&gt; for governance, freshness, and access constraints&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;graph relationships&lt;/strong&gt; for connected business meaning&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Example: semantic retrieval with Oracle Text&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Why does lexical retrieval still matter in a vector-first architecture? An agent can use CONTAINS to rank policy, support, or product documents by relevance and combine that with vector search in the surrounding workflow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;article_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
       &lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
       &lt;span class="n"&gt;SCORE&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;relevance&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;   &lt;span class="n"&gt;knowledge_articles&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt;  &lt;span class="k"&gt;CONTAINS&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'database performance'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
&lt;span class="k"&gt;ORDER&lt;/span&gt;  &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;relevance&lt;/span&gt; &lt;span class="k"&gt;DESC&lt;/span&gt;
&lt;span class="k"&gt;FETCH&lt;/span&gt; &lt;span class="k"&gt;FIRST&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt; &lt;span class="k"&gt;ROWS&lt;/span&gt; &lt;span class="k"&gt;ONLY&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For short text catalogs, &lt;a href="https://docs.oracle.com/en/database/oracle/oracle-database/26/ccapp/overview-getting-started-oracle-text.html" rel="noopener noreferrer"&gt;Oracle Text&lt;/a&gt;'s CTXCAT index type is also a strong fit when agents need keyword matching plus structured filters such as product family, severity, or tenant.&lt;/p&gt;

&lt;h2&gt;
  
  
  Matching memory types to storage technologies
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7jcz1aev0s7q6ovzmy6q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7jcz1aev0s7q6ovzmy6q.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The key design principle is simple: &lt;strong&gt;use multiple memory types, but avoid fragmenting them across too many disconnected systems unless you truly need to.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Storage technologies for agent memory
&lt;/h2&gt;

&lt;h3&gt;
  
  
  In-memory storage: pros, cons, and use cases
&lt;/h3&gt;

&lt;p&gt;This type of storage is well suited for fast, transient state.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;extremely low latency&lt;/li&gt;
&lt;li&gt;simple fit for session state and active workflow context&lt;/li&gt;
&lt;li&gt;useful for intermediate reasoning artifacts and recent tool results&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;limited durability&lt;/li&gt;
&lt;li&gt;not suitable as the system of record&lt;/li&gt;
&lt;li&gt;difficult to govern and audit if used alone&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Best use cases&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;active conversation buffers&lt;/li&gt;
&lt;li&gt;current plan state for an orchestrator&lt;/li&gt;
&lt;li&gt;short-lived coordination across steps in a single run&lt;/li&gt;
&lt;/ul&gt;
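&lt;p&gt;For teams that want session-scoped working state with database semantics rather than a separate cache, one option is an Oracle global temporary table: rows are visible only to the session that inserted them and are discarded automatically. A minimal sketch with a hypothetical table name:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- Session-private scratch space for an active agent run.
-- Rows disappear when the session ends; this is NOT a system of record.
CREATE GLOBAL TEMPORARY TABLE agent_session_scratch (
  step_no     NUMBER,
  step_state  JSON,
  created_at  TIMESTAMP DEFAULT SYSTIMESTAMP
) ON COMMIT PRESERVE ROWS;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This keeps hot working context close to the durable memory tables without promising durability it cannot deliver.&lt;/p&gt;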

&lt;h2&gt;
  
  
  File-based storage: when and why to use it
&lt;/h2&gt;

&lt;p&gt;Files and object storage are useful for large, unstructured artifacts such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;PDFs and reports&lt;/li&gt;
&lt;li&gt;images and media&lt;/li&gt;
&lt;li&gt;transcript archives&lt;/li&gt;
&lt;li&gt;exported workflow bundles&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;They work well when the artifact itself is large, rarely updated, or naturally belongs in a document repository. However, file-based storage alone is a weak memory layer because it lacks rich query semantics. In practice, it works best when paired with database metadata, vector indexes, or a catalog layer.&lt;/p&gt;
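&lt;p&gt;One common pairing, sketched here with hypothetical names, keeps the artifact itself in object storage while a relational catalog row carries the metadata agents filter on before fetching the file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;CREATE TABLE artifact_catalog (
  artifact_id   NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  tenant_id     VARCHAR2(50)   NOT NULL,
  artifact_type VARCHAR2(30),                -- e.g. 'pdf', 'transcript'
  object_uri    VARCHAR2(1000) NOT NULL,     -- pointer to the file in object storage
  metadata      JSON,                        -- flexible descriptive attributes
  updated_at    TIMESTAMP DEFAULT SYSTIMESTAMP
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The database answers the "which artifact, for whom, how fresh" questions; object storage only serves bytes.&lt;/p&gt;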

&lt;h2&gt;
  
  
  Databases: SQL, NoSQL, and key-value stores
&lt;/h2&gt;

&lt;p&gt;Databases are the backbone of durable agent memory.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;SQL databases&lt;/strong&gt; excel when agents need strong consistency, joins, transactions, governance, and structured filters.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;NoSQL/document stores&lt;/strong&gt; are useful when schemas evolve quickly and payloads are semi-structured.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Key-value stores&lt;/strong&gt; are effective for simple lookups, caching, and session persistence at high speed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For enterprise agents, the strongest pattern is often not choosing one memory store per memory type, but choosing a platform that can support multiple memory representations together.&lt;/p&gt;
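&lt;p&gt;A single table can already mix those representations. The sketch below, with hypothetical names, combines a relational governance column, a key-value style lookup key, and a schema-flexible JSON payload:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;CREATE TABLE agent_memory (
  memory_id   NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  tenant_id   VARCHAR2(50) NOT NULL,      -- relational filter for governance
  memory_key  VARCHAR2(200),              -- key-value style lookup key
  payload     JSON,                       -- document-style, fast-evolving body
  created_at  TIMESTAMP DEFAULT SYSTIMESTAMP
);

-- Key-value lookup and document extraction in one query
SELECT memory_key,
       JSON_VALUE(payload, '$.summary') AS summary
FROM   agent_memory
WHERE  tenant_id  = 'ACME'
AND    memory_key = 'preferences:user-42';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;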

&lt;h2&gt;
  
  
  Vector databases for semantic memory retrieval
&lt;/h2&gt;

&lt;p&gt;Vector retrieval is essential when an agent must find content by meaning rather than exact wording. It is especially effective for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;semantic search over documents&lt;/li&gt;
&lt;li&gt;similarity matching for prior cases&lt;/li&gt;
&lt;li&gt;memory recall from summarized or embedded interactions&lt;/li&gt;
&lt;li&gt;grounding RAG workflows with relevant context&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But vector search should not be treated as the entire memory architecture. Enterprise retrieval often requires semantic matching plus exact filtering, freshness rules, tenant boundaries, business keys, and joins to live data.&lt;/p&gt;

&lt;p&gt;That is where Oracle AI Database stands out as a unified memory core. It brings &lt;a href="https://docs.oracle.com/en/database/oracle/oracle-database/26/vecse/overview-node.html" rel="noopener noreferrer"&gt;vector search&lt;/a&gt; next to &lt;a href="https://docs.oracle.com/en/database/oracle/oracle-database/26/cncpt/tables-and-table-clusters.html#GUID-F845B1A7-71E3-4312-B66D-BC16C198ECE5" rel="noopener noreferrer"&gt;relational data&lt;/a&gt;, &lt;a href="https://docs.oracle.com/en/database/oracle/oracle-database/26/adjsn/json-data-and-oracle-ai-database.html" rel="noopener noreferrer"&gt;JSON&lt;/a&gt;, &lt;a href="https://docs.oracle.com/en/database/oracle/oracle-database/26/rdfrm/rdf-graph-overview.html#GUID-F422BB9F-8473-4980-9D6C-848F708C10E0" rel="noopener noreferrer"&gt;graph&lt;/a&gt;, &lt;a href="https://docs.oracle.com/en/database/oracle/oracle-database/26/spatl/what-is-oracle-spatial.html" rel="noopener noreferrer"&gt;spatial&lt;/a&gt;, and enterprise governance features, allowing agents to retrieve semantically relevant context without losing operational control.&lt;/p&gt;
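&lt;p&gt;A minimal semantic-recall sketch, assuming the knowledge_articles table carries a VECTOR column named embedding and the query embedding arrives as a bind variable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;SELECT article_id,
       title,
       VECTOR_DISTANCE(embedding, :query_vec, COSINE) AS dist
FROM   knowledge_articles
WHERE  tenant_id = :tenant_id      -- governance filter next to vector search
ORDER  BY dist
FETCH FIRST 5 ROWS ONLY;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Because the tenant filter and the similarity ranking run in the same statement, semantic recall never escapes the governance boundary.&lt;/p&gt;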

&lt;h2&gt;
  
  
  Why text search belongs in the unified memory core
&lt;/h2&gt;

&lt;p&gt;A significant portion of production searches need both &lt;strong&gt;vector similarity and keyword matching&lt;/strong&gt;. Users do not always ask only by meaning. They often include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;exact product names&lt;/li&gt;
&lt;li&gt;policy clauses&lt;/li&gt;
&lt;li&gt;ticket IDs&lt;/li&gt;
&lt;li&gt;account numbers&lt;/li&gt;
&lt;li&gt;legal terms&lt;/li&gt;
&lt;li&gt;error messages and codes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is why a mature memory core should include &lt;strong&gt;&lt;a href="https://docs.oracle.com/en/database/oracle/oracle-database/26/ccapp/" rel="noopener noreferrer"&gt;text search&lt;/a&gt; alongside vector, graph, spatial, JSON, and relational capabilities&lt;/strong&gt;. Semantic retrieval helps with meaning; keyword retrieval helps with precision. Together they produce more reliable enterprise context.&lt;/p&gt;
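&lt;p&gt;Because both capabilities live in one engine, a hybrid ranking can be expressed in a single statement. A sketch, assuming knowledge_articles carries both a text index on content and an embedding column, with an illustrative 50/50 blend of the two signals:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;SELECT article_id,
       title,
       SCORE(1)                                       AS lexical_score,
       VECTOR_DISTANCE(embedding, :query_vec, COSINE) AS semantic_dist
FROM   knowledge_articles
WHERE  CONTAINS(content, 'ORA-01555', 1) &amp;gt; 0       -- exact error-code match
ORDER  BY (1 - VECTOR_DISTANCE(embedding, :query_vec, COSINE)) * 0.5
        + (SCORE(1) / 100) * 0.5 DESC
FETCH FIRST 10 ROWS ONLY;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The weighting here is a placeholder; production systems tune or learn the blend per query class.&lt;/p&gt;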

&lt;h2&gt;
  
  
  Hybrid storage architectures and patterns
&lt;/h2&gt;

&lt;p&gt;Most serious agent systems use a hybrid pattern, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;in-memory working context&lt;/strong&gt; for active sessions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;relational/JSON persistence&lt;/strong&gt; for durable state and episodic history&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;vector indexes&lt;/strong&gt; for semantic recall&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;text search&lt;/strong&gt; for lexical precision&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;object storage&lt;/strong&gt; for large source artifacts&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;graph structures&lt;/strong&gt; where relationships are central to reasoning&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The question is not whether hybrid memory exists—it almost always does. The real design decision is whether those layers are operationally fragmented or organized around a unified platform.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3itrb9narv7ihy5vjtua.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3itrb9narv7ihy5vjtua.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Example: relationship-aware memory with SQL Property Graph
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://docs.oracle.com/en/database/oracle/property-graph/26.1/spgdg/sql-property-graph.html" rel="noopener noreferrer"&gt;Oracle SQL Property Graph&lt;/a&gt; lets you model and query graph data — vertices (nodes) and edges (relationships) — directly on top of existing relational tables, views, materialized views, or external tables. No data is copied; the graph definition stores only metadata, and queries operate against current table data. This is useful when an agent must follow connected context such as user → ticket → service → document:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;GRAPH_TABLE&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;memory_graph&lt;/span&gt;
  &lt;span class="k"&gt;MATCH&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;u&lt;/span&gt; &lt;span class="k"&gt;IS&lt;/span&gt; &lt;span class="k"&gt;user&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;r&lt;/span&gt; &lt;span class="k"&gt;IS&lt;/span&gt; &lt;span class="n"&gt;opened&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="k"&gt;IS&lt;/span&gt; &lt;span class="n"&gt;ticket&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;m&lt;/span&gt; &lt;span class="k"&gt;IS&lt;/span&gt; &lt;span class="n"&gt;mentions&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;d&lt;/span&gt; &lt;span class="k"&gt;IS&lt;/span&gt; &lt;span class="n"&gt;document&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="n"&gt;COLUMNS&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;u&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;  &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;user_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;title&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;ticket_title&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;d&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;title&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;document_title&lt;/span&gt;
  &lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That kind of retrieval complements vector similarity by surfacing the business relationships around a memory item, not just the nearest semantic neighbors.&lt;/p&gt;




&lt;h2&gt;
  
  
  Scaling agent memory for large applications
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Externalizing memory from models
&lt;/h3&gt;

&lt;p&gt;Large language models should not be expected to carry all relevant context in their parameters or prompt window. As applications grow, memory must be externalized into governed stores that can be queried on demand.&lt;/p&gt;

&lt;p&gt;This improves:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;freshness of retrieved context&lt;/li&gt;
&lt;li&gt;controllability of business logic&lt;/li&gt;
&lt;li&gt;auditability and compliance&lt;/li&gt;
&lt;li&gt;reuse across workflows and teams&lt;/li&gt;
&lt;li&gt;cost efficiency versus oversizing prompts&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Hierarchical and tiered memory layers
&lt;/h3&gt;

&lt;p&gt;At scale, memory is usually tiered:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Hot memory&lt;/strong&gt; – immediate session context and recent tool outputs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Warm memory&lt;/strong&gt; – summaries, recent episodes, and active task state&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cold memory&lt;/strong&gt; – historical records, archived artifacts, and long-term facts&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This layered approach keeps latency manageable while retaining historical depth.&lt;/p&gt;
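&lt;p&gt;Demotion between tiers can be plain SQL. A sketch, assuming the agent_events table from earlier plus a hypothetical agent_events_archive table for cold storage, moving events older than 90 days:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- Copy aged events to the cold tier, then remove them from the warm tier
INSERT INTO agent_events_archive
SELECT * FROM agent_events
WHERE  created_at &amp;lt; SYSTIMESTAMP - INTERVAL '90' DAY;

DELETE FROM agent_events
WHERE  created_at &amp;lt; SYSTIMESTAMP - INTERVAL '90' DAY;

COMMIT;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;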

&lt;h2&gt;
  
  
  Retrieval-augmented generation (RAG) techniques
&lt;/h2&gt;

&lt;p&gt;RAG is the most common pattern for grounding an agent with external memory. Strong RAG systems typically combine:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;semantic retrieval over embeddings&lt;/li&gt;
&lt;li&gt;lexical retrieval for exact terms&lt;/li&gt;
&lt;li&gt;metadata filtering for tenant, time, trust, or policy&lt;/li&gt;
&lt;li&gt;reranking for relevance and precision&lt;/li&gt;
&lt;li&gt;source attribution for traceability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No single retrieval method is enough on its own. In enterprise settings, many useful searches depend on both meaning and exact terminology, so text search is not a legacy feature to bolt on later; it is an important part of the unified memory core.&lt;/p&gt;

&lt;h2&gt;
  
  
  Retrieval evaluation metrics
&lt;/h2&gt;

&lt;p&gt;To validate retrieval quality rigorously, hybrid memory systems should be measured with explicit ranking and coverage metrics, not only by subjective answer quality.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Precision@K&lt;/strong&gt;: how many of the top-K retrieved results are relevant.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Recall@K:&lt;/strong&gt; how much of the relevant context is recovered within top-K results.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MRR (Mean Reciprocal Rank)&lt;/strong&gt;: how early the first relevant result appears in the ranked list.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Latency (p50/p95)&lt;/strong&gt;: retrieval responsiveness under realistic concurrency and tenant load.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In practice, these metrics should be tracked per retrieval mode (lexical, semantic, hybrid, graph-augmented) and per query class (policy, troubleshooting, identity, compliance) to detect ranking drift early and keep retrieval behavior stable over time.&lt;/p&gt;
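&lt;p&gt;Once retrieval runs are logged, these metrics are computable in SQL. A sketch, assuming a hypothetical retrieval_log table with one row per (query_id, rank_pos, is_relevant), computing Precision@10 and the per-query reciprocal rank (average the latter across queries for MRR):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;SELECT query_id,
       SUM(CASE WHEN rank_pos &amp;lt;= 10 AND is_relevant = 1
                THEN 1 ELSE 0 END) / 10                 AS precision_at_10,
       1 / MIN(CASE WHEN is_relevant = 1
                    THEN rank_pos END)                  AS reciprocal_rank
FROM   retrieval_log
GROUP  BY query_id;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Queries with no relevant result return NULL for reciprocal_rank, which is worth surfacing separately as a zero-recall signal.&lt;/p&gt;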

&lt;h2&gt;
  
  
  Memory management: summarization, pruning, and lifecycle policies
&lt;/h2&gt;

&lt;p&gt;Memory is not just about storing more. It is also about deciding what to keep, compress, expire, and promote.&lt;/p&gt;

&lt;p&gt;Important practices include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;summarization&lt;/strong&gt; to condense long histories into reusable state&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;pruning&lt;/strong&gt; to remove redundant or low-value context&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;retention policies&lt;/strong&gt; based on legal, business, and product rules&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;promotion rules&lt;/strong&gt; to move temporary knowledge into durable memory&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;staleness checks&lt;/strong&gt; so outdated facts do not keep reappearing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Example: scheduled summarization and pruning&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.oracle.com/en/database/oracle/oracle-database/21/arpls/DBMS_SCHEDULER.html" rel="noopener noreferrer"&gt;DBMS_SCHEDULER&lt;/a&gt; is Oracle's enterprise job-scheduling framework. It can run PL/SQL blocks, stored procedures, executables, and scripts on a calendar-based schedule, in response to external events, or as part of dependency chains, which maps naturally to memory lifecycle automation. Teams can schedule summarization, retention enforcement, and archive workflows directly in the database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plsql"&gt;&lt;code&gt;&lt;span class="k"&gt;BEGIN&lt;/span&gt;
  &lt;span class="n"&gt;DBMS_SCHEDULER&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;CREATE_JOB&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;job_name&lt;/span&gt;        &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;SUMMARIZE_AGENT_SESSIONS&lt;/span&gt;&lt;span class="o"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;job_type&lt;/span&gt;        &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;PLSQL_BLOCK&lt;/span&gt;&lt;span class="o"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;job_action&lt;/span&gt;      &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;BEGIN memory_pkg.summarize_old_sessions(30); END;&lt;/span&gt;&lt;span class="o"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;repeat_interval&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;FREQ=HOURLY;INTERVAL=6&lt;/span&gt;&lt;span class="o"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;enabled&lt;/span&gt;         &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;TRUE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;auto_drop&lt;/span&gt;       &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;FALSE&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;END&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="o"&gt;/&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The same scheduling pattern can also keep search infrastructure fresh, for example by running CTX_DDL.SYNC_INDEX and CTX_DDL.OPTIMIZE_INDEX jobs for Oracle Text indexes on a predictable cadence.&lt;/p&gt;
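&lt;p&gt;That maintenance can itself be a scheduler job. A sketch, assuming an Oracle Text index named knowledge_articles_idx on the knowledge table:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plsql"&gt;&lt;code&gt;BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        =&amp;gt; 'SYNC_KNOWLEDGE_TEXT_INDEX',
    job_type        =&amp;gt; 'PLSQL_BLOCK',
    -- Pull newly committed documents into the text index
    job_action      =&amp;gt; 'BEGIN CTX_DDL.SYNC_INDEX(''knowledge_articles_idx''); END;',
    repeat_interval =&amp;gt; 'FREQ=MINUTELY;INTERVAL=15',
    enabled         =&amp;gt; TRUE
  );
END;
/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;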

&lt;h2&gt;
  
  
  Infrastructure considerations for scalability
&lt;/h2&gt;

&lt;p&gt;As memory volume grows, architects should consider:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;partitioning by tenant, time, or workload&lt;/li&gt;
&lt;li&gt;indexing strategies for vector, text, and relational access&lt;/li&gt;
&lt;li&gt;concurrency and transaction isolation for multi-agent workflows&lt;/li&gt;
&lt;li&gt;cost of re-embedding and re-ranking pipelines&lt;/li&gt;
&lt;li&gt;observability for recall quality, latency, and drift&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Oracle AI Database is compelling here because it supports enterprise-grade scalability while keeping multiple data modalities close together. That reduces the coordination overhead of moving context between independent systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  Operations runbook for memory systems
&lt;/h2&gt;

&lt;p&gt;To keep unified memory reliable in production, teams should operationalize a lightweight runbook that covers indexing, partitioning, scheduler cadence, and observability.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Indexing&lt;/strong&gt;: monitor Oracle Text and vector index health, and schedule regular sync/optimize routines.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Partitioning&lt;/strong&gt;: partition event and knowledge tables by tenant and/or time windows to control growth and query cost.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scheduler cadence&lt;/strong&gt;: run summarization, pruning, retention, and index-maintenance jobs on predictable intervals.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Observability&lt;/strong&gt;: track retrieval latency (p50/p95), result quality metrics, and drift signals across retrieval modes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Operational review&lt;/strong&gt;: review failed jobs, low-confidence retrieval patterns, and tenant hot spots on a recurring schedule.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A compact runbook helps maintain retrieval quality, governance compliance, and performance consistency as memory volume and workload complexity increase.&lt;/p&gt;




&lt;h2&gt;
  
  
  Implementation walkthrough
&lt;/h2&gt;

&lt;p&gt;This implementation builds a &lt;a href="https://github.com/oracle-devrel/oracle-ai-developer-hub/blob/main/notebooks/unified_agent_memory_oracle_ai_database.ipynb" rel="noopener noreferrer"&gt;unified memory flow&lt;/a&gt; in Oracle AI Database, from event storage to multi-mode retrieval and governance.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Initialize the environment and establish one Oracle connection reused across the workflow.&lt;/li&gt;
&lt;li&gt;Create episodic memory (agent_events) with realistic JSON events for user messages, tool calls, tool results, checkpoints, and summaries.&lt;/li&gt;
&lt;li&gt;Query episodic memory with SQL/JSON (JSON_VALUE, JSON_TABLE) for filtering, extraction, and analytics.&lt;/li&gt;
&lt;li&gt;Apply tenant-aware retrieval patterns so every recall path remains policy-aligned.&lt;/li&gt;
&lt;li&gt;Create and populate the knowledge store (knowledge_articles) with tenant-scoped support and policy content.&lt;/li&gt;
&lt;li&gt;Run lexical retrieval with Oracle Text (CONTAINS, SCORE) for exact-term precision.&lt;/li&gt;
&lt;li&gt;Add semantic retrieval with vectors and rank results by similarity using VECTOR_DISTANCE.&lt;/li&gt;
&lt;li&gt;Combine lexical, semantic, and metadata constraints into hybrid retrieval ranking.&lt;/li&gt;
&lt;li&gt;Execute unified recall by combining latest episodic context with knowledge retrieval candidates.&lt;/li&gt;
&lt;li&gt;Add relationship-aware retrieval with SQL Property Graph and GRAPH_TABLE over user-ticket-document paths.&lt;/li&gt;
&lt;li&gt;Apply lifecycle automation and security patterns using DBMS_SCHEDULER and DBMS_RLS.&lt;/li&gt;
&lt;li&gt;Validate outputs end to end and keep the workflow rerunnable with cleanup.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  How this guide maps to the notebook
&lt;/h2&gt;

&lt;p&gt;To make the guide easier to navigate, each concept in this article is implemented in a corresponding notebook section.&lt;/p&gt;

&lt;p&gt;Episodic memory is introduced through the agent_events model and sample event ingestion, then expanded with SQL/JSON extraction using JSON_VALUE and JSON_TABLE.&lt;/p&gt;

&lt;p&gt;Tenant-aware retrieval and knowledge storage follow, along with lexical retrieval using Oracle Text and semantic retrieval using vectors (VECTOR_DISTANCE).&lt;/p&gt;

&lt;p&gt;Hybrid retrieval then combines lexical relevance, semantic distance, and metadata filtering in one ranking path.&lt;/p&gt;

&lt;p&gt;The workflow continues with unified episodic-plus-knowledge recall, relationship-aware traversal using GRAPH_TABLE, and operational patterns for lifecycle automation (DBMS_SCHEDULER) and row-level tenant isolation (DBMS_RLS / VPD).&lt;/p&gt;

&lt;p&gt;The notebook concludes with an optional &lt;a href="https://python.langchain.com/docs/integrations/vectorstores/oracle" rel="noopener noreferrer"&gt;LangChain interoperability layer&lt;/a&gt; that keeps retrieval Oracle-native.&lt;/p&gt;




&lt;h2&gt;
  
  
  Security and privacy considerations in agent memory storage
&lt;/h2&gt;

&lt;p&gt;Memory makes agents more capable, but it also expands the attack surface.&lt;/p&gt;

&lt;h3&gt;
  
  
  Common security risks and attack vectors
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;data leakage across users, roles, or tenants&lt;/li&gt;
&lt;li&gt;prompt injection through retrieved content&lt;/li&gt;
&lt;li&gt;memory poisoning from incorrect or malicious inputs&lt;/li&gt;
&lt;li&gt;over-retention of sensitive information&lt;/li&gt;
&lt;li&gt;stale or conflicting memory causing unsafe decisions&lt;/li&gt;
&lt;li&gt;weak authorization around recalled context and tool actions&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Data protection and access control strategies
&lt;/h3&gt;

&lt;p&gt;Best practices include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;role-based and attribute-based access control&lt;/li&gt;
&lt;li&gt;encryption in transit and at rest&lt;/li&gt;
&lt;li&gt;row-level or tenant-aware data isolation&lt;/li&gt;
&lt;li&gt;retrieval filters tied to identity and policy&lt;/li&gt;
&lt;li&gt;audit logs for memory access and mutation events&lt;/li&gt;
&lt;li&gt;data classification for sensitive memory types&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Operational security checklist
&lt;/h3&gt;

&lt;p&gt;To translate security principles into day-to-day practice, teams should validate a compact operational checklist for every memory workflow.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enforce tenant isolation in every retrieval path, not only at the application layer.&lt;/li&gt;
&lt;li&gt;Log memory reads and writes for auditability, including tool-triggered retrieval actions.&lt;/li&gt;
&lt;li&gt;Apply data classification and PII handling rules before memory is persisted or retrieved.&lt;/li&gt;
&lt;li&gt;Use role-aware authorization checks for both retrieval and mutation operations.&lt;/li&gt;
&lt;li&gt;Define retention and deletion controls so sensitive memory does not persist beyond policy windows.&lt;/li&gt;
&lt;li&gt;Protect retrieved context against prompt-injection and memory-poisoning propagation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This checklist helps ensure that memory quality, security, and compliance remain aligned as agent usage scales.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example: tenant-aware memory access with VPD&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.oracle.com/en/database/oracle/oracle-database/26/dbseg/using-oracle-vpd-to-control-data-access.html" rel="noopener noreferrer"&gt;Virtual Private Database (VPD)&lt;/a&gt;, also called Fine-Grained Access Control (FGAC), is Oracle's mechanism for enforcing row-level security transparently at the database kernel level. Unlike application-layer filtering — which can be bypassed by ad-hoc queries, ETL tools, or reporting applications — VPD policies are enforced by the Oracle query engine itself, regardless of how a query reaches the database. Row-level security policies registered through DBMS_RLS can enforce tenant isolation directly in the database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plsql"&gt;&lt;code&gt;&lt;span class="k"&gt;BEGIN&lt;/span&gt;
  &lt;span class="n"&gt;DBMS_RLS&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ADD_POLICY&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;object_schema&lt;/span&gt;   &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;APP&lt;/span&gt;&lt;span class="o"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;object_name&lt;/span&gt;     &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;AGENT_MEMORY&lt;/span&gt;&lt;span class="o"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;policy_name&lt;/span&gt;     &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;TENANT_ISOLATION&lt;/span&gt;&lt;span class="o"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;function_schema&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;APP&lt;/span&gt;&lt;span class="o"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;policy_function&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;TENANT_ISOLATION_POLICY&lt;/span&gt;&lt;span class="o"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;statement_types&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;SELECT, INSERT, UPDATE, DELETE&lt;/span&gt;&lt;span class="o"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;update_check&lt;/span&gt;    &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;TRUE&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;END&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="o"&gt;/&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With SYS_CONTEXT-driven predicates, the same memory tables can serve many tenants while ensuring an agent only recalls context it is authorized to access.&lt;/p&gt;
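&lt;p&gt;The &lt;code&gt;TENANT_ISOLATION_POLICY&lt;/code&gt; function referenced above is not shown; conceptually it returns a predicate such as &lt;code&gt;tenant_id = SYS_CONTEXT('APP_CTX', 'TENANT_ID')&lt;/code&gt;. As a rough illustration of the same contract, here is a minimal Python sketch of tenant-scoped recall (all names are hypothetical, not part of the package):&lt;/p&gt;

```python
# Hypothetical application-side mirror of the database's tenant isolation:
# every recall is filtered by the caller's tenant, analogous to a
# SYS_CONTEXT-driven VPD predicate appended to each query.

def recall(memories, tenant_id):
    """Return only memories the current tenant is authorized to see."""
    return [m for m in memories if m.get("tenant_id") == tenant_id]

memories = [
    {"id": 1, "tenant_id": "acme", "text": "rollback approved"},
    {"id": 2, "tenant_id": "globex", "text": "incident closed"},
]
print(recall(memories, "acme"))  # only the 'acme' row
```

&lt;p&gt;The point is that the filter is enforced at the retrieval layer itself, not left to the prompt.&lt;/p&gt;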

&lt;h3&gt;
  
  
  Mitigation best practices and compliance
&lt;/h3&gt;

&lt;p&gt;Organizations should design agent memory with compliance in mind from the start. That means applying retention rules, provenance tracking, redaction strategies, and approval workflows where needed. It also means ensuring the retrieval layer does not bypass the same governance standards applied to transactional systems.&lt;/p&gt;

&lt;p&gt;This is another reason a unified, governed database platform is attractive: it allows memory retrieval to inherit mature &lt;a href="https://docs.oracle.com/en/database/oracle/oracle-database/26/dbseg/" rel="noopener noreferrer"&gt;enterprise security controls&lt;/a&gt; instead of recreating them separately for every store.&lt;/p&gt;




&lt;h3&gt;
  
  
  Recap of memory types and storage options
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Short-term memory&lt;/strong&gt; supports immediate task execution and should be fast.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Long-term memory&lt;/strong&gt; preserves durable context across sessions and workflows.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Episodic memory&lt;/strong&gt; captures what happened, when, and under what conditions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Semantic memory&lt;/strong&gt; helps the agent retrieve meaning, facts, and abstractions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No single storage mechanism solves every memory problem. In-memory caches, files, relational stores, key-value systems, text indexes, graph structures, and vector search all have a role.&lt;/p&gt;

&lt;h3&gt;
  
  
  Guidelines for choosing and managing agent memory
&lt;/h3&gt;

&lt;p&gt;Use the following rules of thumb:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Match storage to memory behavior, not just data format.&lt;/li&gt;
&lt;li&gt;Keep working memory fast, but make durable memory governed and auditable.&lt;/li&gt;
&lt;li&gt;Combine semantic retrieval with keyword and metadata filtering.&lt;/li&gt;
&lt;li&gt;Treat lifecycle management as part of memory design, not an afterthought.&lt;/li&gt;
&lt;li&gt;Prefer unified platforms when governance, consistency, and scale matter.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Balancing performance, scalability, and security
&lt;/h3&gt;

&lt;p&gt;The best agent memory architectures do not optimize only for retrieval quality. They balance:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;performance&lt;/strong&gt; for responsive agent interactions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;scalability&lt;/strong&gt; for growing users, tasks, and data volumes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;security&lt;/strong&gt; for enterprise trust and compliance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Oracle AI Database fits this balance especially well for enterprise agents. It provides one of the most sophisticated and scalable ways to unify vector, JSON, graph, spatial, relational, and analytic data access in a governed platform. That makes it a strong foundation for a true unified memory core rather than a collection of disconnected memory services.&lt;/p&gt;

&lt;p&gt;When memory becomes a first-class architectural concern, agents become more reliable, more context-aware, and more useful in real business workflows.&lt;/p&gt;




&lt;h2&gt;
  
  
  Validation &amp;amp; troubleshooting: failure modes and fallback strategy
&lt;/h2&gt;

&lt;p&gt;Production memory systems should not assume every retrieval mode is always available. A resilient workflow defines deterministic fallback behavior so the agent can continue safely and predictably.&lt;/p&gt;

&lt;p&gt;In this architecture, retrieval degrades gracefully:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If lexical retrieval is unavailable or returns weak matches, the workflow can fall back to semantic retrieval.&lt;/li&gt;
&lt;li&gt;If vector retrieval is unavailable, lexical retrieval and tenant-scoped filtering remain active.&lt;/li&gt;
&lt;li&gt;If graph traversal is unavailable, relationship context can be reconstructed with relational joins.&lt;/li&gt;
&lt;li&gt;If all retrieval modes return low-confidence results, the system should return a safe tenant-scoped fallback response and request clarification.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This fallback strategy preserves continuity, improves user trust, and prevents silent retrieval failure in enterprise workflows.&lt;/p&gt;
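&lt;p&gt;The fallback behavior described above can be sketched as a deterministic loop over ordered retrieval modes. This is illustrative Python, not the package's API; the confidence threshold and retriever names are assumptions:&lt;/p&gt;

```python
# Deterministic fallback chain across retrieval modes (hypothetical names).
# Each retriever returns (results, confidence); the first mode that is both
# available and confident wins, otherwise a safe fallback response is used.

CONFIDENCE_FLOOR = 0.5  # assumed threshold, tune per workload

def retrieve_with_fallback(query, retrievers):
    """retrievers: ordered (name, callable) pairs; each callable returns
    (results, confidence) or raises RuntimeError when the mode is down."""
    for name, retriever in retrievers:
        try:
            results, confidence = retriever(query)
        except RuntimeError:
            continue  # mode unavailable: fall through to the next one
        if results and confidence >= CONFIDENCE_FLOOR:
            return name, results
    # Every mode failed or was low-confidence: safe, clarifying default.
    return "fallback", ["No confident match found; please clarify your request."]
```

&lt;p&gt;Because the order and threshold are fixed, the degradation path is predictable and auditable rather than left to the model.&lt;/p&gt;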

&lt;ul&gt;
&lt;li&gt;Validate Oracle Text index creation and lexical ranking output.&lt;/li&gt;
&lt;li&gt;Validate vector dimensions and the input format used by &lt;code&gt;TO_VECTOR&lt;/code&gt; / &lt;code&gt;VECTOR_DISTANCE&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;If &lt;code&gt;GRAPH_TABLE&lt;/code&gt; parsing fails, avoid reserved labels and use names like &lt;code&gt;user_v&lt;/code&gt;, &lt;code&gt;ticket_v&lt;/code&gt;, &lt;code&gt;document_v&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;If results are empty, verify tenant/category filters and fallback query paths.&lt;/li&gt;
&lt;li&gt;Run the notebook end-to-end and confirm outputs across episodic, lexical, vector, hybrid, and graph sections.&lt;/li&gt;
&lt;/ul&gt;
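&lt;p&gt;The vector-dimension check in particular is cheap to automate before every insert. A minimal sketch, assuming a fixed embedding size (&lt;code&gt;EXPECTED_DIM&lt;/code&gt; is an arbitrary placeholder, not a value from the package):&lt;/p&gt;

```python
# Guard against dimension/format mismatches before handing a vector to the
# database. EXPECTED_DIM is an assumption; use your embedding model's size.

EXPECTED_DIM = 384

def validate_embedding(vec):
    if len(vec) != EXPECTED_DIM:
        raise ValueError(f"expected {EXPECTED_DIM} dimensions, got {len(vec)}")
    if not all(isinstance(x, float) for x in vec):
        raise TypeError("embedding values must be floats")
    return True
```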




&lt;h2&gt;
  
  
  Frequently Asked Questions
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Why not use vector search alone?&lt;/strong&gt;&lt;br&gt;
Enterprise queries often contain exact policy/product terms and IDs, so hybrid retrieval is usually more reliable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What does unified memory mean in practice?&lt;/strong&gt;&lt;br&gt;
It means episodic, lexical, semantic, and relationship-aware retrieval are handled in one governed database workflow.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What happens if one retrieval mode is unavailable?&lt;/strong&gt;&lt;br&gt;
The workflow can use fallback paths (lexical, semantic, or relational fallback) to preserve continuity and safe defaults.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How does this relate to the companion notebook?&lt;/strong&gt;&lt;br&gt;
The notebook implements each pattern as executable steps so readers can validate outputs end-to-end.&lt;/p&gt;




&lt;h2&gt;
  
  
  Related documentation and further reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.oracle.com/en/database/oracle/oracle-database/26/" rel="noopener noreferrer"&gt;Oracle Database 26ai documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.oracle.com/en/database/oracle/oracle-database/26/vecse/" rel="noopener noreferrer"&gt;Oracle AI Vector Search User's Guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.oracle.com/en/database/oracle/oracle-database/26/adjsn/" rel="noopener noreferrer"&gt;Oracle JSON Developer's Guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.oracle.com/en/database/oracle/oracle-database/26/ccapp/" rel="noopener noreferrer"&gt;Oracle Text Application Developer's Guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.oracle.com/en/database/oracle/oracle-database/26/spatl/" rel="noopener noreferrer"&gt;Oracle Spatial and Graph documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.oracle.com/en/database/oracle/oracle-database/26/dbseg/" rel="noopener noreferrer"&gt;Oracle Database Security Guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://python.langchain.com/docs/integrations/vectorstores/oracle" rel="noopener noreferrer"&gt;LangChain Oracle vector store integration&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>database</category>
      <category>memory</category>
    </item>
    <item>
      <title>Agent Memory with LangChain4j and Oracle AI Database</title>
      <dc:creator>Anders Swanson</dc:creator>
      <pubDate>Wed, 22 Apr 2026 17:22:17 +0000</pubDate>
      <link>https://forem.com/oracledevs/agent-memory-with-langchain4j-and-oracle-ai-database-27bl</link>
      <guid>https://forem.com/oracledevs/agent-memory-with-langchain4j-and-oracle-ai-database-27bl</guid>
      <description>&lt;p&gt;One of the quickest ways to make an impressive agent demo is to prepare a clever prompt. One of the quickest ways to make that same agent fall apart in production is to give it no durable memory.&lt;/p&gt;

&lt;p&gt;In this article, we'll build a small, memory-backed assistant with &lt;a href="https://github.com/langchain4j/langchain4j" rel="noopener noreferrer"&gt;LangChain4j&lt;/a&gt; and Oracle AI Database. The assistant can search prior incidents, runbooks, decisions, and shift handoffs to answer questions. It can write new memories back to the database so they become searchable in any session. Additionally, all user, agent, and tool messages are logged to a database table for observability and auditing.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Database feature overview&lt;/li&gt;
&lt;li&gt;Run the sample&lt;/li&gt;
&lt;li&gt;Chat Memory vs Durable Memory&lt;/li&gt;
&lt;li&gt;Hybrid retrieval: semantic + full-text search&lt;/li&gt;
&lt;li&gt;Lightweight reranking&lt;/li&gt;
&lt;li&gt;LangChain4j agent&lt;/li&gt;
&lt;li&gt;Memory writeback&lt;/li&gt;
&lt;li&gt;Recording user, agent, and tool messages&lt;/li&gt;
&lt;li&gt;Why database memory is useful for agents&lt;/li&gt;
&lt;li&gt;Code pointers&lt;/li&gt;
&lt;li&gt;Where you can take this next&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Database feature overview
&lt;/h4&gt;

&lt;p&gt;The agent is built with modern Oracle AI Database features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;persistent &lt;code&gt;JSON&lt;/code&gt; memory documents in Oracle AI Database&lt;/li&gt;
&lt;li&gt;vector embeddings in a &lt;code&gt;VECTOR&lt;/code&gt; column&lt;/li&gt;
&lt;li&gt;Oracle Text search over the same JSON document&lt;/li&gt;
&lt;li&gt;hybrid ranking that blends semantic and exact-match retrieval&lt;/li&gt;
&lt;li&gt;append-only transcript logging by conversation ID&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Using these features, the agent (a fictional operations assistant) can answer questions about runbooks, incident reviews, change requests, and shift handoffs from its persistent memory. Because the memory is database-backed, multiple agents from concurrent sessions may access the same data safely.&lt;/p&gt;

&lt;h4&gt;
  
  
  Run the sample
&lt;/h4&gt;

&lt;p&gt;You will need Java 21+, Maven, Docker, and an &lt;a href="https://platform.openai.com/" rel="noopener noreferrer"&gt;OpenAI API Key&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;From the &lt;a href="https://github.com/andersswanson/oracle-ai-database-examples/tree/main/langchain4j-agent-memory" rel="noopener noreferrer"&gt;module root&lt;/a&gt;, run the tests:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;OPENAI_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;your key&amp;gt;
mvn &lt;span class="nb"&gt;test&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To run the live terminal app using your database connection string and user:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;OPENAI_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;your key&amp;gt;
mvn compile &lt;span class="nb"&gt;exec&lt;/span&gt;:java &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-Dexec&lt;/span&gt;.args&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"jdbc:oracle:thin:@localhost:1521/freepdb1 testuser testpwd"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once it starts, try prompts like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;What happened during the checkout incident after CHG2145?&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Which runbook section should I use for the checkout rollback?&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Draft a next-shift handoff and remember it.&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Chat Memory vs Durable Memory
&lt;/h4&gt;

&lt;p&gt;Chat memory and durable memory solve different problems. Chat memory keeps the recent turns of a single conversation in the prompt window; durable operational memory has different requirements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;it should survive process restarts&lt;/li&gt;
&lt;li&gt;it should be queryable across conversations from distributed, concurrent agents&lt;/li&gt;
&lt;li&gt;it should support structured metadata like service, environment, incident ID, and change ticket&lt;/li&gt;
&lt;li&gt;it should be searchable both semantically and exactly&lt;/li&gt;
&lt;li&gt;it should allow writeback when the agent learns something worth preserving&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That starts to look a lot more like a database problem than a prompt engineering problem.&lt;/p&gt;
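&lt;p&gt;The requirements above can be sketched as a structured record. The field names below are illustrative, not the sample's actual schema:&lt;/p&gt;

```python
# A durable memory record carrying the structured metadata operational memory
# needs: service, environment, incident ID, change ticket. Names are hypothetical.
from dataclasses import dataclass, field, asdict
from typing import Optional

@dataclass
class MemoryRecord:
    title: str
    body: str
    service: str
    environment: str
    incident_id: Optional[str] = None
    change_ticket: Optional[str] = None
    tags: list = field(default_factory=list)

rec = MemoryRecord(
    title="Checkout rollback",
    body="Rolled back CHG2145 after an error-rate spike.",
    service="checkout",
    environment="prod",
    incident_id="INC4721",
    change_ticket="CHG2145",
)
print(asdict(rec)["change_ticket"])  # CHG2145
```

&lt;p&gt;Once the metadata is structured like this, exact filters (by ticket, service, environment) and semantic search can operate over the same rows.&lt;/p&gt;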

&lt;h4&gt;
  
  
  Hybrid retrieval: semantic + full-text search
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvjj8zm4u2n7p6vt8zr5v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvjj8zm4u2n7p6vt8zr5v.png" alt="Hybrid search" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://github.com/andersswanson/oracle-ai-database-examples/blob/main/langchain4j-agent-memory/src/main/java/dev/andersswanson/oracle/langchain4jmemory/MemoryRepository.java" rel="noopener noreferrer"&gt;&lt;code&gt;MemoryRepository&lt;/code&gt;&lt;/a&gt; runs two queries, which are fused into one ranked list:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;a href="https://github.com/andersswanson/oracle-ai-database-examples/blob/main/langchain4j-agent-memory/src/main/java/dev/andersswanson/oracle/langchain4jmemory/MemoryRepository.java#L53-L63" rel="noopener noreferrer"&gt;Vector search&lt;/a&gt; over the &lt;code&gt;embedding&lt;/code&gt; column using cosine distance.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/andersswanson/oracle-ai-database-examples/blob/main/langchain4j-agent-memory/src/main/java/dev/andersswanson/oracle/langchain4jmemory/MemoryRepository.java#L65-L75" rel="noopener noreferrer"&gt;Oracle Text search&lt;/a&gt; over the JSON payload using &lt;code&gt;json_textcontains&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here is the vector query:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;select&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
       &lt;span class="n"&gt;memory_kind&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
       &lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
       &lt;span class="n"&gt;memory_doc&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
       &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;vector_distance&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;embedding&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;COSINE&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;vector_score&lt;/span&gt;
&lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="n"&gt;agent_memories&lt;/span&gt;
&lt;span class="k"&gt;order&lt;/span&gt; &lt;span class="k"&gt;by&lt;/span&gt; &lt;span class="n"&gt;vector_score&lt;/span&gt; &lt;span class="k"&gt;desc&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;
&lt;span class="k"&gt;fetch&lt;/span&gt; &lt;span class="k"&gt;first&lt;/span&gt; &lt;span class="o"&gt;?&lt;/span&gt; &lt;span class="k"&gt;rows&lt;/span&gt; &lt;span class="k"&gt;only&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And here is the text query:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;select&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
       &lt;span class="n"&gt;memory_kind&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
       &lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
       &lt;span class="n"&gt;memory_doc&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
       &lt;span class="n"&gt;score&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;text_score&lt;/span&gt;
&lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="n"&gt;agent_memories&lt;/span&gt;
&lt;span class="k"&gt;where&lt;/span&gt; &lt;span class="n"&gt;json_textcontains&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;memory_doc&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'$'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;order&lt;/span&gt; &lt;span class="k"&gt;by&lt;/span&gt; &lt;span class="n"&gt;score&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;desc&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;
&lt;span class="k"&gt;fetch&lt;/span&gt; &lt;span class="k"&gt;first&lt;/span&gt; &lt;span class="o"&gt;?&lt;/span&gt; &lt;span class="k"&gt;rows&lt;/span&gt; &lt;span class="k"&gt;only&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Pure vector search is often too fuzzy for ticket IDs. Pure text search is often too brittle for paraphrases. Hybrid retrieval handles both.&lt;/p&gt;

&lt;h4&gt;
  
  
  Lightweight reranking
&lt;/h4&gt;

&lt;p&gt;Once both branches return hits, &lt;a href="https://github.com/andersswanson/oracle-ai-database-examples/blob/main/langchain4j-agent-memory/src/main/java/dev/andersswanson/oracle/langchain4jmemory/MemorySearchRanker.java" rel="noopener noreferrer"&gt;&lt;code&gt;MemorySearchRanker&lt;/code&gt;&lt;/a&gt; merges the results with deterministic weights:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a bonus when the incident ID or change ticket matches directly&lt;/li&gt;
&lt;li&gt;a bonus for keyword overlap in the indexed memory text&lt;/li&gt;
&lt;li&gt;a combined &lt;code&gt;matchedBy&lt;/code&gt; indicator of &lt;code&gt;VECTOR&lt;/code&gt;, &lt;code&gt;TEXT&lt;/code&gt;, or &lt;code&gt;BOTH&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The deterministic ranker could be replaced with an LLM judge or a more complex re-ranking system. For this sample, I kept it intentionally lightweight and low-latency.&lt;/p&gt;
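&lt;p&gt;The fusion logic can be sketched as follows. This is an illustrative Python analogue, not the sample's Java &lt;code&gt;MemorySearchRanker&lt;/code&gt;; the weights and bonus value are assumptions:&lt;/p&gt;

```python
# Deterministic fusion of vector and text hits with an exact-ID bonus and a
# matched_by indicator (VECTOR, TEXT, or BOTH). Weights are arbitrary choices.

def fuse(vector_hits, text_hits, exact_id_matches=frozenset(),
         weights=(0.6, 0.4), id_bonus=0.3):
    """vector_hits / text_hits map memory id to a score in [0, 1]."""
    ranked = []
    for mid in set(vector_hits) | set(text_hits):
        v, t = vector_hits.get(mid), text_hits.get(mid)
        score = weights[0] * (v or 0.0) + weights[1] * (t or 0.0)
        if mid in exact_id_matches:  # direct incident/change-ticket match
            score += id_bonus
        matched_by = "BOTH" if v is not None and t is not None else (
            "VECTOR" if v is not None else "TEXT")
        ranked.append((mid, score, matched_by))
    return sorted(ranked, key=lambda r: -r[1])

ranked = fuse(
    vector_hits={"M1": 0.9, "M2": 0.5},
    text_hits={"M1": 0.8, "M3": 0.9},
    exact_id_matches={"M1"},
)
```

&lt;p&gt;A result found by both channels gets contributions from both scores plus any exact-ID bonus, which is why it tends to rank first.&lt;/p&gt;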

&lt;h4&gt;
  
  
  LangChain4j agent
&lt;/h4&gt;

&lt;p&gt;The &lt;a href="https://github.com/andersswanson/oracle-ai-database-examples/blob/main/langchain4j-agent-memory/src/main/java/dev/andersswanson/oracle/langchain4jmemory/OpsMemoryAssistant.java" rel="noopener noreferrer"&gt;LangChain4j agent implementation&lt;/a&gt; is quite small, using a single interface:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;interface&lt;/span&gt; &lt;span class="nc"&gt;OpsMemoryAssistant&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nd"&gt;@SystemMessage&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"""
            You are an operations handoff assistant backed by Oracle AI Database memory.
            Use searchMemories when prior incidents, runbooks, handoffs, decisions, or change history are relevant.
            When you rely on memory results, include the references in the form [M123].
            If the user asks you to remember or preserve a new handoff or decision, call storeMemory after drafting it.
            Keep answers concise and operational. Mention incident IDs and change tickets when they matter.
            """&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="nd"&gt;@UserMessage&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"{{message}}"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="nf"&gt;chat&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nd"&gt;@V&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"message"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;userMessage&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That is the right level of abstraction for this sample.&lt;/p&gt;

&lt;p&gt;LangChain4j handles chat orchestration and tool wiring. Oracle AI Database handles durable memory, search, and transcript persistence. Each layer is doing the job it is actually good at.&lt;/p&gt;

&lt;h4&gt;
  
  
  Memory writeback
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9u662b85vn3qic184gnd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9u662b85vn3qic184gnd.png" alt="Memory writeback" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The sample keeps two memory stores:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a curated durable memory store for retrieval&lt;/li&gt;
&lt;li&gt;an append-only transcript for observability and auditing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The agent also writes new durable memories through the &lt;code&gt;storeMemory&lt;/code&gt; tool when the user explicitly asks it to preserve a handoff or decision.&lt;/p&gt;

&lt;p&gt;That matters because an agent memory system should not just be a read-only archive. If a useful conclusion comes out of a conversation, the system should be able to keep it.&lt;/p&gt;

&lt;p&gt;In this sample, writeback creates a new &lt;code&gt;MemoryDocument&lt;/code&gt;, generates an embedding, and inserts both the JSON payload and vector into &lt;code&gt;agent_memories&lt;/code&gt;. Because the JSON search index is configured with &lt;code&gt;sync (on commit)&lt;/code&gt;, newly stored handoffs are searchable immediately after commit.&lt;/p&gt;

&lt;p&gt;That last detail is important. Delayed indexing is exactly the kind of thing that makes an agent feel unreliable.&lt;/p&gt;
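&lt;p&gt;The writeback flow can be sketched in a few lines. This Python sketch uses a fake embedder and an in-memory list as stand-ins; the real sample does the equivalent in Java with an insert into &lt;code&gt;agent_memories&lt;/code&gt; followed by a commit:&lt;/p&gt;

```python
# Writeback sketch: draft a memory document, embed it, persist JSON + vector
# together. fake_embed and the list-backed store are illustrative stand-ins.
import json

def fake_embed(text, dim=8):
    # Stand-in for a real embedding model: deterministic floats in [0, 1).
    return [((hash(text) + i) % 997) / 997.0 for i in range(dim)]

def store_memory(store, title, body):
    doc = {"title": title, "body": body}
    row = {"memory_doc": json.dumps(doc), "embedding": fake_embed(body)}
    store.append(row)  # in the sample: INSERT into agent_memories, then commit,
    return row         # after which sync-on-commit makes it searchable

store = []
store_memory(store, "Next-shift handoff", "Watch checkout error rates after CHG2145.")
```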

&lt;h4&gt;
  
  
  Recording user, agent, and tool messages
&lt;/h4&gt;

&lt;p&gt;With our database connection, it's easy to record chat sessions in the database. To do this with LangChain4j, we implement the &lt;code&gt;ChatMemory&lt;/code&gt; interface in the &lt;a href="https://github.com/andersswanson/oracle-ai-database-examples/blob/main/langchain4j-agent-memory/src/main/java/dev/andersswanson/oracle/langchain4jmemory/LoggingChatMemory.java" rel="noopener noreferrer"&gt;&lt;code&gt;LoggingChatMemory&lt;/code&gt;&lt;/a&gt; class.&lt;/p&gt;

&lt;p&gt;Each session gets its own unique conversation ID, and user/agent/tool messages are written to the &lt;code&gt;agent_conversation_log&lt;/code&gt; table.&lt;/p&gt;

&lt;p&gt;That table captures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;conversation_id&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;message_seq&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;role and message type&lt;/li&gt;
&lt;li&gt;message text&lt;/li&gt;
&lt;li&gt;tool name and tool call ID when relevant&lt;/li&gt;
&lt;li&gt;optional JSON context&lt;/li&gt;
&lt;li&gt;creation timestamp&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That distinction, between the curated durable memory store and the raw conversation transcript, tends to get blurred in agent demos. It should not be.&lt;/p&gt;
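&lt;p&gt;An append-only logger with the columns listed above is straightforward to sketch. This is in-memory Python for illustration; the sample persists each row to Oracle:&lt;/p&gt;

```python
# Append-only transcript logging mirroring the agent_conversation_log columns:
# conversation ID, sequence number, role, message text, tool metadata, timestamp.
import itertools
import datetime

class TranscriptLog:
    def __init__(self):
        self.rows = []
        self._seq = itertools.count(1)

    def record(self, conversation_id, role, text,
               tool_name=None, tool_call_id=None, context=None):
        self.rows.append({
            "conversation_id": conversation_id,
            "message_seq": next(self._seq),
            "role": role,
            "message": text,
            "tool_name": tool_name,
            "tool_call_id": tool_call_id,
            "context": context,
            "created_at": datetime.datetime.now(datetime.timezone.utc),
        })

log = TranscriptLog()
log.record("conv-1", "USER", "What happened after CHG2145?")
log.record("conv-1", "TOOL", "3 memories found",
           tool_name="searchMemories", tool_call_id="call-1")
```

&lt;p&gt;Because rows are only ever appended, the transcript stays a faithful audit trail even as the curated memory store is edited or pruned.&lt;/p&gt;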

&lt;h4&gt;
  
  
  Why database memory is useful for agents
&lt;/h4&gt;

&lt;p&gt;Chat windows and flat files can't scale the same way a database can. A database-backed memory layer gives you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;durable storage&lt;/li&gt;
&lt;li&gt;structured metadata&lt;/li&gt;
&lt;li&gt;many types of retrieval: semantic, text, relationship, graph, etc.&lt;/li&gt;
&lt;li&gt;transactional writes and concurrency&lt;/li&gt;
&lt;li&gt;better auditability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Databases can help you progress from agent demos to real applications that make effective use of agent memory.&lt;/p&gt;

&lt;h4&gt;
  
  
  Code pointers
&lt;/h4&gt;

&lt;p&gt;If you want to explore the implementation, start here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/andersswanson/oracle-ai-database-examples/blob/main/langchain4j-agent-memory/README.md" rel="noopener noreferrer"&gt;README.md&lt;/a&gt; -&amp;gt; app overview&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/andersswanson/oracle-ai-database-examples/blob/main/langchain4j-agent-memory/src/main/java/dev/andersswanson/oracle/langchain4jmemory/OpsMemoryAgentApplication.java" rel="noopener noreferrer"&gt;OpsMemoryAgentApplication.java&lt;/a&gt; -&amp;gt; Main class and agent loop&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/andersswanson/oracle-ai-database-examples/blob/main/langchain4j-agent-memory/src/main/java/dev/andersswanson/oracle/langchain4jmemory/MemoryRepository.java" rel="noopener noreferrer"&gt;MemoryRepository.java&lt;/a&gt; -&amp;gt; Memory retrieval for text and vector search&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/andersswanson/oracle-ai-database-examples/blob/main/langchain4j-agent-memory/src/main/java/dev/andersswanson/oracle/langchain4jmemory/MemoryTools.java" rel="noopener noreferrer"&gt;MemoryTools.java&lt;/a&gt; -&amp;gt; LangChain4j tool bindings to search and store memories&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/andersswanson/oracle-ai-database-examples/blob/main/langchain4j-agent-memory/src/main/java/dev/andersswanson/oracle/langchain4jmemory/LoggingChatMemory.java" rel="noopener noreferrer"&gt;LoggingChatMemory.java&lt;/a&gt; -&amp;gt; LangChain4j ChatMemory implementation to log chat interactions&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/andersswanson/oracle-ai-database-examples/blob/main/langchain4j-agent-memory/src/test/java/dev/andersswanson/oracle/langchain4jmemory/MemoryRepositoryIntegrationTest.java" rel="noopener noreferrer"&gt;MemoryRepositoryIntegrationTest.java&lt;/a&gt; -&amp;gt; test using Oracle AI Database Free and Testcontainers&lt;/li&gt;
&lt;/ul&gt;

&lt;h6&gt;
  
  
  The tests validate the behavior that matters
&lt;/h6&gt;

&lt;p&gt;The integration tests are worth reading because they verify the actual retrieval patterns we care about:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;exact text search finds the checkout incident for &lt;code&gt;CHG2145&lt;/code&gt; and &lt;code&gt;INC4721&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;vector search finds the same incident from a paraphrased outage description&lt;/li&gt;
&lt;li&gt;hybrid fusion marks the strongest result as matched by both channels&lt;/li&gt;
&lt;li&gt;a stored handoff can be found on the next combined search&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Where you can take this next
&lt;/h4&gt;

&lt;p&gt;If you'd like to extend this sample, here are a few ideas to play with:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Add "forgetting" with recency ranking so newer memories are ranked as more relevant.&lt;/li&gt;
&lt;li&gt;Parameterize scoring and filtering mechanisms to make the app more flexible.&lt;/li&gt;
&lt;li&gt;Add another agent tool that uses an LLM to judge search results.&lt;/li&gt;
&lt;li&gt;Add approval/rejection when storing memories. Maintain a log of failures so the agent knows what not to do.&lt;/li&gt;
&lt;/ol&gt;
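&lt;p&gt;Idea 1 can be sketched with a simple exponential decay over memory age. The half-life value is an arbitrary assumption:&lt;/p&gt;

```python
# Recency-weighted scoring: newer memories outrank older ones at equal
# relevance. A 30-day half-life means a 30-day-old memory keeps half its score.
import math

HALF_LIFE_DAYS = 30.0

def recency_weighted(relevance, age_days):
    decay = math.exp(-math.log(2) * age_days / HALF_LIFE_DAYS)
    return relevance * decay
```

&lt;p&gt;Multiplying the fused relevance score by this decay is a gentle form of "forgetting": old memories never vanish, they just stop winning ties.&lt;/p&gt;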

</description>
      <category>oracle</category>
      <category>java</category>
      <category>agents</category>
      <category>ai</category>
    </item>
    <item>
      <title>Using Agent Skills to develop with Oracle AI Database</title>
      <dc:creator>Anders Swanson</dc:creator>
      <pubDate>Wed, 22 Apr 2026 17:19:24 +0000</pubDate>
      <link>https://forem.com/oracledevs/using-agent-skills-to-develop-with-oracle-ai-database-3h6j</link>
      <guid>https://forem.com/oracledevs/using-agent-skills-to-develop-with-oracle-ai-database-3h6j</guid>
      <description>&lt;p&gt;&lt;a href="https://anthropic.skilljar.com" rel="noopener noreferrer"&gt;Skills&lt;/a&gt; are reusable, task-specific workflows for your agents: each skill is a directory centered on a &lt;code&gt;SKILL.md&lt;/code&gt; file, with optional scripts, references, and assets packaged together.&lt;/p&gt;

&lt;p&gt;Skills bring context-efficient capabilities to agents in a reliable, repeatable manner.&lt;/p&gt;

&lt;p&gt;In this article, we'll look at what skills are, how to install them, and how the open source &lt;a href="https://github.com/krisrice/oracle-db-skills" rel="noopener noreferrer"&gt;oracle-db-skills repo&lt;/a&gt; can help your agents do real work with Oracle AI Database. You can even easily write or extend the existing skills on your own!&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install skills&lt;/li&gt;
&lt;li&gt;Using Oracle AI Database skills&lt;/li&gt;
&lt;li&gt;Have a suggestion or a new skill? Contribute it!&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Install skills
&lt;/h4&gt;

&lt;p&gt;First, install the skills into your agent. With Codex, you can use the skills installer. Make sure to restart Codex afterwards:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$skill&lt;/span&gt;&lt;span class="nt"&gt;-installer&lt;/span&gt; https://github.com/krisrice/oracle-db-skills
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With Claude Code, either clone or copy the repo into your Claude-specific skills directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone git@github.com:krisrice/oracle-db-skills.git
&lt;span class="nb"&gt;cp&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; oracle-db-skills ~/.claude/skills/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Using Oracle AI Database skills
&lt;/h4&gt;

&lt;p&gt;Let's use our newly installed skills to create a local Oracle AI Database Free container. These containers are great for local dev, POCs, and testing.&lt;/p&gt;

&lt;p&gt;Using Codex, we can explicitly invoke the skill using the &lt;code&gt;$oracle-db-skills&lt;/code&gt; skill command. With Claude Code, use &lt;code&gt;/oracle-db-skills&lt;/code&gt; to invoke the skill. Let's do so using the following prompt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$oracle-db-skills start a new oracle ai database container for local development and give me the sqlcl connection command
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using this prompt, Codex inspected my local environment to ensure it could pull and run the container, and then ran the following command to start an Oracle AI Database Free container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;• Ran docker run -d --name oracle-ai-free -p 1521:1521 -e ORACLE_PWD=Oracle123! -v oracle-ai-free-data:/opt/oracle/oradata
  │ container-registry.oracle.com/database/free:latest
  └ b4daa0c4f9b5966087395005ddd251e68a7cdf1cae851b006cd7800da947fd04
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After verifying the container started correctly, Codex then gave me the following SQLcl commands to connect:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;• Started a new Oracle AI Database Free container from the official image as oracle-ai-free. It's healthy on localhost:1521, and the PDB
  FREEPDB1 is open read/write.

  Use SQLcl like this for normal development work:

  sql system/Oracle123!@//localhost:1521/FREEPDB1

  If you need SYSDBA, use:

  sql sys/Oracle123!@//localhost:1521/FREE as sysdba
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Both commands connect to the container that the agent started using the skill.&lt;/p&gt;

&lt;h4&gt;
  
  
  Have a suggestion or a new skill? Contribute it!
&lt;/h4&gt;

&lt;p&gt;The &lt;a href="https://github.com/krisrice/oracle-db-skills" rel="noopener noreferrer"&gt;oracle-db-skills GitHub repo&lt;/a&gt; is open source, so if you have an idea or suggestion, go ahead and contribute it. The repo is designed to be an evolving set of curated skills, meaning developer feedback is essential.&lt;/p&gt;

</description>
      <category>oracle</category>
      <category>ai</category>
      <category>agents</category>
      <category>software</category>
    </item>
    <item>
      <title>16 Ways to Make a Small Language Model Think Bigger</title>
      <dc:creator>Wojtek Pluta</dc:creator>
      <pubDate>Tue, 21 Apr 2026 07:56:58 +0000</pubDate>
      <link>https://forem.com/oracledevs/16-ways-to-make-a-small-language-model-think-bigger-2lbo</link>
      <guid>https://forem.com/oracledevs/16-ways-to-make-a-small-language-model-think-bigger-2lbo</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;This article is syndicated from the original post on &lt;a href="https://blogs.oracle.com/developers/16-ways-to-make-a-small-language-model-think-bigger" rel="noopener noreferrer"&gt;blogs.oracle.com&lt;/a&gt;. Read the canonical version there for the latest updates.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;All of the code in this article is available in the &lt;a href="http://github.com/oracle-devrel/oracle-ai-developer-hub" rel="noopener noreferrer"&gt;Oracle AI Developer Hub&lt;/a&gt;. The repository is part of Oracle’s open-source AI collection and serves as the reference implementation for everything covered here.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;You can install it with &lt;code&gt;pip install agent-reasoning&lt;/code&gt;, browse the 16 agent classes, run the TUI, or integrate it directly into an existing Ollama pipeline as a zero-change replacement client. If you find it useful, a GitHub star goes a long way.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Small language models struggle with complex reasoning on their own, but agent-based architectures (like Tree of Thoughts or Self-Consistency) can significantly improve their performance.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;agent-reasoning&lt;/code&gt; framework adds 16 research-backed reasoning strategies to any Ollama model using a simple &lt;code&gt;+strategy&lt;/code&gt; tag—no code changes required.&lt;/li&gt;
&lt;li&gt;Different strategies suit different tasks: CoT works well overall, ReAct excels with external data, and branching methods improve accuracy at the cost of speed.&lt;/li&gt;
&lt;li&gt;Much of modern AI progress comes from orchestration (prompting, search, control flow), not just larger models.&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;Generally, a 270M parameter LLM (as of today, April 2026) struggles with even basic multi-step reasoning. Ask a model like &lt;code&gt;gemma3:270m&lt;/code&gt; to solve the classic water jug problem, and it will often return a confidently incorrect answer—much like other small language models (SLMs) of similar size and training.&lt;/p&gt;

&lt;p&gt;However, take that same model and wrap it inside a Tree of Thoughts (ToT) agent, running a breadth-first search (BFS) with three levels and weighted branches, and it can reliably solve the puzzle. The improvement comes from the architecture: the agent distributes the reasoning process across structured exploration steps, compensating for the limitations of a single LLM call.&lt;/p&gt;

&lt;p&gt;This is where things get interesting. Much of the progress in applied AI isn't coming from bigger models alone, but from engineers rethinking how to orchestrate them—layering search, memory, and control flow on top of a standard LLM call to unlock new capabilities.&lt;/p&gt;

&lt;p&gt;This is the fundamental idea behind &lt;a href="https://github.com/oracle-devrel/oracle-ai-developer-hub/tree/main/apps/agent-reasoning" rel="noopener noreferrer"&gt;agent-reasoning&lt;/a&gt;: sixteen cognitive architectures—each backed by peer-reviewed research—can be applied to any Ollama-served model via a simple &lt;code&gt;+Strategy&lt;/code&gt; tag appended to the model name. Call &lt;code&gt;gemma3:270m+tot&lt;/code&gt; instead of &lt;code&gt;gemma3:270m&lt;/code&gt;, and the interceptor handles everything else.&lt;/p&gt;

&lt;p&gt;We’ll talk about the different ways to invoke these reasoning strategies through the project.&lt;/p&gt;




&lt;h2&gt;
  
  
  What You’ll Learn
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;How the &lt;code&gt;ReasoningInterceptor&lt;/code&gt; intercepts model names, removes the &lt;code&gt;+Strategy&lt;/code&gt; tag, and directs traffic to one of 16 agent classes&lt;/li&gt;
&lt;li&gt;How 16 strategies divide into four families: sequential, branching, reflective, and meta—each representing a different reasoning approach and set of trade-offs&lt;/li&gt;
&lt;li&gt;What each major strategy accomplishes in practice, focusing on implementation rather than theory&lt;/li&gt;
&lt;li&gt;Which type of problem each strategy is best suited for, based on benchmark results from March 2026&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The Interception Layer
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Key insight:&lt;/strong&gt; The &lt;code&gt;ReasoningInterceptor&lt;/code&gt; is a drop-in replacement client for Ollama that parses the model name for a &lt;code&gt;+Strategy&lt;/code&gt; tag and directs traffic to one of 16 cognitive agent classes, with no modifications to your pre-existing code.&lt;/p&gt;

&lt;p&gt;Everything relies on a single convention: append &lt;code&gt;+Strategy&lt;/code&gt; to any Ollama model name.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F800%2F1%2APLi2WumhUe2et_POG0V_Og.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F800%2F1%2APLi2WumhUe2et_POG0V_Og.png" title="Using ReasoningInterceptor as a drop-in replacement client" alt="Using ReasoningInterceptor as a drop-in replacement client; strategy routing can be enabled via model name tags (e.g., +tot)." width="800" height="249"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Using ReasoningInterceptor as a drop-in replacement client&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The image below illustrates the entire routing process from start to finish. The interceptor acts as a middleman between your code and Ollama, removes the &lt;code&gt;+Strategy&lt;/code&gt; tag, and sends traffic to the correct agent class.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F800%2F1%2A5MwkQVsNUA1pqBEzsV4ACA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F800%2F1%2A5MwkQVsNUA1pqBEzsV4ACA.png" title="Illustrating how the interceptor separates the base model from the Strategy tag" alt="Diagram illustrating how the interceptor separates the base model from the Strategy tag and directs traffic to the corresponding agent class." width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Illustrating how the interceptor separates the base model from the Strategy tag&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;agent_map&lt;/code&gt; contains over fifty-five aliases mapped to sixteen agent classes. For example, &lt;code&gt;cot&lt;/code&gt;, &lt;code&gt;chain_of_thought&lt;/code&gt;, and &lt;code&gt;CoT&lt;/code&gt; all map to &lt;code&gt;CotAgent&lt;/code&gt;, while &lt;code&gt;mcts&lt;/code&gt; and &lt;code&gt;monte_carlo&lt;/code&gt; map to &lt;code&gt;MCTSAgent&lt;/code&gt;. Because the interceptor is a drop-in client for Ollama—supporting the same &lt;code&gt;.generate()&lt;/code&gt; and &lt;code&gt;.chat()&lt;/code&gt; APIs—existing LangChain pipelines, web UIs, and scripts can automatically gain reasoning capabilities by changing a single string in the model name.&lt;/p&gt;
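&lt;p&gt;The tag-parsing idea is easy to picture in code. The sketch below is a minimal illustration, not the package's actual implementation—the real &lt;code&gt;agent_map&lt;/code&gt; covers all 55+ aliases and 16 classes, and only a small subset is shown here:&lt;/p&gt;

```python
# Minimal sketch of +Strategy tag routing (illustrative subset only;
# the real agent_map in agent-reasoning is far larger).
AGENT_MAP = {
    "cot": "CotAgent", "chain_of_thought": "CotAgent",
    "tot": "ToTAgent", "tree_of_thoughts": "ToTAgent",
    "mcts": "MCTSAgent", "monte_carlo": "MCTSAgent",
    "meta": "MetaReasoningAgent", "auto": "MetaReasoningAgent",
}

def route(model_name: str):
    """Split 'base+Strategy' into (base_model, agent_class_name).

    A name without a '+' tag passes through to plain Ollama."""
    base, sep, tag = model_name.partition("+")
    if not sep:                      # no tag: passthrough
        return base, None
    return base, AGENT_MAP.get(tag.lower())

print(route("gemma3:270m+tot"))      # ('gemma3:270m', 'ToTAgent')
print(route("gemma3:270m"))          # ('gemma3:270m', None)
```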

&lt;p&gt;Additionally, the interceptor can run as a network proxy. Instead of pointing an Ollama-compatible application at &lt;code&gt;http://localhost:11434&lt;/code&gt;, direct it to &lt;code&gt;http://localhost:8080&lt;/code&gt;. With a model name like &lt;code&gt;gemma3:270m+CoT&lt;/code&gt;, the gateway applies the reasoning strategy transparently.&lt;/p&gt;

&lt;h2&gt;
  
  
  Family 1: Sequential Strategies
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Key insight:&lt;/strong&gt; Sequential Strategies process problems in a linear chain, where each step feeds into the next. In benchmarks, CoT achieved 88.7% average accuracy, compared to 81.3% for standard generation on the same model and weights.&lt;/p&gt;

&lt;p&gt;Each of the sixteen strategies falls into one of four families. The diagram below illustrates how they are grouped.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F800%2F1%2AqIVVyTPUDA2luQCNkzWgKw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F800%2F1%2AqIVVyTPUDA2luQCNkzWgKw.png" title="Categorization of the four strategy families" alt="Categorization of the four Strategy families: sequential, branching, reflective, and meta. Each route leads to a specific type of reasoning agent. The fastest Sequential Strategies occupy the top-left quadrant while slower Branching strategies sacrifice speed for increased accuracy." width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Categorization of the four strategy families&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Sequential strategies are designed for high-speed processing with minimal latency. They are ideal for problems with discrete, sequential steps.&lt;/p&gt;

&lt;h3&gt;
  
  
  Chain of Thought (CoT)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Paper:&lt;/strong&gt; Wei et al. (2022), &lt;a href="https://arxiv.org/abs/2201.11903" rel="noopener noreferrer"&gt;“Chain-of-Thought Prompting Elicits Reasoning in Large Language Models”&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Chain of Thought (CoT) is a prompting strategy in which the model generates intermediate reasoning steps before producing a final response. As the original paper showed, prompting a model to produce these intermediate steps can significantly improve accuracy.&lt;/p&gt;

&lt;p&gt;For example, standard prompting on GSM8K achieves 66.7% accuracy. With CoT prompting, this increases to 73.3%—a 10% relative improvement achieved through simple prompt design alone.&lt;/p&gt;

&lt;p&gt;The following graphic illustrates how CoT chains appear in practice: a sequence of numbered steps, each building on the previous one.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F800%2F1%2ANwSyAs818bWZ3mCEDW2lOg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F800%2F1%2ANwSyAs818bWZ3mCEDW2lOg.png" title="CoT in operation" alt="Visual representation of CoT in operation: the model sequentially progresses through numbered steps (step 1…step n). Each subsequent step depends on previously generated steps. The numbering in the prompt is the only special instruction provided." width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;CoT in operation&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In terms of implementation within &lt;code&gt;CotAgent&lt;/code&gt;, the query is wrapped in a structured prompt:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F800%2F1%2AolcatRJAj5naE6svAHQbOA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F800%2F1%2AolcatRJAj5naE6svAHQbOA.png" title="Structured prompting enforces step-by-step reasoning in CoTAgent" alt="Structured prompting enforces step-by-step reasoning in CoTAgent" width="800" height="237"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Structured prompting enforces step-by-step reasoning in CoTAgent&lt;/em&gt;&lt;/p&gt;
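&lt;p&gt;As a rough illustration of the wrapping step, here is a minimal sketch; the prompt wording below is an assumption, not &lt;code&gt;CotAgent&lt;/code&gt;'s actual text:&lt;/p&gt;

```python
def cot_wrap(query: str) -> str:
    """Wrap a query in a step-by-step prompt, in the spirit of
    CotAgent (the exact wording here is illustrative)."""
    return (
        "Think through this step by step.\n"
        "Number each step (Step 1, Step 2, ...), and only then\n"
        "state the final answer on a line starting with 'Answer:'.\n\n"
        f"Question: {query}"
    )

prompt = cot_wrap("A train travels 60 km in 45 minutes. What is its speed in km/h?")
```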

&lt;p&gt;Benchmark result for qwen3.5:9b (9.7B): CoT achieves &lt;strong&gt;88.7% average accuracy&lt;/strong&gt; across GSM8K (math), MMLU (logic), and ARC-Challenge (reasoning), compared to 81.3% for standard generation. This seven-point gain is attributable solely to the structured prompt; identical weights and temperature were used for both runs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Recommended usage:&lt;/strong&gt; Math word problems; logic puzzles; any multi-step reasoning task where the individual steps are sequential and do not have branches.&lt;/p&gt;

&lt;h3&gt;
  
  
  Decomposed Prompting
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Paper:&lt;/strong&gt; Khot et al. (2022), &lt;a href="https://arxiv.org/abs/2210.02406" rel="noopener noreferrer"&gt;“Decomposed Prompting: A Modular Approach for Solving Complex Tasks”&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Decomposed prompting is an architectural module that splits large problems into smaller sub-problems. Each sub-problem is handled independently while carrying forward accumulated context from earlier steps. Once all sub-problems are processed, their outputs are synthesized into a final result. &lt;code&gt;DecomposedAgent&lt;/code&gt; follows a three-phase process—decomposition, execution, and synthesis—propagating context throughout so that each step can build on prior results.&lt;/p&gt;
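&lt;p&gt;A minimal sketch of the three-phase flow, with the LLM call abstracted as a plain callable (prompt wording is illustrative, not &lt;code&gt;DecomposedAgent&lt;/code&gt;'s actual text):&lt;/p&gt;

```python
from typing import Callable, List

def decomposed_solve(query: str, llm: Callable[[str], str]) -> str:
    """Decomposed prompting sketch: decompose into sub-problems,
    execute each with accumulated context, then synthesize."""
    subs = llm(f"List the sub-problems of: {query}").splitlines()
    context: List[str] = []
    for sub in subs:
        # Each sub-problem sees the answers accumulated so far.
        answer = llm(f"Context so far: {context}\nSolve: {sub}")
        context.append(f"{sub} -> {answer}")
    return llm(f"Synthesize a final answer to '{query}' from: {context}")
```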

&lt;p&gt;&lt;strong&gt;Recommended usage:&lt;/strong&gt; Planning problems; trip itinerary generation; any problem where the ultimate answer consists of multiple distinguishable parts that may be individually addressed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Decomposed prompting achieved only 38.5% average accuracy in benchmark testing. This result requires context. GSM8K primarily evaluates arithmetic reasoning, where decomposing a problem like “what is 47 × 13 + 9?” introduces overhead without improving the model's ability to compute the answer.&lt;/p&gt;

&lt;p&gt;Decomposition is more effective for problems with genuinely separable components (trip planning, multi-section reports, etc.), where each part benefits from focused attention. These strengths are not captured by the benchmark, and the results reflect that mismatch.&lt;/p&gt;

&lt;h3&gt;
  
  
  Least-to-Most Prompting
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Paper:&lt;/strong&gt; Zhou et al. (2022), &lt;a href="https://arxiv.org/abs/2205.10625" rel="noopener noreferrer"&gt;“Least-to-Most Prompting Enables Complex Reasoning in Large Language Models”&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Least-to-most prompting is a strategy that orders sub-questions from simplest to most complex, establishing prerequisite knowledge before tackling harder steps. Unlike decomposed prompting which generates arbitrary sub-problems, it enforces a deliberate progression where each step builds on the last. Knowledge is accumulated iteratively until the model reaches the final question.&lt;/p&gt;
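&lt;p&gt;The simplest-first progression can be sketched as follows, again with the LLM abstracted as a callable and illustrative prompt wording:&lt;/p&gt;

```python
def least_to_most(query, llm):
    """Least-to-most sketch: ask for sub-questions ordered simplest
    first, answer each with prior Q/A pairs as context, then answer
    the original question last."""
    subqs = llm(
        f"List the sub-questions needed for '{query}', simplest first, one per line."
    ).splitlines()
    qa_pairs = []
    for sub in subqs:
        # Prerequisite answers accumulate before harder questions.
        ans = llm(f"Given {qa_pairs}, answer: {sub}")
        qa_pairs.append((sub, ans))
    return llm(f"Given {qa_pairs}, answer the original question: {query}")
```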

&lt;p&gt;&lt;strong&gt;Recommended usage:&lt;/strong&gt; Questions with genuine prerequisites — e.g., “what is x?” before determining “how does x relate to y?”; educational style explanation sequences (“concept ladder”); tasks that require establishing foundational concepts before addressing more complex components.&lt;/p&gt;

&lt;h2&gt;
  
  
  Family 2: Branching Strategies
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Key insight:&lt;/strong&gt; Branching strategies explore multiple reasoning paths simultaneously and choose the best path. ToT scored 76.7% on GSM8K math, compared to 66.7% on GSM8K math with standard generation.&lt;/p&gt;

&lt;p&gt;More LLM calls mean higher latency—but often better answers on hard problems. Take this into consideration when running any branching strategy.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tree of Thoughts (ToT)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Paper:&lt;/strong&gt; Yao et al. (2023), &lt;a href="https://arxiv.org/abs/2305.10601" rel="noopener noreferrer"&gt;“Tree of Thoughts: Deliberate Problem Solving with Large Language Models”&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;ToT is a search-based methodology that evaluates numerous possible reasoning paths concurrently, selecting the best-performing path as determined by evaluation metrics such as distance traveled or the quality of intermediate solutions.&lt;/p&gt;

&lt;p&gt;Similar to chess engines, ToT applies BFS through an expanding tree of possible solutions. The core idea is straightforward: generate multiple partial solutions, evaluate them, prune weaker candidates, and continue exploring the most promising branches.&lt;/p&gt;

&lt;p&gt;Below is an illustration of how ToT generates and eliminates branches: green nodes represent surviving branches, while red nodes indicate those that have been eliminated. The final answer is derived from the highest scoring leaf node.&lt;/p&gt;

&lt;p&gt;A key design decision is how branches are evaluated. Should the same model handle both generation and scoring, or should a stronger model be introduced as a judge? In these benchmarks, the same model was used for both roles, but this is an area worth experimenting with, depending on your accuracy and latency constraints.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F800%2F1%2AQHJPySSkNpDOji9BCKz-Ng.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F800%2F1%2AQHJPySSkNpDOji9BCKz-Ng.png" title="Generating candidate branches at each level" alt="Illustration of how to generate candidate branches at each level; score candidate branches between 0 &amp;amp; 1; prune low-scored candidates; continue exploring surviving high-scored candidates until all levels are exhausted and then generate final answer from most promising leaf node." width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Generating candidate branches at each level&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ToTAgent&lt;/code&gt; implements this as configurable by &lt;code&gt;depth&lt;/code&gt; (default=3) and &lt;code&gt;width&lt;/code&gt; (default=2 branches). At every level, the agent generates a set of candidate next steps, evaluates them using a scoring function, prunes low-scoring options, and expands the remaining candidates into the next level.&lt;/p&gt;
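&lt;p&gt;The generate/score/prune loop can be sketched as a small breadth-first search, with candidate generation and scoring supplied as caller-provided callables. This is an illustration of the idea, not &lt;code&gt;ToTAgent&lt;/code&gt;'s code:&lt;/p&gt;

```python
def tot_search(query, llm, score, depth=3, width=2):
    """Tree of Thoughts sketch: `llm(query, path)` proposes candidate
    next steps for a partial path, and `score(path)` rates a path
    (higher is better); defaults mirror the described depth/width."""
    frontier = [[]]                      # each path is a list of steps
    for _ in range(depth):
        candidates = []
        for path in frontier:
            for step in llm(query, path):          # propose next steps
                candidates.append(path + [step])
        # keep only the `width` highest-scoring partial paths (pruning)
        frontier = sorted(candidates, key=score, reverse=True)[:width]
    return max(frontier, key=score)      # best surviving leaf path
```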

&lt;p&gt;ToT achieved &lt;strong&gt;76.7% accuracy&lt;/strong&gt;—a 10-point improvement over standard generation on GSM8K math problems. This performance comes at a cost: additional LLM calls are required at each step to evaluate candidate paths and their intermediate results, making it roughly 5-8x slower than equivalent CoT queries.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Recommended usage:&lt;/strong&gt; Logic puzzles with multiple solution paths; strategic decision problems; tasks where multiple approaches can be explored and compared.&lt;/p&gt;

&lt;h3&gt;
  
  
  Self-Consistency (Majority Voting)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Paper:&lt;/strong&gt; Wang et al. (2022), &lt;a href="https://arxiv.org/abs/2203.11171" rel="noopener noreferrer"&gt;“Self-Consistency Improves Chain of Thought Reasoning in Language Models”&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Self-Consistency is a sampling method that generates multiple independent reasoning traces and selects a final answer through majority voting. Unlike standard prompting, it relies on sampling k diverse traces at a higher temperature to encourage variation. Each trace produces a candidate answer, and the most frequently occurring answer is selected as the final output.&lt;/p&gt;

&lt;p&gt;The image below illustrates how both Self-Consistency and Monte Carlo Tree Search (MCTS) sample multiple reasoning paths, but differ fundamentally in how those paths are evaluated—majority voting versus UCB1-based exploration-exploitation balancing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F800%2F1%2AUKyufmNfjpFnSizTxD1M2w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F800%2F1%2AUKyufmNfjpFnSizTxD1M2w.png" title="Self-Consistency vs MCTS comparison" alt="Left: Self-Consistency flowchart — sampling k independent traces &amp;amp; selecting most commonly occurring final answer via majority vote. Right: Monte Carlo Tree Search (MCTS) flowchart — sampling new paths through UCB1-based exploration/exploitation tradeoff balancing — both generate multiple possible answers — selection methodology differ significantly." width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Self-Consistency vs MCTS comparison&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ConsistencyAgent&lt;/code&gt; uses &lt;code&gt;k=5&lt;/code&gt; samples at temperature of &lt;code&gt;0.7&lt;/code&gt; by default. It extracts final answers using regex-based pattern matching and selects the most frequent result via &lt;code&gt;counter.most_common()&lt;/code&gt;.&lt;/p&gt;
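&lt;p&gt;The voting mechanism is compact enough to sketch directly. The &lt;code&gt;Answer:&lt;/code&gt; extraction pattern below is an assumption standing in for the agent's actual regex:&lt;/p&gt;

```python
import re
from collections import Counter

def self_consistency(query, llm, k=5):
    """Majority-vote sketch in the spirit of ConsistencyAgent: sample
    k traces, regex out each final answer, return the most common."""
    answers = []
    for _ in range(k):
        trace = llm(query)               # sampled at nonzero temperature
        m = re.search(r"Answer:\s*(.+)", trace)
        if m:
            answers.append(m.group(1).strip())
    return Counter(answers).most_common(1)[0][0]
```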

&lt;p&gt;Self-Consistency matches CoT on both MMLU (96.7%) and GSM8K (76.7%). Its advantage lies in reliability rather than raw accuracy: majority voting across independent reasoning traces reduces the risk of single-trace errors propagating to the final answer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Recommended usage:&lt;/strong&gt; Factual question answering; multiple-choice style questions; problems where arriving at the correct answer via diverse reasoning paths is more important than inspecting a single reasoning trace.&lt;/p&gt;

&lt;h2&gt;
  
  
  Family 3: Reflective Strategies
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Self-Reflection
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Paper:&lt;/strong&gt; Shinn et al. (2023), “Reflexion: Language Agents with Verbal Reinforcement Learning” — &lt;a href="https://arxiv.org/abs/2303.11366" rel="noopener noreferrer"&gt;arXiv:2303.11366&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Self-Reflection is a draft-critique-refine loop in which the model generates an initial answer, critiques it for errors, and then revises it. The Reflexion paper showed that this iterative process can meaningfully improve output quality, even without any gradient updates.&lt;/p&gt;

&lt;p&gt;The image below shows all 3 reflective strategies side by side: Self-Reflection, Debate, and Refinement Loop.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F800%2F1%2AGyy_CHbQa01wEnpRxsWMcA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F800%2F1%2AGyy_CHbQa01wEnpRxsWMcA.png" title="Reflective strategies comparison" alt="Left: Self-Reflection drafts, critiques, and refines until the critique says “CORRECT.” Right: Debate puts PRO and CON agents against each other with a Judge scoring each round. Bottom: Refinement Loop uses a numeric quality gate (0.0–1.0) to decide when to stop iterating." width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Reflective strategies comparison&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;SelfReflectionAgent&lt;/code&gt; runs a draft-critique-refine loop for up to 5 iterations, with early termination when the critique returns “CORRECT” in under 20 characters. If the critique is satisfied on an early pass, subsequent iterations are skipped. This approach helps keep latency low for queries the model answers correctly on the initial pass.&lt;/p&gt;
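&lt;p&gt;A minimal sketch of the draft-critique-refine loop with the described early exit (prompt wording is illustrative):&lt;/p&gt;

```python
def self_reflect(query, llm, max_iters=5):
    """Draft-critique-refine sketch: a short critique containing
    CORRECT ends the loop early, mirroring the described check."""
    draft = llm(f"Answer: {query}")
    for _ in range(max_iters):
        critique = llm(f"Critique this answer for errors: {draft}")
        if "CORRECT" in critique and len(critique) < 20:
            break                        # good enough: stop early
        draft = llm(f"Revise '{draft}' using this critique: {critique}")
    return draft
```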

&lt;p&gt;&lt;strong&gt;Recommended usage:&lt;/strong&gt; Creative writing, high-stakes technical explanations, anything where “good enough on the first try” is insufficient.&lt;/p&gt;

&lt;h3&gt;
  
  
  Adversarial Debate
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Paper:&lt;/strong&gt; Irving et al. (2018), &lt;a href="https://arxiv.org/abs/1805.00899" rel="noopener noreferrer"&gt;“AI Safety via Debate”&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Irving et al. proposed debate as a mechanism for improving AI safety. Two agents present opposing arguments, and a judge (either a human or another LLM) evaluates their merits. The underlying premise is that identifying flaws in weak arguments is often easier than constructing strong ones.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;DebateAgent&lt;/code&gt; conducts multiple rounds of PRO and CON arguments, with a judge evaluating each exchange. Following all rounds, the strongest arguments from both sides are synthesized into a final answer that balances competing perspectives. Context is carried forward between rounds, enabling incremental refinement rather than redundant arguments.&lt;/p&gt;
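&lt;p&gt;A minimal sketch of the round structure, with the LLM abstracted as a callable and prompt wording as an assumption:&lt;/p&gt;

```python
def debate(topic, llm, rounds=3):
    """Adversarial debate sketch: PRO and CON argue for several
    rounds, a judge scores each exchange, and the accumulated
    history is synthesized into a balanced final answer."""
    history = []
    for _ in range(rounds):
        pro = llm(f"Argue FOR '{topic}'. Prior rounds: {history}")
        con = llm(f"Argue AGAINST '{topic}'. Prior rounds: {history}")
        verdict = llm(f"Judge this round. PRO: {pro} CON: {con}")
        # Carrying history forward enables refinement, not repetition.
        history.append((pro, con, verdict))
    return llm(f"Synthesize a balanced answer on '{topic}' from: {history}")
```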

&lt;p&gt;&lt;strong&gt;Recommended usage:&lt;/strong&gt; Controversial or ambiguous subjects; policy analysis; ethics and any subject matter requiring a balanced perspective.&lt;/p&gt;

&lt;h3&gt;
  
  
  Refinement Loop
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Paper:&lt;/strong&gt; Madaan et al. (2023), &lt;a href="https://arxiv.org/abs/2303.17651" rel="noopener noreferrer"&gt;“Self-Refine: Iterative Refinement with Self-Feedback”&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This paper describes a refinement loop similar to self-reflection, but instead of relying on a human-style critique to guide revisions, it uses a machine-based evaluation system with quantifiable quality metrics. These metrics determine whether further refinement is necessary. The loop terminates when a predefined quality metric is reached (&amp;gt; 0.9 by default) or when the maximum number of iterations is exceeded.&lt;/p&gt;

&lt;p&gt;The five-stage complex refinement pipeline consists of sequential stages, each focused on a distinct type of critique: technical accuracy, structure, depth, examples, and polish.&lt;/p&gt;

&lt;p&gt;Each stage targets a distinct aspect of quality, ensuring the model focuses exclusively on improving that dimension rather than attempting to optimize everything at once.&lt;/p&gt;
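&lt;p&gt;The quality gate can be sketched as follows, with the scoring function abstracted as a caller-supplied callable returning 0.0-1.0 (an illustration, not the package's implementation):&lt;/p&gt;

```python
def refine(text, llm, judge, threshold=0.9, max_iters=5):
    """Quality-gated refinement sketch: `judge` returns a 0.0-1.0
    score; the loop stops once the threshold is exceeded (0.9 by
    default, as described) or after max_iters."""
    for _ in range(max_iters):
        if judge(text) > threshold:
            return text                  # quality gate passed
        text = llm(f"Improve this draft: {text}")
    return text
```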

&lt;p&gt;&lt;strong&gt;Recommended usage:&lt;/strong&gt; Highly technical writing; documentation; blog posts; any scenario where production-quality output is required rather than simply a first draft.&lt;/p&gt;

&lt;h2&gt;
  
  
  Family 4: Cross-Domain and Meta Strategies
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Key insight:&lt;/strong&gt; Cross-domain strategies enable sharing knowledge among disciplines, while meta-strategies automatically route queries to the most appropriate reasoning technique without requiring manual selection.&lt;/p&gt;

&lt;h3&gt;
  
  
  Analogy-Based Reasoning
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Paper:&lt;/strong&gt; Gentner (1983), &lt;a href="https://doi.org/10.1111/j.1551-6708.1983.tb00497.x" rel="noopener noreferrer"&gt;“Structure Mapping: A Theoretical Framework for Analogy”&lt;/a&gt;, Cognitive Science&lt;/p&gt;

&lt;p&gt;Gentner's structure-mapping theory proposes that analogical reasoning operates by identifying structural correspondences across domains, rather than relying on surface-level similarity. The &lt;code&gt;AnalogicalAgent&lt;/code&gt; builds on this idea through three phases: (1) identify the underlying structure independent of domain specifics, (2) generate analogous solutions from different domains that share that structure, (3) select the most effective analogy and apply its solution approach.&lt;/p&gt;

&lt;p&gt;This process reduces reliance on memorized patterns. By focusing on underlying structure, the model learns &lt;em&gt;why&lt;/em&gt; a solution works, rather than simply recalling &lt;em&gt;what&lt;/em&gt; worked before.&lt;/p&gt;
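&lt;p&gt;The three phases can be sketched as three chained LLM calls (prompt wording is illustrative, not &lt;code&gt;AnalogicalAgent&lt;/code&gt;'s actual text):&lt;/p&gt;

```python
def analogical(problem, llm):
    """Structure-mapping sketch: abstract the structure, propose
    analogies from other domains, then apply the best one."""
    structure = llm(f"Describe the abstract structure of: {problem}")
    analogies = llm(f"Name problems in other domains with this structure: {structure}")
    return llm(
        f"Pick the best analogy from [{analogies}] and use its "
        f"solution approach to solve: {problem}"
    )
```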

&lt;p&gt;&lt;strong&gt;Recommended usage&lt;/strong&gt;: Solving problems that are structurally similar to prior ones, even if they differ superficially; transferring knowledge across domains; explaining complex concepts through analogy.&lt;/p&gt;

&lt;h3&gt;
  
  
  Socratic Questioning
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Paper:&lt;/strong&gt; Paul &amp;amp; Elder (2007), &lt;a href="https://www.criticalthinking.org/" rel="noopener noreferrer"&gt;“The Art of Socratic Questioning”&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Socratic Method:&lt;/strong&gt; Do not answer the question directly. Instead, ask follow-up questions that reduce ambiguity in the solution space.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;SocraticAgent&lt;/code&gt; repeatedly asks questions and receives model responses, continuing until it reaches a limit of five question-response exchanges. It then synthesizes the collected information into a final answer. A deduplication or normalization step helps prevent repeated queries that differ only in wording.&lt;/p&gt;
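&lt;p&gt;The question-answer loop with deduplication can be sketched as follows (normalization via casefolding is an assumption; prompt wording is illustrative):&lt;/p&gt;

```python
def socratic(query, llm, max_rounds=5):
    """Socratic loop sketch: ask up to five clarifying questions,
    skipping reworded repeats, then synthesize a final answer."""
    seen, exchanges = set(), []
    for _ in range(max_rounds):
        q = llm(f"Ask one clarifying question about '{query}' given {exchanges}")
        key = q.strip().casefold()
        if key in seen:
            continue                     # dedupe near-duplicate questions
        seen.add(key)
        exchanges.append((q, llm(f"Answer briefly: {q}")))
    return llm(f"Synthesize an answer to '{query}' from: {exchanges}")
```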

&lt;p&gt;&lt;strong&gt;Recommended usage:&lt;/strong&gt; Philosophy; ethics; deep technical knowledge; any field requiring the model to “know” something as opposed to merely answering it.&lt;/p&gt;

&lt;h3&gt;
  
  
  ReAct (Reason + Act)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Paper:&lt;/strong&gt; Yao et al. (2022), &lt;a href="https://arxiv.org/abs/2210.03629" rel="noopener noreferrer"&gt;“ReAct: Synergizing Reasoning and Acting in Language Models”&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;ReAct is a conceptual framework that interweaves reasoning steps with tool invocations, allowing the model to ground its thinking in external information. In practice, the model decides what action to take, calls a tool such as a web search engine, examines the result, updates its reasoning, and repeats the cycle until it reaches a satisfactory answer. Current tools include web scraping, accessing Wikipedia via an API call, and a calculator interface, with mock implementations available for offline execution scenarios.&lt;/p&gt;
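&lt;p&gt;The reason-act cycle can be sketched with a simple line protocol. The &lt;code&gt;ACT:&lt;/code&gt;/&lt;code&gt;FINISH:&lt;/code&gt; format below is an assumption for illustration, and tools are passed in as plain callables:&lt;/p&gt;

```python
def react(query, llm, tools, max_steps=5):
    """ReAct sketch: the model emits either 'ACT: tool: input' or
    'FINISH: answer'; each observation feeds the next thought."""
    transcript = []
    for _ in range(max_steps):
        step = llm(f"Question: {query}\nSo far: {transcript}")
        if step.startswith("FINISH:"):
            return step.removeprefix("FINISH:").strip()
        if step.startswith("ACT:"):
            tool, _, arg = step.removeprefix("ACT:").strip().partition(": ")
            observation = tools[tool](arg)   # ground reasoning in a tool call
            transcript.append((step, observation))
    return "no answer within step budget"
```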

&lt;p&gt;ReAct achieved 70.0% accuracy on ARC-Challenge (science reasoning). While not the highest on this particular benchmark, it enabled tool use, allowing the model to search the Internet for required information.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Recommended usage&lt;/strong&gt;: Fact-checking; current events queries; mathematical calculations; tasks where access to grounded, external information is important.&lt;/p&gt;

&lt;h2&gt;
  
  
  Auto Router: MetaReasoningAgent
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Key insight:&lt;/strong&gt; A single LLM invocation allows &lt;code&gt;MetaReasoningAgent&lt;/code&gt; to classify each input into one of eleven categories and route it to the most appropriate strategy, without human intervention.&lt;/p&gt;

&lt;p&gt;Each of the sixteen strategies works best on particular kinds of tasks, which normally means someone has to pick the right one for each query. &lt;code&gt;MetaReasoningAgent&lt;/code&gt; removes that burden by making the selection automatically.&lt;/p&gt;

&lt;p&gt;The diagram below shows how each category maps to its corresponding strategy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F800%2F1%2ASSObpiuAEGr1s3E7oVbKGA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F800%2F1%2ASSObpiuAEGr1s3E7oVbKGA.png" title="MetaReasoningAgent classification diagram" alt="Classification occurs using a single LLM invocation returning CATEGORY, CONFIDENCE, and REASON." width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;MetaReasoningAgent classification diagram&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;MetaReasoningAgent&lt;/code&gt; instantiates the selected strategy class and passes control to it, along with all event objects for visualization.&lt;/p&gt;
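&lt;p&gt;In outline, the classify-then-dispatch step looks something like the sketch below. The class and category names are hypothetical stand-ins, not the package's real identifiers:&lt;/p&gt;

```python
# Hypothetical strategy classes standing in for the real implementations.
class CoTAgent:
    def run(self, query: str) -> str:
        return f"CoT reasoning over: {query}"

class ToTAgent:
    def run(self, query: str) -> str:
        return f"ToT search over: {query}"

# Category-to-strategy routing table (two of the eleven categories shown).
ROUTES = {"math": CoTAgent, "logic_puzzle": ToTAgent}

def classify(query: str) -> str:
    """Stand-in for the single LLM classification call, which returns
    CATEGORY (plus CONFIDENCE and REASON in the real system)."""
    return "math" if any(ch.isdigit() for ch in query) else "logic_puzzle"

def route(query: str) -> str:
    strategy_cls = ROUTES[classify(query)]  # pick the strategy class
    return strategy_cls().run(query)        # instantiate and hand over control

route("What is 17 * 23?")  # dispatched to CoTAgent
```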

&lt;p&gt;To use this capability, specify a model such as &lt;code&gt;gemma3:270m+meta&lt;/code&gt; or &lt;code&gt;gemma3:270m+auto&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;In practice, routing is generally intuitive: math problems are directed to CoT, logic puzzles to ToT, philosophical questions to Socratic Questioning, and controversial topics to Adversarial Debate.&lt;/p&gt;

&lt;p&gt;The trade-off is reduced control over strategy-specific hyperparameters in exchange for automatic routing aligned with the problem type.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Strategy Should You Pick? Benchmark Results (March 2026)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Key insight:&lt;/strong&gt; CoT performs best on average (88.7%) across diverse tasks. ReAct excels when tool use is available (70.0% on ARC-Challenge). ToT and Self-Consistency tie on GSM8K math at 76.7%.&lt;/p&gt;

&lt;p&gt;These results are based on 4,200 evaluations across 11 strategies using &lt;code&gt;qwen3.5:9b&lt;/code&gt;, collected as of March 2026. All 16 strategies are implemented and production-ready. However, the benchmarks shown below focus on the 11 that produce a single extractable answer. The remaining five are generation-focused and not suited to multiple-choice evaluation.&lt;/p&gt;

&lt;p&gt;The heat map and bar chart below provide a complete view of the results.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F800%2F1%2AlkHAnyNpsABYEqnoueCr9g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F800%2F1%2AlkHAnyNpsABYEqnoueCr9g.png" title="Benchmark results heatmap and bar chart" alt="Left: accuracy heatmap across GSM8K, MMLU, and ARC-Challenge for each strategy. Right: average accuracy bar chart. CoT wins overall at 88.7%." width="800" height="444"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Benchmark results heatmap and bar chart&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;The short version:&lt;/strong&gt; CoT wins on average across diverse tasks. Self-Consistency and ToT beat it on specific math benchmarks. ReAct dominates on factual/science tasks. Self-Reflection and Refinement Loop are not well captured by these benchmarks, as they primarily improve generation quality rather than multiple-choice accuracy.&lt;/p&gt;

&lt;p&gt;For most queries, start with &lt;code&gt;+cot&lt;/code&gt;. If you’re solving logic puzzles or planning problems, try &lt;code&gt;+tot&lt;/code&gt;. If you need factually grounded responses, use &lt;code&gt;+react&lt;/code&gt;. If you need polished, high-quality output rather than a quick answer, use &lt;code&gt;+refinement&lt;/code&gt;. When in doubt, &lt;code&gt;+meta&lt;/code&gt; will route the query automatically.&lt;/p&gt;

&lt;p&gt;In my experience building agent-reasoning, the most surprising finding is how much prompt structure alone can improve performance. For example, &lt;code&gt;qwen3.5:9b&lt;/code&gt; improves from 81.3% to 88.7% average accuracy simply by prompting it to produce numbered reasoning steps.&lt;/p&gt;

&lt;p&gt;As of March 2026, all 16 strategies are production-ready and have been evaluated across 4,200 benchmark runs.&lt;/p&gt;

&lt;p&gt;You can &lt;a href="https://github.com/oracle-devrel/oracle-ai-developer-hub/tree/main/apps/agent-reasoning" rel="noopener noreferrer"&gt;find the repository here&lt;/a&gt;. Install with &lt;code&gt;pip install agent-reasoning&lt;/code&gt; or &lt;code&gt;uv add agent-reasoning&lt;/code&gt;. The commands to get started:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F800%2F1%2AXo6o2jGEUekHQjIkVWUI_A.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F800%2F1%2AXo6o2jGEUekHQjIkVWUI_A.png" title="Getting started commands" alt="Getting started commandsInstallation and launching agent-reasoning in seconds to access a TUI with 16 reasoning agents." width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Getting started commands&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The TUI provides a 16-agent sidebar, live streaming, and a step-through debugger. Arena mode runs all 16 agents simultaneously on the same query in a 4×4 grid.&lt;/p&gt;

&lt;p&gt;If this is useful, a GitHub star is always appreciated.&lt;/p&gt;




&lt;h2&gt;
  
  
  Frequently Asked Questions
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Do I need to modify my existing code to use agent-reasoning?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;No. The interceptor is a drop-in replacement for the Ollama client. Just change the model name string by appending &lt;code&gt;+strategy&lt;/code&gt; (e.g., &lt;code&gt;gemma3:270m+cot&lt;/code&gt;) and the interceptor handles everything else. Existing LangChain pipelines, web UIs, and scripts work without any other changes.&lt;/p&gt;
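&lt;p&gt;The model-name convention itself is easy to picture. The helper below is an illustrative sketch of how a &lt;code&gt;+strategy&lt;/code&gt; suffix could be split off, not the interceptor's actual code:&lt;/p&gt;

```python
# Sketch only: how a "+strategy" suffix can be split from a model name.
# The real interceptor does this inside its Ollama-compatible client;
# the helper name here is illustrative, not part of the package's API.

def parse_model_name(model: str):
    """Split 'gemma3:270m+cot' into ('gemma3:270m', 'cot')."""
    if "+" in model:
        base, strategy = model.rsplit("+", 1)
        return base, strategy
    return model, None  # plain model name: no strategy requested

parse_model_name("gemma3:270m+cot")
```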

&lt;p&gt;&lt;strong&gt;Which strategy should I start with?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Start with &lt;code&gt;+cot&lt;/code&gt; (Chain of Thought). It scored the highest average accuracy (88.7%) across our benchmarks and adds minimal latency. If you are unsure, use &lt;code&gt;+meta&lt;/code&gt; and let the auto-router pick the best strategy for you.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why were only 11 of the 16 strategies benchmarked?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The benchmarks (GSM8K, MMLU, ARC-Challenge) measure multiple-choice accuracy, which works well for strategies that produce a single extractable answer. The remaining five strategies are generation-focused (e.g., Refinement Loop, MCTS) and their strengths in output quality are not captured by multiple-choice evaluations. All 16 strategies are fully implemented and production-ready.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Can I use this with models other than Ollama-served models?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Currently the interceptor targets the Ollama API. Since it exposes the same &lt;code&gt;.generate()&lt;/code&gt; and &lt;code&gt;.chat()&lt;/code&gt; endpoints, any Ollama-compatible client works out of the box. Support for additional inference backends is on the roadmap.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How much slower are branching strategies compared to CoT?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;ToT is roughly 5-8x slower than CoT because it generates and evaluates multiple candidate branches at each level. Self-Consistency (k=5 samples) adds similar overhead. For latency-sensitive applications, stick with sequential strategies (CoT, Least-to-Most) and reserve branching strategies for problems where accuracy matters more than speed.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Created by Nacho Martinez, Data Scientist at Oracle. Find Nacho on &lt;a href="https://github.com/jasperan" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; and &lt;a href="https://linkedin.com/in/jasperan" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/em&gt;, or visit the &lt;a href="https://www.oracle.com/developer/resources/" rel="noopener noreferrer"&gt;Oracle AI Developer page&lt;/a&gt; for more resources.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>opensource</category>
      <category>python</category>
    </item>
    <item>
      <title>Agent Memory: Why Your AI Has Amnesia and How to Fix It</title>
      <dc:creator>Wojtek Pluta</dc:creator>
      <pubDate>Fri, 17 Apr 2026 09:08:10 +0000</pubDate>
      <link>https://forem.com/oracledevs/agent-memory-why-your-ai-has-amnesia-and-how-to-fix-it-475e</link>
      <guid>https://forem.com/oracledevs/agent-memory-why-your-ai-has-amnesia-and-how-to-fix-it-475e</guid>
      <description>&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Today's AI agents forget everything between conversations. Every interaction starts from zero, with no recall of who you are or what you've discussed before.&lt;/li&gt;
&lt;li&gt;Agent memory isn't about bigger context windows. It's about a persistent, evolving state that works across sessions.&lt;/li&gt;
&lt;li&gt;The field has converged on four memory types (working, procedural, semantic, episodic) that map directly to how human memory works.&lt;/li&gt;
&lt;li&gt;Building agent memory at enterprise scale is fundamentally a database problem. You need vectors, graphs, relational data, and ACID transactions working together.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  What Is Agent Memory and Why Does Your AI Agent Need It?
&lt;/h2&gt;

&lt;p&gt;You've spent weeks building an AI customer service agent. It handles complaints, processes refunds, even cracks the occasional joke when the moment's right. A customer calls back the next day, and your agent has no idea who they are. The conversation from yesterday? Gone. The preference they mentioned twice last week? Never happened. Every single interaction starts from scratch.&lt;/p&gt;

&lt;p&gt;This isn't a bug in your code. It's a fundamental design problem in how we build AI agents today.&lt;/p&gt;

&lt;p&gt;LangChain put it well: '&lt;em&gt;Imagine if you had a coworker who never remembered what you told them, forcing you to keep repeating that information&lt;/em&gt;'. From a coworker, that's frustrating; in an AI application, that forgetfulness is a dealbreaker.&lt;/p&gt;

&lt;p&gt;At Oracle, we've been deep in this problem while supporting customers who build AI applications. And here's what we've found: the solution isn't bigger context windows or more verbose prompts. It's proper memory infrastructure, the kind that databases have been providing for decades.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agent memory&lt;/strong&gt; is the set of system components and infrastructure that gives AI agents a persistent, evolving state across conversations and sessions. It enables agents to store, retrieve, update, and forget information over time: learning user preferences, retaining context from past interactions, and adapting behavior based on accumulated experience. Without it, every interaction starts from zero.&lt;/p&gt;

&lt;p&gt;This article breaks down what agent memory actually is, how it works under the hood, the frameworks shaping the field, and guidance on how to build it for production. Whether you're prototyping your first agent or scaling one to thousands of users, this is the foundation you need to get right.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Bigger Context Windows Aren't the Answer
&lt;/h2&gt;

&lt;p&gt;The rapid expansion of context windows, now ranging from hundreds of thousands to millions of tokens, has created a convincing illusion across the industry: that with this much capacity available, the memory problem is effectively solved and retrieval-based mechanisms are behind us. That assumption is wrong.&lt;/p&gt;

&lt;p&gt;The industry calls it '&lt;a href="https://mem0.ai/blog/memory-in-agents-what-why-and-how" rel="noopener noreferrer"&gt;the illusion of memory&lt;/a&gt;'. Stuffing more tokens into a prompt isn't memory. It's a bigger Post-it note: more space to scribble on, but it still goes in the bin when the conversation ends. Memory means the notes survive. Here's why that distinction matters:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Context windows degrade before they fill up.&lt;/strong&gt; Most models break well before their advertised limits. A model claiming 200K tokens typically becomes unreliable around 130K, with sudden performance drops rather than gradual degradation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;There's no sense of importance.&lt;/strong&gt; Context windows treat every token equally. Your name gets the same weight as a throwaway comment from three weeks ago. There's no prioritisation, no salience, no relevance filtering.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nothing persists.&lt;/strong&gt; Close the session and it's all gone. Every conversation starts from zero.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The cost scales linearly.&lt;/strong&gt; Maintaining full context across a long agent lifetime gets expensive fast. You're paying per token, and most of those tokens are irrelevant noise.&lt;/p&gt;

&lt;p&gt;Memory is not only about storing chat history or passing more tokens into the context window. It's about building a persistent state, stored in an external system, that evolves and informs every interaction the agent has, even weeks or months apart.&lt;/p&gt;

&lt;p&gt;Another misconception to address early on is that RAG (retrieval augmented generation) is agent memory. &lt;strong&gt;RAG brings external knowledge into the prompt at inference time&lt;/strong&gt;. It's great for grounding responses with facts from documents. But RAG is fundamentally stateless. It has no awareness of previous interactions, user identity, or how the current query relates to past conversations. Memory brings continuity. Put simply: RAG helps an agent answer better. Memory helps it learn and adapt. You need both.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Concept: A Mental Model for Agent Memory
&lt;/h2&gt;

&lt;p&gt;Let me give you a framework that makes all of this click. It maps directly to how your own brain works.&lt;/p&gt;

&lt;p&gt;In 2023, researchers at Princeton published the &lt;a href="https://arxiv.org/pdf/2309.02427" rel="noopener noreferrer"&gt;CoALA framework&lt;/a&gt; (Cognitive Architectures for Language Agents). It defines four types of memory, drawn from cognitive science and the &lt;a href="https://arxiv.org/pdf/2205.03854" rel="noopener noreferrer"&gt;SOAR architecture&lt;/a&gt; of the 1980s. Every major framework in the field builds on this taxonomy, and it answers a fundamental question: what options are available for adding memory to an AI agent?&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Memory Type&lt;/th&gt;
&lt;th&gt;Human Equivalent&lt;/th&gt;
&lt;th&gt;What It Does in an Agent&lt;/th&gt;
&lt;th&gt;Example Implementation&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Working Memory&lt;/td&gt;
&lt;td&gt;Your brain's scratch pad: holding what you're actively thinking about&lt;/td&gt;
&lt;td&gt;Current conversation context, retrieved data, intermediate reasoning&lt;/td&gt;
&lt;td&gt;Conversation buffers, sliding windows, rolling summaries, scratchpads&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Procedural Memory&lt;/td&gt;
&lt;td&gt;Muscle memory: knowing how to ride a bike without thinking&lt;/td&gt;
&lt;td&gt;System prompts, agent code, decision logic&lt;/td&gt;
&lt;td&gt;Prompt templates, tool definitions, agent configs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Semantic Memory&lt;/td&gt;
&lt;td&gt;General knowledge: facts and concepts accumulated over your lifetime&lt;/td&gt;
&lt;td&gt;User preferences, extracted facts, knowledge bases&lt;/td&gt;
&lt;td&gt;Vector stores with similarity search&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Episodic Memory&lt;/td&gt;
&lt;td&gt;Autobiographical memory: recalling specific experiences from your past&lt;/td&gt;
&lt;td&gt;Past action sequences, conversation logs, few-shot examples&lt;/td&gt;
&lt;td&gt;Timestamped logs with metadata filtering&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Think of it this way. When you're in a meeting, your working memory holds what's being discussed right now. Your procedural memory knows how to take notes and when to speak up. Your semantic memory reminds you that Sarah's team prefers Slack over email. Your episodic memory recalls that the last time you proposed this feature, the VP shut it down because of budget constraints.&lt;/p&gt;

&lt;p&gt;An agent needs all four types working together. Most agents today only have working memory: whatever fits in the current context window. That's like trying to do your job using nothing but a whiteboard that gets wiped clean every evening.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://lilianweng.github.io/posts/2023-06-23-agent/" rel="noopener noreferrer"&gt;Lilian Weng's influential formula&lt;/a&gt; captures the big picture simply:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agent = LLM + Memory + Planning + Tool Use.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Her short-term memory maps to CoALA's working memory. Her long-term memory encompasses the other three types.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.langchain.com/oss/python/concepts/memory" rel="noopener noreferrer"&gt;LangChain&lt;/a&gt; adds a practical layer with two approaches to memory updates:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Hot path memory&lt;/strong&gt;: the agent explicitly decides to remember something before responding. This is what ChatGPT does. It adds latency but ensures immediate memory updates.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Background memory&lt;/strong&gt;: a separate process extracts and stores memories during or after the conversation. No latency hit, but memories aren't available straight away.&lt;/li&gt;
&lt;/ul&gt;
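&lt;p&gt;The two approaches can be contrasted in a short sketch (illustrative stubs, not LangChain's API):&lt;/p&gt;

```python
import queue

memory_store = []

def extract_memory(message: str) -> str:
    """Stand-in for the LLM call that distills a durable fact from a message."""
    return f"fact: {message}"

def hot_path_reply(message: str) -> str:
    # Hot path: the memory is written before the reply goes out.
    # Adds latency, but the fact is available immediately.
    memory_store.append(extract_memory(message))
    return f"reply to: {message}"

def background_reply(message: str, tasks: queue.Queue) -> str:
    # Background: reply immediately; a separate process extracts later.
    tasks.put(message)
    return f"reply to: {message}"

def drain_background_tasks(tasks: queue.Queue) -> None:
    # Runs after (or during) the conversation, off the request path.
    while not tasks.empty():
        memory_store.append(extract_memory(tasks.get()))

tasks = queue.Queue()
hot_path_reply("I prefer oat milk")          # memory stored before replying
background_reply("I live in Lisbon", tasks)  # extraction deferred
drain_background_tasks(tasks)                # memory lands later
```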

&lt;p&gt;The key insight: memory is application-specific. What a coding agent remembers about a user is very different from what a research agent might store.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://arxiv.org/pdf/2310.08560" rel="noopener noreferrer"&gt;Letta&lt;/a&gt; (formerly MemGPT) takes a different angle entirely, borrowing from operating systems. Treat the context window like RAM and external storage like a disk. The agent pages data between these tiers, creating a 'virtual context' that feels unlimited. The agent manages its own memory using tools: it decides what to remember, what to update, and what to archive.&lt;/p&gt;

&lt;p&gt;The distinction between programmatic memory (the developer decides what to store) and agentic memory (the agent itself decides) matters. The field is moving towards the latter: agents that manage their own memory adapt to individual users without requiring developer intervention for each new use case. The split between programmatic and agent-triggered memory operations isn't always clear-cut, though, and we've seen various approaches work well in particular use cases and domains. In a future post, we will go into the common patterns and design principles of memory engineering.&lt;/p&gt;

&lt;p&gt;Consider again the customer service agent from the start of this article. Customer service is the most common use case for agents in production (26.5% of deployments, per &lt;a href="https://www.langchain.com/state-of-agent-engineering" rel="noopener noreferrer"&gt;LangChain's 2025 industry survey&lt;/a&gt;), and it demands all four memory types working together. Episodic memory recalls past tickets and interactions. Semantic memory stores customer preferences and account details. Working memory tracks the live conversation. Procedural memory encodes resolution workflows and escalation rules. All four memory types enable the chatbot to perform well on continuous tasks and adapt to new information.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Landscape: Frameworks and Open-Source Libraries
&lt;/h2&gt;

&lt;p&gt;What are the commonly used libraries and open-source projects for agent memory? The ecosystem has matured quickly. Here are the projects shaping how developers build agent memory today.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Project&lt;/th&gt;
&lt;th&gt;What It Does&lt;/th&gt;
&lt;th&gt;Open Source&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;LangChain / LangMem / LangGraph&lt;/td&gt;
&lt;td&gt;Agent orchestration with built-in memory abstractions. Hot path and background memory. LangMem SDK handles extraction and consolidation.&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Letta (MemGPT)&lt;/td&gt;
&lt;td&gt;Stateful agent platform with OS-inspired memory hierarchy. Agents self-edit their own memory via tool calls.&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Zep / Graphiti&lt;/td&gt;
&lt;td&gt;Temporal knowledge graphs for relationship-aware memory. Bi-temporal modelling with sub-200ms retrieval.&lt;/td&gt;
&lt;td&gt;Yes (Graphiti)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Mem0&lt;/td&gt;
&lt;td&gt;Self-improving memory layer with vector and graph architecture. Automatic memory extraction and conflict resolution.&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;langchain-oracledb&lt;/td&gt;
&lt;td&gt;Official LangChain integration for Oracle Database. Vector stores, hybrid search, and embeddings with enterprise-grade security.&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The orchestration library matters, but at scale, the storage backend matters more. Most of these frameworks are database-agnostic by design. The question isn't which framework to use. It's what database sits underneath it.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F02%2FAgent-Memory_-Why-Your-AI-Has-Amnesia-and-How-to-Fix-It-visual-selection-5-1-edited.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F02%2FAgent-Memory_-Why-Your-AI-Has-Amnesia-and-How-to-Fix-It-visual-selection-5-1-edited.png" alt="Illustration related to agent memory architectures" width="800" height="644"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Deep Dive: How Agent Memory Actually Works
&lt;/h2&gt;

&lt;p&gt;What are the common storage options for agent memory? Production systems today use three paradigms working together. You need to understand all three.&lt;/p&gt;

&lt;h3&gt;
  
  
  Vector stores for semantic memory
&lt;/h3&gt;

&lt;p&gt;This is the most common approach. You take text, convert it to embeddings (typically 128 to 2,048 dimensions, depending on the embedding model used), and store them in a vector database. Retrieval works through vector search against indexes such as HNSW (hierarchical navigable small world) graphs: the system finds the stored memories whose embeddings are semantically closest to the current query.&lt;/p&gt;
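&lt;p&gt;Stripped of the database machinery, the core retrieval operation is just cosine similarity over embedding vectors. A toy sketch with hand-made three-dimensional 'embeddings' in place of a real model:&lt;/p&gt;

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 means identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy three-dimensional "embeddings"; real models emit 128 to 2,048 dimensions.
memories = {
    "user prefers oat milk": [0.9, 0.1, 0.0],
    "user lives in Lisbon": [0.0, 0.2, 0.9],
}
query_vec = [0.8, 0.2, 0.1]  # pretend embedding of "what milk does the user like?"

# Brute-force nearest neighbour over every memory.
best = max(memories, key=lambda m: cosine_similarity(memories[m], query_vec))
```

&lt;p&gt;A production system never brute-forces this comparison; an HNSW index narrows the search to a small neighbourhood of candidate vectors.&lt;/p&gt;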

&lt;p&gt;It's fast and simple but limited. Vector search captures semantic similarity well, yet misses structural relationships.&lt;/p&gt;

&lt;h3&gt;
  
  
  Knowledge graphs for relationship memory
&lt;/h3&gt;

&lt;p&gt;Vector search can tell you that a user mentioned coffee. But it can't tell you that they prefer a specific shop, ordered last Tuesday, and always get oat milk. That chain of connections (person, preference, place, time, detail) is a graph problem.&lt;/p&gt;

&lt;p&gt;Knowledge graphs store facts as entities and relationships, with edges capturing how they connect. Add bi-temporal modelling (tracking both when events happened and when the system learned about them) and you can ask not just 'what do we know?' but 'what did we know at any point in time?'&lt;/p&gt;

&lt;p&gt;Frameworks like Zep's Graphiti implement this pattern, &lt;a href="https://arxiv.org/html/2501.13956v1" rel="noopener noreferrer"&gt;achieving 94.8% accuracy&lt;/a&gt; on the Deep Memory Retrieval benchmark. Oracle Database supports property graphs natively through SQL/PGQ, so graph queries run inside the same engine as your vector search and relational data.&lt;/p&gt;

&lt;h3&gt;
  
  
  Structured databases for factual memory
&lt;/h3&gt;

&lt;p&gt;Relational databases store the structured data: user profiles, access controls, session metadata, audit logs. As &lt;a href="https://www.cognee.ai/blog/fundamentals/vectors-and-graphs-in-practice" rel="noopener noreferrer"&gt;Cognee&lt;/a&gt; puts it: 'Vectors deliver high-recall semantic candidates (what feels similar), while graphs provide the structure to trace relationships across entities and time (how things relate).' Relational tables anchor both with the transactional guarantees that production systems demand.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why does a converged database change the equation?
&lt;/h3&gt;

&lt;p&gt;Most teams stitch this together with separate databases: Pinecone for vectors, Neo4j for graphs, Postgres for relational data. Three security models, three failure modes, no shared transaction boundaries. If one write fails, your agent's memory is in an inconsistent state.&lt;/p&gt;

&lt;p&gt;Oracle's converged database runs all three paradigms natively inside a single engine:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AI Vector Search&lt;/strong&gt; for embedding storage and similarity retrieval&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SQL/PGQ&lt;/strong&gt; for property graph queries across entity relationships&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Relational tables&lt;/strong&gt; for structured data, metadata, and audit trails&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;JSON Document Store&lt;/strong&gt; for flexible, schema-free memory objects&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All four share the same ACID transaction boundary and the same security model. Row-level security, encryption, and access controls apply uniformly across every data type. One engine, one transaction, one security policy: the three paradigms above become three views of the same underlying data.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Four Memory Operations
&lt;/h2&gt;

&lt;p&gt;Every memory system runs on four core operations:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;ADD&lt;/strong&gt;: Store a completely new fact&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;UPDATE&lt;/strong&gt;: Modify an existing memory when new information complements or corrects it&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DELETE&lt;/strong&gt;: Remove a memory when new information contradicts it&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SKIP&lt;/strong&gt;: Do nothing when information is a repeat or irrelevant&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Modern memory systems delegate these decisions to the LLM itself rather than using brittle if/else logic. The extraction phase ingests context sources (the latest exchange, a rolling summary, recent messages) and uses the LLM to extract candidate memories. The update phase compares each new fact against the most similar entries in the vector database, using conflict detection to determine whether to add, merge, update, or skip.&lt;/p&gt;
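&lt;p&gt;A minimal sketch of that decision loop, with a rule-based stub standing in for the LLM's verdict:&lt;/p&gt;

```python
# Long-term store keyed by memory id; 'key' below is the id of the most
# similar existing memory found by vector search (None if nothing matched).
memories = {"m1": "user prefers tea"}

def llm_decide(new_fact: str, existing) -> str:
    """Stand-in for the LLM call that picks ADD / UPDATE / DELETE / SKIP."""
    if existing is None:
        return "ADD"
    if new_fact == existing:
        return "SKIP"
    if "no longer" in new_fact:
        return "DELETE"
    return "UPDATE"

def reconcile(key, new_fact: str) -> str:
    existing = memories.get(key) if key else None
    op = llm_decide(new_fact, existing)
    if op == "ADD":
        memories[f"m{len(memories) + 1}"] = new_fact
    elif op == "UPDATE":
        memories[key] = new_fact
    elif op == "DELETE":
        del memories[key]
    return op  # SKIP falls through: do nothing

reconcile("m1", "user prefers coffee")           # UPDATE
reconcile(None, "user works at Acme")            # ADD
reconcile("m1", "user no longer drinks coffee")  # DELETE
```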

&lt;h3&gt;
  
  
  Retrieval: how agents recall
&lt;/h3&gt;

&lt;p&gt;Because of the heterogeneous nature of the data that agents encounter, production systems combine multiple retrieval approaches:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Semantic search&lt;/strong&gt;: vector similarity (cosine distance) for meaning-based matching&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Temporal search&lt;/strong&gt;: bi-temporal models enable point-in-time queries ('What did the user prefer last March?')&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Graph traversal&lt;/strong&gt;: multi-hop queries across knowledge graph edges for complex reasoning&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hybrid retrieval&lt;/strong&gt;: combining keyword (full-text) and semantic (vector) search in a single query, which is critical for retrieving specific facts like names, dates, or project codes alongside conceptually related memories&lt;/li&gt;
&lt;/ul&gt;
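&lt;p&gt;Hybrid retrieval is often implemented as a weighted blend of the two signals. The weighting below is purely illustrative; engines such as Oracle's hybrid search handle this internally:&lt;/p&gt;

```python
def keyword_score(query: str, doc: str) -> float:
    """Crude full-text signal: fraction of query terms present in the doc."""
    terms = query.lower().split()
    return sum(term in doc.lower() for term in terms) / len(terms)

def hybrid_score(query: str, doc: str, semantic: float, alpha: float = 0.5) -> float:
    # alpha balances semantic (vector) against keyword (full-text) relevance.
    return alpha * semantic + (1 - alpha) * keyword_score(query, doc)

# 'semantic' scores would come from vector search; hard-coded for the sketch.
docs = {
    "project ORION kickoff notes": 0.40,
    "notes about space missions": 0.85,
}
query = "ORION project timeline"
ranked = sorted(docs, key=lambda d: hybrid_score(query, d, docs[d]), reverse=True)
```

&lt;p&gt;Here the document containing the literal code name wins despite its lower semantic score, which is exactly why hybrid retrieval matters for names, dates, and project codes.&lt;/p&gt;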

&lt;h3&gt;
  
  
  Forgetting: the underrated operation
&lt;/h3&gt;

&lt;p&gt;Effective forgetting can be implemented with decay functions applied to vector relevance scores: by weighting the results of vector search, old and unreferenced embeddings naturally fade from the agent's attention, imitating how human memory decays. In a database, this is straightforward. A recency-weighted scoring function multiplies semantic similarity by an exponential decay factor based on time since last access. The result: memories that haven't been recalled recently lose salience gradually, just like human recall.&lt;/p&gt;
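&lt;p&gt;The recency-weighted score described above is simply similarity multiplied by an exponential decay term. A sketch, with an assumed 72-hour half-life:&lt;/p&gt;

```python
def recency_weighted_score(similarity: float, hours_since_access: float,
                           half_life_hours: float = 72.0) -> float:
    """Semantic similarity scaled by exponential time decay: a memory
    untouched for one half-life loses half of its salience."""
    decay = 0.5 ** (hours_since_access / half_life_hours)
    return similarity * decay

fresh = recency_weighted_score(0.9, hours_since_access=1)    # barely decayed
stale = recency_weighted_score(0.9, hours_since_access=720)  # ten half-lives on
```

&lt;p&gt;The half-life is a tuning knob: shorter for fast-moving conversational detail, longer for stable preferences.&lt;/p&gt;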

&lt;p&gt;Some systems take a different approach entirely. Old facts are invalidated but never discarded, preserving historical accuracy for audit trails. The right strategy depends on your use case, but both are fundamentally database operations.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Enterprise Reality: What Changes at Scale
&lt;/h2&gt;

&lt;p&gt;Here's where the gap between demo and production becomes a chasm.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://view.ceros.com/kpmg-design/kpmg-genai-study/p/1" rel="noopener noreferrer"&gt;KPMG's Pulse Survey&lt;/a&gt; of 130 C-suite leaders (all at companies with over $1B revenue) found that 65% cite agentic system complexity as the top barrier for two consecutive quarters. Agent deployment has more than doubled, from 11% in Q1 2025 to 26% in Q4 2025, but that still means three quarters of large enterprises haven't deployed. &lt;a href="https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/superagency-in-the-workplace-empowering-people-to-unlock-ais-full-potential-at-work" rel="noopener noreferrer"&gt;McKinsey&lt;/a&gt; puts it even more starkly: only 1% of leaders describe their companies as 'mature' in AI deployment.&lt;/p&gt;

&lt;p&gt;The problems that surface at scale are database problems. They've been database problems all along.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F02%2FGemini_Generated_Image_9flpfr9flpfr9flp-1024x559.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F02%2FGemini_Generated_Image_9flpfr9flpfr9flp-1024x559.png" alt="Illustration related to enterprise agent memory at scale" width="800" height="437"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security and isolation.&lt;/strong&gt; Memory must be scoped per user, per team, per organisation. Memory poisoning is a real attack vector: adversaries can inject malicious information into an agent's memory to corrupt future decision-making. You need row-level security, not just namespace-level isolation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multi-tenancy.&lt;/strong&gt; Agents serving multiple organisations need complete data isolation. Most vector-only databases offer namespace-level separation. That's not the same as the row-level security that regulated industries require. Oracle's native PDB/CDB architecture provides inherent multi-tenant isolation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Compliance is getting complex.&lt;/strong&gt; GDPR's right to be forgotten applies to explicit agent memory stores. But the EU AI Act (fully applicable from August 2026) requires 10-year audit trails for high-risk AI systems. Think about that tension: you need to delete personal data on request while maintaining a decade of audit history. That requires architectural sophistication that most startups are only beginning to address.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ACID transactions matter.&lt;/strong&gt; Agent memory operations often touch multiple data types simultaneously. Updating a vector embedding, modifying a graph relationship, and changing relational metadata must all succeed or all fail. Without atomicity, partial memory updates leave your agent in an inconsistent state.&lt;/p&gt;
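&lt;p&gt;The all-or-nothing pattern is easy to see in code. The sketch below is illustrative only: it uses sqlite3 as a stand-in so it runs anywhere, but the shape is identical with python-oracledb, where a single commit or rollback spans every table a memory update touches.&lt;/p&gt;

```python
import sqlite3

# Illustrative only: sqlite3 stands in for any DB-API connection; the
# pattern is the same with python-oracledb (autocommit off, one
# commit/rollback covering every table the memory update touches).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE memory_vectors (id INTEGER PRIMARY KEY, embedding TEXT)")
conn.execute("CREATE TABLE memory_metadata (id INTEGER PRIMARY KEY, user_id TEXT)")

def update_memory(conn, mem_id, embedding, user_id, fail=False):
    """Write the vector and its metadata atomically: both land or neither."""
    try:
        conn.execute("INSERT INTO memory_vectors VALUES (?, ?)", (mem_id, embedding))
        if fail:  # simulate a mid-update failure (e.g. a failed graph write)
            raise RuntimeError("second write failed")
        conn.execute("INSERT INTO memory_metadata VALUES (?, ?)", (mem_id, user_id))
        conn.commit()
    except Exception:
        conn.rollback()  # the partial vector insert is discarded too
        raise
```

&lt;p&gt;If the second write fails, the rollback also discards the first, so the agent never observes a vector with no metadata, or vice versa.&lt;/p&gt;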

&lt;p&gt;These aren't theoretical concerns. They're the reasons three quarters of enterprises are still stuck at the pilot stage.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Implementation: Building Agent Memory with LangChain and Oracle
&lt;/h2&gt;

&lt;p&gt;Let's get practical. We'll use LangChain as our orchestration framework and Oracle Database as the memory backend, using the langchain-oracledb package. Here's how quickly you can go from zero to a working memory system.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;langchain-oracledb oracledb langchain-core
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Connect and create a vector store
&lt;/h3&gt;

&lt;p&gt;This is all it takes to set up a production-ready vector store backed by Oracle:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;oracledb&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_oracledb.vectorstores&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;OracleVS&lt;/span&gt;

&lt;span class="c1"&gt;# Create a connection pool (production-ready)
&lt;/span&gt;&lt;span class="n"&gt;pool&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;oracledb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create_pool&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
 &lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;agent_user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;password&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;password&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="n"&gt;dsn&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;hostname:port/service&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="nb"&gt;min&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;max&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;increment&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Initialise vector store for semantic memory
&lt;/span&gt;&lt;span class="n"&gt;semantic_memory&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;OracleVS&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
 &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;pool&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;acquire&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
 &lt;span class="n"&gt;embedding_function&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;embeddings&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;# any LangChain-compatible embeddings
&lt;/span&gt; &lt;span class="n"&gt;table_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;AGENT_SEMANTIC_MEMORY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="n"&gt;distance_strategy&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;DistanceStrategy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;COSINE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's your semantic memory store. Oracle handles the vector indexing, ACID transactions, and security natively. No separate vector database needed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Store and retrieve a memory
&lt;/h3&gt;

&lt;p&gt;The core pattern is simple: write memories with metadata, retrieve them with similarity search.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Store a memory
&lt;/span&gt;&lt;span class="n"&gt;semantic_memory&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_texts&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
 &lt;span class="n"&gt;texts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;User prefers dark mode and concise responses.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
 &lt;span class="n"&gt;metadatas&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user_123&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;memory_type&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;preference&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}]&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Retrieve relevant memories
&lt;/span&gt;&lt;span class="n"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;semantic_memory&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;similarity_search&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
 &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;What are this user&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;s preferences?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="nb"&gt;filter&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user_123&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;doc&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;results&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
 &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;doc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;page_content&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From here, you can create separate vector stores for each memory type (semantic, episodic, procedural) under the same Oracle instance, all sharing the same security policies and transaction guarantees.&lt;/p&gt;
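&lt;p&gt;One way to sketch that layout (the helper and table-naming scheme below are illustrative, not part of langchain-oracledb):&lt;/p&gt;

```python
# Hypothetical helper: one vector store per memory type, all backed by the
# same Oracle connection pool. `make_store` is whatever constructs your
# store from a table name; the AGENT_*_MEMORY naming is our convention.
MEMORY_TYPES = ("semantic", "episodic", "procedural")

def build_memory_stores(make_store, memory_types=MEMORY_TYPES):
    # Separate tables keep each memory type isolated while sharing one
    # database, one security policy, and one transaction boundary.
    return {m: make_store(f"AGENT_{m.upper()}_MEMORY") for m in memory_types}
```

&lt;p&gt;With OracleVS, &lt;code&gt;make_store&lt;/code&gt; would be a small wrapper around the shared pool and embeddings, e.g. a lambda that constructs an &lt;code&gt;OracleVS&lt;/code&gt; instance per table name.&lt;/p&gt;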

&lt;h3&gt;
  
  
  Go deeper: the full memory engineering notebook
&lt;/h3&gt;

&lt;p&gt;The snippets above show the building blocks, but a production agent memory system needs considerably more. We've published a complete, runnable notebook in the &lt;a href="https://github.com/oracle-devrel/oracle-ai-developer-hub/blob/main/notebooks/memory_context_engineering_agents.ipynb" rel="noopener noreferrer"&gt;Oracle AI Developer Hub&lt;/a&gt; that implements the full architecture discussed in this post. This notebook builds a complete Memory Manager with &lt;strong&gt;six distinct memory types&lt;/strong&gt;, each backed by Oracle:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Memory Type&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;th&gt;Storage&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Conversational&lt;/td&gt;
&lt;td&gt;Chat history per thread&lt;/td&gt;
&lt;td&gt;SQL Table&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Knowledge Base&lt;/td&gt;
&lt;td&gt;Searchable documents and facts&lt;/td&gt;
&lt;td&gt;SQL Table + Vector Enabled&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Workflow&lt;/td&gt;
&lt;td&gt;Learned action patterns&lt;/td&gt;
&lt;td&gt;SQL Table + Vector Enabled&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Toolbox&lt;/td&gt;
&lt;td&gt;Dynamic tool definitions with semantic retrieval&lt;/td&gt;
&lt;td&gt;SQL Table + Vector Enabled&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Entity&lt;/td&gt;
&lt;td&gt;People, places, systems extracted from context&lt;/td&gt;
&lt;td&gt;SQL Table + Vector Enabled&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Summary&lt;/td&gt;
&lt;td&gt;Compressed context for long conversations&lt;/td&gt;
&lt;td&gt;SQL Table + Vector Enabled&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;It also covers &lt;strong&gt;context engineering&lt;/strong&gt; (monitoring context window usage, auto-summarisation at thresholds, just-in-time retrieval), &lt;strong&gt;semantic tool discovery&lt;/strong&gt; (scaling to hundreds of tools while only passing the relevant ones to the LLM), and a &lt;strong&gt;complete agent loop&lt;/strong&gt; that ties everything together.&lt;/p&gt;
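&lt;p&gt;To make the auto-summarisation idea concrete, here is a minimal sketch. The function names and the four-characters-per-token heuristic are illustrative, not taken from the notebook:&lt;/p&gt;

```python
def approx_tokens(text):
    # crude stand-in for a real tokenizer: roughly 4 characters per token
    return max(1, len(text) // 4)

def compress_context(messages, summarize, budget=3000, keep_last=4):
    """Collapse older messages into one summary once the token budget is hit.

    `messages` is a list of message strings here for simplicity; real
    systems carry role/content dicts. `summarize` is any callable that
    turns the older messages into a single summary string.
    """
    used = sum(approx_tokens(m) for m in messages)
    if used > budget and len(messages) > keep_last:
        head, tail = messages[:-keep_last], messages[-keep_last:]
        return [summarize(head)] + tail  # summary replaces the old history
    return messages  # under budget: pass context through unchanged
```

&lt;p&gt;The same trigger generalises to just-in-time retrieval: monitor usage, and when the window fills, swap verbatim history for compressed or retrieved context.&lt;/p&gt;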

&lt;p&gt;Run the notebook: &lt;a href="https://github.com/oracle-devrel/oracle-ai-developer-hub/blob/main/notebooks/memory_context_engineering_agents.ipynb" rel="noopener noreferrer"&gt;oracle-devrel/oracle-ai-developer-hub&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The Perspective: Where This Is Heading
&lt;/h2&gt;

&lt;p&gt;Here's what I think is coming, and where I'm still working things out.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sleep-time computation will change the game.&lt;/strong&gt; The idea is simple: agents that 'think' during idle time (reorganising, consolidating, refining their memories) perform better and cost less at query time. &lt;a href="https://openai.com/index/inside-our-in-house-data-agent/" rel="noopener noreferrer"&gt;OpenAI's internal data agent&lt;/a&gt; already runs this pattern in production. Their engineering team describes a daily offline pipeline that aggregates table usage, human annotations, and code-derived enrichment into a single normalised representation, then converts it into embeddings for retrieval. At query time, the agent pulls only the most relevant context rather than scanning raw metadata.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.letta.com/blog/sleep-time-compute" rel="noopener noreferrer"&gt;Letta's&lt;/a&gt; research puts numbers to it: agents using this approach achieve 18% accuracy gains and 2.5x cost reduction per query. We're going to see a clear separation between 'thinking agents' that run in the background and 'serving agents' that handle real-time interactions. That's a pattern databases have supported forever: batch processing alongside real-time queries.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Memory will extend naive RAG implementations.&lt;/strong&gt; The spectrum is already shifting: traditional RAG to agentic RAG to full memory systems. VentureBeat predicts that contextual memory will surpass RAG for agentic AI in 2026. I think that's right. RAG retrieves documents. Memory understands context. The agents that win will do both, but memory will be the differentiator.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The convergent database will become non-negotiable.&lt;/strong&gt; Agent memory needs vectors, graphs, relational data, and temporal context working together. Stitching together separate databases for each type creates brittle systems with security gaps and consistency problems. I'm still figuring out exactly how fast this consolidation will happen, but the direction is clear.&lt;/p&gt;

&lt;p&gt;One open question remains: how quickly enterprises will move from pilot to production deployment. The technology has reached a clear stage of maturity, and the architectural design patterns are proven and battle-tested. Organisational readiness, encompassing governance, infrastructure modernisation, and cross-functional alignment, is a fundamentally different challenge.&lt;/p&gt;

&lt;p&gt;What is clear: agent memory is, at its foundation, a database problem. And building databases for mission-critical workloads is what Oracle has been doing for nearly five decades.&lt;/p&gt;




&lt;h2&gt;
  
  
  Frequently Asked Questions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What are the main types of agent memory used in AI systems?
&lt;/h3&gt;

&lt;p&gt;The field has converged on four types, drawn from cognitive science: &lt;strong&gt;working memory&lt;/strong&gt; (current conversation context), &lt;strong&gt;procedural memory&lt;/strong&gt; (system prompts and decision logic), &lt;strong&gt;semantic memory&lt;/strong&gt; (accumulated facts and user preferences), and &lt;strong&gt;episodic memory&lt;/strong&gt; (past interaction logs and experiences). Every major framework builds on this taxonomy, first formalised in the CoALA framework from Princeton in 2023.&lt;/p&gt;

&lt;h3&gt;
  
  
  What options are available for adding memory to an AI agent?
&lt;/h3&gt;

&lt;p&gt;Two broad approaches exist. &lt;strong&gt;Programmatic memory&lt;/strong&gt; is where the developer defines what gets stored and retrieved. &lt;strong&gt;Agentic memory&lt;/strong&gt; is where the agent itself decides what to remember, update, and forget using tool calls. Frameworks like Letta (formerly MemGPT) and LangChain's LangMem SDK support both patterns. The field is moving towards agentic memory, where agents manage their own state without developer intervention for each new use case.&lt;/p&gt;

&lt;h3&gt;
  
  
  What are common agent memory storage options?
&lt;/h3&gt;

&lt;p&gt;Production systems typically combine three paradigms: &lt;strong&gt;vector stores&lt;/strong&gt; for meaning-based retrieval (storing embeddings and querying by cosine similarity), &lt;strong&gt;knowledge graphs&lt;/strong&gt; for relationship-aware retrieval (entities, edges, and bi-temporal modelling), and &lt;strong&gt;structured relational databases&lt;/strong&gt; for transactional data like user profiles, access controls, and audit logs. Most teams stitch these together with separate databases, though converged databases like Oracle can run all three natively in a single engine.&lt;/p&gt;

&lt;h3&gt;
  
  
  What techniques allow AI agents to forget or selectively erase memory?
&lt;/h3&gt;

&lt;p&gt;The most common approach uses &lt;strong&gt;decay functions&lt;/strong&gt; applied to vector relevance scores: a recency-weighted scoring function multiplies semantic similarity by an exponential decay factor based on time since last access. Memories that haven't been recalled recently lose salience gradually, mimicking biological memory decay. An alternative approach &lt;strong&gt;invalidates&lt;/strong&gt; old facts without discarding them, preserving historical accuracy for audit trails while removing them from active retrieval.&lt;/p&gt;
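&lt;p&gt;A minimal sketch of that scoring function, assuming an illustrative one-week half-life (the names and parameters are ours, not from any particular library):&lt;/p&gt;

```python
import math
import time

def decayed_score(similarity, last_access_ts, now=None, half_life_s=7 * 24 * 3600):
    """Recency-weighted relevance: similarity times an exponential decay.

    A memory untouched for one half-life scores at 50% of its raw
    similarity; a just-accessed memory keeps its full score.
    """
    now = time.time() if now is None else now
    age = max(0.0, now - last_access_ts)  # seconds since last access
    decay = math.exp(-math.log(2) * age / half_life_s)
    return similarity * decay
```

&lt;p&gt;Rank retrieval candidates by this score instead of raw similarity and stale memories fade from results gradually, without ever being hard-deleted.&lt;/p&gt;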

&lt;h3&gt;
  
  
  What are the differences between short-term and long-term agent memory?
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Short-term memory&lt;/strong&gt; (also called working memory) is the current context window: whatever the agent is actively reasoning about in this conversation. It's fast but volatile; close the session and it's gone. &lt;strong&gt;Long-term memory&lt;/strong&gt; encompasses everything that persists across sessions: semantic memory (facts and preferences), episodic memory (past interactions), and procedural memory (learned behaviours and decision logic). Long-term memory requires external storage and retrieval infrastructure.&lt;/p&gt;

&lt;h3&gt;
  
  
  What are commonly used libraries for agent memory?
&lt;/h3&gt;

&lt;p&gt;The ecosystem includes &lt;strong&gt;LangChain/LangMem&lt;/strong&gt; (hot path and background memory with extraction and consolidation), &lt;strong&gt;Letta/MemGPT&lt;/strong&gt; (OS-inspired memory hierarchy where agents self-edit memory via tool calls), &lt;strong&gt;Zep/Graphiti&lt;/strong&gt; (temporal knowledge graphs with sub-200ms retrieval), &lt;strong&gt;Mem0&lt;/strong&gt; (self-improving memory with automatic conflict resolution), and &lt;strong&gt;langchain-oracledb&lt;/strong&gt; (Oracle Database integration for vector stores, hybrid search, and embeddings with enterprise-grade security).&lt;/p&gt;

&lt;h3&gt;
  
  
  How do I store and query vector embeddings?
&lt;/h3&gt;

&lt;p&gt;The core pattern is straightforward: convert text into embeddings (typically 128 to 2,048 dimensions), store them in a vector-capable database, and retrieve them using cosine similarity search. With langchain-oracledb and Oracle Database, you initialise a vector store, add texts with metadata (such as user ID and memory type), then query with similarity_search() filtered by metadata. Oracle handles vector indexing, ACID transactions, and security natively.&lt;/p&gt;
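&lt;p&gt;The retrieval maths itself is compact. Here is a minimal, dependency-free illustration of cosine similarity; production systems use a vector index rather than computing this per row:&lt;/p&gt;

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)  # 1.0 = same direction, 0.0 = orthogonal
```
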

&lt;h3&gt;
  
  
  Which databases offer vector search capabilities for enterprises?
&lt;/h3&gt;

&lt;p&gt;Several databases now support vector search, but enterprise requirements go beyond basic similarity queries. You need ACID transactions, row-level security, multi-tenancy, and compliance features alongside your vector operations. Oracle Database provides native &lt;strong&gt;AI Vector Search&lt;/strong&gt; within its converged architecture, meaning vector queries run in the same engine as relational tables, property graphs (SQL/PGQ), and JSON document stores, all sharing a single transaction boundary and security model.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>database</category>
      <category>oracle</category>
    </item>
    <item>
      <title>What Is the AI Agent Loop? The Core Architecture Behind Autonomous AI Systems</title>
      <dc:creator>Wojtek Pluta</dc:creator>
      <pubDate>Fri, 17 Apr 2026 09:05:55 +0000</pubDate>
      <link>https://forem.com/oracledevs/what-is-the-ai-agent-loop-the-core-architecture-behind-autonomous-ai-systems-51b7</link>
      <guid>https://forem.com/oracledevs/what-is-the-ai-agent-loop-the-core-architecture-behind-autonomous-ai-systems-51b7</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;Key Takeaways&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;The architectural difference between a chatbot and an AI agent is one pattern: the agent loop. It’s an LLM invoking tools inside an iterative cycle, repeating until the task is complete or a stopping condition is reached.&lt;/li&gt;
&lt;li&gt;A chatbot responds in a single pass. An agent persists, adapts, and acts across multiple steps: perceiving its environment, reasoning over available options, executing an action, and observing the result before deciding what comes next.&lt;/li&gt;
&lt;li&gt;Every major AI company (OpenAI, Anthropic, Google, Microsoft, Meta) has converged on this same core pattern, despite building very different products around it.&lt;/li&gt;
&lt;li&gt;Building agent loops for production requires engineering for two constraints: cost, where agents consume approximately 4x more tokens than standard chat interactions and up to 15x in multi-agent systems, and observability, the ability to trace every reasoning step, tool call, and decision across an iterative execution cycle.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;What Is the AI Agent Loop and Why Should You Care?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F03%2FAgent-Loop-%25E2%2580%2594-Linear-Flow-with-Loop-back-1024x431.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F03%2FAgent-Loop-%25E2%2580%2594-Linear-Flow-with-Loop-back-1024x431.png" title="The five-stage agent loop: Perceive, Reason, Plan, Act, Observe" alt="The five-stage agent loop: Perceive, Reason, Plan, Act, Observe" width="800" height="337"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The five-stage agent loop: Perceive, Reason, Plan, Act, Observe&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You have built a chatbot. It works. Users ask a question, it generates a response, and the interaction is complete. Then someone asks it to do something that requires more than one step.&lt;/p&gt;

&lt;p&gt;‘Find me the three cheapest flights to Tokyo next month, check if my loyalty points cover any of them, and book the best option’. The chatbot has no mechanism to proceed. It generates a response and stops. It can answer questions about flights. It can explain how loyalty points work. It cannot execute the workflow. The interaction is stateless. Each prompt is processed in isolation, with no persistent context, no access to intermediate results, and no ability to chain decisions across steps.&lt;/p&gt;

&lt;p&gt;This is not a limitation of the model. ChatGPT, Claude, and Gemini are all capable of reasoning through multi-step problems. The limitation is architectural. A chatbot is built to respond. An agent is built to act.&lt;/p&gt;

&lt;p&gt;The difference is one while loop.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is the Agent Loop?
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;The AI agent loop is the iterative execution cycle at the core of every agentic AI system. At each iteration, the agent assembles context from available inputs, invokes an LLM to reason and select an action, executes that action, observes the outcome, and feeds the observation back into the next iteration. This process repeats until the task is complete or a defined stopping condition is reached.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Across the engineering teams Oracle works with building AI applications, one architectural pattern consistently separates working prototypes from production-grade systems: the agent loop. It’s the architecture that transforms a language model from a text generation system into one that can take actions, adapt to results, and complete multi-step tasks autonomously.&lt;/p&gt;

&lt;p&gt;This article examines the agent loop architecture: what it is, how it works, why every major AI company has converged on the same core pattern, and what is required to build one that holds up in production.&lt;/p&gt;

&lt;p&gt;All code in this article is available as a runnable companion notebook in the &lt;a href="https://github.com/oracle-devrel/oracle-ai-developer-hub/blob/main/notebooks/agent_loop_foundations.ipynb" rel="noopener noreferrer"&gt;Oracle AI Developer Hub on GitHub&lt;/a&gt;. Follow along step by step or execute the full implementation end to end.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Why Single-Pass Responses Hit a Wall&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F03%2FChatbot-vs-Agent-%25E2%2580%2594-Horizontal-Stacked-Blog-Ready-1024x408.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F03%2FChatbot-vs-Agent-%25E2%2580%2594-Horizontal-Stacked-Blog-Ready-1024x408.png" title="Single-pass chatbot vs. iterative agent loop: one response versus continuous execution until task completion" alt="Single-pass chatbot vs. iterative agent loop: one response versus continuous execution until task completion" width="800" height="319"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Single-pass chatbot vs. iterative agent loop: one response versus continuous execution until task completion&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The standard chatbot interaction follows a simple pattern: user sends message, model generates response, done. One input, one output, no state between turns. It works brilliantly for question-answering, summarisation, and creative writing. It falls apart the moment you need the model to &lt;em&gt;do&lt;/em&gt; something in the real world.&lt;/p&gt;

&lt;p&gt;A single-pass response has three fundamental constraints:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;It cannot iterate on results.&lt;/strong&gt; A single-pass system can execute a tool call within a turn, but it has no mechanism to evaluate whether that action succeeded, adapt based on the outcome, or chain a subsequent decision from the result. There is no feedback loop.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;It cannot recover from failure.&lt;/strong&gt; Without iterative execution, a failed tool call, an empty result set, or an ambiguous API response cannot trigger a revised strategy. The model has no visibility into downstream outcomes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;It cannot decompose dependent tasks.&lt;/strong&gt; Real-world workflows require gathering information, making decisions based on that information, executing actions, and handling the consequences of those actions. Each step depends on the result of the previous one. That is a loop, not a straight line.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="http://lib.ysu.am/disciplines_bk/efdd4d1d4c2087fe1cbe03d9ced67f34.pdf" rel="noopener noreferrer"&gt;Russell and Norvig&lt;/a&gt; defined an agent back in 1995 as 'anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.' That definition is 30 years old and it still holds. The key word is &lt;em&gt;acting&lt;/em&gt;. Not responding. Acting.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://arxiv.org/pdf/2210.03629" rel="noopener noreferrer"&gt;ReAct framework&lt;/a&gt; from Princeton and Google Research (Yao et al., 2022) made this practical for LLMs by interleaving reasoning with action in a single prompt-driven loop. The results demonstrated that models perform significantly better when they can reason, act, observe, and reason again: a 34% improvement on &lt;a href="https://arxiv.org/abs/2010.03768" rel="noopener noreferrer"&gt;ALFWorld&lt;/a&gt; and 10% on &lt;a href="https://arxiv.org/abs/2207.01206" rel="noopener noreferrer"&gt;WebShop&lt;/a&gt;. Single-pass responses are not just architecturally limiting. They leave measurable performance on the table.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;The Agent Loop: A Mental Model&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F03%2FAgent-Loop-%25E2%2580%2594-Linear-Flow-with-Loop-back-1-1024x431.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F03%2FAgent-Loop-%25E2%2580%2594-Linear-Flow-with-Loop-back-1-1024x431.png" title="The five-stage agent loop: Perceive, Reason, Plan, Act, Observe" alt="The five-stage agent loop: Perceive, Reason, Plan, Act, Observe" width="800" height="337"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The five-stage agent loop: Perceive, Reason, Plan, Act, Observe&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The agent loop operates across five stages that repeat until the task is complete or a stopping condition is met:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Perceive:&lt;/strong&gt; The agent receives input. This could be a user message, an API response, an error, or the result of its last action.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reason:&lt;/strong&gt; The LLM processes everything in context and decides what to do next.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Plan:&lt;/strong&gt; For complex tasks, the agent decomposes the objective into discrete subtasks before execution. Simpler workflows proceed directly to the Act stage without a dedicated planning step.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Act:&lt;/strong&gt; The agent executes something: a tool call, an API request, a database query, a code execution.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Observe:&lt;/strong&gt; The agent examines the result. Did it work? Is the task complete? Does the plan need adjusting?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Then, it loops back to step 1.&lt;/p&gt;

&lt;p&gt;In pseudocode, the complete pattern reduces to a few lines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;while not done:
   response = call_llm(messages)
   if response has tool_calls:
      results = execute_tools(response.tool_calls)
      messages.append(results)
   else:
      done = True
      return response
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
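&lt;p&gt;To make the pseudocode concrete, here is a runnable toy version. The &lt;code&gt;call_llm&lt;/code&gt; stub is scripted so the example is self-contained; in a real system it would invoke a model, and &lt;code&gt;execute_tools&lt;/code&gt; would dispatch actual tool calls.&lt;/p&gt;

```python
def run_agent(call_llm, execute_tools, messages, max_iters=10):
    """The loop from the pseudocode: reason, act, observe, repeat."""
    for _ in range(max_iters):  # stopping condition guards runaway loops
        response = call_llm(messages)
        tool_calls = response.get("tool_calls")
        if tool_calls:
            # Act, then feed the observation back into the next iteration
            messages.append(execute_tools(tool_calls))
        else:
            return response["content"]  # no tool call: the task is done
    raise RuntimeError("max iterations reached")

# Scripted stub LLM: first turn requests a tool, second turn answers.
script = iter([
    {"tool_calls": [{"name": "search_flights", "args": {"dest": "Tokyo"}}]},
    {"tool_calls": None, "content": "Cheapest flight found: $612."},
])
answer = run_agent(
    call_llm=lambda msgs: next(script),
    execute_tools=lambda calls: {"role": "tool", "content": "3 flights found"},
    messages=[{"role": "user", "content": "Find cheap flights to Tokyo"}],
)
```

&lt;p&gt;The &lt;code&gt;max_iters&lt;/code&gt; cap is the 'defined stopping condition' from the definition above: production loops also bound cost, wall-clock time, or both.&lt;/p&gt;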



&lt;p&gt;This execution pattern underpins every autonomous AI system currently in production. It is the foundation on which every major AI organisation has built its agentic architecture. &lt;a href="https://www.anthropic.com/engineering/building-effective-agents" rel="noopener noreferrer"&gt;Anthropic's engineering&lt;/a&gt; guidance describes the pattern plainly: agents are often just LLMs using tools based on environmental feedback in a loop.&lt;/p&gt;

&lt;h3&gt;
  
  
  When the Agent Loop Is Not the Right Architecture
&lt;/h3&gt;

&lt;p&gt;The agent loop is not the appropriate architecture for every use case. Before building an agentic system, validate that the workflow requires iterative execution.&lt;/p&gt;

&lt;p&gt;Agent loops are well-suited to tasks where the number of required steps cannot be predicted in advance, where the agent must adapt based on intermediate results, and where the cost of latency is acceptable relative to the value of task completion.&lt;/p&gt;

&lt;p&gt;Workflows that follow a fixed, predictable sequence of steps are better served by deterministic pipelines. Single-step tasks that require one LLM call and one tool invocation do not benefit from the overhead of an agent loop. Tasks where latency is the primary constraint should be evaluated carefully, as each loop iteration adds LLM call latency.&lt;/p&gt;

&lt;p&gt;The principle from both &lt;a href="https://cdn.openai.com/business-guides-and-resources/a-practical-guide-to-building-agents.pdf" rel="noopener noreferrer"&gt;OpenAI&lt;/a&gt; and &lt;a href="https://www.anthropic.com/engineering/building-effective-agents" rel="noopener noreferrer"&gt;Anthropic's&lt;/a&gt; published guidance is consistent: start with the simplest architecture that solves the problem. Introduce the agent loop only when iterative reasoning and adaptive tool use are required.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;How Every Major AI Company Converged on the Same Architecture&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F03%2FCompany-Convergence-%25E2%2580%2594-Headed-Cards-v2-1024x426.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F03%2FCompany-Convergence-%25E2%2580%2594-Headed-Cards-v2-1024x426.png" title="Six major AI organisations, one underlying architecture: LLM plus tools in a loop" alt="Six major AI organisations, one underlying architecture: LLM plus tools in a loop" width="800" height="333"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Six major AI organisations, one underlying architecture: LLM plus tools in a loop&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Despite differences in SDK design, nomenclature, and architectural philosophy, every major AI organisation has converged on the same underlying execution pattern. The table below summarises how each implements the core loop:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Company&lt;/th&gt;
&lt;th&gt;What they call it&lt;/th&gt;
&lt;th&gt;Core pattern&lt;/th&gt;
&lt;th&gt;Key contribution&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;OpenAI&lt;/td&gt;
&lt;td&gt;Agent Loop&lt;/td&gt;
&lt;td&gt;Tool-calling loop via Codex SDK&lt;/td&gt;
&lt;td&gt;Code-first approach; anti-declarative-graph philosophy&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Anthropic&lt;/td&gt;
&lt;td&gt;Agent loop&lt;/td&gt;
&lt;td&gt;Augmented LLM + tools in loop&lt;/td&gt;
&lt;td&gt;Simplicity-first design; workflows vs. agents distinction&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Google&lt;/td&gt;
&lt;td&gt;Orchestration layer&lt;/td&gt;
&lt;td&gt;ReAct (Thought-Action-Observation)&lt;/td&gt;
&lt;td&gt;Invented Chain-of-Thought and co-created ReAct&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Microsoft&lt;/td&gt;
&lt;td&gt;Think-Act-Learn&lt;/td&gt;
&lt;td&gt;Conversation-driven loop&lt;/td&gt;
&lt;td&gt;Dual-loop ledger planning (Magentic-One)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Meta&lt;/td&gt;
&lt;td&gt;Agent loop&lt;/td&gt;
&lt;td&gt;ReAct via Llama Stack&lt;/td&gt;
&lt;td&gt;Open-source building blocks; security-first ('Rule of Two')&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;LangChain&lt;/td&gt;
&lt;td&gt;Agent executor / StateGraph&lt;/td&gt;
&lt;td&gt;Tool-calling state machine&lt;/td&gt;
&lt;td&gt;Graph-based orchestration; middleware hooks for control&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The implementations differ in naming conventions, SDK design, and architectural philosophy. The execution pattern is identical.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://lilianweng.github.io/posts/2023-06-23-agent/" rel="noopener noreferrer"&gt;Lilian Weng's formula&lt;/a&gt; captures it simply: &lt;strong&gt;&lt;em&gt;Agent = LLM + Memory + Planning + Tool Use&lt;/em&gt;&lt;/strong&gt;. The agent loop is the runtime that ties those four components together.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;How the Agent Loop Actually Works&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F03%2FHow-the-Agent-Loop-Works-%25E2%2580%2594-Iteration-Sequence-1024x849.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F03%2FHow-the-Agent-Loop-Works-%25E2%2580%2594-Iteration-Sequence-1024x849.png" title="Three iterations, three tool calls, one complete response." alt="Three iterations, three tool calls, one complete response." width="800" height="663"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Three iterations, three tool calls, one complete response.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The canonical pattern is &lt;a href="https://arxiv.org/abs/2210.03629" rel="noopener noreferrer"&gt;ReAct&lt;/a&gt;: reasoning interleaved with acting. The model does not simply select a tool. It reasons about why that tool is appropriate, executes the call, processes the result, and reasons again.&lt;/p&gt;

&lt;p&gt;To illustrate how the loop executes in practice, consider the following task: identify the most cited paper on agent memory published in 2026 and summarise its key findings.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Iteration 1 (Reason → Act → Observe):&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The agent reasons that it needs to search for papers on agent memory from 2026 and selects the search tool. It calls the search API with relevant keywords. The result returns 15 papers with citation counts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Iteration 2:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The agent identifies the top result with 340 citations and calls a document retrieval tool to access the full abstract and key sections.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Iteration 3:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The agent determines that sufficient information has been gathered, generates the summary, and exits the loop.&lt;/p&gt;

&lt;p&gt;Three iterations. Three tool calls. One complete answer that no single-pass chatbot could have produced.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Tool integration: the universal pattern&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Across every provider, tool integration follows the same structure. Tools are defined with a name, description, and JSON Schema parameters. The model decides whether to call a tool and with what arguments. The system executes the function and returns results as a tool message. The model processes results and decides whether to continue looping or return a final response.&lt;/p&gt;
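&lt;p&gt;As a sketch, a tool definition in this shape might look like the following. The field names follow the widely used OpenAI-style function format; exact layout varies by provider:&lt;/p&gt;

```python
import json

# A tool definition: name, description, and JSON Schema parameters.
search_tool = {
    "name": "search_papers",
    "description": "Search academic papers by keyword and year.",
    "parameters": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Search keywords"},
            "year": {"type": "integer", "description": "Publication year filter"},
        },
        "required": ["query"],
    },
}

# The model returns the tool name plus arguments; the system validates,
# executes the function, and feeds the result back as a tool message.
model_output = {"name": "search_papers",
                "args": {"query": "agent memory", "year": 2026}}
assert model_output["name"] == search_tool["name"]
print(json.dumps(model_output["args"]))
```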

&lt;p&gt;Tools in an agent loop can be classified into three categories. Data tools retrieve context, such as database queries, vector search, or document retrieval. Action tools perform operations with side effects, such as writing records, calling external APIs, or executing code. Orchestration tools invoke other agents as callable sub-modules, enabling multi-agent coordination within a single workflow. Clear classification of tools at design time reduces ambiguous model behaviour at runtime.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.anthropic.com/news/model-context-protocol" rel="noopener noreferrer"&gt;Anthropic's Model Context Protocol&lt;/a&gt; (MCP) has emerged as a leading open standard for how agents discover and connect to external tools, with adoption across OpenAI, Google, Microsoft, and the broader ecosystem.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Beyond the basic loop&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The core &lt;a href="https://arxiv.org/abs/2210.03629" rel="noopener noreferrer"&gt;ReAct&lt;/a&gt; loop handles most use cases, but the pattern extends in two important directions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Plan-and-execute separates planning from execution.&lt;/strong&gt; Instead of invoking the LLM at every step, a planner generates a full task breakdown upfront, an executor works through each subtask, and a re-planner adjusts when execution diverges from the plan. &lt;a href="https://arxiv.org/abs/2312.04511" rel="noopener noreferrer"&gt;LangChain's LLMCompiler&lt;/a&gt; implementation streams a directed acyclic graph of tasks with explicit dependency tracking, enabling parallel execution. The original paper (Kim et al., ICML 2024) reports a 3.6x speedup over sequential ReAct-style execution. At production scale, where each LLM call carries a direct cost, the architectural decision to plan upfront rather than reason at every step has measurable financial implications.&lt;/p&gt;
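&lt;p&gt;A minimal sketch of the plan-and-execute split, with stub functions standing in for the planner, executor, and re-planner LLM calls:&lt;/p&gt;

```python
def plan(task):
    # Stub planner: a single upfront LLM call would normally produce this breakdown.
    return ["search for papers", "retrieve top result", "summarise findings"]

def execute(step):
    # Stub executor: runs one subtask (a tool call or a focused LLM call).
    return f"done: {step}"

def replan(task, completed, remaining):
    # Stub re-planner: adjusts the plan when execution diverges; here it never does.
    return remaining

def plan_and_execute(task):
    remaining = plan(task)          # one planning call, not one LLM call per step
    completed = []
    while remaining:
        step = remaining.pop(0)
        completed.append(execute(step))
        remaining = replan(task, completed, remaining)
    return completed

print(plan_and_execute("summarise the most cited 2026 agent-memory paper"))
```

&lt;p&gt;The cost saving comes from the first line of &lt;code&gt;plan_and_execute&lt;/code&gt;: the expensive reasoning call happens once upfront instead of on every iteration.&lt;/p&gt;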

&lt;p&gt;&lt;strong&gt;Multi-agent orchestration&lt;/strong&gt; distributes work across specialised agents. &lt;a href="https://www.anthropic.com/engineering/multi-agent-research-system" rel="noopener noreferrer"&gt;Anthropic's Claude Research&lt;/a&gt; system uses an orchestrator-worker pattern where a lead agent spawns sub-agents to explore different threads in parallel. Their multi-agent system outperformed a single-agent setup by 90.2% on internal research evaluations. &lt;a href="https://arxiv.org/abs/2411.04468" rel="noopener noreferrer"&gt;Microsoft's Magentic-One&lt;/a&gt; takes it further with a dual-loop system: an outer loop for strategic planning and an inner loop for step-by-step execution, with the ability to reset the entire strategy when progress stalls.&lt;/p&gt;
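&lt;p&gt;The orchestrator-worker shape can be sketched with parallel workers; in a real system each worker would be a full sub-agent loop rather than a plain function:&lt;/p&gt;

```python
from concurrent.futures import ThreadPoolExecutor

def worker_agent(thread_topic):
    # Stand-in for a sub-agent loop exploring one research thread.
    return f"findings on {thread_topic}"

def lead_agent(task):
    # The lead agent decomposes the task and spawns workers in parallel.
    topics = ["prior work", "benchmarks", "open problems"]
    with ThreadPoolExecutor(max_workers=3) as pool:
        findings = list(pool.map(worker_agent, topics))
    # A real lead agent would then synthesise the findings into one answer.
    return " | ".join(findings)

print(lead_agent("survey agent memory research"))
```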

&lt;p&gt;These are powerful extensions, but the advice from every company is the same: start with the simplest loop that works. Only add complexity when you can measure the improvement.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;The Enterprise Reality: Cost and Observability&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F03%2FEnterprise-Reality-%25E2%2580%2594-Cost-and-Observability-1024x416.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F03%2FEnterprise-Reality-%25E2%2580%2594-Cost-and-Observability-1024x416.png" title="Token cost scaling from standard chat (1x) to single agent (4x) to multi-agent (15x), with corresponding production requirements" alt="Token cost scaling from standard chat (1x) to single agent (4x) to multi-agent (15x), with corresponding production requirements" width="800" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Token cost scaling from standard chat (1x) to single agent (4x) to multi-agent (15x), with corresponding production requirements&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Agent loops that perform well in controlled environments frequently expose new failure modes at production scale. The two constraints that dominate enterprise deployments are cost and observability.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Cost scales with iteration&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Every loop iteration is an LLM call. &lt;a href="https://www.anthropic.com/engineering/multi-agent-research-system" rel="noopener noreferrer"&gt;Anthropic's internal data&lt;/a&gt; shows that agents consume roughly 4x more tokens than standard chat. Multi-agent systems push that to approximately 15x. At thousands of agent sessions per day, token costs compound with every loop iteration. Without cost controls embedded at the architecture level, this becomes a significant operational constraint.&lt;/p&gt;

&lt;p&gt;The mitigation strategies are architectural. Plan-and-execute patterns reduce the number of LLM calls by planning upfront rather than reasoning at every step. Caching commonly retrieved tool results avoids redundant work. Setting token and cost budgets per agent run prevents runaway spending. These controls must be designed into the system from the start, not added retroactively.&lt;/p&gt;
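&lt;p&gt;A per-run token budget and a tool-result cache can be sketched in a few lines. The token counts here are illustrative; a production system would meter real usage from the provider's responses:&lt;/p&gt;

```python
from functools import lru_cache

class BudgetExceeded(Exception):
    pass

class RunBudget:
    """Hard per-run guardrail: refuse the call before the budget is breached."""
    def __init__(self, max_tokens):
        self.max_tokens = max_tokens
        self.used = 0

    def charge(self, tokens):
        if self.used + tokens > self.max_tokens:
            raise BudgetExceeded(f"run would exceed {self.max_tokens} tokens")
        self.used += tokens

@lru_cache(maxsize=256)
def cached_tool_result(query):
    # Cache commonly retrieved tool results to avoid redundant work.
    return f"result for {query}"

budget = RunBudget(max_tokens=1000)
budget.charge(400)       # iteration 1
budget.charge(400)       # iteration 2
try:
    budget.charge(400)   # iteration 3 would exceed the budget
except BudgetExceeded as e:
    print(e)             # run would exceed 1000 tokens
```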

&lt;h3&gt;
  
  
  &lt;strong&gt;Observability: knowing what your agent did and why&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;A standard chat interaction produces a single response from a single LLM call. An agent running 15 iterations, calling 8 different tools, and branching across multiple reasoning paths produces a complex execution trace. When a failure occurs, diagnosing it requires structured visibility into every stage of that trace: what the model reasoned, which tool it invoked, what arguments it passed, what the result was, and how the model interpreted that result before the next iteration.&lt;/p&gt;

&lt;p&gt;Production agent systems therefore need structured logging and tracing built into every stage of the loop. &lt;a href="https://www.microsoft.com/en-us/research/blog/autogen-v0-4-reimagining-the-foundation-of-agentic-ai-for-scale-extensibility-and-robustness/" rel="noopener noreferrer"&gt;Microsoft's AutoGen 0.4 builds on OpenTelemetry&lt;/a&gt; for this. LangChain's middleware hooks (before_model, after_model, modify_model_request) let you intercept and inspect every iteration.&lt;/p&gt;

&lt;p&gt;Stopping conditions are the other critical piece. Without them, agents can loop indefinitely, burning tokens and producing increasingly incoherent results. Every production system needs maximum iteration limits, no-progress detection (exiting when repeated iterations produce no new information), and token/cost budgets as hard guardrails.&lt;/p&gt;

&lt;p&gt;The following scenario illustrates the consequence of deploying an agent loop without hard stopping conditions:&lt;/p&gt;

&lt;p&gt;An agent is deployed to scrape a website and summarise the data. The target website updates its structure, causing the scraping tool to return an empty result. The agent lacks a hard stopping condition, and its prompt instructs it to retry until data is retrieved. It enters a runaway loop, calling the broken tool 400 times in five minutes and consuming thousands of tokens before hitting a platform rate limit. A maximum iteration limit of three cycles would have prevented the failure entirely.&lt;/p&gt;
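&lt;p&gt;Both guardrails from the scenario, a hard iteration limit and no-progress detection, can be sketched as follows. The empty-result tool below is a stand-in for the broken scraper:&lt;/p&gt;

```python
def observe_no_progress(history, result, window=3):
    # No-progress detection: recent iterations all returned identical results.
    history.append(result)
    return len(history) >= window and len(set(history[-window:])) == 1

def guarded_loop(tool, max_iterations=3):
    history = []
    for iteration in range(1, max_iterations + 1):   # hard iteration limit
        result = tool()
        if observe_no_progress(history, result):
            return f"aborted after {iteration} iterations: no progress"
    return f"stopped at iteration limit ({max_iterations})"

# A scraping tool that returns empty results after the site changed structure.
print(guarded_loop(lambda: ""))  # aborted after 3 iterations: no progress
```

&lt;p&gt;Three cheap checks replace a runaway loop of 400 failed tool calls.&lt;/p&gt;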




&lt;h2&gt;
  
  
  &lt;strong&gt;Building an Agent Loop with LangChain and Oracle&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Before selecting a framework or writing code, address the following implementation requirements. These apply regardless of which orchestration library is used:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Identify tools and schema:&lt;/strong&gt; What actions can the agent take, and what exact parameters do those tools need?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Choose state representation:&lt;/strong&gt; How will you store the conversation history and intermediate tool results?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Define stopping criteria:&lt;/strong&gt; What are the hard limits (iterations, tokens, budget) that will force the loop to terminate?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Establish logging and telemetry:&lt;/strong&gt; How will you track each reasoning step, tool call, and result?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Select a memory layer:&lt;/strong&gt; Where will you store persistent knowledge (like vector embeddings or user preferences) across sessions?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here is one concrete way to implement that checklist using LangChain and Oracle.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;oracledb&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain.agents&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;create_agent&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_core.tools&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;tool&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_core.messages&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;AIMessage&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ToolMessage&lt;/span&gt;

&lt;span class="c1"&gt;# Connect to Oracle AI Database
&lt;/span&gt;&lt;span class="n"&gt;pool&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;oracledb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create_pool&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;agent_user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;password&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;password&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;dsn&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;hostname:port/service&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nb"&gt;min&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;max&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;increment&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Define tools the agent can use
&lt;/span&gt;&lt;span class="nd"&gt;@tool&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;calculate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;expression&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Evaluate a mathematical expression and return the numeric result.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="bp"&gt;...&lt;/span&gt;

&lt;span class="nd"&gt;@tool&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;convert_units&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;float&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;from_unit&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;to_unit&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Convert a numeric value from one unit to another.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="bp"&gt;...&lt;/span&gt;

&lt;span class="nd"&gt;@tool&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;timezone_convert&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;time_str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;from_city&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;to_city&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Convert a local time from one city&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;s timezone to another.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="bp"&gt;...&lt;/span&gt;

&lt;span class="c1"&gt;# Create the agent -- returns a compiled StateGraph that runs the loop
&lt;/span&gt;&lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;create_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;calculate&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;convert_units&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;timezone_convert&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;system_prompt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;You are a precise reasoning assistant. Use tools for all calculations.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;QUESTION&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;A flight from London to New York JFK covers 5,570 km. &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;The aircraft cruises at 900 km/h. &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;The flight departs London at 14:00 local time. &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;How long is the flight in hours and minutes, &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;and what local time does it arrive in New York?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Stream the loop live -- each chunk shows one stage of the agent's reasoning
&lt;/span&gt;&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;chunk&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stream&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;messages&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;human&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;QUESTION&lt;/span&gt;&lt;span class="p"&gt;)]},&lt;/span&gt;
    &lt;span class="n"&gt;stream_mode&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;values&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;last_msg&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;chunk&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;messages&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nf"&gt;isinstance&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;last_msg&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;AIMessage&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="ow"&gt;and&lt;/span&gt; &lt;span class="n"&gt;last_msg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;tool_calls&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;call&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;last_msg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;tool_calls&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;[ACT] &lt;/span&gt;&lt;span class="se"&gt;\u2192&lt;/span&gt;&lt;span class="s"&gt; &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;call&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;name&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;(&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;call&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;args&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;)&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;elif&lt;/span&gt; &lt;span class="nf"&gt;isinstance&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;last_msg&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ToolMessage&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;[OBSERVE] &lt;/span&gt;&lt;span class="se"&gt;\u2190&lt;/span&gt;&lt;span class="s"&gt; &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;last_msg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;elif&lt;/span&gt; &lt;span class="nf"&gt;isinstance&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;last_msg&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;AIMessage&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="ow"&gt;and&lt;/span&gt; &lt;span class="n"&gt;last_msg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;Answer: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;last_msg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The implementation above is a working agent loop. The compiled agent graph manages the while loop internally, invoking the LLM, evaluating tool calls, executing them, appending results to the message state, and repeating until the model returns a final response without further tool calls or the recursion limit is reached.&lt;/p&gt;

&lt;p&gt;Oracle AI Database provides the storage backend for the tools the agent calls: vector search for semantic retrieval, relational tables for structured data, and ACID transactions that ensure every tool call either fully succeeds or fully rolls back. No partial state. No corrupted memory.&lt;/p&gt;
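&lt;p&gt;The transactional pattern for action tools can be illustrated with the standard-library &lt;code&gt;sqlite3&lt;/code&gt; module standing in for the database; the commit-on-success, rollback-on-failure shape is the same with python-oracledb:&lt;/p&gt;

```python
import sqlite3

def write_record_tool(conn, order_id, amount):
    """Action tool: either both writes commit together, or neither persists."""
    try:
        cur = conn.cursor()
        cur.execute("INSERT INTO orders VALUES (?, ?)", (order_id, amount))
        cur.execute("INSERT INTO audit_log VALUES (?, 'order created')", (order_id,))
        conn.commit()                 # both rows persist atomically
        return "ok"
    except sqlite3.Error:
        conn.rollback()               # no partial state survives a failure
        return "rolled back"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
conn.execute("CREATE TABLE audit_log (order_id INTEGER, note TEXT)")

print(write_record_tool(conn, 1, 99.5))   # ok
print(write_record_tool(conn, 1, 10.0))   # rolled back (duplicate primary key)
```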

&lt;p&gt;We've published a complete, runnable notebook that implements a full agent loop architecture with LangChain and Oracle AI Database in the &lt;a href="https://github.com/oracle-devrel/oracle-ai-developer-hub" rel="noopener noreferrer"&gt;Oracle AI Developer Hub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/oracle-devrel/oracle-ai-developer-hub/blob/main/notebooks/agent_loop_foundations.ipynb" rel="noopener noreferrer"&gt;&lt;strong&gt;Run the notebook →&lt;/strong&gt;&lt;/a&gt; &lt;a href="https://github.com/oracle-devrel/oracle-ai-developer-hub/blob/main/notebooks/agent_loop_foundations.ipynb" rel="noopener noreferrer"&gt;oracle-devrel/oracle-ai-developer-hub&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Where This Is Heading&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Three structural shifts are emerging in how production agent systems are designed and operated.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The core loop architecture is stable.&lt;/strong&gt; The while loop itself is not changing. The active area of development is the infrastructure built around it: how context is managed within the loop, how multiple loops are coordinated, and how the loop's decisions are made auditable and controllable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agent middleware is emerging as the standard abstraction layer for production systems.&lt;/strong&gt; LangChain's recent work on middleware hooks (intercepting the loop at before_model, after_model, and modify_model_request) suggests a future where developers don't modify the loop itself but layer behaviour on top of it: summarisation, PII redaction, human-in-the-loop approval, dynamic model switching. It's the same pattern that made web frameworks powerful: don't change the request-response cycle, add middleware to it.&lt;/p&gt;
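&lt;p&gt;The middleware idea can be sketched generically as a wrapper around the model call. This is an illustration of the pattern, not LangChain's actual hook API:&lt;/p&gt;

```python
def with_middleware(call_model, before_model=None, after_model=None):
    # Layer behaviour around the model call without touching the loop itself.
    def wrapped(messages):
        if before_model:
            messages = before_model(messages)   # e.g. summarisation, PII redaction
        response = call_model(messages)
        if after_model:
            response = after_model(response)    # e.g. logging, guardrail checks
        return response
    return wrapped

# Example middleware: redact a phone number before it reaches the model.
redact = lambda msgs: [m.replace("555-0199", "[REDACTED]") for m in msgs]
model = with_middleware(lambda msgs: f"echo: {msgs[-1]}", before_model=redact)
print(model(["call me at 555-0199"]))  # echo: call me at [REDACTED]
```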

&lt;p&gt;&lt;strong&gt;Cost-per-task will replace cost-per-token as the primary efficiency metric.&lt;/strong&gt; Token usage is an input measure. The metric that reflects actual business value is the total cost required to complete a task end to end, including LLM calls, tool executions, and any human escalations triggered by agent failures.&lt;/p&gt;

&lt;p&gt;An agent that consumes 15x more tokens but resolves a customer issue without human escalation is cheaper than a chatbot that consumes fewer tokens but requires human intervention to complete the task.&lt;/p&gt;
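&lt;p&gt;The comparison can be made concrete with illustrative numbers (all figures here are hypothetical):&lt;/p&gt;

```python
def cost_per_task(tokens, price_per_1k_tokens, escalation_rate, human_cost):
    # Total cost to complete a task: LLM tokens plus expected human escalation cost.
    return tokens / 1000 * price_per_1k_tokens + escalation_rate * human_cost

# Hypothetical figures: the chatbot uses 2k tokens but escalates 40% of issues;
# the agent uses 15x the tokens but escalates only 5%.
chatbot = cost_per_task(2_000, 0.01, escalation_rate=0.40, human_cost=5.00)
agent = cost_per_task(30_000, 0.01, escalation_rate=0.05, human_cost=5.00)
print(f"chatbot: ${chatbot:.2f} per task")  # chatbot: $2.02 per task
print(f"agent:   ${agent:.2f} per task")    # agent:   $0.55 per task
```

&lt;p&gt;Measured per token, the agent looks 15x more expensive; measured per completed task, it is the cheaper option.&lt;/p&gt;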

&lt;p&gt;The primary open question in production agent deployment is the pace at which observability tooling will mature. Debugging a 20-iteration agent run currently requires piecing together structured logs, tool call traces, and LLM reasoning outputs across multiple systems. The industry needs better tooling for tracing, replaying, and interpreting agent decisions. The building blocks exist in OpenTelemetry, structured logging, and middleware hooks. The developer experience remains the unsolved problem.&lt;/p&gt;

&lt;p&gt;The agent loop is the foundational pattern for any AI system that needs to do more than generate a response. It is the architectural starting point for production-grade autonomous AI.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Frequently Asked Questions&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;What is an AI agent loop?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The AI agent loop is an iterative architecture where a large language model repeatedly reasons about a task, takes an action (typically a tool call), observes the result, and decides what to do next. The cycle continues until the task is complete or a stopping condition is met. In its simplest form, it's an LLM calling tools inside a while loop. This pattern, formalised in the &lt;a href="https://arxiv.org/abs/2210.03629" rel="noopener noreferrer"&gt;ReAct&lt;/a&gt; framework (2022), is the core architecture behind every major autonomous AI system shipping today.&lt;/p&gt;
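&lt;p&gt;Stripped of framework detail, the loop can be sketched in a few lines of Python; the model and tool here are stand-in stubs, not a real LLM:&lt;/p&gt;

```python
# Minimal agent loop sketch. The "model" is a canned stub that first asks
# for a tool call, then finishes; real systems swap in an LLM and a tool registry.
def stub_model(history):
    if not any(step[0] == "observation" for step in history):
        return ("tool", "search", "agent loops")
    return ("final", "An agent is an LLM calling tools in a loop.")

def search_tool(query):
    return f"results for {query!r}"

def agent_loop(task, max_iterations=10):
    history = [("task", task)]
    for _ in range(max_iterations):          # stopping condition: iteration cap
        decision = stub_model(history)
        if decision[0] == "final":           # stopping condition: model is done
            return decision[1]
        _, tool_name, tool_arg = decision    # reason, then act
        history.append(("observation", search_tool(tool_arg)))  # act, then observe
    return "stopped: iteration limit reached"

print(agent_loop("explain agent loops"))
```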

&lt;h3&gt;
  
  
  &lt;strong&gt;What is the architectural difference between an AI agent and a chatbot?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;A chatbot generates a single response to a single input. It answers questions but cannot execute multi-step actions or adapt based on intermediate results. An AI agent uses the agent loop to iteratively reason, act, and observe, handling complex tasks that require multiple steps, tool interactions, and course corrections. The architectural difference is simple: a chatbot is one LLM call; an agent is an LLM calling tools in a loop until the job is done.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;How does the ReAct framework work?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://arxiv.org/abs/2210.03629" rel="noopener noreferrer"&gt;ReAct&lt;/a&gt; (Reasoning + Acting) interleaves reasoning traces with tool actions in a prompt-driven loop. At each step, the model generates a 'thought' explaining its reasoning, takes an 'action' by calling a tool, and receives an 'observation' with the result. This cycle repeats until the task is complete. The key innovation is that reasoning and acting reinforce each other: the model reasons about what to do (reason to act) and uses action results to inform further reasoning (act to reason).&lt;/p&gt;
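&lt;p&gt;A toy rendering of that cycle, with scripted thoughts standing in for the model:&lt;/p&gt;

```python
# Scripted ReAct-style trace: thought, then action, then observation, until
# the action is "finish". The "thoughts" are canned, not model-generated.
script = [
    ("The question needs a lookup.", ("search", "ReAct paper year")),
    ("The search answered it; I can finish.", ("finish", "ReAct was published in 2022.")),
]

def run_tool(name, arg):
    return "arXiv 2210.03629, published 2022" if name == "search" else arg

answer = None
for thought, (action, arg) in script:
    observation = run_tool(action, arg)  # each observation informs the next thought
    print(f"Thought: {thought}")
    print(f"Action: {action} / Observation: {observation}")
    if action == "finish":
        answer = arg

print(answer)
# prints: ReAct was published in 2022.
```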

&lt;h3&gt;
  
  
  &lt;strong&gt;What are common patterns for multi-agent orchestration?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Three patterns dominate. The &lt;strong&gt;manager pattern&lt;/strong&gt; uses a central agent that delegates subtasks to specialised sub-agents via tool calls (used by &lt;a href="https://openai.github.io/openai-agents-python/" rel="noopener noreferrer"&gt;OpenAI's Agents SDK&lt;/a&gt;). The &lt;strong&gt;orchestrator-worker pattern&lt;/strong&gt; has a lead agent spawning workers for parallel exploration (used by Anthropic's Claude Research). The &lt;strong&gt;handoff pattern&lt;/strong&gt; treats agents as peers that transfer control to one another based on specialisation. Most production systems start with a single agent loop and only move to multi-agent orchestration when task complexity genuinely demands it.&lt;/p&gt;
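&lt;p&gt;A minimal sketch of the manager pattern, with plain callables standing in for LLM-backed sub-agents and their tool-call interface:&lt;/p&gt;

```python
# Manager-pattern sketch: a central agent delegates subtasks to specialised
# sub-agents. Plain callables stand in for LLM-backed agents and tool calls.
def research_agent(subtask):
    return f"findings on {subtask}"

def writer_agent(subtask):
    return f"draft covering {subtask}"

SUB_AGENTS = {"research": research_agent, "write": writer_agent}

def manager(task):
    # A real manager would produce this plan with an LLM call.
    plan = [("research", task), ("write", task)]
    results = [SUB_AGENTS[name](subtask) for name, subtask in plan]
    return " | ".join(results)

print(manager("agent orchestration"))
```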

&lt;h3&gt;
  
  
  &lt;strong&gt;How do you prevent an AI agent from running forever?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Production agent loops use multiple stopping conditions layered together. &lt;strong&gt;Maximum iteration limits&lt;/strong&gt; cap the number of loop cycles (for example, max_iterations=10). &lt;strong&gt;Token and cost budgets&lt;/strong&gt; set hard spending limits per agent run. &lt;strong&gt;No-progress detection&lt;/strong&gt; exits the loop when repeated iterations produce no new information. &lt;strong&gt;Goal-achievement checks&lt;/strong&gt; evaluate whether the task objective has been met. Microsoft's Magentic-One adds a dual-loop approach where the outer loop can reset the entire strategy when the inner loop stalls, preventing the agent from spinning on a failed approach.&lt;/p&gt;
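&lt;p&gt;Layered together, those conditions reduce to a single guard checked once per iteration; the thresholds below are illustrative:&lt;/p&gt;

```python
# Layered stopping conditions in one guard: iteration cap, cost budget,
# and no-progress detection. All thresholds are illustrative.
def should_stop(iteration, spent_usd, recent_observations,
                max_iterations=10, budget_usd=1.00):
    if iteration >= max_iterations:
        return "iteration limit"
    if spent_usd >= budget_usd:
        return "budget exhausted"
    # No progress: the last three observations are identical.
    if len(recent_observations) >= 3 and len(set(recent_observations[-3:])) == 1:
        return "no progress"
    return None  # keep looping

print(should_stop(4, 0.20, ["a", "b", "c"]))           # prints None
print(should_stop(4, 0.20, ["same", "same", "same"]))  # prints no progress
print(should_stop(12, 0.20, []))                       # prints iteration limit
```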

</description>
      <category>ai</category>
      <category>agents</category>
      <category>llm</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Building ONNX Embedding Workflows in Oracle AI Database with Python</title>
      <dc:creator>Wojtek Pluta</dc:creator>
      <pubDate>Fri, 17 Apr 2026 08:44:35 +0000</pubDate>
      <link>https://forem.com/oracledevs/a-practical-guide-to-importing-an-onnx-embedding-model-generating-embeddings-and-running-semantic-4e1m</link>
      <guid>https://forem.com/oracledevs/a-practical-guide-to-importing-an-onnx-embedding-model-generating-embeddings-and-running-semantic-4e1m</guid>
      <description>&lt;h2&gt;
  
  
  A practical guide to importing an ONNX embedding model, generating embeddings, and running semantic search in Oracle AI Database
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Companion notebook:&lt;/strong&gt; &lt;a href="https://github.com/oracle-devrel/oracle-ai-developer-hub/blob/main/notebooks/onnx_embeddings_oracle_ai_database.ipynb" rel="noopener noreferrer"&gt;ONNX In-Database Embeddings with Oracle AI Database 26ai&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Oracle AI Database can load and register an augmented ONNX embedding model with &lt;a href="https://docs.oracle.com/en/database/oracle/oracle-database/26/arpls/dbms_vector1.html" rel="noopener noreferrer"&gt;DBMS_VECTOR.LOAD_ONNX_MODEL()&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;VECTOR_EMBEDDING()&lt;/code&gt; lets SQL generate embeddings directly inside Oracle AI Database.&lt;/li&gt;
&lt;li&gt;Embeddings can be stored natively in &lt;code&gt;VECTOR&lt;/code&gt; columns.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;VECTOR_DISTANCE()&lt;/code&gt; enables semantic search directly in SQL.&lt;/li&gt;
&lt;li&gt;LangChain can build on the same Oracle-native workflow without moving embeddings or retrieval outside the database (&lt;a href="https://docs.langchain.com/oss/python/integrations/vectorstores/oracle" rel="noopener noreferrer"&gt;LangChain Oracle vector store integration&lt;/a&gt;).&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Why This Matters
&lt;/h2&gt;

&lt;p&gt;In many embedding pipelines, source data resides in a relational database, the model runs somewhere else as an external service, and the vectors are stored in a separate vector database. While this architecture can work well, it introduces additional data movement, infrastructure, and operational complexity.&lt;/p&gt;

&lt;p&gt;Oracle AI Database supports a more consolidated approach. You can load an &lt;a href="https://onnx.ai/" rel="noopener noreferrer"&gt;ONNX&lt;/a&gt; embedding model directly into the database, invoke it, store the generated embeddings in native &lt;code&gt;VECTOR&lt;/code&gt; columns, and perform semantic search in the same database.&lt;/p&gt;

&lt;p&gt;This article walks through that end-to-end workflow using an ONNX model: loading it into Oracle AI Database, validating that it is registered correctly, generating embeddings with SQL, storing them in a native vector column, and querying them using semantic similarity. It also demonstrates how the same architecture can be used with LangChain, without changing where embedding and retrieval occur.&lt;/p&gt;




&lt;h2&gt;
  
  
  What You'll Learn
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;How to load an augmented ONNX model with Oracle AI Database.&lt;/li&gt;
&lt;li&gt;How to generate embeddings directly in SQL with &lt;code&gt;VECTOR_EMBEDDING()&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;How to run semantic search with &lt;code&gt;VECTOR_DISTANCE()&lt;/code&gt; in Oracle AI Database and through LangChain.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Architecture Overview
&lt;/h2&gt;

&lt;p&gt;This workflow keeps model execution, vector storage, and semantic retrieval inside Oracle AI Database. An augmented ONNX model is exposed through an Oracle directory object, loaded with &lt;code&gt;DBMS_VECTOR.LOAD_ONNX_MODEL()&lt;/code&gt;, invoked with &lt;code&gt;VECTOR_EMBEDDING()&lt;/code&gt;, and queried with &lt;code&gt;VECTOR_DISTANCE()&lt;/code&gt;. The model artifact can come either from a local or container-mounted path or directly from Oracle Cloud Object Storage using &lt;a href="https://docs.oracle.com/en/database/oracle/oracle-database/26/vecse/load_onnx_model_cloud.html#GUID-82A8D291-8096-4A7C-8882-9B6AC4A7FCCB" rel="noopener noreferrer"&gt;&lt;code&gt;DBMS_VECTOR.LOAD_ONNX_MODEL_CLOUD()&lt;/code&gt;&lt;/a&gt;. LangChain can build on the same Oracle-native execution path through &lt;code&gt;OracleEmbeddings&lt;/code&gt; and &lt;code&gt;OracleVS&lt;/code&gt;.&lt;/p&gt;
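&lt;p&gt;For intuition, the cosine metric behind &lt;code&gt;VECTOR_DISTANCE(expr1, expr2, COSINE)&lt;/code&gt; is one minus the cosine similarity of the two vectors. The same arithmetic in plain Python, on toy three-dimensional vectors rather than real embeddings:&lt;/p&gt;

```python
# Cosine distance, the arithmetic behind the COSINE metric: one minus the
# cosine of the angle between two vectors. Toy 3-dimensional "embeddings".
import math

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norms

doc = [0.2, 0.7, 0.1]
print(cosine_distance(doc, [0.2, 0.7, 0.1]))  # identical direction: distance near 0
print(cosine_distance(doc, [0.9, 0.0, 0.4]))  # different direction: larger distance
```

&lt;p&gt;In the database this computation runs inside SQL over native &lt;code&gt;VECTOR&lt;/code&gt; columns; the snippet only shows the math.&lt;/p&gt;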




&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Python 3.10+&lt;/li&gt;
&lt;li&gt;Oracle AI Database 26ai running in a container&lt;/li&gt;
&lt;li&gt;Dependencies such as &lt;code&gt;oracledb&lt;/code&gt;, &lt;code&gt;python-dotenv&lt;/code&gt;, &lt;code&gt;pandas&lt;/code&gt;, &lt;code&gt;numpy&lt;/code&gt;, &lt;code&gt;langchain&lt;/code&gt;, &lt;code&gt;langchain-community&lt;/code&gt;, and &lt;code&gt;langchain-oracledb&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;For cloud loading: an Oracle Cloud Object Storage bucket and model URI, or a PAR URL&lt;/li&gt;
&lt;li&gt;If not using a PAR URL, an Object Storage credential created with &lt;a href="https://docs.oracle.com/en/database/oracle/oracle-database/26/arpls/dbms_cloud.html" rel="noopener noreferrer"&gt;&lt;code&gt;DBMS_CLOUD.CREATE_CREDENTIAL&lt;/code&gt;&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the notebook, those packages are installed up front:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;subprocess&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;sys&lt;/span&gt;

&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;subprocess&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;sys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;executable&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-m&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;pip&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;install&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-q&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
     &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;oracledb&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;python-dotenv&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;pandas&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;numpy&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
     &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;langchain&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;langchain-core&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;langchain-community&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;langchain-oracledb&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;capture_output&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Packages installed.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;returncode&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Install failed: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;stderr&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The example also assumes Oracle AI Database 26ai is running in a container, with a mounted directory for ONNX model files. That mounted directory becomes important later, because Oracle accesses the model through a database directory object rather than through ad hoc file access.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step-by-Step Guide
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1: Understand why Oracle requires an augmented ONNX model
&lt;/h3&gt;

&lt;p&gt;One of the most important details in this workflow is that Oracle needs an &lt;strong&gt;augmented ONNX model&lt;/strong&gt;, not just a standard transformer export.&lt;/p&gt;

&lt;p&gt;For &lt;code&gt;VECTOR_EMBEDDING()&lt;/code&gt; to accept raw text directly, tokenization and related preprocessing need to be included inside the ONNX graph itself. That is what allows Oracle to take a normal text string and produce an embedding without relying on external preprocessing in Python.&lt;/p&gt;

&lt;p&gt;In the notebook, the model used is an augmented version of &lt;code&gt;all-MiniLM-L12-v2&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;MODEL_NAME&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;all_MiniLM_L12_v2&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;ONNX_FILE&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;all_MiniLM_L12_v2.onnx&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Without that augmented packaging, the flow would no longer be fully Oracle-native, because preprocessing would have to happen outside the database first.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Prepare an ONNX model for Oracle AI Database
&lt;/h3&gt;

&lt;p&gt;Before the model can be used in SQL, Oracle needs controlled access to the ONNX file through a database directory object. This is a database-managed reference to a filesystem location, which means access to the model artifact is handled through Oracle privileges rather than through direct filesystem assumptions.&lt;/p&gt;

&lt;p&gt;The notebook includes a one-time admin setup that creates the user, grants privileges, and registers the ONNX model directory. At runtime, the important pieces are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a database user with the required privileges&lt;/li&gt;
&lt;li&gt;permission to load mining models&lt;/li&gt;
&lt;li&gt;a registered Oracle directory such as &lt;code&gt;ONNX_DIR&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;access to the ONNX file from inside the container&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A simplified version of the directory setup looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;OR&lt;/span&gt; &lt;span class="k"&gt;REPLACE&lt;/span&gt; &lt;span class="n"&gt;DIRECTORY&lt;/span&gt; &lt;span class="n"&gt;ONNX_DIR&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="s1"&gt;'/opt/oracle/onnx_models'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;GRANT&lt;/span&gt; &lt;span class="k"&gt;READ&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;WRITE&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;DIRECTORY&lt;/span&gt; &lt;span class="n"&gt;ONNX_DIR&lt;/span&gt; &lt;span class="k"&gt;TO&lt;/span&gt; &lt;span class="n"&gt;my_user&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This matters because the model import is not treated as an ad hoc file operation. The file is exposed to Oracle through a controlled database object, which is much more aligned with enterprise governance expectations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Figure 1.&lt;/strong&gt; An augmented ONNX model is exposed through an Oracle directory object, loaded with &lt;code&gt;DBMS_VECTOR.LOAD_ONNX_MODEL()&lt;/code&gt;, registered in Oracle, and invoked from SQL.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F04%2Foracle_onnx_flow_reworked-2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F04%2Foracle_onnx_flow_reworked-2.png" alt="Diagram showing the workflow for loading and using an ONNX model in Oracle Database. An ONNX model file is stored in an Oracle directory object (ONNX_DIR), then loaded using the DBMS_VECTOR.LOAD_ONNX_MODEL() procedure. The model is registered inside the database and can then be invoked directly from SQL." width="800" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2b: Cloud option - load ONNX from Oracle Object Storage
&lt;/h3&gt;

&lt;p&gt;Oracle also supports loading ONNX models from Oracle Cloud Object Storage with &lt;a href="https://docs.oracle.com/en/database/oracle/oracle-database/26/vecse/load_onnx_model_cloud.html#GUID-82A8D291-8096-4A7C-8882-9B6AC4A7FCCB" rel="noopener noreferrer"&gt;&lt;code&gt;DBMS_VECTOR.LOAD_ONNX_MODEL_CLOUD()&lt;/code&gt;&lt;/a&gt;. This is a documented alternative to the local directory workflow used in the companion notebook.&lt;/p&gt;

&lt;p&gt;Per Oracle documentation, use a credential for standard Object Storage URIs, and pass &lt;code&gt;credential =&amp;gt; NULL&lt;/code&gt; for pre-authenticated request (PAR) URLs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- Option A: regular Object Storage URI (credential required)&lt;/span&gt;
&lt;span class="k"&gt;EXECUTE&lt;/span&gt; &lt;span class="n"&gt;DBMS_VECTOR&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;LOAD_ONNX_MODEL_CLOUD&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="n"&gt;model_name&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s1"&gt;'ALL_MINILM_L12_V2'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;credential&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s1"&gt;'OBJ_STORE_CRED'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;uri&lt;/span&gt;        &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s1"&gt;'https://objectstorage.&amp;lt;region&amp;gt;.oraclecloud.com/n/&amp;lt;namespace&amp;gt;/b/&amp;lt;bucket&amp;gt;/o/all_MiniLM_L12_v2.onnx'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;metadata&lt;/span&gt;   &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'{
    "function":"embedding",
    "embeddingOutput":"embedding",
    "input":{"input":["DATA"]}
  }'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;-- Option B: PAR URL (credential must be NULL)&lt;/span&gt;
&lt;span class="k"&gt;EXECUTE&lt;/span&gt; &lt;span class="n"&gt;DBMS_VECTOR&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;LOAD_ONNX_MODEL_CLOUD&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="n"&gt;model_name&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s1"&gt;'ALL_MINILM_L12_V2'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;credential&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;uri&lt;/span&gt;        &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s1"&gt;'https://objectstorage.&amp;lt;region&amp;gt;.oraclecloud.com/p/&amp;lt;par-token&amp;gt;/n/&amp;lt;namespace&amp;gt;/b/&amp;lt;bucket&amp;gt;/o/all_MiniLM_L12_v2.onnx'&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; According to Oracle documentation, &lt;code&gt;metadata&lt;/code&gt; is optional for models prepared with Oracle's Python utility defaults, model names must follow Oracle naming rules, and the ONNX file size limit for cloud loading is 2 GB.&lt;/p&gt;
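&lt;p&gt;Given that 2 GB limit, a quick local pre-flight check before uploading the artifact can save a failed load; the path in the commented call reuses the notebook's mount point as an example:&lt;/p&gt;

```python
# Pre-flight check against the documented 2 GB cloud-loading limit.
import os

LIMIT_BYTES = 2 * 1024**3  # 2 GB, per the documentation note above

def check_onnx_size(path):
    size = os.path.getsize(path)
    ok = LIMIT_BYTES >= size
    print(f"{path}: {size} bytes, {'OK' if ok else 'exceeds 2 GB limit'}")
    return ok

# Example (the notebook's mount path):
# check_onnx_size("/opt/oracle/onnx_models/all_MiniLM_L12_v2.onnx")
```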

&lt;h3&gt;
  
  
  Step 2c: Multi-cloud note (AWS/GCP/Google Drive)
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;DBMS_VECTOR.LOAD_ONNX_MODEL_CLOUD()&lt;/code&gt; is documented for Oracle Cloud Object Storage. If your model artifact is hosted in AWS S3, Google Cloud Storage, or Google Drive, use a portable two-step pattern: download the ONNX file to a database-accessible local path, then load it with &lt;code&gt;DBMS_VECTOR.LOAD_ONNX_MODEL()&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This keeps embedding generation and semantic retrieval Oracle-native while allowing model artifact hosting outside OCI.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;requests&lt;/span&gt;

&lt;span class="n"&gt;model_url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;MODEL_SIGNED_URL&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;  &lt;span class="c1"&gt;# S3 pre-signed URL / GCS signed URL / Drive direct URL
&lt;/span&gt;&lt;span class="n"&gt;target_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/opt/oracle/onnx_models/all_MiniLM_L12_v2.onnx&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="n"&gt;resp&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;requests&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model_url&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;stream&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;180&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;resp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;raise_for_status&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;target_path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;wb&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;chunk&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;resp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;iter_content&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;chunk_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1024&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;1024&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;chunk&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;chunk&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Model downloaded to &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;target_path&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;BEGIN&lt;/span&gt;
  &lt;span class="n"&gt;DBMS_VECTOR&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;LOAD_ONNX_MODEL&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;directory&lt;/span&gt;  &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s1"&gt;'ONNX_DIR'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;file_name&lt;/span&gt;  &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s1"&gt;'all_MiniLM_L12_v2.onnx'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;model_name&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s1"&gt;'ALL_MINILM_L12_V2'&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;END&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="o"&gt;/&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3: Connect to Oracle AI Database from Python
&lt;/h3&gt;

&lt;p&gt;The notebook connects to Oracle AI Database using &lt;code&gt;python-oracledb&lt;/code&gt; in Thin mode, so no Oracle Client libraries are required:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;oracledb&lt;/span&gt;

&lt;span class="n"&gt;conn&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;oracledb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;(...)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Connected to Oracle AI Database&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That same connection is then reused across the SQL examples and the LangChain integration later in the notebook.&lt;/p&gt;

&lt;p&gt;To keep the notebook readable, it defines a small helper function for executing SQL and optionally returning results as a pandas DataFrame:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;pandas&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;run_sql&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sql&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;params&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;fetch&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;many&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Execute SQL against Oracle Database.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;cursor&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;cur&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;many&lt;/span&gt; &lt;span class="ow"&gt;and&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;cur&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;executemany&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sql&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;elif&lt;/span&gt; &lt;span class="n"&gt;params&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;cur&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sql&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;params&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;cur&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sql&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;cols&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;c&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;cur&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;DataFrame&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cur&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fetchall&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;columns&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;cols&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;commit&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The example also assumes Oracle AI Database 26ai is running in a container, with a mounted directory for ONNX model files. That mounted directory becomes important later, because Oracle accesses the model through a database directory object rather than through ad hoc file access.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Load an ONNX embedding model into Oracle AI Database
&lt;/h3&gt;

&lt;p&gt;The notebook does not assume the ONNX model is already present. If the file is missing, it downloads the official pre-built augmented model and places it in the model directory used by Oracle.&lt;/p&gt;
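&lt;p&gt;A minimal sketch of that download-if-missing step might look like the following. The URL and directory paths here are illustrative assumptions, not the notebook's exact values:&lt;/p&gt;

```python
# Hypothetical sketch: fetch the pre-built augmented ONNX model only when it
# is not already in the mounted model directory. URL and paths are placeholders.
import os
import urllib.request

MODEL_DIR = "/opt/oracle/onnx_models"   # directory mounted into the container
MODEL_FILE = "all_MiniLM_L12_v2.onnx"
MODEL_URL = "https://example.com/models/all_MiniLM_L12_v2.onnx"  # placeholder URL

def ensure_model(model_dir=MODEL_DIR, model_file=MODEL_FILE, url=MODEL_URL):
    """Download the ONNX model into model_dir unless it is already present.

    Returns (path, downloaded) where downloaded is True only if a fetch ran.
    """
    path = os.path.join(model_dir, model_file)
    if os.path.exists(path):
        return path, False           # already present, nothing to do
    os.makedirs(model_dir, exist_ok=True)
    urllib.request.urlretrieve(url, path)
    return path, True
```

&lt;p&gt;Keeping the download conditional is what lets the notebook rerun cleanly on a machine where the model file already exists.&lt;/p&gt;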

&lt;p&gt;Once the model file is available, either through an Oracle directory object or a cloud URI, it can be imported with &lt;code&gt;DBMS_VECTOR.LOAD_ONNX_MODEL()&lt;/code&gt; or &lt;code&gt;DBMS_VECTOR.LOAD_ONNX_MODEL_CLOUD()&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;A simplified version of the local directory call looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;BEGIN&lt;/span&gt;
  &lt;span class="n"&gt;DBMS_VECTOR&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;LOAD_ONNX_MODEL&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;directory&lt;/span&gt;  &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s1"&gt;'ONNX_DIR'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;file_name&lt;/span&gt;  &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s1"&gt;'all_MiniLM_L12_v2.onnx'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;model_name&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s1"&gt;'ALL_MINILM_L12_V2'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;metadata&lt;/span&gt;   &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'{
      "function":"embedding",
      "embeddingOutput":"embedding",
      "input":{"input":["DATA"]}
    }'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;END&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="o"&gt;/&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the point where the model becomes more than a file. Oracle registers it, stores the associated metadata, and exposes it as a named object that SQL can invoke directly.&lt;/p&gt;

&lt;p&gt;The metadata is especially important. It defines how Oracle maps the SQL input text into the model graph and identifies which output node should be used as the embedding vector. In the notebook, the workflow also checks whether the model already exists before reloading it. This makes reruns safer and ensures the workflow remains idempotent.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;model_check&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;run_sql&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;SELECT COUNT(*) AS cnt FROM USER_MINING_MODELS WHERE MODEL_NAME = UPPER(:model_name)&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;params&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;model_name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;MODEL_NAME&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="n"&gt;fetch&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Expected output:&lt;/strong&gt; the model check confirms whether the ONNX model is already registered, so reruns stay idempotent.&lt;/p&gt;
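&lt;p&gt;The existence check and the load call can be combined into one small helper so reruns stay safe. This is a sketch against any DB-API cursor such as python-oracledb, not the notebook's exact code; the &lt;code&gt;metadata&lt;/code&gt; argument is omitted here for brevity (see the PL/SQL block earlier in this step):&lt;/p&gt;

```python
# Idempotent-load pattern: only issue LOAD_ONNX_MODEL when the model is
# absent from USER_MINING_MODELS. `cur` is any DB-API cursor (e.g. python-oracledb).
LOAD_MODEL_PLSQL = """
BEGIN
  DBMS_VECTOR.LOAD_ONNX_MODEL(
    directory  => 'ONNX_DIR',
    file_name  => 'all_MiniLM_L12_v2.onnx',
    model_name => :model_name
  );
END;"""

def load_model_if_missing(cur, model_name):
    """Return True if the model was loaded, False if it already existed."""
    cur.execute(
        "SELECT COUNT(*) FROM USER_MINING_MODELS WHERE MODEL_NAME = UPPER(:m)",
        {"m": model_name},
    )
    (count,) = cur.fetchone()
    if count > 0:
        return False                 # already registered, skip the reload
    cur.execute(LOAD_MODEL_PLSQL, {"model_name": model_name})
    return True
```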

&lt;h3&gt;
  
  
  Step 5: Verify that Oracle registered the model correctly
&lt;/h3&gt;

&lt;p&gt;After the import, the next step is to validate that Oracle recognizes the model.&lt;/p&gt;

&lt;p&gt;The notebook queries the model catalog to verify that the ONNX model has been loaded successfully:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;model_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;mining_function&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;algorithm&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;user_mining_models&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;model_name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'ALL_MINILM_L12_V2'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is a small but important part of the workflow. It confirms that the model is visible to Oracle as a registered object and is ready to be used by the vector functions that come next.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Expected output:&lt;/strong&gt; the query returns the registered ONNX model from &lt;code&gt;USER_MINING_MODELS&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 6: Generate embeddings in SQL with VECTOR_EMBEDDING()
&lt;/h3&gt;

&lt;p&gt;Once the model is registered, Oracle can use it directly through &lt;code&gt;VECTOR_EMBEDDING()&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The notebook first tests this with a simple text input to confirm that the model works and that the returned vector has the expected size.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;VECTOR_EMBEDDING&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
         &lt;span class="n"&gt;ALL_MINILM_L12_V2&lt;/span&gt;
         &lt;span class="k"&gt;USING&lt;/span&gt; &lt;span class="s1"&gt;'Oracle Database supports vector search.'&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="k"&gt;DATA&lt;/span&gt;
       &lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;embedding&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;dual&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is one of the most important steps in the workflow. Embedding generation is no longer a separate service call; it becomes a SQL operation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the application does not need to call an external embedding API&lt;/li&gt;
&lt;li&gt;the database can generate embeddings internally&lt;/li&gt;
&lt;li&gt;the semantic representation stays close to the data it describes&lt;/li&gt;
&lt;/ul&gt;
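&lt;p&gt;From application code, the same capability is a single SQL round trip. A minimal sketch, assuming a python-oracledb cursor and the model registered in the previous step:&lt;/p&gt;

```python
# Minimal sketch: embed text from application code with one SQL call,
# instead of calling an external embedding API.
def embed_text(cur, model_name, text):
    """Embed `text` with the in-database ONNX model and return a Python list.

    model_name must be a trusted identifier (it is interpolated into the SQL,
    since model names cannot be bound as variables).
    """
    cur.execute(
        f"SELECT VECTOR_EMBEDDING({model_name} USING :txt AS DATA) FROM dual",
        {"txt": text},
    )
    (vec,) = cur.fetchone()
    return list(vec)   # normalize the driver's vector value to a plain list
```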

&lt;p&gt;&lt;strong&gt;Expected output:&lt;/strong&gt; Oracle returns a 384-dimensional embedding for the supplied text.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 7: Store embeddings in a native VECTOR column
&lt;/h3&gt;

&lt;p&gt;After validating embedding generation, the notebook creates a table where the source text and its embedding live together.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;onnx_docs&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="n"&gt;id&lt;/span&gt;        &lt;span class="n"&gt;NUMBER&lt;/span&gt; &lt;span class="k"&gt;GENERATED&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="k"&gt;DEFAULT&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="k"&gt;IDENTITY&lt;/span&gt; &lt;span class="k"&gt;PRIMARY&lt;/span&gt; &lt;span class="k"&gt;KEY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;category&lt;/span&gt;  &lt;span class="n"&gt;VARCHAR2&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="n"&gt;doc_text&lt;/span&gt;  &lt;span class="k"&gt;CLOB&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;embedding&lt;/span&gt; &lt;span class="n"&gt;VECTOR&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;384&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;FLOAT32&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is an important design choice. The vector is not stored as an opaque blob or external payload. It is stored in Oracle's native &lt;code&gt;VECTOR&lt;/code&gt; type, which means it becomes part of the same database model as the relational data.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;vectors stay linked to the exact rows they describe&lt;/li&gt;
&lt;li&gt;access control applies consistently&lt;/li&gt;
&lt;li&gt;backups and retention policies stay unified&lt;/li&gt;
&lt;li&gt;the application does not need to coordinate data across multiple storage systems&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The notebook inserts demo content and generates the embedding directly in the same SQL statement:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt; &lt;span class="n"&gt;onnx_docs&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;category&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;doc_text&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;embedding&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;VALUES&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="s1"&gt;'database'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="s1"&gt;'Oracle AI Database supports in-database vector search and semantic retrieval.'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;VECTOR_EMBEDDING&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;ALL_MINILM_L12_V2&lt;/span&gt;
    &lt;span class="k"&gt;USING&lt;/span&gt; &lt;span class="s1"&gt;'Oracle AI Database supports in-database vector search and semantic retrieval.'&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="k"&gt;DATA&lt;/span&gt;
  &lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The semantic representation is created at the same time as the row is written, inside the same transactional boundary.&lt;/p&gt;
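&lt;p&gt;The same insert-with-embedding pattern extends naturally to batches. A hedged sketch, assuming a DB-API cursor; the repeated &lt;code&gt;:txt&lt;/code&gt; bind refers to the same value, so each row's text is embedded as it is written:&lt;/p&gt;

```python
# Batch variant of the notebook's INSERT: each row's embedding is generated
# by VECTOR_EMBEDDING() inside the same statement, within one transaction.
INSERT_SQL = """
INSERT INTO onnx_docs (category, doc_text, embedding)
VALUES (:cat, :txt,
        VECTOR_EMBEDDING(ALL_MINILM_L12_V2 USING :txt AS DATA))"""

def insert_docs(cur, rows):
    """rows: iterable of (category, text) pairs to insert and embed together."""
    cur.executemany(INSERT_SQL, [{"cat": c, "txt": t} for c, t in rows])
```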

&lt;p&gt;&lt;strong&gt;Figure 2.&lt;/strong&gt; Embedding generation happens at insert time inside Oracle AI Database, where document text is embedded with &lt;code&gt;VECTOR_EMBEDDING()&lt;/code&gt; and stored together with the row in a &lt;code&gt;VECTOR&lt;/code&gt; column.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F04%2FFigure-2-v2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F04%2FFigure-2-v2.png" alt="Diagram showing embedding generation inside Oracle AI Database during data insertion. Document text is passed through a SQL INSERT statement, where the VECTOR_EMBEDDING() function generates a vector (for example, VECTOR(384)) within the same transactional boundary, and the resulting embedding is stored alongside the data as stored vectors." width="800" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before moving into retrieval, the notebook inspects the inserted rows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;category&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;DBMS_LOB&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;SUBSTR&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;doc_text&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;120&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;preview&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;onnx_docs&lt;/span&gt;
&lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 8: Run semantic search in SQL and LangChain
&lt;/h3&gt;

&lt;p&gt;Once embeddings are stored, semantic retrieval is handled entirely inside Oracle. The notebook uses &lt;code&gt;VECTOR_DISTANCE()&lt;/code&gt; together with &lt;code&gt;VECTOR_EMBEDDING()&lt;/code&gt; so that the query text is embedded on the fly and compared against the stored vectors:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;
  &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;category&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;DBMS_LOB&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;SUBSTR&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;doc_text&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;doc_preview&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="n"&gt;VECTOR_DISTANCE&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;embedding&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;VECTOR_EMBEDDING&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ALL_MINILM_L12_V2&lt;/span&gt; &lt;span class="k"&gt;USING&lt;/span&gt; &lt;span class="s1"&gt;'How does Oracle support semantic search?'&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="k"&gt;DATA&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;COSINE&lt;/span&gt;
  &lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="n"&gt;distance&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;onnx_docs&lt;/span&gt;
&lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;distance&lt;/span&gt;
&lt;span class="k"&gt;FETCH&lt;/span&gt; &lt;span class="k"&gt;FIRST&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt; &lt;span class="k"&gt;ROWS&lt;/span&gt; &lt;span class="k"&gt;ONLY&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The user query is embedded directly within Oracle, where it is compared against stored document vectors. The results are then ranked by similarity, and the closest semantic matches are returned through SQL.&lt;/p&gt;

&lt;p&gt;The notebook explicitly explains how to interpret the output: the smaller the cosine distance, the more semantically similar the document is to the query.&lt;/p&gt;
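&lt;p&gt;For intuition, this is what cosine distance computes, illustrated in pure Python: one minus the cosine similarity of the two vectors, so identical directions score 0 and orthogonal directions score 1.&lt;/p&gt;

```python
# Pure-Python illustration of the metric behind VECTOR_DISTANCE(..., COSINE).
# Smaller values mean the vectors point in closer directions.
import math

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (norm_a * norm_b)

print(cosine_distance([1.0, 0.0], [2.0, 0.0]))  # → 0.0 (same direction)
print(cosine_distance([1.0, 0.0], [0.0, 1.0]))  # → 1.0 (orthogonal)
```

&lt;p&gt;Note that cosine distance ignores vector magnitude, which is why it is a common default for comparing text embeddings.&lt;/p&gt;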

&lt;p&gt;The notebook also runs several queries to validate that semantic ranking remains meaningful across different phrasings:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;test_queries&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Which Oracle feature helps semantic retrieval?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Can I store embeddings in the database?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;How does LangChain work with Oracle vectors?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Why are ONNX models useful here?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
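&lt;p&gt;A small helper can run the same top-k search for each phrasing and collect the ranked results. This is a sketch assuming a DB-API cursor and the &lt;code&gt;onnx_docs&lt;/code&gt; table from the earlier steps:&lt;/p&gt;

```python
# Run the same semantic search for several query phrasings and collect the
# ranked rows per query. `cur` is any DB-API cursor (e.g. python-oracledb).
SEARCH_SQL = """
SELECT id,
       DBMS_LOB.SUBSTR(doc_text, 200, 1) AS preview,
       VECTOR_DISTANCE(
         embedding,
         VECTOR_EMBEDDING(ALL_MINILM_L12_V2 USING :q AS DATA),
         COSINE) AS distance
FROM onnx_docs
ORDER BY distance
FETCH FIRST :k ROWS ONLY"""

def rank_queries(cur, queries, k=3):
    """Return {query: [(id, preview, distance), ...]} ranked by distance."""
    results = {}
    for q in queries:
        cur.execute(SEARCH_SQL, {"q": q, "k": k})
        results[q] = cur.fetchall()
    return results
```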



&lt;p&gt;&lt;strong&gt;Figure 3.&lt;/strong&gt; At query time, Oracle embeds the input text, compares it with stored vectors using &lt;code&gt;VECTOR_DISTANCE()&lt;/code&gt;, and returns the nearest semantic matches directly through SQL.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F04%2FFigure-3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F04%2FFigure-3.png" alt="Diagram showing semantic search in Oracle AI Database at query time. A user query is embedded into a query vector, which is then compared against stored vectors using a distance search with VECTOR_DISTANCE(). The system returns the closest semantic matches as ranked results directly through SQL." width="800" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The notebook then adds an optional framework layer using LangChain:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_oracledb.embeddings&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;OracleEmbeddings&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_oracledb.vectorstores.oraclevs&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;OracleVS&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With &lt;code&gt;OracleEmbeddings&lt;/code&gt;, the application can use Oracle's registered in-database embedding model:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;oracle_embedder&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;OracleEmbeddings&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;params&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;provider&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;database&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;model&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;MODEL_NAME&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The notebook also validates that the LangChain embedding call returns a vector of the expected size:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;lc_embedding&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;oracle_embedder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;embed_query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Oracle AI Database performs semantic search using vectors.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Embedding dimension: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;lc_embedding&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;First 5 values: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;lc_embedding&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="si"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The notebook then uses &lt;code&gt;OracleVS&lt;/code&gt;, a LangChain-compatible vector store backed by Oracle AI Vector Search.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_core.documents&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Document&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_oracledb.vectorstores.oraclevs&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;OracleVS&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_community.vectorstores.utils&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;DistanceStrategy&lt;/span&gt;

&lt;span class="n"&gt;langchain_docs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="nc"&gt;Document&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;page_content&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Oracle AI Database supports vector storage and semantic search.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="nc"&gt;Document&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;page_content&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;An ONNX embedding model can be loaded directly into Oracle.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="nc"&gt;Document&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;page_content&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;LangChain can use OracleVS to query Oracle AI Vector Search.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="nc"&gt;Document&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;page_content&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Using in-database embeddings can reduce architectural complexity.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="n"&gt;vector_store&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;OracleVS&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_documents&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;documents&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;langchain_docs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;embedding&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;oracle_embedder&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;table_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;LC_ONNX_DEMO&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;distance_strategy&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;DistanceStrategy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;COSINE&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The notebook also runs a similarity query through the LangChain abstraction:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;vector_store&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;similarity_search&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;How can Oracle Database help with semantic retrieval?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;doc&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;enumerate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;results&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;start&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;. &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;doc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;page_content&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Validation &amp;amp; Troubleshooting
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Validate that the model appears in &lt;code&gt;USER_MINING_MODELS&lt;/code&gt; after &lt;code&gt;DBMS_VECTOR.LOAD_ONNX_MODEL()&lt;/code&gt; or &lt;code&gt;DBMS_VECTOR.LOAD_ONNX_MODEL_CLOUD()&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Confirm that &lt;code&gt;VECTOR_EMBEDDING()&lt;/code&gt; returns a 384-dimensional embedding for the loaded model.&lt;/li&gt;
&lt;li&gt;If semantic ranking looks off, verify that the same model is used for both stored document embeddings and query embeddings.&lt;/li&gt;
&lt;li&gt;If using cloud loading, verify URI or PAR validity, bucket path, region, and credential privileges.&lt;/li&gt;
&lt;li&gt;When rerunning the notebook, check whether the model and demo tables already exist to avoid duplicate object errors.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Frequently Asked Questions
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Why load the model into Oracle instead of calling an external API?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Because Oracle can generate embeddings directly in SQL, which reduces external dependencies and keeps data and inference inside the same system boundary.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why does the model need to be augmented?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Because Oracle must be able to accept raw text input directly. That requires tokenization and preprocessing logic to already be included in the ONNX graph.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What does &lt;code&gt;VECTOR_EMBEDDING()&lt;/code&gt; do?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
It invokes the registered model inside Oracle and returns the embedding vector for the input text.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What does the &lt;code&gt;VECTOR&lt;/code&gt; column store?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
It stores the numeric embedding representation produced by the model. In this example, the vectors are 384-dimensional &lt;code&gt;FLOAT32&lt;/code&gt; values.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How is semantic similarity computed?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
This workflow uses &lt;code&gt;VECTOR_DISTANCE()&lt;/code&gt; with cosine distance to compare the stored document vectors with the embedded query.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Can the model be reused by multiple applications?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Yes. Once registered and granted appropriately, the model can be invoked by any application that has access to the Oracle environment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Can I load the model from cloud storage instead of a local mounted directory?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Yes. Oracle AI Database supports &lt;code&gt;DBMS_VECTOR.LOAD_ONNX_MODEL_CLOUD()&lt;/code&gt; for models in Oracle Cloud Object Storage, with either a credential or a PAR URL.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Does LangChain move embeddings outside Oracle?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
No. LangChain provides a higher-level interface, but the model execution and vector search still run in Oracle.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Does this replace a separate vector database?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
For many use cases, yes. Oracle provides native vector storage and vector search directly in the database.&lt;/p&gt;




&lt;h2&gt;
  
  
  Related Documentation and Further Reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/Ela689/oracle-ai-developer-hub/blob/onnx-embeddings/notebooks/onnx_embeddings_oracle_ai_database.ipynb" rel="noopener noreferrer"&gt;Companion notebook on GitHub&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.oracle.com/en/database/oracle/oracle-database/26/" rel="noopener noreferrer"&gt;Oracle Database 26ai documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.oracle.com/en/database/oracle/oracle-database/26/vecse/" rel="noopener noreferrer"&gt;Oracle AI Vector Search User's Guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.oracle.com/en/database/oracle/oracle-database/26/vecse/load_onnx_model_cloud.html#GUID-82A8D291-8096-4A7C-8882-9B6AC4A7FCCB" rel="noopener noreferrer"&gt;LOAD_ONNX_MODEL_CLOUD&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.oracle.com/en/database/oracle/oracle-database/26/arpls/dbms_vector1.html" rel="noopener noreferrer"&gt;DBMS_VECTOR package reference&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.oracle.com/en/database/oracle/oracle-database/26/sqlrf/vector_embedding.html" rel="noopener noreferrer"&gt;VECTOR_EMBEDDING SQL reference&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.oracle.com/en/database/oracle/oracle-database/26/sqlrf/vector_distance.html" rel="noopener noreferrer"&gt;VECTOR_DISTANCE SQL reference&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.oracle.com/en/database/oracle/oracle-database/26/arpls/dbms_cloud.html" rel="noopener noreferrer"&gt;DBMS_CLOUD package reference&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.oracle.com/en/database/oracle/oracle-database/26/adjsn/" rel="noopener noreferrer"&gt;Oracle JSON Developer's Guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.oracle.com/en/database/oracle/oracle-database/26/ccapp/" rel="noopener noreferrer"&gt;Oracle Text Application Developer's Guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.oracle.com/en/database/oracle/oracle-database/26/spatl/" rel="noopener noreferrer"&gt;Oracle Spatial and Graph documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.oracle.com/en/database/oracle/oracle-database/26/dbseg/" rel="noopener noreferrer"&gt;Oracle Database Security Guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.langchain.com/oss/python/integrations/vectorstores/oracle" rel="noopener noreferrer"&gt;LangChain Oracle vector store integration&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>oracle</category>
      <category>database</category>
      <category>ai</category>
      <category>python</category>
    </item>
    <item>
      <title>Vector Embeddings: How They Work, Where to Store Them, and Best Practices</title>
      <dc:creator>Wojtek Pluta</dc:creator>
      <pubDate>Thu, 16 Apr 2026 15:51:46 +0000</pubDate>
      <link>https://forem.com/oracledevs/vector-embeddings-how-they-work-where-to-store-them-and-best-practices-429g</link>
      <guid>https://forem.com/oracledevs/vector-embeddings-how-they-work-where-to-store-them-and-best-practices-429g</guid>
      <description>&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Vector embeddings convert unstructured data into numeric representations that power semantic search, recommendations, and multimodal analytics beyond keywords.&lt;/li&gt;
&lt;li&gt;Embedding success isn’t just about the model—it also depends on a data platform that can meet requirements for scale, low latency, security, and governance, including vector indexing/ANN search, access controls, encryption, and monitoring.&lt;/li&gt;
&lt;li&gt;Oracle AI Database unifies native vector types and similarity search, enterprise-grade security, and integrated vector, structured, and unstructured data—so teams can build RAG, search, and analytics without piecing together multiple systems.&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F02%2Fimage-3-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F02%2Fimage-3-1.png" title="Semantic similarity search over vector space - Oracle Help Center" alt="Semantic similarity search over vector space - Oracle Help Center" width="719" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Semantic similarity search over vector space - Oracle Help Center&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction to Vector Embeddings
&lt;/h2&gt;

&lt;p&gt;Vector embeddings have changed the way we interact with unstructured data such as text, images, audio, and code. By transforming this data into high-dimensional numeric vectors, we can use embeddings to process the semantic meaning and relationships within the data.&lt;/p&gt;

&lt;p&gt;We can think of embeddings as task- or domain-specific vector representations of data. The geometric relationships among them represent meaningful similarities between concepts in semantic space. Efficient storage and querying of vector embeddings enables capabilities such as semantic search, recommendations, and advanced analytics, and bridges the gap between unstructured and structured information.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are Vector Embeddings? A Definition and Their Role
&lt;/h2&gt;

&lt;p&gt;Vector embeddings are mathematical representations of objects—such as words, sentences, images, or audio—encoded as dense, high-dimensional vectors. Each vector encapsulates features that capture semantic meaning, context, or structure of the data. For example, similar words or images will have embeddings positioned closely in the vector space, enabling similarity-based operations. This allows for similar “things” to be grouped together under a distance metric.&lt;/p&gt;
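&lt;p&gt;As a toy illustration of "similar things end up close together," consider the hypothetical 3-dimensional vectors below (real models produce hundreds of dimensions; these values are made up purely to show the geometry):&lt;/p&gt;

```python
import math

# Hypothetical 3-D "embeddings" -- invented for illustration, not model output.
vectors = {
    "cat":    [0.90, 0.80, 0.10],
    "kitten": [0.85, 0.75, 0.15],
    "truck":  [0.10, 0.20, 0.90],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# "cat" scores noticeably higher similarity with "kitten" than with "truck".
print(cosine_similarity(vectors["cat"], vectors["kitten"]))
print(cosine_similarity(vectors["cat"], vectors["truck"]))
```

&lt;p&gt;The grouping falls out of the distance metric alone: no keywords or labels are compared, only vector geometry.&lt;/p&gt;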

&lt;p&gt;The adoption of vector embeddings underpins many cutting-edge technologies:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Retrieval-augmented generation (RAG):&lt;/strong&gt; Enhances large language models by retrieving relevant context using embedding similarity.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Semantic search:&lt;/strong&gt; Finds documents with similar context, not just matching keywords.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Recommendations:&lt;/strong&gt; Suggests products or content by comparing user or item embeddings.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Deduplication and anomaly detection:&lt;/strong&gt; Identifies near-duplicates or outliers based on embedding distances.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Multimodal analytics:&lt;/strong&gt; Links information across text, image, audio, and other domains.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This ability to bridge structured and unstructured data makes embeddings indispensable across modern data architectures.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Create Embeddings? Some Tools That Can Help
&lt;/h2&gt;

&lt;p&gt;A variety of tools can encode text, images, and code as vector embeddings, enabling similarity search, retrieval workflows (including RAG), and other ML tasks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;OpenAI – provides hosted embedding APIs backed by task-optimized models, accessible with REST interfaces.&lt;/li&gt;
&lt;li&gt;Hugging Face – offers a large catalog of pre-trained multimodal embedding models and libraries (such as the Transformers library), plus community benchmarks.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.oracle.com/database/" rel="noopener noreferrer"&gt;Oracle AI Database&lt;/a&gt; – provides a native vector memory store in Oracle Database, enabling storage, indexing (e.g., IVF/flat/HNSW), and retrieval of vector embeddings alongside relational data with SQL and PL/SQL integration; supports hybrid search (vector + metadata filters), enterprise-grade security, and governance for RAG and semantic search workloads&lt;/li&gt;
&lt;li&gt;TensorFlow – enables building and serving custom embedding models using Keras, enabling easy integration into training pipelines.&lt;/li&gt;
&lt;li&gt;PyTorch – provides flexible primitives to fine-tune or implement embedding models, and deploy them via TorchScript.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Benefits of Working With Vector Embeddings
&lt;/h2&gt;

&lt;p&gt;The following are just a few of the benefits vector embeddings have brought to today's AI tech stack:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Vector embeddings are currently the best way to transform complex data into numerical units that reflect meaning, similarity and enable clustering and retrieval beyond keyword matching.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The limitations of keyword methods were particularly visible in areas such as synonym handling, typos, and paraphrasing; embedding-based retrieval largely overcomes these gaps.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Embeddings support multilingual and cross-modal experiences by aligning meaning across languages and modalities.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Other approaches, such as sparse lexical retrieval and symbolic/ontology-based methods, can be effective, but dense vector embeddings are often a better fit when you need semantic similarity matching (for example, paraphrases and synonyms) rather than exact keyword overlap.&lt;/p&gt;

&lt;h2&gt;
  
  
  Challenges in Working With Vector Embeddings
&lt;/h2&gt;

&lt;p&gt;The following are some of the potential challenges you may face in working with vector embeddings, and potential ways to mitigate them:&lt;/p&gt;

&lt;h3&gt;
  
  
  Storage Volume and High Dimensionality
&lt;/h3&gt;

&lt;p&gt;Storage challenges include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Large embedding volumes:&lt;/strong&gt; Billions of vectors require scalable storage and efficient indexing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;High dimensionality:&lt;/strong&gt; Embeddings of 128, 512, or 1024+ dimensions need specialized data structures and optimized storage formats.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
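&lt;p&gt;A quick back-of-envelope calculation (illustrative numbers, before any index or replication overhead) shows why volume and dimensionality dominate storage planning:&lt;/p&gt;

```python
# Raw storage for float32 embeddings: vectors x dimensions x 4 bytes.
num_vectors = 100_000_000      # 100M documents (hypothetical corpus)
dimensions = 768               # a common embedding size
bytes_per_float32 = 4

raw_bytes = num_vectors * dimensions * bytes_per_float32
# Roughly 286 GiB of raw vectors alone, before indexes or replicas.
print(f"{raw_bytes / 1024**3:.1f} GiB")
```

&lt;p&gt;Quantization (e.g., int8 or binary vectors) and dimensionality reduction are common levers for shrinking this footprint at some cost in accuracy.&lt;/p&gt;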

&lt;h3&gt;
  
  
  Performance and Latency Bottlenecks
&lt;/h3&gt;

&lt;p&gt;Performance factors include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Indexing and search speed:&lt;/strong&gt; ANN techniques improve latency, but very large datasets demand optimized infrastructure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Batch insertion and streaming:&lt;/strong&gt; Efficiently handling ongoing ingestion of new embeddings.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Distributed System Complexities and Operational Overhead
&lt;/h3&gt;

&lt;p&gt;At scale, sharding, replication, and consistency management become complex. Automated scaling, monitoring, and failover are desirable for production systems.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cost Factors
&lt;/h3&gt;

&lt;p&gt;Vector embeddings may affect operational cost:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Compute and storage requirements:&lt;/strong&gt; High-dimensional data and fast search consume substantial resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Operational overhead:&lt;/strong&gt; Consider cost of infrastructure, team expertise, and maintenance.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Encryption at Rest and in Transit
&lt;/h3&gt;

&lt;p&gt;Securing embeddings is crucial as they can encode sensitive information:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Encryption at rest:&lt;/strong&gt; Protects stored vectors using strong industry-standard algorithms.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Encryption in transit:&lt;/strong&gt; Ensures vectors remain confidential when transmitted between systems or users.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Oracle AI Database enforces encryption by default and integrates with enterprise key management solutions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Access Control and Authentication
&lt;/h3&gt;

&lt;p&gt;Control who can access, modify, or query embeddings:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Granular permissions:&lt;/strong&gt; Define user roles and table-level permissions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Integration with SSO and identity providers:&lt;/strong&gt; Streamlines enterprise authentication.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Audit trails:&lt;/strong&gt; Track access and changes for compliance.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Data Sanitization and Monitoring
&lt;/h3&gt;

&lt;p&gt;Reduce risk by implementing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Sanitization:&lt;/strong&gt; Remove or obfuscate sensitive or personal information in embeddings before storage.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Monitoring and anomaly detection:&lt;/strong&gt; Detect unusual access patterns or potential misuse.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Advanced Cryptographic Techniques
&lt;/h3&gt;

&lt;p&gt;For highly sensitive embeddings:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Homomorphic encryption or secure multi-party computation:&lt;/strong&gt; Enables computation and search on encrypted embeddings, minimizing exposure.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Common Vector Embedding Use Cases
&lt;/h2&gt;

&lt;p&gt;Embeddings open up a wide array of practical use cases:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Enterprise search and information retrieval:&lt;/strong&gt; Improved accuracy and relevance in document and knowledge base searches.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Personalization and recommendation engines:&lt;/strong&gt; Enhanced user experiences by surfacing relevant content or products.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fraud and anomaly detection:&lt;/strong&gt; Early identification of unusual patterns using embedding distances.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data deduplication and clustering:&lt;/strong&gt; Streamlined datasets and improved analytics through intelligent grouping.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multimodal retrieval and analytics:&lt;/strong&gt; Unified analysis over diverse data types, fostering deeper insights.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Storing Vector Embeddings and the Oracle Advantage
&lt;/h2&gt;

&lt;p&gt;The following are a few key points related to the storage of vector embeddings, and how Oracle AI Database's native vector store capabilities can streamline and strengthen your stack.&lt;/p&gt;

&lt;h3&gt;
  
  
  Specialized Vector Databases
&lt;/h3&gt;

&lt;p&gt;Dedicated vector databases are built for storing, indexing, and searching embeddings efficiently. These databases excel at large-scale similarity search with features such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;High-dimensional indexing:&lt;/strong&gt; Specialized data structures to support billion-scale embeddings.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Approximate search capabilities:&lt;/strong&gt; Fast, scalable similarity queries using Approximate Nearest Neighbor (ANN) techniques.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;RESTful APIs and SDKs:&lt;/strong&gt; Developer-friendly interfaces for ingestion and search.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Popular examples include Pinecone, Weaviate, Milvus, and Vespa. Specialized databases are ideal for workloads with large volumes of embeddings and demanding similarity search requirements.&lt;/p&gt;

&lt;h3&gt;
  
  
  SQL/NoSQL Databases with Vector Support
&lt;/h3&gt;

&lt;p&gt;Traditional databases are evolving to meet AI's demands by adding native vector data types and search capabilities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;SQL databases:&lt;/strong&gt; PostgreSQL (with pgvector), Oracle AI Database, and others support vector columns and similarity search via extensions or built-in features.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;NoSQL databases:&lt;/strong&gt; MongoDB and Redis now offer basic vector search features, often using plugins or modules.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This integration enables seamless blending of embeddings with structured business data, supporting hybrid query scenarios.&lt;/p&gt;

&lt;h3&gt;
  
  
  Oracle AI Database Approach
&lt;/h3&gt;

&lt;p&gt;From Oracle's viewpoint, AI databases must natively support vector data types, efficient similarity queries, and enterprise security for integrating embeddings across applications. Oracle AI Database is designed to address these needs at scale.&lt;/p&gt;

&lt;p&gt;Oracle AI Database offers a unified approach that allows developers to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Store embeddings alongside structured and unstructured data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run similarity queries directly using SQL and specialized vector search operators.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Integrate with Oracle's rich security, high availability, and scalability features.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Combine vector search, filtering, ranking, and analytical queries in a single stack.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Example Procedures - Using Vector Embeddings in Oracle AI Database
&lt;/h2&gt;

&lt;p&gt;The following examples are intentionally minimal and illustrative. They highlight how Oracle AI Database supports native vector storage and SQL-based similarity search.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;documents&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;

 &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="n"&gt;NUMBER&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;

 &lt;span class="n"&gt;content&lt;/span&gt; &lt;span class="k"&gt;CLOB&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;

 &lt;span class="n"&gt;embedding&lt;/span&gt; &lt;span class="n"&gt;VECTOR&lt;/span&gt;

&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This example shows a minimal table definition using Oracle AI Database’s native VECTOR data type. In practice, embeddings are stored alongside structured or unstructured application data in the same database.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;content&lt;/span&gt;

&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;documents&lt;/span&gt;

&lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;VECTOR_DISTANCE&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;embedding&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;query_vector&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;COSINE&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;FETCH&lt;/span&gt; &lt;span class="k"&gt;FIRST&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt; &lt;span class="k"&gt;ROWS&lt;/span&gt; &lt;span class="k"&gt;ONLY&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This example illustrates SQL-based similarity search in Oracle AI Database. The &lt;code&gt;:query_vector&lt;/code&gt; placeholder represents the embedding generated from user input by an embedding model (inside or outside the database) and is used to rank the nearest matches.&lt;/p&gt;

&lt;h3&gt;
  
  
  Hybrid query pattern (semantic + relational filtering)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;content&lt;/span&gt;

&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;documents&lt;/span&gt;

&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;content&lt;/span&gt; &lt;span class="k"&gt;IS&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;

&lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;VECTOR_DISTANCE&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;embedding&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;query_vector&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;COSINE&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;FETCH&lt;/span&gt; &lt;span class="k"&gt;FIRST&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt; &lt;span class="k"&gt;ROWS&lt;/span&gt; &lt;span class="k"&gt;ONLY&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This hybrid pattern combines standard SQL filtering with semantic ranking in a single query. It is useful when semantic search must also respect metadata constraints, access controls, or business rules. This streamlines workflows and facilitates embedding-driven applications without moving data across siloed systems.&lt;/p&gt;

&lt;p&gt;Using Oracle Autonomous AI Database in conjunction with &lt;a href="https://docs.langchain.com/oss/python/integrations/vectorstores/oracle" rel="noopener noreferrer"&gt;langchain-oracledb&lt;/a&gt;, for example, we can generate, store, and query embeddings directly within the database, with no investment in a separate vector database.&lt;/p&gt;

&lt;h2&gt;
  
  
  Querying and Searching for Stored Vector Embeddings
&lt;/h2&gt;

&lt;p&gt;The following are a few of the things you should keep in mind if your work involves querying and searching for stored vector embeddings:&lt;/p&gt;

&lt;h3&gt;
  
  
  Approximate Nearest Neighbor (ANN) Algorithms and Data Structures
&lt;/h3&gt;

&lt;p&gt;Searching for similar embeddings at scale requires efficient algorithms:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ANN Techniques:&lt;/strong&gt; Rather than exact search, algorithms like HNSW (Hierarchical Navigable Small World), IVF (Inverted File Index), and PQ (Product Quantization) yield fast, near-accurate results.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Data Structures:&lt;/strong&gt; Use trees (KD-Tree, Ball Tree), graphs (HNSW), or hash-based indices (LSH) to organize and retrieve vectors efficiently.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;ANN can deliver millisecond-latency searches over millions or billions of embeddings, making it essential for operational AI applications.&lt;/p&gt;
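&lt;p&gt;For intuition, exact nearest-neighbor search is just a linear scan, as in the sketch below; ANN indexes such as HNSW or IVF exist precisely because this brute-force approach stops scaling once the corpus grows. The sketch is illustrative and library-agnostic, not Oracle-specific:&lt;/p&gt;

```python
import math

def euclidean(a, b):
    """Straight-line distance between two vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def exact_knn(query, corpus, k=2):
    """Exhaustive k-nearest-neighbor scan: O(n * d) work per query."""
    scored = sorted(corpus.items(), key=lambda item: euclidean(query, item[1]))
    return [doc_id for doc_id, _ in scored[:k]]

corpus = {
    "doc1": [0.10, 0.20],
    "doc2": [0.90, 0.80],
    "doc3": [0.15, 0.25],
}
print(exact_knn([0.12, 0.22], corpus))  # ['doc1', 'doc3']
```

&lt;p&gt;ANN structures trade a small amount of recall for sub-linear query time, which is why production systems index vectors rather than scan them.&lt;/p&gt;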

&lt;h3&gt;
  
  
  High-level retrieval workflow (generalized)
&lt;/h3&gt;

&lt;p&gt;At a high level, semantic retrieval follows a simple and reusable pattern that applies across vector databases, frameworks, and application stacks:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Convert user input into a query embedding.&lt;/li&gt;
&lt;li&gt;Compare it against stored embeddings.&lt;/li&gt;
&lt;li&gt;Rank results by similarity.&lt;/li&gt;
&lt;li&gt;Apply filters and business rules as needed.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This high-level workflow is framework- and language-agnostic. While the underlying implementation differs across platforms and tools, the conceptual flow remains the same for most vector search and RAG-style applications.&lt;/p&gt;
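&lt;p&gt;The four steps above can be sketched end-to-end in a few lines. Here the &lt;code&gt;embed&lt;/code&gt; function is a deliberately crude stand-in for a real embedding model, and the in-memory list stands in for a vector store; only the shape of the workflow is meant to carry over:&lt;/p&gt;

```python
def embed(text):
    """Placeholder embedder: a toy character-hash into 4 dimensions.
    Real systems call a model (in-database ONNX, API-hosted, etc.)."""
    vec = [0.0] * 4
    for i, ch in enumerate(text.lower()):
        vec[i % 4] += ord(ch) / 1000.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

# 1. Store document embeddings, with metadata for later filtering.
store = [
    {"id": 1, "text": "resetting your password", "public": True},
    {"id": 2, "text": "internal billing runbook", "public": False},
]
for doc in store:
    doc["embedding"] = embed(doc["text"])

# 2. Embed the query.  3. Rank by similarity.  4. Apply a business rule.
query_vec = embed("how do I reset my password")
visible = [d for d in store if d["public"]]   # step 4: access filter
ranked = sorted(visible, key=lambda d: cosine(query_vec, d["embedding"]),
                reverse=True)
print([d["id"] for d in ranked])
```

&lt;p&gt;In a database with native vector support, steps 2-4 collapse into a single SQL query, as shown in the hybrid example earlier.&lt;/p&gt;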

&lt;h3&gt;
  
  
  Popular Libraries
&lt;/h3&gt;

&lt;p&gt;Several tools make it easier to store and search embeddings:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Vector search libraries:&lt;/strong&gt; FAISS (Facebook AI Similarity Search), Annoy (Spotify), NMSLIB, ScaNN.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These libraries power both stand-alone vector stores and integrations within general-purpose databases.&lt;/p&gt;

&lt;h3&gt;
  
  
  How to Choose the Right Similarity Metrics
&lt;/h3&gt;

&lt;p&gt;Selecting the right similarity metric is critical for effective search:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cosine similarity:&lt;/strong&gt; Measures the angle between vectors; ideal for text and semantic similarity.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Euclidean distance:&lt;/strong&gt; Useful for geometric or spatial data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Dot product:&lt;/strong&gt; Common in deep learning models; efficient for high-dimensional comparisons.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Your choice depends on the nature of your data and the specifics of your application.&lt;/p&gt;
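&lt;p&gt;The three metrics can be compared directly on one pair of vectors. In this pure-Python sketch, &lt;code&gt;b&lt;/code&gt; points in the same direction as &lt;code&gt;a&lt;/code&gt; but with twice the magnitude, so cosine similarity reports a perfect match while Euclidean distance and the dot product register the size difference:&lt;/p&gt;

```python
import math

a = [1.0, 2.0, 3.0]
b = [2.0, 4.0, 6.0]   # same direction as a, twice the magnitude

dot = sum(x * y for x, y in zip(a, b))                           # 28.0
norm_a = math.sqrt(sum(x * x for x in a))
norm_b = math.sqrt(sum(y * y for y in b))

cosine_sim = dot / (norm_a * norm_b)                             # 1.0: direction only
euclid = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))      # magnitude matters
print(cosine_sim, dot, euclid)
```

&lt;p&gt;This is why cosine similarity is the usual default for text embeddings, where vector length is typically normalized away, while Euclidean distance suits data where magnitude is meaningful.&lt;/p&gt;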

&lt;h3&gt;
  
  
  Oracle AI Database Capabilities
&lt;/h3&gt;

&lt;p&gt;Oracle’s AI Database combines native vector capabilities, enterprise security, and proven scalability, making it a robust choice for organizations seeking a unified solution for traditional data and AI-enabled workloads.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Native vector data types and indexing:&lt;/strong&gt; Supports efficient storage and retrieval of high-dimensional vectors.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Integrated similarity search:&lt;/strong&gt; Enables querying and filtering based on vector proximity.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Enterprise-grade security:&lt;/strong&gt; Encryption at rest, robust access controls, and activity monitoring.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Hybrid queries:&lt;/strong&gt; Seamless combination of structured, unstructured, and vector data in complex analytical tasks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;High scalability:&lt;/strong&gt; Handles massive volumes of embeddings without performance degradation.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Best Practices for Working With Vector Embeddings
&lt;/h2&gt;

&lt;p&gt;The following are a few of the best practices for using vector embeddings to power semantic search, personalized recommendations, multimodal analytics (including anomaly detection), and domain-specific insights across enterprise applications.&lt;/p&gt;

&lt;h3&gt;
  
  
  Semantic Search and Information Retrieval
&lt;/h3&gt;

&lt;p&gt;Semantic search with embeddings offers better context and intent recognition than keyword search. Querying an embedding retrieves documents or objects with similar meanings—crucial for legal, healthcare, customer support, and research applications.&lt;/p&gt;

&lt;h3&gt;
  
  
  Recommendation Systems and Personalization
&lt;/h3&gt;

&lt;p&gt;Compare user and item embeddings to power personalized recommendations. This increases engagement, retention, and value in e-commerce, media, and B2B applications.&lt;/p&gt;

&lt;h3&gt;
  
  
  Multimodal Search and Anomaly Detection
&lt;/h3&gt;

&lt;p&gt;Combine embeddings across text, image, and audio for multimodal analytics or use distance-based thresholds to flag anomalies and outliers in fraud prevention or system monitoring.&lt;/p&gt;

&lt;h3&gt;
  
  
  Domain-Specific Analytics
&lt;/h3&gt;

&lt;p&gt;Specialized embeddings can be trained for particular industries—finance, healthcare, retail—and stored/retrieved for advanced analytics, predictions, or compliance monitoring.&lt;/p&gt;

&lt;h3&gt;
  
  
  How to Select Appropriate Tools and Architectures
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Match your use case to the data platform (dedicated vector database vs. extended relational/NoSQL).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you need both large-scale similarity search and tight integration with relational data, Oracle AI Database is a good option.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Factor in scale, integration needs, security requirements, and budget.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Leverage proven libraries and frameworks to speed up development.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Security and Scalability Considerations
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Encrypt embeddings, control access, and monitor usage.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose solutions that scale with data growth and user demand.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Balance security, performance, and cost based on enterprise requirements.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Architectural Patterns
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Hybrid architecture:&lt;/strong&gt; Combine vector storage/search with structured data in a unified database like Oracle AI Database.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Microservices:&lt;/strong&gt; Separate ingestion, search, and analytics as independently scaling components if needed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cloud-native solutions:&lt;/strong&gt; Consider managed vector databases for elasticity and reduced operational burden.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Tooling Reminders
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Use specialized libraries (FAISS, Annoy, HNSWLib) for local development, prototyping, or custom solutions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For production or enterprise use, rely on databases with native vector support and robust security, such as Oracle AI Database.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Frequently Asked Questions (FAQ)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What are vector embeddings and why do they matter?
&lt;/h3&gt;

&lt;p&gt;Vector embeddings are dense, high-dimensional numeric representations of objects like text, images, audio, or code. They place semantically similar items near each other in a continuous space, enabling tasks like semantic search, recommendations, RAG, deduplication, and anomaly detection. Compared with keyword or symbolic methods, embeddings better capture meaning, handle synonyms/paraphrases, and are robust across languages and modalities.&lt;/p&gt;

&lt;h3&gt;
  
  
  What are the main challenges in storing and querying embeddings at scale?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Volume and dimensionality: Billions of vectors, often 128–1024+ dimensions&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Performance: Fast indexing and low-latency search, efficient batch/stream ingestion&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Distributed ops: Sharding, replication, consistency, monitoring, and failover&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cost: Compute, storage, and operational overhead&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Security: Encryption at rest/in transit, access control, auditing, data sanitization, and advanced cryptographic techniques for sensitive data&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Where should I store embeddings: a dedicated vector database or a database with vector support?
&lt;/h3&gt;

&lt;p&gt;Two common patterns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Specialized vector databases (e.g., Pinecone, Weaviate, Milvus, Vespa) for high-scale, low-latency similarity search with ANN, SDKs, and REST APIs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;SQL/NoSQL databases with vector support (e.g., Oracle AI Database, PostgreSQL with pgvector, MongoDB, Redis) for blending vectors with structured data and enabling hybrid queries.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Your choice should consider scale, integration with existing data, security, cost, and operational complexity.&lt;/p&gt;

&lt;h3&gt;
  
  
  What does Oracle AI Database provide for embeddings?
&lt;/h3&gt;

&lt;p&gt;Oracle AI Database offers native vector types and indexing, integrated similarity search in SQL, enterprise-grade security (encryption, granular access control, auditing), and high scalability. It supports hybrid analytical queries across structured, unstructured, and vector data. With Oracle Autonomous AI Database and libraries like &lt;a href="https://docs.langchain.com/oss/python/integrations/vectorstores/oracle" rel="noopener noreferrer"&gt;langchain-oracledb&lt;/a&gt;, teams can generate, store, and query embeddings within one platform—avoiding data silos and extra operational overhead. Encrypt data, enforce access controls, and monitor usage to meet enterprise requirements.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Storing and querying vector embeddings is a critical enabler for next-generation AI and data applications. By leveraging the right databases, libraries, and best practices, organizations and engineers can unlock new value from unstructured content, while maintaining performance, scalability, and security.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.langchain.com/oss/python/integrations/vectorstores/oracle" rel="noopener noreferrer"&gt;LangChain - Oracle AI Vector Search&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/oracle/langchain-oracle" rel="noopener noreferrer"&gt;GitHub - LangChain-Oracle&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.oracle.com/database/ai-vector-search/" rel="noopener noreferrer"&gt;Oracle AI Vector Search&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>oracle</category>
      <category>database</category>
      <category>ai</category>
      <category>vectorsearch</category>
    </item>
    <item>
      <title>Agent Memory: A Free Short Course on Building Memory-Aware Agents</title>
      <dc:creator>Wojtek Pluta</dc:creator>
      <pubDate>Thu, 16 Apr 2026 12:13:17 +0000</pubDate>
      <link>https://forem.com/oracledevs/agent-memory-a-free-short-course-on-building-memory-aware-agents-365k</link>
      <guid>https://forem.com/oracledevs/agent-memory-a-free-short-course-on-building-memory-aware-agents-365k</guid>
      <description>&lt;p&gt;Oracle and DeepLearning.AI have launched &lt;a href="https://www.deeplearning.ai/short-courses/agent-memory-building-memory-aware-agents/" rel="noopener noreferrer"&gt;&lt;strong&gt;Agent Memory: Building Memory-Aware Agents&lt;/strong&gt;&lt;/a&gt;, a free short course on DeepLearning.AI that teaches developers how to architect memory systems that give agents persistence, continuity, and the ability to learn over time.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Memory turns a stateless LLM into an agent that learns over time. How to architect agentic memory is one of the most debated topics in AI right now. This course gives AI developers and engineers a comprehensive view of the most common memory patterns."&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Andrew Ng, Founder, DeepLearning.AI&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Most agents forget. Each new session starts from zero, accumulated context from previous interactions is discarded, and the agent has no mechanism to learn from what it has already done. As a result, AI developers often rely on workarounds: cramming everything into the context window, reloading conversation logs, or bolting on ad-hoc retrieval.&lt;/p&gt;

&lt;p&gt;These approaches can work, but they don't provide a clear mental model for how information should live inside an agentic system boundary. This course treats memory as a first-class citizen in AI agents, and is built around that memory-first perspective.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"For the past few years, we have focused on prompt and context engineering to get the best results from a single LLM call. But engineering the right context for agents that need to work over days or weeks needs an effective memory system. This course takes that memory-first approach to building agents."&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Richmond Alake, AI Developer Experience Director, Oracle&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Beyond Prompt Engineering
&lt;/h2&gt;

&lt;p&gt;You’ve heard about prompt engineering. You've probably heard about context engineering. This course introduces the next layer: &lt;strong&gt;memory engineering&lt;/strong&gt;, treating long-term memory as first-class infrastructure that is external to the model, persistent, and structured.&lt;/p&gt;

&lt;p&gt;The course covers the full memory stack across five hands-on modules, built on LangChain, Tavily, and Oracle AI Database:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Why AI Agents Need Memory:&lt;/strong&gt; Explore failure modes of stateless agents and the memory-first architecture used throughout the course.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Constructing the Memory Manager:&lt;/strong&gt; Design persistent memory stores across memory types, model memory data for efficient retrieval, and implement a manager that orchestrates read, write, and retrieval operations during agent execution.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scaling Agent Tool Use with Semantic Tool Memory:&lt;/strong&gt; Treat tools as procedural memory, index them in a vector store, and retrieve only contextually relevant tools at inference time using semantic search.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Memory Operations: Extraction, Consolidation, and Self-Updating Memory:&lt;/strong&gt; Build LLM-powered pipelines that extract structured facts from raw interactions, consolidate episodic memory into semantic memory, and implement write-back loops that let an agent autonomously update and resolve conflicts in its own knowledge base.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Memory-Aware Agent:&lt;/strong&gt; Assemble a stateful agent that initializes from long-term memory at startup, checkpoints intermediate reasoning states during execution, and persists learned context across sessions.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;"The patterns we cover here are not theoretical. AI developers and engineers will walk through real implementations: building memory stores, wiring up extraction pipelines, and handling contradictions in memory. You leave with working code you can adapt for your own production agents."&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Nacho Martinez, AI Developer Advocate, Oracle&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Oracle AI Database as the Agent Memory Core
&lt;/h2&gt;

&lt;p&gt;Oracle AI Database serves as the unified agent memory core throughout the course. Instead of treating a database as a passive store, the course demonstrates how Oracle AI Database functions as the active retrieval and persistence layer that makes each memory pattern work in production.&lt;/p&gt;

&lt;p&gt;Oracle AI Database brings key retrieval strategies into a single engine, including vector search for semantic similarity and unstructured knowledge retrieval, graph traversal for relationship-aware reasoning across connected entities, and relational queries for structured, transactional memory that demands precision and consistency. This helps reduce complexity by avoiding separate systems for different data types.&lt;/p&gt;

&lt;p&gt;The memory patterns taught in this course, such as semantic tool memory, self-updating memory, and memory consolidation, are the same patterns used to build production-grade agentic systems on Oracle AI Database. This course puts that architecture directly in the hands of AI developers and engineers.&lt;/p&gt;




&lt;h2&gt;
  
  
  Who This Course Is For
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Agent Memory: Building Memory-Aware Agents&lt;/strong&gt; is designed for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AI developers and engineers building or evaluating agentic systems who need production-grade memory architecture&lt;/li&gt;
&lt;li&gt;ML engineers integrating LLMs into multi-turn or multi-session workflows&lt;/li&gt;
&lt;li&gt;Developers working with LangChain, LangGraph, or Tavily who want durable, structured memory&lt;/li&gt;
&lt;li&gt;Technical leaders assessing Oracle AI Database for agent infrastructure at scale&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Availability
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Agent Memory: Building Memory-Aware Agents&lt;/strong&gt; is available now on DeepLearning.AI. The course is free to access and requires no prior Oracle experience. Developers can &lt;a href="https://www.deeplearning.ai/short-courses/agent-memory-building-memory-aware-agents/" rel="noopener noreferrer"&gt;enroll in the course&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  About Oracle AI Database
&lt;/h2&gt;

&lt;p&gt;Oracle AI Database is a converged database platform built for AI workloads. It provides native vector search, graph traversal, relational retrieval, and the persistence infrastructure required for production agent memory systems in a single database engine. This removes the fragmented infrastructure that can become a bottleneck for AI innovation. Oracle AI Database is used by developers and enterprises as the unified memory core for AI agents to build and deploy intelligent, secure, memory-aware systems.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>oracle</category>
      <category>database</category>
      <category>agents</category>
    </item>
    <item>
      <title>A Practical Guide to Choosing the Right Memory Substrate for Your AI Agents</title>
      <dc:creator>Wojtek Pluta</dc:creator>
      <pubDate>Thu, 16 Apr 2026 12:11:25 +0000</pubDate>
      <link>https://forem.com/oracledevs/a-practical-guide-to-choosing-the-right-memory-substrate-for-your-ai-agents-33hj</link>
      <guid>https://forem.com/oracledevs/a-practical-guide-to-choosing-the-right-memory-substrate-for-your-ai-agents-33hj</guid>
      <description>&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Don't conflate interface with substrate.&lt;/strong&gt; Filesystems win as an interface (LLMs already know how to use them); databases win as a substrate (concurrency, auditability, semantic search).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;For prototypes, files are hard to beat.&lt;/strong&gt; Simple, transparent, debuggable—a folder of markdown gets you surprisingly far when iteration speed matters most.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Shared state demands a database.&lt;/strong&gt; Concurrent filesystem writes can silently corrupt data. If multiple agents or users touch the same memory, start with database guarantees.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Semantic retrieval beats keyword search at scale.&lt;/strong&gt; Grep performance degrades on paraphrases and synonyms. Vector search finds content by meaning, which is critical once your knowledge base grows.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Avoid polyglot persistence.&lt;/strong&gt; Running separate systems for vectors, documents, and transactions multiplies your failure modes. Oracle AI Database simplifies your memory architecture.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;AI developers are watching agent engineering evolve in real time, with leading teams openly sharing what works. One principle keeps showing up from the front lines: &lt;strong&gt;build within the LLM’s constraints&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In practice, two constraints dominate:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;LLMs are stateless across sessions&lt;/strong&gt; (no durable memory unless you bring it back in).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Context windows are bounded&lt;/strong&gt; (and performance can degrade as you stuff more tokens in).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So “just add more context” isn’t a reliable strategy due to the quadratic cost of attention mechanisms and the degradation of reasoning capabilities as context fills up. The winning pattern is &lt;strong&gt;external memory + disciplined retrieval&lt;/strong&gt;: store state outside the prompt (artifacts, decisions, tool outputs), then pull back only what matters for the current loop.&lt;/p&gt;

&lt;p&gt;There’s also a useful upside: because models are trained on internet-era developer workflows, they’re unusually competent with &lt;strong&gt;developer-native interfaces&lt;/strong&gt;: repos, folders, markdown, logs, and CLI-style interactions. That’s why filesystems keep showing up in modern agent stacks.&lt;/p&gt;

&lt;p&gt;This is where the debate heats up: “files are all you need” for agent memory. Most arguments collapse because they treat &lt;strong&gt;interface&lt;/strong&gt;, &lt;strong&gt;storage&lt;/strong&gt;, and &lt;strong&gt;deployment&lt;/strong&gt; as the same decision. They aren’t.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Filesystems are winning as an interface&lt;/strong&gt; because models already know how to list directories, grep for patterns, read ranges, and write artifacts. &lt;strong&gt;Databases are winning as a substrate&lt;/strong&gt; because once memory must be shared, audited, queried, and made reliable under concurrency, you either adopt database guarantees or painfully reinvent them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F02%2Ffs-db-FILEvsDB.drawio-4-scaled.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F02%2Ffs-db-FILEvsDB.drawio-4-scaled.png" alt="Filesystem interface versus database substrate for AI agent memory" width="800" height="755"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this piece, we give a systematic comparison of filesystems and databases for agent memory: where each approach shines, where it breaks down, and a decision framework for choosing the right foundation as you move from prototype to production.&lt;/p&gt;

&lt;p&gt;Our aim is to educate AI developers on various approaches to agent memory, backed by performance guidance and working code.&lt;/p&gt;

&lt;p&gt;All code presented in this article can be found &lt;a href="https://github.com/oracle-devrel/oracle-ai-developer-hub/blob/main/notebooks/fs_vs_dbs.ipynb" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding Agent Memory and Its Importance
&lt;/h2&gt;

&lt;p&gt;Let’s take the common use case of building a Research Assistant with Agentic capabilities.&lt;/p&gt;

&lt;p&gt;You build a Research Assistant agent that performs brilliantly in a demo: within a single run, it can search arXiv, summarize papers, and draft a clean answer. Then you come back the next morning, start a clean run, and prompt the agent: &lt;em&gt;“Continue from where we left off, and also compare Paper A to Paper B.”&lt;/em&gt; The agent responds as if it has never met you, because LLMs are inherently stateless. Unless you send prior context back in, the model has no durable awareness of what happened in previous turns or previous sessions.&lt;/p&gt;

&lt;p&gt;Once you move beyond single-turn Q&amp;amp;A into long-horizon tasks, deep research, multi-step workflows, and multi-agent coordination, you need a way to preserve continuity when the context window truncates, sessions restart, or multiple workers act on shared state. This takes us into the realm of leveraging systems of record for agents and introduces the concept of Agent Memory.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Stateless LLM Problem
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F02%2Ffs-db-2.drawio-7-scaled.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F02%2Ffs-db-2.drawio-7-scaled.png" title="Why your Research Assistant forgets everything between sessions?" alt="Why your Research Assistant forgets everything between sessions?" width="800" height="502"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Why your Research Assistant forgets everything between sessions&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What is Agent Memory?
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Agent memory is the set of system components and techniques that enable an AI agent to store, recall, and update information over time so it can adapt to new inputs and maintain continuity across long-horizon tasks.&lt;/strong&gt; Core components typically include the language and embedding model, information retrieval mechanisms, and a persistent storage layer such as a database.&lt;/p&gt;
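&lt;p&gt;A minimal sketch of those three operations follows. The class and method names are invented for illustration, and a plain in-memory dict stands in for the persistent storage layer.&lt;/p&gt;

```python
# Illustrative sketch only: a memory store with the three core operations
# from the definition above (store, recall, update). Names are invented.
import time
from dataclasses import dataclass, field


@dataclass
class MemoryRecord:
    key: str
    content: str
    updated_at: float = field(default_factory=time.time)


class MemoryStore:
    """Stand-in for a persistent layer; a real system would use a database."""

    def __init__(self) -> None:
        self._records: dict[str, MemoryRecord] = {}

    def store(self, key: str, content: str) -> None:
        self._records[key] = MemoryRecord(key, content)

    def recall(self, query: str) -> list[MemoryRecord]:
        # Naive keyword recall; at scale this is where vector similarity goes.
        return [r for r in self._records.values() if query.lower() in r.content.lower()]

    def update(self, key: str, content: str) -> None:
        # Overwrite and refresh the timestamp so the agent adapts over time.
        self._records[key] = MemoryRecord(key, content)
```

&lt;p&gt;The naive keyword &lt;code&gt;recall&lt;/code&gt; would be replaced by embedding-based similarity search once the knowledge base grows.&lt;/p&gt;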

&lt;h3&gt;
  
  
  Types of Agent Memory
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F02%2Ffs-db-Types-of-Agent-Memory.drawio-6-1024x764.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F02%2Ffs-db-Types-of-Agent-Memory.drawio-6-1024x764.png" title="Types of Agent Memory" alt="Types of Agent Memory" width="800" height="597"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Types of Agent Memory&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In practical systems, agent memory is usually classified into two distinct forms:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Short-term memory (working memory):&lt;/strong&gt; whatever is currently inside the context window.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Long-term memory:&lt;/strong&gt; a persistent state that survives beyond a single call or session (facts, artifacts, plans, prior decisions, tool outputs).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Concepts and techniques associated with agent memory all come together within the agent loop and the agent harness, as demonstrated in this &lt;a href="https://github.com/oracle-devrel/oracle-ai-developer-hub/blob/main/notebooks/fs_vs_dbs.ipynb" rel="noopener noreferrer"&gt;notebook&lt;/a&gt; and explained later in this article.&lt;/p&gt;

&lt;h3&gt;
  
  
  Agent Loop and Agent Harness
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;The agent loop is the iterative execution cycle in which an LLM receives instructions from the environment and decides whether to generate a response or make a tool call based on its internal reasoning about the input provided in the current loop.&lt;/strong&gt; This process repeats until the LLM produces a final output or an exit criterion is met. At a high level, the following operations are present within the agent loop:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Assemble context&lt;/strong&gt; (user request + relevant memory + tool json schemas).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Call the model&lt;/strong&gt; (plan, decide next action).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Take actions&lt;/strong&gt; (tools, search, code execution, database queries).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Observe results&lt;/strong&gt; (tool outputs, errors, intermediate artifacts).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Update memory&lt;/strong&gt; (write transcripts, store artifacts, summarize, index).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Repeat&lt;/strong&gt; until the task completes or hands control back to the user.&lt;/li&gt;
&lt;/ol&gt;
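&lt;p&gt;The six operations above can be sketched as a toy loop. Everything here is invented for illustration: the “model” is a stub that either requests a tool call or returns a final answer, and memory is a plain list updated each iteration.&lt;/p&gt;

```python
# Toy agent loop. The stub model and search tool are invented stand-ins;
# a real loop would call an LLM and real tools.

def stub_model(context: str) -> dict:
    # Stand-in for an LLM call: request a tool until a result appears in
    # the assembled context, then produce a final answer.
    if "RESULT:" not in context:
        return {"action": "tool", "tool": "search", "args": "agent memory"}
    return {"action": "final", "answer": "done"}

def search(query: str) -> str:
    # Invented tool: pretend to search arXiv.
    return f"RESULT: 3 papers about {query}"

def agent_loop(user_request: str, max_steps: int = 5) -> str:
    memory: list[str] = []                                 # durable-state stand-in
    for _ in range(max_steps):
        context = user_request + "\n" + "\n".join(memory)  # 1. assemble context
        decision = stub_model(context)                     # 2. call the model
        if decision["action"] == "tool":
            observation = search(decision["args"])         # 3. take an action
            memory.append(observation)                     # 4./5. observe + update memory
        else:
            return decision["answer"]                      # 6. task complete
    return "max steps reached"

print(agent_loop("find papers on agent memory"))
```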

&lt;p&gt;Anthropic’s &lt;a href="https://www.anthropic.com/engineering/effective-harnesses-for-long-running-agents" rel="noopener noreferrer"&gt;guidance&lt;/a&gt; on long-running agents directly points to this: they describe harness practices that help agents quickly re-understand the state of work when starting with a fresh context window, including maintaining explicit progress artifacts.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;agent harness&lt;/strong&gt; is the surrounding runtime and rules that make the loop reliable: how you wire tools, where you write artifacts, how you log/trace behavior, how you manage memory, and how you prevent the agent from drowning in context.&lt;/p&gt;

&lt;p&gt;To complete the picture, the discipline of context engineering is heavily involved in the agent loop and in aspects of the agent harness itself. &lt;strong&gt;Context engineering is the systematic design and curation of the content placed in an LLM’s context window so that the model receives high-signal tokens and produces the intended, reliable output within a fixed budget&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In this piece, we implement context engineering as a set of repeatable techniques inside the agent harness:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Context retrieval and selection:&lt;/strong&gt; Pull only what is relevant (via grep for filesystem memory, via vector similarity and SQL filters for database memory).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Progressive disclosure:&lt;/strong&gt; Start small (snippets, tails, line ranges) and expand only when needed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Context offloading:&lt;/strong&gt; Write large tool outputs and artifacts outside the prompt, then reload selectively.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Context reduction:&lt;/strong&gt; Summarize or compact information when you approach a degradation threshold, then store the summary in durable memory so you can rehydrate later.&lt;/li&gt;
&lt;/ul&gt;
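&lt;p&gt;As a hedged example of the last technique, context reduction can be driven by a token-budget check. The roughly-4-characters-per-token heuristic and the 80% threshold below are assumptions for illustration, not the notebook's exact implementation.&lt;/p&gt;

```python
# Illustrative sketch of context monitoring + reduction. The budget,
# threshold, and chars-per-token heuristic are assumed values.

CONTEXT_LIMIT_TOKENS = 8_000   # assumed context budget
COMPACT_AT = 0.8               # summarize once 80% of the budget is used

def estimate_tokens(messages: list[str]) -> int:
    # Rough heuristic: ~4 characters per token.
    return sum(len(m) for m in messages) // 4

def maybe_compact(messages: list[str]) -> list[str]:
    if estimate_tokens(messages) < int(CONTEXT_LIMIT_TOKENS * COMPACT_AT):
        return messages
    # Context reduction: replace older turns with a compact summary and keep
    # only the most recent messages verbatim. A real harness would generate
    # the summary with an LLM and persist it to durable memory for rehydration.
    summary = f"[summary of {len(messages) - 2} earlier messages]"
    return [summary] + messages[-2:]
```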

&lt;p&gt;The concepts and explanations above set us up for the rest of the comparison we introduce in this piece. Now that we have the “why” and the moving parts (stateless models, the agent loop, the agent harness, and memory), we can evaluate the two dominant substrates teams are using today to make memory real: the filesystem and the database.&lt;/p&gt;

&lt;h2&gt;
  
  
  Filesystem-first Agentic Research Assistant
&lt;/h2&gt;

&lt;p&gt;A filesystem-based memory architecture does not mean “the agent remembers everything forever”. It means the agent can persist state and artifacts outside the context window and then pull them back selectively when needed. This aligns with two of the earlier-mentioned LLM constraints: a limited context window and statelessness.&lt;/p&gt;

&lt;p&gt;In our Research Assistant, the filesystem becomes the memory substrate. Rather than injecting a large number of tools and extensive documentation into the LLM's context window (which would inflate the token count and trigger early summarization), we store them on disk and let the agent search and selectively read what it needs. This matches what the Applied AI team at Cursor calls “&lt;a href="https://cursor.com/blog/dynamic-context-discovery" rel="noopener noreferrer"&gt;Dynamic Context Discovery&lt;/a&gt;”: write large output to files, then let the agent &lt;code&gt;tail&lt;/code&gt; and read ranges as required.&lt;/p&gt;

&lt;p&gt;Our FSAgent and &lt;a href="https://github.com/oracle-devrel/oracle-ai-developer-hub/blob/main/notebooks/fs_vs_dbs.ipynb" rel="noopener noreferrer"&gt;demo&lt;/a&gt; use standard filesystem operations (such as &lt;code&gt;tail&lt;/code&gt; and &lt;code&gt;cat&lt;/code&gt;) to read the contents of files. This is a deliberately simplified approach, with a limited number of operations for demonstration purposes; the capabilities offered by the filesystem can be extended and optimized with other commands and implementations.&lt;/p&gt;

&lt;p&gt;Even so, it is a great starting point for getting familiar with tool access and how filesystem memory is achieved.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F02%2Fimage-10.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F02%2Fimage-10.png" alt="Filesystem-first agent memory architecture with semantic, episodic, and procedural memory layers" width="610" height="545"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Semantic memory (durable knowledge):&lt;/strong&gt; papers and reference docs saved as markdown.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Episodic memory (experience):&lt;/strong&gt; conversation transcripts + tool outputs per session/run.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Procedural memory (how to work):&lt;/strong&gt; “rules” / instructions files (e.g., CLAUDE.md / AGENTS.md) that shape behavior across sessions.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  What does this look like in tooling?
&lt;/h3&gt;

&lt;p&gt;Before we jump into the code, here’s the minimal tool surface we provide to the agent in the table below. Notice the pattern: instead of inventing specialized “memory APIs,” we expose a small set of filesystem primitives and let the agent compose them (very Unix).&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;What it does&lt;/th&gt;
&lt;th&gt;Output&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;arxiv_search_candidates(query, k=5)&lt;/td&gt;
&lt;td&gt;Searches arXiv and returns a JSON list of candidate papers with IDs, titles, authors, and abstracts.&lt;/td&gt;
&lt;td&gt;JSON string of paper candidates&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;fetch_and_save_paper(arxiv_id)&lt;/td&gt;
&lt;td&gt;Fetches full paper text (PDF → text) and saves to &lt;code&gt;semantic/knowledge_base/&amp;lt;id&amp;gt;.md&lt;/code&gt;. Avoids routing full content through the LLM.&lt;/td&gt;
&lt;td&gt;File path&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;read_file(path)&lt;/td&gt;
&lt;td&gt;Reads a file from disk and returns its contents in full (use sparingly).&lt;/td&gt;
&lt;td&gt;Full file contents&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;tail_file(path, n_lines=80)&lt;/td&gt;
&lt;td&gt;Reads the last N lines of a file (first step for large files).&lt;/td&gt;
&lt;td&gt;Last N lines&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;read_file_range(path, start_line, end_line)&lt;/td&gt;
&lt;td&gt;Reads a line range to “zoom in” without loading everything.&lt;/td&gt;
&lt;td&gt;Selected line range&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;grep_files(pattern, root_dir, file_glob)&lt;/td&gt;
&lt;td&gt;Grep-like search across files to find relevant passages quickly.&lt;/td&gt;
&lt;td&gt;Matches with file path + line number&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;list_papers()&lt;/td&gt;
&lt;td&gt;Lists all locally saved papers in &lt;code&gt;semantic/knowledge_base/&lt;/code&gt;.&lt;/td&gt;
&lt;td&gt;List of filenames&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;conversation_to_file(run_id, messages)&lt;/td&gt;
&lt;td&gt;Appends conversation entries to one transcript file per run in &lt;code&gt;episodic/conversations/&lt;/code&gt;.&lt;/td&gt;
&lt;td&gt;File path&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;summarise_conversation_to_file(run_id, messages)&lt;/td&gt;
&lt;td&gt;Saves full transcript, then writes a compact summary to &lt;code&gt;episodic/summaries/&lt;/code&gt;.&lt;/td&gt;
&lt;td&gt;Dict with transcript + summary paths&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;monitor_context_window(messages)&lt;/td&gt;
&lt;td&gt;Estimates current context usage (tokens used/remaining).&lt;/td&gt;
&lt;td&gt;Dict with token stats&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;This design directly reflects what the AI ecosystem is converging on: a filesystem and a handful of core tools, rather than an explosion of bespoke tools.&lt;/p&gt;
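&lt;p&gt;As an illustration of how small these primitives can be, here is one possible implementation of the &lt;code&gt;grep_files&lt;/code&gt; tool from the table; the notebook's actual version may differ.&lt;/p&gt;

```python
# One possible sketch of the grep_files primitive: regex search across
# files under a root directory, returning "path:line_no: matched line"
# strings so the agent can follow up with a ranged read.
import re
from pathlib import Path

def grep_files(pattern: str, root_dir: str, file_glob: str = "*.md") -> list[str]:
    """Grep-like search across files matching file_glob under root_dir."""
    rx = re.compile(pattern)
    hits: list[str] = []
    for path in sorted(Path(root_dir).rglob(file_glob)):
        for line_no, line in enumerate(
            path.read_text(encoding="utf-8").splitlines(), start=1
        ):
            if rx.search(line):
                hits.append(f"{path}:{line_no}: {line.strip()}")
    return hits
```

&lt;p&gt;The path-plus-line-number output is what lets the agent chain this with &lt;code&gt;read_file_range&lt;/code&gt; to zoom in without loading whole files.&lt;/p&gt;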

&lt;h3&gt;
  
  
  Progressive reading (read, tail, range)
&lt;/h3&gt;

&lt;p&gt;The first memory principle is simple to implement: &lt;strong&gt;don’t load large files unless you must&lt;/strong&gt;. Filesystems are excellent at sequential read/write and work naturally with tools like &lt;code&gt;grep&lt;/code&gt; and log-style access. This makes them a strong fit for append-only transcript and artifact storage.&lt;/p&gt;

&lt;p&gt;That’s why we implement three reading tools:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Read everything (rare),&lt;/li&gt;
&lt;li&gt;Read the end (common for logs/transcripts)&lt;/li&gt;
&lt;li&gt;Read a slice (common for zooming into a match)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The tools below are implemented in Python and converted into objects callable by a LangChain agent using the &lt;code&gt;@tool&lt;/code&gt; decorator from &lt;code&gt;langchain_core.tools&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;First is the &lt;code&gt;read_file&lt;/code&gt; tool, the “load it all” option. This tool is useful when the file is small, or you truly need the full artifact, but it’s intentionally not the default because it can quickly consume the context window.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nd"&gt;@tool&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;read_file&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
 &lt;span class="n"&gt;p&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Path&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;exists&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
 &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;File not found: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
 &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read_text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;encoding&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;utf-8&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;tail_file&lt;/code&gt; function is the first step for large files. It grabs the end of a log/transcript to quickly see the latest or most relevant portion before deciding whether to read more.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nd"&gt;@tool&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;tail_file&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;n_lines&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
 &lt;span class="n"&gt;p&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Path&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;exists&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
 &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;File not found: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
 &lt;span class="n"&gt;lines&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read_text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;encoding&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;utf-8&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;splitlines&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
 &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;lines&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nf"&gt;max&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;n_lines&lt;/span&gt;&lt;span class="p"&gt;):])&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;read_file_range&lt;/code&gt; function is the surgical tool: once you’ve located the right region (often via &lt;code&gt;grep&lt;/code&gt; or after a &lt;code&gt;tail&lt;/code&gt;), it pulls in exactly the line span you need, so the agent stays token-efficient and grounded.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nd"&gt;@tool&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;read_file_range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;start_line&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;end_line&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
 &lt;span class="n"&gt;p&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Path&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;exists&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
 &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;File not found: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
 &lt;span class="n"&gt;lines&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read_text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;encoding&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;utf-8&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;splitlines&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
 &lt;span class="n"&gt;start&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;max&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;start_line&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="n"&gt;end&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;min&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;lines&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;end_line&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;start&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="n"&gt;end&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
 &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Empty range: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;start_line&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;:&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;end_line&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; (file has &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;lines&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; lines)&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
 &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;lines&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;start&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;end&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Again, this is essentially dynamic context discovery in a microcosm: load a small view first, then expand only when needed.&lt;/p&gt;
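&lt;p&gt;As a quick illustration of that “small view first” loop, here is a self-contained sketch using plain (undecorated) versions of the two helpers on a synthetic log:&lt;/p&gt;

```python
from pathlib import Path
import tempfile

def tail_lines(path: str, n_lines: int = 5) -> str:
    # Load only the last n lines -- the cheap "small view" first.
    lines = Path(path).read_text(encoding="utf-8").splitlines()
    return "\n".join(lines[-max(1, n_lines):])

def read_range(path: str, start: int, end: int) -> str:
    # Expand to an exact span only once we know it is relevant.
    lines = Path(path).read_text(encoding="utf-8").splitlines()
    return "\n".join(lines[max(0, start):min(len(lines), end)])

# Demo on a synthetic 100-line "log"
with tempfile.NamedTemporaryFile("w", suffix=".log", delete=False) as f:
    f.write("\n".join(f"line {i}" for i in range(100)))

print(tail_lines(f.name, 3))       # peek at the end first
print(read_range(f.name, 10, 13))  # then pull an exact span
```

The agent version works the same way; the only difference is the &lt;code&gt;@tool&lt;/code&gt; wrapper that exposes these functions to the model.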

&lt;h3&gt;
  
  
  Grep-style search (find first, read second)
&lt;/h3&gt;

&lt;p&gt;A filesystem-based agent should quickly find relevant material and pull only the exact slices it needs. This is why &lt;code&gt;grep&lt;/code&gt; is such a recurring theme in the agent tooling conversation: it gives the model a fast way to locate relevant regions before spending tokens to pull content.&lt;/p&gt;

&lt;p&gt;Here’s a simple grep-like tool that returns line-numbered hits so the agent can immediately jump to &lt;code&gt;read_file_range&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nd"&gt;@tool&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;grep_files&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
&lt;span class="n"&gt;pattern&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="n"&gt;root_dir&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;semantic&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="n"&gt;file_glob&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;**/*.md&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="n"&gt;max_matches&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="n"&gt;ignore_case&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;bool&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;

&lt;span class="n"&gt;root&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Path&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;root_dir&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;root&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;exists&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
&lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Directory not found: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;root_dir&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="n"&gt;flags&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;re&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;IGNORECASE&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;ignore_case&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
&lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="n"&gt;rx&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;re&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;compile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pattern&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;flags&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="n"&gt;re&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;error&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Invalid regex pattern: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="n"&gt;matches&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;fp&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;root&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;glob&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;file_glob&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;fp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;is_file&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
&lt;span class="k"&gt;continue&lt;/span&gt;
&lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;fp&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;r&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;encoding&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;utf-8&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;errors&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;ignore&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;line&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;enumerate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;start&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;rx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;search&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;line&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
&lt;span class="n"&gt;matches&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;fp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;as_posix&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;:&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;line&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;strip&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;matches&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="n"&gt;max_matches&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;matches&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="s"&gt;[TRUNCATED: max_matches reached]&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="k"&gt;continue&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;matches&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;No matches found.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;matches&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One subtle but important detail in our &lt;code&gt;grep_files&lt;/code&gt; implementation is how we read files. Rather than loading entire files into memory with &lt;code&gt;read_text().splitlines()&lt;/code&gt;, we iterate lazily over the open file handle, which streams one line at a time and keeps memory usage constant regardless of file size.&lt;/p&gt;

&lt;p&gt;This aligns with the "find first, read second" philosophy: locate what you need without loading everything upfront. For readers interested in maximum performance, the &lt;a href="https://github.com/oracle-devrel/oracle-ai-developer-hub/blob/main/notebooks/fs_vs_dbs.ipynb" rel="noopener noreferrer"&gt;full notebook&lt;/a&gt; also includes a &lt;code&gt;grep_files_os_based&lt;/code&gt; variant that shells out to ripgrep or grep, leveraging OS-level optimizations like memory-mapped I/O and SIMD instructions. In practice, this pattern (“search first, then read a range”) is one reason filesystem agents can feel surprisingly strong on focused corpora: the agent iteratively narrows the context instead of relying on a single-shot retrieval query.&lt;/p&gt;
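&lt;p&gt;The OS-based variant itself lives in the notebook; a rough, hypothetical sketch of the idea (the function name and fallback logic here are ours, not the notebook’s) looks like this:&lt;/p&gt;

```python
import shutil
import subprocess

def grep_os_based(pattern: str, root_dir: str = ".") -> str:
    # Hypothetical sketch: prefer ripgrep's fast, line-numbered output,
    # fall back to plain grep if rg is not installed.
    if shutil.which("rg"):
        cmd = ["rg", "--line-number", "--no-heading", pattern, root_dir]
    elif shutil.which("grep"):
        cmd = ["grep", "-rn", pattern, root_dir]
    else:
        return "Neither ripgrep nor grep found on PATH."
    # Both tools exit with status 1 when nothing matches, so we read
    # stdout rather than treating a non-zero exit as an error.
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return proc.stdout.strip() or "No matches found."
```

The output format (&lt;code&gt;path:line: text&lt;/code&gt;) deliberately matches the pure-Python &lt;code&gt;grep_files&lt;/code&gt; above, so the agent can feed either result straight into &lt;code&gt;read_file_range&lt;/code&gt;.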

&lt;h3&gt;
  
  
  Tool outputs as files: keeping big JSON out of the prompt
&lt;/h3&gt;

&lt;p&gt;One of the fastest ways to blow up your context window is to return large JSON payloads from tools. &lt;a href="https://cursor.com/blog/dynamic-context-discovery" rel="noopener noreferrer"&gt;Cursor’s approach&lt;/a&gt; is to write these results to files and let the agent inspect them on demand (often starting with tail).&lt;/p&gt;

&lt;p&gt;That’s exactly why our folder structure includes a &lt;code&gt;tool_outputs/&amp;lt;session_id&amp;gt;/&lt;/code&gt; directory: it acts like an “evidence locker” for everything the agent did, without forcing those payloads into the current context.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
 &lt;/span&gt;&lt;span class="nl"&gt;"ts_utc"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2026-01-27T12:41:12.135396+00:00"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
 &lt;/span&gt;&lt;span class="nl"&gt;"tool"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arxiv_search_candidates"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
 &lt;/span&gt;&lt;span class="nl"&gt;"input"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"{'query': 'memgpt'}"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
 &lt;/span&gt;&lt;span class="nl"&gt;"output"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"content='[&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;n {&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;n &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;arxiv_id&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;2310.08560v2&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;,&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;n &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;entry_id&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;http://arxiv.org/abs/2310.08560v2&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;,&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;n &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;title&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;MemGPT: Towards LLMs as Operating Systems&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;,&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;n &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;authors&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Charles Packer, Sarah Wooders, Kevin Lin, Vivian Fang, Shishir G. Patil, Ion Stoica, Joseph E. 
Gonzalez&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;,&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;n &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;published&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;2024-02-12&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;,&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;n &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;abstract&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: ...msPnaMxOl8Pa'"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
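&lt;p&gt;A minimal helper in this spirit (names and file layout are illustrative, not the article’s actual implementation) writes each payload under &lt;code&gt;tool_outputs/&amp;lt;session_id&amp;gt;/&lt;/code&gt; and returns only a short pointer for the model to see:&lt;/p&gt;

```python
import json
import tempfile
from datetime import datetime, timezone
from pathlib import Path

def save_tool_output(base_dir: str, session_id: str, tool: str, payload: dict) -> str:
    # Persist the full payload as a JSON file; return only a small pointer
    # string, so the large result never enters the model's context window.
    out_dir = Path(base_dir) / "tool_outputs" / session_id
    out_dir.mkdir(parents=True, exist_ok=True)
    record = {
        "ts_utc": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "output": payload,
    }
    # Sequence number keeps files ordered per session
    path = out_dir / f"{tool}_{len(list(out_dir.glob('*.json'))):04d}.json"
    path.write_text(json.dumps(record, indent=1), encoding="utf-8")
    return f"[saved {tool} output to {path.as_posix()}]"

base = tempfile.mkdtemp()
pointer = save_tool_output(base, "fsagent_session_0010", "arxiv_search_candidates",
                           {"results": ["..."] * 50})
print(pointer)  # the agent sees this one line, not the 50-entry payload
```

If the agent later needs the details, it can &lt;code&gt;tail&lt;/code&gt;, grep, or range-read that JSON file like any other artifact.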



&lt;h3&gt;
  
  
  Putting it together: the agent toolset
&lt;/h3&gt;

&lt;p&gt;Before we create the agent, we bundle the tools into a small, composable toolbox. This matches a broader trend: agents often perform better with a smaller tool surface, which means less choice paralysis (also called context confusion), fewer overlapping tool schemas, and more reliance on proven filesystem workflows.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;FS_TOOLS&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
 &lt;span class="n"&gt;arxiv_search_candidates&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;# search arXiv for relevant research papers
&lt;/span&gt; &lt;span class="n"&gt;fetch_and_save_paper&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;# fetch paper text (PDF-&amp;gt;text) and save to semantic/knowledge_base/&amp;lt;id&amp;gt;.md
&lt;/span&gt; &lt;span class="n"&gt;read_file&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;# read a file in full (use sparingly)
&lt;/span&gt; &lt;span class="n"&gt;tail_file&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;# read end of file first
&lt;/span&gt; &lt;span class="n"&gt;read_file_range&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;# read a specific line range
&lt;/span&gt; &lt;span class="n"&gt;conversation_to_file&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;# append conversation entries to episodic memory
&lt;/span&gt; &lt;span class="n"&gt;summarise_conversation_to_file&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;# save transcript + compact summary
&lt;/span&gt; &lt;span class="n"&gt;monitor_context_window&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;# estimate token usage
&lt;/span&gt; &lt;span class="n"&gt;list_papers&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;# list saved papers
&lt;/span&gt; &lt;span class="n"&gt;grep_files&lt;/span&gt; &lt;span class="c1"&gt;# grep-like search over files
&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  The “filesystem-first” system prompt: policy beats cleverness
&lt;/h3&gt;

&lt;p&gt;Filesystem tools alone aren’t enough; you also need &lt;strong&gt;a reading policy&lt;/strong&gt; that keeps the agent's token usage efficient and grounded. This is the same reason &lt;code&gt;CLAUDE.md&lt;/code&gt;, &lt;code&gt;AGENTS.md&lt;/code&gt;, and &lt;code&gt;SKILLS.md&lt;/code&gt; matter: they’re procedural memory that is applied consistently across sessions.&lt;/p&gt;

&lt;p&gt;Key policies we encode below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Store big artifacts on disk (papers, tool outputs, transcripts).&lt;/li&gt;
&lt;li&gt;Prefer grep + range reads over full reads.&lt;/li&gt;
&lt;li&gt;Use tail first for large files and logs.&lt;/li&gt;
&lt;li&gt;Be explicit about what you actually read (grounding).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Below is the implementation of an agent using the LangChain framework.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;fs_agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;create_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
 &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;openai:&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getenv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;OPENAI_MODEL&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;gpt-4o-mini&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;FS_TOOLS&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="n"&gt;system_prompt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
 &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;You are a conversational research ingestion agent.&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
 &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Core behavior:&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
 &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;- When asked to find a paper: use arxiv_search_candidates, pick the best arxiv_id, &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
 &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;then call fetch_and_save_paper to store the full text in semantic/knowledge_base/.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
 &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;- Papers/knowledge base live in semantic/knowledge_base/.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
 &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;- Conversations (transcripts) live in episodic/conversations/ (one file per run).&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
 &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;- Summaries live in episodic/summaries/.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
 &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;- Conversation may be summarised externally; respect summary + transcript references.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
 &lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  What the memory footprint looks like on disk
&lt;/h3&gt;

&lt;p&gt;After running the agent, you end up with a directory layout that makes the agent’s “memory” tangible and inspectable. In our example, the agent produces:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;episodic/conversations/fsagent_session_0010.md&lt;/code&gt; — the session transcript (episodic memory)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;episodic/tool_outputs/fsagent_session_0010/*.json&lt;/code&gt; — tool results saved as files (evidence + replay)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;semantic/knowledge_base/*.md&lt;/code&gt; — saved papers (semantic memory)&lt;/li&gt;
&lt;/ul&gt;
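&lt;p&gt;To make that footprint concrete, here is a small sketch that recreates the layout in a scratch directory and lists it (the individual file names under &lt;code&gt;tool_outputs/&lt;/code&gt; are placeholders):&lt;/p&gt;

```python
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())
# Recreate the example footprint from the run above
for rel in [
    "episodic/conversations/fsagent_session_0010.md",
    "episodic/tool_outputs/fsagent_session_0010/step_0001.json",
    "semantic/knowledge_base/2310.08560v2.md",
]:
    p = root / rel
    p.parent.mkdir(parents=True, exist_ok=True)
    p.touch()

# Everything the agent "remembers" is an ordinary, inspectable file
listing = sorted(p.relative_to(root).as_posix() for p in root.rglob("*") if p.is_file())
for entry in listing:
    print(entry)
```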

&lt;p&gt;That is &lt;em&gt;exactly&lt;/em&gt; the point of filesystem-first memory: the model doesn’t “remember” by magically retaining state; it “remembers” because it can re-open, search, and selectively read its prior artifacts.&lt;/p&gt;

&lt;p&gt;This is also why so many teams keep rediscovering the same pattern: files are a simple abstraction, and agents are surprisingly good at using them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Advantages of Filesystems In AI Agents
&lt;/h2&gt;

&lt;p&gt;In the previous section, we showed what a filesystem‑first memory harness looks like in practice: the agent writes durable artifacts (papers, tool outputs, transcripts) to disk, then “remembers” by searching and selectively reading only the parts it needs.&lt;/p&gt;

&lt;p&gt;This approach works because it directly addresses two core constraints of LLMs: limited context windows and inherent statelessness. Once those constraints are handled, it becomes clear why file systems so often become the default interface for early agent systems.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Pretraining‑native interface:&lt;/strong&gt; LLMs have ingested massive amounts of repos, docs, logs, and README‑driven workflows, so folders and files are a familiar operating surface.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Simple primitives, strong composition:&lt;/strong&gt; A small action set (list/read/write/search) composes into sophisticated behavior without needing schemas, migrations, or query planning.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Token efficiency via progressive disclosure:&lt;/strong&gt; Retrieve via search, then load a small slice (snippets, line ranges) instead of dumping entire documents into the prompt.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Natural home for artifacts and evidence:&lt;/strong&gt; Transcripts, intermediate results, cached documents, and tool outputs fit cleanly as files and remain human‑inspectable.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Debuggable by default:&lt;/strong&gt; You can open the directory and see exactly what the agent saved, what tools returned, and what the agent could have referenced.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Portability:&lt;/strong&gt; A folder is easy to copy, zip, diff, version, and replay elsewhere, great for demos, reproducibility, and handoffs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Low operational overhead:&lt;/strong&gt; For PoCs and MVPs, you get persistence and structure without provisioning extra infrastructure.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In practice, filesystem memory excels when the workload is artifact‑heavy (research notes, paper dumps, transcripts), when you want a clear audit trail, and when iteration speed matters more than sophisticated retrieval. It also encourages good agent hygiene: write outputs down, cite sources, and load only what you need.&lt;/p&gt;

&lt;h2&gt;
  
  
  Disadvantages of Filesystems In AI Agents
&lt;/h2&gt;

&lt;p&gt;Unfortunately, it doesn’t end there. The same strengths that make files attractive (simplicity, relatively low cost, and fast implementation) can quickly become bottlenecks once you promote these systems into production, where they are expected to behave like a shared, reliable memory platform.&lt;/p&gt;

&lt;p&gt;As soon as an agent moves beyond single-user prototypes into real-world scenarios, where concurrent reads and writes are the norm and robustness under load is non-negotiable, filesystems start to show their limits.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Weak concurrency guarantees by default:&lt;/strong&gt; Multiple processes can overwrite or interleave writes unless you implement locking correctly. Even then, locking semantics vary across platforms and network filesystems.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No ACID transactions:&lt;/strong&gt; You don’t get atomic multi-step updates, isolation between writers, or durable commit semantics without building them. Partial writes and mid-operation failures can leave memory in inconsistent states.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Search quality is usually brittle:&lt;/strong&gt; Keyword/grep-style retrieval misses meaning, synonyms, and paraphrases.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scaling becomes “death by a thousand files”:&lt;/strong&gt; Directory bloat, fragmented artifacts, and expensive scans make performance degrade as memory grows, especially if you rely on repeated full-folder searches.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Indexing is DIY:&lt;/strong&gt; The moment you want fast retrieval, deduplication, ranking, or recency weighting, you end up maintaining your own indexes and metadata stores (which, to be honest, is basically a database).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Metadata and schema drift:&lt;/strong&gt; Agents inevitably accumulate extra fields (source URLs, timestamps, embeddings, tags). Keeping those consistent across files is harder than enforcing constraints in tables.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Poor multi-user / multi-agent coordination:&lt;/strong&gt; Shared memory across agents means shared state. Without a central coordinator, you’ll hit race conditions, inconsistent views, and an unclear “source of truth.”&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Harder auditing at scale:&lt;/strong&gt; Files are human-readable, but reconstructing “what happened” across many runs and threads becomes messy without structured logs, timestamps, and queryable history.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security and access control are coarse:&lt;/strong&gt; Permissions are filesystem-level, not row-level. It’s hard to enforce “agent A can read X but not Y” without duplicating data or adding an auth layer.&lt;/li&gt;
&lt;/ul&gt;
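&lt;p&gt;To make the atomicity gap concrete, here is a minimal sketch of the kind of workaround filesystem memory forces you to build yourself: an atomic single-file write via a temporary file and rename. The function name and payload are illustrative. Note that this protects only one file at a time; it gives you none of the multi-statement transactions or writer isolation a database provides.&lt;/p&gt;

```python
import json
import os
import tempfile

def atomic_write_json(path, payload):
    """Write JSON so readers never observe a half-written file.

    A plain open(path, "w") can leave partial content behind if the
    process dies mid-write. Writing to a temp file in the same
    directory and renaming it into place is atomic on POSIX
    filesystems, so readers see either the old file or the new one.
    """
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as handle:
            json.dump(payload, handle)
            handle.flush()
            os.fsync(handle.fileno())  # force bytes to disk before the rename
        os.replace(tmp_path, path)     # atomic swap into place
    except BaseException:
        os.remove(tmp_path)            # clean up the temp file on failure
        raise

# Hypothetical memory artifact for one agent thread.
atomic_write_json("memory.json", {"thread": "t1", "notes": ["saved"]})
```

&lt;p&gt;Even this careful version says nothing about two processes calling it concurrently on related files, which is exactly where database transactions take over.&lt;/p&gt;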

&lt;p&gt;The core pattern is that filesystem memory stays attractive until you need correctness under concurrency, semantic retrieval, or structured guarantees. At that point, you either accept the limitations (and keep the agent single-user/single-process) or you adopt a database.&lt;/p&gt;

&lt;h2&gt;
  
  
  Database For Agent Memory
&lt;/h2&gt;

&lt;p&gt;By this point, most AI developers can see why filesystem-first agent implementations are having a moment. The filesystem is a familiar interface, easy to prototype with, and our agents can “remember” by writing artifacts to disk and reloading them later via search plus selective reads. For a single developer on a laptop, that is often enough. But once we move beyond “it works on my laptop” and start supporting developers who ship to thousands or millions of users, memory stops being a folder of helpful files and becomes a shared system that has to behave predictably under load.&lt;/p&gt;

&lt;p&gt;Databases were created for the exact moment when “a pile of files” stops being good enough because too many people and processes are touching the same data. One of the &lt;a href="https://www.ibm.com/docs/en/zos-basic-skills?topic=now-history-ims-beginnings-nasa" rel="noopener noreferrer"&gt;most-cited&lt;/a&gt; origin stories of the database dates to the Apollo era. IBM, alongside partners, built what became IMS to manage complex operational data for the program, and early versions were installed in 1968 at the Rockwell Space Division, supporting NASA. The point was not simply storage. It was coordination, correctness, and the ability to trust shared data while many activities were happening simultaneously.&lt;/p&gt;

&lt;p&gt;That same production reality is what pushes agent memory toward databases today.&lt;/p&gt;

&lt;p&gt;When agent memory must handle concurrent reads and writes, preserve an auditable history of what happened, support fast retrieval across many sessions, and enforce consistent updates, we want database guarantees rather than best-effort file conventions.&lt;/p&gt;

&lt;p&gt;Oracle has been solving these exact problems since 1979, when we shipped the first commercial SQL database. The goal then was the same as now: make shared state reliable, portable, and trustworthy under load.&lt;/p&gt;

&lt;p&gt;On that note, allow us to show how this can work in practice.&lt;/p&gt;

&lt;h2&gt;
  
  
  Database-first Research Assistant
&lt;/h2&gt;

&lt;p&gt;In the filesystem-first section, our Research Assistant “remembered” by writing artifacts to disk and reloading them later using cheap search plus selective reads. That is a great starting point. But when we want memory that is shared, queryable, and reliable under concurrent use, we need a different foundation.&lt;/p&gt;

&lt;p&gt;In this iteration of our agent, we keep the same user experience and the same high-level job. Search arXiv, ingest papers, answer follow-up questions, and maintain continuity across sessions. The difference is that memory now lives in the Oracle AI Database, where we can make it durable, indexed, filterable, and safe for concurrent reads and writes. We also achieve a clean separation between two memory surfaces: structured history in SQL tables and semantic recall via vector search.&lt;/p&gt;

&lt;p&gt;The result is what we call a MemAgent, an agent whose memory is not a folder of artifacts, but a queryable system. It is designed to support multi-threaded sessions, store full conversational history, store tool logs for debugging and auditing, and store a semantic knowledge base that can be searched by meaning rather than keywords.&lt;/p&gt;

&lt;h3&gt;
  
  
  Available tools for MemAgent
&lt;/h3&gt;

&lt;p&gt;Before we wire up the agent loop, we need to define the tool surface that MemAgent can use to reason, retrieve, and persist knowledge. The design goal here is similar to the filesystem-first approach: keep the toolset small and composable, but shift the memory substrate from files to the database. Instead of grepping folders and reading line ranges, MemAgent uses vector similarity search to retrieve semantically relevant context, and it persists what it learns in a way that is queryable and reliable across sessions.&lt;/p&gt;

&lt;p&gt;In practice, that means two things.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;First, ingestion tools do not just “fetch” content; they also chunk and embed it so it becomes searchable later.&lt;/li&gt;
&lt;li&gt;Second, retrieval tools are meaning-based rather than keyword-based, so the agent can find relevant passages even when the user paraphrases, uses synonyms, or asks higher-level conceptual questions.&lt;/li&gt;
&lt;/ol&gt;
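&lt;p&gt;As a rough sketch of the first step, the helper below shows one common way to split text into fixed-size, overlapping windows before embedding. The function and its parameter values are illustrative, not part of the MemAgent toolset; overlap keeps passages that straddle a chunk boundary retrievable from both sides.&lt;/p&gt;

```python
def chunk_text(text, size=800, overlap=100):
    """Split text into overlapping, fixed-size chunks for embedding.

    The sizes here are illustrative; in practice you tune them to the
    embedding model's input limit and your retrieval granularity.
    """
    step = size - overlap
    chunks = []
    for start in range(0, max(len(text), 1), step):
        piece = text[start:start + size]
        if piece:
            chunks.append(piece)
    return chunks

# A hypothetical 2000-character document.
document = "".join(str(i % 10) for i in range(2000))
parts = chunk_text(document)
# Each chunk shares its last 100 characters with the next chunk's first 100.
```

&lt;p&gt;Each chunk would then be embedded and stored with its metadata, which is what &lt;code&gt;fetch_and_save_paper_to_kb_db&lt;/code&gt; does for arXiv papers.&lt;/p&gt;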

&lt;p&gt;The table below summarizes the minimal set of tools we expose to MemAgent and where each tool stores its outputs.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;What it does&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;arxiv_search_candidates(query, k)&lt;/td&gt;
&lt;td&gt;Searches arXiv for candidate papers&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;fetch_and_save_paper_to_kb_db(arxiv_id)&lt;/td&gt;
&lt;td&gt;Fetches paper, chunks text, stores embeddings&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;search_knowledge_base(query, k)&lt;/td&gt;
&lt;td&gt;Semantic search over stored papers&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;store_to_knowledge_base(text, metadata)&lt;/td&gt;
&lt;td&gt;Manually store text with metadata&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;FSAgent and MemAgent can look similar from the outside because both can ingest papers, answer questions, and maintain continuity. The difference is what powers that continuity and how retrieval works when the system grows.&lt;/p&gt;

&lt;p&gt;FSAgent relies on the operating system as its memory surface, which is great for iteration speed and human inspectability, but it typically relies on keyword-style discovery and file traversal. MemAgent treats memory as a database concern, which adds setup overhead, but unlocks indexed retrieval, stronger guarantees under concurrency, and richer ways to query and filter what the agent has learned.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Aspect&lt;/th&gt;
&lt;th&gt;FSAgent (Filesystem)&lt;/th&gt;
&lt;th&gt;MemAgent (Database)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Search&lt;/td&gt;
&lt;td&gt;Keyword and grep&lt;/td&gt;
&lt;td&gt;Semantic similarity&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Persistence&lt;/td&gt;
&lt;td&gt;Markdown files&lt;/td&gt;
&lt;td&gt;SQL tables + vector indexes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Scalability&lt;/td&gt;
&lt;td&gt;Directory traversal&lt;/td&gt;
&lt;td&gt;Indexed queries&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Query Language&lt;/td&gt;
&lt;td&gt;Paths and regex&lt;/td&gt;
&lt;td&gt;SQL + vector similarity&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Setup Complexity&lt;/td&gt;
&lt;td&gt;Minimal&lt;/td&gt;
&lt;td&gt;Requires database runtime&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Creating data stores with LangChain and Oracle AI Database
&lt;/h3&gt;

&lt;p&gt;Before we start defining tables and vector stores, it is worth being explicit about the stack we are using and why. In this implementation, we are not building a bespoke agent framework from scratch.&lt;/p&gt;

&lt;p&gt;We use LangChain as the LLM framework to abstract the agent loop, tool calling, and message handling, then pair it with a model provider for reasoning and generation, and with Oracle AI Database as the unified memory core that stores both structured history and semantic embeddings.&lt;/p&gt;

&lt;p&gt;This separation is important because it mirrors how production agent systems are typically built. The agent logic evolves quickly, the model can be swapped, and the memory layer must remain reliable and queryable.&lt;/p&gt;

&lt;p&gt;Think of this as the agent stack. Each layer has a clear job, and together they create an agent that is both practical to build and robust enough to scale.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Model provider (OpenAI):&lt;/strong&gt; generates reasoning, responses, and tool decisions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LLM framework (LangChain):&lt;/strong&gt; provides the agent abstraction, tool wiring, and runtime orchestration.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Unified memory core (Oracle AI Database):&lt;/strong&gt; stores durable conversational memory in SQL and semantic memory in vector indexes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With that stack in place, the first step is simply to connect to the Oracle Database and initialize an embedding model. The database connection serves as the foundation for all memory operations, and the embedding model enables us to store and retrieve knowledge semantically through the vector store layer.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;connect_oracle&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;password&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;dsn&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;127.0.0.1:1521/FREEPDB1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;program&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;langchain_oracledb_demo&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
 &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;oracledb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;password&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;password&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;dsn&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;dsn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;program&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;program&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;database_connection&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;connect_oracle&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
 &lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;VECTOR&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="n"&gt;password&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;VectorPwd_2025&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="n"&gt;dsn&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;127.0.0.1:1521/FREEPDB1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="n"&gt;program&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;devrel.content.filesystem_vs_dbs&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Using user:&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;database_connection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;username&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;embedding_model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;HuggingFaceEmbeddings&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
 &lt;span class="n"&gt;model_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sentence-transformers/paraphrase-mpnet-base-v2&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Installing the Oracle Database integration in the LangChain ecosystem is straightforward. You can add it to your environment with a single pip command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;pip install -U langchain-oracledb&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Next, we define the database schema to store our agent’s memory and prepare a clean slate for the demo. We separate memory into distinct tables so each type can be managed, indexed, and queried appropriately.&lt;/p&gt;

&lt;p&gt;Conversational history and logs are naturally tabular, while semantic and summary memory are stored in vector-backed tables through &lt;a href="https://docs.langchain.com/oss/python/integrations/vectorstores/oracle" rel="noopener noreferrer"&gt;OracleVS&lt;/a&gt;. For reproducibility, we drop any existing tables from previous runs, making the notebook deterministic and avoiding confusing results when you re-run the walkthrough.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_oracledb.vectorstores&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;OracleVS&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_oracledb.vectorstores.oraclevs&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;create_index&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_community.vectorstores.utils&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;DistanceStrategy&lt;/span&gt;

&lt;span class="n"&gt;CONVERSATIONAL_TABLE&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;CONVERSATIONAL_MEMORY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;KNOWLEDGE_BASE_TABLE&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;SEMANTIC_MEMORY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;LOGS_TABLE&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;LOGS_MEMORY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;SUMMARY_TABLE&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;SUMMARY_MEMORY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="n"&gt;ALL_TABLES&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
 &lt;span class="n"&gt;CONVERSATIONAL_TABLE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="n"&gt;KNOWLEDGE_BASE_TABLE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="n"&gt;LOGS_TABLE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="n"&gt;SUMMARY_TABLE&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;table&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;ALL_TABLES&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
 &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
 &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;database_connection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;cursor&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;cur&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
 &lt;span class="n"&gt;cur&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;DROP TABLE &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;table&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; PURGE&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
 &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ORA-00942&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
 &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt; - &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;table&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; (not exists)&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
 &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt; [FAIL] &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;table&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;database_connection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;commit&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create the vector stores and HNSW indexes
&lt;/h3&gt;

&lt;p&gt;For this section, it is worth explaining what a “vector store” actually is in the context of agents. A vector store is a storage system that persists embeddings alongside metadata and supports similarity search, so the agent can retrieve items by meaning rather than keywords.&lt;/p&gt;

&lt;p&gt;Instead of asking “which file contains this exact phrase”, the agent asks “which chunks are semantically closest to my question” and pulls back the best matches.&lt;/p&gt;

&lt;p&gt;Under the hood, that usually means an approximate nearest neighbor index, because scanning every vector becomes prohibitively expensive as your knowledge base grows. HNSW (Hierarchical Navigable Small World) is one of the most common indexing approaches for this style of retrieval.&lt;/p&gt;
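&lt;p&gt;To make “semantically closest” concrete, here is a toy cosine-similarity ranking over made-up 3-dimensional vectors. Real embeddings from the model have hundreds of dimensions, and in our setup the vector store computes this distance inside the database, accelerated by the HNSW index, rather than in Python.&lt;/p&gt;

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity: 1.0 for vectors pointing the same way, 0.0 for orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings"; the labels and values are invented for illustration.
query = [0.9, 0.1, 0.0]
chunks = {
    "transformer attention": [0.8, 0.2, 0.1],
    "apollo program history": [0.1, 0.9, 0.3],
}

# Rank stored chunks by similarity to the query, best match first.
ranked = sorted(chunks, key=lambda k: cosine_similarity(query, chunks[k]), reverse=True)
# ranked[0] is "transformer attention"
```

&lt;p&gt;Keyword search would find nothing here unless the words matched; similarity over embeddings ranks by direction in vector space, which is why paraphrases and synonyms still retrieve the right passage.&lt;/p&gt;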

&lt;p&gt;The code below does two things. First, it creates two vector stores using the OracleVS class from langchain_oracledb, one for the knowledge base and one for summaries, both using cosine distance.&lt;/p&gt;

&lt;p&gt;Second, it builds HNSW indexes so similarity search stays fast as memory grows, which is exactly what you want once your Research Assistant starts ingesting many papers and running over long-lived threads.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;knowledge_base_vs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;OracleVS&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
 &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;database_connection&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="n"&gt;embedding_function&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;embedding_model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="n"&gt;table_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;KNOWLEDGE_BASE_TABLE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="n"&gt;distance_strategy&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;DistanceStrategy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;COSINE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;summary_vs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;OracleVS&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
 &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;database_connection&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="n"&gt;embedding_function&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;embedding_model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="n"&gt;table_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;SUMMARY_TABLE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="n"&gt;distance_strategy&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;DistanceStrategy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;COSINE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;safe_create_index&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;vs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;idx_name&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
 &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
 &lt;span class="nf"&gt;create_index&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
 &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="n"&gt;vector_store&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;vs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="n"&gt;params&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;idx_name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;idx_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;idx_type&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;HNSW&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
 &lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt; Created index: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;idx_name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
 &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ORA-00955&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
 &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt; [SKIP] Index already exists: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;idx_name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
 &lt;span class="k"&gt;raise&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Creating vector indexes...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;safe_create_index&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;database_connection&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;knowledge_base_vs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;kb_hnsw_cosine_idx&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;safe_create_index&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;database_connection&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;summary_vs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;summary_hnsw_cosine_idx&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;All indexes created!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Memory Manager
&lt;/h3&gt;

&lt;p&gt;In the code below, we create a custom memory manager. The &lt;code&gt;MemoryManager&lt;/code&gt; class is the abstraction layer that turns raw database operations into “agent memory behaviours”. This is the part that makes the database-first agent easy to reason about.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SQL methods store and load conversational history by &lt;code&gt;thread_id&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Vector methods store and retrieve semantic memory by similarity search&lt;/li&gt;
&lt;li&gt;Summary methods store compressed context and let us rotate the working set when we approach context limits
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain.tools&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;tool&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;typing&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;List&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Dict&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;MemoryManager&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
 &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
 A simplified memory manager for AI agents using Oracle AI Database.
 &lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;

 &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;conversation_table&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;knowledge_base_vs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;summary_vs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tool_log_table&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
 &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;conn&lt;/span&gt;
 &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;conversation_table&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;conversation_table&lt;/span&gt;
 &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;knowledge_base_vs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;knowledge_base_vs&lt;/span&gt;
 &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;summary_vs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;summary_vs&lt;/span&gt;
 &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;tool_log_table&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;tool_log_table&lt;/span&gt;

 &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;write_conversational_memory&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;thread_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
 &lt;span class="n"&gt;thread_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;thread_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;cursor&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;cur&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
 &lt;span class="n"&gt;id_var&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cur&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;var&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="n"&gt;cur&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
 INSERT INTO &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;conversation_table&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; (thread_id, role, content, metadata, timestamp)
 VALUES (:thread_id, :role, :content, :metadata, CURRENT_TIMESTAMP)
 RETURNING id INTO :id
 &lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;thread_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;thread_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;role&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;metadata&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;id_var&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
 &lt;span class="n"&gt;record_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;id_var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getvalue&lt;/span&gt;&lt;span class="p"&gt;()[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;id_var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getvalue&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;
 &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;commit&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
 &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;record_id&lt;/span&gt;

 &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;load_conversational_history&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;thread_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;limit&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;List&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;Dict&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;]]:&lt;/span&gt;
 &lt;span class="n"&gt;thread_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;thread_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;cursor&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;cur&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
 &lt;span class="n"&gt;cur&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
 SELECT role, content FROM &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;conversation_table&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;
 WHERE thread_id = :thread_id AND summary_id IS NULL
 ORDER BY timestamp ASC
 FETCH FIRST :limit ROWS ONLY
 &lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;thread_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;thread_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;limit&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;limit&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
 &lt;span class="n"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cur&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fetchall&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

 &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;role&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nf"&gt;hasattr&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;read&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;)}&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;role&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;content&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;results&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

 &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;mark_as_summarized&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;thread_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;summary_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
 &lt;span class="n"&gt;thread_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;thread_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;cursor&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;cur&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
 &lt;span class="n"&gt;cur&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
 UPDATE &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;conversation_table&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;
 SET summary_id = :summary_id
 WHERE thread_id = :thread_id AND summary_id IS NULL
 &lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;summary_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;summary_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;thread_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;thread_id&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
 &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;commit&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
 &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt; Marked messages as summarized (summary_id: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;summary_id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;)&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

 &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;write_knowledge_base&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;metadata_json&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
 &lt;span class="n"&gt;metadata&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;loads&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;metadata_json&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;knowledge_base_vs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_texts&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

 &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;read_knowledge_base&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
 &lt;span class="n"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;knowledge_base_vs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;similarity_search&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="n"&gt;content&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="n"&gt;doc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;page_content&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;doc&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;results&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
 &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;## Knowledge Base Memory: This are general information that is relevant to the question
### How to use: Use the knowledge base as background information that can help answer the question

&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;

 &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;write_summary&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;summary_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;full_content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;summary&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
 &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;summary_vs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_texts&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
 &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;summary_id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
 &lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;summary_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;full_content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;full_content&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;summary&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;summary&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;description&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="p"&gt;}]&lt;/span&gt;
 &lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;summary_id&lt;/span&gt;

 &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;read_summary_memory&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;summary_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
 &lt;span class="n"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;summary_vs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;similarity_search&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
 &lt;span class="n"&gt;summary_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="nb"&gt;filter&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;summary_id&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
 &lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;results&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
 &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Summary &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;summary_id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; not found.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
 &lt;span class="n"&gt;doc&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;results&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
 &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;doc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;summary&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;No summary content.&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

 &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;read_summary_context&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
 &lt;span class="n"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;summary_vs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;similarity_search&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt; &lt;span class="ow"&gt;or&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;summary&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;results&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
 &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;## Summary Memory&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;No summaries available.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

 &lt;span class="n"&gt;lines&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;## Summary Memory&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Use expand_summary(id) to get full content:&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
 &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;doc&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;results&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
 &lt;span class="n"&gt;sid&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;doc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;id&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;?&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="n"&gt;desc&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;doc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;description&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;No description&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="n"&gt;lines&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt; - [ID: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;sid&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;] &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;desc&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;lines&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we instantiate it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;memory_manager&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MemoryManager&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
 &lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;database_connection&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="n"&gt;conversation_table&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;CONVERSATION_HISTORY_TABLE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="n"&gt;knowledge_base_vs&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;knowledge_base_vs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="n"&gt;tool_log_table&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;TOOL_LOG_TABLE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="n"&gt;summary_vs&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;summary_vs&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Creating the tools and agent
&lt;/h3&gt;

&lt;p&gt;The database-first agent follows a simple, production-friendly pattern:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Persists every conversation turn as structured rows, including user and assistant messages with thread or run IDs and timestamps, so sessions are recoverable, traceable, and consistent across restarts.&lt;/li&gt;
&lt;li&gt;Persists long-term knowledge in a vector-enabled store by chunking documents, generating embeddings, and storing them with metadata, so retrieval is semantic, ranked, and fast as the corpus grows.&lt;/li&gt;
&lt;li&gt;Persists tool activity as first-class records that capture the tool name, inputs, outputs, status, errors, and key metadata, so agent behavior is inspectable, reproducible, and auditable.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;On top of that, the agent actively manages context: it tracks token usage and periodically rolls older dialogue and intermediate state into durable summaries (and/or “memory” tables), so the working prompt stays small while the full history remains available on demand.&lt;/p&gt;
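The rollup behavior described above can be sketched as a small token-budget check. This is a minimal illustration, not the package's API: `TOKEN_BUDGET`, `estimate_tokens`, and `maybe_rollup` are hypothetical names, and the character-count heuristic stands in for a real tokenizer. In the actual flow, the summary would be persisted via `memory_manager.write_summary(...)` and the rolled-up rows tagged with `mark_as_summarized(...)` as shown earlier.

```python
# Hypothetical sketch of the context-rollup loop: when the live history
# exceeds a token budget, collapse it into one durable summary.
import uuid

TOKEN_BUDGET = 3000  # assumed working-prompt budget


def estimate_tokens(messages):
    # Rough heuristic: roughly 4 characters per token for English text.
    return sum(len(m["content"]) for m in messages) // 4


def maybe_rollup(history, summarize):
    """If `history` exceeds the budget, replace it with a single summary turn.

    `history` is a list of {"role", "content"} dicts; `summarize` is any
    callable (e.g. an LLM call) mapping transcript text to a short summary.
    Returns (working_messages, summary_id or None).
    """
    if estimate_tokens(history) <= TOKEN_BUDGET:
        return history, None
    transcript = "\n".join(f"{m['role']}: {m['content']}" for m in history)
    summary_id = str(uuid.uuid4())
    summary = summarize(transcript)
    # Real flow: memory_manager.write_summary(summary_id, transcript, summary, ...)
    # then memory_manager.mark_as_summarized(thread_id, summary_id).
    working = [{"role": "system",
                "content": f"Summary of earlier turns: {summary}"}]
    return working, summary_id
```

The key point is that the full transcript is never discarded: it moves into the summary store (and stays queryable by `summary_id`), while the working prompt shrinks to one summary line.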

&lt;h4&gt;
  
  
  Ingest papers into the knowledge base vector store
&lt;/h4&gt;

&lt;p&gt;This is the database-first equivalent of “fetch and save paper”. Instead of writing markdown files, we do three steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Load paper text from arXiv&lt;/li&gt;
&lt;li&gt;Chunk it to respect the embedding model's limits&lt;/li&gt;
&lt;li&gt;Store chunks with metadata in the vector store, which gives us fast semantic search later
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;timezone&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_core.tools&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;tool&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_community.document_loaders&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ArxivLoader&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_text_splitters&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;RecursiveCharacterTextSplitter&lt;/span&gt;

&lt;span class="nd"&gt;@tool&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;fetch_and_save_paper_to_kb_db&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
 &lt;span class="n"&gt;arxiv_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="n"&gt;chunk_size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1500&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="n"&gt;chunk_overlap&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
 &lt;span class="n"&gt;loader&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ArxivLoader&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
 &lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;arxiv_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="n"&gt;load_max_docs&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="n"&gt;doc_content_chars_max&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="n"&gt;docs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;loader&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;load&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
 &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;docs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
 &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;No documents found for arXiv id: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;arxiv_id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

 &lt;span class="n"&gt;doc&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;docs&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

 &lt;span class="n"&gt;title&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
 &lt;span class="n"&gt;doc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Title&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="ow"&gt;or&lt;/span&gt; &lt;span class="n"&gt;doc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;title&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="ow"&gt;or&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;arXiv &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;arxiv_id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
 &lt;span class="p"&gt;)&lt;/span&gt;

 &lt;span class="n"&gt;entry_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;doc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Entry ID&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="ow"&gt;or&lt;/span&gt; &lt;span class="n"&gt;doc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;entry_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="ow"&gt;or&lt;/span&gt; &lt;span class="sh"&gt;""&lt;/span&gt;
 &lt;span class="n"&gt;published&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;doc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Published&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="ow"&gt;or&lt;/span&gt; &lt;span class="n"&gt;doc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;published&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="ow"&gt;or&lt;/span&gt; &lt;span class="sh"&gt;""&lt;/span&gt;
 &lt;span class="n"&gt;authors&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;doc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Authors&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="ow"&gt;or&lt;/span&gt; &lt;span class="n"&gt;doc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;authors&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="ow"&gt;or&lt;/span&gt; &lt;span class="sh"&gt;""&lt;/span&gt;

 &lt;span class="n"&gt;full_text&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;doc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;page_content&lt;/span&gt; &lt;span class="ow"&gt;or&lt;/span&gt; &lt;span class="sh"&gt;""&lt;/span&gt;
 &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;full_text&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;strip&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
 &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Loaded arXiv &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;arxiv_id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; but extracted empty text (PDF parsing issue).&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

 &lt;span class="n"&gt;splitter&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;RecursiveCharacterTextSplitter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
 &lt;span class="n"&gt;chunk_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;chunk_size&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="n"&gt;chunk_overlap&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;chunk_overlap&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="n"&gt;chunks&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;splitter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;split_text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;full_text&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

 &lt;span class="n"&gt;ts_utc&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;timezone&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;utc&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;isoformat&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
 &lt;span class="n"&gt;metadatas&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
 &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;chunks&lt;/span&gt;&lt;span class="p"&gt;)):&lt;/span&gt;
 &lt;span class="n"&gt;metadatas&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
 &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;source&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;arxiv&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;arxiv_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;arxiv_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;title&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;entry_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;entry_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;published&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;published&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
 &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;authors&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;authors&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
 &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;chunk_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;num_chunks&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;chunks&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
 &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ingested_ts_utc&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;ts_utc&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="p"&gt;}&lt;/span&gt;
 &lt;span class="p"&gt;)&lt;/span&gt;

 &lt;span class="n"&gt;knowledge_base_vs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_texts&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;chunks&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;metadatas&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

 &lt;span class="nf"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
 &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Saved arXiv &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;arxiv_id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; to &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;KNOWLEDGE_BASE_TABLE&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;: &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
 &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;chunks&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; chunks (title: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;).&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
 &lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We create two more tools below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;search_knowledge_base(query, k=5):&lt;/strong&gt; Runs a semantic similarity search over the database-backed knowledge base and returns the top &lt;em&gt;k&lt;/em&gt; most relevant chunks, so the agent can retrieve context by meaning rather than exact keywords.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;store_to_knowledge_base(text, metadata_json="{}"):&lt;/strong&gt; Stores a new piece of text into the knowledge base and attaches metadata (as JSON), which gets embedded and indexed so it becomes searchable in future queries.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain.tools&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;tool&lt;/span&gt;

&lt;span class="nd"&gt;@tool&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;search_knowledge_base&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
 &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;memory_manager&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read_knowledge_base&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nd"&gt;@tool&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;store_to_knowledge_base&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;metadata_json&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
 &lt;span class="n"&gt;memory_manager&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;write_knowledge_base&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;metadata_json&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Successfully stored text to knowledge base.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we build the LangChain agent using the database-first tools.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain.agents&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;create_agent&lt;/span&gt;

&lt;span class="n"&gt;MEM_AGENT&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;create_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
 &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;openai:&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getenv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;OPENAI_MODEL&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;gpt-4o-mini&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;search_knowledge_base&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;store_to_knowledge_base&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;arxiv_search_candidates&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;fetch_and_save_paper_to_kb_db&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Result Comparison: FSAgent vs MemAgent
&lt;/h2&gt;

&lt;p&gt;At this point, the difference between a filesystem agent and a database-backed agent should feel less like a philosophical debate and more like an engineering trade-off. Both approaches can “remember” in the sense that they can persist state, retrieve context, and answer follow-up questions. The real test is what happens when you leave the tidy laptop demo and hit production realities: &lt;strong&gt;larger corpora, fuzzier queries, and concurrent workloads&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;To make that concrete, we ran an end-to-end benchmark and measured the full agent loop per query—retrieval, context assembly, tool calls, model invocations, and the final answer—across three scenarios:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Small-corpus retrieval:&lt;/strong&gt; a tight, keyword-friendly dataset to validate baseline retrieval and answer synthesis with minimal context.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Large-corpus retrieval:&lt;/strong&gt; a larger dataset with more paraphrase variability to stress retrieval quality and context efficiency at scale.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Concurrent write integrity:&lt;/strong&gt; a multi-worker stress test to evaluate correctness under simultaneous reads/writes (integrity, race conditions, throughput).&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  FSAgent vs MemAgent: End-to-End Benchmark (Latency + Quality)
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F02%2Fimage-7-1024x703.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F02%2Fimage-7-1024x703.png" alt="Benchmark chart comparing FSAgent and MemAgent on end-to-end latency and answer quality" width="800" height="549"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From the result shown in the image above, two conclusions immediately stand out: &lt;strong&gt;latency&lt;/strong&gt; and &lt;strong&gt;answer quality&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In our run, MemAgent generally finished faster end-to-end than FSAgent. That might sound counterintuitive if you assume “database equals overhead,” and sometimes a database does add overhead.&lt;/p&gt;

&lt;p&gt;But the agent loop is not dominated by raw storage primitives. It is dominated by how quickly you can find the right information and how little unnecessary context you force into the model, a discipline often called context engineering. Semantic retrieval tends to return fewer, more relevant chunks (subject to tuning of the retrieval pipeline), which means less scanning, less paging through files, and fewer tokens burned on irrelevant text.&lt;/p&gt;

&lt;p&gt;In this particular run, both agents produced similar-quality answers. That is not surprising. When the questions are retrieval-friendly and the corpus is small enough, both approaches can find the right passages. FSAgent gets there through keyword search and careful reading. MemAgent gets there through similarity search over embedded chunks. Different roads, similar destination.&lt;/p&gt;

&lt;p&gt;And I think it’s worth zooming in on one nuance here. When the information to traverse is minimal in terms of character length and the query is keyword-friendly, the retrieval quality of both agents tends to converge. At that scale, “search” is barely a problem, so the dominant factor becomes the model’s ability to read and synthesise, not the retrieval substrate. The gap only starts to widen when the corpus grows, the wording becomes fuzzier, and the system must retrieve reliably under real-world constraints such as noise, paraphrases, and concurrency, as production systems eventually must.&lt;/p&gt;

&lt;h3&gt;
  
  
  About the “LLM-as-a-Judge” metric
&lt;/h3&gt;

&lt;p&gt;We also scored answers using an LLM-as-a-judge prompt. It is a pragmatic way to get directional feedback when you do not have labeled ground truth, but it is not a silver bullet. Judges can be sensitive to prompt phrasing, can over-reward fluency, and can miss subtle grounding failures.&lt;/p&gt;

&lt;p&gt;If you are building this for production, treat LLM judging as a starting signal, not the finish line. The more reliable approach is a mix of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Reference-based evaluation&lt;/strong&gt; when you have ground truth, such as rubric grading, exact match, or F1-style scoring.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Retrieval-aware evaluation&lt;/strong&gt; when context matters, such as context precision and recall, answer faithfulness, and groundedness.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tracing plus evaluation tooling&lt;/strong&gt; so you can connect failures to the specific retrievals, tool calls, and context assembly decisions that caused them.&lt;/li&gt;
&lt;/ul&gt;
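&lt;p&gt;As a tiny illustration of the reference-based option, here is a token-level F1 scorer in the SQuAD style. It is a sketch, not the scoring code used in this benchmark:&lt;/p&gt;

```python
# Toy reference-based metric: token-level F1 between a model answer and a
# gold answer. Order-insensitive; duplicate tokens are counted via multisets.
from collections import Counter

def token_f1(prediction, reference):
    p_tokens = prediction.lower().split()
    r_tokens = reference.lower().split()
    overlap = sum((Counter(p_tokens) & Counter(r_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(p_tokens)
    recall = overlap / len(r_tokens)
    return 2 * precision * recall / (precision + recall)

print(token_f1("the capital of France is Paris",
               "Paris is the capital of France"))  # 1.0: same tokens, any order
```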

&lt;p&gt;Even with a lightweight judge, the directional story remains consistent. As retrieval becomes more difficult and the system becomes busier, database-backed memory tends to perform better.&lt;/p&gt;

&lt;h3&gt;
  
  
  Large Corpus Benchmark: Why the gap widens as data grows
&lt;/h3&gt;

&lt;p&gt;The large-corpus test is designed to stress the exact weakness of keyword-first memory. We intentionally made the search problem harder by growing the corpus and making the queries less “exact match.”&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;FSAgent with a concatenated corpus:&lt;/strong&gt;&lt;br&gt;
When you merge many papers into large markdown files, FSAgent becomes dependent on grep-style discovery followed by paging the right sections into the context window. It can work, but it gets brittle as the corpus grows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If the user paraphrases or uses synonyms, exact keyword matches can fail.&lt;/li&gt;
&lt;li&gt;If the keyword is too common, you get too many hits, and the agent has to sift through them manually.&lt;/li&gt;
&lt;li&gt;When uncertain, the agent often loads larger slices “just in case,” which increases token count, latency, and the risk of context dilution.&lt;/li&gt;
&lt;/ul&gt;
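&lt;p&gt;The first two failure modes are easy to reproduce. A minimal sketch, using a hypothetical two-file corpus rather than the benchmark data:&lt;/p&gt;

```python
# Grep-style retrieval: a file matches only if every query term appears
# verbatim. A paraphrase ("neural nets") misses the source phrasing
# ("neural networks") unless synonyms are enumerated by hand.
corpus = {
    "paper_a.md": "Transformers replaced recurrent neural networks for sequences.",
    "paper_b.md": "ACID transactions protect shared state under concurrent writes.",
}

def keyword_search(query):
    terms = query.lower().split()
    return [name for name, text in corpus.items()
            if all(term in text.lower() for term in terms)]

print(keyword_search("neural networks"))  # ['paper_a.md']: exact phrasing hits
print(keyword_search("neural nets"))      # []: the paraphrase misses entirely
```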

&lt;p&gt;&lt;strong&gt;MemAgent with chunked, embedded memory:&lt;/strong&gt;&lt;br&gt;
Chunking plus embeddings makes retrieval more forgiving and more stable:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The user does not need to match the source phrasing exactly.&lt;/li&gt;
&lt;li&gt;The agent can fetch a small set of high-similarity chunks, keeping context tight.&lt;/li&gt;
&lt;li&gt;Indexed retrieval remains predictable as memory grows, rather than requiring repeated scans of files.&lt;/li&gt;
&lt;/ul&gt;
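&lt;p&gt;The retrieval pattern itself can be sketched as a top-&lt;em&gt;k&lt;/em&gt; similarity search. Here a bag-of-words Counter stands in for a real embedding model (the embeddings are what actually make paraphrases match), so treat this as shape, not fidelity:&lt;/p&gt;

```python
# Top-k retrieval over chunk vectors: embed the query, score every chunk by
# cosine similarity, return only the k best so the context stays tight.
import math
from collections import Counter

chunks = [
    "Vector search retrieves chunks by semantic similarity.",
    "File locks guard concurrent writes to shared files.",
    "Embeddings map text into a vector space for retrieval.",
]

def embed(text):
    # Stand-in "embedding": a sparse bag-of-words vector.
    return Counter(text.lower().rstrip(".").split())

def norm(v):
    return math.sqrt(sum(x * x for x in v.values()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    return dot / (norm(a) * norm(b))

def top_k(query, k=2):
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

print(top_k("retrieval over vector embeddings", k=1))
```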

&lt;p&gt;The narrative takeaway is simple. Filesystems feel great when the corpus is small and the queries are keyword-friendly. As the corpus grows and the questions get fuzzier, semantic retrieval becomes the differentiator, and database-backed memory becomes the more dependable default.&lt;/p&gt;

&lt;p&gt;The quality gap widens with scale. On a handful of documents, grep can brute-force its way to a reasonable answer: the agent finds a keyword match, pulls surrounding context, and responds.&lt;/p&gt;

&lt;p&gt;But scatter the same information across hundreds of files, and keyword search starts missing the forest for the trees. It returns too many shallow hits or none when the user's phrasing doesn't match the source text verbatim. Semantic search, by contrast, surfaces conceptually relevant chunks even when the vocabulary differs. The result isn't just faster retrieval, it's more coherent answers with fewer hallucinated gaps. This is evident in our LLM judge evaluation on the large corpus benchmark, where FSAgent achieved a score of 29.7% while MemAgent reached 87.1%.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F02%2Fimage-5-1024x727.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F02%2Fimage-5-1024x727.png" alt="Large-corpus benchmark showing the widening quality gap between FSAgent and MemAgent" width="800" height="568"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Concurrency Test: What production teaches you very quickly
&lt;/h3&gt;

&lt;p&gt;We find that the real breaking point for filesystem memory is rarely retrieval. It is concurrency.&lt;/p&gt;

&lt;p&gt;We ran three versions of the same workload under concurrent writes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Filesystem without locking,&lt;/strong&gt; where multiple workers append to the same file.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Filesystem with locking,&lt;/strong&gt; where writes are guarded by file locks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Oracle AI Database with transactions,&lt;/strong&gt; where multiple workers write rows under ACID guarantees.&lt;/li&gt;
&lt;/ul&gt;
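&lt;p&gt;The two filesystem variants can be sketched in a few lines. This is an illustration of the failure mode, not the benchmark harness: workers run a read-modify-write loop against one shared counter file, and threading.Lock stands in for a cross-process file lock such as fcntl.flock:&lt;/p&gt;

```python
# Read-modify-write against a shared file: without a lock, two workers can
# read the same value, and one increment silently overwrites the other.
import os
import tempfile
import threading

def run(workers=8, increments=50, lock=None):
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, "w") as f:
        f.write("0")

    def bump():
        for _ in range(increments):
            if lock:
                lock.acquire()
            try:
                with open(path) as f:
                    n = int(f.read() or 0)   # read current value
                with open(path, "w") as f:
                    f.write(str(n + 1))      # truncate + rewrite: the race window
            finally:
                if lock:
                    lock.release()

    threads = [threading.Thread(target=bump) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    with open(path) as f:
        final = int(f.read() or 0)
    os.remove(path)
    return final

expected = 8 * 50
print("no lock:", run(), "of", expected)                       # often loses entries
print("locked :", run(lock=threading.Lock()), "of", expected)  # full count survives
```

&lt;p&gt;The unlocked variant typically drops updates while the locked one lands at the expected count, mirroring the integrity split the benchmark measures.&lt;/p&gt;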

&lt;p&gt;Then we measured two things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Integrity,&lt;/strong&gt; meaning, did we get the expected number of entries with no corruption?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Execution time,&lt;/strong&gt; meaning how long the batch took end-to-end.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F02%2Fimage-6.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblogs.oracle.com%2Fdevelopers%2Fwp-content%2Fuploads%2Fsites%2F129%2F2026%2F02%2Fimage-6.jpg" alt="Concurrent write integrity comparison across filesystem and database memory backends" width="800" height="271"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What we observed maps to what many teams discover the hard way.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Naive filesystem writes can be fast and still be wrong.&lt;/strong&gt; Without locking, concurrent writes conflict with each other. You might get good throughput and still lose memory entries. If your agent’s “memory” is used for downstream reasoning, silent loss is not a performance issue. It is a correctness failure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Locking fixes integrity, but now correctness is your job.&lt;/strong&gt; With explicit locking, you can make filesystem writes safe. But you inherit the complexity. Lock scope, lock contention, platform differences, network filesystem behavior, and failure recovery all become part of your agent engineering work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Databases make correctness the default.&lt;/strong&gt; Transactions and isolation are exactly what databases were designed for. Yes, there is overhead. But the key difference is that you are not bolting correctness on after a production incident. You start with a system whose job is to protect the shared state.&lt;/p&gt;
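&lt;p&gt;For contrast, the database variant needs no locking code of its own. A minimal sketch, using sqlite3 purely as a stand-in for Oracle AI Database (in production this would be a python-oracledb connection), with each insert committed as its own transaction:&lt;/p&gt;

```python
# Concurrent inserts under transactions: the engine serializes writers, so
# every row survives without any hand-rolled locking.
import os
import sqlite3
import tempfile
import threading

path = os.path.join(tempfile.mkdtemp(), "memory.db")
with sqlite3.connect(path) as conn:
    conn.execute("CREATE TABLE memory (worker INTEGER, seq INTEGER)")

def worker(wid, writes=50):
    local = sqlite3.connect(path, timeout=60)  # one connection per worker
    for i in range(writes):
        with local:  # commits this insert as its own transaction
            local.execute("INSERT INTO memory VALUES (?, ?)", (wid, i))
    local.close()

threads = [threading.Thread(target=worker, args=(w,)) for w in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

count = sqlite3.connect(path).execute("SELECT COUNT(*) FROM memory").fetchone()[0]
print(count, "of", 8 * 50)  # all rows survive
```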

&lt;p&gt;And of course, you can take the file-locking approach, add atomic writes, build a write-ahead log, introduce retry and recovery logic, maintain indexes for fast lookups, and standardise metadata so you can query it reliably.&lt;/p&gt;

&lt;p&gt;Eventually, though, you will realise you have not “avoided” a database at all.&lt;/p&gt;

&lt;p&gt;You have just rebuilt one, only with fewer guarantees and more edge cases to own.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion: Is there a happy medium for AI Developers?
&lt;/h2&gt;

&lt;p&gt;This isn’t a religious war between “files” and “databases.” It’s a question of what you’re optimizing for—and which failure modes you’re willing to own. If you’re building single-user or single-writer prototypes, filesystem memory is a great default. It’s simple, transparent, and fast to iterate on. You can open a folder and see exactly what the agent saved, diff it, version it, and replay it with nothing more than a text editor.&lt;/p&gt;

&lt;p&gt;If you’re building multi-user agents, background workers, or anything you plan to ship at scale, a database-backed memory store is a safer foundation. At that stage, concurrency, integrity, governance, access control, and auditability matter more than raw simplicity. A practical compromise is a hybrid design: keep file-like ergonomics for artifacts and developer workflows, but store durable memory in a database that can enforce correctness.&lt;/p&gt;

&lt;p&gt;And if you insist on filesystem-only memory in production, treat &lt;strong&gt;locking, atomic writes, recovery, indexing, and metadata discipline&lt;/strong&gt; as first-class engineering work. Because the moment you do that seriously, you’re no longer “just using files”—you’re rebuilding a database.&lt;/p&gt;

&lt;p&gt;One last trap worth calling out: &lt;strong&gt;polyglot persistence&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Many AI stacks drift into an anti-pattern: a vector DB for embeddings, a NoSQL DB for JSON, a graph DB for relationships, and a relational DB for transactions. Each product is “best at its one thing,” until you realize you’re operating four databases, four security models, four backup strategies, four scaling profiles, and four cascading failure points.&lt;/p&gt;

&lt;p&gt;Coordination becomes the tax. You end up building glue code and sync pipelines just to make the system feel unified to the agent. This is why converged approaches matter in agent systems: production memory isn’t only about storing vectors—it’s about storing &lt;strong&gt;operational history, artifacts, metadata, and semantics&lt;/strong&gt; under one consistent set of guarantees.&lt;/p&gt;

&lt;p&gt;For AI developers, this means your application acts as an integration layer for multiple storage engines, each with different access patterns and operational semantics, plus the reconciliation logic required to keep them in sync.&lt;/p&gt;

&lt;p&gt;Of course, production data is inherently heterogeneous. You will inevitably deal with structured, semi-structured, unstructured text, embeddings, JSON documents, and relationship-heavy data.&lt;/p&gt;

&lt;p&gt;The point is not that “one model wins”.&lt;/p&gt;

&lt;p&gt;The point is that when you understand the fundamentals of data management, reliability, indexing, governance, and queryability, you want a platform that can store and retrieve these forms without turning your AI infrastructure into a collection of loosely coordinated subsystems.&lt;/p&gt;

&lt;p&gt;This is the philosophy behind Oracle’s &lt;a href="https://www.oracle.com/uk/database/" rel="noopener noreferrer"&gt;converged database approach&lt;/a&gt;, which is designed to support multiple data types and workloads natively within a single engine. In the world of agents, that becomes a practical advantage because we can use Oracle as the unified memory core for both operational memory (SQL tables for history and logs) and semantic memory (vector search for retrieval).&lt;/p&gt;

&lt;h2&gt;
  
  
  Frequently Asked Questions
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;What is AI Agent memory?&lt;/strong&gt; AI agent memory is the set of system components and techniques that enable an AI agent to store, recall, and update information over time. Because LLMs are inherently stateless—they have no built-in ability to remember previous sessions—agent memory provides the persistence layer that allows agents to maintain continuity across conversations, learn from past interactions, and adapt to user preferences.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Should I use a filesystem or a database for an AI agent's memory?&lt;/strong&gt; It depends on your use case. Filesystems excel at single-user prototypes, artifact-heavy workflows, and rapid iteration—they're simple, transparent, and align with how LLMs naturally operate. Databases become essential when you need concurrent access, ACID transactions, semantic retrieval, or shared state across multiple agents or users. Many production systems use a hybrid approach: file-like interfaces for agent interaction, with database guarantees underneath.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;How do I build an AI agent with long-term memory?&lt;/strong&gt; Start by separating memory types: working memory (current context), semantic memory (knowledge base), episodic memory (interaction history), and procedural memory (behavioral rules). Implement storage: a filesystem for prototypes and a database for production. Add retrieval tools that the agent can call. Build a summarization to compress the old context. Test with multi-session scenarios where the agent must recall information from previous conversations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;What are semantic, episodic, and procedural memory in AI agents?&lt;/strong&gt; These terms, borrowed from cognitive science, describe different types of agent memory. Semantic memory stores durable knowledge and facts (like saved documents or reference materials). Episodic memory captures experiences and interaction history (conversation transcripts, tool outputs). Procedural memory encodes how the agent should behave—instructions, rules, files like CLAUDE.md, and learned workflows that shape behavior across sessions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;What is the best database for AI applications?&lt;/strong&gt; The best database depends on your requirements. For AI agent memory specifically, you need: vector search capability for semantic retrieval, SQL or structured queries for history and metadata, ACID transactions if multiple agents share state, and scalability as your memory corpus grows. Converged databases that combine these capabilities—like Oracle AI Database—reduce operational complexity versus running separate specialized systems.&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>database</category>
      <category>oracle</category>
    </item>
  </channel>
</rss>
