<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: AuraCore Cognitive Field AI Developer.</title>
    <description>The latest articles on Forem by AuraCore Cognitive Field AI Developer. (@burstfirea47050).</description>
    <link>https://forem.com/burstfirea47050</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3839058%2Fe5bb06ad-0225-4512-aed0-9c228cf41d30.jpg</url>
      <title>Forem: AuraCore Cognitive Field AI Developer.</title>
      <link>https://forem.com/burstfirea47050</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/burstfirea47050"/>
    <language>en</language>
    <item>
      <title>I built a full cognitive runtime. The "mind" between the prompts. Come look here: https://auracorecf.github.io/ This is a proto AI OS. It has major implications for medical, financial, and defense applications. It uses a new memory retrieval system. TCF vs. RAG paper on GitHub.</title>
      <dc:creator>AuraCore Cognitive Field AI Developer.</dc:creator>
      <pubDate>Mon, 13 Apr 2026 04:40:12 +0000</pubDate>
      <link>https://forem.com/burstfirea47050/i-built-a-full-cognitive-runtime-the-mind-between-the-prompts-come-look-here-22ij</link>
      <guid>https://forem.com/burstfirea47050/i-built-a-full-cognitive-runtime-the-mind-between-the-prompts-come-look-here-22ij</guid>
      <description>&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;div class="c-embed__content"&gt;
      &lt;div class="c-embed__body flex items-center justify-between"&gt;
        &lt;a href="https://auracorecf.github.io/" rel="noopener noreferrer" class="c-link fw-bold flex items-center"&gt;
          &lt;span class="mr-2"&gt;auracorecf.github.io&lt;/span&gt;
          

        &lt;/a&gt;
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;


</description>
    </item>
    <item>
      <title>I'm working on a new retrieval system. Not RAG</title>
      <dc:creator>AuraCore Cognitive Field AI Developer.</dc:creator>
      <pubDate>Tue, 24 Mar 2026 21:22:17 +0000</pubDate>
      <link>https://forem.com/burstfirea47050/im-working-on-a-new-retrieval-system-not-rag-ak4</link>
      <guid>https://forem.com/burstfirea47050/im-working-on-a-new-retrieval-system-not-rag-ak4</guid>
      <description>&lt;p&gt;It uses TCF ( Temporal Cognitive Fields) to create CFGS ( Cognitive Field Geometry Shapes) for persistent, stateful recall.&lt;br&gt;
  The strongest property of this architecture is continuity. Because retrieval is stateful and tied to episodes, themes, persistent facts, and live field geometry, the system is better positioned than standard RAG to resume ongoing work, preserve relationship context, and adapt response planning to current internal state.&lt;br&gt;
 The main trade-off is complexity. A system that retrieves across multiple memory forms and can reactivate them into a live field is more powerful, but it also needs stronger discipline around speaker attribution, telemetry isolation, and continuity hygiene. The architecture gains expressive control at the cost of a more demanding runtime and more subtle failure modes.&lt;br&gt;
 I'm not sure yet whether I'll release the source code, but I'm honestly thinking I should. The project is here: &lt;a href="https://AuraCoreCF.github.io" rel="noopener noreferrer"&gt;https://AuraCoreCF.github.io&lt;/a&gt;&lt;/p&gt;
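
A minimal sketch of the "stateful retrieval" idea, assuming a simple episode store of my own invention (the class and field names are hypothetical, not the project's API). The key difference from stateless RAG is that each retrieval also updates activation state, so what was recently recalled stays "live" and biases the next query:

```javascript
// Hypothetical sketch of episode-tied, stateful retrieval.
// Unlike stateless RAG, each retrieve() call updates per-episode activation,
// so continuity carries over between queries.
class EpisodeStore {
  constructor() {
    this.episodes = []; // { id, theme, text, activation }
  }

  add(id, theme, text) {
    this.episodes.push({ id, theme, text, activation: 0 });
  }

  // Retrieval is stateful: matching episodes gain activation,
  // and prior activation contributes to the ranking (continuity).
  retrieve(query, k = 2) {
    const terms = query.toLowerCase().split(/\s+/);
    for (const ep of this.episodes) {
      const hits = terms.filter((t) => ep.text.toLowerCase().includes(t)).length;
      ep.activation = ep.activation * 0.8 + hits; // decay old state, add new evidence
    }
    return [...this.episodes]
      .sort((a, b) => b.activation - a.activation)
      .slice(0, k);
  }
}
```

The decay factor (0.8 here) is an assumed parameter; the point is only that ranking depends on history, not on the current query alone.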

</description>
      <category>ai</category>
      <category>vscode</category>
      <category>github</category>
      <category>llm</category>
    </item>
    <item>
      <title>AuraCoreCF: a local‑first cognitive runtime (not another chatbot wrapper)</title>
      <dc:creator>AuraCore Cognitive Field AI Developer.</dc:creator>
      <pubDate>Mon, 23 Mar 2026 00:54:44 +0000</pubDate>
      <link>https://forem.com/burstfirea47050/auracorecf-a-local-first-cognitive-runtime-not-another-chatbot-wrapper-2li2</link>
      <guid>https://forem.com/burstfirea47050/auracorecf-a-local-first-cognitive-runtime-not-another-chatbot-wrapper-2li2</guid>
      <description>&lt;p&gt;Most “AI agents” today are just chatbots with longer prompts and a vector DB bolted on the side. They feel smart for a few turns, then forget you, lose the plot, or hallucinate their own state.&lt;br&gt;
​&lt;/p&gt;

&lt;p&gt;Over the past few months I’ve been building something different: AuraCoreCF, a local‑first cognitive runtime that treats the language model as the voice, not the mind. The “mind” is an explicit internal state engine that lives outside the model and persists over time.&lt;/p&gt;

&lt;p&gt;What Aura actually does&lt;br&gt;
Aura runs alongside your local LLM (e.g., Ollama) and keeps a continuous internal state across sessions instead of stuffing more tokens into a prompt and hoping. Under the hood it maintains seven activation fields (attention, meaning, goal, trust, skill, context, identity), each represented as a 64‑dimensional vector that evolves over time.&lt;/p&gt;

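To make the field idea concrete, here is a minimal sketch of my own (the decay constant and update rule are assumptions, not AuraCoreCF internals): seven named fields, each a 64-dimensional vector that decays every cycle and absorbs new input.

```javascript
// Sketch: seven named activation fields, each a 64-dimensional vector.
// Every cycle, all fields decay toward zero; the addressed field absorbs input.
const FIELD_NAMES = ["attention", "meaning", "goal", "trust", "skill", "context", "identity"];
const DIM = 64;
const DECAY = 0.95; // assumed per-cycle decay factor

function makeFields() {
  const fields = {};
  for (const name of FIELD_NAMES) fields[name] = new Float64Array(DIM);
  return fields;
}

// One update cycle: every field decays, then the named field adds the input vector.
function step(fields, name, input) {
  for (const f of Object.values(fields)) {
    for (let i = 0; i !== DIM; i++) f[i] *= DECAY;
  }
  const target = fields[name];
  for (let i = 0; i !== DIM; i++) target[i] += input[i];
}
```

With this shape, "state" is just 7 × 64 floats, which is cheap to persist between sessions and independent of context length.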
&lt;p&gt;On every cycle, a small salience resolver decides what actually matters right now based on recency, momentum, and relevance, then builds a field‑weighted system prompt for the model. The model never “sees” your entire life story; it sees what is cognitively active, with the rest decaying or being sidelined instead of exploding context length.&lt;/p&gt;

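One plausible reading of that resolver, sketched in JavaScript (the weights, threshold, and formula are my assumptions for illustration): score each candidate by recency, momentum, and relevance, and admit only what clears a threshold into the prompt.

```javascript
// Hypothetical salience scoring: score = weighted mix of recency, momentum, relevance.
// Only items above a threshold make it into the system prompt.
function salience(item, now) {
  const age = now - item.lastSeen;            // ms since last activation
  const recency = Math.exp(-age / 60000);     // decays over roughly a minute
  const momentum = item.recentHits / (item.recentHits + 1);
  return 0.4 * recency + 0.3 * momentum + 0.3 * item.relevance;
}

function buildPrompt(items, now, threshold = 0.35) {
  const active = items
    .map((it) => ({ it, s: salience(it, now) }))
    .filter((x) => x.s >= threshold)
    .sort((a, b) => b.s - a.s);
  return active.map((x) => x.it.text).join("\n");
}
```

The effect is the one the post describes: stale, low-momentum material falls below threshold and simply never reaches the model, rather than padding the context window.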
&lt;p&gt;Memory that isn’t just “more context”&lt;br&gt;
Instead of dumping transcripts into a vector store, Aura has an episodic memory layer (a Temporal Continuity Field) that tracks episodes and how they connect. It’s closer to “what has this agent been doing with this person over days/weeks?” than “what are the last 50 messages?”.&lt;/p&gt;

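A rough sketch of what "episodes and how they connect" could look like as a data structure (this is my illustration, not the project's implementation): episodes as graph nodes with links between related ones, so recall can walk a thread of ongoing work instead of replaying raw messages.

```javascript
// Sketch of an episode graph standing in for a "Temporal Continuity Field":
// episodes are nodes; consecutive or thematically related episodes are linked.
class ContinuityField {
  constructor() {
    this.episodes = new Map(); // id -> { id, theme, links: Set }
  }
  addEpisode(id, theme) {
    this.episodes.set(id, { id, theme, links: new Set() });
  }
  link(a, b) {
    this.episodes.get(a).links.add(b);
    this.episodes.get(b).links.add(a);
  }
  // All episodes reachable from a starting episode: one ongoing thread of work.
  thread(startId) {
    const seen = new Set([startId]);
    const stack = [startId];
    while (stack.length > 0) {
      const id = stack.pop();
      for (const next of this.episodes.get(id).links) {
        if (!seen.has(next)) { seen.add(next); stack.push(next); }
      }
    }
    return [...seen];
  }
}
```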
&lt;p&gt;Reward signals (response quality, coherence, emotional alignment, plus explicit thumbs up/down) slowly reshape which fields dominate for a given user. Over time, the runtime learns how to think with you, not just what to say back.&lt;/p&gt;

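The "slowly reshape" part can be pictured as a small weight update, sketched here under my own assumptions (learning rate, normalization, and function name are illustrative): feedback nudges the weights of whichever fields drove the response, and a small learning rate keeps adaptation gradual.

```javascript
// Assumed sketch: explicit feedback shifts per-user field weights.
// reward is in [-1, 1] (e.g., thumbs up = 1, thumbs down = -1).
function updateWeights(weights, activeFields, reward, lr = 0.05) {
  const updated = { ...weights };
  for (const name of activeFields) {
    updated[name] = updated[name] + lr * reward;
  }
  // Renormalize so the weights stay a distribution over fields.
  const total = Object.values(updated).reduce((a, b) => a + b, 0);
  for (const name of Object.keys(updated)) updated[name] /= total;
  return updated;
}
```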
&lt;p&gt;What Aura is not&lt;br&gt;
Not a new model and not fine‑tuning. It works with your existing local model; all cognition happens before/after inference.&lt;/p&gt;

&lt;p&gt;Not magic AGI. The LLM is still doing the generation; Aura is just giving it a more structured, persistent mind to work with.&lt;/p&gt;

&lt;p&gt;Not cloud‑locked. The runtime itself is JavaScript, running locally; you only need GPU/CPU for the model, not for the cognitive layer.&lt;/p&gt;

&lt;p&gt;Why this might interest you&lt;br&gt;
For LLM devs / local‑AI hackers: this is an attempt to formalize “agent state” as a first‑class runtime concern instead of frameworks endlessly re‑implementing ad‑hoc memory, tools, and prompt hacks. If you’ve ever hit context limits, weird regressions in long conversations, or brittle agent graphs, you’ll recognize the pain this targets.&lt;/p&gt;

&lt;p&gt;For indie hackers / builders: Aura is meant to sit underneath products, not be the product. You can build your own UI and business logic on top of a runtime that already handles continuity, emotional carry‑through, and evolving user preferences. No Python orchestration stack required.&lt;/p&gt;

&lt;p&gt;For AI enthusiasts: this is a real, running thing I use daily, not a theoretical post. It has rough edges, and it will absolutely break in places, but it already feels less like “talking to a goldfish” and more like something that remembers how it feels about you from yesterday.&lt;/p&gt;

&lt;p&gt;Status and honesty&lt;br&gt;
Aura is early, experimental, and not fully open‑sourced yet. The core cognitive engine is still closed while I harden it and see if it’s genuinely useful beyond my own setup. There are bugs, UX gaps, and design decisions I may have gotten wrong.&lt;/p&gt;

&lt;p&gt;If you want polished SaaS, this is not it. If you want to poke at a concrete attempt to give local models a persistent mind, see what breaks, and tell me where the ideas fail, you’re the person I’m trying to reach.&lt;/p&gt;

&lt;p&gt;More details, diagrams, and docs: &lt;a href="https://AuraCoreCF.github.io" rel="noopener noreferrer"&gt;AuraCoreCF.github.io&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If this resonates, I’m happy to go deep on implementation details, failure modes, or why I chose fields over yet another RAG stack.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>javascript</category>
      <category>buildinpublic</category>
    </item>
  </channel>
</rss>
