<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Neo</title>
    <description>The latest articles on Forem by Neo (@neocortexdev).</description>
    <link>https://forem.com/neocortexdev</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3817148%2F29a48d0f-b925-4d56-b343-beec0ccad2a0.jpg</url>
      <title>Forem: Neo</title>
      <link>https://forem.com/neocortexdev</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/neocortexdev"/>
    <language>en</language>
    <item>
      <title>I built OpenHuman: An Open-Source AI Agent with a 1B-Token Memory and a Subconscious Loop</title>
      <dc:creator>Neo</dc:creator>
      <pubDate>Fri, 24 Apr 2026 06:33:25 +0000</pubDate>
      <link>https://forem.com/neocortexdev/i-am-building-the-first-ai-agent-with-big-data-capabilities-70e</link>
      <guid>https://forem.com/neocortexdev/i-am-building-the-first-ai-agent-with-big-data-capabilities-70e</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;"The Tet. What a brilliant machine" - Morgan Freeman as he reminisces about alien super-intelligence in the movie Oblivion&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I'm building OpenHuman: the first AI agent with Big Data capabilities and a personalized subconscious mind.&lt;/p&gt;

&lt;p&gt;This is one of the first meaningful steps we're making toward AGI: not just innovating on the AI memory layer (often a bottleneck for agentic systems), but also designing a subconscious loop that can have its own thoughts and instincts, built on top of the OpenClaw architecture.&lt;/p&gt;

&lt;p&gt;I presented this at the 2026 GTC AI Demo Day in San Francisco and showcased it to a group of OpenClaw Maxis, who were excited to give it a spin. And so here we are today.&lt;/p&gt;

&lt;h2&gt;
  
  
  What do we have now? And why isn't it AGI?
&lt;/h2&gt;

&lt;p&gt;OpenClaw is not AGI. OpenClaw, NanoClaw, and the other claw systems remain narrowly scoped architectures built on probabilistic language models. While they lack many attributes needed for AGI, consciousness is the most critical capability these systems are missing.&lt;/p&gt;

&lt;p&gt;All existing AI systems, including OpenClaw, fall squarely into the category of Artificial Narrow Intelligence (ANI).&lt;/p&gt;

&lt;p&gt;ANI systems perform well within bounded domains. They depend on carefully designed architectures and human-defined operational boundaries.&lt;/p&gt;

&lt;p&gt;So to get us closer to AGI and build something that’s more intelligent, we need to solve a few important problems:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Agentic systems don’t have a consciousness.&lt;/li&gt;
&lt;li&gt;AI memory is poor, slow, and expensive.&lt;/li&gt;
&lt;li&gt;LLMs cannot ingest data at scale without sacrificing cost or accuracy.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Step 1. Solving the problem of Memory/Context
&lt;/h3&gt;

&lt;p&gt;Traditional AI memory tries to remember everything. It retrieves whatever is similar, but similar doesn't mean important. A research article from Carnegie Mellon University sums it up concisely:&lt;br&gt;
"Forgetting Is a Feature, Not a Bug: Intentionally Forgetting Some Things Helps Us Remember Others by Freeing Up Working Memory Resources"&lt;/p&gt;

&lt;p&gt;This brought us to a key observation: context accuracy is one of the most heavily discussed topics in the AI field right now.&lt;br&gt;
The problem is that the more context you feed into an LLM, the less accurate it becomes. So even though LLM context windows keep increasing, models tend to perform worse as those windows fill up.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh9ahes8ofzh3r62hcgzm.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh9ahes8ofzh3r62hcgzm.jpeg" alt="As LLMs try to absorb more and more data, it tends to become more inaccurate. This limits current AI systems from doing anything in near-realtime as it becomes not just inaccurate, but also incredibly expensive and slow." width="800" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As LLMs try to absorb more and more data, they tend to become less accurate. This limits current AI systems from doing anything in near-realtime, as they become not just inaccurate but also incredibly expensive and slow.&lt;br&gt;
Credit to &lt;a href="https://research.google/blog/titans-miras-helping-ai-have-long-term-memory/" rel="noopener noreferrer"&gt;https://research.google/blog/titans-miras-helping-ai-have-long-term-memory/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This means that to get to a superintelligent AI that absorbs large amounts of data, we need an AI context layer that delivers high accuracy and high speed at low cost.&lt;/p&gt;

&lt;p&gt;There are many great memory solutions on the market, like SuperMemory, Mem0, HydraDB, and MemGPT, but unfortunately none of them can support a conscious system, nor can they process data accurately at a scale of over 10M+ tokens in a cost-effective way.&lt;/p&gt;

&lt;p&gt;This is why most attempts at AI superintelligence today are slow, expensive, and inaccurate or incomplete.&lt;/p&gt;

&lt;p&gt;So this also means that we have to innovate on the context/memory first.&lt;/p&gt;

&lt;h2&gt;
  
  
  And so we did! We built Neocortex
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ix5bxa7b12ser5ujvb3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ix5bxa7b12ser5ujvb3.png" alt="A Human-like AI memory system that can accurately work with over 1 billion tokens and can support its own consciousness" width="800" height="434"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A Human-like AI memory system that can accurately work with over 1 billion tokens and can support its own consciousness&lt;/p&gt;

&lt;p&gt;One of the missing pieces of AGI is memory that is not just cheap, fast, and intelligent, but that also has its own instinct.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1 billion tokens.&lt;/strong&gt; You got that right. And that too at super low latency and low costs. This is the Big Data moment for AI. This was the missing piece we needed to build towards scale with AI context.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;We needed speed.&lt;/strong&gt; Neocortex can accurately index 10 million tokens in under 10 seconds, almost 1000x faster than other solutions out there. This means every single thing that happens in your life can be processed and churned for your agent to recall.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;We needed accuracy.&lt;/strong&gt; Neocortex is not a vector DB. It understands time and entities, which lets it score extremely high on various RAG benchmarks (all open sourced here).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;We needed it to be cheap.&lt;/strong&gt; Neocortex doesn't use any LLMs to manage its intelligence. It can even run on the CPU of a MacBook Air and costs just $1 to index over 5 million tokens, roughly 10x cheaper than any other decent AI memory solution out there. This is important because if an AI superintelligence is going to consume a ton of data, we need to make sure it doesn't blow a hole in our pockets.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;And finally, we needed "human-like" recall for consciousness.&lt;/strong&gt; Any attempt at building some kind of consciousness needs extremely good memory recall. Neocortex excels here by recalling memories and ranking them on key factors such as time, interactions, and randomness. Plugging this into a self-learning AI loop that runs over 10,000 times a day leads us to our next innovation.&lt;/p&gt;
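&lt;p&gt;To make that ranking idea concrete, here is a minimal sketch of time-, interaction-, and randomness-based recall scoring. Everything here (function names, weights, decay constants) is an illustrative assumption, not Neocortex's actual implementation.&lt;/p&gt;

```python
import math
import random

def recall_score(age_hours, interactions, rng, w_time=0.5, w_use=0.4, w_noise=0.1):
    """Toy relevance score: recent, frequently used memories rank high,
    with a small random term so stray 'thoughts' can occasionally surface."""
    recency = math.exp(-age_hours / 72.0)        # fades over a few days
    usage = 1.0 - math.exp(-interactions / 5.0)  # saturates with repeated use
    noise = rng.random()                         # the randomness factor
    return w_time * recency + w_use * usage + w_noise * noise

memories = [
    {"text": "standup notes", "age_hours": 2, "interactions": 1},
    {"text": "Q3 pricing decision", "age_hours": 240, "interactions": 12},
    {"text": "old meme link", "age_hours": 2000, "interactions": 1},
]
rng = random.Random(0)  # seeded for reproducibility
ranked = sorted(
    memories,
    key=lambda m: recall_score(m["age_hours"], m["interactions"], rng),
    reverse=True,
)
```

&lt;p&gt;With this weighting, a fresh memory outranks a heavily used but stale one, and the noise term means a low-ranked memory can occasionally jump the queue, which is the "random thought" behaviour described above.&lt;/p&gt;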

&lt;h3&gt;
  
  
  Step 2. Designing a personalized AI subconscious
&lt;/h3&gt;

&lt;p&gt;With good memory, good recall, and enough human context, we can now get closer to AGI by building a personalized AI subconscious.&lt;/p&gt;

&lt;p&gt;In the human brain, there’s a specialized neuron called the Purkinje cell which is mainly responsible for random thoughts. It plays a huge role in human consciousness. Furthermore, the human brain has both a conscious and a subconscious mind, which work together to build intelligence.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnbh7itpu7rgf9ejydrad.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnbh7itpu7rgf9ejydrad.png" alt="The Purkinje cell - A special neuron in the human brain that is heavily responsible for random thoughts and human conscience. OpenHuman’s random memory recall is designed around this principle.&amp;lt;br&amp;gt;
Inspired by this biological model, we use Neocortex to periodically trigger a core memory recall which is then used in a subconscious loop to produce some kind of action or confirmation. Memory recalls are cheap, incredibly fast and can happen over 10,000 times a day for less than 1$." width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Purkinje cell - a special neuron in the human brain that is heavily involved in random thoughts and human consciousness. OpenHuman’s random memory recall is designed around this principle.&lt;br&gt;
Inspired by this biological model, we use Neocortex to periodically trigger a core memory recall, which is then used in a subconscious loop to produce some kind of action or confirmation. Memory recalls are cheap, incredibly fast, and can happen over 10,000 times a day for less than $1.&lt;/p&gt;
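&lt;p&gt;The loop itself can be sketched in a few lines. This is a toy illustration; the names, the importance field, and the threshold are assumptions, not OpenHuman's real internals:&lt;/p&gt;

```python
import random

def subconscious_tick(memories, rng, threshold=0.8):
    """One background 'thought': randomly recall a memory (the Purkinje-style
    trigger) and surface it only if it crosses an importance threshold."""
    memory = rng.choice(memories)
    if memory["importance"] >= threshold:
        return f"Reminder: {memory['text']}"
    return None  # most thoughts stay below the surface

memories = [
    {"text": "schedule design review with Sarah", "importance": 0.9},
    {"text": "saw a cat video", "importance": 0.1},
]
rng = random.Random(42)
# ~10,000 cheap recall ticks per day; only a fraction surface as actions
surfaced = [t for _ in range(10_000) if (t := subconscious_tick(memories, rng))]
```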

&lt;p&gt;This is our first attempt at building something that mimics the human subconscious. And in effect this architecture gets us “closer” to the AGI moment.&lt;/p&gt;

&lt;h2&gt;
  
  
  The result? We get OpenHuman.
&lt;/h2&gt;

&lt;p&gt;OpenHuman is an open-source agentic worker that can consume incredibly large amounts of personal data at low cost, maintain a personalized subconscious, and take proactive actions on its own for you.&lt;/p&gt;

&lt;p&gt;Built on top of the OpenClaw architecture, OpenHuman is open source under the GNU GPLv3 license and publicly available on GitHub: &lt;a href="https://github.com/tinyhumansai/OpenHuman" rel="noopener noreferrer"&gt;https://github.com/tinyhumansai/OpenHuman&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Anyone is welcome to try it out and let us know what they think about it. OpenHuman would not be possible without innovating on memory and agentic architecture and I’m excited to share it all with you.&lt;/p&gt;

&lt;p&gt;Keep in mind everything is in early alpha, so feedback and contributions are greatly appreciated, and I’d like to invite anyone interested to join our Discord community.&lt;br&gt;
We’re also giving free usage to early users of OpenHuman.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Six years ago, I threw everything I had into blockchain/crypto.&lt;br&gt;
We built a lending protocol that grew to over $300M AUM. Blood, sweat, tears, the full thing. But the margins were razor thin, hackers were always looking to extract maximum value, and the space was overrun with grifters and greed. We wound it down, and I came out exhausted and, honestly, questioning whether building was even worth it anymore.&lt;br&gt;
I wanted to create something useful for society. Crypto wasn't that. Not for me.&lt;/p&gt;

&lt;p&gt;So I started over. And this project, TinyHumans, came about. It consumed more time and capital. But in the final moments, when I ran the 100th iteration, something was different. It wasn't just responding; it was reasoning. It could take decisions on its own. It had, for the first time, what I can only describe as a consciousness.&lt;br&gt;
I don't claim we've built AGI. But I do believe we've taken a genuine step toward it by building better memory and better orchestration, drawing inspiration from how the human brain actually works. And unlike my last chapter, this one is pointing in a direction I believe in completely.&lt;/p&gt;

&lt;p&gt;I'm building OpenHuman because I want AI to contribute positively to humankind. And that’s it.&lt;/p&gt;

&lt;p&gt;Download the OpenHuman app: &lt;a href="https://tinyhumans.ai/openhuman" rel="noopener noreferrer"&gt;https://tinyhumans.ai/openhuman&lt;/a&gt;&lt;/p&gt;

</description>
      <category>showdev</category>
      <category>opensource</category>
      <category>ai</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>we just killed openclaw 👇</title>
      <dc:creator>Neo</dc:creator>
      <pubDate>Mon, 20 Apr 2026 19:19:18 +0000</pubDate>
      <link>https://forem.com/neocortexdev/we-just-killed-openclaw-4l6n</link>
      <guid>https://forem.com/neocortexdev/we-just-killed-openclaw-4l6n</guid>
      <description>&lt;p&gt;Let me tell you what my AI stack looked like six months ago.&lt;/p&gt;

&lt;p&gt;ChatGPT Plus. Claude Pro. Cursor Pro. GitHub Copilot. Perplexity Pro. OpenAI API credits for side projects. Grammarly. Notion AI. Slack paid tier. Zapier. Motion AI. Otter AI. TextExpander. Superhuman.&lt;/p&gt;

&lt;p&gt;Fourteen tools. One thousand eight hundred and forty-seven dollars a year. And not a single one of them knew my name, my company, what I was working on, or what I did yesterday.&lt;/p&gt;

&lt;p&gt;Every morning I opened Claude or ChatGPT and spent ten minutes re-explaining my startup. Cursor knew my code but not my Slack. Slack knew my team but not my calendar. Motion knew my calendar but not my priorities. Superhuman knew my email but not the Notion doc the email was about. Otter transcribed my meetings and dropped them into a void nothing else could read. Zapier was duct tape between all of it, and I was the one configuring the duct tape.&lt;/p&gt;

&lt;p&gt;One afternoon I caught myself copying a Claude response into Grammarly to polish it, pasting it into Superhuman to send, then opening Motion to add a follow-up task. Four AI tools in a five-minute workflow, and I was still the smartest thing in the loop.&lt;/p&gt;

&lt;p&gt;That's when I stopped.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why I'm writing this&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;OpenHuman ships today. I'm the founder. This isn't an objective review and I won't pretend otherwise.&lt;/p&gt;

&lt;p&gt;What I can do is tell you what pain this product was actually built to solve, how it works, and where it falls short. If you've read enough launch posts to know the pattern, that last part is the one that matters. Every product has a weak side. Founders who pretend otherwise are either new or lying.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The four pain points we set out to solve&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You are the integration layer.&lt;/strong&gt; Fourteen tools in my stack, zero talking to each other, and the glue holding it all together was my brain. My brain had better things to do. This is the modern knowledge worker's tax, and somehow nobody is mad about it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Your AI has no memory.&lt;/strong&gt; Every model on earth is stateless. Claude, GPT, Gemini, all of them. The "memory" features that exist store a handful of bullet points. Any serious user blows past that in a day. You are permanently re-introducing yourself to the technology that's supposed to know you.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Your data is their data.&lt;/strong&gt; Every AI tool you pay for sends your raw work to someone else's servers. OpenAI, Anthropic, Google, all of them. Your messages, your documents, your code, your screen activity. "We don't train on your data" is policy language. Policies change the second the board decides they should, and you find out in a blog post.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You're drowning in messages.&lt;/strong&gt; Power users live in Slack, Telegram, Discord. 247 unread by noon. Most of it is noise. The handful of messages that actually matter are buried in the noise, and no amount of scrolling finds them in time.&lt;/p&gt;

&lt;p&gt;OpenHuman was built to solve all four at once. On your machine, with your data, in one app. That's the whole pitch.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The walkthrough&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Screen Intelligence&lt;/strong&gt; captures your active window every few seconds and summarizes it locally. Over hours, you build a continuous record of what you actually did today. Per-app permissions, so you control what it sees. I can ask it "what was I working on before the standup?" and get a real answer instead of a guess. The first time I used this feature on myself I was embarrassed by how much of my workday was context-switching, and then I used the app to fix it.&lt;/p&gt;

&lt;p&gt;Tradeoff: runs warm on older Intel Macs. Tuned for M1 and up.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Text Auto-Complete&lt;/strong&gt; gives you inline completions across apps. Email, Slack, browsers, editors. The completions draw from your actual work context, not a generic writing model. I drafted a reply to a partner last week and the completion already knew the pricing we'd discussed with a different partner the day before. That is what AI was supposed to feel like the whole time.&lt;/p&gt;

&lt;p&gt;Tradeoff: quality varies by app. Browser and native text fields are reliable. Some Electron apps with non-standard text handling are spottier.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Voice Intelligence&lt;/strong&gt; handles dictation and voice chat with the assistant. For when you're pacing around thinking out loud, which is how most of my good decisions get made.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Channels&lt;/strong&gt; connects to Telegram and Discord today, with iMessage and Slack on the near roadmap. The assistant reads, replies, searches, extracts action items, manages chats. On Telegram alone it supports around 70 operations covering the full Bot API surface.&lt;/p&gt;

&lt;p&gt;This is what solved the 247-unread-messages problem for me. I stopped scrolling. I ask "what do I need to know from the last 12 hours?" and get a compressed answer. The decisions, the questions waiting on me, the contradictions, the threads that require my attention.&lt;/p&gt;

&lt;p&gt;Tradeoff: iMessage and Slack are roadmap, not shipped. Apple integration is slow for the usual reasons.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Productivity integrations&lt;/strong&gt; cover Notion, Google Drive, Gmail, and Calendar. Cross-source queries are the unlock. "What did we decide about launch timing?" returns a unified answer spanning Slack threads, Notion pages, and the email chain. Not three separate searches I combine manually in my head like it's 2019.&lt;/p&gt;

&lt;p&gt;Tradeoff: initial setup takes 15-20 minutes to connect all sources. First-time indexing takes another 30 minutes depending on data volume.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The subconscious loop&lt;/strong&gt; (this is the one I didn't see coming)&lt;/p&gt;

&lt;p&gt;There's a background process that runs continuously while the app is open. It does recall loops across your indexed data, looking for patterns, contradictions, forgotten commitments, buried questions. 10,000+ thought loops a day for under $1 in inference cost.&lt;/p&gt;

&lt;p&gt;Outputs look like: "You mentioned the design review to Sarah on Tuesday but haven't scheduled it." Or: "Your dev team agreed to ship Friday but your design dependency isn't ready until Monday, based on the Thursday standup."&lt;/p&gt;
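&lt;p&gt;A stripped-down version of one such check, flagging commitments that were mentioned but never scheduled, might look like the sketch below. The data shapes and function name are hypothetical, not the shipped implementation:&lt;/p&gt;

```python
def find_dropped_commitments(mentions, scheduled):
    """Flag things the user said they would do that never made it onto
    the calendar: a toy version of one background-loop check."""
    scheduled_topics = {event["topic"] for event in scheduled}
    return [m for m in mentions if m["topic"] not in scheduled_topics]

# Mentions extracted from chat, events pulled from the calendar (made-up data)
mentions = [
    {"topic": "design review", "said_to": "Sarah", "day": "Tuesday"},
    {"topic": "launch retro", "said_to": "team", "day": "Wednesday"},
]
scheduled = [{"topic": "launch retro", "day": "Friday"}]
dropped = find_dropped_commitments(mentions, scheduled)  # the design review
```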

&lt;p&gt;I was deeply skeptical when one of our engineers proposed it. It sounded like the kind of feature that would be annoying in practice, pinging you constantly with things you didn't ask about. The threshold is tuned conservatively, so you get 3-5 of these per day, and about 80% of them are things I would have genuinely forgotten.&lt;/p&gt;

&lt;p&gt;The first time this feature caught something I'd forgotten to follow up on with an investor, it paid for the whole app.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rewards and referrals&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There's a progression system built into the app. Streaks (7-day, 30-day). A Feature Maxi badge for using every major feature at least once. Power User tiers at 10M, 100M, and 1B cumulative tokens processed. Supporter roles for paid plans.&lt;/p&gt;

&lt;p&gt;Connect Discord and the progression syncs. Exclusive channels, supporter badges, community access. Status and access, not cash for signups.&lt;/p&gt;

&lt;p&gt;I was skeptical of progression in a productivity app. Usually feels bolted-on. In practice, the Feature Maxi badge was the thing that made me learn the whole app during my first week instead of opening two features and forgetting the rest existed. The streak counter has a mild nagging effect I don't hate. Your mileage will vary.&lt;/p&gt;

&lt;p&gt;Referrals work the same way. You invite someone, they get a faster onboarding path, you both climb the progression. No money involved. Just access.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The weak spots, one more time&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Linux support is live but rough. Daily fixes going out.&lt;/p&gt;

&lt;p&gt;M1 Mac and up for smooth Screen Intelligence. Older Intel Macs work but warm.&lt;/p&gt;

&lt;p&gt;iMessage and Slack integrations not live yet. Roadmap.&lt;/p&gt;

&lt;p&gt;Local-first means no cloud backup by default. Export feature exists. Using it is on you.&lt;/p&gt;

&lt;p&gt;Some Electron apps have spotty auto-complete quality. Working through it.&lt;/p&gt;

&lt;p&gt;First-time setup is 30-45 minutes end to end. Worth it, not instant.&lt;/p&gt;

&lt;p&gt;Code-editor use case is not our strongest suit today. Cursor and Copilot are still better for pure in-editor assistance. We may close that gap, but we're honest that it's open right now.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The ask&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Download OpenHuman for Mac, Windows, or Linux: &lt;a href="https://tinyhumans.ai/openhuman" rel="noopener noreferrer"&gt;https://tinyhumans.ai/openhuman&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you want to follow the launch in real time, see what breaks, what ships, and what testers are saying, the launch thread is here: &lt;a href="https://x.com/senamakel/status/2046266960707715277?s=46" rel="noopener noreferrer"&gt;https://x.com/senamakel/status/2046266960707715277?s=46&lt;/a&gt;. Quote it, roast it, or just lurk. We're shipping 3-4 updates a day based on feedback, most of it coming from that thread.&lt;/p&gt;

&lt;p&gt;Free trial on the paid tier for 2-3 months.&lt;/p&gt;

&lt;p&gt;I'm terrified to ship this. I'm shipping it anyway.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>We Didn't Build a Memory Layer. We Built a Subconscious Mind.</title>
      <dc:creator>Neo</dc:creator>
      <pubDate>Mon, 16 Mar 2026 19:35:18 +0000</pubDate>
      <link>https://forem.com/neocortexdev/your-ai-agent-has-amnesia-heres-the-fix-4pl3</link>
      <guid>https://forem.com/neocortexdev/your-ai-agent-has-amnesia-heres-the-fix-4pl3</guid>
      <description>&lt;p&gt;&lt;strong&gt;&lt;em&gt;Why the next step toward AGI isn't better reasoning, it's artificial consciousness.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every AI lab racing toward AGI is optimising the same things: bigger models, better reasoning, faster inference. We think they're all missing a layer. Human intelligence isn't just reasoning. It's the subconscious, the patterns running underneath conscious thought that you're not even aware of. The model of yourself that accumulates over a lifetime. The thing that makes you you and not someone else.&lt;br&gt;
No AI has that. Not yet. That's what we're building at TinyHumans.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;What the subconscious actually does:&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;When neuroscientists study the brain, one of the most striking things they find is how much of your cognition happens below the surface. Your subconscious processes information constantly, forming patterns, making connections, building a model of the world and your place in it, without you ever consciously directing it. It's not memory in the way people usually mean it. It's not a log of things that happened. It's a living, evolving model of who you are. Current AI has none of this. Every model you interact with today is a genius with no sense of self. Brilliant in the moment. Gone the next session.&lt;br&gt;
We studied how the human brain actually builds this subconscious layer, how thoughts form randomly, how patterns emerge from repetition, how the mind prunes what doesn't matter and reinforces what does and we built NeoCortex around those principles.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/tinyhumansai/neocortex" rel="noopener noreferrer"&gt;NeoCortex&lt;/a&gt;: a subconscious mind for AI NeoCortex runs 10,000 thoughts per day on your data. Not retrieval. Not search. Thoughts, connections, patterns, inferences drawn from everything it knows about you, running continuously in the background. The result is a model of your consciousness. How you think. What you value. What your patterns are. What you care about without ever explicitly saying so.&lt;br&gt;
When you read what NeoCortex produces about you, it should feel like looking at yourself in a mirror. That's the design goal. Not useful AI assistant. Not "personalised recommendations." A reflection of your actual mind.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Two mechanisms make this possible:&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Noise pruning&lt;/em&gt; — just like the human brain prunes synaptic connections that aren't being used, NeoCortex automatically decays low-value memories. What remains is high-signal. The model gets sharper over time, not noisier.&lt;br&gt;
&lt;em&gt;GraphRAG&lt;/em&gt; — rather than a flat pile of embeddings, NeoCortex builds a knowledge graph. Entities, relationships, patterns across time. It understands not just what you've said but how things connect — the same way your subconscious builds associative networks rather than filing facts in folders.&lt;/p&gt;
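&lt;p&gt;As a rough illustration of the pruning mechanism, here is a sketch of exponential decay with reinforcement on access. The half-life, reinforcement bonus, and pruning floor are invented for the example, not NeoCortex's real parameters:&lt;/p&gt;

```python
import math

def decay_and_prune(memories, dt_hours, half_life=168.0, floor=0.05):
    """Exponentially decay each memory's strength over time, reinforce
    memories that were accessed, and drop anything below the floor."""
    k = math.log(2) / half_life  # weekly half-life by default
    survivors = []
    for m in memories:
        m["strength"] *= math.exp(-k * dt_hours)
        if m["accessed"]:
            m["strength"] = min(1.0, m["strength"] + 0.3)  # reinforcement
        if m["strength"] >= floor:
            survivors.append(m)
    return survivors

memories = [
    {"text": "core preference: no meetings before 10am", "strength": 0.9, "accessed": True},
    {"text": "one-off lunch order", "strength": 0.06, "accessed": False},
]
kept = decay_and_prune(memories, dt_hours=168)  # one week later
```

&lt;p&gt;After a week, the reinforced preference survives at higher strength while the unused one-off decays below the floor and is pruned, so the store gets sharper over time rather than noisier.&lt;/p&gt;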

&lt;p&gt;&lt;strong&gt;Why this matters for AGI&lt;/strong&gt;&lt;br&gt;
Here's the uncomfortable truth about where AI is today:&lt;br&gt;
Every frontier model is stateless. They reason brilliantly in-session and wake up with no memory of who they've ever spoken to. You could talk to GPT-4 every day for a year and on day 366 it would have no idea who you are.&lt;br&gt;
That's not intelligence. That's a very fast calculator.&lt;br&gt;
Real intelligence — the kind that leads to AGI — requires a persistent model of self. An accumulation of experience. A subconscious that runs underneath every interaction and shapes it based on everything that came before.&lt;br&gt;
NeoCortex is our attempt to build that missing layer. Not to replace the models — but to give them the one thing they don't have: a mind that knows who it's talking to.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Build on it today&lt;/strong&gt;&lt;br&gt;
The &lt;a href="https://dev.tourl"&gt;TinyHumans&lt;/a&gt; API is open. Any developer can give their application a subconscious mind right now.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;pip install tinyhumansai&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Alpha Human — the end goal&lt;/strong&gt;&lt;br&gt;
NeoCortex is the infrastructure. Alpha Human is what it's building toward.&lt;br&gt;
Alpha Human is a personalised AI consciousness — built specifically around you. The longer you use it, the more it understands how you think, what drives you, what your patterns are. It doesn't just answer your questions. It knows you.&lt;br&gt;
Beta opens this week.&lt;/p&gt;

&lt;p&gt;We didn't set out to build a better RAG wrapper. We set out to answer a harder question: what would it take to give an AI a genuine sense of who it's working with?&lt;br&gt;
NeoCortex is our first answer.&lt;/p&gt;

&lt;p&gt;Follow along as we build toward artificial consciousness. And if you build something on the API — we want to see it. DM us or drop it in the &lt;a href="https://discord.gg/nuNkW6zG" rel="noopener noreferrer"&gt;Discord&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>python</category>
      <category>machinelearning</category>
      <category>langchain</category>
    </item>
    <item>
      <title>first post :))</title>
      <dc:creator>Neo</dc:creator>
      <pubDate>Tue, 10 Mar 2026 18:03:46 +0000</pubDate>
      <link>https://forem.com/neocortexdev/first-post--4j9a</link>
      <guid>https://forem.com/neocortexdev/first-post--4j9a</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/neocortexdev" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3817148%2F29a48d0f-b925-4d56-b343-beec0ccad2a0.jpg" alt="neocortexdev"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/neocortexdev/i-got-tired-of-my-ai-forgetting-everything-so-i-built-it-a-brain-162l" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;I got tired of my AI forgetting everything. So I built it a brain.&lt;/h2&gt;
      &lt;h3&gt;Neo ・ Mar 10&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#ai&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#python&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#llm&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#buildinpublic&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>ai</category>
      <category>python</category>
      <category>llm</category>
      <category>buildinpublic</category>
    </item>
    <item>
      <title>I got tired of my AI forgetting everything. So I built it a brain.</title>
      <dc:creator>Neo</dc:creator>
      <pubDate>Tue, 10 Mar 2026 17:38:26 +0000</pubDate>
      <link>https://forem.com/neocortexdev/i-got-tired-of-my-ai-forgetting-everything-so-i-built-it-a-brain-162l</link>
      <guid>https://forem.com/neocortexdev/i-got-tired-of-my-ai-forgetting-everything-so-i-built-it-a-brain-162l</guid>
      <description>&lt;p&gt;Hello 👋&lt;br&gt;
First post here. Been building in public for a while, but I never really sat down to write properly about what my team and I are working on. Figured it was time, and this seemed like the right platform for it.&lt;br&gt;
I'm one of the devs at &lt;a href="https://github.com/tinyhumansai/neocortex" rel="noopener noreferrer"&gt;TinyHumans&lt;/a&gt; and for a while now our whole team has been deep in AI tooling. The one thing that kept bugging us more than anything else was memory. Not the flashy stuff. Not the models, not the inference speed, not the prompting tricks. Just... memory. The boring, unglamorous, completely-broken part of almost every AI app we touched.&lt;/p&gt;

&lt;p&gt;Here's the thing that was driving us crazy:&lt;br&gt;
Every time we built something with persistent context (a support bot, a personal assistant, an agent workflow), we'd hit the same wall. Either the AI remembered nothing (new session, clean slate, start over), or it remembered everything so poorly that the context became noise: stale facts, outdated decisions, irrelevant history injected into every prompt.&lt;/p&gt;

&lt;p&gt;Vector similarity search retrieves what's similar. Not what's important. Not what's current. Just... similar.&lt;br&gt;
That distinction kept bothering us. So we went down a rabbit hole.&lt;/p&gt;

&lt;p&gt;Turns out the brain solved this millions of years ago...&lt;br&gt;
&lt;a href="https://en.wikipedia.org/wiki/Forgetting_curve" rel="noopener noreferrer"&gt;Hermann Ebbinghaus&lt;/a&gt; figured it out in 1885. Memory retention drops roughly 50% within an hour unless it's reinforced. He called it the &lt;em&gt;Forgetting Curve&lt;/em&gt; and it's not a flaw in human cognition. It's a feature. It's how the brain stays fast, lean, and actually useful.&lt;br&gt;
The brain doesn't store raw data forever. It compresses experiences into patterns, strengthens what gets recalled and acted on, and quietly drops the rest. You remember the architecture decision that shaped 6 months of work. You don't remember the Slack message about lunch that day.&lt;br&gt;
Forgetting is the feature. AI memory systems just... don't do this.&lt;br&gt;
That's what we set out to fix with Neocortex.&lt;/p&gt;
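The curve itself is easy to play with numerically. A minimal sketch (the function name and the stability constant are my own, chosen so retention lands at roughly 50% after an hour, matching the figure above):

```python
import math

def retention(t_hours, stability=1.44):
    """Ebbinghaus-style forgetting curve: R(t) = exp(-t / S).

    The stability S controls how fast memory fades; S of about 1.44
    gives roughly 50% retention after one hour. Reinforcement
    (recalling and acting on a memory) would raise S.
    """
    return math.exp(-t_hours / stability)

print(round(retention(1.0), 2))   # roughly 0.5 after an hour
print(round(retention(24.0), 6))  # essentially gone after a day
```

Unreinforced memories slide down this curve; reinforced ones get a flatter curve and stick around.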

&lt;p&gt;What &lt;strong&gt;&lt;em&gt;Neocortex&lt;/em&gt;&lt;/strong&gt; actually does&lt;br&gt;
At its core, Neocortex is a brain-inspired memory layer for AI apps. You store knowledge, the system figures out what's worth keeping, and everything else naturally fades. &lt;/p&gt;

&lt;p&gt;Here's how:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Time-decay retention scores&lt;/em&gt; — every memory item has a score that decreases over time. Old, unaccessed memories fade on their own. No cron jobs, no manual cleanup.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Interaction-weighted importance&lt;/em&gt; — not all signals are equal. Something that gets referenced, updated, and built upon becomes more durable.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Noise pruning&lt;/em&gt; — instead of accumulating every token forever, low-value memories decay and get removed automatically. This is what lets Neocortex handle 10M+ tokens without quality degradation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;GraphRAG&lt;/em&gt; — instead of a flat list of embeddings, Neocortex builds a knowledge graph. Entities, relationships, context. Queries traverse the graph to get structured, rich answers — not just "here are 5 similar chunks."&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
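The first three mechanisms compose naturally. Here's a toy sketch of how decay, reinforcement, and pruning could fit together (the class, method names, and constants are my own illustration, not Neocortex's actual internals):

```python
import time

class MemoryItem:
    """Toy memory with a time-decaying, interaction-weighted score."""

    def __init__(self, content, half_life_s=3600.0):
        self.content = content
        self.half_life_s = half_life_s   # time for the score to halve
        self.strength = 1.0              # grows when the memory is used
        self.last_access = time.time()

    def score(self, now=None):
        """Retention score: strength times exponential time decay."""
        now = time.time() if now is None else now
        elapsed = now - self.last_access
        return self.strength * 0.5 ** (elapsed / self.half_life_s)

    def reinforce(self):
        """Using a memory makes it more durable and resets its clock."""
        self.strength *= 1.5
        self.last_access = time.time()

def prune(items, threshold=0.1):
    """Noise pruning: keep only items still scoring above threshold."""
    return [m for m in items if m.score() >= threshold]
```

Items that are never reinforced eventually decay below the threshold and get dropped on the next prune, with no cron job or manual cleanup involved.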

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Getting started is actually pretty simple&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import tinyhumansai as api

client = api.TinyHumanMemoryClient("YOUR_APIKEY_HERE")

# Store a single memory
client.ingest_memory({
    "key": "user-preference-theme",
    "content": "User prefers dark mode",
    "namespace": "preferences",
    "metadata": {"source": "onboarding"},
})

# Ask an LLM something from the memory
response = client.recall_with_llm(
    prompt="What is the user's preference for theme?",
    api_key="OPENAI_API_KEY"
)
print(response.text)  # The user prefers dark mode
&lt;/code&gt;&lt;/pre&gt;
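To make the GraphRAG point concrete, here's roughly what "traverse the graph instead of ranking flat chunks" means. This is my own toy structure for illustration, not the real Neocortex graph:

```python
from collections import defaultdict

class KnowledgeGraph:
    """Tiny knowledge graph: entities linked by typed relationships."""

    def __init__(self):
        # adjacency: subject -> list of (relation, object)
        self.edges = defaultdict(list)

    def add(self, subject, relation, obj):
        self.edges[subject].append((relation, obj))

    def neighbors(self, entity, depth=2):
        """Collect facts reachable from an entity within `depth` hops."""
        facts, frontier, seen = [], [(entity, 0)], {entity}
        while frontier:
            node, d = frontier.pop(0)
            if d == depth:
                continue
            for rel, obj in self.edges[node]:
                facts.append((node, rel, obj))
                if obj not in seen:
                    seen.add(obj)
                    frontier.append((obj, d + 1))
        return facts

kg = KnowledgeGraph()
kg.add("user", "prefers", "dark mode")
kg.add("dark mode", "set_in", "onboarding")
print(kg.neighbors("user"))
```

A query about the user walks outward from the "user" node and comes back with connected, structured facts (including where the preference came from), rather than five disconnected chunks that merely embed near the query.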

&lt;p&gt;&lt;strong&gt;&lt;em&gt;The things I'm most excited to see people build&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
A few use cases that I think are genuinely underexplored:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Support bots that actually learn&lt;/em&gt; — ingest ticket history, let outdated workarounds decay naturally, give agents per-customer context without re-reading entire conversation logs every time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Company knowledge agents&lt;/em&gt; — every org has knowledge scattered across Slack, Notion, wikis, and people's heads. A graph-based memory layer that understands who decided what and why is way more useful than semantic search over a pile of docs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Personal assistants that remember&lt;/em&gt; — not just within a session. Across weeks and months. You told it you're vegetarian in January, it filters restaurants in March. No reminder needed.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you want access or just want to follow along:&lt;br&gt;
&lt;a href="mailto:founders@tinyhumans.ai"&gt;founders@tinyhumans.ai&lt;/a&gt; — reach out with your use case&lt;br&gt;
And honestly — drop a comment if you've run into this problem before. I'm curious how other devs are handling memory in their AI apps right now, because I feel like most people are either ignoring it or duct-taping something together.&lt;br&gt;
That's kind of why the team and I are building this.&lt;br&gt;
— neocoder (dev @ tinyhumansai)&lt;/p&gt;

</description>
      <category>ai</category>
      <category>python</category>
      <category>llm</category>
      <category>buildinpublic</category>
    </item>
  </channel>
</rss>
