<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Ravikash Gupta</title>
    <description>The latest articles on Forem by Ravikash Gupta (@blockmandev).</description>
    <link>https://forem.com/blockmandev</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3771759%2F4d6a7ac7-ce8c-43cd-b8ac-52435813dd9c.jpg</url>
      <title>Forem: Ravikash Gupta</title>
      <link>https://forem.com/blockmandev</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/blockmandev"/>
    <language>en</language>
    <item>
      <title>QORA - Native Rust LLM Inference Engine</title>
      <dc:creator>Ravikash Gupta</dc:creator>
      <pubDate>Sat, 28 Feb 2026 17:51:01 +0000</pubDate>
      <link>https://forem.com/blockmandev/qora-native-rust-llm-inference-engine-4n4n</link>
      <guid>https://forem.com/blockmandev/qora-native-rust-llm-inference-engine-4n4n</guid>
      <description>&lt;p&gt;Pure Rust inference engine for the SmolLM3-3B language model. No Python runtime, no CUDA, no external dependencies. Single executable + quantized weights = portable AI on any machine.&lt;/p&gt;

&lt;p&gt;Download 🤗: &lt;a href="https://huggingface.co/qoranet/QORA-LLM" rel="noopener noreferrer"&gt;https://huggingface.co/qoranet/QORA-LLM&lt;/a&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Detail&lt;/th&gt;
&lt;th&gt;Spec&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Base Model&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;SmolLM3-3B (HuggingFaceTB/SmolLM3-3B)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Parameters&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;3.07 billion&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Quantization&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Q4 (4-bit symmetric, group_size=32)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Model Size&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;1.68 GB (Q4) / ~6 GB (F16)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Executable&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;6.7 MB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Context Length&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;65,536 tokens (up to 128K with YaRN)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Platform&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Windows x86_64 (CPU-only)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Key Architectural Innovation: NoPE (No Position Encoding)&lt;br&gt;
SmolLM3 uses a 3:1 NoPE ratio — 75% of layers have no positional encoding at all. Only layers 3, 7, 11, 15, 19, 23, 27, 31, 35 apply RoPE. This reduces computational overhead and enables better long-context generalization.&lt;/p&gt;
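&lt;p&gt;That 3:1 schedule can be sketched in a few lines (a minimal illustration, assuming 0-indexed layers and the 36 decoder layers implied by the nine listed RoPE layers; this is not QORA's actual source):&lt;/p&gt;

```python
# Sketch of SmolLM3's 3:1 NoPE layer schedule: in every group of four
# layers, only the last one applies RoPE; the other three carry no
# positional encoding at all.
NUM_LAYERS = 36  # assumption: inferred from the nine RoPE layers x 4

def uses_rope(layer_idx):
    # layers 3, 7, 11, ..., 35 (0-indexed) apply rotary position embeddings
    return layer_idx % 4 == 3

rope_layers = [i for i in range(NUM_LAYERS) if uses_rope(i)]
print(rope_layers)  # [3, 7, 11, 15, 19, 23, 27, 31, 35]
```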

&lt;p&gt;Performance Benchmarks&lt;br&gt;
Test Hardware: Windows 11, CPU-only (no GPU acceleration)&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>rust</category>
      <category>rustai</category>
    </item>
    <item>
      <title>QORA-TTS - Native Rust Text-to-Speech Engine</title>
      <dc:creator>Ravikash Gupta</dc:creator>
      <pubDate>Sat, 28 Feb 2026 17:47:08 +0000</pubDate>
      <link>https://forem.com/blockmandev/qora-tts-native-rust-text-to-speech-engine-1k6i</link>
      <guid>https://forem.com/blockmandev/qora-tts-native-rust-text-to-speech-engine-1k6i</guid>
      <description>&lt;p&gt;Pure Rust text-to-speech synthesis engine. No Python runtime, no CUDA, no external dependencies. Single executable + quantized weights = portable TTS on any machine.&lt;/p&gt;

&lt;p&gt;Download 🤗: &lt;a href="https://huggingface.co/qoranet/QORA-TTS" rel="noopener noreferrer"&gt;https://huggingface.co/qoranet/QORA-TTS&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All engines are pure Rust, CPU-only, single-binary executables with no Python dependencies. &lt;/p&gt;

&lt;p&gt;Supported Languages&lt;br&gt;
English, Chinese, German, Italian, Portuguese, Spanish, Japanese, Korean, French, Russian, Beijing Dialect, Sichuan Dialect&lt;/p&gt;

&lt;p&gt;Quantization&lt;br&gt;
Q4: 4-bit symmetric, group_size=32 (default, 1.5 GB)&lt;br&gt;
F16: Half-precision floats (optional, ~3 GB)&lt;br&gt;
6.4x compression ratio from f32 to Q4&lt;/p&gt;
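&lt;p&gt;The arithmetic behind Q4 can be sketched as follows (an illustrative toy version of symmetric 4-bit group quantization, assuming one f32 scale per group; not QORA's actual code):&lt;/p&gt;

```python
# Symmetric 4-bit quantization: each group of weights shares one f32
# scale; values are rounded into the signed range [-7, 7].
def quantize_q4(group):
    scale = max(abs(x) for x in group) / 7.0 or 1.0  # avoid zero scale
    q = [max(-7, min(7, round(x / scale))) for x in group]
    return q, scale

def dequantize_q4(q, scale):
    return [qi * scale for qi in q]

# Per-weight cost: 4 bits plus a 32-bit scale shared by group_size weights.
group_size = 32
bits_per_weight = 4 + 32.0 / group_size  # 5.0 bits
print(32.0 / bits_per_weight)            # 6.4x compression vs f32
```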

</description>
      <category>rust</category>
      <category>ai</category>
      <category>tts</category>
      <category>programming</category>
    </item>
    <item>
      <title>Qoranet's Pure Rust AI Infrastructure</title>
      <dc:creator>Ravikash Gupta</dc:creator>
      <pubDate>Sat, 28 Feb 2026 04:02:15 +0000</pubDate>
      <link>https://forem.com/blockmandev/qoranet-1lok</link>
      <guid>https://forem.com/blockmandev/qoranet-1lok</guid>
      <description>&lt;h1&gt;
  
  
  ⚡ QoraNet
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fznbgn7v99d56nxfe4q1o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fznbgn7v99d56nxfe4q1o.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Pure Rust AI Infrastructure for the Privacy-First Blockchain
&lt;/h3&gt;



&lt;p&gt;&lt;strong&gt;Building AI models that run without Python, without cloud, without paid APIs.&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Just Rust. Just your machine. Just freedom.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  🧠 What We're Building
&lt;/h2&gt;

&lt;p&gt;QoraNet is building the &lt;strong&gt;world's first privacy-first blockchain with native AI infrastructure&lt;/strong&gt; — and we're open-sourcing the AI models that power it.&lt;/p&gt;

&lt;p&gt;Our AI stack is built &lt;strong&gt;entirely in Rust&lt;/strong&gt; using the &lt;a href="https://github.com/blockmandev/flash" rel="noopener noreferrer"&gt;Burn framework&lt;/a&gt;. No Python runtime. No PyTorch. No TensorFlow. No CUDA dependency. No C++ libraries.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;One binary. Zero dependencies. Runs anywhere.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  🔥 Released Models
&lt;/h2&gt;

&lt;h3&gt;
  
  
  QORA-LLM
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Pure Rust inference engine — no Python, no CUDA, no external dependencies&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Detail&lt;/th&gt;
&lt;th&gt;Spec&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Architecture&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;SmolLM3-3B based&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Language&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Pure Rust (Burn framework)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Dependencies&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Zero external ML libraries&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Quantization&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Optimized for local execution&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Platforms&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Windows, macOS, Linux, iOS, Android&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;GPU Required&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;No — runs on CPU&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;API Key Required&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Cost&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Free forever&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Clone and run — that's all you need&lt;/span&gt;
git clone https://github.com/qora-protocol/qora-llm
&lt;span class="nb"&gt;cd &lt;/span&gt;qora-llm
cargo run &lt;span class="nt"&gt;--release&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;div class="c-embed__content"&gt;
        &lt;div class="c-embed__cover"&gt;
          &lt;a href="https://huggingface.co/qoranet" class="c-link align-middle" rel="noopener noreferrer"&gt;
            &lt;img alt="" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-thumbnails.huggingface.co%2Fsocial-thumbnails%2Fqoranet.png" height="auto" class="m-0"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="c-embed__body"&gt;
        &lt;h2 class="fs-xl lh-tight"&gt;
          &lt;a href="https://huggingface.co/qoranet" rel="noopener noreferrer" class="c-link"&gt;
            qoranet (Qoranet)
          &lt;/a&gt;
        &lt;/h2&gt;
          &lt;p class="truncate-at-3"&gt;
            QORA (Quantum Orchestrated Runtime Architecture) is a next-gen platform merging AGI, blockchain, and secure digital infrastructure. QORA's  mission is to build the future of autonomous, intelligent...
          &lt;/p&gt;
        &lt;div class="color-secondary fs-s flex items-center"&gt;
          huggingface.co
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;








&lt;h2&gt;
  
  
  🗺️ Roadmap — What's Coming
&lt;/h2&gt;

&lt;p&gt;We're building a complete multi-modal AI suite in pure Rust:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;Status&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;th&gt;Link&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;QORA-LLM&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅ Released&lt;/td&gt;
&lt;td&gt;Language model — chat, reasoning, code&lt;/td&gt;
&lt;td&gt;&lt;a href="https://huggingface.co/qoranet/QORA-LLM" rel="noopener noreferrer"&gt;https://huggingface.co/qoranet/QORA-LLM&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;QORA-TTS&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅ Released&lt;/td&gt;
&lt;td&gt;Text-to-speech — voice synthesis&lt;/td&gt;
&lt;td&gt;&lt;a href="https://huggingface.co/qoranet/QORA-TTS" rel="noopener noreferrer"&gt;https://huggingface.co/qoranet/QORA-TTS&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;QORA-Vision-Image&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅ Released&lt;/td&gt;
&lt;td&gt;Image understanding&lt;/td&gt;
&lt;td&gt;&lt;a href="https://huggingface.co/qoranet/QORA-Vision-Image" rel="noopener noreferrer"&gt;https://huggingface.co/qoranet/QORA-Vision-Image&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;QORA-Vision-Video&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅ Released&lt;/td&gt;
&lt;td&gt;Video understanding&lt;/td&gt;
&lt;td&gt;&lt;a href="https://huggingface.co/qoranet/QORA-Vision-Video" rel="noopener noreferrer"&gt;https://huggingface.co/qoranet/QORA-Vision-Video&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;QORA-Agent&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;📅 Planned&lt;/td&gt;
&lt;td&gt;Autonomous AI agent with tool use&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Each model follows the same philosophy: &lt;strong&gt;pure Rust, zero dependencies, runs locally, free forever.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  🏗️ Part of the QoraNet Ecosystem
&lt;/h2&gt;

&lt;p&gt;Our AI models are a core component of the &lt;strong&gt;QoraNet blockchain&lt;/strong&gt; — the world's fastest Layer-1 with mandatory privacy.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;┌─────────────────────────────────────────────┐
│            QoraNet Blockchain               │
│          400ms Finality • 100k+ TPS         │
├─────────────┬───────────┬───────────────────┤
│  Privacy    │  DeFi     │  Social           │
│             │           │                   │
│  ZK-SNARKs  │  DEX      │  Encrypted Chat   │
│  PLONK+KZG  │  CLOB     │  Video/Voice      │
│  Stealth    │  Perps    │  AI Assistants    │
│  Addresses  │  Oracle   │  Bot Framework    │
├─────────────┴───────────┴───────────────────┤
│              AI Infrastructure              │
│                                             │
│  QORA-LLM • QORA-TTS • QORA-Vision          │
│  Pure Rust • On-chain billing • Verifiable  │
│  Decentralized inference across validators  │
└─────────────────────────────────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Key Blockchain Features
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;400ms instant finality&lt;/strong&gt; — 10-50x faster than any EVM chain&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mandatory ZK-SNARK privacy&lt;/strong&gt; — all tokens private by default&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;34 native modules&lt;/strong&gt; — DEX, orderbook, perpetuals, oracle, chat, AI, auth, KYC, and more&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Full EVM compatibility&lt;/strong&gt; — deploy Solidity contracts with built-in privacy&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Native encrypted chat&lt;/strong&gt; — E2E encrypted messaging, video, and voice calls&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;On-chain AI billing&lt;/strong&gt; — token-based payments for AI inference&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🔐 Our AI Philosophy
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Principle&lt;/th&gt;
&lt;th&gt;What It Means&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Pure Rust&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;No Python, no C++ FFI, no heavy runtimes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Zero Dependencies&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Single binary, nothing else to install&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Local First&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Runs on your hardware, not our servers&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Privacy by Default&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Your data never leaves your machine&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Free Forever&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;No API keys, no subscriptions, no limits&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Cross Platform&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Windows, macOS, Linux, iOS, Android&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Open Source&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Fully auditable, community-driven&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  ⚡ Why Rust for AI?
&lt;/h2&gt;

&lt;p&gt;Most AI models require Python + PyTorch + CUDA. We took a different path:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Python Stack&lt;/th&gt;
&lt;th&gt;QoraNet (Rust)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Install size&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;2-10 GB&lt;/td&gt;
&lt;td&gt;Single binary&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Startup time&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;10-30 seconds&lt;/td&gt;
&lt;td&gt;&amp;lt; 1 second&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Memory usage&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;High (GC overhead)&lt;/td&gt;
&lt;td&gt;Minimal (zero-cost abstractions)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;GPU required&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Usually yes&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Cross-compile&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Painful&lt;/td&gt;
&lt;td&gt;Native&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Security&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Runtime errors&lt;/td&gt;
&lt;td&gt;Compile-time safety&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Deployment&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Docker + dependencies&lt;/td&gt;
&lt;td&gt;Copy one file&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  🤝 Community &amp;amp; Contributing
&lt;/h2&gt;

&lt;p&gt;We're building in the open and welcome contributions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/qora-protocol" rel="noopener noreferrer"&gt;github.com/qora-protocol&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Twitter/X&lt;/strong&gt;: &lt;a href="https://twitter.com/qora_net" rel="noopener noreferrer"&gt;@Qora_Net&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LinkedIn&lt;/strong&gt;: &lt;a href="https://linkedin.com/company/qoranet" rel="noopener noreferrer"&gt;company/qoranet&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reddit&lt;/strong&gt;: &lt;a href="https://www.reddit.com/r/QoraNet/" rel="noopener noreferrer"&gt;r/QoraNet&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Email&lt;/strong&gt;: &lt;a href="mailto:info@qoranet.com"&gt;info@qoranet.com&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  📜 License
&lt;/h2&gt;

&lt;p&gt;Our AI models are open source. See individual model repos for specific license details.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Built with 🦀 Rust and 🔥 Burn&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;AI should be private, fast, free, and yours. We're making that real.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;⚡ &lt;strong&gt;QoraNet&lt;/strong&gt; — The Privacy-First Blockchain Infrastructure for AI&lt;/p&gt;

</description>
      <category>ai</category>
      <category>blockchain</category>
      <category>opensource</category>
      <category>rust</category>
    </item>
    <item>
      <title>I built a free self-hosted memory system for Claude CLI — replaces Supermemory/Mem0/Zep</title>
      <dc:creator>Ravikash Gupta</dc:creator>
      <pubDate>Fri, 13 Feb 2026 21:46:20 +0000</pubDate>
      <link>https://forem.com/blockmandev/i-built-a-free-self-hosted-memory-system-for-claude-cli-replaces-supermemorymem0zep-4dca</link>
      <guid>https://forem.com/blockmandev/i-built-a-free-self-hosted-memory-system-for-claude-cli-replaces-supermemorymem0zep-4dca</guid>
      <description>&lt;h2&gt;What it does&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Hybrid search: FTS5 full-text + vector similarity + importance + recency + access frequency&lt;/li&gt;
&lt;li&gt;4 memory types (static, dynamic, episodic, semantic)&lt;/li&gt;
&lt;li&gt;4 importance levels (critical → low)&lt;/li&gt;
&lt;li&gt;Memory deduplication &amp;amp; auto-merge&lt;/li&gt;
&lt;li&gt;Memory graph — link related memories together&lt;/li&gt;
&lt;li&gt;Time-decay scoring (recent memories rank higher)&lt;/li&gt;
&lt;li&gt;Auto semantic chunking for long texts&lt;/li&gt;
&lt;li&gt;LLM-based fact extraction from conversations&lt;/li&gt;
&lt;li&gt;Soft delete + restore&lt;/li&gt;
&lt;li&gt;Export/import, bulk ops&lt;/li&gt;
&lt;/ul&gt;
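&lt;p&gt;The hybrid ranking above can be sketched like this (the weights and half-life are illustrative placeholders, not the project's actual values):&lt;/p&gt;

```python
import math
import time

# Combine FTS rank, vector similarity, importance, time-decayed recency,
# and access frequency into one retrieval score.
def memory_score(fts_rank, vector_sim, importance, created_at, access_count,
                 now=None, half_life_days=30.0):
    if now is None:
        now = time.time()
    age_days = (now - created_at) / 86400.0
    # exponential time decay: a memory half_life_days old scores half
    recency = math.exp(-math.log(2.0) * age_days / half_life_days)
    frequency = math.log(1.0 + access_count)
    return (0.3 * fts_rank + 0.3 * vector_sim +
            0.2 * importance + 0.1 * recency + 0.1 * frequency)
```

With equal text and vector matches, a day-old memory outranks a year-old one, which is the "recent memories rank higher" behavior described above.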

&lt;h2&gt;Claude CLI integration&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;MCP server with 8 tools (search, add, update, delete, profile, list, link, stats)&lt;/li&gt;
&lt;li&gt;Claude automatically loads your profile at session start&lt;/li&gt;
&lt;li&gt;Memories persist across sessions in SQLite&lt;/li&gt;
&lt;li&gt;Drop-in &lt;code&gt;.mcp.json&lt;/code&gt; — just restart Claude Code&lt;/li&gt;
&lt;/ul&gt;
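&lt;p&gt;For reference, the drop-in config follows Claude Code's usual &lt;code&gt;.mcp.json&lt;/code&gt; shape (the server name and path below are placeholders, not the project's actual values):&lt;/p&gt;

```json
{
  "mcpServers": {
    "ai-memory": {
      "command": "node",
      "args": ["./ai-memory/mcp-server.js"]
    }
  }
}
```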

&lt;h2&gt;Also works with any AI app&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Vercel AI SDK middleware (6 tools)&lt;/li&gt;
&lt;li&gt;OpenAI middleware (auto-inject memories)&lt;/li&gt;
&lt;li&gt;Framework-agnostic wrapper (Anthropic, Gemini, Ollama, anything)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Tech&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;SQLite + FTS5 + better-sqlite3&lt;/li&gt;
&lt;li&gt;5 embedding options (local BM25, OpenAI, Ollama, hybrid)&lt;/li&gt;
&lt;li&gt;Zero cloud dependencies&lt;/li&gt;
&lt;li&gt;36 tests, all passing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It's a single-folder drop-in. No Docker, no cloud, no API keys needed.&lt;/p&gt;

&lt;p&gt;GitHub: &lt;a href="https://github.com/blockmandev/ai-memory" rel="noopener noreferrer"&gt;https://github.com/blockmandev/ai-memory&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Would love feedback from the community. What features would you add?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>vscode</category>
    </item>
  </channel>
</rss>
