<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Tarek</title>
    <description>The latest articles on Forem by Tarek (@tarekraafat).</description>
    <link>https://forem.com/tarekraafat</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F109704%2F6cc3a1ec-2989-409e-b2b0-65260881c15e.png</url>
      <title>Forem: Tarek</title>
      <link>https://forem.com/tarekraafat</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/tarekraafat"/>
    <language>en</language>
    <item>
      <title>I Rebuilt My JavaScript Database From Scratch for the AI Agent Era</title>
      <dc:creator>Tarek</dc:creator>
      <pubDate>Tue, 31 Mar 2026 17:49:54 +0000</pubDate>
      <link>https://forem.com/tarekraafat/i-rebuilt-my-javascript-database-from-scratch-for-the-ai-agent-era-h62</link>
      <guid>https://forem.com/tarekraafat/i-rebuilt-my-javascript-database-from-scratch-for-the-ai-agent-era-h62</guid>
      <description>&lt;p&gt;Three years ago, I built Skalex - a simple, zero-dependency &lt;br&gt;
in-memory document database for JavaScript. It did what it said &lt;br&gt;
on the tin: store documents, query them, persist them to disk. &lt;br&gt;
People used it. I was happy.&lt;/p&gt;

&lt;p&gt;Then everything changed.&lt;/p&gt;
&lt;h2&gt;The moment I knew I had to rewrite it&lt;/h2&gt;

&lt;p&gt;AI agents became real. Not just chatbots - actual agents that &lt;br&gt;
remember things, reason about data, and take actions. And as I &lt;br&gt;
started building with them, I kept running into the same wall: &lt;br&gt;
the database layer wasn't designed for this.&lt;/p&gt;

&lt;p&gt;Every time I wanted an agent to recall something from a previous &lt;br&gt;
session, I had to bolt on a vector database. Every time I wanted &lt;br&gt;
natural language queries, I had to wire up an external service. &lt;br&gt;
Every time I wanted to expose data to Claude Desktop or Cursor &lt;br&gt;
via MCP, I had to build plumbing from scratch.&lt;/p&gt;

&lt;p&gt;I was spending more time on the infrastructure than on the actual &lt;br&gt;
agent logic.&lt;/p&gt;

&lt;p&gt;So I asked myself: what would a database look like if it were &lt;br&gt;
designed for AI agents from day one?&lt;/p&gt;

&lt;p&gt;That question became Skalex v4.&lt;/p&gt;
&lt;h2&gt;The constraints I refused to compromise on&lt;/h2&gt;

&lt;p&gt;Before writing a single line, I set three rules for myself:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Zero dependencies - no supply chain risk, no bloat, no &lt;code&gt;node_modules&lt;/code&gt; hell&lt;/li&gt;
&lt;li&gt;One package - everything the AI stack needs, no separate installs&lt;/li&gt;
&lt;li&gt;Every JavaScript runtime - Node.js, Bun, Deno, browsers, edge workers&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These constraints made everything harder. And they made the result much better.&lt;/p&gt;
&lt;h2&gt;What I built&lt;/h2&gt;
&lt;h3&gt;Vector search that just works&lt;/h3&gt;

&lt;p&gt;The most requested feature from v3 users was semantic search. I wanted it to feel native, not bolted on.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Skalex&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;ai&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;provider&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;openai&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;apiKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;OPENAI_API_KEY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;embeddingModel&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;text-embedding-3-small&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;notes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createCollection&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;notes&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;vectorField&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;content&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// Insert with automatic embedding generation&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;notes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;insert&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; 
  &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;The cat sat on the mat&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; 
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// Search semantically&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;notes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;search&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;feline on furniture&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; 
  &lt;span class="na"&gt;limit&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt; 
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No separate vector database. No separate embedding pipeline. One package, one API.&lt;/p&gt;

&lt;p&gt;Want to use local models instead? Swap the adapter:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Skalex&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;ai&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;provider&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;ollama&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;embeddingModel&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;nomic-embed-text&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Fully offline. Zero API costs.&lt;/p&gt;

&lt;h3&gt;Agent memory that survives restarts&lt;/h3&gt;

&lt;p&gt;The biggest pain point I kept seeing in AI agent code was memory that evaporated when the process died. Session memory is easy. Cross-session memory is the hard part.&lt;/p&gt;

&lt;p&gt;Skalex solves this with a dedicated memory API backed by semantic embeddings and persistent storage:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;memory&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;useMemory&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;agent-1&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Remember something&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;memory&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;remember&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;User prefers dark mode and concise answers&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;memory&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;remember&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;User is building a SaaS product in Next.js&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Recall semantically across sessions&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;memory&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;recall&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;user preferences&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;limit&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// Compress old memories to save space&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;memory&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;compress&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When the process restarts, &lt;code&gt;db.connect()&lt;/code&gt; reloads everything from the storage adapter. The agent picks up exactly where it left off.&lt;/p&gt;

&lt;h3&gt;Natural language queries&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;db.ask()&lt;/code&gt; translates plain English into structured filters via any LLM:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ask&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;find all users who signed up this month and haven't logged in&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Works with OpenAI, Anthropic, and Ollama. The LLM generates the filter, and Skalex executes it. No prompt engineering required on your end.&lt;/p&gt;

&lt;h3&gt;A one-line MCP server&lt;/h3&gt;

&lt;p&gt;This is the feature I'm most excited about. Model Context Protocol lets you expose your database as a tool to Claude Desktop, Cursor, and any MCP client:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Skalex&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./data&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;mcp&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;listen&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Three lines. Claude now has full read/write access to your database - find, insert, update, delete, search, ask questions in plain English. Add it to your Claude Desktop config and your AI assistant has a real persistent memory layer.&lt;/p&gt;
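&lt;p&gt;For reference, wiring that up in Claude Desktop means adding an entry to its &lt;code&gt;claude_desktop_config.json&lt;/code&gt;. Something like the following, where &lt;code&gt;server.js&lt;/code&gt; is a hypothetical file holding the three lines above:&lt;/p&gt;

```json
{
  "mcpServers": {
    "skalex": {
      "command": "node",
      "args": ["/absolute/path/to/server.js"]
    }
  }
}
```

&lt;p&gt;Restart Claude Desktop and the database shows up as a tool.&lt;/p&gt;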

&lt;h2&gt;The hardest part&lt;/h2&gt;

&lt;p&gt;The hardest part wasn't the AI features. It was keeping everything in a single zero-dependency package while supporting six different runtimes.&lt;/p&gt;

&lt;p&gt;Node.js, Bun, and Deno all handle crypto, file I/O, and module resolution differently. Browsers don't have a filesystem. Edge workers have memory constraints. Every platform is a special case.&lt;/p&gt;

&lt;p&gt;The solution was pluggable storage adapters:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Node.js&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Skalex&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;adapter&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;FsAdapter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./data&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// Browser&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Skalex&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;adapter&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;LocalStorageAdapter&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// Cloudflare Workers&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Skalex&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;adapter&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;D1Adapter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DB&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// Bun&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Skalex&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;adapter&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;BunSQLiteAdapter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./db.sqlite&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Same API. Every platform. Zero code changes in your application layer.&lt;/p&gt;

&lt;p&gt;The test suite ended up at 787 tests across Node.js, Bun, Deno, Chrome ESM, Chrome UMD, and BunSQLite. Every commit runs all of them.&lt;/p&gt;

&lt;h2&gt;What I learned from the rewrite&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Constraints are a feature.&lt;/strong&gt; Zero dependencies forced me to implement crypto, compression, and vector math from scratch using only built-in APIs. The result is leaner and more auditable than any dependency chain.&lt;/p&gt;
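&lt;p&gt;To make that concrete, here is the flavor of built-in-only vector math involved. This is an illustrative sketch of cosine similarity over embedding vectors, not Skalex's internal code:&lt;/p&gt;

```javascript
// Cosine similarity between two embedding vectors, using nothing but
// built-in Array and Math APIs. Helper names are hypothetical.
function dot(a, b) {
  return a.reduce((sum, x, i) => sum + x * b[i], 0);
}

function cosineSimilarity(a, b) {
  return dot(a, b) / (Math.sqrt(dot(a, a)) * Math.sqrt(dot(b, b)));
}

console.log(cosineSimilarity([1, 0], [1, 0])); // 1 (same direction)
console.log(cosineSimilarity([1, 0], [0, 1])); // 0 (orthogonal)
```

&lt;p&gt;Semantic search is then just "embed the query, rank stored vectors by similarity" - the rest is plumbing.&lt;/p&gt;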

&lt;p&gt;&lt;strong&gt;The AI stack needs a database that understands it.&lt;/strong&gt; Tacking vector search onto a traditional document store feels wrong because it is wrong. When memory, search, and queries are all first-class citizens, the agent code becomes dramatically simpler.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Local-first is underrated.&lt;/strong&gt; Ollama support means you can run the entire AI stack - embeddings, LLM, database - on your laptop with no API keys, no costs, no data leaving your machine. For prototyping and privacy-sensitive workloads, that's a huge deal.&lt;/p&gt;

&lt;h2&gt;What's next&lt;/h2&gt;

&lt;p&gt;v4 is in alpha today. The API is largely settled, but it may still shift before the stable release. What's on the roadmap:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Hybrid BM25 + vector search with Reciprocal Rank Fusion&lt;/li&gt;
&lt;li&gt;CRDT real-time collaboration&lt;/li&gt;
&lt;li&gt;SQLite WASM adapter for browser-native persistence&lt;/li&gt;
&lt;li&gt;Graph traversal queries&lt;/li&gt;
&lt;li&gt;Framework adapters for React, Vue, Svelte, Solid, and Eleva&lt;/li&gt;
&lt;/ul&gt;
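&lt;p&gt;On the hybrid search item: Reciprocal Rank Fusion merges two ranked result lists by giving each document a score of 1 / (k + rank) per list and summing. A generic sketch of the idea - the function name and the conventional k = 60 default are illustrative, not the final Skalex API:&lt;/p&gt;

```javascript
// Generic Reciprocal Rank Fusion over ranked lists of document ids.
// Each list contributes 1 / (k + rank) to a document's fused score,
// so documents ranked highly by several lists rise to the top.
function reciprocalRankFusion(rankings, k = 60) {
  const scores = new Map();
  for (const ranking of rankings) {
    ranking.forEach((id, rank) => {
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + rank + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([id]) => id);
}

const bm25Results = ["a", "b", "c"];   // keyword ranking
const vectorResults = ["a", "c", "d"]; // semantic ranking
console.log(reciprocalRankFusion([bm25Results, vectorResults]));
// → ["a", "c", "b", "d"]
```

&lt;p&gt;The appeal of RRF is that it only needs ranks, not comparable scores, so BM25 and cosine similarity can be fused without normalization.&lt;/p&gt;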

&lt;p&gt;If you're building AI agents, CLI tools, desktop apps, or edge workers where the dataset fits in memory - give it a try.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install &lt;/span&gt;skalex@alpha
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Feedback, bug reports, and contributions are what make the stable release good.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GitHub: &lt;a href="https://github.com/TarekRaafat/skalex" rel="noopener noreferrer"&gt;https://github.com/TarekRaafat/skalex&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Docs: &lt;a href="https://tarekraafat.github.io/skalex" rel="noopener noreferrer"&gt;https://tarekraafat.github.io/skalex&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>javascript</category>
      <category>ai</category>
      <category>database</category>
      <category>webdev</category>
    </item>
    <item>
      <title>autoComplete.js library v2.0 RELEASED!</title>
      <dc:creator>Tarek</dc:creator>
      <pubDate>Sun, 23 Dec 2018 21:45:41 +0000</pubDate>
      <link>https://forem.com/tarekraafat/autocompletejs-library-v20-released-421g</link>
      <guid>https://forem.com/tarekraafat/autocompletejs-library-v20-released-421g</guid>
      <description>&lt;p&gt;
    &lt;a href="https://tarekraafat.github.io/autoComplete.js/" rel="noopener noreferrer"&gt;
            &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fn1k2imq19xtfw0tcrjuq.png" width="800" height="420"&gt;
    &lt;/a&gt;
&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A simple, pure vanilla JavaScript autocomplete library. &lt;strong&gt;v2.0&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://tarekraafat.github.io/autoComplete.js/" rel="noopener noreferrer"&gt;autoComplete.js&lt;/a&gt; is a simple, pure vanilla JavaScript library, progressively designed for speed, high versatility, and seamless integration with a wide range of projects &amp;amp; systems, built with both users and developers in mind.&lt;/p&gt;

&lt;p&gt;An autocomplete search should make the user's (end user &amp;amp; developer) experience faster, easier, more productive, and more streamlined. In fact, it shouldn't feel like anything at all: the user should fly right past the autocomplete on the way to whatever they're trying to search for. That's why autoComplete.js is designed and built on the following core principles.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;End User:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Simplicity&lt;/li&gt;
&lt;li&gt;Intuitiveness&lt;/li&gt;
&lt;li&gt;Usability&lt;/li&gt;
&lt;li&gt;Speed&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;Developer:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lightweight&lt;/li&gt;
&lt;li&gt;Customizable&lt;/li&gt;
&lt;li&gt;Zero Dependencies&lt;/li&gt;
&lt;li&gt;Seamless Integration&lt;/li&gt;
&lt;li&gt;Scalable&lt;/li&gt;
&lt;li&gt;Open Source&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Have a look at the autoComplete.js &lt;a href="https://tarekraafat.github.io/autoComplete.js/demo/" rel="noopener noreferrer"&gt;demo&lt;/a&gt;, and let me know your thoughts! :)&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>webdev</category>
      <category>programming</category>
      <category>showdev</category>
    </item>
  </channel>
</rss>
