<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: 날다람쥐</title>
    <description>The latest articles on Forem by 날다람쥐 (@flyingsquirrel0419).</description>
    <link>https://forem.com/flyingsquirrel0419</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3864074%2F66d516bd-8d6e-4ccb-bf9b-388546f0a65a.jpeg</url>
      <title>Forem: 날다람쥐</title>
      <link>https://forem.com/flyingsquirrel0419</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/flyingsquirrel0419"/>
    <language>en</language>
    <item>
      <title>useless-gps</title>
      <dc:creator>날다람쥐</dc:creator>
      <pubDate>Fri, 10 Apr 2026 17:50:44 +0000</pubDate>
      <link>https://forem.com/flyingsquirrel0419/useless-gps-3kf6</link>
      <guid>https://forem.com/flyingsquirrel0419/useless-gps-3kf6</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/aprilfools-2026"&gt;DEV April Fools Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;I built useless-gps, a website that turns your real location into intentionally unhelpful answers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;


&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;div class="c-embed__content"&gt;
      &lt;div class="c-embed__body flex items-center justify-between"&gt;
        &lt;a href="https://useless-gps.vercel.app/" rel="noopener noreferrer" class="c-link fw-bold flex items-center"&gt;
          &lt;span class="mr-2"&gt;useless-gps.vercel.app&lt;/span&gt;
          

        &lt;/a&gt;
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;


&lt;h2&gt;
  
  
  Code
&lt;/h2&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/flyingsquirrel0419" rel="noopener noreferrer"&gt;
        flyingsquirrel0419
      &lt;/a&gt; / &lt;a href="https://github.com/flyingsquirrel0419/useless-gps" rel="noopener noreferrer"&gt;
        useless-gps
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;useless-gps&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;The world's most accurate and completely useless GPS locator.&lt;/p&gt;
&lt;p&gt;This project is a small Next.js app that reads your browser geolocation and turns it into intentionally unhelpful cosmic, geophysical, and existential status cards.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Stack&lt;/h2&gt;
&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;Next.js 14&lt;/li&gt;
&lt;li&gt;React 18&lt;/li&gt;
&lt;li&gt;TypeScript&lt;/li&gt;
&lt;li&gt;Tailwind CSS&lt;/li&gt;
&lt;li&gt;Framer Motion&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Local Development&lt;/h2&gt;
&lt;/div&gt;
&lt;p&gt;Install dependencies:&lt;/p&gt;
&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;npm ci&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Start the development server:&lt;/p&gt;
&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;npm run dev&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Build for production:&lt;/p&gt;
&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;npm run build&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Start the production server:&lt;/p&gt;
&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;npm run start&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;How It Works&lt;/h2&gt;

&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;Requests browser geolocation with high accuracy enabled (sketched below)&lt;/li&gt;
&lt;li&gt;Shows a fake "scan" sequence while location data is loading&lt;/li&gt;
&lt;li&gt;Converts coordinates into humorous location cards&lt;/li&gt;
&lt;li&gt;Renders a stylized retro radar interface with animated effects&lt;/li&gt;
&lt;/ul&gt;
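&lt;p&gt;A minimal sketch of that first step, using the standard browser Geolocation API (illustrative, not the app's actual source):&lt;/p&gt;
&lt;div class="highlight highlight-source-ts notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;navigator.geolocation.getCurrentPosition(
  (pos) =&amp;gt; {
    const { latitude, longitude, accuracy } = pos.coords
    // feed the coordinates into the (deliberately unhelpful) status cards
  },
  (err) =&amp;gt; console.error('scan failed', err),
  { enableHighAccuracy: true, timeout: 10_000, maximumAge: 0 }
)&lt;/pre&gt;

&lt;/div&gt;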
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Contribution Policy&lt;/h2&gt;

&lt;/div&gt;
&lt;p&gt;&lt;code&gt;main&lt;/code&gt; is a protected branch.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Do not push directly to &lt;code&gt;main&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Open a pull request for every change&lt;/li&gt;
&lt;li&gt;Outside contributors should work from a fork and open a PR&lt;/li&gt;
&lt;li&gt;Non-admin changes require operator approval before landing on &lt;code&gt;main&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Operator Review&lt;/h2&gt;

&lt;/div&gt;
&lt;p&gt;Repository ownership and…&lt;/p&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/flyingsquirrel0419/useless-gps" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;h2&gt;
  
  
  How I Built It
&lt;/h2&gt;

&lt;p&gt;I am interested in space, and I wanted to know where Earth sits in space. So I built this.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prize Category
&lt;/h2&gt;

&lt;p&gt;Community Favorite: I built this entirely with my own coding skills, so "Community Favorite" felt like the right fit.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>418challenge</category>
      <category>showdev</category>
    </item>
    <item>
      <title>layercache: Stop Paying Redis Latency on Every Hot Read</title>
      <dc:creator>날다람쥐</dc:creator>
      <pubDate>Thu, 09 Apr 2026 23:53:34 +0000</pubDate>
      <link>https://forem.com/flyingsquirrel0419/layercache-stop-paying-redis-latency-on-every-hot-read-m8l</link>
      <guid>https://forem.com/flyingsquirrel0419/layercache-stop-paying-redis-latency-on-every-hot-read-m8l</guid>
      <description>&lt;p&gt;Every Node.js backend hits the same wall eventually.&lt;/p&gt;

&lt;p&gt;Your Redis cache is working, latency is acceptable, and then traffic doubles. Suddenly the Redis round-trip that felt like nothing at 200 req/s starts dominating your p95 at 2,000 req/s. You add an in-process memory cache on top, wire up some invalidation logic by hand, and three months later you are maintaining a fragile two-layer system with no stampede protection and no cross-instance consistency.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/flyingsquirrel0419/layercache" rel="noopener noreferrer"&gt;layercache&lt;/a&gt; is a TypeScript-first library that solves this problem once, cleanly. It stacks memory, Redis, and disk behind a single unified API and handles the hard parts — stampede prevention, cross-instance invalidation, graceful degradation under Redis failures — out of the box.&lt;/p&gt;

&lt;p&gt;This post walks through what it does and what the benchmark numbers actually look like on a real Redis backend.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Core Idea
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;your app ──▶ L1 Memory   ~0.006 ms  (per-process, sub-millisecond)
                │
             L2 Redis    ~0.2 ms    (shared across instances)
                │
             L3 Disk     ~2 ms      (optional, persistent)
                │
             Fetcher     runs once  (even under high concurrency)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On a cache hit the fastest available layer responds and the result backfills any warmer layers automatically. On a miss the fetcher runs exactly once, no matter how many concurrent requests arrived at the same time.&lt;/p&gt;

&lt;p&gt;That last part — the single-flight guarantee — is where most hand-rolled hybrid caches fall apart.&lt;/p&gt;
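
&lt;p&gt;Here is what that guarantee means in code, as a minimal sketch against the &lt;code&gt;CacheStack&lt;/code&gt; API shown in the next section (the counter and the fake slow origin are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;import { CacheStack, MemoryLayer } from 'layercache'

const cache = new CacheStack([new MemoryLayer({ ttl: 60, maxSize: 1_000 })])

let fetches = 0
const fetcher = async () =&amp;gt; {
  fetches++
  await new Promise((r) =&amp;gt; setTimeout(r, 50)) // simulate a slow origin
  return { id: 123 }
}

// 75 concurrent requests for the same missing key
await Promise.all(Array.from({ length: 75 }, () =&amp;gt; cache.get('user:123', fetcher)))
console.log(fetches) // expected: 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;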




&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install &lt;/span&gt;layercache
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Memory only (no Redis needed):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;CacheStack&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;MemoryLayer&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;layercache&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;cache&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;CacheStack&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;
  &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;MemoryLayer&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;ttl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;maxSize&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="nx"&gt;_000&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;])&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;user:123&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;findUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;123&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Memory + Redis layered setup:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;CacheStack&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;MemoryLayer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;RedisLayer&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;layercache&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;Redis&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;ioredis&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;cache&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;CacheStack&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;
  &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;MemoryLayer&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;ttl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;maxSize&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="nx"&gt;_000&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt;
  &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;RedisLayer&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Redis&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="na"&gt;ttl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;300&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;prefix&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;myapp:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;])&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;user:123&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;findUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;123&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The API is the same regardless of how many layers you add. Your application code doesn't change when you add or remove a layer.&lt;/p&gt;
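
&lt;p&gt;For example, swapping stacks leaves the call site untouched (a sketch, continuing from the imports above; &lt;code&gt;db.findUser&lt;/code&gt; stands in for your origin fetch):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;const memoryOnly = new CacheStack([new MemoryLayer({ ttl: 60 })])
const layered = new CacheStack([
  new MemoryLayer({ ttl: 60 }),
  new RedisLayer({ client: new Redis(), ttl: 300 })
])

// Identical application code against either stack
const load = (cache: typeof memoryOnly) =&amp;gt;
  cache.get('user:123', () =&amp;gt; db.findUser(123))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;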




&lt;h2&gt;
  
  
  Benchmark Results
&lt;/h2&gt;

&lt;p&gt;I ran layercache v1.2.9 against a real Redis 7 backend (Docker, not a mock) on Linux. Here is what the numbers look like.&lt;/p&gt;

&lt;h3&gt;
  
  
  Warm Hit Latency
&lt;/h3&gt;

&lt;p&gt;The most important number for a cache library is how fast the hit path is.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Mode&lt;/th&gt;
&lt;th&gt;Avg ms&lt;/th&gt;
&lt;th&gt;P95 ms&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;No cache (origin)&lt;/td&gt;
&lt;td&gt;5.175&lt;/td&gt;
&lt;td&gt;8.742&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Memory only&lt;/td&gt;
&lt;td&gt;0.009&lt;/td&gt;
&lt;td&gt;0.014&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Memory + Redis&lt;/td&gt;
&lt;td&gt;0.005&lt;/td&gt;
&lt;td&gt;0.006&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Memory-only warm hits averaged &lt;strong&gt;0.009ms&lt;/strong&gt;. With a Redis layer added, the hot path still resolves from L1 memory and came in at &lt;strong&gt;0.005ms&lt;/strong&gt; — both are firmly sub-millisecond and effectively the same class of latency for production purposes.&lt;/p&gt;
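
&lt;p&gt;If you want to sanity-check numbers like these yourself, a rough harness is enough (a sketch, not the benchmark code behind this post; it assumes the &lt;code&gt;cache&lt;/code&gt; and origin from the setup above):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;import { performance } from 'node:perf_hooks'

await cache.get('user:123', () =&amp;gt; db.findUser(123)) // warm the key once

const samples: number[] = []
for (let i = 0; i &amp;lt; 10_000; i++) {
  const t0 = performance.now()
  await cache.get('user:123', () =&amp;gt; db.findUser(123))
  samples.push(performance.now() - t0)
}
samples.sort((a, b) =&amp;gt; a - b)
console.log('avg', samples.reduce((s, x) =&amp;gt; s + x, 0) / samples.length)
console.log('p95', samples[Math.floor(samples.length * 0.95)])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;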

&lt;h3&gt;
  
  
  Stampede Prevention
&lt;/h3&gt;

&lt;p&gt;This is where the library earns its keep. 75 concurrent requests for the same missing key, repeated 5 times:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Mode&lt;/th&gt;
&lt;th&gt;Avg ms&lt;/th&gt;
&lt;th&gt;Origin Executions&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;No cache&lt;/td&gt;
&lt;td&gt;409.5&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;375&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Memory only&lt;/td&gt;
&lt;td&gt;6.9&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;5&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Memory + Redis&lt;/td&gt;
&lt;td&gt;36.7&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;5&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Without a cache, 75 × 5 = 375 origin calls. With layercache, the fetcher ran exactly 5 times — once per round, regardless of concurrency. The layered case is slower than memory-only because it pays Redis coordination costs, but the correctness guarantee is the same.&lt;/p&gt;

&lt;h3&gt;
  
  
  HTTP Throughput
&lt;/h3&gt;

&lt;p&gt;Under sustained load with &lt;code&gt;autocannon&lt;/code&gt; (40 connections, 8 seconds):&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Route&lt;/th&gt;
&lt;th&gt;Avg Latency&lt;/th&gt;
&lt;th&gt;P97.5&lt;/th&gt;
&lt;th&gt;Req/s&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;No cache&lt;/td&gt;
&lt;td&gt;249 ms&lt;/td&gt;
&lt;td&gt;271 ms&lt;/td&gt;
&lt;td&gt;161&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Memory only&lt;/td&gt;
&lt;td&gt;1.82 ms&lt;/td&gt;
&lt;td&gt;4 ms&lt;/td&gt;
&lt;td&gt;16,705&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Memory + Redis&lt;/td&gt;
&lt;td&gt;1.74 ms&lt;/td&gt;
&lt;td&gt;4 ms&lt;/td&gt;
&lt;td&gt;17,184&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Caching moved the service from &lt;strong&gt;161 req/s&lt;/strong&gt; to over &lt;strong&gt;17,000 req/s&lt;/strong&gt; — roughly a 100× improvement in throughput. Average latency dropped from 249ms to under 2ms. The memory-only and layered routes performed nearly identically in steady state because hot requests stay in L1 after warm-up.&lt;/p&gt;
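
&lt;p&gt;For reference, an equivalent &lt;code&gt;autocannon&lt;/code&gt; invocation looks like this (the route URL is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx autocannon -c 40 -d 8 http://localhost:3000/layered
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;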




&lt;h2&gt;
  
  
  What Happens When Redis Is Slow or Dead?
&lt;/h2&gt;

&lt;p&gt;This is the question that separates a library you can actually run in production from one you can only trust in demos.&lt;/p&gt;

&lt;h3&gt;
  
  
  Slow Redis
&lt;/h3&gt;

&lt;p&gt;I measured three scenarios with injected TCP latency:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Redis Delay&lt;/th&gt;
&lt;th&gt;L1 hot hit&lt;/th&gt;
&lt;th&gt;L2 hit&lt;/th&gt;
&lt;th&gt;Cold miss&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;0ms&lt;/td&gt;
&lt;td&gt;0.407ms&lt;/td&gt;
&lt;td&gt;2.655ms&lt;/td&gt;
&lt;td&gt;12.259ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;100ms&lt;/td&gt;
&lt;td&gt;0.119ms&lt;/td&gt;
&lt;td&gt;101.172ms&lt;/td&gt;
&lt;td&gt;504.167ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;500ms&lt;/td&gt;
&lt;td&gt;0.196ms&lt;/td&gt;
&lt;td&gt;501.404ms&lt;/td&gt;
&lt;td&gt;2506.013ms&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;The key insight:&lt;/strong&gt; L1 hot hits stayed fast regardless of Redis latency. If a request can be served from in-process memory, slow Redis does not matter at all. The latency penalty only applies when a request needs to reach L2 or perform a cold miss.&lt;/p&gt;

&lt;p&gt;Cold misses scaled hard with injected delay because the request paid both the Redis round-trip and the write-back path. If you have traffic patterns with many cold misses, a slow Redis will drag your tail latency even with &lt;code&gt;gracefulDegradation&lt;/code&gt; enabled — the benchmark showed graceful and strict modes performing nearly identically under slow conditions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Dead Redis
&lt;/h3&gt;

&lt;p&gt;Under a fully paused Redis instance:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Warm L1 hits: &lt;strong&gt;still worked&lt;/strong&gt; — both strict and graceful modes served from memory normally&lt;/li&gt;
&lt;li&gt;Cold misses: &lt;strong&gt;timed out at 2000ms&lt;/strong&gt; — both modes failed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is important to understand. &lt;code&gt;gracefulDegradation&lt;/code&gt; keeps warm traffic alive when Redis goes down. It does not create a fast fallback path for cold keys. New keys and expired keys that need a Redis write-back will stall until the timeout.&lt;/p&gt;

&lt;p&gt;Operationally this means: &lt;strong&gt;if your L1 TTL is shorter than your expected Redis outage window, you will see degraded cold-miss behavior.&lt;/strong&gt; Size your L1 TTLs with this in mind.&lt;/p&gt;
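
&lt;p&gt;As a sketch of that sizing rule, assuming an &lt;code&gt;ioredis&lt;/code&gt; client named &lt;code&gt;redis&lt;/code&gt; and a plausible outage window of up to ten minutes:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;const cache = new CacheStack([
  // keep L1 entries alive longer than the expected outage window
  new MemoryLayer({ ttl: 900, maxSize: 10_000 }), // 15 minutes
  new RedisLayer({ client: redis, ttl: 3600 })
])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;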




&lt;h2&gt;
  
  
  Queue Amplification Under Slow Redis
&lt;/h2&gt;

&lt;p&gt;A follow-up benchmark asked: if Redis is slow and 500 concurrent requests pile up on L2-hit traffic, does latency stay bounded or blow up?&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Redis Delay&lt;/th&gt;
&lt;th&gt;Concurrency 1&lt;/th&gt;
&lt;th&gt;Concurrency 500&lt;/th&gt;
&lt;th&gt;Amplification&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;100ms&lt;/td&gt;
&lt;td&gt;100.8ms&lt;/td&gt;
&lt;td&gt;128.9ms&lt;/td&gt;
&lt;td&gt;1.28×&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;500ms&lt;/td&gt;
&lt;td&gt;501.1ms&lt;/td&gt;
&lt;td&gt;515.8ms&lt;/td&gt;
&lt;td&gt;1.03×&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;No runaway queue amplification. At 500 concurrent requests against a 500ms-latency Redis, wall-clock time only grew by about 15ms above the single-request baseline. The library appears to batch or overlap L2 requests within a shared Redis client rather than serializing them, which keeps the curve nearly flat.&lt;/p&gt;




&lt;h2&gt;
  
  
  Memory Pressure and Eviction
&lt;/h2&gt;

&lt;p&gt;I set &lt;code&gt;maxSize: 25&lt;/code&gt;, inserted 180 unique keys (each with a 256KB payload), and then revisited the earliest 25 keys:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Evictions&lt;/th&gt;
&lt;th&gt;L1 Retained&lt;/th&gt;
&lt;th&gt;Revisit Avg&lt;/th&gt;
&lt;th&gt;Origin Fetches&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;180&lt;/td&gt;
&lt;td&gt;25&lt;/td&gt;
&lt;td&gt;1.332ms&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;0&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Eviction was predictable. L1 held exactly &lt;code&gt;maxSize&lt;/code&gt; entries after the fill phase. When evicted keys were revisited, they reloaded from Redis L2 rather than hitting the origin — zero origin fetches despite L1 having evicted everything. GC activity was measurable (36 events, 78ms total) but no stop-the-world pauses appeared at this payload size.&lt;/p&gt;




&lt;h2&gt;
  
  
  Multi-Instance and Cross-Process Features
&lt;/h2&gt;

&lt;p&gt;Single-process benchmarks only tell part of the story. layercache ships with primitives for distributed deployments:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;CacheStack&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;MemoryLayer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;RedisLayer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;RedisInvalidationBus&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;RedisSingleFlightCoordinator&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;layercache&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;redis&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Redis&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;cache&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;CacheStack&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;MemoryLayer&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;ttl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;maxSize&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="nx"&gt;_000&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt;
    &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;RedisLayer&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;redis&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;ttl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;3600&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
  &lt;span class="p"&gt;],&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;invalidationBus&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;RedisInvalidationBus&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;publisher&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;redis&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;subscriber&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Redis&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="c1"&gt;// separate connection for pub/sub&lt;/span&gt;
    &lt;span class="p"&gt;}),&lt;/span&gt;
    &lt;span class="na"&gt;singleFlightCoordinator&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;RedisSingleFlightCoordinator&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;redis&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt;
    &lt;span class="na"&gt;gracefulDegradation&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;retryAfterMs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="nx"&gt;_000&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The edge benchmark verified that both of these features work:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cross-instance invalidation:&lt;/strong&gt; Instance B observed the updated value after Instance A invalidated and repopulated the key.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Distributed single-flight:&lt;/strong&gt; 60 concurrent requests split across two instances triggered exactly &lt;strong&gt;1&lt;/strong&gt; origin fetch total.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;TTL expiry stampedes are also deduplicated. In the benchmark, 40 concurrent requests hitting the same expired key across 5 rounds produced only 5 origin executions — one per expiry round.&lt;/p&gt;




&lt;h2&gt;
  
  
  Framework Integrations
&lt;/h2&gt;

&lt;p&gt;layercache ships middleware and adapters for the major Node.js frameworks:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Express:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/api/users&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;createExpressCacheMiddleware&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;ttl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;users&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
  &lt;span class="na"&gt;keyResolver&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;`users:&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;
&lt;span class="p"&gt;}),&lt;/span&gt; &lt;span class="nx"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;NestJS:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;Module&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;imports&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;CacheStackModule&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;forRoot&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;layers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
      &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;MemoryLayer&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;ttl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt;
      &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;RedisLayer&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;redis&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;ttl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;300&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;})]&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;AppModule&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Fastify, Hono, tRPC, GraphQL resolver wrappers, and Next.js App Router are also covered.&lt;/p&gt;




&lt;h2&gt;
  
  
  Payload Size Matters for Redis Reads
&lt;/h2&gt;

&lt;p&gt;One benchmark result worth highlighting explicitly: payload size has almost no effect on L1 memory hits, but has a large effect when Redis is on the read path.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Mode&lt;/th&gt;
&lt;th&gt;1KB avg&lt;/th&gt;
&lt;th&gt;1MB avg&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Memory hit&lt;/td&gt;
&lt;td&gt;0.012ms&lt;/td&gt;
&lt;td&gt;0.018ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Redis hit&lt;/td&gt;
&lt;td&gt;0.200ms&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;4.170ms&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;If you are storing large objects — full page renders, heavy API responses — and relying on Redis as the primary read path without a warm L1 in front, you will feel the serialization and network overhead. Keep large objects in L1 where possible, or enable compression at the Redis layer.&lt;/p&gt;
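
&lt;p&gt;Applying that advice with the options already shown in this post (whether the Redis layer supports compression, and under what option name, is something to confirm in the API reference):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;// Let L1 absorb large objects; budget memory as maxSize x typical payload size
const cache = new CacheStack([
  new MemoryLayer({ ttl: 300, maxSize: 2_000 }), // ~2GB at 1MB payloads
  new RedisLayer({ client: redis, ttl: 3600 })
])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;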




&lt;h2&gt;
  
  
  When to Use layercache
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Good fit:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Services handling repeated reads for the same keys under any meaningful concurrency&lt;/li&gt;
&lt;li&gt;Multi-instance deployments that need consistent cache state across processes&lt;/li&gt;
&lt;li&gt;Situations where Redis slowdowns or outages should degrade gracefully rather than cascade&lt;/li&gt;
&lt;li&gt;Teams that want observable caching with hits/misses/latency metrics without building the instrumentation themselves&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Less relevant:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pure write-heavy workloads with no repeated reads&lt;/li&gt;
&lt;li&gt;Environments where an in-process memory cache is prohibited for compliance reasons&lt;/li&gt;
&lt;li&gt;Very simple single-key caches where a plain &lt;code&gt;Map&lt;/code&gt; with a TTL is already sufficient&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Scenario&lt;/th&gt;
&lt;th&gt;Key number&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Warm L1 hit latency&lt;/td&gt;
&lt;td&gt;~0.006ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;HTTP throughput gain (no cache → cached)&lt;/td&gt;
&lt;td&gt;~100×&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Stampede dedup (75 concurrent, 5 rounds)&lt;/td&gt;
&lt;td&gt;375 fetches → 5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Distributed single-flight (60 requests, 2 instances)&lt;/td&gt;
&lt;td&gt;60 fetches → 1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Slow Redis impact on hot L1 traffic&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Dead Redis impact on warm L1 traffic&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Dead Redis impact on cold-miss traffic&lt;/td&gt;
&lt;td&gt;Timeout&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The library makes a clear promise: stack your layers, wire up your fetcher, and it handles the coordination. The benchmarks back that promise up on a real backend.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Links:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/flyingsquirrel0419/layercache" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.npmjs.com/package/layercache" rel="noopener noreferrer"&gt;npm&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/flyingsquirrel0419/layercache/blob/main/docs/api.md" rel="noopener noreferrer"&gt;API Reference&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/flyingsquirrel0419/layercache/blob/main/docs/migration-guide.md" rel="noopener noreferrer"&gt;Migration Guide from node-cache-manager / keyv / cacheable&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>typescript</category>
      <category>node</category>
      <category>npm</category>
      <category>redis</category>
    </item>
    <item>
      <title>Beyond Basic Caching: How layercache Eliminates Cache Stampedes in Node.js</title>
      <dc:creator>날다람쥐</dc:creator>
      <pubDate>Thu, 09 Apr 2026 18:21:03 +0000</pubDate>
      <link>https://forem.com/flyingsquirrel0419/beyond-basic-caching-how-layercache-eliminates-cache-stampedes-in-nodejs-4gi2</link>
      <guid>https://forem.com/flyingsquirrel0419/beyond-basic-caching-how-layercache-eliminates-cache-stampedes-in-nodejs-4gi2</guid>
      <description>&lt;p&gt;Every Node.js developer knows the caching drill. You start with an in-memory &lt;code&gt;Map&lt;/code&gt;, graduate to Redis when you scale horizontally, and eventually find yourself wiring up a fragile hybrid system that breaks in production at 2 AM.&lt;/p&gt;

&lt;p&gt;I recently discovered &lt;a href="https://www.npmjs.com/package/layercache" rel="noopener noreferrer"&gt;&lt;code&gt;layercache&lt;/code&gt;&lt;/a&gt;—a multi-layer caching toolkit that promises to handle the messy parts (stampede prevention, graceful degradation, distributed consistency) while keeping the API simple. But does it deliver?&lt;/p&gt;

&lt;p&gt;I ran four comprehensive benchmark suites against real Redis instances to find out. Here are the results.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Architecture: L1 + L2 + Coordination
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;layercache&lt;/code&gt; treats caching as a stack:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;┌─────────────────────────────────────┐
│  L1 Memory  (~0.01ms, per-process)  │
│  L2 Redis   (~0.5ms, shared)        │
│  L3 Disk    (~2ms, persistent)      │
└─────────────────────────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you request a key, it checks L1 first, then L2, then your database. The clever part? &lt;strong&gt;All layers backfill automatically&lt;/strong&gt;—if you hit L2, layercache populates L1 for the next request. If you hit the database, it writes to both layers.&lt;/p&gt;

&lt;p&gt;But the real magic happens when 100 requests arrive simultaneously for the same expired key.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benchmark 1: The Stampede Test
&lt;/h2&gt;

&lt;p&gt;The "thundering herd" problem is where most caching libraries fail. When a popular key expires, 100 concurrent requests can trigger 100 database queries before the first one repopulates the cache.&lt;/p&gt;

&lt;p&gt;I tested 75 concurrent requests across 5 runs (375 total requests) for a cold key:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Setup&lt;/th&gt;
&lt;th&gt;Origin Fetches&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;No cache&lt;/td&gt;
&lt;td&gt;375&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Memory-only&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Memory + Redis&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Result:&lt;/strong&gt; &lt;code&gt;layercache&lt;/code&gt;'s single-flight coordination ensured the fetcher ran exactly &lt;strong&gt;once&lt;/strong&gt; per expiry round, not 75 times. The library creates a coordination lock in Redis (or memory) so that concurrent requests wait for the first fetcher to complete rather than hammering your database.&lt;/p&gt;

&lt;p&gt;Latency under this stampede:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Mode&lt;/th&gt;
&lt;th&gt;Avg Latency&lt;/th&gt;
&lt;th&gt;P95&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;No cache&lt;/td&gt;
&lt;td&gt;409ms&lt;/td&gt;
&lt;td&gt;429ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Memory-only&lt;/td&gt;
&lt;td&gt;6.9ms&lt;/td&gt;
&lt;td&gt;13.5ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Layered&lt;/td&gt;
&lt;td&gt;36.7ms&lt;/td&gt;
&lt;td&gt;43.6ms&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The layered case is slower than memory-only (it pays Redis coordination costs), but it preserves the critical property: &lt;strong&gt;your database only feels one request&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benchmark 2: Real HTTP Throughput
&lt;/h2&gt;

&lt;p&gt;Theory is nice, but what about real HTTP servers? I set up three Express routes—no cache, memory-only, and layered—and hit them with &lt;code&gt;autocannon&lt;/code&gt; (40 connections, 8 seconds):&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Route&lt;/th&gt;
&lt;th&gt;Avg Latency&lt;/th&gt;
&lt;th&gt;P97.5&lt;/th&gt;
&lt;th&gt;Req/sec&lt;/th&gt;
&lt;th&gt;Throughput&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;/nocache&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;249ms&lt;/td&gt;
&lt;td&gt;271ms&lt;/td&gt;
&lt;td&gt;161&lt;/td&gt;
&lt;td&gt;57 KB/s&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;/memory&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;1.82ms&lt;/td&gt;
&lt;td&gt;4ms&lt;/td&gt;
&lt;td&gt;16,705&lt;/td&gt;
&lt;td&gt;5.9 MB/s&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;/layered&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;1.74ms&lt;/td&gt;
&lt;td&gt;4ms&lt;/td&gt;
&lt;td&gt;17,184&lt;/td&gt;
&lt;td&gt;6.1 MB/s&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;That's a 100x throughput increase&lt;/strong&gt; with minimal latency difference between memory-only and Redis-backed layers. Once warmed, L1 memory serves the hot path while Redis provides the shared backing store for multi-instance deployments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benchmark 3: When Redis Goes Wrong
&lt;/h2&gt;

&lt;p&gt;Production caches fail. I tested two failure modes:&lt;/p&gt;

&lt;h3&gt;
  
  
  Slow Redis (500ms latency injection)
&lt;/h3&gt;

&lt;p&gt;Using a TCP proxy to add synthetic latency:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Scenario&lt;/th&gt;
&lt;th&gt;Single Request&lt;/th&gt;
&lt;th&gt;500 Concurrent&lt;/th&gt;
&lt;th&gt;Amplification&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;L2 hit (strict)&lt;/td&gt;
&lt;td&gt;501ms&lt;/td&gt;
&lt;td&gt;515ms&lt;/td&gt;
&lt;td&gt;1.03x&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;L2 hit (graceful)&lt;/td&gt;
&lt;td&gt;501ms&lt;/td&gt;
&lt;td&gt;512ms&lt;/td&gt;
&lt;td&gt;1.02x&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Key finding:&lt;/strong&gt; Under slow Redis, wall-clock time stayed close to the single-request baseline even at 500 concurrent requests. The linearity ratio collapsed to ~0.002, meaning the batch completed far faster than a naive "latency × N" model would predict.&lt;/p&gt;

&lt;p&gt;However, &lt;strong&gt;cold misses were brutal&lt;/strong&gt;: With 500ms Redis latency, a cache miss took ~2.5s because it paid the slow Redis cost plus the fetch/write cost.&lt;/p&gt;

&lt;h3&gt;
  
  
  Dead Redis (complete outage)
&lt;/h3&gt;

&lt;p&gt;I paused the Redis container with Docker:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Scenario&lt;/th&gt;
&lt;th&gt;Success&lt;/th&gt;
&lt;th&gt;Latency&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Strict hot hit&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;0.17ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Graceful hot hit&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;0.07ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Strict cold miss&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;Timeout (2000ms)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Graceful cold miss&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;Timeout (2000ms)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Critical insight:&lt;/strong&gt; &lt;code&gt;gracefulDegradation&lt;/code&gt; did &lt;strong&gt;not&lt;/strong&gt; turn a cold miss into a fast memory-only fallback when Redis was completely frozen. Hot L1 keys survived the outage beautifully (served from memory), but new or expired keys stalled until timeout.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Operational takeaway:&lt;/strong&gt; Warm your critical keys before Redis has issues. Hot L1 traffic is your lifeline during Redis outages.&lt;/p&gt;
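
&lt;p&gt;A warming pass is just a loop over &lt;code&gt;cache.get&lt;/code&gt; at startup (a sketch; the key list and &lt;code&gt;loadFromOrigin&lt;/code&gt; are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;// Run at startup or on deploy so critical keys are hot in L1
const criticalKeys = ['user:123', 'config:flags', 'catalog:home']
await Promise.all(
  criticalKeys.map((key) =&amp;gt; cache.get(key, () =&amp;gt; loadFromOrigin(key)))
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;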

&lt;h2&gt;
  
  
  Benchmark 4: Memory Pressure and Eviction
&lt;/h2&gt;

&lt;p&gt;What happens when L1 memory fills up? I set &lt;code&gt;maxSize: 25&lt;/code&gt; and inserted 180 unique 256KB payloads:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Evictions&lt;/td&gt;
&lt;td&gt;180&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;L1 Retained&lt;/td&gt;
&lt;td&gt;25 (exactly maxSize)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Origin Fetches on Revisit&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GC Pauses (max)&lt;/td&gt;
&lt;td&gt;6.1ms&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;When revisiting the oldest keys (which were evicted from L1), they were seamlessly reloaded from Redis L2—not the origin. No cache stampede, no origin amplification.&lt;/p&gt;

&lt;p&gt;The GC impact was measurable (36 events, 78ms total) but not catastrophic—max pause stayed at 6ms, far from stop-the-world territory.&lt;/p&gt;

&lt;h2&gt;
  
  
  Edge Cases: TTL Expiry and Distributed Systems
&lt;/h2&gt;

&lt;h3&gt;
  
  
  TTL Stampede Protection
&lt;/h3&gt;

&lt;p&gt;I tested 40 concurrent requests hitting a key that just expired (TTL: 1s, waited 1.1s):&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Mode&lt;/th&gt;
&lt;th&gt;Fetch Count&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Memory-only&lt;/td&gt;
&lt;td&gt;5 (one per expiry round)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Layered&lt;/td&gt;
&lt;td&gt;5 (one per expiry round)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Even with TTL expiry triggering simultaneously across multiple rounds, deduplication held firm.&lt;/p&gt;
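
&lt;p&gt;The shape of that test, as a sketch (the counter and fetcher body are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;const cache = new CacheStack([new MemoryLayer({ ttl: 1 })])
let fetches = 0
const fetcher = async () =&amp;gt; { fetches++; return 'value' }

await cache.get('hot:key', fetcher) // initial populate
for (let round = 0; round &amp;lt; 5; round++) {
  await new Promise((r) =&amp;gt; setTimeout(r, 1_100)) // let the 1s TTL lapse
  await Promise.all(Array.from({ length: 40 }, () =&amp;gt; cache.get('hot:key', fetcher)))
}
console.log(fetches) // expected: 6 (initial populate + one refill per round)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;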

&lt;h3&gt;
  
  
  Multi-Instance Consistency
&lt;/h3&gt;

&lt;p&gt;Running two Node.js instances with shared Redis:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Invalidation Bus:&lt;/strong&gt; When Instance A updated a key, Instance B's L1 cache was invalidated via Redis Pub/Sub within milliseconds.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Distributed Single-Flight:&lt;/strong&gt; 60 concurrent requests across both instances for the same missing key resulted in exactly &lt;strong&gt;1&lt;/strong&gt; origin fetch.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is the holy grail for microservices: you get per-process L1 speed with cluster-wide consistency.&lt;/p&gt;

&lt;h2&gt;
  
  
  Payload Size Sensitivity
&lt;/h2&gt;

&lt;p&gt;Does caching large objects hurt?&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Setup&lt;/th&gt;
&lt;th&gt;1KB Avg&lt;/th&gt;
&lt;th&gt;1MB Avg&lt;/th&gt;
&lt;th&gt;P95 (1MB)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Memory-only&lt;/td&gt;
&lt;td&gt;0.012ms&lt;/td&gt;
&lt;td&gt;0.018ms&lt;/td&gt;
&lt;td&gt;0.023ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Redis-only&lt;/td&gt;
&lt;td&gt;0.200ms&lt;/td&gt;
&lt;td&gt;4.170ms&lt;/td&gt;
&lt;td&gt;10.11ms&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Large payloads hurt &lt;strong&gt;only when Redis is on the hot path&lt;/strong&gt;. Memory hits barely changed between 1KB and 1MB, but Redis hits jumped 20x due to serialization and network transfer. Keep your L1 &lt;code&gt;maxSize&lt;/code&gt; generous for large objects.&lt;/p&gt;

&lt;h2&gt;
  
  
  Practical Takeaways
&lt;/h2&gt;

&lt;p&gt;After running these benchmarks, here are my operational recommendations:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use layered caching for multi-instance deployments.&lt;/strong&gt; The hot-hit latency is identical to memory-only (~0.005ms), but you get distributed consistency and stampede prevention.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Warm your cache before traffic spikes.&lt;/strong&gt; Cold misses under slow Redis are painful (~2.5s), and dead Redis won't gracefully degrade for new keys.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Set generous L1 limits for large payloads.&lt;/strong&gt; 1MB objects in Redis are 200x slower than in memory. Let L1 absorb that cost.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Don't rely on graceful degradation for cold keys.&lt;/strong&gt; It protects hot L1 traffic during outages, but new keys will still time out.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Trust the stampede prevention.&lt;/strong&gt; The library correctly handled 75→1 fetch reduction even with TTL expiry and cross-instance coordination.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;p&gt;Basic setup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;CacheStack&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;MemoryLayer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;RedisLayer&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;layercache&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;Redis&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;ioredis&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;cache&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;CacheStack&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;
  &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;MemoryLayer&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;ttl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;maxSize&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="nx"&gt;_000&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt;
  &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;RedisLayer&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Redis&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="na"&gt;ttl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;3600&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;])&lt;/span&gt;

&lt;span class="c1"&gt;// Automatic stampede prevention&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;user:123&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;findUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;123&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For distributed deployments, wire up the invalidation bus:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;RedisInvalidationBus&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;RedisSingleFlightCoordinator&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;layercache&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;cache&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;CacheStack&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;
  &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;MemoryLayer&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;ttl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt;
  &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;RedisLayer&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;redis&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;invalidationBus&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;RedisInvalidationBus&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; 
    &lt;span class="na"&gt;publisher&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;redis&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
    &lt;span class="na"&gt;subscriber&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Redis&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; 
  &lt;span class="p"&gt;}),&lt;/span&gt;
  &lt;span class="na"&gt;singleFlightCoordinator&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;RedisSingleFlightCoordinator&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;redis&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt;
  &lt;span class="na"&gt;gracefulDegradation&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;retryAfterMs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="nx"&gt;_000&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The Verdict
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;layercache&lt;/code&gt; delivers on its promises. The benchmark data shows it handles the three hard problems of production caching—&lt;strong&gt;stampede prevention&lt;/strong&gt;, &lt;strong&gt;graceful degradation&lt;/strong&gt;, and &lt;strong&gt;distributed consistency&lt;/strong&gt;—without sacrificing the performance of simple in-memory caching.&lt;/p&gt;

&lt;p&gt;The 100x HTTP throughput improvement and zero-fetch stampede protection make it a strong candidate for any Node.js service moving beyond a single instance.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Have you solved cache stampedes differently? I'd love to hear your war stories in the comments.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Links:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.npmjs.com/package/layercache/" rel="noopener noreferrer"&gt;npm: layercache&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/flyingsquirrel0419/layercache" rel="noopener noreferrer"&gt;GitHub Repository&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Benchmark environment: Node.js v20.20.1, Redis 7-alpine, Linux 5.15&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>typescript</category>
      <category>caching</category>
      <category>performance</category>
      <category>redis</category>
    </item>
    <item>
      <title>I built a multi-layer caching library for Node.js — would love your feedback!</title>
      <dc:creator>날다람쥐</dc:creator>
      <pubDate>Wed, 08 Apr 2026 17:51:00 +0000</pubDate>
      <link>https://forem.com/flyingsquirrel0419/i-built-a-multi-layer-caching-library-for-nodejs-would-love-your-feedback-2gm</link>
      <guid>https://forem.com/flyingsquirrel0419/i-built-a-multi-layer-caching-library-for-nodejs-would-love-your-feedback-2gm</guid>
      <description>&lt;p&gt;Hey dev.to community! 👋&lt;/p&gt;

&lt;p&gt;I've been working on a side project for a while now and finally got it to a point where I feel comfortable sharing it publicly. It's called &lt;strong&gt;layercache&lt;/strong&gt; — a multi-layer caching toolkit for Node.js.&lt;/p&gt;

&lt;p&gt;I'd really appreciate any feedback, honest criticism, or ideas from folks who deal with caching in production. Here's the quick overview:&lt;/p&gt;




&lt;h2&gt;
  
  
  Why I built this
&lt;/h2&gt;

&lt;p&gt;Almost every Node.js service I've worked on eventually hits the same caching problem:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Memory-only cache&lt;/strong&gt; → Fast, but each instance has its own isolated view of data&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Redis-only cache&lt;/strong&gt; → Shared across instances, but every request still pays a network round-trip&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hand-rolled hybrid&lt;/strong&gt; → Works at first, then you need stampede prevention, tag invalidation, stale serving, observability... and it spirals fast&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I couldn't find a library that handled all of this cleanly in one place, so I built one.&lt;/p&gt;




&lt;h2&gt;
  
  
  What layercache does
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;layercache&lt;/strong&gt; lets you stack multiple cache layers (Memory → Redis → Disk) behind a single unified API. On a cache hit, it serves from the fastest layer that has the value and backfills the faster layers above it. On a miss, the fetcher runs &lt;strong&gt;exactly once&lt;/strong&gt; — even under high concurrency — and every layer is filled on the way back.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;              ┌───────────────────────────────────────┐
your app ----&amp;gt;│             layercache                │
              │                                       │
              │  L1 Memory    ~0.01ms  (per-process)  │
              │      |                                │
              │  L2 Redis     ~0.5ms   (shared)       │
              │      |                                │
              │  L3 Disk      ~2ms     (persistent)   │
              │      |                                │
              │  Fetcher      ~20ms    (runs once)    │
              └───────────────────────────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Basic usage
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install &lt;/span&gt;layercache
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;CacheStack&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;MemoryLayer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;RedisLayer&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;layercache&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;Redis&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;ioredis&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;cache&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;CacheStack&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;
  &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;MemoryLayer&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;ttl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;maxSize&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="nx"&gt;_000&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt;       &lt;span class="c1"&gt;// L1: in-process&lt;/span&gt;
  &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;RedisLayer&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Redis&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="na"&gt;ttl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;3600&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt;  &lt;span class="c1"&gt;// L2: shared&lt;/span&gt;
&lt;span class="p"&gt;])&lt;/span&gt;

&lt;span class="c1"&gt;// Read-through: fetcher runs once, all layers filled automatically&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;user:123&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;findUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;123&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also start with just memory (no Redis required) and add layers as your needs grow.&lt;/p&gt;
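&lt;p&gt;A minimal memory-only sketch, reusing just the &lt;code&gt;CacheStack&lt;/code&gt; and &lt;code&gt;MemoryLayer&lt;/code&gt; pieces from above (&lt;code&gt;loadConfig&lt;/code&gt; stands in for whatever fetcher you'd use):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;import { CacheStack, MemoryLayer } from 'layercache'

// Single-layer stack: no Redis, no network, just a bounded in-process cache
const cache = new CacheStack([
  new MemoryLayer({ ttl: 60, maxSize: 1_000 }),
])

// Same read-through API as the multi-layer setup
const config = await cache.get('config:global', () =&amp;gt; loadConfig())
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;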




&lt;h2&gt;
  
  
  Key features I'm most proud of
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Stampede prevention&lt;/strong&gt; — 100 concurrent requests for the same key trigger only 1 fetcher execution. Distributed dedup via Redis locks works across multiple server instances too.&lt;/p&gt;
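&lt;p&gt;To make that concrete, here's a small sketch (reusing the &lt;code&gt;cache&lt;/code&gt; from the basic usage example) that fires 100 concurrent reads and counts how many times the fetcher actually runs:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;let fetches = 0
const fetcher = async () =&amp;gt; {
  fetches++
  await new Promise((r) =&amp;gt; setTimeout(r, 50)) // simulate a slow DB call
  return { id: 123 }
}

// 100 concurrent requests for the same key share one in-flight fetch
await Promise.all(
  Array.from({ length: 100 }, () =&amp;gt; cache.get('user:123', fetcher))
)

console.log(fetches) // 1: followers wait for the leader's result
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;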

&lt;p&gt;&lt;strong&gt;Tag-based invalidation&lt;/strong&gt; — Invalidate groups of related keys by tag, including across all layers at once. Useful for things like "invalidate all user-related cache entries."&lt;/p&gt;
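&lt;p&gt;A simplified sketch of the idea, assuming writes take a &lt;code&gt;tags&lt;/code&gt; option and invalidation goes through an &lt;code&gt;invalidateTag()&lt;/code&gt; method (names illustrative, so check the API docs for the exact shape):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;// NOTE: option and method names here are illustrative, not the confirmed API
await cache.set('user:123', user, { tags: ['user', 'user:123'] })
await cache.set('user:123:posts', posts, { tags: ['user:123'] })

// Drop every entry tagged 'user:123', across all layers at once
await cache.invalidateTag('user:123')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;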

&lt;p&gt;&lt;strong&gt;Stale-while-revalidate / stale-if-error&lt;/strong&gt; — Serve the stale cached value immediately while refreshing in the background, or keep serving stale data when the upstream is down.&lt;/p&gt;
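&lt;p&gt;Per call, that could look like the following sketch (&lt;code&gt;fetchFeed&lt;/code&gt; is a placeholder fetcher, and the option names are simplified here; the full option surface is in the API docs):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;// Option names are illustrative; see docs/api.md for the exact shape
const feed = await cache.get('feed:home', fetchFeed, {
  ttl: 30,                   // fresh for 30 seconds
  staleWhileRevalidate: 300, // then: serve stale instantly, refresh in background
  staleIfError: 3_600,       // upstream down: keep serving stale up to an hour
})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;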

&lt;p&gt;&lt;strong&gt;Framework integrations&lt;/strong&gt; — Middleware helpers for Express, Fastify, Hono, tRPC, GraphQL, and a NestJS module with a &lt;code&gt;@Cacheable()&lt;/code&gt; decorator.&lt;/p&gt;
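&lt;p&gt;As a sketch of how the Express helper wires in (import path and option names simplified here, so treat them as illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;import express from 'express'
// Illustrative import path; the real subpath export may be named differently
import { cacheMiddleware } from 'layercache/express'

const app = express()

// Cache GET responses for 30 seconds, keyed by request URL
app.get('/api/products', cacheMiddleware(cache, { ttl: 30 }), (_req, res) =&amp;gt; {
  res.json({ products: [] }) // placeholder handler
})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;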

&lt;p&gt;&lt;strong&gt;Observability out of the box&lt;/strong&gt; — Prometheus exporter, OpenTelemetry tracing, per-layer latency metrics, event hooks, and an HTTP stats endpoint.&lt;/p&gt;
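&lt;p&gt;A sketch of the wiring (the exporter class and hook names are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;// Illustrative names; the exporter class and hook signatures may differ
import { PrometheusExporter } from 'layercache/metrics'

const exporter = new PrometheusExporter(cache)

// Event hooks, e.g. logging every miss together with the layer that missed
cache.on('miss', ({ key, layer }) =&amp;gt; console.log(`miss ${key} at ${layer}`))

// Expose the scrape endpoint alongside the app
app.get('/metrics', async (_req, res) =&amp;gt; res.send(await exporter.render()))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;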

&lt;p&gt;&lt;strong&gt;Admin CLI&lt;/strong&gt; — &lt;code&gt;npx layercache stats|keys|invalidate&lt;/code&gt; for Redis-backed caches.&lt;/p&gt;




&lt;h2&gt;
  
  
  NestJS example (because I use NestJS a lot)
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// app.module.ts&lt;/span&gt;
&lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;Module&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;imports&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="nx"&gt;CacheStackModule&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;forRoot&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;layers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;MemoryLayer&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;ttl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt;
        &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;RedisLayer&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;redis&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;ttl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;300&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
      &lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
  &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;AppModule&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;

&lt;span class="c1"&gt;// user.service.ts&lt;/span&gt;
&lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;Injectable&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;UserService&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;constructor&lt;/span&gt;&lt;span class="p"&gt;(@&lt;/span&gt;&lt;span class="nd"&gt;InjectCacheStack&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="k"&gt;readonly&lt;/span&gt; &lt;span class="nx"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;CacheStack&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;

  &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nf"&gt;getUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`user:&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;findUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
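&lt;p&gt;The &lt;code&gt;@Cacheable()&lt;/code&gt; decorator mentioned above can replace the manual &lt;code&gt;this.cache.get(...)&lt;/code&gt; wrapping entirely. A sketch (decorator options and import path illustrative; &lt;code&gt;PostRepository&lt;/code&gt; is a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;import { Injectable } from '@nestjs/common'
// Illustrative import path and option names for the decorator
import { Cacheable } from 'layercache/nestjs'

@Injectable()
export class PostService {
  constructor(private readonly db: PostRepository) {}

  // Equivalent to the manual this.cache.get(...) wrapping above
  @Cacheable({ key: (id: number) =&amp;gt; `post:${id}`, ttl: 300 })
  async getPost(id: number) {
    return this.db.findPost(id)
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;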






&lt;h2&gt;
  
  
  Benchmark numbers (on my machine, grain of salt)
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Scenario&lt;/th&gt;
&lt;th&gt;Avg Latency&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;L1 memory hit&lt;/td&gt;
&lt;td&gt;~0.006 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;L2 Redis hit&lt;/td&gt;
&lt;td&gt;~0.020 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;No cache (simulated DB)&lt;/td&gt;
&lt;td&gt;~1.08 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Stampede prevention: 100 concurrent requests → 1 fetcher execution.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I'm looking for feedback on
&lt;/h2&gt;

&lt;p&gt;Honestly, everything! But a few things I'm specifically unsure about:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;API design&lt;/strong&gt; — Does the &lt;code&gt;CacheStack&lt;/code&gt; + layer composition model feel intuitive? Are there footguns I'm missing?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The feature set&lt;/strong&gt; — Is this too much? Too little? Are there things here that should just be separate libraries?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Production readiness&lt;/strong&gt; — What would you need to see before using something like this in production? (more tests? better docs? battle-tested examples?)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Naming / discoverability&lt;/strong&gt; — &lt;code&gt;layercache&lt;/code&gt; as a name... does it communicate what it does clearly enough?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Anything else&lt;/strong&gt; — I'm sure there are patterns or edge cases I haven't thought of.&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;📦 npm: &lt;a href="https://www.npmjs.com/package/layercache" rel="noopener noreferrer"&gt;npmjs.com/package/layercache&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;🐙 GitHub: &lt;a href="https://github.com/flyingsquirrel0419/layercache" rel="noopener noreferrer"&gt;github.com/flyingsquirrel0419/layercache&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;📖 Docs: &lt;a href="https://github.com/flyingsquirrel0419/layercache/blob/main/docs/api.md" rel="noopener noreferrer"&gt;API Reference&lt;/a&gt; | &lt;a href="https://github.com/flyingsquirrel0419/layercache/blob/main/docs/tutorial.md" rel="noopener noreferrer"&gt;Tutorial&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you try it out or browse the source and have thoughts — good, bad, or indifferent — I'm all ears. Comments here, GitHub Issues, or Discussions all work.&lt;/p&gt;

&lt;p&gt;Thanks for reading! 🙏&lt;/p&gt;

</description>
      <category>typescript</category>
      <category>redis</category>
    </item>
  </channel>
</rss>
