<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: LeetDezine</title>
    <description>The latest articles on Forem by LeetDezine (@leetdezine).</description>
    <link>https://forem.com/leetdezine</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3886899%2Fb08b199f-7074-42c4-a66a-5cb82c87672a.png</url>
      <title>Forem: LeetDezine</title>
      <link>https://forem.com/leetdezine</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/leetdezine"/>
    <language>en</language>
    <item>
      <title>Why Is Redis INCR a Bad Fit for a Public URL Shortener?</title>
      <dc:creator>LeetDezine</dc:creator>
      <pubDate>Thu, 23 Apr 2026 17:18:06 +0000</pubDate>
      <link>https://forem.com/leetdezine/url-shortener-traps-that-look-correct-until-they-break-2o8g</link>
      <guid>https://forem.com/leetdezine/url-shortener-traps-that-look-correct-until-they-break-2o8g</guid>
      <description>&lt;p&gt;&lt;a href="https://leetdezine.com/?utm_source=devto" rel="noopener noreferrer"&gt;LeetDezine&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp3plm3uket9rftps6qq9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp3plm3uket9rftps6qq9.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Redis INCR is one of those solutions that looks perfect the first time you see it. Atomic counter increments. Every call returns a unique integer. Base62-encode it and you have a short code — zero collision checks, zero retries, no background service.&lt;/p&gt;

&lt;p&gt;It's cleaner than anything else on the board. So why does every serious URL shortener reject it?&lt;/p&gt;

&lt;p&gt;The answer has nothing to do with code generation.&lt;/p&gt;


&lt;h2&gt;
  
  
  How Redis INCR Works (And Why It's Technically Correct)
&lt;/h2&gt;

&lt;p&gt;The mechanics are clean:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Creation request arrives
→ Redis: INCR url_counter → returns 1000000
→ Base62 encode 1000000:

  Divide repeatedly, collect remainders, stop when quotient = 0:

  1000000 ÷ 62 = 16129  remainder 2  → '2'  (quotient != 0, keep going)
  16129   ÷ 62 = 260    remainder 9  → '9'  (quotient != 0, keep going)
  260     ÷ 62 = 4      remainder 12 → 'C'  (quotient != 0, keep going)
  4       ÷ 62 = 0      remainder 4  → '4'  (quotient = 0, stop)

  Read remainders bottom to top: "4C92" → pad to 6 chars → "004C92"

→ INSERT short_code = "004C92"
→ Done.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
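&lt;p&gt;The division walk above maps directly to a few lines of Python. A minimal sketch, assuming the 0-9 / A-Z / a-z alphabet the example implies:&lt;/p&gt;

```python
# A minimal base62 encoder matching the walk above.
ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"

def encode_base62(n, width=6):
    """Divide repeatedly, collect remainders, read them back to front."""
    digits = []
    while n:
        n, rem = divmod(n, 62)
        digits.append(ALPHABET[rem])
    # rjust pads with leading zeros, e.g. "4C92" becomes "004C92"
    return "".join(reversed(digits)).rjust(width, "0")

print(encode_base62(1000000))  # "004C92"
```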



&lt;p&gt;Redis is single-threaded. &lt;code&gt;INCR&lt;/code&gt; is atomic — it increments and returns in a single operation. Two simultaneous calls always get different values:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;App server 1: INCR → 1000000
App server 2: INCR → 1000001  ← different, guaranteed
App server 3: INCR → 1000002  ← different, guaranteed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No race condition. No collision. No retry loop. Encoding a unique number always produces a unique code. The math is correct.&lt;/p&gt;
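&lt;p&gt;Put together, the whole INCR hot path is a handful of lines. A sketch assuming a redis-py-style client; the function only needs an &lt;code&gt;incr&lt;/code&gt; method, so any compatible client works:&lt;/p&gt;

```python
ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"

def encode_base62(n, width=6):
    digits = []
    while n:
        n, rem = divmod(n, 62)
        digits.append(ALPHABET[rem])
    return "".join(reversed(digits)).rjust(width, "0")

def create_short_code(client, counter_key="url_counter"):
    # client.incr is atomic on the Redis side: concurrent callers can
    # never observe the same counter value.
    return encode_base62(client.incr(counter_key))

# Usage against a real Redis (not run here):
#   import redis
#   code = create_short_code(redis.Redis())
```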

&lt;p&gt;So what's the problem?&lt;/p&gt;




&lt;h2&gt;
  
  
  Problem 1 — Sequential Codes Are a Privacy Violation
&lt;/h2&gt;

&lt;p&gt;Counter values are sequential. If your user receives &lt;code&gt;yoursite.com/004C92&lt;/code&gt;, they immediately know:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;yoursite.com/004C9L  ← previous URL, someone else's
yoursite.com/004C9N  ← next URL, someone else's
yoursite.com/004C9K  ← keep going...
yoursite.com/004C9J  ← and going...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;They can walk the entire database. Every URL in your system is discoverable by incrementing one character.&lt;/p&gt;
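&lt;p&gt;That walk is trivial to script. A sketch of the enumeration, assuming the same base62 alphabet (&lt;code&gt;decode_base62&lt;/code&gt; and &lt;code&gt;neighbors&lt;/code&gt; are illustrative names, not from any real codebase):&lt;/p&gt;

```python
ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"

def decode_base62(code):
    n = 0
    for ch in code:
        n = n * 62 + ALPHABET.index(ch)
    return n

def encode_base62(n, width=6):
    digits = []
    while n:
        n, rem = divmod(n, 62)
        digits.append(ALPHABET[rem])
    return "".join(reversed(digits)).rjust(width, "0")

def neighbors(code, steps=2):
    # Decode, step the counter up and down, re-encode.
    n = decode_base62(code)
    deltas = list(range(-steps, 0)) + list(range(1, steps + 1))
    return [encode_base62(n + d) for d in deltas]

print(neighbors("004C92"))  # ['004C90', '004C91', '004C93', '004C94']
```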

&lt;p&gt;For an internal tool where all users are trusted, this might be fine. For a public shortener — where someone might shorten a pre-announcement link, an internal doc, a private file, a personal photo album — it's a real privacy violation. Your users have a reasonable expectation that their short link isn't guessable.&lt;/p&gt;

&lt;p&gt;Sequential codes make that expectation impossible to satisfy.&lt;/p&gt;




&lt;h2&gt;
  
  
  Problem 2 — Redis Becomes a Hard Dependency on Every Creation
&lt;/h2&gt;

&lt;p&gt;With INCR, the hot path looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Request → INCR Redis → encode → INSERT DB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Redis is in the critical path of every single URL creation. If Redis goes down:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Redis down
→ INCR fails
→ No counter value
→ Creation fails immediately
→ Zero fallback
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There's no graceful degradation. No buffer. No local state to drain. The moment Redis is unreachable, your creation endpoint returns errors.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Fix: KGS + Pre-Generated Key Pool
&lt;/h2&gt;

&lt;p&gt;The Key Generation Service approach flips the model. Instead of generating a key at request time, keys are generated in advance and stored in a Redis pool. When a request arrives, the app server just pops one.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Before any request arrives:
→ KGS generates random base62 codes offline
→ Loads them into Redis list (RPUSH)

When creation request arrives:
→ App server pops key from local batch
→ INSERT into DB
→ Done — zero Redis call on hot path
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
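&lt;p&gt;The offline filler is the only new moving part. A hedged sketch in Python, assuming redis-py; the pool name &lt;code&gt;key_pool&lt;/code&gt; and the 6-character length are illustrative:&lt;/p&gt;

```python
import secrets

ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"

def random_code(length=6):
    # secrets rather than random: these codes must be unpredictable.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def fill_pool(client, count, pool_key="key_pool"):
    # A production KGS would also de-duplicate against already-issued
    # keys (e.g. a "used" set); omitted here for brevity.
    codes = [random_code() for _ in range(count)]
    client.rpush(pool_key, *codes)
    return codes

# Usage against a real Redis (not run here):
#   import redis
#   fill_pool(redis.Redis(), 10000)
```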



&lt;p&gt;&lt;strong&gt;Why LPOP is atomic:&lt;/strong&gt; Redis is single-threaded. Even if 20 app servers call LPOP at the same millisecond, Redis processes them one at a time:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;App server 1: LPOP → "x7k2p9" (removed)
App server 2: LPOP → "k2m8q1" (removed)
App server 3: LPOP → "p9n3r7" (removed)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Physically impossible for two LPOP calls to return the same key. No locks needed. No &lt;code&gt;SELECT FOR UPDATE&lt;/code&gt;. Atomicity comes from the architecture.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The batch pre-fetch:&lt;/strong&gt; Each app server grabs 100 keys at startup and keeps them in local memory. At 1k creations/sec across 20 servers, Redis traffic drops from 1000 LPOP/sec to ~10 batch refills/sec. 100x reduction.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;App server starts:
→ LPOP 100 keys → store in local queue

Creation request:
→ Pop from local queue (zero network call)
→ Queue empty → refill from Redis
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
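&lt;p&gt;The local batch is a small wrapper. A sketch assuming redis-py 4.x, whose &lt;code&gt;lpop(name, count)&lt;/code&gt; maps to LPOP-with-count (Redis 6.2+); the class and pool names are illustrative:&lt;/p&gt;

```python
class LocalKeyBatch:
    """Per-server cache of pre-generated keys."""

    def __init__(self, client, pool_key="key_pool", batch_size=100):
        self.client = client
        self.pool_key = pool_key
        self.batch_size = batch_size
        self.keys = []

    def refill(self):
        # One round trip pops up to batch_size keys at once.
        popped = self.client.lpop(self.pool_key, self.batch_size)
        self.keys.extend(popped or [])

    def next_key(self):
        # Hot path: a network call happens only when the batch runs dry.
        # Raises IndexError if the pool itself is exhausted.
        if not self.keys:
            self.refill()
        return self.keys.pop(0)
```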



&lt;p&gt;&lt;strong&gt;What this fixes for Redis failure:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Redis down
→ App servers drain local batch (100 keys × 20 servers = 2000 keys)
→ At 1k creations/sec → ~2 seconds of local runway
→ Circuit breaker engages, Redis recovers
→ Graceful degradation instead of hard failure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Side by Side
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Redis INCR&lt;/th&gt;
&lt;th&gt;KGS + Pool&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Collision checks&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Code predictability&lt;/td&gt;
&lt;td&gt;Sequential — enumerable&lt;/td&gt;
&lt;td&gt;Random — private&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Redis failure&lt;/td&gt;
&lt;td&gt;Creation fails instantly&lt;/td&gt;
&lt;td&gt;Local batch buys time&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Operational cost&lt;/td&gt;
&lt;td&gt;Very simple&lt;/td&gt;
&lt;td&gt;Small background worker&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Right for&lt;/td&gt;
&lt;td&gt;Internal tools&lt;/td&gt;
&lt;td&gt;Public URL shortener&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  The Pattern
&lt;/h2&gt;

&lt;p&gt;Redis INCR fails not because of what it does, but because of what it leaks. Sequential uniqueness and privacy are in direct conflict. You can't have both with a counter.&lt;/p&gt;

&lt;p&gt;The KGS + pool approach keeps the "no collision checks, no retries" guarantee while adding randomness and resilience. The operational cost is a 50-line background worker and one metric to monitor. The privacy and fault tolerance gains are worth it for any public-facing system.&lt;/p&gt;

&lt;p&gt;The full URL shortener case study — including requirements, DB design, caching, peak traffic, and every failure mode — is at:&lt;/p&gt;

&lt;p&gt;→ &lt;a href="https://leetdezine.com/?utm_source=devto" rel="noopener noreferrer"&gt;https://leetdezine.com/?utm_source=devto&lt;/a&gt;&lt;/p&gt;

</description>
      <category>systemdesign</category>
      <category>distributedsystems</category>
      <category>backend</category>
      <category>redis</category>
    </item>
    <item>
      <title>Why Random UUIDs are Killing Your Database Performance</title>
      <dc:creator>LeetDezine</dc:creator>
      <pubDate>Mon, 20 Apr 2026 10:15:57 +0000</pubDate>
      <link>https://forem.com/leetdezine/why-random-uuids-are-killing-your-database-performance-h59</link>
      <guid>https://forem.com/leetdezine/why-random-uuids-are-killing-your-database-performance-h59</guid>
      <description>&lt;p&gt;Every developer starts with a UUID. It’s the industry standard for a reason: zero coordination, zero DB checks, and zero single point of failure. Any machine can generate one and be 100% sure it’s unique.&lt;/p&gt;

&lt;p&gt;But as your system scales, that "standard" choice starts to hurt.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem: UUIDs vs. Databases
&lt;/h2&gt;

&lt;p&gt;If you're using &lt;strong&gt;UUID v4&lt;/strong&gt; (completely random), you're essentially handing your database a grenade. &lt;/p&gt;

&lt;p&gt;Because the IDs are random, every new insert lands in a random spot in your B-Tree index. This causes &lt;strong&gt;page splits&lt;/strong&gt;, fragments your storage, and slows down your writes as the table grows. Plus, at 128 bits (16 bytes), they're twice as large as a standard &lt;code&gt;BIGINT&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Evolution of ID Generation
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Single Server Counter:&lt;/strong&gt; Simple, but if the server dies, your ID generation stops (SPOF).&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;UUID v4:&lt;/strong&gt; Globally unique, but random and huge. No time-sortability.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;UUID v7:&lt;/strong&gt; The modern middle ground. It's still 16 bytes, but it's &lt;strong&gt;time-sortable&lt;/strong&gt;, which fixes the database page-split problem.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Ticket Server (Redis):&lt;/strong&gt; Centralized counter. Fast, but now your ID generation depends on Redis availability.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Snowflake IDs:&lt;/strong&gt; The "Big Tech" solution (used by Twitter, Discord, and Instagram).&lt;/li&gt;
&lt;/ol&gt;
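&lt;p&gt;The v4-vs-v7 difference is easy to see in code. A sketch that hand-builds a v7-style ID with the stdlib (Python’s own &lt;code&gt;uuid.uuid7&lt;/code&gt; only arrived in 3.14, and the version/variant bits are skipped here for brevity):&lt;/p&gt;

```python
import os
import time
import uuid

def uuid7_like():
    # 48-bit millisecond timestamp in the high bits, 80 random bits below.
    ts = int(time.time() * 1000) % (2**48)
    rand = int.from_bytes(os.urandom(10), "big")
    return uuid.UUID(int=ts * (2**80) + rand)

a = uuid7_like()
time.sleep(0.002)
b = uuid7_like()
# uuid4 values have no order; these do: the earlier ID is the smaller int.
print(min(a.int, b.int) == a.int)
```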

&lt;h3&gt;
  
  
  Why Snowflake Wins
&lt;/h3&gt;

&lt;p&gt;Snowflake IDs pack everything you need into just &lt;strong&gt;64 bits (8 bytes)&lt;/strong&gt;. They fit perfectly into a standard &lt;code&gt;BIGINT&lt;/code&gt;, making them fast to index and easy to store.&lt;/p&gt;

&lt;p&gt;Here is the breakdown of how those 64 bits are structured:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;1 bit (Sign):&lt;/strong&gt; Always 0 (keeps the number positive).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;41 bits (Timestamp):&lt;/strong&gt; Milliseconds since a custom epoch. This gives you ~69 years of IDs and makes them &lt;strong&gt;natively time-sortable&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;10 bits (Machine ID):&lt;/strong&gt; Allows up to 1,024 independent nodes to generate IDs simultaneously without talking to each other.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;12 bits (Sequence):&lt;/strong&gt; A counter for IDs generated in the same millisecond on the same machine (up to 4,096 IDs/ms).&lt;/li&gt;
&lt;/ul&gt;
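&lt;p&gt;The bit layout above can be sketched in a few lines. Field widths follow the article; the epoch and helper names are illustrative, and shifts are written as multiplications by powers of two:&lt;/p&gt;

```python
EPOCH_MS = 1288834974657  # Twitter's original epoch; any fixed point works

def compose_snowflake(timestamp_ms, machine_id, sequence):
    # Sign bit stays 0 because the result is below 2**63.
    return (
        ((timestamp_ms - EPOCH_MS) % (2**41)) * (2**22)
        + (machine_id % 1024) * (2**12)
        + (sequence % 4096)
    )

def machine_of(snowflake_id):
    return (snowflake_id // (2**12)) % 1024

def sequence_of(snowflake_id):
    return snowflake_id % 4096

sid = compose_snowflake(EPOCH_MS + 1, machine_id=5, sequence=7)
print(machine_of(sid), sequence_of(sid))  # 5 7
```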

&lt;h3&gt;
  
  
  The Comparison
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Property&lt;/th&gt;
&lt;th&gt;UUID v4&lt;/th&gt;
&lt;th&gt;UUID v7&lt;/th&gt;
&lt;th&gt;Snowflake&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Size&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;128-bit&lt;/td&gt;
&lt;td&gt;128-bit&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;64-bit&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Sortable&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;❌ No&lt;/td&gt;
&lt;td&gt;✅ Yes&lt;/td&gt;
&lt;td&gt;✅ Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Coordination&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅ None&lt;/td&gt;
&lt;td&gt;✅ None&lt;/td&gt;
&lt;td&gt;⚠️ Machine IDs only&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;DB Friendly&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;❌ No&lt;/td&gt;
&lt;td&gt;✅ Yes&lt;/td&gt;
&lt;td&gt;✅ &lt;strong&gt;Best&lt;/strong&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Which one should you choose?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Quick Prototypes:&lt;/strong&gt; Stick with &lt;strong&gt;UUID v4&lt;/strong&gt;. It’s easy and requires zero setup.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Modern Web Apps:&lt;/strong&gt; Move to &lt;strong&gt;UUID v7&lt;/strong&gt;. You get the simplicity of UUIDs with the performance of time-sortable IDs.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;High-Scale Systems:&lt;/strong&gt; Go with &lt;strong&gt;Snowflake&lt;/strong&gt;. When every byte and every millisecond of database latency matters, 64-bit sortable IDs are the only way to go.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The Golden Rule:&lt;/strong&gt; You can't just "trim" a UUID to make it shorter. Trimming 128 bits down to 6 characters for a "short link" throws away 92 bits of entropy, turning a global guarantee into a collision nightmare.&lt;/p&gt;
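&lt;p&gt;The arithmetic behind that rule is quick to check:&lt;/p&gt;

```python
import math

space = 62 ** 6                  # distinct 6-char base62 codes
bits = math.log2(space)
print(round(bits, 1))            # 35.7 bits of entropy in 6 characters
print(round(128 - bits))         # roughly 92 bits thrown away

# Birthday bound: collisions become likely near sqrt(space) IDs,
# i.e. after only a few hundred thousand trimmed identifiers.
print(round(math.sqrt(space)))   # 238328
```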

&lt;p&gt;For a full deep dive into the math and architecture behind distributed IDs, check out the case study at &lt;a href="https://leetdezine.com/03-Case-Studies/01-Foundation/01-Unique-ID-Generator/" rel="noopener noreferrer"&gt;LeetDezine&lt;/a&gt;&lt;/p&gt;

</description>
      <category>systemdesign</category>
      <category>snowflake</category>
      <category>distributedsystems</category>
      <category>backend</category>
    </item>
  </channel>
</rss>
