<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: rishabh pahwa</title>
    <description>The latest articles on Forem by rishabh pahwa (@rishabh_pahwa_1a2b93e60b0).</description>
    <link>https://forem.com/rishabh_pahwa_1a2b93e60b0</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3923022%2F72c2c898-8a65-4376-847e-b979b04f6f40.png</url>
      <title>Forem: rishabh pahwa</title>
      <link>https://forem.com/rishabh_pahwa_1a2b93e60b0</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/rishabh_pahwa_1a2b93e60b0"/>
    <language>en</language>
    <item>
      <title>Why "No Rollback" Breaks Production</title>
      <dc:creator>rishabh pahwa</dc:creator>
      <pubDate>Fri, 15 May 2026 08:44:38 +0000</pubDate>
      <link>https://forem.com/rishabh_pahwa_1a2b93e60b0/why-no-rollback-breaks-production-23ea</link>
      <guid>https://forem.com/rishabh_pahwa_1a2b93e60b0/why-no-rollback-breaks-production-23ea</guid>
      <description>&lt;p&gt;Most data migration strategies focus on getting to the new state. But your actual success metric isn't "migration complete," it's "can we revert this change without data loss?" A robust rollback mechanism isn't a luxury; it's the only way to guarantee business continuity when migrations inevitably hit a snag.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why "No Rollback" Breaks Production
&lt;/h2&gt;

&lt;p&gt;Imagine your team deploys a new feature requiring a crucial schema change—say, adding a &lt;code&gt;user_preferences&lt;/code&gt; JSONB column with a &lt;code&gt;NOT NULL&lt;/code&gt; constraint. You run the migration, deploy the new application code, and for the first 10 minutes, everything looks green. Then, an edge case surfaces: existing users with implicit empty preference data (handled by old app logic) start seeing 500 errors because the new application expects a specific, non-null JSON structure. Revenue instantly drops by 15%, and PagerDuty is screaming.&lt;/p&gt;

&lt;p&gt;Without a safe rollback strategy, you're in a nightmare scenario:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Roll forward with a hotfix:&lt;/strong&gt; Rushing a fix under pressure is a recipe for more bugs, especially if the underlying data is already corrupted or partially transformed.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Restore from backup:&lt;/strong&gt; This means hours of downtime and guaranteed data loss since the backup was taken. Any new data written in the last few hours is gone.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Manual data repair:&lt;/strong&gt; An error-prone, slow process for critical data, often involving direct database manipulation, leading to further inconsistency.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;All options are unacceptable in a production system handling high traffic or sensitive data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Designing for Zero-Data-Loss Rollback: The Phased Migration
&lt;/h2&gt;

&lt;p&gt;The core idea for safe rollbacks is to ensure your &lt;em&gt;old&lt;/em&gt; system continues to operate correctly throughout the migration, especially for writes, even as you transition to a new schema or database. This allows you to revert to the old application version without data loss if something breaks.&lt;/p&gt;

&lt;p&gt;This typically involves a phased approach built around the "dual write" pattern.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;           +--------------------+
           |                    |
           |   Application v1   |
           |  (Reads/Writes Old)|
           |                    |
           +----------+---------+
                      |
                      | Reads/Writes (Old Schema)
                      v
            +-------------------+
            |                   |
            |    Old Database   |
            |    (Old Schema)   |
            |                   |
            +-------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Phase 1: Dual Write Introduction (No Read Change)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Your new application version (v2) is deployed alongside v1. Critically, v2 &lt;em&gt;writes to both the old schema and the new schema&lt;/em&gt;, while both v1 and v2 continue to read from the old schema. This ensures the old path is always kept up-to-date and valid.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;           +--------------------+      +--------------------+
           |    Application v1  |      |    Application v2  |
           | (Reads/Writes Old) |      | (Writes Old &amp;amp; New) |
           |                    |      | (Reads Old)        |
           +----------+---------+      +----------+---------+
                      |                             |
                      | Reads/Writes (Old Schema)   | Writes (New Schema)
                      v                             v
            +-------------------+           +-------------------+
            |                   |           |                   |
            |    Old Database   |&amp;lt;----------|    New Database   |
            |    (Old Schema)   |           |    (New Schema)   |
            |                   |           |                   |
            +-------------------+           +-------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
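
&lt;p&gt;A minimal sketch of what the v2 write path might look like in Phase 1 (the &lt;code&gt;old_db&lt;/code&gt;/&lt;code&gt;new_db&lt;/code&gt; clients and &lt;code&gt;metrics&lt;/code&gt; helper are hypothetical): the old schema is written first because it is still the source of truth, and new-schema failures are observable but non-fatal.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import logging

log = logging.getLogger("dual_write")

def save_user_preferences(user_id, prefs, old_db, new_db, metrics):
    # The old schema is still the source of truth: write it first and
    # let failures propagate. Rollback safety depends on the old path
    # never silently breaking.
    old_db.upsert_preferences(user_id, prefs)

    # The new-schema write is best-effort during Phase 1. A failure here
    # must not fail the request, but it must be observable, because it
    # creates a gap the backfill and reconciliation have to close.
    try:
        new_db.upsert_preferences(user_id, prefs)
    except Exception:
        log.exception("new-schema write failed for user %s", user_id)
        metrics.increment("dual_write.new_schema_failure")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The ordering is deliberate: failing the request when the old-schema write fails is exactly what preserves the rollback guarantee.&lt;/p&gt;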



&lt;p&gt;&lt;strong&gt;Phase 2: Backfill Historical Data&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;While dual writes ensure new data is captured in both places, existing historical data only lives in the old schema. An asynchronous job is run to backfill and transform this data from the old schema into the new schema. This must be idempotent and carefully handle concurrent writes from Phase 1.&lt;/p&gt;
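
&lt;p&gt;A sketch of what such a backfill job might look like (the database clients, cursor handling, and batch size are illustrative): upserts keyed on the primary key make reruns safe, and an &lt;code&gt;updated_at&lt;/code&gt; guard keeps the job from overwriting fresher rows produced by the Phase 1 dual writes.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;BATCH = 1000

def transform(row):
    # Hypothetical old-to-new schema mapping.
    return {"id": row.id, "preferences": row.preferences or {}}

def backfill(old_db, new_db):
    cursor = 0  # resume token: last old-schema primary key processed
    while True:
        rows = old_db.fetch_after(cursor, limit=BATCH)
        if not rows:
            break
        for row in rows:
            # Keyed upsert: re-running a batch is a no-op. The
            # updated_at guard skips rows that a concurrent dual
            # write has already populated with fresher data.
            new_db.upsert_if_older(
                key=row.id,
                data=transform(row),
                source_updated_at=row.updated_at,
            )
        cursor = rows[-1].id
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;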

&lt;p&gt;&lt;strong&gt;Phase 3: Read Switchover (Still Dual Writing)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once the backfill is complete and verified, you update Application v2 to read primarily from the new schema. Application v1 continues to read and write to the old schema. Dual writes from v2 continue, ensuring both databases remain synchronized.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;           +--------------------+      +--------------------+
           |    Application v1  |      |    Application v2  |
           | (Reads/Writes Old) |      | (Writes Old &amp;amp; New) |
           |                    |      | (Reads New)        |
           +----------+---------+      +----------+---------+
                      |                             |
                      | Reads/Writes (Old Schema)   | Writes (New Schema)
                      v                             v
            +-------------------+           +-------------------+
            |                   |           |                   |
            |    Old Database   |&amp;lt;----------|    New Database   |
            |    (Old Schema)   |           |    (New Schema)   |
            |                   |           |                   |
            +-------------------+           +-------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Rollback Point:&lt;/strong&gt; If at any point during Phases 1-3 an issue arises, you can instantly roll back from &lt;code&gt;Application v2&lt;/code&gt; to &lt;code&gt;Application v1&lt;/code&gt;. Since &lt;code&gt;Application v1&lt;/code&gt; was always writing to the old schema, and &lt;code&gt;Application v2&lt;/code&gt; was also writing to it, the critical data for your production system remains intact and consistent in the old schema. The new schema might contain inconsistent or orphaned data, but your core business operations are unaffected.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Phase 4: Cutover and Cleanup&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once confidence is high (e.g., after weeks of monitoring with no issues), you can remove the dual writes from v2 and eventually deprecate/drop the old schema or database.&lt;/p&gt;

&lt;h2&gt;
  
  
  Real-world Application: Stripe's Data Migrations
&lt;/h2&gt;

&lt;p&gt;Stripe, processing billions of API calls daily, cannot afford data loss or significant downtime. Their approach to critical data migrations (e.g., changing how &lt;code&gt;PaymentIntent&lt;/code&gt; objects are stored, or migrating customer data between sharded databases) heavily relies on phased strategies for zero-downtime, zero-data-loss transitions.&lt;/p&gt;

&lt;p&gt;When migrating to new data models or infrastructure, Stripe often employs a variation of the dual-write pattern, sometimes extended with a "shadow-read" phase. For instance, if migrating a service to a new database or schema, they might:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Replicate data:&lt;/strong&gt; Stream existing data from the old system to the new, ensuring eventual consistency.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Dual-write:&lt;/strong&gt; All new writes go to &lt;em&gt;both&lt;/em&gt; the old and new systems. This is critical for rollback: the old system always has the latest state.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Shadow-read/Verify:&lt;/strong&gt; New application code starts reading from the new system but &lt;em&gt;compares the result with the old system&lt;/em&gt;. If there's a discrepancy, it logs an error but serves the response from the old system. This acts as a "dark launch" validation, catching data inconsistencies before they impact users (a minimal sketch follows this list).&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Phased Read Cutover:&lt;/strong&gt; Once shadow-reads are validated (e.g., 99.999% consistency over days), reads are progressively switched to the new system, starting with a small percentage of traffic (canary deployment) and gradually increasing.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Remove Dual-write:&lt;/strong&gt; Once all traffic is routed to the new system and it's stable, the dual-write logic is removed.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Decommission:&lt;/strong&gt; The old system is eventually decommissioned.&lt;/li&gt;
&lt;/ol&gt;
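
&lt;p&gt;A minimal sketch of that shadow-read step (the two stores and the &lt;code&gt;metrics&lt;/code&gt; client are hypothetical): the old system stays authoritative, and discrepancies are recorded rather than surfaced.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import logging

log = logging.getLogger("shadow_read")

def get_payment_record(record_id, old_store, new_store, metrics):
    # The old system remains the source of truth: its result is
    # always what the caller receives.
    old_result = old_store.get(record_id)

    # Shadow-read the new system and compare. A mismatch is logged
    # and counted, never returned to the user.
    try:
        new_result = new_store.get(record_id)
        if new_result != old_result:
            metrics.increment("shadow_read.mismatch")
            log.warning("divergence for record %s", record_id)
    except Exception:
        metrics.increment("shadow_read.error")

    return old_result
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Tracking the mismatch rate over time is what gives you a concrete consistency number (like the 99.999% threshold above) to gate the read cutover on.&lt;/p&gt;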

&lt;p&gt;This process can take weeks or even months for critical systems, providing an extremely long window for verification and instant rollback at any stage before the old system is retired. The overhead of writing twice (or reading twice) is a recognized trade-off for business continuity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Common Mistakes Engineers Make
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Forgetting Data Integrity Constraints:&lt;/strong&gt; Focusing only on changing column types but neglecting the &lt;code&gt;NOT NULL&lt;/code&gt; constraints or unique indexes. If you add &lt;code&gt;NOT NULL&lt;/code&gt; to a column that has existing &lt;code&gt;NULL&lt;/code&gt; values, your migration will fail unless you've backfilled defaults &lt;em&gt;before&lt;/em&gt; applying the constraint. This seems basic, but it's a frequent cause of production failures. A safe ordering is sketched after this list.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Prematurely Dropping Old Data or Indices:&lt;/strong&gt; Convinced the migration is "done" after a few hours, engineers drop old columns, tables, or indices. If a hidden bug emerges days later, a rollback becomes a partial data restoration from backup (data loss) or a manual, complex data reconstruction task. Keep old structures around for &lt;em&gt;weeks&lt;/em&gt; or &lt;em&gt;months&lt;/em&gt; if possible, even if unused, until full confidence is achieved.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Inadequate Monitoring on the Old Path:&lt;/strong&gt; During dual-write, the focus often shifts entirely to the new path. If the old path's writes (which are critical for rollback) start failing due to unexpected application interactions or database load, and you don't monitor it, your safety net is silently compromised. Monitor both paths comprehensively, especially write success rates and latencies.&lt;/li&gt;
&lt;/ol&gt;
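
&lt;p&gt;For the first pitfall, a safe ordering on Postgres might look like this sketch (the connection string is a placeholder, and the table/column names come from the earlier example; uses &lt;code&gt;psycopg2&lt;/code&gt;):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import psycopg2

conn = psycopg2.connect("dbname=app")  # placeholder DSN
conn.autocommit = True
cur = conn.cursor()

# 1. Add the column as nullable: existing rows stay valid.
cur.execute("ALTER TABLE users ADD COLUMN user_preferences jsonb")

# 2. Backfill a default in small batches to avoid long-held row locks.
while True:
    cur.execute(
        """UPDATE users SET user_preferences = '{}'::jsonb
           WHERE id IN (SELECT id FROM users
                        WHERE user_preferences IS NULL LIMIT 1000)"""
    )
    if cur.rowcount == 0:
        break

# 3. Only now, with no NULLs left, is the constraint safe to apply.
cur.execute("ALTER TABLE users ALTER COLUMN user_preferences SET NOT NULL")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;On very large tables even step 3 is disruptive, since &lt;code&gt;SET NOT NULL&lt;/code&gt; scans the table under an exclusive lock; adding a &lt;code&gt;CHECK (user_preferences IS NOT NULL) NOT VALID&lt;/code&gt; constraint and validating it separately is the gentler variant.&lt;/p&gt;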

&lt;h2&gt;
  
  
  Interview Angle
&lt;/h2&gt;

&lt;p&gt;Interviewers love to probe into data migration because it exposes your understanding of trade-offs and production resilience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Question:&lt;/strong&gt; "You need to add a new &lt;code&gt;status&lt;/code&gt; column (enum type) to a critical &lt;code&gt;orders&lt;/code&gt; table that processes thousands of transactions per second. Describe a zero-downtime, zero-data-loss migration strategy and how you'd handle a rollback."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Strong Answer Breakdown:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Phase 1: Safe Schema Evolution.&lt;/strong&gt; Start by adding the new &lt;code&gt;status&lt;/code&gt; column as &lt;code&gt;NULLABLE&lt;/code&gt; and with no default. This ensures existing rows remain valid. Deploy this schema change &lt;em&gt;without&lt;/em&gt; application code changes.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Phase 2: Dual Write with Backfill.&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;  Deploy a new version of your application (v2) that, when writing or updating an order, writes to &lt;em&gt;both&lt;/em&gt; the old and new &lt;code&gt;status&lt;/code&gt; columns. For existing orders, backfill the &lt;code&gt;status&lt;/code&gt; column based on existing logic or a reasonable default value using an asynchronous, idempotent job.&lt;/li&gt;
&lt;li&gt;  Application v1 continues to operate as normal, reading/writing only the old columns.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Rollback Safety:&lt;/strong&gt; At this stage, if v2 has issues, you can roll back to v1. All critical data (including the old status representation) is preserved in the original format. The new &lt;code&gt;status&lt;/code&gt; column might become stale or inconsistent, but it doesn't impact v1.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;  &lt;strong&gt;Phase 3: Phased Read Switchover.&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;  Once backfill is complete and the dual-write period has passed without issues, deploy an updated v2 that reads the &lt;code&gt;status&lt;/code&gt; from the &lt;em&gt;new&lt;/em&gt; column first. If it's &lt;code&gt;NULL&lt;/code&gt; (indicating an un-migrated row or an old version), fall back to inferring status from the old logic. Continue dual-writing (see the read-path sketch after this answer).&lt;/li&gt;
&lt;li&gt;  Use feature flags to gradually roll out this read change to a small percentage of users, carefully monitoring for errors and data discrepancies.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;  &lt;strong&gt;Phase 4: Enforce Constraint and Cleanup.&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;  Once confident, add a &lt;code&gt;NOT NULL&lt;/code&gt; constraint to the &lt;code&gt;status&lt;/code&gt; column.&lt;/li&gt;
&lt;li&gt;  Finally, remove the old status logic and column, typically after a significant soak period (weeks).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;  &lt;strong&gt;Key Mitigations and Trade-offs:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Data Inconsistency:&lt;/strong&gt; Validate data written to the new column against the old. Use eventual consistency patterns.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Performance Overhead:&lt;/strong&gt; Dual writes add latency and database load. Monitor this closely.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Complexity:&lt;/strong&gt; More application code paths, more deployment steps. Mitigate with automated testing and clear operational runbooks.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Rollback:&lt;/strong&gt; Emphasize that the existence of the old, valid data and the ability for the old application version to function means you can always revert to a known good state without data loss.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
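
&lt;p&gt;A minimal sketch of that Phase 3 read path (the feature-flag client, row shape, and &lt;code&gt;infer_legacy_status&lt;/code&gt; helper are hypothetical):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def get_order_status(order, flags, infer_legacy_status):
    # Percentage rollout lives in the flag service, keyed per order
    # so a given order sees a stable code path.
    if flags.is_enabled("orders.read_new_status_column", key=order.id):
        if order.status is not None:
            return order.status
        # NULL means the row predates dual writes and the backfill
        # has not reached it yet: fall back to the old derivation.
    return infer_legacy_status(order)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;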

&lt;p&gt;Need help designing robust migration strategies or preparing for your next system design interview?&lt;/p&gt;

&lt;p&gt;Book a 1:1 session with me on Topmate to discuss your challenges and level up your skills.&lt;/p&gt;




&lt;h2&gt;
  
  
  Want to Go Deeper?
&lt;/h2&gt;

&lt;p&gt;I do 1:1 sessions on system design, backend architecture, and interview prep.&lt;br&gt;
If you're preparing for a Staff/Senior role or cracking FAANG rounds — &lt;a href="https://topmate.io/rishabh_pahwa" rel="noopener noreferrer"&gt;book a session here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>backendengineering</category>
      <category>systemdesign</category>
      <category>datamigration</category>
      <category>rollbackstrategy</category>
    </item>
    <item>
      <title>The Production Problem with Async Dual Writes</title>
      <dc:creator>rishabh pahwa</dc:creator>
      <pubDate>Wed, 13 May 2026 15:00:19 +0000</pubDate>
      <link>https://forem.com/rishabh_pahwa_1a2b93e60b0/the-production-problem-with-async-dual-writes-ao4</link>
      <guid>https://forem.com/rishabh_pahwa_1a2b93e60b0/the-production-problem-with-async-dual-writes-ao4</guid>
      <description>&lt;p&gt;Many "zero-downtime" data migration strategies involving dual writes promise seamless transitions, but often hide insidious data consistency traps. Without careful handling, you're not just moving data; you're silently corrupting or losing it, only to discover the issue months after cutover.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Production Problem with Async Dual Writes
&lt;/h2&gt;

&lt;p&gt;Imagine you're an engineer at a rapidly growing SaaS company. Your &lt;code&gt;users&lt;/code&gt; table needs to be sharded or migrated to a new database technology. To avoid downtime, you implement a dual-write strategy: all new writes go to both the old and new &lt;code&gt;users&lt;/code&gt; tables. Reads initially come from the old table, then eventually switch to the new one. This sounds solid.&lt;/p&gt;

&lt;p&gt;Now, picture this: A user updates their profile. Your application sends two write requests: one to &lt;code&gt;OldDB.users&lt;/code&gt; and one to &lt;code&gt;NewDB.users&lt;/code&gt;. The write to &lt;code&gt;OldDB&lt;/code&gt; succeeds. But the write to &lt;code&gt;NewDB&lt;/code&gt; fails due to a network timeout, a transient database hiccup, or a schema validation error specific to the new system. What does your application do? If it returns HTTP 200 to the client because the &lt;code&gt;OldDB&lt;/code&gt; write worked, you now have an inconsistency: the user's profile is updated in the old system but stale in the new. Over days or weeks, these small, non-atomic failures accumulate, leading to widespread data divergence. When you finally cut over to reading solely from &lt;code&gt;NewDB&lt;/code&gt;, users start seeing outdated profiles, missing orders, or incorrect balances. Your "zero-downtime" migration just became a "zero-consistency" disaster.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Expand-Contract Pattern and Dual Writes
&lt;/h2&gt;

&lt;p&gt;The Expand-Contract pattern is a common strategy for zero-downtime schema migrations. It involves phases:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Expand&lt;/strong&gt;: Modify your application to read from the old schema and write to both the old and new schemas.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Migrate Data&lt;/strong&gt;: Backfill historical data from the old schema to the new.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Validate&lt;/strong&gt;: Continuously compare data between old and new.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Contract&lt;/strong&gt;: Switch reads to the new schema, then remove the old schema and dual-write logic.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here's how the dual-write phase typically works, and where consistency issues arise:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;                  +-----------------------------------+
                  |            Application            |
                  |  (v1.1 - Dual-Write/Read Old)     |
                  +-----------------------------------+
                       |        ^         ^
                       | Write  | Read    | Write
                       v        |         |
      +---------------------+   |         |   +---------------------+
      | Old Database (v1.0) |&amp;lt;--+---------+--&amp;gt;| New Database (v1.1) |
      | (e.g., MySQL)       |                 | (e.g., PostgreSQL)  |
      +---------------------+                 +---------------------+
                                  ^
                                  | Backfill / Sync Job
                                  | (e.g., Debezium, custom scripts)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this setup:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Reads&lt;/strong&gt;: Go to the &lt;code&gt;Old Database&lt;/code&gt; (or read from both and merge, with old as authoritative).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Writes&lt;/strong&gt;: Go to &lt;em&gt;both&lt;/em&gt; &lt;code&gt;Old Database&lt;/code&gt; and &lt;code&gt;New Database&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Backfill&lt;/strong&gt;: A separate job continuously copies existing data from &lt;code&gt;Old&lt;/code&gt; to &lt;code&gt;New&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The fundamental challenge is that writing to two separate databases (or even two different tables in the same database) is not an atomic operation. Without a distributed transaction across both write operations, there's always a window where one succeeds and the other fails, leading to divergence.&lt;/p&gt;
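
&lt;p&gt;One common way to contain that failure window, sketched here with hypothetical database clients and a task queue: keep the old system authoritative, and turn a failed new-system write into an idempotent repair task instead of a silent divergence.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import logging

log = logging.getLogger("dual_write")

def update_user(user_id, fields, old_db, new_db, repair_queue):
    # The old database is authoritative: a failure here fails the
    # request, so the two systems never diverge in this direction.
    old_db.update("users", user_id, fields)

    # The new-database write is best effort. On failure, enqueue an
    # idempotent repair task rather than failing the caller.
    try:
        new_db.update("users", user_id, fields)
    except Exception:
        log.exception("NewDB write failed for user %s", user_id)
        # The task re-reads OldDB at repair time, so replaying it
        # late or more than once converges on the authoritative state.
        repair_queue.enqueue("resync_user", user_id=user_id)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This narrows the window rather than closing it: the process can still crash between the two writes, and the queue itself can fail. That residual gap is exactly why the reconciliation discussed next remains mandatory.&lt;/p&gt;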

&lt;h2&gt;
  
  
  How Stripe Maintains Sanity at Scale
&lt;/h2&gt;

&lt;p&gt;Stripe, processing billions in transactions, performs hundreds of schema changes monthly. Their approach to zero-downtime data migration heavily relies on dual writes but is backed by extensive reconciliation. When migrating critical financial data, they recognize that non-atomic dual writes are a reality.&lt;/p&gt;

&lt;p&gt;Instead of assuming perfect consistency, Stripe engineers build systems that detect and fix discrepancies. Their strategy often includes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Shadow Writes&lt;/strong&gt;: Before dual-writing, they might "shadow write" to the new schema. The new system receives a copy of write traffic, but these writes aren't considered authoritative and are often discarded. This allows testing the performance and correctness of the new schema under production load &lt;em&gt;without&lt;/em&gt; impacting the old system or risking data integrity.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Idempotency and Retries&lt;/strong&gt;: Application logic ensures that write operations are idempotent, meaning they can be safely retried. When a dual write occurs, if one database write fails, the application logs the failure and often retries later or enqueues it for asynchronous processing.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Continuous Reconciliation&lt;/strong&gt;: This is the most crucial part. After dual writes are enabled, Stripe runs continuous, automated reconciliation jobs. These jobs scan both the old and new databases, compare records based on a unique identifier, and identify discrepancies. If a difference is found (e.g., a record exists in &lt;code&gt;OldDB&lt;/code&gt; but not &lt;code&gt;NewDB&lt;/code&gt;, or attributes differ), the reconciliation job logs it, potentially attempts to fix it (e.g., by re-applying the change to &lt;code&gt;NewDB&lt;/code&gt;), or flags it for manual review. For example, a reconciliation job might compare 100 million &lt;code&gt;customer&lt;/code&gt; records daily, flagging any divergence beyond a 0.0001% threshold. This background process ensures eventual consistency and acts as a safety net against non-atomic dual-write failures (a minimal sketch appears below).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This rigorous validation and reconciliation process is what turns a risky dual-write strategy into a production-grade, zero-downtime migration.&lt;/p&gt;
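
&lt;p&gt;A minimal sketch of such a reconciliation pass (the database clients, metrics helper, &lt;code&gt;to_new_shape&lt;/code&gt; mapping, and repair task are hypothetical; production jobs typically hash, sample, or checkpoint rather than diff every field of every row):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def reconcile(old_db, new_db, to_new_shape, repair_queue, metrics):
    checked = 0
    diverged = 0
    for old_row in old_db.scan("users"):        # paginated full scan
        checked += 1
        expected = to_new_shape(old_row)        # old-to-new mapping
        actual = new_db.get("users", old_row["id"])
        if actual != expected:
            diverged += 1
            # Re-apply the authoritative state; the repair task is
            # idempotent, so double-flagging a row is harmless.
            repair_queue.enqueue("resync_user", user_id=old_row["id"])
    metrics.gauge("reconcile.divergence_ratio", diverged / max(checked, 1))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;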

&lt;h2&gt;
  
  
  Common Mistakes When Implementing Dual Writes
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Assuming Atomicity Across Databases&lt;/strong&gt;: Many engineers treat a dual-write operation (e.g., &lt;code&gt;db1.save()&lt;/code&gt; and &lt;code&gt;db2.save()&lt;/code&gt;) as a single atomic unit. It's not. If your application code just calls two database clients, success from one and failure from the other leads to data divergence. You need explicit error handling, retries, and compensation logic, or rely on eventual consistency with strong reconciliation.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Inadequate Read Strategy During Transition&lt;/strong&gt;: During the dual-write phase, how do you read?

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Read-Old&lt;/strong&gt;: Reading only from the old system is safer for consistency &lt;em&gt;during&lt;/em&gt; the transition, but means data written to the new system isn't immediately visible, and requires a hard cutover for reads.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Read-New-Fallback-Old&lt;/strong&gt;: Reading from the new, falling back to old if not found, can lead to inconsistencies if the new system is incomplete or subtly different.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Read-Both-Merge&lt;/strong&gt;: Reading from both and merging requires complex conflict resolution and can be slow. Most get this wrong by not clearly defining the source of truth for reads at each stage.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Neglecting Reconciliation and Observability&lt;/strong&gt;: Simply setting up dual writes and a backfill job isn't enough. Without robust monitoring to track dual-write success rates, latency for each write, and, critically, continuous data validation (reconciliation) between the old and new systems, you're flying blind. Silent data loss is guaranteed without it. Many engineers skip this crucial, complex step, leading to post-cutover data integrity nightmares.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Interview Angle: What Interviewers Ask
&lt;/h2&gt;

&lt;p&gt;Interviewers will probe your understanding beyond the basic concept. Expect questions like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;"How do you ensure data consistency during a dual-write phase if one database write succeeds and the other fails?"&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Strong Answer&lt;/strong&gt;: "Since distributed transactions are rarely feasible or desirable, I wouldn't assume atomicity. Instead, I'd implement a compensation mechanism. For writes, I'd typically wrap the dual-write logic in a transaction &lt;em&gt;within the application&lt;/em&gt; or use an idempotent message queue. The application would first publish the data change to a reliable queue (e.g., Kafka). A consumer would then attempt to write to both databases. If one write fails, the message could be retried with backoff. If persistent failures occur, it lands in a dead-letter queue for manual intervention or triggers an alert. Ultimately, even with retries, you need a continuous, asynchronous reconciliation job that scans both databases for discrepancies and fixes them, ensuring eventual consistency. This shifts the complexity from transactional guarantees to robust error handling and eventual repair."&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;"When would you use a 'shadow write' versus a 'dual write'?"&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Strong Answer&lt;/strong&gt;: "Shadow writes are primarily for &lt;em&gt;testing&lt;/em&gt; the new system with production-like load and data, without letting it impact the live system. You write to both the old authoritative system and the new system, but the new system's writes are often ignored or merely logged for validation. This is low-risk. Dual writes, however, mean both systems are authoritative &lt;em&gt;for writes&lt;/em&gt; during a transitional period, with the intent to eventually cut over reads to the new system. It's a higher-risk strategy because data consistency is paramount. I'd use shadow writes for initial performance testing or schema validation of the new system, and dual writes when I'm confident in the new system's write path and am preparing for a full cutover, backed by strong reconciliation."&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Moving critical data without disruption is hard. Do it right, and your systems evolve gracefully. Cut corners, and you'll spend weeks on data recovery.&lt;/p&gt;




&lt;p&gt;Need to refine your system design skills for your next interview? Book a 1:1 session with me to discuss real-world system challenges and effective design patterns.&lt;/p&gt;




&lt;h2&gt;
  
  
  Want to Go Deeper?
&lt;/h2&gt;

&lt;p&gt;I do 1:1 sessions on system design, backend architecture, and interview prep.&lt;br&gt;
If you're preparing for a Staff/Senior role or cracking FAANG rounds — &lt;a href="https://topmate.io/rishabh_pahwa" rel="noopener noreferrer"&gt;book a session here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>systemdesign</category>
      <category>databasemigration</category>
      <category>distributedsystems</category>
      <category>dataconsistency</category>
    </item>
    <item>
      <title>Your "Cache Invalidation is Hard" Answer Misses the Real Horror</title>
      <dc:creator>rishabh pahwa</dc:creator>
      <pubDate>Sun, 10 May 2026 08:42:41 +0000</pubDate>
      <link>https://forem.com/rishabh_pahwa_1a2b93e60b0/your-cache-invalidation-is-hard-answer-misses-the-real-horror-5em7</link>
      <guid>https://forem.com/rishabh_pahwa_1a2b93e60b0/your-cache-invalidation-is-hard-answer-misses-the-real-horror-5em7</guid>
      <description>&lt;h2&gt;
  
  
  Your "Cache Invalidation is Hard" Answer Misses the Real Horror
&lt;/h2&gt;

&lt;p&gt;Most engineers parrot "cache invalidation is hard" as a standard interview response, but few understand &lt;em&gt;why&lt;/em&gt; it's hard or the real-world horrors it introduces. It's not just about stale data; it's about financial losses, broken business logic, and cascading failures when eventual consistency hits critical paths.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Production Nightmare: Financial Impact of Stale Data
&lt;/h2&gt;

&lt;p&gt;Imagine a ride-sharing platform like Uber. A user updates their payment method because the old card expired. The update is written to the database successfully. However, due to an aggressive cache TTL or a failed invalidation, the dispatch service still sees the &lt;em&gt;old&lt;/em&gt;, expired card for the next 5 minutes. The user tries to book a ride, it fails. They try again, it fails. Frustrated, they switch to a competitor.&lt;/p&gt;

&lt;p&gt;This isn't just "stale data"; it's a direct loss of revenue, a degraded user experience, and a hit to brand loyalty. In banking, showing an incorrect account balance, even for seconds, can trigger compliance violations and massive reputational damage. In e-commerce, a product showing "in stock" when it's sold out leads to cancelled orders and angry customers. The problem isn't theoretical; it's financial and operational.&lt;/p&gt;

&lt;h2&gt;
  
  
  Beyond TTLs: Active Invalidation in Distributed Systems
&lt;/h2&gt;

&lt;p&gt;The naive approach to cache invalidation often relies on Time-To-Live (TTL) or a simple write-through/write-around policy. While these have their place, critical systems demand more robust strategies that aim for &lt;em&gt;stronger consistency&lt;/em&gt; than basic eventual consistency can provide, especially when data is updated from multiple sources.&lt;/p&gt;

&lt;p&gt;Consider an active invalidation strategy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;+------------+       +------------+       +------------+       +-------------+
|    User    |       |  Frontend  |       |  Backend   |       |   Database  |
| (API Client)|       |    Service |       |    Service |       |  (Postgres) |
+------------+       +------------+       +------------+       +-------------+
      |                   |                      |                      |
      | 1. Update Profile |                      |                      |
      +------------------&amp;gt;|                      |                      |
      |                   | 2. Call Update API   |                      |
      |                   +---------------------&amp;gt;|                      |
      |                   |                      | 3. Update DB         |
      |                   |                      +---------------------&amp;gt;|
      |                   |                      | (DB transaction ACK) |
      |                   |                      |&amp;lt;---------------------+
      |                   |                      |                      |
      |                   |                      | 4. Publish Invalidation Event to Message Bus
      |                   |                      +---------------------&amp;gt;+
      |                   |                      | (e.g., Kafka)        |
      |                   |                      |                      |
      |                   |                      |                      |
      |                   |                      |                      |
      |                   |                      |                      |
      |                   |                      |                      |
      |                   |                      |                      |
+------------+       +------------+       +------------+       +-------------+
|  Cache     |       | Invalidator|       |  Message   |
| (Redis)    |       |  Service   |       |    Bus     |
+------------+       +------------+       +------------+
      ^                   ^                      ^
      |                   | 5. Consume Invalidation Event
      |                   |&amp;lt;---------------------+
      |                   |                      |
      | 6. Invalidate Key |                      |
      |&amp;lt;------------------+                      |
      | (Cache ACK)       |                      |
      |                   |                      |
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this flow, after the database is updated (step 3), an invalidation event is &lt;em&gt;published&lt;/em&gt; to a message bus (step 4). An &lt;code&gt;Invalidator Service&lt;/code&gt; &lt;em&gt;consumes&lt;/em&gt; this event (step 5) and then explicitly &lt;em&gt;deletes&lt;/em&gt; or &lt;em&gt;updates&lt;/em&gt; the corresponding key in the cache (step 6). This decouples the write path from cache invalidation, improving write latency, but introduces eventual consistency. The critical aspect is making this event propagation and consumption &lt;em&gt;reliable&lt;/em&gt; and &lt;em&gt;fast&lt;/em&gt;.&lt;/p&gt;
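
&lt;p&gt;A minimal sketch of the &lt;code&gt;Invalidator Service&lt;/code&gt; loop (the topic name and event shape are assumptions; uses &lt;code&gt;kafka-python&lt;/code&gt; and &lt;code&gt;redis-py&lt;/code&gt;):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json

import redis
from kafka import KafkaConsumer

cache = redis.Redis(host="localhost", port=6379)
consumer = KafkaConsumer(
    "cache-invalidation",              # assumed topic name
    group_id="invalidator",
    enable_auto_commit=False,          # commit only after the delete
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for message in consumer:
    key = message.value["cache_key"]   # e.g. "user:123"
    cache.delete(key)                  # idempotent: safe to replay
    consumer.commit()                  # gives at-least-once processing
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Committing the offset only after the delete means a crash in between simply replays a harmless delete. This is also why deleting the key, rather than writing a presumed-fresh value, is the safer invalidation primitive.&lt;/p&gt;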

&lt;h2&gt;
  
  
  Meta's Approach to Consistent Caching at Scale
&lt;/h2&gt;

&lt;p&gt;At companies like Meta (Facebook), operating some of the world's largest caches, simple TTLs aren't enough. They can't afford to show stale profile data, friend lists, or post engagement for minutes. Their "Cache Made Consistent" initiatives aim to solve the very race conditions and inconsistencies that plague distributed caching.&lt;/p&gt;

&lt;p&gt;They've moved beyond basic invalidation to sophisticated systems that ensure stronger consistency guarantees. One approach involves using transaction logs (like binlogs in MySQL) from the database to drive invalidation. A service tails these logs, filters relevant updates, and publishes specific invalidation messages to a distributed system. Cache nodes then subscribe to these messages. This pushes the consistency window from minutes (TTL) down to milliseconds, closely following database writes.&lt;/p&gt;

&lt;p&gt;This system is built for extreme scale: potentially hundreds of thousands of updates per second across petabytes of data. It's not just about sending an &lt;code&gt;invalidate(key)&lt;/code&gt; command; it's about guaranteeing delivery, handling partial failures (what if a cache node is down?), and ensuring that &lt;em&gt;all&lt;/em&gt; relevant dependent caches (e.g., user profile, friend count, feed items) are consistently updated or invalidated.&lt;/p&gt;

&lt;h2&gt;
  
  
  Common Mistakes Engineers Make
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Over-relying on TTL for critical data:&lt;/strong&gt; While great for performance, a 5-minute TTL on a user's payment method or an item's stock count is a ticking time bomb. It trades consistency for availability in places where consistency is paramount. For high-stakes data, TTLs should be very short (seconds) and coupled with active invalidation, or the cache should be bypassed entirely for reads requiring strong consistency.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Ignoring cache dependency graphs:&lt;/strong&gt; Invalidating a single key like &lt;code&gt;user:123&lt;/code&gt; is often insufficient. What about other cached entities that &lt;em&gt;depend&lt;/em&gt; on &lt;code&gt;user:123&lt;/code&gt;'s data, such as &lt;code&gt;user_profile_page:123&lt;/code&gt; or &lt;code&gt;feed_for_user:123&lt;/code&gt;? If you don't invalidate the entire dependency tree, you'll still show stale data. Building and maintaining this dependency graph is complex and often overlooked until production issues arise (a fan-out sketch follows this list).&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Not building resilient invalidation pipelines:&lt;/strong&gt; Active invalidation introduces its own distributed system problems. What happens if the message bus is down? What if an invalidation message is lost? What if a cache node fails to receive an invalidation? Without retries, dead-letter queues, and eventual reconciliation mechanisms, your cache will drift indefinitely. This is where &lt;code&gt;cache invalidation is hard&lt;/code&gt; actually holds true: building a &lt;em&gt;reliable&lt;/em&gt; invalidation mechanism is the genuinely hard part.&lt;/li&gt;
&lt;/ol&gt;
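
&lt;p&gt;For the dependency-graph pitfall, a fan-out sketch over a static dependency map (the map and key templates are hypothetical; large systems derive dependencies from page templates or record them at cache-write time):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import redis

cache = redis.Redis(host="localhost", port=6379)

# Keys whose cached values embed user data, keyed by entity type.
DEPENDENTS = {
    "user": ["user_profile_page:{id}", "feed_for_user:{id}"],
}

def invalidate_user(user_id):
    keys = [f"user:{user_id}"]
    keys.extend(t.format(id=user_id) for t in DEPENDENTS["user"])
    # One round trip; DEL of a missing key is a no-op, so the
    # operation stays idempotent under retries.
    cache.delete(*keys)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;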

&lt;h2&gt;
  
  
  The Interview Angle: Beyond the Buzzwords
&lt;/h2&gt;

&lt;p&gt;When an interviewer asks about cache invalidation, they're looking for more than "it's hard, use TTL." They want to understand your appreciation for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Consistency models and trade-offs:&lt;/strong&gt; When would you tolerate eventual consistency? When do you need strong consistency, and how would you achieve it with a cache? (e.g., using a write-through cache with a transactional database, or bypassing the cache for critical reads).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Failure modes:&lt;/strong&gt; What happens if invalidation fails? How do you detect it? How do you recover? Strong answers discuss monitoring cache hit ratios, consistency checks between cache and DB, and fallback mechanisms like circuit breakers.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Complexity at scale:&lt;/strong&gt; How do you invalidate data across hundreds or thousands of cache nodes? How do you handle fan-out invalidation for dependent data? Think about event-driven architectures, distributed transactions (though rare for caches), and sophisticated messaging patterns.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For instance, if asked, "How would you design a caching system for a bank account balance?", a strong answer would emphasize &lt;em&gt;strong consistency&lt;/em&gt;. You might propose a very short TTL (e.g., 1 second) coupled with immediate, transactional invalidation for updates, or even suggest &lt;em&gt;not caching&lt;/em&gt; the balance at all for reads that require absolute accuracy, fetching directly from the database to avoid any risk of stale data. The cost of an inconsistent balance outweighs the latency benefit of a cache.&lt;/p&gt;

&lt;h2&gt;
  
  
  Need to level up your system design skills?
&lt;/h2&gt;

&lt;p&gt;Book a 1:1 session with me to deep dive into real-world system challenges and ace your next interview. Let's build your expertise together.&lt;/p&gt;




&lt;h2&gt;
  
  
  Want to Go Deeper?
&lt;/h2&gt;

&lt;p&gt;I do 1:1 sessions on system design, backend architecture, and interview prep.&lt;br&gt;
If you're preparing for a Staff/Senior role or cracking FAANG rounds — &lt;a href="https://topmate.io/rishabh_pahwa" rel="noopener noreferrer"&gt;book a session here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>systemdesign</category>
      <category>caching</category>
      <category>distributedsystems</category>
      <category>backendengineering</category>
    </item>
  </channel>
</rss>
