<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: diata</title>
    <description>The latest articles on Forem by diata (@diata0210).</description>
    <link>https://forem.com/diata0210</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3895174%2Fab1b3ff4-7a5b-47d2-9b28-6e2832259ed9.jpg</url>
      <title>Forem: diata</title>
      <link>https://forem.com/diata0210</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/diata0210"/>
    <language>en</language>
    <item>
      <title>The 24-hour test: if you couldn't write it by hand tomorrow, you didn't write it today.</title>
      <dc:creator>diata</dc:creator>
      <pubDate>Sat, 25 Apr 2026 08:07:08 +0000</pubDate>
      <link>https://forem.com/diata0210/the-24-hour-test-if-you-couldnt-write-it-by-hand-tomorrow-you-didnt-write-it-today-4o88</link>
      <guid>https://forem.com/diata0210/the-24-hour-test-if-you-couldnt-write-it-by-hand-tomorrow-you-didnt-write-it-today-4o88</guid>
      <description>&lt;p&gt;Hello, I'm Tuan.&lt;/p&gt;

&lt;p&gt;I have one rule for using AI at work, and it's the only one I trust:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Every line of code I shipped today, I should be able to write again tomorrow with no AI. If I can't, I didn't write it.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That's the test. Ten seconds, end of day. I don't always pass — there's a story below where I clearly didn't. But running the test is the difference between using AI and being used by it.&lt;/p&gt;

&lt;p&gt;(If you read &lt;a href="https://dev.to/diata0210/vibe-coding-is-producing-developers-who-cant-debug-were-going-to-pay-for-this-2ifh"&gt;yesterday's rant&lt;/a&gt; about vibe coding and were about to accuse me of hypocrisy — this post is the receipt. I use AI every day. Here's the line.)&lt;/p&gt;

&lt;h2&gt;
  
  
  The line: AI does typing, I do thinking
&lt;/h2&gt;

&lt;p&gt;The cleanest framing I've found:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;AI is a translator, not a tutor. Translators turn what you already mean into syntax. Tutors teach you what to mean. The first is safe. The second is how you stop being an engineer.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Translation is delegable. The thinking that produces &lt;em&gt;what to translate&lt;/em&gt; is not.&lt;/p&gt;

&lt;p&gt;Sounds obvious. In practice, the line is invisible until you watch yourself cross it.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I refuse to delegate
&lt;/h2&gt;

&lt;p&gt;Three things, full stop.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Debugging.&lt;/strong&gt; When something breaks, the reasoning chain — &lt;em&gt;"this fails because X, so checking Y will tell me if I'm right"&lt;/em&gt; — never leaves my head. Outsource it once and you save twenty minutes. Outsource it for a year and you become the candidate from yesterday's post.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architecture decisions.&lt;/strong&gt; AI doesn't know my team's oncall load, data gravity, or the political fact that the platform team will block any new service for six months. I'll use it to &lt;em&gt;enumerate&lt;/em&gt; options I missed. I will not use it to &lt;em&gt;pick&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Anything in a domain where I don't already have a model.&lt;/strong&gt; If I can't &lt;em&gt;check&lt;/em&gt; the answer, I have no business &lt;em&gt;accepting&lt;/em&gt; it. Build the model, then delegate. Never the other way around.&lt;/p&gt;

&lt;h2&gt;
  
  
  The time I broke my own rule
&lt;/h2&gt;

&lt;p&gt;I'm going to tell on myself, because the post is dishonest without it.&lt;/p&gt;

&lt;p&gt;Six months ago I was tired. End of a long sprint, last ticket of the day. A small feature needed a transaction wrapping two writes. I knew exactly what I wanted, so I let Cursor write the wrapper.&lt;/p&gt;

&lt;p&gt;It produced a clean-looking &lt;code&gt;BEGIN&lt;/code&gt;/&lt;code&gt;COMMIT&lt;/code&gt; block: lock the user row, lock the wallet row, do the writes, commit. I read it. Looked right. The integration test passed against a single-threaded fixture. Shipped.&lt;/p&gt;

&lt;p&gt;Two days later, deadlocks started showing up under load:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ERROR: deadlock detected
DETAIL: Process 14823 waits for ShareLock on transaction 9182733;
        blocked by process 12044.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The cause: another existing code path acquired those same two rows in the &lt;em&gt;opposite&lt;/em&gt; order — wallet first, user second. Two hot rows, two transactions racing, classic crossed-lock deadlock. The integration test never caught it because there was only one writer in the fixture, and concurrency tests had been sitting on the team backlog where they'd been comfortably ignored for a year.&lt;/p&gt;

&lt;p&gt;The fix took fifteen minutes — reorder both code paths to acquire locks in the same canonical order. The bug shipped because I'd never traced lock order &lt;em&gt;through the existing code&lt;/em&gt; before adding mine. AI couldn't have known about the other path; my own model would have, if I'd held it in my head for one more minute. I had been holding it ten minutes earlier. Then I got tired and let go.&lt;/p&gt;
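&lt;p&gt;For the record, the shape of that fix. This is a sketch with hypothetical table and column names, not our actual schema; the point is only that every code path takes the locks in the same order:&lt;/p&gt;

```sql
-- Canonical lock order: users before wallets, in every code path.
-- Two transactions that both follow this order cannot cross-deadlock.
BEGIN;
SELECT 1 FROM users   WHERE id = :user_id      FOR UPDATE;  -- always first
SELECT 1 FROM wallets WHERE user_id = :user_id FOR UPDATE;  -- always second
UPDATE wallets SET balance    = balance - :amount WHERE user_id = :user_id;
UPDATE users   SET updated_at = now()             WHERE id      = :user_id;
COMMIT;
```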

&lt;p&gt;The lesson wasn't "AI was wrong." AI was correct, in isolation.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;Correctness in isolation is not correctness.&lt;/em&gt;&lt;/strong&gt; &lt;em&gt;The only thing that catches that gap is a human who is already holding the system in their head.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I now have one extra rule: when I'm tired, I close the AI tab. Tired-Tuan accepts suggestions Awake-Tuan would have caught. The tool's value inverts when judgment is degraded.&lt;/p&gt;

&lt;h2&gt;
  
  
  A real example, done correctly
&lt;/h2&gt;

&lt;p&gt;Different incident, different week. Postgres query running hot — same flavor of problem as &lt;a href="https://dev.to/diata0210/an-index-made-our-query-faster-it-slowly-suffocated-our-database-2emn"&gt;this earlier post&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The actual sequence, with AI use marked at each step.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1 — Hypothesis (no AI).&lt;/strong&gt; Query had been fine for eighteen months. What changed? A column added two weeks earlier, mostly null in practice. A new ingestion job had tripled write traffic to the table. My guess: the planner was working from stale row estimates. Cheap to test.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2 — Verify (mostly me).&lt;/strong&gt; Ran &lt;code&gt;EXPLAIN (ANALYZE, BUFFERS)&lt;/code&gt;. I had to look up the exact &lt;code&gt;BUFFERS&lt;/code&gt; flag — five seconds in Claude. The plan came back with a &lt;code&gt;Seq Scan&lt;/code&gt; where I expected an index scan. Estimated rows: 200. Actual rows: 240,000. There it was.&lt;/p&gt;
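&lt;p&gt;If you haven't hit this failure mode before, here is a hedged sketch of the check, with an invented table and column standing in for ours. The tell is the gap between the planner's estimate and reality:&lt;/p&gt;

```sql
-- Hypothetical table/column. The smoking gun is a plan line like:
--   Seq Scan on events (cost=... rows=200 ...) (actual ... rows=240000 ...)
-- where the estimate is three orders of magnitude off the actual count.
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM events WHERE new_col IS NOT NULL;
```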

&lt;p&gt;&lt;strong&gt;Step 3 — Read the plan (AI as second pair of eyes).&lt;/strong&gt; I read the plan myself first and identified the row-estimate problem. Then I pasted it into Claude and asked &lt;em&gt;"what would you flag here, in order of severity?"&lt;/em&gt; It listed five things. I had caught three. Of the two I'd skimmed past, one was a minor memory spill that didn't matter; one was a real signal I'd missed — an unrelated index wasn't being used. I filed it as a follow-up.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4 — Decide the fix (no AI).&lt;/strong&gt; Options: &lt;code&gt;ANALYZE&lt;/code&gt; the table, raise the column's statistics target (the per-column override of &lt;code&gt;default_statistics_target&lt;/code&gt;), or restructure the query. I picked &lt;code&gt;ANALYZE&lt;/code&gt; plus the stats target bump, because the ingestion job would keep skewing things. Architecture choice. Mine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5 — Write the migration (AI typed).&lt;/strong&gt; I described what I wanted in a sentence. Claude produced the migration. I read every line, ran it against a copy of prod, shipped.&lt;/p&gt;
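&lt;p&gt;For anyone who hasn't done this particular fix, the migration amounts to something like the following (table and column names are placeholders; 500 is a judgment call, not a magic number):&lt;/p&gt;

```sql
-- Per-column override of default_statistics_target: sample this column
-- more finely so the planner's row estimates keep tracking reality.
ALTER TABLE events ALTER COLUMN new_col SET STATISTICS 500;

-- Refresh the statistics now instead of waiting for autovacuum.
ANALYZE events;
```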

&lt;p&gt;Total time: maybe 35% less than it would have taken by hand. Skill atrophy: zero, because every reasoning step was mine. AI was a typist and a second pair of eyes. It never held the model.&lt;/p&gt;

&lt;p&gt;If your workflow has the model doing the reasoning and you doing the accepting, the ratio is inverted, and the 24-hour test will quietly fail.&lt;/p&gt;

&lt;h2&gt;
  
  
  The part where I lose half the seniors reading this
&lt;/h2&gt;

&lt;p&gt;Here's the line I don't think my generation wants to admit:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Most senior engineers who adopted Cursor or Copilot more than a year ago have atrophied. They just haven't been tested yet.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The reason it's invisible is that AI is a multiplier on existing skill, and a multiplier feels like the skill itself. You finish the task faster. You feel sharp. You ship. The seniors who atrophied let AI become a tutor — accepting suggestions whose correctness they couldn't independently verify. The ones who didn't kept it strictly a translator. Same tool, opposite outcomes, and from the outside they look identical for about eighteen months.&lt;/p&gt;

&lt;p&gt;This isn't speculation. The aviation industry has been studying it for forty years under the name &lt;em&gt;"automation-induced skill decay."&lt;/em&gt; Pilots who fly long-haul on autopilot lose their manual flying skills, measurably and predictably, even when they remain confident in their abilities. The FAA was concerned enough to issue Safety Alert SAFO 13002 explicitly recommending that operators give pilots more opportunities to hand-fly the aircraft, because pilots can no longer self-assess when their skill has slipped. The same mechanism applies to anyone whose work is mediated through automation. We are not special.&lt;/p&gt;

&lt;p&gt;I run a manual day on myself once a month: autocomplete off, no chat, full day. The first time I tried it, I was shocked at how rusty the first hour felt. Not on typing — on &lt;em&gt;thinking&lt;/em&gt;. I'd been narrating my process to AI for so long I'd forgotten how to narrate it to myself.&lt;/p&gt;

&lt;p&gt;I got it back, fast. But I would not have noticed it slipping if I hadn't tested for it.&lt;/p&gt;

&lt;p&gt;Yesterday's post was about juniors who never built the model. Today's is about seniors who had it and let it slip. If you're reading this and feel defensive — you already know the answer to the test.&lt;/p&gt;

&lt;h2&gt;
  
  
  The whole thing in three lines
&lt;/h2&gt;

&lt;p&gt;If you skimmed this far, here it is condensed:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;AI does the typing. You do the thinking.&lt;/strong&gt; Translator, not tutor.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Refuse to delegate three things&lt;/strong&gt;: debugging, architecture, anything you don't yet have a model for.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Run the 24-hour test at the end of every day.&lt;/strong&gt; And once a month, work a full day without the tool.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That's the whole post. The rest is supporting evidence.&lt;/p&gt;

&lt;h2&gt;
  
  
  The consequence
&lt;/h2&gt;

&lt;p&gt;The day the tool is down, expensive, blocked at your client site, or — most often — confidently wrong about something that ships and breaks at 3 AM, you find out who you actually are as an engineer.&lt;/p&gt;

&lt;p&gt;Skill that exists only inside the augmentation evaporates with the augmentation. Skill that exists outside of it doesn't.&lt;/p&gt;

&lt;p&gt;Pick which one you want to be.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This is part 2 of a thread that started &lt;a href="https://dev.to/diata0210/vibe-coding-is-producing-developers-who-cant-debug-were-going-to-pay-for-this-2ifh"&gt;here&lt;/a&gt;. If you're a senior who thinks the atrophy point doesn't apply to you — that's exactly when the test matters most. Try it before you reply.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;I write about backend, production incidents, and the unglamorous parts of being a software engineer. Follow if that's your kind of thing.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>discuss</category>
      <category>career</category>
    </item>
    <item>
      <title>Vibe coding is producing developers who can't debug. We're going to pay for this.</title>
      <dc:creator>diata</dc:creator>
      <pubDate>Sat, 25 Apr 2026 04:38:31 +0000</pubDate>
      <link>https://forem.com/diata0210/vibe-coding-is-producing-developers-who-cant-debug-were-going-to-pay-for-this-2ifh</link>
      <guid>https://forem.com/diata0210/vibe-coding-is-producing-developers-who-cant-debug-were-going-to-pay-for-this-2ifh</guid>
      <description>&lt;p&gt;Hello, I'm Tuan.&lt;/p&gt;

&lt;p&gt;I've been doing technical interviews for backend roles recently. The pattern across the last few rounds genuinely scared me — and I don't think we, as an industry, are talking about it honestly.&lt;/p&gt;

&lt;p&gt;Most of the candidates I sat with could not debug.&lt;/p&gt;

&lt;p&gt;Not "struggled to debug." Not "took longer than I expected." Could not, in any meaningful sense, form a hypothesis about a broken system and test it. The moment something didn't work, the same reflex kicked in: paste the error into Cursor, paste the file, wait, paste again.&lt;/p&gt;

&lt;p&gt;I'm not writing this to gatekeep. I'm writing this because something is breaking in our profession, quietly, and the seniors who notice it are mostly staying quiet because saying it out loud sounds like an old man yelling at clouds.&lt;/p&gt;

&lt;p&gt;Fine. I'll be the old man.&lt;/p&gt;

&lt;h2&gt;
  
  
  The interview that made me start writing this
&lt;/h2&gt;

&lt;p&gt;Strong resume. A few years of experience. Confident on system design, fluent on Redis caching, queue patterns, the usual.&lt;/p&gt;

&lt;p&gt;I gave him a small task: a Node.js service throwing 500s on a specific input. Logs included. He could use any tool, including AI.&lt;/p&gt;

&lt;p&gt;He opened Cursor and pasted the stack trace. Then the source file. Then, when the suggestion didn't fix it, the new error. Then the next error.&lt;/p&gt;

&lt;p&gt;Forty minutes in, he had four open tabs of AI suggestions and no working theory of what was actually wrong.&lt;/p&gt;

&lt;p&gt;The bug, when I finally walked him through it, was a hallucinated method. Three weeks earlier, his AI assistant had suggested &lt;code&gt;.findUserByEmailOrThrow()&lt;/code&gt; — a method that does not exist in our ORM. The code shipped because the test mocked the entire data layer and returned truthy. In production, the method simply wasn't there: the call threw a &lt;code&gt;TypeError&lt;/code&gt;, and the service crashed on a specific edge case nobody had hit yet.&lt;/p&gt;

&lt;p&gt;He had accepted that line without reading it. He could not have caught it, because he had no model of what methods his ORM actually exposed.&lt;/p&gt;

&lt;p&gt;I'm not telling this story to mock him. He was smart, articulate, and probably understood distributed systems better than I did at his age.&lt;/p&gt;

&lt;p&gt;He just had no internal model of how the code in front of him actually executed.&lt;/p&gt;

&lt;h2&gt;
  
  
  This wasn't an outlier
&lt;/h2&gt;

&lt;p&gt;I assumed it was. It wasn't.&lt;/p&gt;

&lt;p&gt;Across the recent interviews, the same patterns kept showing up:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Most candidates reached for an AI tool within the first minute of seeing an error, before forming a single hypothesis of their own.&lt;/li&gt;
&lt;li&gt;A majority could not walk me through what their own committed code did, line by line, when asked.&lt;/li&gt;
&lt;li&gt;Several had shipped code containing functions or fields that don't exist in the libraries they claimed to use — hallucinations that survived because tests mocked around them.&lt;/li&gt;
&lt;li&gt;A non-trivial number told me, unprompted, that they "don't really debug anymore."&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These were not bootcamp graduates. These were mid-level engineers from companies you've heard of.&lt;/p&gt;

&lt;p&gt;And it isn't just my interview room. METR's 2025 randomized study on experienced open-source developers found something nobody wants to repeat at standup: developers using AI tools &lt;em&gt;felt&lt;/em&gt; 20% faster, and were measured 19% &lt;em&gt;slower&lt;/em&gt;. GitClear's 2024 analysis of millions of commits found that the rate of "churned code" — lines added and removed within two weeks — has roughly doubled since Copilot went mainstream. We are shipping more code that we then immediately rip back out.&lt;/p&gt;

&lt;p&gt;Something has shifted. And the people best positioned to call it out — senior engineers — are mostly not, because the same tool is making &lt;em&gt;them&lt;/em&gt; faster, and it's hard to criticize what's working for you.&lt;/p&gt;

&lt;h2&gt;
  
  
  What vibe coding actually does to your brain
&lt;/h2&gt;

&lt;p&gt;Here's the part nobody on Twitter wants to admit.&lt;/p&gt;

&lt;p&gt;When you debug a problem yourself — really debug it, with print statements and bad guesses and dead ends — you're not just fixing a bug. You're building a mental model of the system. Every wrong hypothesis you eliminate teaches you something about how this code behaves under pressure.&lt;/p&gt;

&lt;p&gt;That model is the entire job.&lt;/p&gt;

&lt;p&gt;This isn't folk wisdom. Cognitive science has had a name for it for decades: &lt;strong&gt;desirable difficulty&lt;/strong&gt;. Robert Bjork's research showed that learning sticks in proportion to the friction you experience while doing the task — not in spite of it. Make the practice too easy and the skill never consolidates. The struggle isn't a tax on learning. It &lt;em&gt;is&lt;/em&gt; the learning.&lt;/p&gt;

&lt;p&gt;Writing code is the easy part. AI is great at the easy part. But the model — the intuition for &lt;em&gt;where the bug probably is&lt;/em&gt;, the smell that says &lt;em&gt;this looks fine but it isn't&lt;/em&gt; — only forms when you struggle.&lt;/p&gt;

&lt;p&gt;When you outsource the struggle, you outsource the model.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;AI doesn't make you a worse developer. It makes you a developer who never becomes a better one.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It's like learning chess by only playing with the engine's evaluation bar visible. You'll know which moves are good. You will never know why. The day the bar disappears, you are not a chess player. You are someone who used to be near a chess engine.&lt;/p&gt;

&lt;h2&gt;
  
  
  The strongest counters, addressed honestly
&lt;/h2&gt;

&lt;p&gt;There are two arguments against everything I just wrote that I take seriously. I want to deal with both, because if I don't, the comments will — and probably less charitably.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Counter 1: "Juniors will learn faster, not slower. AI lets them ship more, hit more edge cases, see more of the system."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I want to believe it. I genuinely do.&lt;/p&gt;

&lt;p&gt;I'm just not seeing it in the people walking into the interview room, and I'm not seeing it in the data either. Volume of code shipped is not the same as depth of understanding, and "shipping more" only teaches you something if you have the model to interpret what you shipped. Without the model, more output is just more noise — which is exactly what GitClear's churn numbers look like.&lt;/p&gt;

&lt;p&gt;Maybe I'm wrong. Maybe in three years the data flips and juniors reach senior-level intuition faster than ever. I'd love to write the apology post.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Counter 2: "Calculators didn't kill math. Compilers didn't kill assembly devs. IDEs didn't kill C programmers. Every abstraction triggers this panic and the industry adapts. You're being a boomer."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is the argument I respect most, and it's also the one that's wrong in a specific, important way.&lt;/p&gt;

&lt;p&gt;Every previous abstraction in software was &lt;strong&gt;deterministic&lt;/strong&gt;. A calculator that returns the wrong answer for &lt;code&gt;2 + 2&lt;/code&gt; gets recalled. A compiler that miscompiles is a P0 bug that ships a patch the same week. An IDE that auto-completes a method that doesn't exist is broken software. The contract was: &lt;em&gt;the abstraction is correct; you can trust the output and focus on the layer above.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;AI is the first abstraction in our field whose output can be &lt;strong&gt;confidently, plausibly wrong&lt;/strong&gt;, by design, and we ship it anyway. There is no recall. There is no patch. The "bug" is the entire premise.&lt;/p&gt;

&lt;p&gt;That changes what skill is required of the user. With a compiler, you didn't need to verify its output line by line — that would defeat the point. With AI, verifying the output line by line is &lt;em&gt;the only thing keeping you from shipping nonsense&lt;/em&gt;. And verification requires the exact skill — reading code, building a model, smelling wrongness — that vibe coding skips.&lt;/p&gt;

&lt;p&gt;Calculators didn't require a numeracy detector. AI requires a bullshit detector. You can only build that detector by being wrong, alone, many times, before you ever touch the tool.&lt;/p&gt;

&lt;p&gt;This is not the same transition. It looks the same from the outside. Underneath, the contract has flipped.&lt;/p&gt;

&lt;h2&gt;
  
  
  The skill that's quietly disappearing
&lt;/h2&gt;

&lt;p&gt;The skill is not "writing code." The skill is &lt;strong&gt;forming a hypothesis about a system you don't fully understand and testing it cheaply.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is what senior engineers do all day. It's why we get paid. It's also the thing AI cannot teach you, because you can only learn it by being wrong in public, repeatedly, with consequences.&lt;/p&gt;

&lt;p&gt;Every time a junior pastes an error into Claude instead of reading it, a small skill atrophies. Compound that across two years of "productivity" and you get someone who can ship a CRUD app from scratch but freezes when production breaks at 2 AM.&lt;/p&gt;

&lt;p&gt;Stack Overflow, by the way, was &lt;em&gt;not&lt;/em&gt; the same thing. It forced you to &lt;strong&gt;read&lt;/strong&gt; an answer that was usually for a slightly different problem, then &lt;strong&gt;adapt&lt;/strong&gt; it. That adaptation step — &lt;em&gt;"this isn't quite my situation, but if I change X..."&lt;/em&gt; — was the learning.&lt;/p&gt;

&lt;p&gt;AI removes the adaptation step. It gives you something that looks like it should work, you accept it, you ship it, you learn nothing.&lt;/p&gt;

&lt;p&gt;Stack Overflow was a textbook. AI is the answer key.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I tell juniors now
&lt;/h2&gt;

&lt;p&gt;I've stopped saying "don't use AI." That's both unrealistic and condescending.&lt;/p&gt;

&lt;p&gt;What I tell them instead:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Form your own guess before reaching for the tool.&lt;/strong&gt; Even a wrong guess. Especially a wrong guess. The wrong guess is where the model gets built.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Before you accept any AI suggestion, explain to yourself why it works.&lt;/strong&gt; Out loud, if you have to. If you can't, don't paste it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Once a week, fix something the hard way.&lt;/strong&gt; Pick a bug. Solve it without AI. It will feel slow. That's the point.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This isn't anti-AI. It's anti-atrophy.&lt;/p&gt;

&lt;p&gt;The goal is not to never use the tool. The goal is to make sure that when the tool is wrong — and it &lt;em&gt;will&lt;/em&gt; be wrong, on the day production is on fire and the stakes are real — you are not standing in front of the wreckage with no idea what to do.&lt;/p&gt;

&lt;h2&gt;
  
  
  The uncomfortable prediction
&lt;/h2&gt;

&lt;p&gt;In five years, debugging legacy systems will be the highest-paid niche in software, because almost nobody under thirty will be able to do it.&lt;/p&gt;

&lt;p&gt;I am not joking and I am not exaggerating. The systems we are building today will still exist. The bugs in them will still exist. Someone will have to walk into a 200,000-line codebase, follow execution by hand, and figure out why a specific request is timing out on Tuesdays.&lt;/p&gt;

&lt;p&gt;That person will charge a fortune. There will not be many of them.&lt;/p&gt;

&lt;p&gt;If you are early in your career, this is genuinely good news — but only if you choose discomfort now.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The friction of debugging without AI is not a bug. It is the skill being installed.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Skip it now and the bill comes due later. It always does.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;If you're a junior reading this and you disagree — I genuinely want to hear why. Tell me what I'm getting wrong, in the comments. I'd rather be argued out of this than be right about it.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;I write about backend, production incidents, and the unglamorous parts of being a software engineer. Follow if that's your kind of thing.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>discuss</category>
      <category>career</category>
      <category>webdev</category>
    </item>
    <item>
      <title>An index made our query faster. It slowly suffocated our database.</title>
      <dc:creator>diata</dc:creator>
      <pubDate>Fri, 24 Apr 2026 03:19:59 +0000</pubDate>
      <link>https://forem.com/diata0210/an-index-made-our-query-faster-it-slowly-suffocated-our-database-2emn</link>
      <guid>https://forem.com/diata0210/an-index-made-our-query-faster-it-slowly-suffocated-our-database-2emn</guid>
      <description>&lt;p&gt;Hello, I'm Tuan.&lt;/p&gt;

&lt;p&gt;When backend engineers encounter a slow query, the first instinct is often something like:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Check the WHERE and ORDER BY, then just add a composite index."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I used to think the same way.&lt;/p&gt;

&lt;p&gt;And to be fair, in many cases, that approach works perfectly fine.&lt;/p&gt;

&lt;p&gt;But once, a seemingly correct optimization turned into a production incident. The read query became significantly faster, the EXPLAIN plan looked clean, and everything seemed perfect.&lt;/p&gt;

&lt;p&gt;Yet slowly, the entire production system began to degrade.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Database CPU usage spiked.&lt;/li&gt;
&lt;li&gt;Disk I/O increased dramatically.&lt;/li&gt;
&lt;li&gt;API latency crept upward.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It took me a while to realize the real problem:&lt;/p&gt;

&lt;p&gt;I optimized the read path, but completely ignored the write cost.&lt;/p&gt;

&lt;p&gt;If you're about to run &lt;code&gt;CREATE INDEX&lt;/code&gt; to save a slow API, take a few minutes to read this first.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Initial Problem
&lt;/h2&gt;

&lt;p&gt;One day, the product team asked for a simple feature:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Create an API that returns the top 20 hottest products in a category."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Essentially, a real-time trending ranking.&lt;/p&gt;

&lt;p&gt;At first glance, the solution seemed trivial — just sort products by a score and return the top 20.&lt;/p&gt;

&lt;p&gt;The products table already had around 10 million rows, and traffic was already in the thousands of requests per second. Since this API would appear in a highly visible part of the product, slow responses were not acceptable.&lt;/p&gt;

&lt;p&gt;My thinking at the time was straightforward:&lt;/p&gt;

&lt;p&gt;Just add the right index and it will be fine.&lt;/p&gt;

&lt;h2&gt;
  
  
  The "Perfectly Correct" Optimization
&lt;/h2&gt;

&lt;p&gt;The query looked like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt;
    &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;interest_score&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;products&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;status&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'ACTIVE'&lt;/span&gt;
&lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;stock_quantity&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
&lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;category_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;42&lt;/span&gt;
&lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;interest_score&lt;/span&gt; &lt;span class="k"&gt;DESC&lt;/span&gt;
&lt;span class="k"&gt;LIMIT&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Running this on a table with millions of rows would cause a full scan and sort, which obviously wouldn't scale.&lt;/p&gt;

&lt;p&gt;So I applied the classic solution:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;INDEX&lt;/span&gt; &lt;span class="n"&gt;idx_products_category_status_score&lt;/span&gt;
&lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;products&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;category_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;status&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;interest_score&lt;/span&gt; &lt;span class="k"&gt;DESC&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The results looked fantastic.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The query became dramatically faster&lt;/li&gt;
&lt;li&gt;The EXPLAIN plan looked perfect&lt;/li&gt;
&lt;li&gt;Response time dropped immediately&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;From the perspective of query performance, everything seemed solved. At that moment, I felt pretty confident about the fix.&lt;/p&gt;

&lt;p&gt;Unfortunately, that confidence didn't last long.&lt;/p&gt;

&lt;h2&gt;
  
  
  When Production Started Acting Strange
&lt;/h2&gt;

&lt;p&gt;The issue was something I completely overlooked.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;interest_score&lt;/code&gt; was not a static column.&lt;/p&gt;

&lt;p&gt;Every time users interacted with a product — viewing details, liking it, or adding it to the cart — the score increased. Something like this happened constantly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;UPDATE&lt;/span&gt; &lt;span class="n"&gt;products&lt;/span&gt;
&lt;span class="k"&gt;SET&lt;/span&gt; &lt;span class="n"&gt;interest_score&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;interest_score&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At first, this seemed harmless. Incrementing a number is one of the most common operations in any system.&lt;/p&gt;

&lt;p&gt;But the moment interest_score became part of an index, that simple update was no longer simple.&lt;/p&gt;

&lt;h2&gt;
  
  
  The System Didn't Crash — It Slowly Suffocated
&lt;/h2&gt;

&lt;p&gt;The worst kind of production issue is the one that doesn't fail loudly.&lt;/p&gt;

&lt;p&gt;There were no crashes. No obvious errors. The system just became slower and slower.&lt;/p&gt;

&lt;p&gt;Over time we observed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;API latency gradually increased&lt;/li&gt;
&lt;li&gt;Database CPU usage spiked&lt;/li&gt;
&lt;li&gt;Disk I/O skyrocketed&lt;/li&gt;
&lt;li&gt;Some requests started timing out&lt;/li&gt;
&lt;li&gt;Slow query logs filled with UPDATE statements&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Initially we blamed traffic growth. After all, the SELECT query was indexed and looked perfectly fine.&lt;/p&gt;

&lt;p&gt;But once we monitored the system closely, the real culprit finally became clear — the heavy load was coming from the updates to interest_score.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Real Problem
&lt;/h2&gt;

&lt;p&gt;The index itself was not wrong. The real issue was the hidden write cost.&lt;/p&gt;

&lt;p&gt;Whenever interest_score changes, the database cannot simply update a number in place. Because the column participates in an index used for sorting, the database must also maintain the index structure.&lt;/p&gt;

&lt;p&gt;Conceptually, it means:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The record must be removed from its old position in the index and reinserted into a new one.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;With a few updates, this is trivial. But when thousands of updates per second hit the system, maintaining that index becomes extremely expensive.&lt;/p&gt;

&lt;p&gt;In other words: the index optimized reads, but it dramatically increased the cost of writes.&lt;/p&gt;
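

&lt;p&gt;That hidden cost is easy to reproduce in miniature. The sketch below (again with in-memory SQLite as a stand-in; absolute timings depend on your machine) runs the same batch of increments with and without the composite index. The indexed run should come out measurably slower, since every increment also relocates the row's entry inside the index:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import sqlite3
import random
import time

def time_updates(with_index):
    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE products (
        id INTEGER PRIMARY KEY,
        category_id INTEGER,
        status TEXT,
        interest_score INTEGER)""")
    conn.executemany(
        "INSERT INTO products VALUES (?, ?, 'active', 0)",
        ((i, i % 100) for i in range(100_000)))
    if with_index:
        conn.execute("""CREATE INDEX idx_products_category_status_score
            ON products (category_id, status, interest_score DESC)""")
    conn.commit()
    # Time the same batch of single-row increments in both setups.
    start = time.perf_counter()
    for _ in range(20_000):
        conn.execute(
            "UPDATE products SET interest_score = interest_score + 1 WHERE id = ?",
            (random.randrange(100_000),))
    conn.commit()
    return time.perf_counter() - start

print(f"without index: {time_updates(False):.3f}s")
print(f"with index:    {time_updates(True):.3f}s")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;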

&lt;h2&gt;
  
  
  The Hotspot Problem
&lt;/h2&gt;

&lt;p&gt;User interactions are not evenly distributed. Popular products receive far more clicks than others.&lt;/p&gt;

&lt;p&gt;That meant many updates were hitting the same rows repeatedly, creating contention inside the database. Even though the code looked harmless:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;UPDATE&lt;/span&gt; &lt;span class="n"&gt;products&lt;/span&gt;
&lt;span class="k"&gt;SET&lt;/span&gt; &lt;span class="n"&gt;interest_score&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;interest_score&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Under the hood, the database was handling heavy concurrent updates to the same regions of data and index pages. The system was effectively fighting itself.&lt;/p&gt;

&lt;p&gt;That was the moment I realized something important:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;An index that speeds up a query does not necessarily make the system healthier.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And in hindsight, the real design mistake was trying to make the main transactional table handle real-time ranking.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Hard Decision: Removing the Index
&lt;/h2&gt;

&lt;p&gt;Removing the index felt wrong at first. After all, it had significantly improved the query performance.&lt;/p&gt;

&lt;p&gt;But the metrics were clear. As long as that index existed, write contention would remain.&lt;/p&gt;

&lt;p&gt;So we removed it.&lt;/p&gt;

&lt;p&gt;The result was immediate:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Write pressure dropped significantly&lt;/li&gt;
&lt;li&gt;Disk I/O stabilized&lt;/li&gt;
&lt;li&gt;Database CPU usage returned to normal levels&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The ranking query itself became slower again, but at least the entire system was no longer being dragged down by a single column update.&lt;/p&gt;

&lt;p&gt;That moment taught me an important lesson:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Some problems that look like SQL optimization tasks are actually architecture problems.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  The Alternative: Rethinking the Architecture
&lt;/h2&gt;

&lt;p&gt;Instead of forcing the database to handle both persistent data and real-time ranking, we split responsibilities.&lt;/p&gt;

&lt;p&gt;Score updates were moved to Redis Sorted Sets.&lt;/p&gt;

&lt;p&gt;When user actions occur, we increment the score in Redis:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ZINCRBY trending:cat:42 1 12345
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The new flow became simple:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;User action updates score in Redis&lt;/li&gt;
&lt;li&gt;When ranking is needed, fetch the top IDs from Redis&lt;/li&gt;
&lt;li&gt;Fetch product details from the database using id IN (...)&lt;/li&gt;
&lt;/ul&gt;
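

&lt;p&gt;The flow above can be sketched end to end, including the over-fetch-and-filter trade-off. A plain dict stands in for the Redis sorted set here, with helper names mirroring the real ZINCRBY / ZREVRANGE commands; a production setup would of course use an actual Redis client:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import sqlite3
from collections import defaultdict

# Stand-in for the Redis sorted set; the helpers mimic ZINCRBY / ZREVRANGE.
trending = defaultdict(float)

def zincrby(member, amount=1):
    trending[member] += amount

def zrevrange(start, stop):
    ranked = sorted(trending, key=trending.get, reverse=True)
    return ranked[start:stop + 1]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, status TEXT)")
conn.executemany("INSERT INTO products VALUES (?, ?, ?)",
                 [(1, "A", "active"), (2, "B", "inactive"), (3, "C", "active")])

# User actions bump scores in the cache, never the main table.
for pid in (3, 3, 2, 2, 2, 1):
    zincrby(pid)

# Read path: over-fetch top IDs, then filter and hydrate from the database.
top_n = 2
candidate_ids = zrevrange(0, top_n * 2 - 1)  # fetch extra to survive filtering
placeholders = ",".join("?" * len(candidate_ids))
rows = conn.execute(
    f"SELECT id, name FROM products WHERE id IN ({placeholders}) AND status = 'active'",
    candidate_ids).fetchall()
by_id = dict(rows)
# Keep the ranking order from the sorted set, drop filtered-out products.
result = [(pid, by_id[pid]) for pid in candidate_ids if pid in by_id][:top_n]
print(result)  # [(3, 'C'), (1, 'A')] -- product 2 was inactive and filtered out
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;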

&lt;p&gt;This allowed each system to focus on what it does best:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9yfu7k0ld28w0zr9v6yc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9yfu7k0ld28w0zr9v6yc.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Of course, this design also came with trade-offs. Redis might return products that are out of stock or inactive — so we had to fetch slightly more results and filter them in the database. We also accepted eventual consistency instead of perfect real-time synchronization.&lt;/p&gt;

&lt;p&gt;But overall, the system became far more scalable and stable.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Learned
&lt;/h2&gt;

&lt;p&gt;Since that incident, I approach slow queries very differently.&lt;/p&gt;

&lt;p&gt;Before adding an index, I now ask myself a few questions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is this column frequently updated?&lt;/li&gt;
&lt;li&gt;How much write overhead will this index introduce?&lt;/li&gt;
&lt;li&gt;Am I optimizing a query, or optimizing the entire workload?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For highly dynamic values such as ranking scores, like counts, and view counts, I avoid updating the main transactional table directly. More often than not, the real bottleneck is not SQL syntax, but choosing the right system for the workload.&lt;/p&gt;

&lt;p&gt;That composite index wasn't technically wrong. But in the context of our production traffic, it was the wrong decision.&lt;/p&gt;

&lt;p&gt;And today, I care less about whether a query becomes faster. I care more about this question:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Does this change actually make the whole system healthier?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Because in production systems, correctness is not defined by the speed of a single query. It is defined by how the entire system behaves under real traffic.&lt;/p&gt;

&lt;p&gt;If you found this helpful, follow me for more deep dives into Backend Architecture.&lt;/p&gt;

</description>
      <category>database</category>
      <category>backend</category>
      <category>performance</category>
      <category>webdev</category>
    </item>
  </channel>
</rss>
