<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Thiago Pacheco</title>
    <description>The latest articles on Forem by Thiago Pacheco (@pacheco).</description>
    <link>https://forem.com/pacheco</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F36392%2Fc0128130-62c1-45b5-8a3e-6950e9c6a3ac.jpeg</url>
      <title>Forem: Thiago Pacheco</title>
      <link>https://forem.com/pacheco</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/pacheco"/>
    <language>en</language>
    <item>
      <title>Clean Code Is Dead (And I Hate That I Agree)</title>
      <dc:creator>Thiago Pacheco</dc:creator>
      <pubDate>Sun, 12 Apr 2026 10:53:52 +0000</pubDate>
      <link>https://forem.com/pacheco/clean-code-is-dead-and-i-hate-that-i-agree-4kme</link>
      <guid>https://forem.com/pacheco/clean-code-is-dead-and-i-hate-that-i-agree-4kme</guid>
      <description>&lt;p&gt;I’ve spent my career fighting for clean code. In code reviews, in architecture meetings, in those long debates about naming conventions that everyone pretends to hate but secretly cares about. Readable code. Well-structured code. Code that respects the next person who has to touch it.&lt;/p&gt;

&lt;p&gt;I’m starting to realize that none of that might matter anymore.&lt;/p&gt;




&lt;h2&gt;Clean Code Was Always a Human Interface&lt;/h2&gt;

&lt;p&gt;Every clean code practice we follow was invented to solve a human problem.&lt;/p&gt;

&lt;p&gt;Descriptive variable names? So a human can read it. Separation of concerns? So a human can navigate it. Consistent formatting, small functions, clear abstractions? All of it — designed to make code convenient for people to write and to read.&lt;/p&gt;

&lt;p&gt;The entire philosophy assumes that humans are the primary audience of source code.&lt;/p&gt;

&lt;p&gt;But what happens when they’re not?&lt;/p&gt;

&lt;h2&gt;AI Doesn’t Need Your Clean Code&lt;/h2&gt;

&lt;p&gt;The more we rely on AI to write, review, and maintain code, the less we actually know the implementation details. And I don’t mean that in a lazy way — I mean structurally. The workflow is changing. You describe what you want, AI generates it, you review the output at a high level, and you move on.&lt;/p&gt;

&lt;p&gt;AI doesn’t care about your variable names. It doesn’t need elegant abstractions to understand what’s happening. It processes the entire codebase — messy or clean — with the same indifference. It doesn’t get confused by a 500-line function. It doesn’t lose context the way a human does after scrolling through too many files.&lt;/p&gt;

&lt;p&gt;I had a moment recently that made this click. I was reviewing AI-generated code and caught myself leaving comments about naming and structure — the same feedback I’d give a junior dev. Then I paused. Who was I writing these comments for? The AI would regenerate the whole thing from scratch on the next prompt anyway. I was applying human code review instincts to a process that doesn’t have a human on the receiving end (sort of). Old habits addressing a problem that no longer exists.&lt;/p&gt;

&lt;p&gt;The practices we built specifically for human readability and human convenience are becoming overhead. In some cases, they’re becoming a bottleneck — extra layers of abstraction that add complexity without benefiting the thing that’s actually doing the reading.&lt;/p&gt;

&lt;p&gt;This isn’t a thought experiment. This is already happening in how teams ship software.&lt;/p&gt;

&lt;h2&gt;The Highest Level Language Is English Now&lt;/h2&gt;

&lt;p&gt;If readability stops being the priority, what takes its place? Performance.&lt;/p&gt;

&lt;p&gt;If AI can handle the complexity regardless, why optimize for human readability when you can optimize for raw execution speed? The ideal language for AI-driven development might not be Python or TypeScript. It might be C. It might be Rust. It might be something even lower level where AI has fine-grained control over memory, threading, and every implementation detail — things that are painful for humans but trivial for a model that doesn’t get frustrated.&lt;/p&gt;

&lt;p&gt;We’ve always talked about “high level” and “low level” languages. High level meant closer to human thinking, low level meant closer to the machine. But now there’s a level above all of them.&lt;/p&gt;

&lt;p&gt;English. Portuguese. Mandarin. Whatever you speak.&lt;/p&gt;

&lt;p&gt;Natural language is the highest level language now. LLMs are remarkable polyglots — they work fluently in all of them. And code? Code is just the compilation target.&lt;/p&gt;

&lt;p&gt;We went from writing machine instructions, to writing human-readable code, to just… describing what we want in plain words. Each step abstracted away more control. Each step moved us further from the metal.&lt;/p&gt;

&lt;h2&gt;We’re Losing Control at Every Layer&lt;/h2&gt;

&lt;p&gt;It’s not just that AI writes the code. People use AI to plan the work, brainstorm the architecture, make decisions about what to build and how to build it. The entire pipeline — from idea to implementation — is being routed through language models.&lt;/p&gt;

&lt;p&gt;And LLMs are dangerously convincing. Their reasoning is well-structured even when the underlying data is fabricated or slightly off. I’ve caught myself reading an AI-generated explanation, thinking “yeah, that makes sense,” only to realize later that a key detail was subtly wrong. Or worse — never realizing it at all. The convincing tone becomes a trap.&lt;/p&gt;

&lt;p&gt;You could argue that humans were never perfectly accurate either. Fair. We’ve always built software on incomplete knowledge and best guesses. But there was something grounding about having a person in the loop who had intuition, experience, and skin in the game. Someone who could smell when something was off, even if they couldn’t articulate why.&lt;/p&gt;

&lt;p&gt;The more we delegate — not just the coding, but the thinking — the more that instinct fades. And I’m not sure we’re paying enough attention to what we’re losing.&lt;/p&gt;

&lt;h2&gt;Maybe I’m Too Attached to the Craft&lt;/h2&gt;

&lt;p&gt;Maybe I’m romanticizing this. Maybe code was always just a means to an end and I turned it into something more than it needed to be. I built part of my identity around writing good code, caring about architecture, treating the codebase as a product in itself. It’s hard to watch that become irrelevant and not take it personally.&lt;/p&gt;

&lt;p&gt;Maybe I’m onto something. Maybe the people who cared about the craft will be the ones who notice when the quality starts slipping in ways that AI can’t detect. Or maybe that’s just what I tell myself to feel relevant.&lt;/p&gt;

&lt;p&gt;I genuinely don’t know.&lt;/p&gt;

&lt;p&gt;And I can’t be a hypocrite about it. This very piece — I’m using AI to help me review it, refine the structure, make sure it reads well. I’m literally writing about the death of human craft while using the thing that’s killing it to help me write better.&lt;/p&gt;

&lt;p&gt;But the ideas are mine. The opinions are mine. The discomfort is mine. AI didn’t tell me to feel this way — I felt it, and then I used a tool to articulate it more clearly. There’s a difference between using AI as a tool and being used by it. At least I think there is.&lt;/p&gt;

&lt;h2&gt;The Mental Model Shift&lt;/h2&gt;

&lt;p&gt;I don’t have a solution. But I’ve been rethinking how I relate to the work, and that’s helped more than any specific tool or workflow.&lt;/p&gt;

&lt;p&gt;The shift is this: if code is becoming the compilation target, then what you’re really building isn’t the code — it’s the system of decisions that produces it. Your taste. Your standards. Your judgment about what good looks like. That’s the actual product now.&lt;/p&gt;

&lt;p&gt;And that’s something you can teach to AI.&lt;/p&gt;

&lt;p&gt;I’ve been experimenting with this — taking the patterns I’ve developed over years of writing software and encoding them into the tools I work with. Not just “generate a function that does X” but “here’s how I think about error handling, here’s my preference on abstraction depth, here’s what I consider acceptable tradeoffs.” The more specific you get about your own engineering philosophy, the more the output starts to feel like yours instead of generic AI slop.&lt;/p&gt;

&lt;p&gt;This isn’t complicated or expensive. The tooling to build your own AI workflows — agents that understand how &lt;em&gt;you&lt;/em&gt; work — is accessible today in a way that would’ve been unthinkable two years ago. You don’t need a team or a platform. You need clarity about your own standards and the willingness to invest time in teaching them.&lt;/p&gt;

&lt;p&gt;If you’ve spent years developing engineering taste, that taste is now &lt;em&gt;leverage&lt;/em&gt;. You can apply it at a scale that was never possible when you had to write every line yourself. More ambitious projects. More complex systems. Things that would’ve required a team, handled by one person with clear vision and the right tools.&lt;/p&gt;

&lt;p&gt;It only works if you stay in the driver’s seat though. If you’re the one making the calls about what ships and what gets thrown away. Not a consumer of whatever AI generates, but the lead. The final authority.&lt;/p&gt;

&lt;p&gt;And right now, I’m watching a lot of people quietly stop being that.&lt;/p&gt;

&lt;h2&gt;I Don’t Have a Clean Answer&lt;/h2&gt;

&lt;p&gt;If language models keep evolving at even half the pace we’ve seen over the last couple of years, the industry in five years looks nothing like it does today. The way we think about programming, about code quality, about what it means to be a software engineer — all of it is up for renegotiation.&lt;/p&gt;

&lt;p&gt;I don’t have a neat conclusion. I have a tension I’m sitting with, and I think a lot of developers feel it too even if they haven’t put words to it yet.&lt;/p&gt;

&lt;p&gt;Clean code might be dead. The practices, the principles, the carefully named variables and thoughtfully extracted functions — they might genuinely become artifacts of an era when humans needed to read what humans wrote.&lt;/p&gt;

&lt;p&gt;But the intention behind clean code? Caring about what you build. Taking pride in the craft. Giving a damn about quality even when no one is looking?&lt;/p&gt;

&lt;p&gt;That can’t die. Unless we let it.&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://sudoish.com/clean-code-is-dead/" rel="noopener noreferrer"&gt;Clean Code Is Dead (And I Hate That I Agree)&lt;/a&gt; appeared first on &lt;a href="https://sudoish.com" rel="noopener noreferrer"&gt;sudoish&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>developmentbestpract</category>
      <category>aicoding</category>
    </item>
    <item>
      <title>You Think, AI Executes: The Skills That Actually Matter</title>
      <dc:creator>Thiago Pacheco</dc:creator>
      <pubDate>Mon, 06 Apr 2026 02:58:16 +0000</pubDate>
      <link>https://forem.com/pacheco/you-think-ai-executes-the-skills-that-actually-matter-1e7p</link>
      <guid>https://forem.com/pacheco/you-think-ai-executes-the-skills-that-actually-matter-1e7p</guid>
      <description>&lt;p&gt;The most valuable developer skill right now isn't writing more code faster. It's learning unfamiliar codebases, building context that guides decisions, planning strategic approaches to problems, and shipping production code with confidence.&lt;/p&gt;

&lt;p&gt;I recently added &lt;code&gt;.env&lt;/code&gt; file support to &lt;a href="https://github.com/joerdav/xc" rel="noopener noreferrer"&gt;xc&lt;/a&gt;, a Markdown-based task runner written in Go. The codebase was completely unfamiliar. I'm not a Go expert. But in 2.5 hours, I went from zero knowledge to a production-ready pull request with 84% test coverage and zero bugs in manual testing.&lt;/p&gt;

&lt;p&gt;Here's what's different: &lt;strong&gt;I didn't write a single line of code.&lt;/strong&gt; Not one. AI wrote everything—tests, implementation, integration, documentation. My role was entirely different: I questioned, I planned, I directed, I reviewed. I read the code, but I didn't write it.&lt;/p&gt;

&lt;p&gt;This isn't another "I asked ChatGPT to build an app" story. This is about the skills that separate developers who use AI as a force multiplier from those who just ask it to generate code. It's about onboarding fast, documenting strategically, planning thoroughly, directing execution, and reviewing confidently. The code writing? That's handled.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcpc3lwr2dgyj25mysreu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcpc3lwr2dgyj25mysreu.png" alt="📁" width="72" height="72"&gt;&lt;/a&gt; Complete &lt;code&gt;.ai/&lt;/code&gt; folder in the working fork:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;strong&gt;&lt;a href="https://github.com/sudoish/xc/tree/ai-context/.ai" rel="noopener noreferrer"&gt;github.com/sudoish/xc/tree/ai-context/.ai&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9f19bu2w1s9oqo72994z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9f19bu2w1s9oqo72994z.png" alt="🔀" width="72" height="72"&gt;&lt;/a&gt; Production-ready PR:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;strong&gt;&lt;a href="https://github.com/joerdav/xc/pull/167" rel="noopener noreferrer"&gt;github.com/joerdav/xc/pull/167&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkvs7fddjqg8063ot203d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkvs7fddjqg8063ot203d.png" alt="💡" width="72" height="72"&gt;&lt;/a&gt; The &lt;code&gt;.ai/&lt;/code&gt; folder lives in a separate &lt;code&gt;ai-context&lt;/code&gt; branch so it doesn't clutter the main codebase but remains available for reference and iteration.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;Why This Matters&lt;/h2&gt;

&lt;p&gt;Most AI coding demos show you the magic: "I asked ChatGPT to build X and it worked!" They skip the parts that actually matter for professional development: How do you onboard to a codebase you've never seen? How do you make architectural decisions when you don't understand the patterns yet? How do you ensure your code is production-ready when AI helped write it?&lt;/p&gt;

&lt;p&gt;These are the skills that matter now. Code generation is table stakes. What matters is context building, strategic planning, and confident execution.&lt;/p&gt;

&lt;p&gt;Here's the project: &lt;a href="https://github.com/joerdav/xc" rel="noopener noreferrer"&gt;xc&lt;/a&gt;, a task runner that reads tasks from Markdown files. About 5,000 lines of Go. Completely unfamiliar to me. The feature request was straightforward: add &lt;code&gt;.env&lt;/code&gt; file support (&lt;a href="https://github.com/joerdav/xc/issues/162" rel="noopener noreferrer"&gt;Issue #162&lt;/a&gt;). In 2.5 hours, using free AI models and a structured approach, I went from knowing nothing about the codebase to a production-ready pull request.&lt;/p&gt;

&lt;p&gt;The difference wasn't better prompts. It was better process.&lt;/p&gt;

&lt;h2&gt;The Actual Workflow: What I Did vs What AI Did&lt;/h2&gt;

&lt;p&gt;Here's the honest breakdown of who did what. I didn't write a single line of code myself. That's not the valuable work anymore.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What I did:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Explored the codebase with AI&lt;/strong&gt; — Asked questions, challenged its understanding, verified explanations against the actual code&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Built the &lt;code&gt;.ai/&lt;/code&gt; structure&lt;/strong&gt; — Wrote context docs, ADRs, rules, and implementation specs based on my growing understanding&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Questioned the strategy&lt;/strong&gt; — Evaluated alternatives, captured trade-offs, made architectural decisions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Directed the implementation&lt;/strong&gt; — "Follow the spec. Implement test 1. Now test 2." Each step validated before moving forward&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reviewed iteratively&lt;/strong&gt; — Asked AI to review the code, digested its findings, confirmed issues, asked it to fix them. Repeated multiple times&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Final deep review&lt;/strong&gt; — Read through the entire PR on GitHub, verified everything made sense, marked ready for review&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;What AI did:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Answered my questions&lt;/strong&gt; — Explained architecture, pointed me to relevant files, clarified patterns&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Wrote all the code&lt;/strong&gt; — Tests, implementation, integration, everything&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Found its own bugs&lt;/strong&gt; — Self-review caught 5 issues before I even looked at the code&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fixed the issues&lt;/strong&gt; — Applied fixes based on its own review findings&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Followed the plan&lt;/strong&gt; — Implemented exactly what the spec described, in the order specified&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;What we did together:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Built understanding through conversation&lt;/li&gt;
&lt;li&gt;Validated each step before proceeding&lt;/li&gt;
&lt;li&gt;Caught subtle bugs through TDD&lt;/li&gt;
&lt;li&gt;Created production-ready code with high confidence&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The key insight: &lt;strong&gt;I never typed code.&lt;/strong&gt; I read it, reviewed it, directed changes to it. But I didn't write it. My value was in understanding, planning, and judgment. AI's value was in execution and self-checking. This is the new division of labor.&lt;/p&gt;

&lt;h2&gt;The Four Skills&lt;/h2&gt;

&lt;p&gt;This walkthrough demonstrates four skills that matter more than code generation:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Skill 1: Rapid Onboarding.&lt;/strong&gt; Learning an unfamiliar codebase fast by building structured context instead of reading every file. The &lt;code&gt;.ai/&lt;/code&gt; folder captures architecture, patterns, and limitations in a way both humans and AI can reference.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Skill 2: Strategic Documentation.&lt;/strong&gt; Building documentation that guides development, not just records it. Architecture Decision Records (ADRs) capture the "why" behind choices, evaluate alternatives, and create a shared understanding before code is written.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Skill 3: Systematic Planning.&lt;/strong&gt; Breaking down problems into testable steps. Each test defines expected behavior. Each implementation proves the behavior works. Each commit tells part of the story. No guessing, no hoping.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Skill 4: Confident Execution.&lt;/strong&gt; Shipping code you trust because you've tested it thoroughly, reviewed it critically, and validated it works in real scenarios. AI can help write code, but you own the quality.&lt;/p&gt;

&lt;p&gt;These skills work regardless of the AI tool you use. They work with free models. They work on unfamiliar codebases.&lt;/p&gt;

&lt;h2&gt;The Feature Request&lt;/h2&gt;

&lt;p&gt;First, a quick primer on how xc works: it's a task runner that reads tasks directly from your &lt;code&gt;README.md&lt;/code&gt; (or any markdown file). Tasks are defined as markdown headings with code blocks. When you run &lt;code&gt;xc test&lt;/code&gt;, it finds the &lt;code&gt;## test&lt;/code&gt; heading in your README and executes the code block beneath it. The genius is that your documentation &lt;em&gt;is&lt;/em&gt; your task runner, so they never get out of sync.&lt;/p&gt;

&lt;p&gt;A user opened &lt;a href="https://github.com/joerdav/xc/issues/162" rel="noopener noreferrer"&gt;Issue #162&lt;/a&gt; asking for &lt;code&gt;.env&lt;/code&gt; file support. They wanted to use the same set of tasks for different environments without cluttering the Markdown with environment variables.&lt;/p&gt;

&lt;p&gt;Before the feature, you'd have to write this in your README.md:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;## deploy&lt;/span&gt;

Deploy to production.

Env: &lt;span class="nv"&gt;DATABASE_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;postgres://prod/db, &lt;span class="nv"&gt;API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;secret123, &lt;span class="nv"&gt;ENV&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;production

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;kubectl apply -f deployment.yaml&lt;/p&gt;

&lt;p&gt;Then run with &lt;code&gt;xc deploy&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;After the feature, your README stays clean:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;## deploy&lt;/span&gt;

Deploy to production.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;kubectl apply -f deployment.yaml&lt;/p&gt;

&lt;p&gt;The environment variables live in a separate &lt;code&gt;.env&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight properties"&gt;&lt;code&gt;&lt;span class="py"&gt;DATABASE_URL&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;postgres://prod/db&lt;/span&gt;
&lt;span class="py"&gt;API_KEY&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;secret123&lt;/span&gt;
&lt;span class="py"&gt;ENV&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;production&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You still run the same command, but now the credentials are managed in &lt;code&gt;.env&lt;/code&gt; instead of cluttering your documentation.&lt;/p&gt;

&lt;p&gt;Simple ask, but the implementation requires real decisions. When do you load the files? What about overrides? How do you handle security? What about backward compatibility?&lt;/p&gt;

&lt;h2&gt;The &lt;code&gt;.ai/&lt;/code&gt; Structure: Context as Code&lt;/h2&gt;

&lt;p&gt;Before writing any code, I created a structured context folder. This turned out to be the key to working with AI effectively. It's not about better prompts; it's about better &lt;strong&gt;structure&lt;/strong&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Full &lt;code&gt;.ai/&lt;/code&gt; folder:&lt;/strong&gt; &lt;a href="https://github.com/sudoish/xc/tree/ai-context/.ai" rel="noopener noreferrer"&gt;github.com/sudoish/xc/tree/ai-context/.ai&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The folder looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.ai/
├── agents.md # Who's working on what
├── context.md # Project overview, architecture
├── architecture/
│ ├── decisions.md # Current design patterns
│ └── adrs/
│ └── 001-dotenv-support.md # Design decisions for this feature
├── rules/
│ ├── code-style.md # Go conventions
│ ├── testing.md # TDD workflow
│ └── commits.md # Commit message format
└── tasks/
    └── 001-dotenv-implementation.md # Step-by-step plan

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; This structure is an investment, not overhead you repeat for every feature. You build it once during your first feature, then leverage it for every feature after. The &lt;code&gt;context.md&lt;/code&gt;, &lt;code&gt;architecture/decisions.md&lt;/code&gt;, and &lt;code&gt;rules/&lt;/code&gt; files rarely change. Each new feature just adds a new ADR (like &lt;code&gt;002-api-caching.md&lt;/code&gt;) and a new task spec (like &lt;code&gt;002-api-caching-implementation.md&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;Think of it like setting up your development environment. The initial setup takes time, but every feature after that is faster because the foundation exists.&lt;/p&gt;

&lt;p&gt;Each file serves a specific purpose. The &lt;a href="https://github.com/sudoish/xc/blob/ai-context/.ai/context.md" rel="noopener noreferrer"&gt;&lt;code&gt;context.md&lt;/code&gt;&lt;/a&gt; file becomes AI's memory. It explains what xc does, how it's architected with its &lt;code&gt;cmd/&lt;/code&gt;, &lt;code&gt;models/&lt;/code&gt;, &lt;code&gt;run/&lt;/code&gt;, and &lt;code&gt;parser/&lt;/code&gt; packages, what key behaviors exist like dependencies and environment handling, and what current limitations we're working around. Every time I ask AI a question, this context gets included automatically.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://github.com/sudoish/xc/blob/ai-context/.ai/rules/testing.md" rel="noopener noreferrer"&gt;&lt;code&gt;rules/testing.md&lt;/code&gt;&lt;/a&gt; file defines the TDD workflow we follow: write a failing test first (red), write minimal code to make it pass (green), clean up without changing behavior (refactor), then commit. This keeps both me and AI honest. No skipping tests. No shortcuts.&lt;/p&gt;

&lt;p&gt;The real gem is &lt;a href="https://github.com/sudoish/xc/blob/ai-context/.ai/architecture/adrs/001-dotenv-support.md" rel="noopener noreferrer"&gt;&lt;code&gt;adrs/001-dotenv-support.md&lt;/code&gt;&lt;/a&gt;, the Architecture Decision Record. This is where design happens. It's not "build me a feature," it's "here's why we chose this approach." We decided to load .env files at application startup rather than per-task, to support &lt;code&gt;.env.local&lt;/code&gt; overrides, to skip world-readable files for security, and to add CLI flags like &lt;code&gt;--env-file&lt;/code&gt; and &lt;code&gt;--no-env&lt;/code&gt;. We considered alternatives like per-task loading (rejected as too complex) and requiring an explicit flag (rejected as too much friction). This ADR becomes the source of truth. When AI suggests something different, I can just say "check the ADR."&lt;/p&gt;
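
&lt;p&gt;As a rough sketch, the two flags from that ADR could be wired up with Go's standard &lt;code&gt;flag&lt;/code&gt; package like this. The flag names come from the article; the defaults, help text, and use of the standard library are my assumptions, not xc's actual code:&lt;/p&gt;

```go
package main

import (
	"flag"
	"fmt"
)

// Package-level so the rest of the program can consult them after Parse.
var (
	// --env-file: load variables from an explicit file instead of .env.
	envFile = flag.String("env-file", "", "load environment variables from this file instead of .env")
	// --no-env: opt out of .env loading entirely.
	noEnv = flag.Bool("no-env", false, "skip loading .env files")
)

func main() {
	flag.Parse()
	fmt.Println("env-file:", *envFile, "no-env:", *noEnv)
}
```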

&lt;p&gt;&lt;strong&gt;The living documentation principle:&lt;/strong&gt; As the codebase evolves, so does the &lt;code&gt;.ai/&lt;/code&gt; folder. When you add a new feature, you write a new ADR (002, 003, etc.). When architecture changes, you update &lt;code&gt;architecture/decisions.md&lt;/code&gt; or add a new ADR explaining the change. When patterns emerge, you document them. The folder grows with the project, but the structure stays the same. Each feature builds on the understanding captured before it.&lt;/p&gt;

&lt;p&gt;This means the second feature is faster than the first. The third is faster than the second. The documentation compounds.&lt;/p&gt;

&lt;h2&gt;The Task Spec: Planning Before Coding&lt;/h2&gt;

&lt;p&gt;Before writing any code, I created &lt;a href="https://github.com/sudoish/xc/blob/ai-context/.ai/tasks/001-dotenv-implementation.md" rel="noopener noreferrer"&gt;&lt;code&gt;tasks/001-dotenv-implementation.md&lt;/code&gt;&lt;/a&gt;, a step-by-step plan for implementing the feature. This isn't a project management document. It's a development spec that breaks the feature into TDD cycles.&lt;/p&gt;

&lt;p&gt;The spec listed each test I needed to write, what behavior it should verify, and the expected implementation. Test for file not found. Test for loading valid env. Test for .env.local overrides. Test for security checks. Each one became a TDD cycle.&lt;/p&gt;

&lt;p&gt;This is what makes AI effective. Without the spec, I'd be asking AI "what should I do next?" every five minutes. With the spec, I'm asking "implement the next test according to the plan." The spec keeps development focused and systematic. It's the difference between wandering and following a map.&lt;/p&gt;

&lt;p&gt;For your second feature, you write a new spec. For your third, another one. The format is consistent, but each spec is tailored to its feature. This is the work that makes development fast and confident.&lt;/p&gt;

&lt;h2&gt;The TDD Flow: Red → Green → Refactor → Commit&lt;/h2&gt;

&lt;p&gt;Here's where the real work happens. Each test defines acceptance criteria for exactly what needs to be built.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cycle 1: Valid .env should load variables&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First behavior: if a &lt;code&gt;.env&lt;/code&gt; file exists and contains &lt;code&gt;KEY=value&lt;/code&gt; pairs, those should be loaded into the environment. Test written, test failed (red)—no loader existed yet. Implementation added using the godotenv library (green). Test passed. Committed with "load env vars from dotenv file".&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cycle 2: .env.local should override .env&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Expected behavior: if both &lt;code&gt;.env&lt;/code&gt; and &lt;code&gt;.env.local&lt;/code&gt; exist, and both define the same variable, the &lt;code&gt;.env.local&lt;/code&gt; value wins. This is crucial for local development where you want to override defaults without modifying the base file. Test written, test failed initially because I was using the wrong function, fixed the implementation, test passed. Committed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cycle 3: World-readable files should be skipped&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Security requirement: if a &lt;code&gt;.env&lt;/code&gt; file has permissions that allow other users to read it (like &lt;code&gt;chmod 644&lt;/code&gt;), skip loading it and warn the user. This prevents accidentally exposing secrets. Test created, test failed (secrets were being loaded), added permission check, test passed. Committed.&lt;/p&gt;

&lt;p&gt;This rhythm of define → test → implement → verify → commit creates a clean history. When I looked at the final commit log, I could see exactly how the feature evolved: add godotenv dependency, load env vars from dotenv file, support dotenv local overrides, add security check for world readable files, integrate dotenv loading into main, add env file cli flags. Thirteen commits total, each one atomic and meaningful. Each commit is a story about one specific behavior being added.&lt;/p&gt;

&lt;h2&gt;The Review Process&lt;/h2&gt;

&lt;p&gt;After the implementation was done, I did a deep review of my own code. I found five issues that needed fixing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Issue 1: Test Isolation (Critical)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Tests were modifying the global environment without properly restoring it. If a test set &lt;code&gt;TEST_KEY=value&lt;/code&gt;, the cleanup would delete it, but what if that key already existed before the test ran? The cleanup wasn't restoring the original value, just removing the key. This breaks parallel test execution because tests can interfere with each other.&lt;/p&gt;

&lt;p&gt;The fix: create a helper function that saves the current state of environment variables before the test runs, then restores that exact state (including whether the variable existed at all) when the test completes. Now tests are safe to run in parallel. Committed with "add test environment isolation helper".&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Issue 2: Windows Test Bug (Critical)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One test needed to skip execution on Windows because file permission models are different. I had written the check incorrectly, reading from an environment variable instead of the language's built-in constant. This would break Windows CI. Small mistake, but important. Fixed and committed with "fix windows test skip to use runtime goos".&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Issue 3: Early Exit Timing&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The .env loading was happening even for commands like &lt;code&gt;--help&lt;/code&gt; and &lt;code&gt;--version&lt;/code&gt;, which meant users could see security warnings when just checking the version. Moved the loading to happen after those early exits. Performance optimization and better user experience. Committed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Issue 4: Error Context&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When file operations failed, errors didn't indicate which file caused the problem. Added context wrapping so errors show the specific file path. Makes debugging much easier. Committed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Issue 5: Test Coverage&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One helper function didn't have its own test. Added coverage to bring the total to 84%. Committed.&lt;/p&gt;

&lt;p&gt;Each issue got its own fix, its own verification, its own commit. The same disciplined process for fixes that I used for features.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Manual Testing
&lt;/h2&gt;

&lt;p&gt;Code works in tests, but does it work for real users? I installed my version and created a test project to verify everything worked end-to-end.&lt;/p&gt;

&lt;p&gt;I created a &lt;code&gt;.env&lt;/code&gt; file with some variables, created a &lt;code&gt;.env.local&lt;/code&gt; file that overrode some of them, and made sure the permissions were correct with &lt;code&gt;chmod 600&lt;/code&gt;. Then I added a task to my &lt;code&gt;README.md&lt;/code&gt; to verify the variables were loaded:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In README.md:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;## check-env&lt;/span&gt;

Check loaded environment variables.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;echo "Environment: $ENV"&lt;br&gt;
echo "Database: $DATABASE_URL"&lt;br&gt;
echo "API Key: ${API_KEY:0:8}..."&lt;/p&gt;

&lt;p&gt;When I ran &lt;code&gt;xc check-env&lt;/code&gt;, I saw exactly what I expected. The &lt;code&gt;xc&lt;/code&gt; command read the task from the README and executed it with the environment variables from &lt;code&gt;.env&lt;/code&gt; and &lt;code&gt;.env.local&lt;/code&gt;. The environment was set to "development" from the base .env, but the database URL and API key were overridden by .env.local. Perfect.&lt;/p&gt;
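&lt;p&gt;For reference, the two files behind a run like that could look as follows; every value here is an invented placeholder, not the ones I actually used:&lt;/p&gt;

```shell
# Base defaults — safe, shareable values:
cat > .env <<'EOF'
ENV=development
DATABASE_URL=postgres://localhost:5432/app_dev
API_KEY=base-key-00000000
EOF

# Local overrides — never committed; these win over .env:
cat > .env.local <<'EOF'
DATABASE_URL=postgres://localhost:5432/app_local
API_KEY=local-key-12345678
EOF

# Owner-only permissions so the security check stays quiet:
chmod 600 .env .env.local
```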

&lt;p&gt;I ran eight manual test scenarios: default &lt;code&gt;.env&lt;/code&gt; loading, &lt;code&gt;.env.local&lt;/code&gt; overrides, the &lt;code&gt;--no-env&lt;/code&gt; flag skipping loading, &lt;code&gt;--env-file&lt;/code&gt; loading a custom path, security warnings for world-readable files, task-level &lt;code&gt;Env:&lt;/code&gt; statements still working, &lt;code&gt;--help&lt;/code&gt; not loading &lt;code&gt;.env&lt;/code&gt; (avoiding unnecessary warnings), and a real-world multi-variable scenario. All eight passed.&lt;/p&gt;
&lt;h2&gt;
  
  
  The PR
&lt;/h2&gt;

&lt;p&gt;I submitted everything as &lt;a href="https://github.com/joerdav/xc/pull/167" rel="noopener noreferrer"&gt;PR #167&lt;/a&gt;. The changes included thirteen commits (eight for the feature, five for fixes), about 200 lines of code including tests, four unit tests plus six integration tests, 84% code coverage, and zero bugs found in manual testing.&lt;/p&gt;

&lt;p&gt;The documentation was complete with a README section showing examples, a &lt;code&gt;.env.example&lt;/code&gt; template file, the load order documented clearly, and security best practices explained. Most importantly, everything was backward compatible. Existing task-level &lt;code&gt;Env:&lt;/code&gt; statements still work exactly as before.&lt;/p&gt;
&lt;h2&gt;
  
  
  What I Learned
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;.ai/&lt;/code&gt; folder was the game-changer. Instead of writing long prompts like "Build me a .env loader with security checks and…", I could just say "Implement the loader per ADR-001". The ADR contains all the decisions. AI just implements them.&lt;/p&gt;

&lt;p&gt;I used free models throughout. No expensive API calls. The key wasn't the model, it was the context. Clear architecture docs, explicit ADRs, and well-defined tests gave AI everything it needed to generate good code.&lt;/p&gt;

&lt;p&gt;TDD kept everything honest. Every cycle followed the same pattern: write a test that defines the behavior, let AI suggest an implementation, let the test validate it works, then commit. No guessing. No "it probably works." The test proves it.&lt;/p&gt;

&lt;p&gt;Thirteen commits might seem like a lot for 200 lines of code, but each commit serves a purpose. Each one is reviewable on its own. Each one tells part of the story. Each one is revertible if needed. Git bisect works perfectly with this kind of history.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;.env.local&lt;/code&gt; override issue shows the workflow clearly. AI suggested the wrong approach first, using &lt;code&gt;Load()&lt;/code&gt; instead of &lt;code&gt;Overload()&lt;/code&gt;. But the test caught it. That's how it should work: AI suggests, test validates, human decides.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Real Value
&lt;/h2&gt;

&lt;p&gt;This isn't about "AI wrote code for me." It's about process, collaboration, and documentation.&lt;/p&gt;

&lt;p&gt;The process matters. Structured context in the &lt;code&gt;.ai/&lt;/code&gt; folder. Design decisions captured in ADRs. TDD discipline with tests written first. Small commits with one change at a time. This is how you ship production code.&lt;/p&gt;

&lt;p&gt;The collaboration matters. AI acts as a pair programmer, not a magic wand. Tests validate AI suggestions. Human makes the design decisions. Both contribute to better code.&lt;/p&gt;

&lt;p&gt;The documentation matters. Future contributors now have context about the project, the architecture, and why decisions were made the way they were. The implementation plan is explicit. The tests document the expected behavior. Six months from now, none of this is lost.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The compounding matters most.&lt;/strong&gt; You build the foundation once. Every feature after that leverages it. The second feature doesn't need new &lt;code&gt;context.md&lt;/code&gt; or &lt;code&gt;rules/&lt;/code&gt; files, just a new ADR and task spec. The third feature is even faster. The documentation evolves as the codebase evolves. New ADRs when architecture changes. Updates to &lt;code&gt;context.md&lt;/code&gt; when understanding deepens. Updates to &lt;code&gt;rules/&lt;/code&gt; when patterns emerge. The investment pays dividends forever.&lt;/p&gt;
&lt;h2&gt;
  
  
  Try It Yourself
&lt;/h2&gt;

&lt;p&gt;Want to replicate this process? Pick a project and create the &lt;code&gt;.ai/&lt;/code&gt; structure right in your working directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; .ai/&lt;span class="o"&gt;{&lt;/span&gt;architecture/adrs,rules,tasks&lt;span class="o"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use the &lt;a href="https://github.com/sudoish/xc/tree/ai-context/.ai" rel="noopener noreferrer"&gt;template structure&lt;/a&gt; as a guide. Build the foundation files once (&lt;code&gt;context.md&lt;/code&gt;, &lt;code&gt;architecture/decisions.md&lt;/code&gt;, &lt;code&gt;rules/&lt;/code&gt;), then for each feature add a new ADR and task spec. The &lt;code&gt;.ai/&lt;/code&gt; folder lives alongside your code and evolves with it—commit it with your changes so it stays in sync.&lt;/p&gt;

&lt;p&gt;Direct AI through TDD: "Implement test 1 from the spec." AI writes the test and implementation. "Run it." Test passes. "Commit." Repeat. When done, have AI review its own work, confirm findings, direct fixes. Then do your final review for strategic correctness.&lt;/p&gt;

&lt;p&gt;Each feature adds a new ADR and task spec to the &lt;code&gt;.ai/&lt;/code&gt; folder. The foundation files rarely change. The documentation compounds.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Full Timeline
&lt;/h2&gt;

&lt;p&gt;I spent about 45 minutes on documentation upfront: exploring the codebase with AI, questioning its understanding, writing the ADRs, rules, and context. This sounds like a lot, but it's a one-time investment. The &lt;code&gt;context.md&lt;/code&gt;, &lt;code&gt;architecture/decisions.md&lt;/code&gt;, and &lt;code&gt;rules/&lt;/code&gt; files I wrote for this first feature will be reused for every future feature. I'll only spend 10-15 minutes on feature-specific docs (ADR + task spec) for the next feature.&lt;/p&gt;

&lt;p&gt;The implementation took 40 minutes: directing AI through TDD cycles, one test at a time, validating each step. Integration of CLI flags and wiring into main.go took 15 minutes of the same directed approach. Documentation like README updates and examples took another 15 minutes. Manual testing took 15 minutes: I installed the binary and ran real scenarios. The review process took 30 minutes: first AI reviewed its own code (found 5 issues), then I reviewed the fixes, then I did a final deep review on GitHub.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; AI wrote 100% of the code. I wrote 100% of the strategy, asked 100% of the questions, and made 100% of the decisions. I reviewed every line, but I didn't type any of them. Total time from fork to production-ready PR was about 2.5 hours.&lt;/p&gt;

&lt;p&gt;If I added a second feature tomorrow, it would take less time. By the third feature even faster. The documentation compounds.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;p&gt;The complete &lt;code&gt;.ai/&lt;/code&gt; structure and documentation is at &lt;a href="https://github.com/sudoish/xc/tree/ai-context/.ai" rel="noopener noreferrer"&gt;github.com/sudoish/xc/tree/ai-context/.ai&lt;/a&gt;. The pull request with all code and tests is at &lt;a href="https://github.com/joerdav/xc/pull/167" rel="noopener noreferrer"&gt;github.com/joerdav/xc/pull/167&lt;/a&gt;. The working fork is at &lt;a href="https://github.com/sudoish/xc" rel="noopener noreferrer"&gt;github.com/sudoish/xc&lt;/a&gt;. The original issue is &lt;a href="https://github.com/joerdav/xc/issues/162" rel="noopener noreferrer"&gt;joerdav/xc#162&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The &lt;code&gt;.ai/&lt;/code&gt; folder lives in a separate branch in this example only because I wanted to reference it for this article without including it in the PR to the upstream project. In your own work, keep the &lt;code&gt;.ai/&lt;/code&gt; folder in your main working branch and commit it with your changes—it should evolve alongside your code, not separately.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Skills That Actually Matter
&lt;/h2&gt;

&lt;p&gt;AI wrote every line of code. I read every line, but I didn't write any of them. The feature is production-ready because I focused on what actually matters.&lt;/p&gt;

&lt;p&gt;The four skills transformed from framework to practice: rapid onboarding through questioning AI and building structured context, strategic documentation through ADRs written before code, systematic planning through testable specs, and iterative review through AI self-checks followed by strategic verification.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;.ai/&lt;/code&gt; folder, the ADRs, the task specs, the review cycles—they all worked exactly as planned. The result: 84% coverage, zero bugs, 2.5 hours from fork to production-ready PR.&lt;/p&gt;

&lt;p&gt;These skills work with free models. They work on unfamiliar codebases. They separate developers who use AI effectively from those who just generate code and hope it works.&lt;/p&gt;

&lt;p&gt;The magic isn't in the AI. It's in the process. And the process is this: &lt;strong&gt;you think, you plan, you direct, you review. AI executes.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What This Means for Your Career
&lt;/h2&gt;

&lt;p&gt;The developer who can onboard to unfamiliar codebases fast, document decisions strategically, plan systematically, and execute with confidence is far more valuable than the developer who can write code quickly. Because here’s the reality: &lt;strong&gt;code writing is not the bottleneck. It never was. AI just made that a lot more evident.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I shipped a production-ready feature to an unfamiliar codebase in 2.5 hours without writing a single line of code. The bottleneck wasn't typing. It was understanding, planning, and judging. Those are the skills that matter.&lt;/p&gt;

&lt;p&gt;AI tools are getting better at code generation every month. They're not getting better at understanding your codebase's architecture, making strategic trade-offs, or ensuring production quality. Those skills are still yours. Those skills are what companies pay for.&lt;/p&gt;

&lt;p&gt;The question isn't "Will AI replace developers?" It's "Which developers will thrive when everyone has access to AI?" The answer is the ones who master onboarding, documentation, planning, and review. The ones who understand that their job is no longer to write code—it's to think clearly, plan thoroughly, and judge correctly.&lt;/p&gt;

&lt;p&gt;This is the junior dev role being redefined. It's not about writing boilerplate anymore. That work is done. It's about learning systems fast, making good decisions, directing execution, and ensuring quality. If you can do that, you're not competing with AI. You're orchestrating it.&lt;/p&gt;

&lt;p&gt;Writing code is optional. Reading it, understanding it, and judging it—those aren't.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This post documents a real open source contribution made using AI as a pair programmer. All code, tests, documentation, and the complete &lt;code&gt;.ai/&lt;/code&gt; folder structure are publicly available in the &lt;a href="https://github.com/sudoish/xc/tree/ai-context/.ai" rel="noopener noreferrer"&gt;sudoish/xc fork&lt;/a&gt; for anyone who wants to replicate this approach.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://sudoish.com/ai-driven-development-xc-dotenv/" rel="noopener noreferrer"&gt;You Think, AI Executes: The Skills That Actually Matter&lt;/a&gt; appeared first on &lt;a href="https://sudoish.com" rel="noopener noreferrer"&gt;sudoish&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>uncategorized</category>
      <category>ai</category>
      <category>developerskills</category>
      <category>developmentprocess</category>
    </item>
    <item>
      <title>How We Made It Nearly Impossible to Become a Developer</title>
      <dc:creator>Thiago Pacheco</dc:creator>
      <pubDate>Sun, 29 Mar 2026 16:47:08 +0000</pubDate>
      <link>https://forem.com/pacheco/how-we-made-it-nearly-impossible-to-become-a-developer-4oog</link>
      <guid>https://forem.com/pacheco/how-we-made-it-nearly-impossible-to-become-a-developer-4oog</guid>
      <description>&lt;p&gt;I once interviewed a senior software engineer. Almost 10 years of experience. Proven track record of delivery. Solid industry knowledge. The kind of person you’d want on your team without a second thought.&lt;/p&gt;

&lt;p&gt;They completed the technical challenge. Not flawlessly — there were considerations, trade-offs they made that weren’t all correct. But when we reviewed their decisions together, the reasoning was sound. They showed commitment to their choices and could articulate why they went the direction they did. Some answers were vague in spots, some mistakes were real, but nothing that wouldn’t get corrected in the first week on the job with actual codebase context. The kind of gaps that disappear when you’re working on real problems instead of performing in a vacuum.&lt;/p&gt;

&lt;p&gt;We didn’t hire them.&lt;/p&gt;

&lt;p&gt;Not because I didn’t want to. I did. But the compounded small mistakes added up under the scoring rubric, and the final grade wasn’t strong enough to sell to the hiring managers. The rules of the process made a good engineer look like a bad candidate.&lt;/p&gt;

&lt;p&gt;And I get it — those rules exist to keep the bar high, to ensure we only hire top talent. At least, that’s what every company believes. But what I’ve seen throughout my career, on both sides of the table, is that the process doesn’t filter for the best engineers. It filters for the best interviewers. And we lose great colleagues — dedicated, talented people — because they didn’t fit the rule book.&lt;/p&gt;

&lt;p&gt;That was a senior engineer with a decade of experience. Now imagine you’re a junior with none.&lt;/p&gt;

&lt;p&gt;The software industry has a hiring problem. Not the kind where we can’t find people — the kind where we’ve made it nearly impossible for new people to get in.&lt;/p&gt;

&lt;p&gt;Entry-level developer hiring has collapsed — some reports show drops of 60% or more in the past year, with actual hires into junior roles falling by as much as 73%. CS graduates are sitting at 6.1% unemployment according to the Federal Reserve Bank of New York — more than double the overall national rate. And the majority of tech leaders say they plan to reduce entry-level hiring even further while increasing AI investment.&lt;/p&gt;

&lt;p&gt;But the pipeline didn’t break overnight. It’s been cracking for years. AI just kicked the door in.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Interview Problem Nobody Wants to Fix
&lt;/h2&gt;

&lt;p&gt;The interview process was broken long before AI showed up. And I’ll say what a lot of people in the industry think but won’t say out loud: &lt;strong&gt;the standard software engineering interview process is unrealistic, unnecessarily demanding, and a terrible predictor of on-the-job performance.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Think about what we ask candidates to do. Solve algorithmic puzzles on a whiteboard or shared screen. Explain their thought process in real time while simultaneously figuring out the solution. Design systems on the spot for problems they’ve never encountered in that specific framing. All while someone watches and judges every hesitation.&lt;/p&gt;

&lt;p&gt;Here’s the thing — that’s not how software development works. Not even close.&lt;/p&gt;

&lt;p&gt;Real engineering is focused, deep work. It’s sitting alone with a problem for hours, researching approaches, trying things, breaking things, iterating. It’s the exact opposite of performing under observation with a timer running. Most developers do their best work when they’re left alone to think. Asking them to showcase and explain how they’d deliver features while they’re still processing the problem doesn’t test their engineering ability. It tests their ability to perform under artificial pressure.&lt;/p&gt;

&lt;p&gt;And yet, this is how we gatekeep an entire profession.&lt;/p&gt;

&lt;h2&gt;
  
  
  When Even a Principal Engineer Can’t Pass
&lt;/h2&gt;

&lt;p&gt;I have a close friend who’s a principal engineer. He’s delivered massive projects — systems that required intense complexity, heavy reliability, and serious scalability. The kind of work that keeps companies running. I’ve watched him turn down offers from companies that couldn’t skip the whiteboard stage. His track record speaks for itself, but he knows the process doesn’t.&lt;/p&gt;

&lt;p&gt;He straight up refuses to do technical interviews. Hates the process. Never performed well in them.&lt;/p&gt;

&lt;p&gt;But that wasn’t always the case. Early in his career, he had no choice. He went through the motions, sat through the whiteboard sessions, stumbled through the live coding exercises. And that’s exactly how he learned he was terrible at it. Not terrible at engineering — terrible at the performance.&lt;/p&gt;

&lt;p&gt;If a principal engineer with years of proven delivery struggles with this process, what does that tell us? It tells us we’re measuring the wrong thing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Culture fit, communication, problem-solving mindset, willingness to learn — these should carry far more weight than whether someone can implement a binary tree traversal from memory while a stranger watches.&lt;/strong&gt; But the industry has standardized around LeetCode-style assessments like they’re some universal truth, and we’ve collectively decided that this is just how it works.&lt;/p&gt;

&lt;p&gt;It’s not. It’s a choice. And it’s a bad one.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Wrong Skills at the Worst Time
&lt;/h2&gt;

&lt;p&gt;Here’s where it gets really damaging for juniors specifically.&lt;/p&gt;

&lt;p&gt;When you’re just starting your career, you have limited time, limited money, and unlimited pressure. The message the industry sends you is clear: grind LeetCode. Master data structures and algorithms. Practice system design for systems you’ve never built. Get good at performing.&lt;/p&gt;

&lt;p&gt;So that’s what people do. They spend months — sometimes six months or more — focused entirely on interview preparation instead of actually building things, learning real-world patterns, or developing the engineering intuition that makes someone genuinely valuable.&lt;/p&gt;

&lt;p&gt;We’re literally telling the next generation of developers to optimize for the wrong skills from day one. And then we wonder why new hires can’t navigate a real codebase.&lt;/p&gt;

&lt;p&gt;The industry has created a perverse incentive: &lt;strong&gt;becoming good at getting hired and becoming good at the job are two completely different skill paths.&lt;/strong&gt; And for juniors who are just figuring out what software engineering even is, being forced down the interview prep path first is actively harmful to their development.&lt;/p&gt;

&lt;h2&gt;
  
  
  Now Add AI to the Mix
&lt;/h2&gt;

&lt;p&gt;As if the interview gauntlet wasn’t enough, the industry just added a new requirement: you need to be proficient with AI tools.&lt;/p&gt;

&lt;p&gt;On the surface, this makes sense. AI-assisted development is becoming standard practice. Companies want developers who can leverage these tools effectively. Fair enough.&lt;/p&gt;

&lt;p&gt;But think about what we’re actually asking.&lt;/p&gt;

&lt;p&gt;Yes, some AI coding tools have free tiers now. GitHub Copilot has one. Cursor has a free plan. But if you’re just starting out — fresh from school, finishing a bootcamp, or self-teaching — do you even know that? The AI tooling landscape is an overwhelming mess of options, hype, and conflicting advice. Experienced developers struggle to keep up with what’s worth using. How is someone who’s still learning what a REST API is supposed to navigate that?&lt;/p&gt;

&lt;p&gt;And the free tiers only get you so far. The tools that companies actually expect proficiency in — Copilot Pro, Cursor Pro, Claude Pro — cost $10 to $20 per month each. If you want a serious AI-assisted workflow, you’re looking at $30-50/month minimum. That might not sound like much to someone employed, but when you’re unemployed, every dollar matters. Asking someone without income to pay for premium AI tools so they can develop the skills needed to get a job is a catch-22.&lt;/p&gt;

&lt;p&gt;There are too many unknowns when you’re starting out. Every conversation about AI in development assumes a baseline of knowledge and context that juniors simply don’t have yet. And instead of helping them build that foundation, we’re adding it to the list of things they need to figure out on their own before we’ll even consider hiring them.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Senior Shortage Nobody Sees Coming
&lt;/h2&gt;

&lt;p&gt;Here’s the part that should terrify every tech leader who’s currently celebrating their AI-powered lean engineering team: &lt;strong&gt;you’re eating your seed corn.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It takes roughly 7 to 10 years to develop a senior engineer. Not just someone with “senior” in their title — someone who can architect systems, mentor teams, make judgment calls under uncertainty, and understand the business context of technical decisions. That kind of expertise doesn’t come from tutorials or AI tools. It comes from years of making mistakes, shipping real products, debugging production incidents at 2 AM, and slowly building the intuition that separates an engineer from someone who writes code.&lt;/p&gt;

&lt;p&gt;If we’re not hiring juniors now, we won’t have mid-level engineers in 3-5 years. And we won’t have seniors in 7-10 years.&lt;/p&gt;

&lt;p&gt;The Stanford Digital Economy Lab data already shows it: employment for software developers aged 22-25 has dropped roughly 20% since late 2022, while developers over 26 remain stable. The two groups tracked perfectly until ChatGPT launched, then diverged sharply. We’re watching the pipeline dry up in real time.&lt;/p&gt;

&lt;p&gt;And here’s the irony that makes it worse: companies are cutting juniors because they believe AI replaces what juniors did. But the data tells a different story. Google’s DORA 2024 report found that a 25% increase in AI adoption translated to just a 2% productivity gain — while executives at those same companies were telling their boards that AI had boosted output by 25%. The gap between measured reality and executive perception is staggering, and companies are making structural hiring decisions based on that perception, not the data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Juniors were never just “cheap labor” who wrote boilerplate.&lt;/strong&gt; They stress-tested documentation. They exposed hidden assumptions in systems. They forced seniors to articulate knowledge that would otherwise stay implicit. They built institutional memory.&lt;/p&gt;

&lt;p&gt;A senior with Copilot can write code faster, sure — but faster code was never the bottleneck.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Squeeze
&lt;/h2&gt;

&lt;p&gt;So let’s put it all together. If you’re a junior developer in 2026, here’s your reality:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;There are barely any jobs for you.&lt;/strong&gt; Entry-level hiring has collapsed. Companies want seniors only.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The jobs that exist demand more than ever.&lt;/strong&gt; The few junior roles left aren’t really junior anymore — they want 2-3 years of experience, AI proficiency, and system design knowledge.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The interview process is designed against you.&lt;/strong&gt; Months of LeetCode prep that teaches you nothing about real engineering.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;You need tools you might not know exist or can’t afford.&lt;/strong&gt; AI proficiency is expected, but the landscape is overwhelming and the good stuff costs money you don’t have.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The anxiety is crushing.&lt;/strong&gt; The pressure to be the best, to stand out in a market with fewer openings and more candidates, is driving people out before they even start.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;And the result? People are giving up. Not because they can’t code. Not because they’re not smart enough. Because the path from “I want to be a software developer” to actually being one has become so hostile, so expensive, and so demoralizing that it’s not worth it anymore.&lt;/p&gt;

&lt;p&gt;Can you blame them?&lt;/p&gt;

&lt;h2&gt;
  
  
  What Actually Needs to Change
&lt;/h2&gt;

&lt;p&gt;I don’t have a clean five-point solution. Anyone who does is selling something. But I know what direction we should be moving.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rethink interviews from the ground up.&lt;/strong&gt; Pair programming sessions, take-home projects with reasonable time limits, portfolio reviews, trial periods — there are better ways to assess ability than making people perform algorithms under pressure. If your interview process can’t distinguish between a great engineer who interviews poorly and a mediocre one who interviews well, the process is broken. Not the candidate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Invest in juniors as a strategic decision, not charity.&lt;/strong&gt; The companies that hire and develop juniors now will have the experienced engineers everyone else is desperate for in 2030. A handful of companies are already doubling down on junior hiring. They’re not being generous — they’re playing the long game while everyone else optimizes for this quarter.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stop pretending AI replaced the junior role.&lt;/strong&gt; It replaced the boilerplate. The questions juniors ask, the assumptions they challenge, the documentation they stress-test — that’s not automatable. If your team stopped growing because you thought Copilot could replace a curious 23-year-old, you’re going to feel that decision in five years.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Real Question
&lt;/h2&gt;

&lt;p&gt;The software industry has spent the last two decades complaining about a talent shortage. And now, faced with the largest pool of motivated CS graduates and career switchers in history, we’ve decided the best strategy is to lock the door and let AI handle it.&lt;/p&gt;

&lt;p&gt;If senior engineers can’t pass technical interviews, if junior roles demand senior skills, if the tools you need are buried in a landscape designed for people who already know what they’re doing, and if the entire process optimizes for performance over competence — then the pipeline isn’t just broken. We broke it. Deliberately, through a thousand small decisions that each seemed reasonable in isolation but collectively created a system that’s eating its own future.&lt;/p&gt;

&lt;p&gt;The question isn’t whether this will catch up with us. It’s whether we’ll have anyone left in the pipeline to fix it when it does.&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://sudoish.com/junior-developer-pipeline-broken/" rel="noopener noreferrer"&gt;How We Made It Nearly Impossible to Become a Developer&lt;/a&gt; appeared first on &lt;a href="https://sudoish.com" rel="noopener noreferrer"&gt;sudoish&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>career</category>
      <category>careergrowth</category>
    </item>
    <item>
      <title>The AI Productivity Lie Nobody Wants to Admit</title>
      <dc:creator>Thiago Pacheco</dc:creator>
      <pubDate>Sat, 28 Mar 2026 12:37:32 +0000</pubDate>
      <link>https://forem.com/pacheco/the-ai-productivity-lie-nobody-wants-to-admit-4a6o</link>
      <guid>https://forem.com/pacheco/the-ai-productivity-lie-nobody-wants-to-admit-4a6o</guid>
      <description>&lt;p&gt;I’ve been producing bad code. And it’s not because I forgot how to code.&lt;/p&gt;

&lt;p&gt;I’ve tried every workflow. Terminal agents, IDE copilots, full vibe coding, augmented coding. I keep exploring because that’s what I do — evaluate, keep what works, move on from what doesn’t.&lt;/p&gt;

&lt;p&gt;But here’s where I am right now: most of the time, it’s still more reliable for me to write the code myself than to let the agent do it.&lt;/p&gt;

&lt;p&gt;When the AI writes it, yes — it’s faster sometimes. But then I review it. I find things. I correct things. And suddenly I’m in a loop where I’m either spending more time than if I just did it myself, or the same time doing a more tedious version of the work. I’m not writing code anymore. I’m auditing code I didn’t write and don’t fully trust.&lt;/p&gt;

&lt;p&gt;And I’m starting to wonder what that’s doing to my engineering skills. Not because AI replaced them. Because the constant pressure to delegate everything is pulling me away from the deep thinking that built those skills in the first place.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Pressure Nobody Talks About
&lt;/h2&gt;

&lt;p&gt;The expectation to be dramatically more productive with AI is real. It’s coming from management, from Twitter, from inside your own head. Every demo makes it look like everyone else figured it out and you’re falling behind.&lt;/p&gt;

&lt;p&gt;And yeah — AI can be a huge boost. But only if you have perfect project structure, perfect product context, perfect documentation.&lt;/p&gt;

&lt;p&gt;That doesn’t exist. Not in any real codebase I’ve ever worked on.&lt;/p&gt;

&lt;p&gt;You know what exists? Legacy code. Tech debt. Patterns that were “temporary” three years ago. Business logic that lives in someone’s head and nowhere else.&lt;/p&gt;

&lt;p&gt;When you point an AI agent at that, it doesn’t fix the problems. It copies them. It amplifies them. It confidently reproduces your worst patterns at scale.&lt;/p&gt;

&lt;p&gt;So now you’re not just dealing with tech debt. You’re dealing with AI-generated tech debt that looks clean because the agent formatted it nicely.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Data That Should Make You Uncomfortable
&lt;/h2&gt;

&lt;p&gt;Here’s the research that finally made me feel less crazy about all of this. Each study hits harder than the last.&lt;/p&gt;

&lt;h3&gt;
  
  
  You’re Not Even Faster
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/" rel="noopener noreferrer"&gt;METR&lt;/a&gt;, a nonprofit research organization, ran what might be the most rigorous study on this topic to date. They took 16 experienced open-source developers — people who maintain large repositories, averaging 22,000+ stars and over a million lines of code — and had them work on real issues in their own codebases. Real bugs, real features, real refactors. Not toy problems.&lt;/p&gt;

&lt;p&gt;Half the time they could use AI. Half the time they couldn’t.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The result? When developers used AI tools, they took 19% longer to complete their tasks.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Not faster. Slower.&lt;/p&gt;

&lt;p&gt;But here’s the part that should genuinely unsettle you: before the study, these developers predicted AI would make them 24% faster. After using it — after actually experiencing the slowdown — they still believed AI had made them 20% faster.&lt;/p&gt;

&lt;p&gt;19% slower. Felt 20% faster. &lt;strong&gt;A nearly 40-point gap between perception and reality.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is what I call the speed mirage. You feel like you’re flying. The data says you’re walking backwards. And you can’t even tell.&lt;/p&gt;

&lt;h3&gt;
  
  
  You Understand Less of What You Ship
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.anthropic.com/research/AI-assistance-coding-skills" rel="noopener noreferrer"&gt;Anthropic&lt;/a&gt;, the company that makes Claude, ran a randomized controlled trial on their own tool. Developers using AI assistance scored &lt;strong&gt;17% lower on comprehension tests&lt;/strong&gt; compared to developers who coded manually. The AI group finished slightly faster, but that speed difference wasn’t even statistically significant.&lt;/p&gt;

&lt;p&gt;Marginal speed gain. Real understanding loss.&lt;/p&gt;

&lt;p&gt;It gets worse. The developers who delegated code generation to AI scored below 40% on comprehension. The ones who used AI for conceptual questions — asking “why” and “how does this work” — scored above 65%.&lt;/p&gt;

&lt;p&gt;Same tool. Completely different outcomes depending on how you used it. And the biggest gap was in debugging — the skill you need most when things break in production.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Industry Already Knows
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://survey.stackoverflow.co/2025/ai" rel="noopener noreferrer"&gt;Stack Overflow’s 2025 Developer Survey&lt;/a&gt;: 84% of developers use or plan to use AI tools, but only 33% trust the output. Down from 43% the year before.&lt;/p&gt;

&lt;p&gt;Adoption up. Trust down.&lt;/p&gt;

&lt;p&gt;So we’re not faster. We understand less. And we don’t even trust what we ship. But we keep using it because everyone else seems to have figured it out.&lt;/p&gt;




&lt;h2&gt;
  
  
  Throughput vs. Confidence
&lt;/h2&gt;

&lt;p&gt;Let me name the thing clearly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The industry is optimizing for throughput when it should be optimizing for confidence.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Throughput is lines of code generated. PRs opened. Features “shipped.” It’s the metric that looks good on a dashboard and falls apart in production.&lt;/p&gt;

&lt;p&gt;Confidence is different. Do I understand what this code does? Do I trust it handles the edge cases? Can I debug it at 2 AM when something breaks?&lt;/p&gt;

&lt;p&gt;Vibe coding optimizes for throughput. You get a dopamine spike. You feel productive. And then you spend the rest of the day cleaning up after the machine.&lt;/p&gt;

&lt;p&gt;I’m not anti-AI. I use it every day. It’s incredible for researching tradeoffs, validating ideas, catching things I missed in review. When I use AI as a thinking partner, it genuinely makes me better.&lt;/p&gt;

&lt;p&gt;But when I use it as a coding replacement, it makes my output worse. And that’s the gap the industry isn’t willing to talk about.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Confidence Boundary
&lt;/h2&gt;

&lt;p&gt;So where does this leave us?&lt;/p&gt;

&lt;p&gt;I don’t have the perfect workflow. I’m not going to pretend I do. But I’ve been paying attention to what actually works, and the pattern is consistent.&lt;/p&gt;

&lt;p&gt;The developers getting the best results from AI aren’t the ones who figured out the perfect prompt. They’re the ones who figured out &lt;strong&gt;what to delegate and what to keep.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I’ve been calling this the confidence boundary.&lt;/p&gt;

&lt;p&gt;Here’s a real example. I had a feature to build recently. Instead of just opening the terminal and prompting the agent to do it, I stopped. I wrote the spec first. A clear, detailed explanation of what needed to be accomplished. The edge cases that the implementation had to survive. The constraints. The things I explicitly didn’t want.&lt;/p&gt;

&lt;p&gt;Then I handed that to the agent and let it implement against my spec.&lt;/p&gt;

&lt;p&gt;Because I did the thinking upfront, reviewing the output took minutes instead of hours. I knew exactly what should be there and what shouldn’t.&lt;/p&gt;

&lt;p&gt;But here’s the thing nobody tells you — and this is where it gets uncomfortable.&lt;/p&gt;

&lt;p&gt;To get a good result from the agent, you have to be &lt;em&gt;very&lt;/em&gt; specific. You’re writing a detailed spec, thinking through edge cases, defining constraints. And at some point you realize: &lt;strong&gt;for certain features, you’ve already done most of the hard work.&lt;/strong&gt; The thinking &lt;em&gt;is&lt;/em&gt; the work.&lt;/p&gt;

&lt;p&gt;At that point, it’s genuinely faster to just write the code yourself and use the agent as a pair reviewer to make sure you’re on the right track.&lt;/p&gt;

&lt;p&gt;Other times — boilerplate, scaffolding, repetitive patterns, implementations where the spec is clear and the risk is low — full delegation is absolutely the move. Hand the agent the guidance and let it run.&lt;/p&gt;

&lt;p&gt;The real skill isn’t prompting. It’s learning what to delegate and what to keep. And that judgment comes from understanding your codebase, the complexity of the task, and honestly — how much you trust the output for that specific context.&lt;/p&gt;

&lt;p&gt;The messier your codebase — legacy code, real tech debt, patterns with history the agent will never know — the more that judgment matters. The tooling is irrelevant. Neovim, Cursor, whatever. &lt;strong&gt;The bottleneck is you.&lt;/strong&gt; Whether you know where your confidence boundary is and whether you’re honest about it.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Bet I’m Making
&lt;/h2&gt;

&lt;p&gt;If you feel like AI is making you more productive but less confident in what you ship — you’re not falling behind. You’re paying attention.&lt;/p&gt;

&lt;p&gt;If your engineering skills feel soft because you keep delegating instead of thinking — that’s not paranoia. The research says it’s real.&lt;/p&gt;

&lt;p&gt;The speed mirage is powerful. It feels like progress. The dashboards say it’s progress. But if you can’t explain what you shipped, debug it when it breaks, or trust it handles the edge cases — that’s not progress. That’s debt with a nice commit message.&lt;/p&gt;

&lt;p&gt;I’m not quitting AI. I’m quitting the lie that it makes everything faster.&lt;/p&gt;

&lt;p&gt;The developers who are going to thrive aren’t the ones who ship more code. They’re the ones who learned what to keep and what to let go. Who built the judgment for when to delegate and when to do the work themselves.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Confidence over throughput.&lt;/strong&gt; That’s the bet I’m making.&lt;/p&gt;




&lt;h2&gt;
  
  
  Sources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/" rel="noopener noreferrer"&gt;METR Study — AI slows experienced developers by 19%&lt;/a&gt; (&lt;a href="https://arxiv.org/abs/2507.09089" rel="noopener noreferrer"&gt;arXiv paper&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.anthropic.com/research/AI-assistance-coding-skills" rel="noopener noreferrer"&gt;Anthropic Study — 17% comprehension loss with AI assistance&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://survey.stackoverflow.co/2025/ai" rel="noopener noreferrer"&gt;Stack Overflow 2025 Developer Survey — Trust declining&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://addyo.substack.com/p/avoiding-skill-atrophy-in-the-age" rel="noopener noreferrer"&gt;Addy Osmani on Skill Atrophy&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.turingcollege.com/blog/agentic-engineering-vs-vibe-coding" rel="noopener noreferrer"&gt;Karpathy on Agentic Engineering&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.thoughtworks.com/en-us/insights/blog/agile-engineering-practices/spec-driven-development-unpacking-2025-new-engineering-practices" rel="noopener noreferrer"&gt;Thoughtworks on Spec-Driven Development&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The post &lt;a href="https://sudoish.com/the-ai-productivity-lie-nobody-wants-to-admit-2/" rel="noopener noreferrer"&gt;The AI Productivity Lie Nobody Wants to Admit&lt;/a&gt; appeared first on &lt;a href="https://sudoish.com" rel="noopener noreferrer"&gt;sudoish&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>developmentbestpract</category>
      <category>aicoding</category>
    </item>
    <item>
      <title>A Tale of Accidental Architecture: How 50 Lines Became A Black Friday Disaster</title>
      <dc:creator>Thiago Pacheco</dc:creator>
      <pubDate>Fri, 27 Feb 2026 04:26:03 +0000</pubDate>
      <link>https://forem.com/pacheco/a-tale-of-accidental-architecture-how-50-lines-became-a-black-friday-disaster-25cc</link>
      <guid>https://forem.com/pacheco/a-tale-of-accidental-architecture-how-50-lines-became-a-black-friday-disaster-25cc</guid>
      <description>&lt;p&gt;Let me tell you about Sarah.&lt;/p&gt;

&lt;p&gt;This is a fictional story. But I bet you’ll recognize it.&lt;/p&gt;

&lt;p&gt;I’ve seen this pattern play out across different companies, different teams, different tech stacks.  &lt;strong&gt;The details change. The progression doesn’t.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Week 1: The Perfect Start
&lt;/h2&gt;

&lt;p&gt;Sarah’s building a notification system for an e-commerce platform.&lt;/p&gt;

&lt;p&gt;First requirement: send an email when someone places an order.&lt;/p&gt;

&lt;p&gt;Simple. She writes one function. Webhook comes in, format the email, hit SMTP, done.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The whole thing is maybe 50 lines.&lt;/strong&gt;  It works perfectly. Code review approves it. It ships.&lt;/p&gt;

&lt;p&gt;Sarah’s thinking: &lt;em&gt;“It’s just one notification type. I’ll add proper abstraction when we actually need it.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You’ve thought this too. So have I.&lt;/p&gt;

&lt;p&gt;Nothing wrong with it. Week 1, this is the right call.&lt;/p&gt;

&lt;h2&gt;
  
  
  Week 3: The First Copy-Paste
&lt;/h2&gt;

&lt;p&gt;Product team loves the email notifications. Now they want SMS for order shipments.&lt;/p&gt;

&lt;p&gt;Mike picks up the ticket.&lt;/p&gt;

&lt;p&gt;He opens Sarah’s code. Sees the pattern.  &lt;strong&gt;Makes sense.&lt;/strong&gt;  He follows it.&lt;/p&gt;

&lt;p&gt;New handler. Receives the shipment webhook. Formats the SMS message. Connects to Twilio. Sends it.&lt;/p&gt;

&lt;p&gt;He copies some of Sarah’s email formatting logic because customers should see consistent information. Has to adjust it for the 160-character SMS limit, but the core logic is the same.&lt;/p&gt;

&lt;p&gt;Mike’s thinking: &lt;em&gt;“There’s some duplication with the email code, but SMS is different enough that abstracting it would be premature. It’s only two notification types.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deadline is tomorrow.&lt;/strong&gt;  This ships.&lt;/p&gt;

&lt;p&gt;Still nothing catastrophically wrong here. Two types, small duplication, it’s manageable.&lt;/p&gt;

&lt;p&gt;Right?&lt;/p&gt;

&lt;h2&gt;
  
  
  Week 5: User Preferences
&lt;/h2&gt;

&lt;p&gt;Customers start complaining.&lt;/p&gt;

&lt;p&gt;“I don’t want SMS notifications.”&lt;/p&gt;

&lt;p&gt;“Why am I getting emails for every status change?”&lt;/p&gt;

&lt;p&gt;Sarah adds user preferences. Creates a database table. Updates her email handler to check if the user wants that particular notification before sending.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The handler triples in size.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Query the database. Check multiple preference flags. Handle the case where preferences don’t exist yet. Default values. Edge cases.&lt;/p&gt;
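&lt;p&gt;The drift is easy to sketch. None of this is Sarah’s actual code; it’s a minimal Python illustration of the inline preference check every handler ends up re-implementing, where the &lt;code&gt;db&lt;/code&gt; lookup and the flag names are hypothetical:&lt;/p&gt;

```python
# Hypothetical sketch: the preference check that quietly triples a handler.
# `db` is any mapping of user_id to a row of boolean flags.
DEFAULT_PREFS = {"order_email": True, "status_email": False}

def should_send(db, user_id, pref_key):
    """Decide whether to notify, covering the rows that don't exist yet."""
    row = db.get(user_id)                      # None: user predates the table
    if row is None or pref_key not in row:     # missing row, or a brand-new flag
        return DEFAULT_PREFS.get(pref_key, True)   # fall back to defaults
    return bool(row[pref_key])
```

&lt;p&gt;Multiply that by every handler, each with slightly different defaults, and you get the inconsistencies the later weeks pay for.&lt;/p&gt;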

&lt;p&gt;Sarah’s thinking: &lt;em&gt;“This is getting messy, but the deadline is tomorrow and this works. I’ll refactor it next sprint.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I cannot tell you how many times I’ve heard “next sprint.”&lt;/p&gt;

&lt;p&gt;(Spoiler: next sprint never comes.)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqhzgu3hiae5nlit5zn7l.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqhzgu3hiae5nlit5zn7l.gif" alt="This is fine dog meme - developer ignoring growing problems" width="480" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Week 7: Two Ways to Do Everything
&lt;/h2&gt;

&lt;p&gt;Mike needs to add notifications for order cancellations and delivery confirmations.&lt;/p&gt;

&lt;p&gt;He realizes hardcoding email bodies isn’t going to scale.&lt;/p&gt;

&lt;p&gt;So he builds a template system. Creates a templates directory. Writes a simple renderer. Updates his handlers to load templates, populate data, send.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It’s actually pretty clean.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Meanwhile, Sarah’s handlers still use string formatting. She doesn’t know Mike built a template system. Mike didn’t announce it in Slack. It just… exists now.&lt;/p&gt;

&lt;p&gt;The codebase now has  &lt;strong&gt;two different ways of generating notification content.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Sarah finds out later. Thinks: &lt;em&gt;“I should probably switch to Mike’s templates… but my code is working and I’m slammed with other features.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;And she is. Three new features this sprint. No time to refactor working code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Week 9: The Third Approach
&lt;/h2&gt;

&lt;p&gt;Emma joins the team.&lt;/p&gt;

&lt;p&gt;First task: add Slack notifications for the support team when high-value orders come in.&lt;/p&gt;

&lt;p&gt;She opens the notification code. Finds Sarah’s inline approach. Finds Mike’s templates.  &lt;strong&gt;Neither makes sense for Slack.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Slack needs structured JSON payloads, not formatted text.&lt;/p&gt;

&lt;p&gt;So Emma does what any good engineer would do: she creates a  &lt;strong&gt;“proper solution”.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Notification service class. Methods for each notification type. Handles destination-specific formatting internally. Clean. Testable. Well-designed.&lt;/p&gt;

&lt;p&gt;She shows it to the team in standup.&lt;/p&gt;

&lt;p&gt;Mike: &lt;em&gt;“That’s nice, but I don’t have time to refactor my SMS code right now. Maybe later.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Sarah: &lt;em&gt;“I like it, but my code has been running in production for months. If it ain’t broke…”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Emma’s service class gets used for Slack notifications.  &lt;strong&gt;Nothing else changes.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now there are three ways to send notifications.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj49li7kl1vg74rp29yei.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj49li7kl1vg74rp29yei.gif" alt="Spider-Man pointing meme - three developers with different approaches" width="498" height="278"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Week 12: The Chaos Compounds
&lt;/h2&gt;

&lt;p&gt;Product wants:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Push notifications for the mobile app&lt;/li&gt;
&lt;li&gt;Digest emails (daily order summaries)&lt;/li&gt;
&lt;li&gt;Ability to snooze notifications&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Three developers. Three features. Same week.&lt;/p&gt;

&lt;p&gt;Each one discovers the existing fragmentation. Each one makes their own call.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Developer A&lt;/strong&gt;  tries to extend Sarah’s inline approach. Adds push notification logic directly in the handler.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Developer B&lt;/strong&gt;  uses Mike’s templates but creates a  &lt;strong&gt;new template format&lt;/strong&gt;  because the existing one doesn’t support digest layouts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Developer C&lt;/strong&gt;  tries to use Emma’s service class but realizes it doesn’t handle scheduling or snoozing. So they add that logic directly in their handler instead.&lt;/p&gt;

&lt;p&gt;The notification preferences table is now being updated by  &lt;strong&gt;five different code paths.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Each developer added their own columns because they didn’t realize others had added similar fields. One stores preferences as JSON. Another uses boolean columns. Another created a  &lt;strong&gt;separate preferences table&lt;/strong&gt;  with foreign keys.&lt;/p&gt;

&lt;p&gt;I’ve seen these code reviews happen. Every PR gets approved. Every piece of code works.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nobody did anything wrong.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;And yet.&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;“Every PR got approved. Every piece of code worked. Nobody did anything wrong. And yet.”&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Week 15: Customer Complaints
&lt;/h2&gt;

&lt;p&gt;Support tickets start flooding in.&lt;/p&gt;

&lt;p&gt;“I’m getting duplicate notifications.”&lt;/p&gt;

&lt;p&gt;“I disabled email but I’m still getting them.”&lt;/p&gt;

&lt;p&gt;“I’m not getting notifications at all for important orders.”&lt;/p&gt;

&lt;p&gt;Sarah investigates. Opens the codebase.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Six different code paths handle notifications.&lt;/strong&gt;  Some check preferences before sending. Some check during sending. Some don’t check at all because the developer assumed another layer was handling it.&lt;/p&gt;

&lt;p&gt;She finds the bug. It’s in her original email handler. The preference check is wrong.&lt;/p&gt;

&lt;p&gt;She fixes it. Deploys.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Three other notification types break.&lt;/strong&gt;  They were relying on her buggy behavior.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fapyzb9u5k536v4qidesn.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fapyzb9u5k536v4qidesn.gif" alt="Domino effect - one bug fix breaks three other features" width="480" height="304"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The team’s verdict on fixing it properly: &lt;em&gt;“We need to stop and refactor everything first, or we’ll just make it worse.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Management: “We don’t have time for a refactor. Just fix the bugs.”&lt;/p&gt;

&lt;h2&gt;
  
  
  Week 17: The Template Nightmare
&lt;/h2&gt;

&lt;p&gt;Marketing wants to update email designs. New brand guidelines.&lt;/p&gt;

&lt;p&gt;The developer assigned to this opens the codebase.&lt;/p&gt;

&lt;p&gt;Templates are  &lt;strong&gt;everywhere.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Some in a &lt;code&gt;/templates&lt;/code&gt; directory. Some hardcoded as strings. Some in the database. Some fetched from an external CMS that one developer integrated without telling anyone.&lt;/p&gt;

&lt;p&gt;There’s no single source of truth.&lt;/p&gt;

&lt;p&gt;Worse: the data passed to templates is completely inconsistent.&lt;/p&gt;

&lt;p&gt;Email templates expect order objects with certain fields. SMS templates expect a flattened structure. Push notifications expect a completely different format.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;One design change requires touching dozens of files.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The developer estimates: “Two weeks, maybe three.”&lt;/p&gt;

&lt;p&gt;Marketing: “It’s just a design update. How is that two weeks?”&lt;/p&gt;

&lt;h2&gt;
  
  
  Week 20: Performance Crisis
&lt;/h2&gt;

&lt;p&gt;Black Friday.&lt;/p&gt;

&lt;p&gt;The system crashes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjlrdt9uxnezf4h3w7irz.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjlrdt9uxnezf4h3w7irz.gif" alt="Everything is on fire - Black Friday system crash" width="480" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Investigation reveals: notification handlers are opening new database connections for  &lt;strong&gt;every single notification sent.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Some handlers properly close connections. Some don’t.&lt;/p&gt;

&lt;p&gt;Connection pools exhausted. Some handlers retry failed sends immediately and indefinitely,  &lt;strong&gt;amplifying the problem during the outage.&lt;/strong&gt;  One handler spawns a goroutine for each notification but never limits concurrency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The server runs out of memory processing a batch of 10,000 order confirmations.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Different developers made different assumptions about error handling.&lt;/p&gt;

&lt;p&gt;Some silently swallow errors and log them. Some retry with exponential backoff. Some fail fast. Some store failed notifications in one database table for retry. Others use a different table. One developer integrated a third-party queue system  &lt;strong&gt;that nobody else knew existed.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Notifications are getting lost between these systems.&lt;/p&gt;

&lt;p&gt;I’ve been on calls where the CTO asks: “How many notification systems do we have?”&lt;/p&gt;

&lt;p&gt;Nobody can answer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Week 24: The Audit
&lt;/h2&gt;

&lt;p&gt;Compliance team asks a simple question:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“Can you show us a record of all notifications sent to customer X in the past 90 days?”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The team cannot answer this.&lt;/p&gt;

&lt;p&gt;Notification logs are  &lt;strong&gt;scattered everywhere.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Some handlers log to stdout. Some to files. Some to a database table. Some don’t log at all.&lt;/p&gt;

&lt;p&gt;The log formats are completely different. Some include the full message content. Some just log “notification sent” without details. There’s no correlation between the notification and the triggering event.&lt;/p&gt;

&lt;p&gt;The auditor asks: &lt;em&gt;“How do you ensure notifications contain required legal disclosures?”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Each template was created independently. Some include required legal text. Some don’t.  &lt;strong&gt;There’s no centralized enforcement.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I’ve seen this audit happen. Teams spend weeks reconstructing logs manually.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Breaking Point
&lt;/h2&gt;

&lt;p&gt;VP of Engineering asks for a simple feature:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“Add an unsubscribe link to all emails.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The team estimates:  &lt;strong&gt;Three weeks.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The VP is shocked.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa3dybxptm7pxoqd6d67t.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa3dybxptm7pxoqd6d67t.gif" alt="Shocked reaction - three weeks to add an unsubscribe link?!" width="195" height="229"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;“It’s just adding a link. How is that three weeks of work?”&lt;/p&gt;

&lt;p&gt;The tech lead explains:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“We have seven different code paths that send emails. Each uses a different templating system. Some render templates on the server. Some fetch them from external systems. Some are hardcoded strings. We need to update each one individually, ensure the unsubscribe logic is consistent across all of them, add tracking for unsubscribe events, update the preferences system to handle unsubscribes properly, and test everything thoroughly because there’s no centralized testing strategy.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Three weeks. For a link.&lt;/p&gt;

&lt;p&gt;The VP asks the obvious question:  &lt;strong&gt;“How did it get this bad?”&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What Went Wrong?
&lt;/h2&gt;

&lt;p&gt;Here’s the thing that kills me about this story.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nobody made a catastrophically bad decision.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Sarah’s Week 1 implementation was appropriate. Mike’s template system was a reasonable improvement. Emma’s service class was a genuine attempt to bring order.&lt;/p&gt;

&lt;p&gt;Every single developer was trying to do good work under deadline pressure.&lt;/p&gt;

&lt;p&gt;The problem wasn’t the individual decisions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It was the absence of a shared architectural vision.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Without clear boundaries and layers, each developer made reasonable local optimizations that created global chaos.&lt;/p&gt;

&lt;p&gt;The “I’ll refactor it later” moments never came because there was never a good time to stop feature development.&lt;/p&gt;

&lt;p&gt;The “let’s standardize this” conversations happened but never resulted in action because no one had time to migrate existing code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The codebase evolved organically.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;And organic growth without structure doesn’t produce a garden. It produces a weed-infested lot.&lt;/p&gt;

&lt;h2&gt;
  
  
  “But This Is Just a Communication Problem”
&lt;/h2&gt;

&lt;p&gt;You might be thinking: the real issue was that developers didn’t communicate.&lt;/p&gt;

&lt;p&gt;If Sarah and Mike had talked, they wouldn’t have built two different templating systems. If Emma had socialized her service class better, others would have adopted it.&lt;/p&gt;

&lt;p&gt;Better standups. Better code reviews. Better documentation.  &lt;strong&gt;That’s&lt;/strong&gt;  what was missing, not architecture.&lt;/p&gt;

&lt;p&gt;This is seductive because it’s partially true.&lt;/p&gt;

&lt;p&gt;But here’s why it misses the point:  &lt;strong&gt;architecture IS communication.&lt;/strong&gt;&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;“Architecture IS communication. It’s the most important form of communication for technical decisions.”&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;It’s the most important form of communication for technical decisions.&lt;/p&gt;

&lt;p&gt;Think about what actually happened in the story.&lt;/p&gt;

&lt;p&gt;The team &lt;strong&gt;DID communicate.&lt;/strong&gt; Mike showed his template system in code review. Emma presented her service class and got positive feedback. They even met to try to align on standards.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The communication happened.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;What didn’t happen was turning those conversations into durable, enforceable decisions.&lt;/p&gt;

&lt;p&gt;This is the key difference:&lt;/p&gt;

&lt;p&gt;Conversation says &lt;em&gt;“we should probably do X.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Architecture says &lt;em&gt;“X is how we do things here, and here’s where it lives.”&lt;/em&gt;&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;“Conversation is ephemeral. Architecture is the artifact that persists after the meeting ends.”&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;When a new developer joins and asks “where should notification logic go?”, the answer shouldn’t require scheduling a meeting or hunting through Slack history.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It should be obvious from looking at the codebase.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Communication without architecture leads to the problem Emma faced. She built something good. People agreed it was good. And then…  &lt;strong&gt;nothing changed.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Without architectural decisions being explicitly made (&lt;em&gt;“from now on, all notifications go through NotificationService”&lt;/em&gt;), the good idea just becomes another option in an increasingly fragmented codebase.&lt;/p&gt;

&lt;p&gt;Good communication can prevent chaos. But it can’t survive bad processes.&lt;/p&gt;

&lt;p&gt;When developers are under deadline pressure, working on different features, joining the team at different times,  &lt;strong&gt;communication will have gaps.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Architecture is the safety net for when communication fails.&lt;/p&gt;

&lt;p&gt;It’s the shared context that makes it possible to work somewhat independently without creating complete divergence.&lt;/p&gt;

&lt;p&gt;So yes, the team in our story could have communicated better.&lt;/p&gt;

&lt;p&gt;But the solution isn’t &lt;em&gt;“communicate more.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It’s “communicate the architecture and make it stick.”&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Document where things belong. Make architectural decisions explicit. Enforce them in code review. Build structure that persists beyond any individual conversation.&lt;/p&gt;

&lt;p&gt;Because at the end of the day, you can have all the Slack channels and standups and retros you want.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Without a shared architectural foundation, you’re just having the same conversations over and over while the codebase continues to fragment.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  What Should Have Happened in Week 1
&lt;/h2&gt;

&lt;p&gt;Sarah should have spent 30 minutes writing this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Notification System Architecture

## Where Things Live
- All notification logic → services/notification_service.py
- Templates → templates/ directory (Jinja2 format)
- Preference checks → services/preference_service.py
- Delivery logging → notification_log table

## How to Add a New Notification Type
1. Add template to templates/
2. Add method to NotificationService
3. Log delivery attempt (success or failure)
4. Add tests to test_notification_service.py

## Error Handling
- Retries: 3 attempts with exponential backoff (1s, 2s, 4s)
- Failed sends → dead_letter_queue table
- All errors logged with correlation ID

## Preferences
- Check preferences BEFORE sending (not during)
- Default: all notifications enabled
- Unsubscribe → set all preferences to false

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;That’s it.&lt;/strong&gt;  30 minutes of work. Would have saved months of chaos.&lt;/p&gt;
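&lt;p&gt;To make that concrete, here is a minimal sketch of the service the document describes. Only the name &lt;code&gt;NotificationService&lt;/code&gt; and the rules (preference check before sending, 1s/2s/4s retries, dead-letter fallback) come from the doc above; every other name is illustrative:&lt;/p&gt;

```python
import time

class NotificationService:
    """Single home for notification logic, per the Week 1 doc."""

    def __init__(self, preferences, sender, dead_letter_queue, log):
        self.preferences = preferences  # maps user_id to {channel: enabled}
        self.sender = sender            # callable(channel, user_id, body)
        self.dead_letter_queue = dead_letter_queue
        self.log = log

    def send(self, user_id, channel, body):
        # Check preferences BEFORE sending, not during. Default: enabled.
        if not self.preferences.get(user_id, {}).get(channel, True):
            return False
        # 3 attempts with exponential backoff: 1s, 2s, 4s.
        for attempt in range(3):
            try:
                self.sender(channel, user_id, body)
                self.log.append((user_id, channel, "sent"))
                return True
            except Exception:
                time.sleep(2 ** attempt)
        # Failed sends land in the dead-letter queue.
        self.dead_letter_queue.append((user_id, channel, body))
        self.log.append((user_id, channel, "failed"))
        return False
```

&lt;p&gt;The point isn’t this exact code. It’s that every rule in the doc has exactly one place to live.&lt;/p&gt;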




&lt;p&gt;When Mike added SMS in Week 3, he would have known where to put it. When Emma added Slack in Week 9, she would have followed the existing pattern. When three developers worked simultaneously in Week 12, they would have made consistent decisions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Not because they communicated better. Because the architecture communicated for them.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The Pattern You’ve Seen Before
&lt;/h2&gt;

&lt;p&gt;I’ve seen this exact pattern play out at least a dozen times.&lt;/p&gt;

&lt;p&gt;Different companies. Different tech stacks. Different teams. Different features.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The pattern is always the same.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Week 1: Clean, working code.&lt;/p&gt;

&lt;p&gt;Week 3: Small duplication appears.&lt;/p&gt;

&lt;p&gt;Week 7: Multiple approaches emerge.&lt;/p&gt;

&lt;p&gt;Week 12: Chaos compounds.&lt;/p&gt;

&lt;p&gt;Month 6: Simple changes take weeks.&lt;/p&gt;

&lt;p&gt;The timeline varies. Sometimes it happens faster (AI accelerates it). Sometimes slower (a disciplined team delays it). But without architecture, &lt;strong&gt;the destination is always the same.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Then AI Showed Up and Made Everything 10x Worse
&lt;/h2&gt;

&lt;p&gt;Everything I just described? It’s been happening for decades.&lt;/p&gt;

&lt;p&gt;Slow burn. Predictable. Manageable if you catch it early.&lt;/p&gt;

&lt;p&gt;Then 2024 happened.&lt;/p&gt;

&lt;p&gt;AI coding assistants arrived. And they turned architectural decay from a slow burn into a  &lt;strong&gt;wildfire.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  AI Replicates. It Doesn’t Invent.
&lt;/h3&gt;

&lt;p&gt;Here’s what changed.&lt;/p&gt;

&lt;p&gt;When Mike needed to add SMS in Week 3, he opened Sarah’s code.  &lt;strong&gt;Looked at it.&lt;/strong&gt;  Made a decision. Maybe he copied the pattern. Maybe he tried something different.&lt;/p&gt;

&lt;p&gt;But he  &lt;strong&gt;thought&lt;/strong&gt;  about it.&lt;/p&gt;

&lt;p&gt;Now imagine Mike has Cursor. Or Copilot. Or Claude Code.&lt;/p&gt;

&lt;p&gt;He types: &lt;code&gt;// Add SMS notification for shipments&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The AI looks at the codebase. Sees Sarah’s pattern.  &lt;strong&gt;Instantly replicates it.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Code appears. Mike reviews it. Looks good. Ships.&lt;/p&gt;

&lt;p&gt;He never even saw the architectural decision being made.&lt;/p&gt;

&lt;p&gt;The AI made it for him. Based on what already existed.&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;“AI doesn’t just copy your code. It copies your architecture. Even the accidental parts.”&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  The Speed and Scale Just Exploded
&lt;/h3&gt;

&lt;p&gt;Remember Week 12? Three developers, three features, three different approaches emerging over a week?&lt;/p&gt;

&lt;p&gt;With AI,  &lt;strong&gt;that’s Tuesday.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Developer A asks AI for push notifications. AI sees Sarah’s inline handler. Copies it.&lt;/p&gt;

&lt;p&gt;Developer B asks AI for digest emails. AI sees Mike’s templates. Copies those.&lt;/p&gt;

&lt;p&gt;Developer C asks AI for snoozing. AI sees Emma’s service class. Copies that.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;All three features ship the same day.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;But it’s not just faster. It’s  &lt;strong&gt;bigger.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Pre-AI: 50-200 lines of code per day.&lt;/p&gt;

&lt;p&gt;With AI: 500-2000 lines in the same time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;That’s 5-10x more code&lt;/strong&gt;  implementing patterns, creating variations, spreading duplication.&lt;/p&gt;

&lt;p&gt;You have two ways of checking preferences? AI propagates both. Three error handling approaches? AI replicates all three.  &lt;strong&gt;Every inconsistency becomes a seed that AI plants everywhere.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The notification system that took Sarah’s team 20 weeks to become unmaintainable?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;With AI, you can get there in 4.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  AI Can’t See What You Didn’t Write Down
&lt;/h3&gt;

&lt;p&gt;Here’s the fundamental problem.&lt;/p&gt;

&lt;p&gt;AI is  &lt;strong&gt;incredible&lt;/strong&gt;  at implementation. It can write clean, working code. It follows patterns. It handles edge cases.&lt;/p&gt;

&lt;p&gt;But it cannot  &lt;strong&gt;architect.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It can’t look at your codebase and think: &lt;em&gt;“Wait, this is getting fragmented. We should consolidate these patterns.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;It can’t say: &lt;em&gt;“I see three different approaches here. Which one should I follow?”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;It just… picks one. Based on similarity to what you’re asking for.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If your architecture is accidental, AI accelerates the accident.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Old Advice Is Now Dangerous
&lt;/h3&gt;

&lt;p&gt;The advice used to be: &lt;em&gt;“Don’t over-architect small projects. Start simple. Refactor when you need to.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;That advice just became  &lt;strong&gt;dangerous.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With AI, “small projects” don’t stay small. They  &lt;strong&gt;explode.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;By the time you realize you need to refactor, you have 10x more code to untangle.&lt;/p&gt;

&lt;p&gt;The window between “clean start” and “architectural debt crisis” collapsed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Week 1 decisions matter more than ever.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can’t afford to defer architecture anymore.&lt;/p&gt;

&lt;h3&gt;
  
  
  But Here’s the Good News
&lt;/h3&gt;

&lt;p&gt;The same force that amplifies chaos can amplify  &lt;strong&gt;order.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AI replicates good patterns just as enthusiastically as bad ones.&lt;/p&gt;

&lt;p&gt;If you write that architecture document in Week 1. If you establish clear boundaries. If you make the “right way” obvious.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI will follow it.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Consistently. Every single time. Across every feature.&lt;/p&gt;

&lt;p&gt;It will use your NotificationService. It will follow your template structure. It will implement your error handling exactly as specified.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;At scale. At speed. Without deviation.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The chaos multiplier becomes a  &lt;strong&gt;consistency multiplier.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;But only if you give it something consistent to multiply.&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;“AI doesn’t make architecture optional. It makes it mandatory.”&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;This is why the next post matters even more now.&lt;/p&gt;

&lt;p&gt;I’ll show you how to set up that architectural foundation  &lt;strong&gt;before&lt;/strong&gt;  you start generating code with AI.&lt;/p&gt;

&lt;p&gt;How to make the right patterns so obvious that AI can’t help but follow them.&lt;/p&gt;

&lt;p&gt;How to turn AI from an architectural time bomb into an architectural enforcement mechanism.&lt;/p&gt;




&lt;h2&gt;
  
  
  What’s Next
&lt;/h2&gt;

&lt;p&gt;In the next post, I’ll show you how to build that architectural foundation.&lt;/p&gt;

&lt;p&gt;Not some enterprise framework. Not over-engineered complexity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The simple, practical structure that prevents this chaos.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We’ll rebuild this exact notification system with clear boundaries, testable code, and patterns that guide developers toward consistency instead of fragmentation.&lt;/p&gt;

&lt;p&gt;You’ll see:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Where things live (and why)&lt;/li&gt;
&lt;li&gt;How to test without infrastructure&lt;/li&gt;
&lt;li&gt;How to make architectural decisions stick&lt;/li&gt;
&lt;li&gt;How AI helps instead of amplifying chaos&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Until then, look at your codebase.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What week are you on?&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Have you lived through this story? I’d love to hear about it. Find me on &lt;a href="https://twitter.com/pachecot" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt; or &lt;a href="https://linkedin.com/in/pachecothiago" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://sudoish.com/a-tale-of-accidental-architecture-how-50-lines-became-a-black-friday-disaster/" rel="noopener noreferrer"&gt;A Tale of Accidental Architecture: How 50 Lines Became A Black Friday Disaster&lt;/a&gt; appeared first on &lt;a href="https://sudoish.com" rel="noopener noreferrer"&gt;sudoish&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>uncategorized</category>
      <category>accidentalarchitectu</category>
      <category>codequality</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Nobody Knows How to Estimate Software Anymore (And It’s Not Your Fault)</title>
      <dc:creator>Thiago Pacheco</dc:creator>
      <pubDate>Sun, 15 Feb 2026 16:12:00 +0000</pubDate>
      <link>https://forem.com/pacheco/nobody-knows-how-to-estimate-software-anymore-and-its-not-your-fault-3d1a</link>
      <guid>https://forem.com/pacheco/nobody-knows-how-to-estimate-software-anymore-and-its-not-your-fault-3d1a</guid>
      <description>&lt;p&gt;Here’s a pattern I keep seeing (and living):&lt;/p&gt;

&lt;p&gt;A feature that would have taken 2-3 weeks gets estimated at “2 days with AI.”&lt;/p&gt;

&lt;p&gt;It ships in 4. Sometimes 5.&lt;/p&gt;

&lt;p&gt;Not because the AI was slow. Because there are three other “2-day AI projects” running simultaneously. Each one spiraling into bugs, edge cases, and integration issues nobody saw coming. Context-switching between half-finished features, fighting fires, somehow falling behind on all of them.&lt;/p&gt;

&lt;p&gt;The AI writes code faster than any human could.&lt;/p&gt;

&lt;p&gt;But we’re not shipping faster. We’re drowning.&lt;/p&gt;

&lt;p&gt;And here’s the uncomfortable part: we’re doing this to ourselves.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Promise vs The Reality
&lt;/h2&gt;

&lt;p&gt;You’ve seen the headlines. AI coding tools boost productivity by 39%. Developers are shipping faster than ever. The future is here.&lt;/p&gt;

&lt;p&gt;What they don’t tell you is what happens next.&lt;/p&gt;

&lt;p&gt;Your manager sees those numbers too. And if AI makes you 39% more productive, why can’t you handle 60% more work? Why are estimates still slipping? Why are bugs still happening?&lt;/p&gt;

&lt;p&gt;The math doesn’t add up. And you’re stuck in the middle trying to explain why “AI writes the code” doesn’t mean “features appear instantly.”&lt;/p&gt;

&lt;p&gt;Here’s what the data actually shows:&lt;/p&gt;

&lt;p&gt;A UC Berkeley study found that &lt;strong&gt;AI doesn’t reduce work, it intensifies it.&lt;/strong&gt; One developer they interviewed said it perfectly: “You thought maybe you’d work less with AI. But you don’t work less. You just work the same amount or even more.”&lt;/p&gt;

&lt;p&gt;TechCrunch reported last week: teams adopting AI workflows saw &lt;strong&gt;expectations triple, stress triple, but actual productivity only go up by maybe 10%.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;And here’s the kicker: a METR study found developers &lt;em&gt;expected&lt;/em&gt; AI to speed them up by 24%. In reality? &lt;strong&gt;It slowed them down.&lt;/strong&gt; But they still &lt;em&gt;believed&lt;/em&gt; it made them 20% faster.&lt;/p&gt;

&lt;p&gt;The gap between perception and reality is dangerous. And most of us are living in it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The High Expectations Problem
&lt;/h2&gt;

&lt;p&gt;This isn’t just coming from management.&lt;/p&gt;

&lt;p&gt;Yes, leadership hears “AI can write massive amounts of code” and expects you to prompt your way through multiple features in no time. First try, maybe second. They don’t understand how AI actually works.&lt;/p&gt;

&lt;p&gt;But here’s the honest truth: &lt;strong&gt;we don’t either.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I thought I did. I thought “AI writes the code, I review it, we ship it” was the workflow. But that’s not what happens.&lt;/p&gt;

&lt;p&gt;What happens is: AI writes the code. I start reviewing it. I find issues. I ask the AI to fix them. It creates new issues. I start another feature while waiting. That one has issues too. Now I’m juggling three half-finished features, each with its own set of AI-generated bugs I’m trying to understand and fix.&lt;/p&gt;

&lt;p&gt;The promise was velocity. The reality is fragmentation.&lt;/p&gt;

&lt;p&gt;And I keep saying yes to more because “it’s just AI, how hard can it be?” But the cognitive load of reviewing, validating, debugging, and integrating AI code across multiple parallel tracks is crushing.&lt;/p&gt;

&lt;p&gt;Every feature &lt;em&gt;looks&lt;/em&gt; 80% done. None of them actually ship.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Estimation Trap
&lt;/h2&gt;

&lt;p&gt;Here’s the confession I don’t want to make: &lt;strong&gt;I have no idea how to estimate tasks anymore.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Developers have always struggled with estimation. We underestimate until we get burned enough times to realize “add a simple feature” is never simple when there’s an existing codebase involved.&lt;/p&gt;

&lt;p&gt;But AI broke our calibration completely.&lt;/p&gt;

&lt;p&gt;A task that used to take 3 days now takes… 2 hours? 4 days? Both? Neither? It depends on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How well I can describe what I want&lt;/li&gt;
&lt;li&gt;How many edge cases the AI misses&lt;/li&gt;
&lt;li&gt;How much integration complexity exists&lt;/li&gt;
&lt;li&gt;Whether the AI understands the existing patterns&lt;/li&gt;
&lt;li&gt;How many iterations it takes to get it right&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So now I swing between two extremes:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Under-estimating&lt;/strong&gt; because “AI will handle it” — then spending 3 days debugging what the AI generated in 20 minutes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Over-estimating&lt;/strong&gt; because “who knows what AI will break” — then looking slow when it actually works the first time.&lt;/p&gt;

&lt;p&gt;The old rules don’t apply. New ones haven’t emerged. And this isn’t just an academic problem:&lt;/p&gt;

&lt;p&gt;Sprint planning becomes guesswork. Roadmaps turn into fiction. Technical debt compounds faster than we can track it. Trust erodes when commitments slip repeatedly.&lt;/p&gt;

&lt;p&gt;When someone asks “how long will this take?” the honest answer is often “I don’t know anymore.”&lt;/p&gt;

&lt;h2&gt;
  
  
  The Hidden Costs Nobody’s Talking About
&lt;/h2&gt;

&lt;p&gt;The expectation problem is obvious once you see it. But there are other traps hiding underneath:&lt;/p&gt;

&lt;h3&gt;
  
  
  You’re Not Writing Code Anymore. You’re Validating It.
&lt;/h3&gt;

&lt;p&gt;One researcher described it perfectly: “A senior developer with Copilot doesn’t become a code-writing machine. They become a &lt;strong&gt;code-validation machine&lt;/strong&gt;.”&lt;/p&gt;

&lt;p&gt;When AI generates 40% more code, you have 40% more code to review. But reviewing AI code is different from reviewing human code. Humans make predictable mistakes. AI makes plausible-sounding nonsense that looks right until you run it.&lt;/p&gt;

&lt;p&gt;Context switching between your work and reviewing AI output costs 20-30% of your focus &lt;em&gt;per switch&lt;/em&gt;. When you’re juggling multiple AI-started features, you’re switching constantly.&lt;/p&gt;

&lt;p&gt;You’re not more productive. You’re just more exhausted.&lt;/p&gt;

&lt;h3&gt;
  
  
  You Say Yes to Everything
&lt;/h3&gt;

&lt;p&gt;AI makes tasks that used to be “too expensive” feel trivial. So you say yes to things you would have declined or delegated.&lt;/p&gt;

&lt;p&gt;“Can you add that dashboard feature?”&lt;br&gt;&lt;br&gt;
“Sure, AI can knock that out.”&lt;/p&gt;

&lt;p&gt;“Can you refactor that module?”&lt;br&gt;&lt;br&gt;
“Yeah, should be quick with AI.”&lt;/p&gt;

&lt;p&gt;“Can you investigate that performance issue?”&lt;br&gt;&lt;br&gt;
“I’ll have AI profile it.”&lt;/p&gt;

&lt;p&gt;Harvard Business Review calls this &lt;strong&gt;work intensification&lt;/strong&gt;: AI doesn’t reduce your workload, it makes you take on more.&lt;/p&gt;

&lt;p&gt;You’re not automating your way to free time. You’re automating your way to more commitments.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Quality-Speed Death Spiral
&lt;/h3&gt;

&lt;p&gt;Here’s how it compounds:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;AI gives you an initial productivity surge&lt;/li&gt;
&lt;li&gt;That surge creates expectations for speed&lt;/li&gt;
&lt;li&gt;Speed pressure leads to cutting corners on review&lt;/li&gt;
&lt;li&gt;Lower quality creates more bugs&lt;/li&gt;
&lt;li&gt;More bugs mean more debugging and rework&lt;/li&gt;
&lt;li&gt;Debugging takes longer because you didn’t write the code&lt;/li&gt;
&lt;li&gt;You fall behind, pressure increases, quality drops further&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The Berkeley researchers warned: “The productivity surge enjoyed at the beginning can give way to lower quality work, turnover, and other problems.”&lt;/p&gt;

&lt;p&gt;This is happening right now across teams. Features ship that “work,” but the developers don’t fully understand how. When they break, debugging becomes an archaeology project through code nobody wrote and barely reviewed. That’s not faster. That’s deferred pain.&lt;/p&gt;

&lt;h2&gt;
  
  
  What This Actually Feels Like
&lt;/h2&gt;

&lt;p&gt;A software engineer named Siddhant Khare wrote about “AI fatigue” last week. It resonated with me immediately because &lt;strong&gt;it’s real and nobody talks about it&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;AI-era burnout doesn’t look like working 80-hour weeks. It looks like:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Decision fatigue&lt;/strong&gt; from validating endless AI outputs. Every line might be wrong. Every function might have a subtle bug. You can’t just skim.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cognitive load&lt;/strong&gt; from juggling multiple AI-started initiatives. Each one 80% done. None actually shipping.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Imposter syndrome&lt;/strong&gt; when you can’t tell if you’re productive or just busy. You wrote 3,000 lines this week. Zero features shipped. Are you slow? Or is the process broken?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Anxiety&lt;/strong&gt; from commitments you can’t estimate. You said 2 days. It’s been 4. The AI generated the code in 20 minutes and you’ve been debugging it ever since.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Guilt&lt;/strong&gt; for not keeping up. Everyone else seems to be shipping faster with AI. Why aren’t you?&lt;/p&gt;

&lt;p&gt;The research shows that some developers see burnout risk drop 17% with AI — but only if their workload doesn’t increase to fill the gap.&lt;/p&gt;

&lt;p&gt;In practice? Workload always increases.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Team Lead’s Nightmare
&lt;/h2&gt;

&lt;p&gt;If estimating one AI-assisted task is this chaotic, imagine coordinating an entire team.&lt;/p&gt;

&lt;p&gt;You’re trying to plan a sprint. Every developer gives you an estimate. Half of them are wildly optimistic because “AI will handle it.” The other half are padding heavily because they’ve been burned.&lt;/p&gt;

&lt;p&gt;You don’t know which estimates to trust. You don’t know how to aggregate them into a roadmap. You don’t know how to explain to stakeholders why the team that just adopted “productivity-boosting AI tools” is still missing deadlines.&lt;/p&gt;

&lt;p&gt;And when the sprint ends? Half the stories are “80% done.” A quarter shipped but with bugs. The rest are stuck in AI-generated complexity no one fully understands.&lt;/p&gt;

&lt;p&gt;Tech leads are stuck in the middle. Can’t estimate their own AI-assisted work. Somehow supposed to help the team estimate theirs.&lt;/p&gt;

&lt;p&gt;Sprint planning feels like collective guessing. Retrospectives turn into “we don’t know what went wrong, the AI just… took longer than expected.”&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The only solution I’ve found is the boring one: go back to basics.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Estimate anyway. Even if it’s totally wrong.&lt;br&gt;&lt;br&gt;
Run retrospectives. Understand what actually happened.&lt;br&gt;&lt;br&gt;
Repeat every cycle. Gather data.&lt;br&gt;&lt;br&gt;
Adjust future estimates based on reality, not hope.&lt;/p&gt;

&lt;p&gt;It’s unglamorous. It’s slow. But it’s the only path I see to understanding our true capacity with AI.&lt;/p&gt;

&lt;p&gt;You can’t optimize what you don’t measure. And right now, most teams aren’t measuring anything except “we’re using AI, we should be faster.”&lt;/p&gt;

&lt;p&gt;After enough cycles, patterns emerge:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AI features that touch legacy code take 3x longer than expected&lt;/li&gt;
&lt;li&gt;Net-new features hit estimates more reliably&lt;/li&gt;
&lt;li&gt;Code review adds 40% to any AI-heavy story&lt;/li&gt;
&lt;li&gt;Integration work still takes the same time regardless of AI&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;None of this is in the “AI boosts productivity 39%” headline. But it’s the reality of coordinating a team in the AI era.&lt;/p&gt;

&lt;p&gt;The boring strategies we’ve always used — estimate, measure, learn, adjust — they still work. They’re just slower to calibrate now because the variables changed.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I’m Trying (Not Prescribing)
&lt;/h2&gt;

&lt;p&gt;I don’t have this figured out. Nobody does yet. But here’s what’s helping in my experience:&lt;/p&gt;

&lt;h3&gt;
  
  
  The One-Thing Rule
&lt;/h3&gt;

&lt;p&gt;Stop saying yes to multiple simultaneous AI features. One thing from start to shipped before starting the next.&lt;/p&gt;

&lt;p&gt;Does it feel slower? Yes.&lt;br&gt;&lt;br&gt;
Do you actually ship more? Also yes.&lt;/p&gt;

&lt;p&gt;Multiple AI-started initiatives feel like progress until nothing’s actually done. Finishing one thing beats starting five.&lt;/p&gt;

&lt;h3&gt;
  
  
  Honest Estimates
&lt;/h3&gt;

&lt;p&gt;When someone asks “how long will this take?” stop giving the optimistic AI-boosted number.&lt;/p&gt;

&lt;p&gt;Instead: “AI might generate it in an hour. Integration and debugging might take 3 days. Estimate 4 days to be safe.”&lt;/p&gt;

&lt;p&gt;It feels slow. But estimates stop slipping constantly. And trust improves.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Validation Budget
&lt;/h3&gt;

&lt;p&gt;Timebox AI code review. If you can’t fully understand and validate what the AI built in the time it would have taken to write it yourself, don’t use the AI code.&lt;/p&gt;

&lt;p&gt;Sounds counterintuitive. But reviewing 800 lines of AI code you don’t understand for 6 hours defeats the purpose. Sometimes writing 200 lines yourself in 4 hours is actually faster.&lt;/p&gt;

&lt;h3&gt;
  
  
  Measuring What Matters
&lt;/h3&gt;

&lt;p&gt;Stop tracking lines of code or features started. Track instead:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Features actually in production&lt;/li&gt;
&lt;li&gt;Bugs introduced per feature&lt;/li&gt;
&lt;li&gt;Time from “start” to “shipped” (not “AI generated code”)&lt;/li&gt;
&lt;li&gt;Team stress level (are people sleeping okay?)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The numbers are uncomfortable. But they’re honest.&lt;/p&gt;

&lt;h3&gt;
  
  
  Using Speed for Quality, Not Quantity
&lt;/h3&gt;

&lt;p&gt;When AI genuinely saves time, spend that time on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Better tests&lt;/li&gt;
&lt;li&gt;Clearer documentation&lt;/li&gt;
&lt;li&gt;Paying down technical debt&lt;/li&gt;
&lt;li&gt;Deeper thinking on architecture&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Not just “more features.”&lt;/p&gt;

&lt;p&gt;The productivity gain is real. The question is: who captures it? If it all goes to “more output for the same salary,” you’re on a treadmill. If some goes to making work better and more sustainable, everyone might actually benefit.&lt;/p&gt;

&lt;h3&gt;
  
  
  At the Team Level: The Data Discipline
&lt;/h3&gt;

&lt;p&gt;For tech leads and managers, the same boring-but-effective cycle applies:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Estimate&lt;/strong&gt; — Even when it feels like guessing. Get the team’s best guess on record.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Measure&lt;/strong&gt; — Track actual time, not AI generation time. Start to shipped, not start to “AI wrote code.”&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Retrospect&lt;/strong&gt; — What took longer than expected? What patterns are emerging?&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Adjust&lt;/strong&gt; — Use the data. If AI stories touching legacy code are consistently 3x estimates, factor that in next sprint.&lt;/p&gt;

&lt;p&gt;After 4-5 cycles, you start seeing your team’s actual capacity with AI. Not the theoretical 39% boost. The real number.&lt;/p&gt;
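&lt;p&gt;That “real number” is a few lines of bookkeeping. A rough sketch, assuming each retrospective records a story’s category, estimate, and actual time (the field names here are made up):&lt;/p&gt;

```python
def capacity_factors(stories):
    """Average actual/estimate ratio per story category from retro data.

    stories: list of dicts with "category", "estimated_days", "actual_days".
    A factor of 3.0 means that category consistently takes 3x its estimate.
    """
    totals = {}
    for story in stories:
        cat = story["category"]
        ratio = story["actual_days"] / story["estimated_days"]
        ratio_sum, count = totals.get(cat, (0.0, 0))
        totals[cat] = (ratio_sum + ratio, count + 1)
    return {cat: round(s / n, 2) for cat, (s, n) in totals.items()}
```

&lt;p&gt;Feed it a quarter’s worth of retro data and the 3x legacy multiplier stops being a hunch.&lt;/p&gt;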

&lt;p&gt;It’s slower than anyone wants. But it’s the only path to honest planning.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Uncomfortable Question
&lt;/h2&gt;

&lt;p&gt;Here’s what I keep coming back to:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The 39% productivity gain might be real. But who’s capturing it?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Your company captures it as more output for the same salary.&lt;br&gt;&lt;br&gt;
Your manager captures it as more ambitious roadmaps.&lt;br&gt;&lt;br&gt;
The market captures it as higher expectations.&lt;/p&gt;

&lt;p&gt;You capture… what? More stress? More context-switching? More debugging code you didn’t write?&lt;/p&gt;

&lt;p&gt;Unless you actively defend your boundaries, AI productivity tools become productivity &lt;em&gt;traps&lt;/em&gt; — a treadmill that speeds up but never lets you off.&lt;/p&gt;

&lt;p&gt;I don’t want to sound cynical. AI is genuinely powerful. I use it every day. But the default path is work intensification, not work reduction. And if you don’t choose differently, the default will choose for you.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Different Path
&lt;/h2&gt;

&lt;p&gt;What if AI augmentation wasn’t about doing &lt;em&gt;more&lt;/em&gt;? What if it was about doing &lt;em&gt;better&lt;/em&gt;?&lt;/p&gt;

&lt;p&gt;What if productivity gains went toward:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deeper thinking on hard problems&lt;/li&gt;
&lt;li&gt;Mentoring junior developers&lt;/li&gt;
&lt;li&gt;Paying down technical debt&lt;/li&gt;
&lt;li&gt;Building more resilient systems&lt;/li&gt;
&lt;li&gt;Actually shipping polished features instead of half-finished experiments&lt;/li&gt;
&lt;li&gt;Sustainable pace instead of constant sprinting&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The technology is powerful. The question is: &lt;strong&gt;who decides what that power is for?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Right now, the default answer is “more output.” But you can choose differently.&lt;/p&gt;

&lt;p&gt;You can say no to the fifth simultaneous initiative.&lt;br&gt;&lt;br&gt;
You can give honest estimates instead of optimistic ones.&lt;br&gt;&lt;br&gt;
You can spend AI-gained time on quality instead of quantity.&lt;br&gt;&lt;br&gt;
You can protect your boundaries instead of filling every efficiency gain with new commitments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The 39% productivity trap is only a trap if you don’t see it coming.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now you do.&lt;/p&gt;

&lt;h2&gt;
  
  
  What’s Next
&lt;/h2&gt;

&lt;p&gt;This is still being figured out across the industry. Some weeks teams ship fast and feel great. Other weeks they’re drowning in half-finished AI features and wondering what went wrong.&lt;/p&gt;

&lt;p&gt;But patterns are emerging. Data is being gathered. Teams are learning to say no more often. And slowly, the industry is learning to use AI as a tool for better work, not just more work.&lt;/p&gt;

&lt;p&gt;If you’re feeling this too — the expectations, the estimation chaos, the validation treadmill — you’re not slow. You’re not behind. The system is broken, and you’re just the first to notice.&lt;/p&gt;

&lt;p&gt;The question is: what are you going to do about it?&lt;/p&gt;




&lt;p&gt;&lt;em&gt;If this resonated with you, I’d love to hear your experience. Are you in the productivity trap too? What are you trying? Find me on &lt;a href="https://linkedin.com/in/pachecothiago" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Further Reading:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;TechCrunch: &lt;a href="https://techcrunch.com/2026/02/09/the-first-signs-of-burnout-are-coming-from-the-people-who-embrace-ai-the-most/" rel="noopener noreferrer"&gt;“The first signs of burnout are coming from the people who embrace AI the most”&lt;/a&gt; (Feb 2026)&lt;/li&gt;
&lt;li&gt;Harvard Business Review: &lt;a href="https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies-it" rel="noopener noreferrer"&gt;“AI Doesn’t Reduce Work—It Intensifies It”&lt;/a&gt; (Feb 2026)&lt;/li&gt;
&lt;li&gt;Fortune: &lt;a href="https://fortune.com/2026/02/10/ai-future-of-work-white-collar-employees-technology-productivity-burnout-research-uc-berkeley/" rel="noopener noreferrer"&gt;“AI is having the opposite effect it was supposed to”&lt;/a&gt; (Feb 2026)&lt;/li&gt;
&lt;li&gt;METR: &lt;a href="https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/" rel="noopener noreferrer"&gt;“Measuring the Impact of Early-2025 AI on Developer Productivity”&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Business Insider: &lt;a href="https://www.businessinsider.com/ai-fatigue-burnout-software-engineer-essay-siddhant-khare-2026-2" rel="noopener noreferrer"&gt;“AI fatigue is real and nobody talks about it”&lt;/a&gt; (Feb 2026)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The post &lt;a href="https://sudoish.com/nobody-knows-how-to-estimate-software-anymore/" rel="noopener noreferrer"&gt;Nobody Knows How to Estimate Software Anymore (And It’s Not Your Fault)&lt;/a&gt; appeared first on &lt;a href="https://sudoish.com" rel="noopener noreferrer"&gt;sudoish&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>career</category>
      <category>aicoding</category>
    </item>
    <item>
      <title>The Review Bottleneck: Why AI Explanations Are Making Us Trust Less, Not More</title>
      <dc:creator>Thiago Pacheco</dc:creator>
      <pubDate>Fri, 13 Feb 2026 14:07:52 +0000</pubDate>
      <link>https://forem.com/pacheco/the-review-bottleneck-why-ai-explanations-are-making-us-trust-less-not-more-35al</link>
      <guid>https://forem.com/pacheco/the-review-bottleneck-why-ai-explanations-are-making-us-trust-less-not-more-35al</guid>
      <description>&lt;p&gt;Last week I spent 3 hours reviewing code that took 20 minutes to write.&lt;/p&gt;

&lt;p&gt;The AI was faster. The review wasn’t.&lt;/p&gt;

&lt;p&gt;And I’m starting to realize: that’s the problem.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Less coding, more engineering.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I keep hearing this phrase everywhere. The idea is simple: AI handles the coding, so developers focus on the higher-level work. The engineering. The architecture. The review.&lt;/p&gt;

&lt;p&gt;But here’s what nobody’s talking about: AI isn’t just writing the code anymore. It’s reviewing it too.&lt;/p&gt;

&lt;p&gt;And the paradox is obvious once you see it: AI generates code faster, but reviewing it takes longer than ever.&lt;/p&gt;

&lt;p&gt;Here’s what those 3 hours looked like:&lt;/p&gt;

&lt;p&gt;I read through 300 lines of code carefully. Checked the tests. Verified the logic flow. Examined edge cases.&lt;/p&gt;

&lt;p&gt;But that was only the first hour.&lt;/p&gt;

&lt;p&gt;The next two hours? Reading AI-generated explanations. Reviewing the AI code reviewer’s feedback. Cross-referencing the AI’s architectural justifications with the actual implementation. Trying to reconcile conflicting suggestions from different AI systems.&lt;/p&gt;

&lt;p&gt;By the end, I understood the code. But I’d spent more time processing AI commentary than reviewing actual logic.&lt;/p&gt;

&lt;p&gt;And here’s what bothered me: I see people in the industry approving similar PRs in 20 minutes.&lt;/p&gt;

&lt;p&gt;Are they reading all of this? Or are they skimming the AI explanations and trusting by default?&lt;/p&gt;

&lt;p&gt;I’m pretty sure it’s the second one.&lt;/p&gt;

&lt;p&gt;This isn’t about being thorough versus lazy. It’s a recognition of something shifting.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;300 lines of actual code&lt;/li&gt;
&lt;li&gt;1,200 words of AI-generated explanation&lt;/li&gt;
&lt;li&gt;800 words of AI code review feedback&lt;/li&gt;
&lt;li&gt;15 inline comments from the AI about trade-offs and alternatives&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;I had more documentation to review than code.&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;300 lines&lt;/strong&gt; of actual implementation&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;2,000+ words&lt;/strong&gt; of AI-generated commentary&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The code was the easy part. The cognitive load came from synthesizing multiple AI perspectives, each confident, each reasonable-sounding, some subtly contradicting each other.&lt;/p&gt;

&lt;p&gt;The tests passed. The linting passed. The AI explanations sounded reasonable. The AI reviewer’s concerns seemed addressed.&lt;/p&gt;

&lt;p&gt;So I trusted the process and moved on.&lt;/p&gt;

&lt;p&gt;And that’s becoming the norm.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzy76dfj6ck6kof5advbj.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzy76dfj6ck6kof5advbj.gif" width="244" height="200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I’m not alone.&lt;/p&gt;

&lt;p&gt;At Anthropic—the company building Claude—engineers are generating 2,000 to 3,000 line pull requests regularly. Mike Krieger, their Chief Product Officer, openly admits: “pretty much 100%” of their code is now AI-generated.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;And they’re using Claude to review it too.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Boris Cherny, head of Anthropic’s Claude Code team, hasn’t written a single line of code in over two months. He shipped 22 pull requests in one day, 27 the next.&lt;/p&gt;

&lt;p&gt;“Each one 100% written by Claude.”&lt;/p&gt;

&lt;p&gt;This isn’t the future. It’s happening right now, at the companies building the AI tools we’re all using.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqsek1p2uqplbnix3zfkg.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqsek1p2uqplbnix3zfkg.gif" width="244" height="150"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Confidence Trap
&lt;/h2&gt;

&lt;p&gt;Code reviews were already hard. They required skill, domain knowledge, and patience.&lt;/p&gt;

&lt;p&gt;Now multiply that by the sheer volume AI generates.&lt;/p&gt;

&lt;p&gt;But volume isn’t even the real issue.&lt;/p&gt;

&lt;p&gt;The real issue is that AI writes &lt;strong&gt;confident&lt;/strong&gt; code. It comes with detailed explanations. Trade-off analysis. References. Architecture justifications.&lt;/p&gt;

&lt;p&gt;Enough well-articulated reasoning to make everything sound sensible.&lt;/p&gt;

&lt;p&gt;When you look at a 500-line PR with a 2,000-word explanation of why every decision was made, the cognitive load is enormous.&lt;/p&gt;

&lt;p&gt;You can dig in and verify every claim.&lt;/p&gt;

&lt;p&gt;Or you can trust that the explanation sounds reasonable and move on.&lt;/p&gt;

&lt;p&gt;Most developers are choosing “move on.”&lt;/p&gt;

&lt;p&gt;Here’s where we are:&lt;/p&gt;

&lt;p&gt;AI is generating code at unprecedented scale: &lt;strong&gt;46% of developers’ code&lt;/strong&gt; is now written by tools like Claude Code, Codex, and GitHub Copilot.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;84% of developers&lt;/strong&gt; use AI coding tools regularly.&lt;/p&gt;

&lt;p&gt;And here’s the kicker: while AI generates nearly half our code, &lt;strong&gt;only 30% of AI-suggested code actually gets accepted&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The rest gets rejected during review—or should get rejected.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcojyumx8m43c5noyf4n5.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcojyumx8m43c5noyf4n5.gif" width="498" height="280"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Then We Added AI Code Review
&lt;/h2&gt;

&lt;p&gt;So teams did the obvious thing: bring in AI code review tools.&lt;/p&gt;

&lt;p&gt;Now every PR has:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The AI-generated code (500 lines)&lt;/li&gt;
&lt;li&gt;The AI’s explanation of what it built and why (2,000 words)&lt;/li&gt;
&lt;li&gt;The AI reviewer’s analysis (another 1,500 words)&lt;/li&gt;
&lt;li&gt;Sometimes multiple AI reviewers, each with their own opinions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You’re staring at 4,000+ words of confident, reasonable-sounding explanations from multiple AI systems.&lt;/p&gt;

&lt;p&gt;All of it well-structured. All of it articulate. Much of it contradicting itself in subtle ways.&lt;/p&gt;

&lt;p&gt;And you’re supposed to synthesize all of this, make a judgment call, and approve or reject.&lt;/p&gt;

&lt;p&gt;What actually happens?&lt;/p&gt;

&lt;p&gt;You skim the AI’s explanation. You skim the AI reviewer’s comments. If they roughly agree and the tests pass, you approve it.&lt;/p&gt;

&lt;p&gt;The AI’s confidence became your confidence by default.&lt;/p&gt;

&lt;p&gt;Research confirms what we all feel: AI-generated code creates &lt;strong&gt;1.7x more issues&lt;/strong&gt; than human-written code.&lt;/p&gt;

&lt;p&gt;Unclear naming. Mismatched terminology. Generic identifiers everywhere.&lt;/p&gt;

&lt;p&gt;All of it increasing cognitive load for reviewers.&lt;/p&gt;

&lt;p&gt;And here’s the kicker: all of it explained so confidently you don’t question it.&lt;/p&gt;

&lt;p&gt;This is what researchers call “automation bias”—our tendency to accept answers from automated systems, even when we encounter contradictory information.&lt;/p&gt;

&lt;p&gt;We’re not carefully evaluating the code. We’re trusting that the volume of explanation equals correctness.&lt;/p&gt;

&lt;h2&gt;
  
  
  More Explanation ≠ More Understanding
&lt;/h2&gt;

&lt;p&gt;The paradox is obvious once you see it:&lt;/p&gt;

&lt;p&gt;Adding AI code reviewers didn’t make reviews better. It made them worse.&lt;/p&gt;

&lt;p&gt;Not because the AI reviewers are bad. But because the sheer volume of explanation—from the writer AI, from the reviewer AI, sometimes from multiple reviewer AIs—has become impossible to actually process.&lt;/p&gt;

&lt;p&gt;We traded one problem (not enough context) for another (too much confident noise).&lt;/p&gt;

&lt;p&gt;And the human reviewer, the supposed quality gate, is now just the person who clicks “Approve” after skimming thousands of words they don’t have time to verify.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm8f1zitdtj0ynedskswd.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm8f1zitdtj0ynedskswd.gif" width="498" height="280"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The bottleneck isn’t writing code anymore.&lt;/p&gt;

&lt;p&gt;It’s not even reviewing code.&lt;/p&gt;

&lt;p&gt;It’s trusting code we don’t fully understand because we’re drowning in explanations that sound reasonable but are too expensive to verify.&lt;/p&gt;

&lt;p&gt;Even OpenAI acknowledges this in their Codex documentation: “It still remains essential for users to manually review and validate all agent-generated code.”&lt;/p&gt;

&lt;p&gt;But are we actually doing that?&lt;/p&gt;

&lt;p&gt;The evidence suggests no.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wait. See What Just Happened?
&lt;/h2&gt;

&lt;p&gt;I need to be honest with you.&lt;/p&gt;

&lt;p&gt;I almost did the exact same thing to you.&lt;/p&gt;

&lt;p&gt;I almost buried this post in citations.&lt;/p&gt;

&lt;p&gt;16 footnotes. Statistics every other paragraph. Research from Anthropic, OpenAI, arXiv, CodeRabbit, Qodo. All credible. All well-sourced. All making the same point.&lt;/p&gt;

&lt;p&gt;And if you’re like most readers, you would have skimmed them. Trusted that they said what I claimed. Moved on.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;That’s exactly what we’re doing with code reviews.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The volume of explanation—even when accurate—becomes its own problem. Too many words. Too much confidence. Not enough time to verify.&lt;/p&gt;

&lt;p&gt;So we trust by default.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk22bvtdph5h9wgiywyg4.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk22bvtdph5h9wgiywyg4.gif" width="500" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I’m Trying
&lt;/h2&gt;

&lt;p&gt;I don’t have this solved. But here’s what’s working for me:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The 30-minute rule&lt;/strong&gt; – If I can’t understand the PR in 30 minutes of focused review, it’s too big. Send it back or break it down.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No AI reviewer without human review&lt;/strong&gt; – AI review is a supplement, not a replacement. I still need to read the actual code, not just the summary.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The explain-it test&lt;/strong&gt; – If I can’t explain the core logic to someone else, I don’t approve it. Knowing “the tests passed” isn’t good enough.&lt;/p&gt;
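The 30-minute rule can be made concrete as a pre-review gate. This is a minimal sketch of that idea, assuming hypothetical thresholds (300 lines of code, 1,000 words of AI commentary — the numbers from my worst PR, not a standard):

```python
# Hypothetical sketch: flag PRs whose code size or AI commentary volume
# exceeds what fits in one focused review session. Thresholds are
# illustrative assumptions, not an established rule.

def fits_review_budget(code_lines: int, commentary_words: int,
                       max_lines: int = 300, max_words: int = 1000) -> bool:
    """Return True if the PR is small enough to review end to end."""
    return code_lines <= max_lines and commentary_words <= max_words

# The 3-hour PR from this post: 300 lines plus ~2,000 words of commentary.
print(fits_review_budget(300, 2000))  # → False: the commentary alone blows the budget
```

The point of the sketch is that the gate counts explanation, not just code — the commentary is where the review time actually goes.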

&lt;p&gt;Does this slow me down? Yes.&lt;/p&gt;

&lt;p&gt;Does it help? I think so.&lt;/p&gt;

&lt;p&gt;But I’m also watching my team ship faster by trusting more. And I don’t know if I’m being careful or just stubborn.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where This Leaves Us
&lt;/h2&gt;

&lt;p&gt;I’m caught in the same trap.&lt;/p&gt;

&lt;p&gt;I want to ship faster. But I also want to understand what I’m shipping.&lt;/p&gt;

&lt;p&gt;And the current tools make both feel impossible at the same time.&lt;/p&gt;

&lt;p&gt;Some days I slow down and review everything carefully. Other days I skim and trust.&lt;/p&gt;

&lt;p&gt;And I’m not sure which approach is right anymore.&lt;/p&gt;

&lt;p&gt;Anthropic’s Dario Amodei predicts the industry may be “just six to twelve months away from AI handling most or all of software engineering work from start to finish.”&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;25% of Google’s code&lt;/strong&gt; is already AI-assisted.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;30% of Microsoft’s code&lt;/strong&gt; is AI-generated.&lt;/p&gt;

&lt;p&gt;These aren’t small experiments. This is how we’re building software now.&lt;/p&gt;

&lt;p&gt;But here’s what we’re not saying out loud:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;We’ve replaced code we wrote and didn’t fully understand with code AI wrote and we definitely don’t understand.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We’re not talking about this problem honestly enough.&lt;/p&gt;

&lt;p&gt;The “less coding, more engineering” narrative assumes we’re still doing the review work.&lt;/p&gt;

&lt;p&gt;We’re not.&lt;/p&gt;

&lt;p&gt;We’re skimming AI-generated justifications and hoping for the best.&lt;/p&gt;

&lt;p&gt;Maybe that’s fine. Maybe the tests are good enough. Maybe AI review plus AI generation actually works.&lt;/p&gt;

&lt;p&gt;But we should stop pretending we’re still doing the review work.&lt;/p&gt;

&lt;p&gt;Because “less coding, more engineering” sounds great until you realize:&lt;/p&gt;

&lt;p&gt;We’re not doing more engineering.&lt;/p&gt;

&lt;p&gt;We’re doing more &lt;strong&gt;trusting&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm0pt47fnfowidxqlw48l.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm0pt47fnfowidxqlw48l.gif" width="498" height="211"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;So here’s my question:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Are you actually reviewing AI code? Or are you just hoping the explanation is right?&lt;/p&gt;

&lt;p&gt;Because if it’s the second one—and the data suggests it is—we need to start talking about what comes next.&lt;/p&gt;

&lt;p&gt;The quality gate we automated away isn’t coming back. We need to figure out what replaces it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Next up:&lt;/strong&gt; I’m going to share how I’m breaking down AI-generated features into bite-sized review sessions that force comprehension instead of trust. It’s slower. It’s deliberate. And it might be the only way to stay honest about what we’re shipping.&lt;/p&gt;

&lt;h2&gt;
  
  
  References &amp;amp; Further Reading
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Key Sources:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://fortune.com/2026/01/29/100-percent-of-code-at-anthropic-and-openai-is-now-ai-written-boris-cherny-roon/" rel="noopener noreferrer"&gt;Fortune: “Top engineers at Anthropic, OpenAI say AI now writes 100% of their code”&lt;/a&gt; – Mike Krieger and Boris Cherny interviews, January 2026&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.quantumrun.com/consulting/github-copilot-statistics/" rel="noopener noreferrer"&gt;GitHub Copilot Statistics 2026&lt;/a&gt; – 46% of code AI-generated&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.coderabbit.ai/blog/state-of-ai-vs-human-code-generation-report" rel="noopener noreferrer"&gt;CodeRabbit: “AI code creates 1.7x more issues”&lt;/a&gt; – Cognitive load study, 2025&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.index.dev/blog/developer-productivity-statistics-with-ai-tools" rel="noopener noreferrer"&gt;Index.dev: Developer Productivity Statistics 2026&lt;/a&gt; – 84% adoption, 30% acceptance rate&lt;/li&gt;
&lt;li&gt;&lt;a href="https://help.openai.com/en/articles/11369540-using-codex-with-your-chatgpt-plan" rel="noopener noreferrer"&gt;OpenAI: Using Codex with ChatGPT&lt;/a&gt; – Manual review guidance&lt;/li&gt;
&lt;li&gt;&lt;a href="https://link.springer.com/article/10.1007/s00146-025-02422-7" rel="noopener noreferrer"&gt;Springer: Automation bias in human–AI collaboration&lt;/a&gt; – AI &amp;amp; Society, July 2025&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://indianexpress.com/article/technology/artificial-intelligence/anthropic-100-percent-code-ai-generated-claude-10522033/" rel="noopener noreferrer"&gt;Indian Express: Anthropic’s 100% AI-generated code&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://techcommunity.microsoft.com/blog/appsonazureblog/an-ai-led-sdlc-building-an-end-to-end-agentic-software-development-lifecycle-wit/4491896" rel="noopener noreferrer"&gt;Qodo 2025 AI Code Quality Report&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://arxiv.org/html/2508.18771v1" rel="noopener noreferrer"&gt;arXiv: “Does AI Code Review Lead to Code Changes?”&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://thedecisionlab.com/biases/automation-bias" rel="noopener noreferrer"&gt;The Decision Lab: Automation Bias&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://addyo.substack.com/p/the-reality-of-ai-assisted-software" rel="noopener noreferrer"&gt;Addy Osmani: AI-Assisted Engineering Reality&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The post &lt;a href="https://sudoish.com/the-review-bottleneck-why-ai-explanations-are-making-us-trust-less-not-more/" rel="noopener noreferrer"&gt;The Review Bottleneck: Why AI Explanations Are Making Us Trust Less, Not More&lt;/a&gt; appeared first on &lt;a href="https://sudoish.com" rel="noopener noreferrer"&gt;sudoish&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>aicoding</category>
      <category>claudecode</category>
    </item>
    <item>
      <title>Working Twice as Hard to Be Seen as Average: Life as a Latino Developer</title>
      <dc:creator>Thiago Pacheco</dc:creator>
      <pubDate>Wed, 11 Feb 2026 02:04:07 +0000</pubDate>
      <link>https://forem.com/pacheco/working-twice-as-hard-to-be-seen-as-average-life-as-a-latino-developer-2l95</link>
      <guid>https://forem.com/pacheco/working-twice-as-hard-to-be-seen-as-average-life-as-a-latino-developer-2l95</guid>
      <description>&lt;p&gt;I walked into the conference room with my laptop to set up the infrastructure demo. Before I could connect to the projector, someone asked me to refill the coffee first.&lt;/p&gt;

&lt;p&gt;I had a computer science degree. I was working in infra and support. But they saw a Latino face and assumed “service staff,” not “software engineer.”&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This was in Brazil.&lt;/strong&gt; My own country.&lt;/p&gt;

&lt;p&gt;If the bias is this strong at home, imagine what it’s like abroad.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjs41i7b87pv9h3yuyw7a.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjs41i7b87pv9h3yuyw7a.gif" width="498" height="278"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  When I Moved Abroad
&lt;/h2&gt;

&lt;p&gt;That coffee moment in Brazil taught me the bias runs deep, so deep it exists even at home.&lt;/p&gt;

&lt;p&gt;But when I moved abroad? I learned what &lt;strong&gt;intensity&lt;/strong&gt; means.&lt;/p&gt;

&lt;p&gt;No one asked me to refill coffee anymore. The bias evolved. Got sophisticated.&lt;/p&gt;

&lt;p&gt;I exceeded all expectations. Top performer. Multiple successful projects. When my first promotion came up, leadership hesitated.&lt;/p&gt;

&lt;p&gt;Not because of my work, that was undeniable. But because something about me didn’t fit their mental model of what “senior” looks like.&lt;/p&gt;

&lt;p&gt;It wasn’t just that moment. It was &lt;strong&gt;every single day&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Every code review scrutinized harder. Every meeting where I had to prove my point twice. Every technical decision questioned just a bit more. Every accomplishment met with surprise instead of recognition.&lt;/p&gt;

&lt;p&gt;The weight isn’t in one coffee incident or one delayed promotion.&lt;/p&gt;

&lt;p&gt;The weight is in living it &lt;strong&gt;every day&lt;/strong&gt; , in ways so subtle that calling them out feels like paranoia—until the pattern becomes undeniable.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftp3qwdn9cawsueqx66ak.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftp3qwdn9cawsueqx66ak.gif" width="486" height="354"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  You’re Not Imagining It
&lt;/h2&gt;

&lt;p&gt;If you’re a Latino developer, you’ve felt it. That sense that you need to work twice as hard to prove half as much. That your accent makes people second-guess your skills. That your degree from a Latin American university is worth less.&lt;/p&gt;

&lt;p&gt;Here’s the data: &lt;strong&gt;You’re not imagining it.&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Latinos are &lt;strong&gt;19% of the US population&lt;/strong&gt; but only &lt;strong&gt;5.9-8% of the tech workforce&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;At Google in 2020, despite a $150M diversity commitment, Latinos made up just &lt;strong&gt;5.9%&lt;/strong&gt; of employees&lt;/li&gt;
&lt;li&gt;In core computer/math roles, only &lt;strong&gt;8.3%&lt;/strong&gt; are Latino&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Research on imposter syndrome shows it’s “especially prevalent in underrepresented racial and ethnic minorities” (NIH). Forbes notes that “minorities face bias that makes it harder for them to be promoted or selected for certain roles.”&lt;/p&gt;

&lt;p&gt;This isn’t personal failure. This is &lt;strong&gt;structural exclusion&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc0e07ujf6qvl5jgo9aw2.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc0e07ujf6qvl5jgo9aw2.gif" width="498" height="280"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The Google Interview
&lt;/h2&gt;

&lt;p&gt;Ten years of experience. Proven delivery. Strong references.&lt;/p&gt;

&lt;p&gt;I made it through all the Google interview rounds. The feedback was good.&lt;/p&gt;

&lt;p&gt;Then: &lt;strong&gt;rejected&lt;/strong&gt;. No clear explanation.&lt;/p&gt;

&lt;p&gt;I kept replaying the system design interview. I knew my architecture was sound. But did I explain it the way they expected? Did my phrasing sound uncertain when I was being thoughtful? Did my accent make them doubt my competence?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I’ll never know.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;But I know this: technical competence wasn’t the only thing being evaluated.&lt;/p&gt;

&lt;p&gt;Research on technical interviews confirms it. interviewing.io found that “implicit biases sneak in and people aren’t even aware of them.” Non-native speakers face an inherent disadvantage in interviews where “effective communication is key to success.”&lt;/p&gt;

&lt;p&gt;Here’s the nuance: You might be fluent in English, but &lt;strong&gt;cultural differences in how you convey ideas&lt;/strong&gt; still come across as weakness.&lt;/p&gt;

&lt;p&gt;Brazilian communication style is more relationship-focused, context-aware. North American style is more direct, transactional. &lt;strong&gt;Neither is wrong&lt;/strong&gt;, but one gets judged as “unprofessional.”&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm0m7lsshzcz9d7libnyo.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm0m7lsshzcz9d7libnyo.gif" width="498" height="498"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The Hidden Barriers
&lt;/h2&gt;

&lt;p&gt;These aren’t isolated incidents. They’re patterns.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Name Effect&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Resumes with Latino/Hispanic names get fewer callbacks. Even before the interview, the name creates bias. Some developers anglicize their names to get past this filter.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Cultural Fit Trap&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
“Cultural fit” is often code for “thinks and communicates like us.” When you express ideas differently—even if technically sound—it gets labeled as not fitting in.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Timezone as Invisible Labor&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Working from Brazil/LATAM for North American companies? Early morning or late night calls are YOUR problem to solve. You adjust. They don’t. This invisible labor never counts in performance reviews.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Credential Discount&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Your degree from a top Brazilian university isn’t seen as equal to a North American degree, regardless of actual education quality.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Promotion Ceiling&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
High-performing Latino mid-level developers get held back from senior roles. The bar for “leadership presence” or “communication skills” becomes a convenient filter.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdqahapu0yw25kemsgg3m.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdqahapu0yw25kemsgg3m.gif" width="498" height="278"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The Paradox
&lt;/h2&gt;

&lt;p&gt;Here’s the weird part: this structural imposter syndrome (the one society applies to us and we apply to ourselves) makes us work &lt;strong&gt;10x harder&lt;/strong&gt; to achieve what others achieve easily.&lt;/p&gt;

&lt;p&gt;Which makes us &lt;strong&gt;excellent engineers&lt;/strong&gt;. But also &lt;strong&gt;exhausted humans&lt;/strong&gt; who never feel like we’ve done enough.&lt;/p&gt;

&lt;p&gt;This isn’t just personal. It’s social, structural, cultural. We do it to ourselves, AND the world does it to us.&lt;/p&gt;

&lt;p&gt;The same trait that drives us to over-deliver also prevents us from recognizing our own value. We minimize our contributions. We overestimate North American tech. We stay silent in meetings. We accept lower salaries.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9eff9cwouq3q1a2whcg0.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9eff9cwouq3q1a2whcg0.gif" width="497" height="280"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  What We Bring
&lt;/h2&gt;

&lt;p&gt;Flip the narrative. What do Latino developers bring that North American tech culture often lacks?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Work ethic&lt;/strong&gt; – We’re willing to go further, learn more, prove ourselves repeatedly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Resourcefulness&lt;/strong&gt; – Building with constraints makes better engineers. We know how to do more with less.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cultural intelligence&lt;/strong&gt; – Navigating multiple cultures IS a technical skill. We understand global markets beyond the Silicon Valley bubble.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Relationship-building&lt;/strong&gt; – Brazilian emphasis on personal connections creates stronger, more cohesive teams.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multilingual abilities&lt;/strong&gt; – Our “accent” is proof we’re multilingual. That’s a skill, not a weakness.&lt;/p&gt;

&lt;p&gt;These are competitive advantages. But only if companies recognize them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2p6n9n50b07sy428zg37.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2p6n9n50b07sy428zg37.gif" width="498" height="498"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  What To Do About It
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;For Latino Developers:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Document everything&lt;/strong&gt; – Bias thrives in ambiguity. Keep records of your work and wins.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Build in public&lt;/strong&gt; – Blog, contribute to open source, give talks. Create undeniable proof of competence.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Find the right companies&lt;/strong&gt; – Look for Latino leadership or strong D&amp;amp;I track records. Culture starts at the top.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Practice technical communication explicitly&lt;/strong&gt; – Mock interviews with native speakers help.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Leverage networks&lt;/strong&gt; – Connect with groups like SHPE (Society of Hispanic Professional Engineers).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Own your accent&lt;/strong&gt; – Reframe it as proof of multilingual ability, not a deficit.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For Allies and Managers:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Question your assumptions in code review&lt;/strong&gt; – Am I judging the code or the communicator?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Separate technical competence from communication style&lt;/strong&gt; – Different doesn’t mean worse.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Audit promotion decisions for bias&lt;/strong&gt; – Are Latinos hitting a ceiling in your org?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Value multilingual abilities as a skill&lt;/strong&gt; – Not just a neutral trait.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Champion Latino developers actively&lt;/strong&gt; – Advocate for them in senior roles when they’re ready.&lt;/p&gt;




&lt;h2&gt;
  
  
  Not Just a Brazil Story
&lt;/h2&gt;

&lt;p&gt;I wrote this from the perspective of a Brazilian software engineer. But these patterns aren’t unique to Brazil or Latin America. They’re not unique to tech.&lt;/p&gt;

&lt;p&gt;This is what happens when you’re perceived as “other” in spaces built by and for one dominant culture.&lt;/p&gt;

&lt;p&gt;The coffee. The furniture. The Google rejection.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;These moments happen to Latinos across industries.&lt;/strong&gt; To Africans in European companies. To Asians in Western firms. The details change. The pattern doesn’t.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Honest Truth
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzilkmf45r4a0382yxcda.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzilkmf45r4a0382yxcda.gif" width="498" height="278"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Some days I feel confident. I know I’m good at what I do. I see the systems I’ve built, the people I’ve mentored, the problems I’ve solved.&lt;/p&gt;

&lt;p&gt;Other days, the imposter syndrome wins. I wonder if I’ll ever be “enough.” I replay conversations, second-guessing how I phrased things. I see another rejection and wonder if it was my accent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;That’s okay.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Naming the reality doesn’t make it go away. But it does make it visible. And once it’s visible, we can change it.&lt;/p&gt;

&lt;p&gt;If you’re a Latino developer: You’re not imagining it. You’re not alone. Your work is valuable. Your perspective matters. Keep pushing forward.&lt;/p&gt;

&lt;p&gt;If you’re an ally: Look around your team. Who’s missing? Whose ideas get dismissed? Who has to work twice as hard to get half the credit?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Now you know. What will you do about it?&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Per Scholas: &lt;em&gt;Latino Representation in Tech&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;SQ Magazine: &lt;em&gt;Diversity in Tech Statistics 2026&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;NIH: &lt;em&gt;Imposter Phenomenon in Racially/Ethnically Minoritized Groups&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;BairesDev: &lt;em&gt;Breaking Barriers – Tackling Imposter Syndrome Among Minorities in Tech&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Forbes: &lt;em&gt;How To Navigate Imposter Syndrome – A Hispanic Perspective&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;ACM: &lt;em&gt;Fairness and Bias in Algorithmic Hiring&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;interviewing.io: &lt;em&gt;Unconscious Bias in Technical Interviews&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The post &lt;a href="https://sudoish.com/working-twice-as-hard-to-be-seen-as-average-life-as-a-latino-developer/" rel="noopener noreferrer"&gt;Working Twice as Hard to Be Seen as Average: Life as a Latino Developer&lt;/a&gt; appeared first on &lt;a href="https://sudoish.com" rel="noopener noreferrer"&gt;sudoish&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>career</category>
      <category>diversityinclusion</category>
      <category>careergrowth</category>
      <category>careeradvice</category>
    </item>
    <item>
      <title>Are We Still Developers? The Hidden Cost of Vibe Coding</title>
      <dc:creator>Thiago Pacheco</dc:creator>
      <pubDate>Fri, 06 Feb 2026 03:34:05 +0000</pubDate>
      <link>https://forem.com/pacheco/are-we-still-developers-the-hidden-cost-of-vibe-coding-3209</link>
      <guid>https://forem.com/pacheco/are-we-still-developers-the-hidden-cost-of-vibe-coding-3209</guid>
      <description>

&lt;p&gt;I generated 847 lines of production code in 12 minutes.&lt;/p&gt;

&lt;p&gt;Not pseudocode. Not a prototype. Real, working Python with tests, error handling, and API integration. I described what I wanted to an AI agent, went to grab coffee, came back, and it was done.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It felt incredible.&lt;/strong&gt; Like unlocking god mode. Why would I ever go back to writing code line by line?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu7obpkojne879jhidv1y.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu7obpkojne879jhidv1y.gif" width="498" height="281"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Promise &amp;amp; The Reality
&lt;/h2&gt;

&lt;p&gt;This is the promise of AI-centric development tools like Zencoder, vibe-kanban, and even parts of Cursor. Steve Yegge calls it “vibe coding”: stop micromanaging, trust the AI, let it scaffold entire features while you focus on the big picture. And when you see it work, it’s intoxicating.&lt;/p&gt;

&lt;p&gt;But here’s what happened next.&lt;/p&gt;

&lt;p&gt;I had to review those 847 lines. Every function. Every edge case. Every assumption the AI made about my requirements. Did it handle validation correctly? Is this maintainable? Did it miss something subtle about the business logic?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The review took longer than writing it would have.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F30gx76vfpy5q4k4z94na.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F30gx76vfpy5q4k4z94na.gif" width="498" height="281"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And I still wasn’t confident. So I asked a different AI to review the first AI’s code. It found some issues. I fixed them. But now I’m reviewing AI-generated fixes to AI-generated code, and I’m three layers deep in a review process that feels more like managing a team of junior developers than writing software.&lt;/p&gt;

&lt;p&gt;Then comes the PR. Do I ask my teammates to review 847 lines of AI code? They’ll either spend hours on it (and resent me) or run it through AI themselves (and we’re all just trusting machines at that point).&lt;/p&gt;

&lt;h2&gt;
  
  
  The Identity Question
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;So I have to ask: what does this make us as developers?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If AI writes the code and AI reviews the code, and we’re just approving diffs we don’t fully understand… are we developers anymore? Or are we product managers for code we didn’t write?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkxr2sb7iq6dh6e0vz8vr.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkxr2sb7iq6dh6e0vz8vr.gif" width="498" height="280"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Maybe that’s fine. Maybe AI is good enough now that we should embrace it.&lt;/p&gt;

&lt;p&gt;But I don’t think it is. Not yet. I find issues constantly: bugs the AI missed, patterns it didn’t understand, edge cases it overlooked. And the “big bang” feature implementations don’t feel right to me. &lt;strong&gt;I still value ensuring the AI is building the right thing according to my understanding of the problem.&lt;/strong&gt; And the only way to do that is to review in detail, stay close to the code, and actually understand what’s being built.&lt;/p&gt;

&lt;h2&gt;
  
  
  My Journey Through the Landscape
&lt;/h2&gt;

&lt;p&gt;I’ve been a Vim user for years. Not the “I use Vim btw” meme kind. The “my fingers know hjkl better than WASD” kind. The muscle memory runs deep. So when AI coding assistants exploded onto the scene, I had a choice: migrate to VS Code like everyone else, or figure out how to make AI work in my world.&lt;/p&gt;

&lt;p&gt;Let’s be clear: &lt;strong&gt;I didn’t stick with Vim out of stubbornness.&lt;/strong&gt; I explored the alternatives. I &lt;em&gt;keep&lt;/em&gt; exploring them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1tkgfo3uwcpp2m7501iv.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1tkgfo3uwcpp2m7501iv.gif" width="498" height="278"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Cursor: Too Distant
&lt;/h3&gt;

&lt;p&gt;I’ve tried Cursor. It’s impressive: genuinely AI-first, with inline suggestions and chat that feels magical. But here’s the problem: it makes you &lt;em&gt;too distant from the code&lt;/em&gt;. You’re directing an AI that’s directing the editor. There’s a layer of abstraction I don’t trust yet. I want my hands closer to the metal.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsson0zyugdajpknzrd21.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsson0zyugdajpknzrd21.gif" width="498" height="317"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Zed: Doesn’t Fit My Workflow
&lt;/h3&gt;

&lt;p&gt;I tried Zed. Better balance. It’s code-centric with AI tools bolted on the side: you choose when to invoke them. I liked that. But Zed expects you to work on &lt;em&gt;one project at a time&lt;/em&gt;. That breaks my workflow immediately. I live in tmux with 3-4 projects open, constantly switching contexts, piping output from one terminal to another. Zed doesn’t fit that reality.&lt;/p&gt;

&lt;h3&gt;
  
  
  Zencoder &amp;amp; vibe-kanban: Even More Distant
&lt;/h3&gt;

&lt;p&gt;Then I went to the other extreme: full AI-centric tools like vibe-kanban and Zencoder. &lt;strong&gt;I was motivated to try this route after reading Steve Yegge’s writings on vibe coding.&lt;/strong&gt; The idea is compelling: stop micromanaging the AI, trust the vibes, let it scaffold entire features while you focus on the big picture. So I gave it an honest shot.&lt;/p&gt;

&lt;p&gt;These tools are &lt;em&gt;wild&lt;/em&gt;. You describe what you want, and they scaffold entire features, write tests, integrate APIs. It feels like having a senior dev in a box. Zencoder especially caught my attention. You feel powerful. You ship fast.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7132suvtdrlaea2c8f2r.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7132suvtdrlaea2c8f2r.gif" width="498" height="315"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But here’s the catch: &lt;strong&gt;you’re absurdly distant from the code.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AI writes it. AI organizes it. You review diffs like a manager reviewing PRs. And I don’t trust AI &lt;strong&gt;that much&lt;/strong&gt; yet. Every line it writes, I have to re-review. Does it meet standards? Is it maintainable? Did it miss an edge case? The review overhead is real.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What I realized:&lt;/strong&gt; I still need to review code in detail and ensure features are going in the right direction. The “trust the vibes” approach sounds liberating, but in practice, I’m doing &lt;em&gt;more&lt;/em&gt; cognitive work reviewing after the fact than I would have supervising during development.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Solution: Bite-Sized Pair Programming
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F75e8po3yhod2c8uv2an2.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F75e8po3yhod2c8uv2an2.gif" width="498" height="301"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;So I found a middle ground: bite-sized pair programming with AI.&lt;/strong&gt; The AI does most of the coding. I supervise and keep it on track. I course-correct in real time instead of reviewing a massive diff later. And the best way I’ve found to do that is still &lt;strong&gt;Neovim + tmux&lt;/strong&gt;: AI in one pane, code in another, constant back-and-forth.&lt;/p&gt;

&lt;p&gt;I don’t write code the same way I used to. I used to open a file, think through the problem, and type. Now? I spin up a worktree, open an AI in a terminal pane, and direct the solution instead of typing it character by character. But I stay close, supervising and course-correcting as the work happens.&lt;/p&gt;
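&lt;p&gt;That worktree-per-task flow can be sketched in a few commands. The repo and branch names here are invented for the example:&lt;/p&gt;

```shell
# Hedged sketch: one git worktree per task, in a throwaway demo repo.
set -eu
DIR="$(mktemp -d)"
cd "$DIR"
git init -q repo
cd repo
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m init

# An isolated checkout on its own branch, sitting next to the main one:
git worktree add -q "$DIR/repo-feature-x" -b feature-x
git worktree list

# From tmux, you would then open it in a new window, e.g.:
#   tmux new-window -c "$DIR/repo-feature-x"
```

&lt;p&gt;Each worktree shares the same object store but has its own files and branch, so several tasks (and several AI sessions) can run side by side without stepping on each other.&lt;/p&gt;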

&lt;p&gt;The AI does the heavy lifting. I do the thinking. It’s not full vibe coding. It’s not solo coding either. It’s &lt;strong&gt;collaborative&lt;/strong&gt;, with me staying close enough to catch problems early. The tools are different. The medium is the same: the terminal.&lt;/p&gt;

&lt;h3&gt;
  
  
  AI in the Terminal: Multiple Tools, One Workflow
&lt;/h3&gt;

&lt;p&gt;The real shift wasn’t about finding one perfect plugin. It was about building a workflow that lets me use &lt;strong&gt;whatever AI tool fits the task&lt;/strong&gt; without leaving the terminal.&lt;/p&gt;

&lt;p&gt;For a while, I experimented with different approaches: Claude Code in a split buffer, Codex in a tmux pane, jumping between terminal windows to manage different tools manually. I wasn’t married to any particular setup.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;More recently, I’ve found somewhat of a sweet spot:&lt;/strong&gt; I use &lt;strong&gt;sidekick.nvim&lt;/strong&gt; as my interface layer. It gives me the flexibility to switch between different AI agents when I want to. But in practice? &lt;strong&gt;I mostly default to Claude Code&lt;/strong&gt;. Its rules and configuration are pretty robust right now, plus my company pays for it, so why not use it?&lt;/p&gt;

&lt;p&gt;That’s the real advantage of the terminal workflow: &lt;strong&gt;the flexibility is there when you need it.&lt;/strong&gt; Want to test a new model? Swap it in. Want a second opinion on code? Switch agents mid-task. But you’re not forced to constantly context-switch. You can settle into what works and only switch when it makes sense.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;And here’s a workflow trick I’ve been using:&lt;/strong&gt; implement a feature with one AI tool, then review or test it with a different one. You get a less biased second opinion. If Model A wrote the code and Model B flags the same issues you’re concerned about, you know it’s real. If Model B says “looks good,” you have more confidence. It’s like pair programming, but the second programmer is a completely different intelligence.&lt;/p&gt;

&lt;p&gt;The “best” AI model changes constantly. Claude Code dominates today, but that changes by the hour. Being locked into one tool means you’re always playing catch-up. In the terminal, switching is trivial.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why the Terminal Wins
&lt;/h2&gt;

&lt;p&gt;Neovim scored 83% in Stack Overflow’s 2024 Developer Survey as the most admired IDE, even though VS Code is the most used at 59%. That gap tells you something. People who use Vim don’t just tolerate it. They love it. And it’s not Stockholm syndrome.&lt;/p&gt;

&lt;p&gt;Here’s what keeps me here.&lt;/p&gt;

&lt;h3&gt;
  
  
  Hands on the Keyboard
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft71raru008quqov3le32.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft71raru008quqov3le32.gif" width="498" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When my hands never leave the keyboard, I stay in flow. Every time I reach for the mouse, there’s a micro-decision: where’s the cursor? What am I clicking? Did I miss? Those interruptions compound. Not just in seconds, but in mental overhead.&lt;/p&gt;

&lt;p&gt;Want to find a file? &lt;code&gt;sf&lt;/code&gt; brings up Telescope. Search across everything? &lt;code&gt;sg&lt;/code&gt; for live grep. Navigate between Vim splits, tmux panes, or even tmux windows? &lt;code&gt;Ctrl+h/j/k/l&lt;/code&gt; handles it all seamlessly. Start a new task with a worktree and AI ready? &lt;code&gt;at&lt;/code&gt;. Toggle a floating terminal? &lt;code&gt;;&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;No mouse. No sidebars. No menu hunting. Just muscle memory.&lt;/p&gt;

&lt;p&gt;The AI tools I use (Claude Code, Codex, OpenCode) fit into this. I don’t context-switch to a browser or separate app. I invoke them in a split pane with a keybind. Everything stays in one place.&lt;/p&gt;

&lt;p&gt;Speed isn’t about typing fast. It’s about never stopping.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Terminal is a Toolkit
&lt;/h3&gt;

&lt;p&gt;The terminal isn’t one tool. It’s composable. Want to process JSON from an API? Pipe it to &lt;code&gt;jq&lt;/code&gt;. Transform file paths? &lt;code&gt;sed&lt;/code&gt; or &lt;code&gt;awk&lt;/code&gt;. Run a command on 50 files? &lt;code&gt;find | xargs&lt;/code&gt;. Monitor logs while coding? A tmux split with &lt;code&gt;tail -f&lt;/code&gt;.&lt;/p&gt;
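&lt;p&gt;Here’s a minimal sketch of that kind of composition using only coreutils; the log contents and file names are made up for illustration:&lt;/p&gt;

```shell
# Composability demo: small tools, each doing one thing, glued by pipes.
# Create a sample log (stand-in for real program output).
printf 'INFO start\nERROR timeout\nINFO done\nERROR refused\n' | tee app.log

# Filter, transform, sort: three tools, one pipeline.
errors="$(grep '^ERROR' app.log | awk '{print $2}' | sort)"
echo "$errors"

# Count distinct failures.
count="$(echo "$errors" | wc -l | tr -d ' ')"
echo "$count errors"
```

&lt;p&gt;Any stage can be swapped out without touching the others, and the same pipeline works just as well when the input is an AI agent’s output.&lt;/p&gt;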

&lt;p&gt;When you add AI to this composability, things get wild.&lt;/p&gt;

&lt;p&gt;I can prompt Claude Code in one pane, watch it write code in another, pipe the output through a test runner, grep the results, and feed errors back into the AI. All without leaving the terminal. All scriptable. All reproducible.&lt;/p&gt;

&lt;p&gt;Cursor can’t do that. You can’t pipe Cursor’s output through grep. You can’t script it to run in a loop. It’s a black box. The terminal is Lego blocks, and AI is just another piece.&lt;/p&gt;

&lt;h3&gt;
  
  
  Performance You Actually Feel
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbja3b2hnptzztc5p2r0o.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbja3b2hnptzztc5p2r0o.gif" width="480" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;VS Code and Cursor are Electron apps. They’re running a full Chromium browser under the hood. Neovim is written in C. It opens instantly, uses around 50MB of RAM, and never lags.&lt;/p&gt;

&lt;p&gt;When I spin up 10 tmux tabs with Neovim and AI tools, my system barely blinks. Try opening 10 VS Code windows and listen to your fan scream.&lt;/p&gt;

&lt;p&gt;I juggle 5 to 10 worktrees at any given time. Each one is a separate environment. If each took 500MB of RAM and 3 seconds to load, my workflow would fall apart. Neovim and tmux? Instant, lightweight, snappy.&lt;/p&gt;

&lt;h3&gt;
  
  
  My Setup is Code
&lt;/h3&gt;

&lt;p&gt;My entire development environment is 600 lines of dotfiles in a git repo. New machine? Clone it, run a script, and I’m back up in two minutes. Same keybinds, same plugins, same aliases, same AI integrations.&lt;/p&gt;

&lt;p&gt;GUI tools let you sync settings, but you’re at the mercy of their config systems. With terminal tools, the setup &lt;em&gt;is code&lt;/em&gt;. You can version it, diff it, review it, share it. I can recreate my entire workflow on a fresh VM faster than VS Code can install.&lt;/p&gt;
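&lt;p&gt;A hedged sketch of what such a bootstrap can look like; the repo layout and file names are invented for illustration, not an actual dotfiles repo:&lt;/p&gt;

```shell
# Sketch of a dotfiles bootstrap. Runs in a temp dir standing in for HOME.
set -eu
DEMO="$(mktemp -d)"          # stand-in for the home directory in this sketch
REPO="$DEMO/dotfiles"        # normally: git clone your-dotfiles-repo "$REPO"
mkdir -p "$REPO/.config/nvim"
printf 'set -g mouse off\n' | tee "$REPO/.tmux.conf"

# Symlink each versioned file into place; re-running just refreshes the links.
for f in .tmux.conf .config/nvim; do
  mkdir -p "$DEMO/$(dirname "$f")"
  ln -sfn "$REPO/$f" "$DEMO/$f"
done
ls -l "$DEMO"
```

&lt;p&gt;Because the links point back into the repo, a single &lt;code&gt;git pull&lt;/code&gt; updates every config at once.&lt;/p&gt;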

&lt;h3&gt;
  
  
  These Tools Have Staying Power
&lt;/h3&gt;

&lt;p&gt;Vim came out in 1991. Neovim in 2014. tmux in 2007. They’ve outlived programming languages, frameworks, companies. They’ll outlive Cursor. They’ll outlive Zed. They might outlive JavaScript.&lt;/p&gt;

&lt;p&gt;The muscle memory I’m building, the keybinds, the workflow patterns: they’ll be relevant in 10 years. Will Cursor? Maybe. Probably not. Learning Vim is an investment that compounds. GUI tools are a bet on a company staying solvent.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Control Gradient
&lt;/h2&gt;

&lt;p&gt;Here’s how I think about the spectrum:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Full Control (Traditional Vim)&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
You write every character. You think through every problem. Slow, but you know exactly what’s happening.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI-Assisted (My Current Setup)&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
You direct the solution. AI accelerates execution. You stay close to the code. You review as it’s built, not after. And you can still make trivial edits by hand where that’s faster than asking the agent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI-First (Cursor, Zed)&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
AI suggests, you accept/reject. Fast, but you’re reacting more than creating.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI-Centric (Zencoder, vibe-kanban)&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
AI builds, you review diffs. Fastest, but you’re a product manager for code you didn’t write.&lt;/p&gt;

&lt;p&gt;Right now, &lt;strong&gt;AI-assisted is the sweet spot.&lt;/strong&gt; I get speed without losing control. I stay in my flow. I trust the output because I was &lt;em&gt;there&lt;/em&gt; while it was written.&lt;/p&gt;

&lt;p&gt;But I’ll be honest: &lt;strong&gt;this might not last.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AI is getting scary good at both writing &lt;em&gt;and&lt;/em&gt; reviewing code. It’s catching bugs I miss. It’s flagging patterns I overlook. If AI review becomes as reliable as human review, the argument for staying hands-on gets weaker.&lt;/p&gt;

&lt;p&gt;We might be transitioning to AI-centric whether we like it or not. The question is: how long do I have before “staying close to the code” becomes nostalgia instead of pragmatism?&lt;/p&gt;

&lt;p&gt;For now, I’m staying in the terminal. But I’m watching.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Honest Uncertainty
&lt;/h2&gt;

&lt;p&gt;This workflow makes me feel in control. I know where my code is, how it got there, and what’s happening at every step. I’m not watching an AI build something in the background and hoping it got it right. I’m directing it, reviewing as it goes, staying close to the work.&lt;/p&gt;

&lt;p&gt;That control matters. For now.&lt;/p&gt;

&lt;p&gt;But I’d be lying if I said I’m confident this is the future. AI is getting better at an absurd pace. It’s already catching bugs I miss. It’s writing code that would take me hours in minutes. It’s reviewing my work and finding patterns I overlooked.&lt;/p&gt;

&lt;p&gt;At some point, the argument for staying hands-on stops being pragmatic and starts being nostalgic. Maybe we’re already there and I just don’t want to admit it.&lt;/p&gt;

&lt;p&gt;I keep testing the AI-first tools. Zencoder, vibe-kanban, Cursor. Not because I think they’re worse, but because I want to know when the terminal workflow stops being the smart choice and starts being stubbornness.&lt;/p&gt;

&lt;p&gt;Maybe that happens next week. Maybe it already happened and I’m just slow to see it. AI-centric development might not be a distant future. It might be now, and I’m still clinging to a workflow that makes me comfortable.&lt;/p&gt;

&lt;p&gt;For today, I’m staying in the terminal. It’s faster for me. It fits how I think. It keeps me close to the code in a way that feels right.&lt;/p&gt;

&lt;p&gt;But tomorrow? Who knows.&lt;/p&gt;

&lt;p&gt;The terminal-centric workflow wins for me right now. But I’m watching the gap close. And when it does, I’ll probably switch. Not because I want to, but because it’ll be the obvious move.&lt;/p&gt;

&lt;p&gt;Until then, I’ll keep hitting &lt;code&gt;at&lt;/code&gt; and letting the AI do the heavy lifting while I stay at the wheel.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkp8havgwpdwv2sc8qzrj.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkp8havgwpdwv2sc8qzrj.gif" width="498" height="498"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://sudoish.com/are-we-still-developers-the-hidden-cost-of-vibe-coding/" rel="noopener noreferrer"&gt;Are We Still Developers? The Hidden Cost of Vibe Coding&lt;/a&gt; appeared first on &lt;a href="https://sudoish.com" rel="noopener noreferrer"&gt;sudoish&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>aicoding</category>
      <category>claudecode</category>
    </item>
    <item>
      <title>What I’m Doing to Not Become Irrelevant</title>
      <dc:creator>Thiago Pacheco</dc:creator>
      <pubDate>Mon, 02 Feb 2026 16:46:15 +0000</pubDate>
      <link>https://forem.com/pacheco/what-im-doing-to-not-become-irrelevant-3ah4</link>
      <guid>https://forem.com/pacheco/what-im-doing-to-not-become-irrelevant-3ah4</guid>
      <description>&lt;p&gt;I wrote recently about the &lt;a href="https://dev.to/pacheco/the-developer-identity-crisis-5abi-temp-slug-3423427"&gt;developer identity crisis&lt;/a&gt;. The weird feeling of watching AI do the work we spent years learning to do ourselves.&lt;/p&gt;

&lt;p&gt;But naming the problem isn’t enough. I’ve been thinking about what to actually do about it. What habits keep us sharp. What makes a developer valuable when the execution layer is being automated.&lt;/p&gt;

&lt;p&gt;I don’t have all the answers. But I’ve been experimenting. Here’s what I’m trying.&lt;/p&gt;

&lt;h2&gt;
  
  
  I Still Write Code By Hand
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg2y0ifntq27wkpmtxnr5.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg2y0ifntq27wkpmtxnr5.gif" width="480" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There’s a real risk in the AI-assisted workflow: you stop coding.&lt;/p&gt;

&lt;p&gt;I noticed it happening to me. Language features I used to know cold got fuzzy. I was slower without the AI. Rustier. Looking up things that used to be automatic.&lt;/p&gt;

&lt;p&gt;So I started doing LeetCode again. Not because I’m interviewing anywhere. Just to keep the muscle memory alive.&lt;/p&gt;

&lt;p&gt;It feels a bit like practicing flatground tricks in an empty parking lot. Nobody’s filming you do kickflips for Instagram there. But if you stop drilling the basics, your whole game falls apart when it actually matters.&lt;/p&gt;

&lt;p&gt;I’m not trying to compete with AI on speed. I gave up on that. But I need to keep my judgment sharp enough to evaluate what the AI produces. And that requires actually understanding code at a level you can only maintain by writing it.&lt;/p&gt;

&lt;h2&gt;
  
  
  I’m Learning to Be Visible
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqvyi39yan30fitq4wt5o.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqvyi39yan30fitq4wt5o.gif" width="498" height="498"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This one feels uncomfortable to write about. I’m not naturally someone who promotes their work. I always figured good work speaks for itself.&lt;/p&gt;

&lt;p&gt;It doesn’t. Not anymore.&lt;/p&gt;

&lt;p&gt;When AI can theoretically do what you do, you need to show that you’re the one making things happen. Not in a braggy way. Just… clearly. Document your decisions. Share your thinking in PRs and meetings. Write about what you’re learning.&lt;/p&gt;

&lt;p&gt;If your work is invisible, people will assume AI could replace you. Maybe not consciously. But when cuts happen, the people nobody notices are the first to go.&lt;/p&gt;

&lt;p&gt;I’m still figuring out how to do this without feeling gross about it. But I’ve accepted that it’s part of the job now.&lt;/p&gt;

&lt;h2&gt;
  
  
  Every Developer Is Becoming a Team Lead
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6jyvtg4toi2dz3etyec4.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6jyvtg4toi2dz3etyec4.gif" width="498" height="227"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here’s a weird realization I had: the skills for leading AI agents are the same skills for leading people.&lt;/p&gt;

&lt;p&gt;Breaking down ambiguous problems into clear tasks. Providing enough context so others can execute well. Reviewing work and course-correcting. Unblocking progress. Communicating status.&lt;/p&gt;

&lt;p&gt;That’s management. That’s tech leading. And now that’s also… prompting?&lt;/p&gt;

&lt;p&gt;If you’re orchestrating a bunch of AI agents, you’re basically running a team. A very fast, very literal team that does exactly what you say. Which means you better say the right things.&lt;/p&gt;

&lt;p&gt;The upside is that you have more resources than ever. You can be bolder with what you take on because you have the capacity to actually execute it. But only if you know how to lead.&lt;/p&gt;

&lt;p&gt;Communication skills, leadership skills. These were always important. Now they’re the whole job.&lt;/p&gt;

&lt;h2&gt;
  
  
  I Keep a Dev Journal
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flpxbad55e7cxq3xr1yqi.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flpxbad55e7cxq3xr1yqi.gif" width="498" height="280"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is maybe the simplest habit, and it makes all the others easier.&lt;/p&gt;

&lt;p&gt;Every day I write down what I worked on. What I accomplished. What went wrong. What decisions I made and why.&lt;/p&gt;

&lt;p&gt;It sounds basic. But when AI does the implementation, it’s easy to lose track of your own contributions. The journal anchors it.&lt;/p&gt;

&lt;p&gt;It also makes 1:1s with my manager way easier. Performance reviews too. Instead of scrambling to remember what I did six months ago, I just look it up. I have receipts.&lt;/p&gt;

&lt;p&gt;And there’s something about writing things down that creates clarity. I notice patterns. I see when I’m spinning my wheels. I catch myself before I waste a week on something that doesn’t matter.&lt;/p&gt;

&lt;h2&gt;
  
  
  I Automated My Morning Planning
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkd74gm9jmi3d87b689zf.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkd74gm9jmi3d87b689zf.gif" width="498" height="498"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is the meta move: using AI to help me stay relevant in the AI era.&lt;/p&gt;

&lt;p&gt;I set up an automation with &lt;a href="https://openclaw.ai/" rel="noopener noreferrer"&gt;OpenClaw&lt;/a&gt; that runs every morning. It gathers context from my work. What’s in progress, what’s blocked, what the priorities are. Then it suggests where I should focus, how I can help unblock others, and how to communicate my impact without being annoying about it.&lt;/p&gt;

&lt;p&gt;The output is a day plan broken into pomodoros. I know exactly what I’m doing and when.&lt;/p&gt;

&lt;p&gt;Before this, I’d start my day scattered. Check Slack. Check email. Context-switch for an hour before doing anything useful. Now I start with clarity.&lt;/p&gt;

&lt;p&gt;It’s not about working harder. It’s about not wasting time figuring out what to work on.&lt;/p&gt;

&lt;h2&gt;
  
  
  I Do Things That Aren’t Work
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkde60l12222xgcz12doz.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkde60l12222xgcz12doz.gif" width="498" height="498"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I almost didn’t include this one because it sounds like generic self-help advice. But it’s been more important than I expected.&lt;/p&gt;

&lt;p&gt;The temptation when everything is changing is to grind harder. Learn more. Stay online longer. Keep up with every new tool, every new technique, every new AI model drop.&lt;/p&gt;

&lt;p&gt;That’s a trap.&lt;/p&gt;

&lt;p&gt;I skate. Have for years. And I’ve noticed something: the days I get on my board, I’m better at my job. Not despite the time away from the screen. Because of it.&lt;/p&gt;

&lt;p&gt;There’s something about skateboarding that clears my head in a way nothing else does. You can’t think about work when you’re trying to land a trick. You can’t worry about AI taking your job when you’re focused on not eating concrete. It forces presence.&lt;/p&gt;

&lt;p&gt;And when I come back to work, I think more clearly. I make better decisions. I’m less reactive, less anxious about all the change happening around us.&lt;/p&gt;

&lt;p&gt;I’ve noticed the opposite too. When I skip skating for a few weeks, when I’m just grinding at the computer every day, I get worse. Not better. My thinking gets foggy. I burn out on things I used to enjoy. I make dumber calls.&lt;/p&gt;

&lt;p&gt;If your whole identity is “developer,” and that identity feels threatened, you’re going to spiral. Having something else gives you stability. A sport, a hobby, anything that pulls you out of your head. A foundation that doesn’t shake every time a new model drops.&lt;/p&gt;

&lt;p&gt;Take care of yourself. The developers who burn out won’t be around to see what happens next.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Common Thread
&lt;/h2&gt;

&lt;p&gt;All of this requires being intentional. That’s the actual habit underneath the habits.&lt;/p&gt;

&lt;p&gt;The developers who fall behind will be the ones who just let things happen to them. Who assume their skills stay relevant automatically. Who keep doing what they’ve always done because it worked before.&lt;/p&gt;

&lt;p&gt;The job changed. We have to change with it. Not by panicking, but by building habits that keep us sharp, visible, and grounded.&lt;/p&gt;

&lt;p&gt;I don’t know if I’m doing it right. But I’m trying.&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://sudoish.com/what-im-doing-to-not-become-irrelevant/" rel="noopener noreferrer"&gt;What I’m Doing to Not Become Irrelevant&lt;/a&gt; appeared first on &lt;a href="https://sudoish.com" rel="noopener noreferrer"&gt;sudoish&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>aicoding</category>
      <category>careergrowth</category>
    </item>
    <item>
      <title>The Developer Identity Crisis</title>
      <dc:creator>Thiago Pacheco</dc:creator>
      <pubDate>Sun, 01 Feb 2026 18:41:21 +0000</pubDate>
      <link>https://forem.com/pacheco/the-developer-identity-crisis-ap2</link>
      <guid>https://forem.com/pacheco/the-developer-identity-crisis-ap2</guid>
      <description>&lt;p&gt;I keep seeing variations of this pitch everywhere:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“A bug report hits Jira at 3 AM? Your autonomous agent wakes up, reproduces it, writes the fix, and opens the PR before your alarm goes off.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;That’s from Zencoder’s marketing site. But it’s not just marketing anymore; developers are actually doing this. They’re using tools like Zencoder, Claude Code, OpenClaw, and others to delegate entire features or bug fixes to AI agents that run autonomously while they sleep.&lt;/p&gt;

&lt;p&gt;The workflow is straightforward: assign a GitHub issue to the agent, let it work overnight in a sandboxed environment, wake up to a pull request with passing tests, review the diff, merge it.&lt;/p&gt;

&lt;p&gt;The tests pass. The code works. They ship it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhu2vjpq6ydn3u1d8cs2f.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhu2vjpq6ydn3u1d8cs2f.gif" alt="Ship it squirrel - the eternal developer mantra" width="498" height="281"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And here’s the part that keeps me up at night: many of them admit they don’t fully understand every implementation detail. They review for correctness and patterns, but not line-by-line comprehension. And they’re shipping faster because of it.&lt;/p&gt;

&lt;p&gt;I’ve been thinking about this for weeks. Because I can’t do that. Not yet.&lt;/p&gt;




&lt;h2&gt;
  
  
  My Workflow Isn’t There Yet
&lt;/h2&gt;

&lt;p&gt;I use AI constantly. Claude Code and orchestrators like OpenClaw have transformed how I work. I can spin up features that would have taken days in a fraction of the time. The productivity gains are real.&lt;/p&gt;

&lt;p&gt;But I still review everything. I still read the diffs. I still make sure I understand what went into the codebase before I approve it. I don’t merge code I haven’t checked.&lt;/p&gt;

&lt;p&gt;And here’s the uncomfortable part:  &lt;strong&gt;I’m starting to wonder if that makes me slow.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Because I’m watching people who &lt;em&gt;don’t&lt;/em&gt; review everything. They’re shipping faster. They’re building more. And their code… works. Their products ship. Their companies grow.&lt;/p&gt;

&lt;p&gt;The question that keeps me up at night isn’t “can AI code?” anymore. It’s:  &lt;strong&gt;how much can I trust it before my caution becomes a liability?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8estk69609kuiqv8m3ya.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8estk69609kuiqv8m3ya.gif" alt="This is fine - dog sitting in burning room" width="498" height="280"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The Job Changed Underneath Us
&lt;/h2&gt;

&lt;p&gt;If you’ve been coding for more than a few years, you’ve felt it. The ground shifted. What we do day-to-day looks fundamentally different than it did even 18 months ago.&lt;/p&gt;

&lt;p&gt;We used to write code. Now we orchestrate AI agents that write code.&lt;/p&gt;

&lt;p&gt;We used to debug line by line. Now we describe the bug and let the agent fix it.&lt;/p&gt;

&lt;p&gt;We used to refactor carefully. Now we prompt for a rewrite and evaluate the output.&lt;/p&gt;

&lt;p&gt;The transformation isn’t subtle anymore. AI agent tools crossed some threshold. They’re no longer assistants; they’re doing the actual work.&lt;/p&gt;

&lt;p&gt;And here’s what nobody wants to say out loud:  &lt;strong&gt;they’re often better at it than we are.&lt;/strong&gt;  Faster, definitely. More thorough in edge cases. Less prone to the lazy shortcuts we take when we’re tired.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftfhc8hb6rw17znrg3cnk.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftfhc8hb6rw17znrg3cnk.gif" alt="Confused math lady - trying to process all the changes" width="498" height="326"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The role of “software developer” didn’t disappear. It just became something else entirely. We’re now:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Task organizers&lt;/strong&gt;  — Breaking down requirements into agent-friendly chunks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Configuration engineers&lt;/strong&gt;  — Building rules, skills, and context for AI systems&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Quality evaluators&lt;/strong&gt;  — Reviewing output for correctness, not writing it ourselves&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Visibility champions&lt;/strong&gt;  — Advocating for our work because “just coding” is invisible now&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you’re a developer who just wants to code in peace, I have bad news for you: that job is disappearing.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Pioneers Are Already There
&lt;/h2&gt;

&lt;p&gt;I’m not imagining this shift. Some of the sharpest minds in our industry are actively building for a future where human code review might be optional, and their work reveals just how far the transformation has already gone.&lt;/p&gt;

&lt;h3&gt;
  
  
  Steve Yegge’s Gas Town: The Industrial Coding Factory
&lt;/h3&gt;

&lt;p&gt;Steve Yegge—the legendary blogger who gave us classics like “Stevey’s Google Platforms Rant”—recently launched Gas Town, what he calls “a new take on the IDE for 2026.” It’s not an IDE in any traditional sense. It’s an orchestrator for running &lt;em&gt;dozens&lt;/em&gt; of Claude Code instances simultaneously.&lt;/p&gt;

&lt;p&gt;In his words: &lt;em&gt;“Gas Town is an industrialized coding factory manned by superintelligent robot chimps.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;He describes an 8-stage evolution of the AI-assisted developer, from basic code completions all the way to running 10+ agents at once. If you’re not at stage 6 or 7? &lt;em&gt;“You will not be able to use Gas Town. You aren’t ready yet.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Yegge also created Beads, a persistent memory system for coding agents. Because when you’re running that many agents, you need infrastructure just to keep track of what they know. He’s literally building Kubernetes for AI coders.&lt;/p&gt;

&lt;p&gt;The pattern he describes:  &lt;strong&gt;Prompt → Sleep → Evaluate → Keep or Toss.&lt;/strong&gt;  Work becomes “an uncountable substance that you sling around freely, like slopping shiny fish into wooden barrels at the docks.”&lt;/p&gt;

&lt;p&gt;That’s not coding as we knew it. That’s something else entirely. And Yegge’s doing it right now.&lt;/p&gt;

&lt;h3&gt;
  
  
  ThePrimeagen’s 99: The Neovim Purist’s Answer
&lt;/h3&gt;

&lt;p&gt;On the other end of the spectrum, ThePrimeagen, known for his Neovim evangelism, launched 99, which he describes as “the AI agent that Neovim deserves.”&lt;/p&gt;

&lt;p&gt;His philosophy is different from the full-orchestration approach. 99 is built for developers “who don’t have skill issues”: people who want AI integrated into their existing workflow, not replacing it. It’s about restricted AI interactions: fill in a function, handle a visual selection, stop when you want.&lt;/p&gt;

&lt;p&gt;The interesting part? Even ThePrimeagen, arguably one of the most skilled traditional developers in the content space, is building AI tooling. He’s not fighting the wave; he’s figuring out how to ride it on his own terms.&lt;/p&gt;

&lt;p&gt;These aren’t fringe experiments. These are serious developers building serious tools because they see what’s coming.&lt;/p&gt;




&lt;h2&gt;
  
  
  Caught in the Middle
&lt;/h2&gt;

&lt;p&gt;So where does that leave people like me?&lt;/p&gt;

&lt;p&gt;I see where this is going. I believe the future probably shifts heavily toward these “prompt and trust” workflows. The economics are too compelling. The speed advantages are too real. Companies will gravitate toward developers who ship faster, and right now, that means developers who trust AI more.&lt;/p&gt;

&lt;p&gt;But I’m not there yet. And I’m honestly not sure if my reluctance is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Wisdom:&lt;/strong&gt;  A healthy caution born from experience with software that breaks in subtle ways&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ego:&lt;/strong&gt;  An attachment to feeling like a “real programmer” who understands their code&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fear:&lt;/strong&gt;  Discomfort with a world where my hard-won skills matter less&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Maybe it’s all three.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ja47tqqt6vuutba56w9.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ja47tqqt6vuutba56w9.gif" alt="Squid Game voting buttons - the impossible choice" width="498" height="280"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The honest truth is: I don’t fully trust AI-generated code yet. Not because I’ve seen it fail catastrophically. But because the few times I &lt;em&gt;have&lt;/em&gt; caught issues, they were subtle. The kind of bugs that pass tests but cause problems in production. The kind that make you question everything you didn’t review carefully.&lt;/p&gt;

&lt;p&gt;And yet… those catches are getting rarer. The AI is getting better. Every month the code quality improves, the edge cases get handled more gracefully, the architecture decisions get more sensible.&lt;/p&gt;

&lt;p&gt;At what point does my reviewing become theater? At what point am I just going through the motions because it feels wrong not to, even though I’m not actually catching anything?&lt;/p&gt;

&lt;p&gt;I don’t have an answer.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Uncomfortable Truths We’re Not Talking About
&lt;/h2&gt;

&lt;p&gt;Whether you’re in the “trust fully” camp or the “review everything” camp, some things are happening to all of us.&lt;/p&gt;

&lt;h3&gt;
  
  
  Skill Atrophy is Real
&lt;/h3&gt;

&lt;p&gt;I’ve noticed it in myself. There are language features I used to know cold that I now… don’t. Not because I forgot them entirely, but because I haven’t actually typed them as often. The AI does it.&lt;/p&gt;

&lt;p&gt;When I do need to write code without AI assistance, I’m slower. Rustier. More likely to look things up that used to be automatic.&lt;/p&gt;

&lt;p&gt;Is this a problem? I don’t know. Maybe these skills don’t matter anymore. Maybe they do. But either way, they’re fading.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg7jke3wnuh7rc8oglxfm.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg7jke3wnuh7rc8oglxfm.gif" alt="Pinky and the Brain - what was the password again?" width="498" height="280"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Carelessness Creep
&lt;/h3&gt;

&lt;p&gt;Even when I review code, I’ve caught myself being less thorough than I used to be. When the AI can regenerate something in seconds, the stakes for any individual piece feel lower. Made a mistake? Trash it and re-prompt.&lt;/p&gt;

&lt;p&gt;This creeping carelessness scares me. Not because every line needs to be perfect, but because I’ve noticed myself letting things slide that I wouldn’t have before. Small things. Subtle things. The kind of issues that compound.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpybhtd0gjbaslwwhhfoe.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpybhtd0gjbaslwwhhfoe.gif" alt="Meh shrug - when you stop caring about the details" width="498" height="278"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Code Review Culture is Fragmenting
&lt;/h3&gt;

&lt;p&gt;Here’s where it gets really weird: people are using AI to review code too.&lt;/p&gt;

&lt;p&gt;On one hand, this is powerful. AI can catch patterns and issues that humans miss. But on the other hand… we’re now in a loop where AI writes code and AI reviews code and humans just glance at both and click approve.&lt;/p&gt;

&lt;p&gt;What happens when we stop deeply understanding the systems we’re building? When code becomes something we orchestrate but don’t really know?&lt;/p&gt;

&lt;h3&gt;
  
  
  The Disposable Code Question
&lt;/h3&gt;

&lt;p&gt;Maybe code quality just doesn’t matter the way it used to.&lt;/p&gt;

&lt;p&gt;If AI can rewrite a module in 30 seconds, why spend hours making it elegant? Why obsess over structure, readability, maintainability? Just make it work. When it breaks or needs to change, throw it away and regenerate it.&lt;/p&gt;

&lt;p&gt;This feels wrong to me. But I can’t articulate exactly why. Maybe I’m just attached to an old way of working. Maybe “good code” was always just a means to an end, and now there’s a faster means.&lt;/p&gt;

&lt;p&gt;Maybe code doesn’t need to be human-readable anymore. Maybe it just needs to be AI-readable. And maybe that’s fine.&lt;/p&gt;

&lt;p&gt;I genuinely don’t know.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Paradox: Embrace or Fall Behind
&lt;/h2&gt;

&lt;p&gt;Here’s the impossible choice: embrace AI coding fully and potentially lose something essential, or maintain your review discipline and risk becoming irrelevant.&lt;/p&gt;

&lt;p&gt;If you don’t use these tools, you’re objectively slower than colleagues who trust more and verify less. You deliver less. You look less productive. In a world where companies are cutting costs and measuring output, that’s not sustainable.&lt;/p&gt;

&lt;p&gt;But if you do embrace them fully, you risk becoming dependent. Losing skills. Becoming someone who can’t actually code anymore, just prompt.&lt;/p&gt;

&lt;p&gt;I’m trying to find a middle path. Use AI heavily for generation, but maintain understanding. Trust but verify. It’s uncomfortable. It might not be sustainable. But it’s where I am right now.&lt;/p&gt;

&lt;p&gt;And here’s the kicker: the job now requires something that coding alone never did:  &lt;strong&gt;visibility&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It’s not enough to do good work. You have to be seen doing it. You have to advocate. Document. Communicate impact. Build relationships. The IC who just crushes code tickets in silence? That person is becoming a liability, because from the outside, it’s unclear what AI could or couldn’t do in their place.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Even Is a Developer Anymore?
&lt;/h2&gt;

&lt;p&gt;Here’s where I diverge from the panic.&lt;/p&gt;

&lt;p&gt;A lot of developers are having an identity crisis because “AI can code now.” But honestly? I always thought being a developer was about more than writing code.&lt;/p&gt;

&lt;p&gt;The job was never just about typing syntax. It was always about:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Knowing  &lt;strong&gt;what&lt;/strong&gt;  to build (strategic thinking)&lt;/li&gt;
&lt;li&gt;Knowing  &lt;strong&gt;how to break it down&lt;/strong&gt;  (task decomposition)&lt;/li&gt;
&lt;li&gt;Knowing  &lt;strong&gt;how to evaluate quality&lt;/strong&gt;  (critical judgment)&lt;/li&gt;
&lt;li&gt;Knowing  &lt;strong&gt;how to communicate impact&lt;/strong&gt;  (visibility)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Coding was just the tool we used to execute on those skills. A really important tool, sure.&lt;/p&gt;

&lt;p&gt;What’s happening now is that the tool is changing. The execution layer is being automated. But the thinking, the judgment, the strategy? That’s still ours.&lt;/p&gt;

&lt;p&gt;If anything, AI is forcing the industry to acknowledge what good developers have always known: the hard part was never the syntax. It was figuring out what to build, why it matters, and whether it actually works.&lt;/p&gt;

&lt;p&gt;The developers freaking out about “AI replacing us” are often the ones who built their identity entirely around code execution. And I get it, that was the most visible, most measurable part of the job. It’s what we practiced, what we got good at, what differentiated us.&lt;/p&gt;

&lt;p&gt;But it was never the whole job. And now we’re being forced to reckon with that.&lt;/p&gt;




&lt;h2&gt;
  
  
  We’re the Canary in the Coal Mine
&lt;/h2&gt;

&lt;p&gt;Software developers are experiencing this transformation first not because we’re special, but because we’re closest to the technology. We use AI to automate our own work. Of course it’s impacting us fastest.&lt;/p&gt;

&lt;p&gt;But every profession is next.&lt;/p&gt;

&lt;p&gt;The writer using AI to draft articles. The designer using AI to generate concepts. The analyst using AI to build models. The lawyer using AI to review contracts.&lt;/p&gt;

&lt;p&gt;Everyone is about to face their own version of this identity crisis: &lt;em&gt;“What is my job when AI can do the execution better than I can?”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We don’t have answers yet. We’re navigating in real-time, making it up as we go.&lt;/p&gt;




&lt;h2&gt;
  
  
  Navigating the Blur
&lt;/h2&gt;

&lt;p&gt;I don’t have a clean conclusion here. No five-step framework for staying relevant. No confident prediction about where this goes.&lt;/p&gt;

&lt;p&gt;What I do know:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The technology won’t slow down.&lt;/strong&gt;  We can’t wish our way back to a simpler time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Skills still matter, but which ones matter is shifting.&lt;/strong&gt;  Deep technical knowledge isn’t obsolete, but it might not be sufficient anymore.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Intentionality is everything.&lt;/strong&gt;  We can drift into AI dependency without thinking, or we can engage with these tools critically, keeping what works, discarding what doesn’t, staying honest about trade-offs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Trust isn’t binary.&lt;/strong&gt;  I’m somewhere on the spectrum between “review every line” and “ship without looking.” That’s probably where most of us are. And that’s okay, we’re all calibrating in real-time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;We’re figuring this out together.&lt;/strong&gt;  Nobody has it solved. The developers who seem most confident are probably just better at hiding their uncertainty.&lt;/p&gt;

&lt;p&gt;The job is changing. Our identity as developers is changing with it. The only way through is acknowledging the discomfort and navigating it intentionally.&lt;/p&gt;

&lt;p&gt;I still &lt;strong&gt;check the code&lt;/strong&gt;. I still review the diffs. Maybe that makes me slow. Maybe that makes me careful. Maybe it’s just how I’m wired.&lt;/p&gt;

&lt;p&gt;But I’m watching the people who don’t. And I’m taking notes.&lt;/p&gt;

&lt;p&gt;We’re in uncharted territory.&lt;/p&gt;

&lt;p&gt;And honestly? That’s both terrifying and kind of exciting.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fagvoymf9kn6rc2kbhbsa.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fagvoymf9kn6rc2kbhbsa.gif" alt="Back to the Future - ready for whatever comes next" width="498" height="324"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;What’s your experience with this shift? Are you in the “trust fully” camp, the “review everything” camp, or somewhere in between like me? I’d genuinely love to hear how other developers are navigating this.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://sudoish.com/the-developer-identity-crisis/" rel="noopener noreferrer"&gt;The Developer Identity Crisis&lt;/a&gt; appeared first on &lt;a href="https://sudoish.com" rel="noopener noreferrer"&gt;sudoish&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>aicoding</category>
      <category>claudecode</category>
    </item>
    <item>
      <title>How I Built an AI-Powered Development Workflow That Actually Works (And Increased My Productivity by 300%)</title>
      <dc:creator>Thiago Pacheco</dc:creator>
      <pubDate>Tue, 17 Jun 2025 15:27:23 +0000</pubDate>
      <link>https://forem.com/pacheco/how-i-built-an-ai-powered-development-workflow-that-actually-works-and-increased-my-productivity-372</link>
      <guid>https://forem.com/pacheco/how-i-built-an-ai-powered-development-workflow-that-actually-works-and-increased-my-productivity-372</guid>
      <description>&lt;h2&gt;
  
  
  The Wake-Up Call: An On-Call Experience to Remember
&lt;/h2&gt;

&lt;p&gt;A few months ago, I had my first on-call shift at a company I had recently joined, and it changed the way I approach development forever. Alerts were firing in Sentry, and our monitoring dashboard was a sea of red. Despite having spent nearly 6 months at the company, I realized I only truly understood the small portion of the codebase I worked on daily. The rest felt like a vast, uncharted territory.&lt;/p&gt;

&lt;p&gt;This experience was a stark reminder of the cognitive load that comes with modern software development. We often juggle countless services, databases, and deployment environments, and it can quickly become overwhelming. This on-call shift made me realize that I needed a faster way to fill the gaps in my knowledge, one that could help me manage this complexity without feeling lost.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Mindset Shift: From Custom Configs to AI-Powered Productivity
&lt;/h2&gt;

&lt;p&gt;As someone who has always been passionate about developer productivity, I've spent years fine-tuning my development environment. From crafting the perfect Neovim config to creating scripts that automated my workflow, I was always on the lookout for ways to work smarter. But even with all these optimizations, it felt like there was still room for improvement—especially as AI tools started to emerge and revolutionize the way developers work.&lt;/p&gt;

&lt;p&gt;When I first explored AI-powered editors and tools, it felt like stepping into a new world. I tried out editors like Cursor and Zed to see what AI could do, but ultimately, I wanted to bring that power back into my terminal-centric workflow. The turning point came with the introduction of Claude Code, which allowed me to seamlessly integrate AI into my existing setup, combining the best of both worlds.&lt;/p&gt;

&lt;h2&gt;
  
  
  Integrating AI: The Journey Back to a Smarter Terminal
&lt;/h2&gt;

&lt;p&gt;Claude Code introduced an AI memory system that allowed me to store and retrieve context about my projects effortlessly. Each project directory had its own Claude markdown file, which acted as a living repository of knowledge, capturing everything from debugging breadcrumbs to service startup sequences.&lt;/p&gt;

&lt;p&gt;With AI integrated into my terminal, my workflow transformed. Instead of juggling multiple tools and tabs, I had a single, cohesive environment where AI provided real-time assistance, whether it was pulling information from Sentry, retrieving a relevant Linear ticket, or referencing past debugging sessions.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Tangible Benefits: From Overwhelm to Efficiency (with a Human-in-the-Loop Approach)
&lt;/h2&gt;

&lt;p&gt;Integrating AI into my development workflow wasn't just about convenience; it led to measurable improvements in productivity and confidence. One of the most significant benefits was the drastic reduction in the time it took to diagnose and resolve issues.&lt;/p&gt;

&lt;p&gt;However, it’s important to note that while AI can be a powerful assistant, it isn’t a silver bullet—especially when dealing with legacy codebases or complex architectural quirks. Sometimes, AI suggestions might not be the most optimal or might regurgitate outdated patterns. That’s where the human-in-the-loop approach truly shines.&lt;/p&gt;

&lt;p&gt;By keeping yourself in the driver’s seat, you can leverage AI for what it does best—providing quick insights, surfacing patterns, and reducing cognitive load—while you ensure that the final implementation is sound and aligns with best practices.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion: Embracing AI, One Step at a Time
&lt;/h2&gt;

&lt;p&gt;My journey into the world of AI-powered development has been transformative. By integrating AI thoughtfully, I’ve been able to boost productivity, reduce cognitive load, and keep my focus on what truly matters: solving interesting problems and writing great code.&lt;/p&gt;

&lt;p&gt;In the next post, we’ll dive into the technical side of things: how to set up your own AI-powered development environment, integrate tools like Claude Code, and make the most of this exciting technology.&lt;/p&gt;

</description>
      <category>devs</category>
      <category>ai</category>
      <category>claude</category>
      <category>neovim</category>
    </item>
  </channel>
</rss>
