<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Panav Mhatre</title>
    <description>The latest articles on Forem by Panav Mhatre (@panav_mhatre_732271d2d44b).</description>
    <link>https://forem.com/panav_mhatre_732271d2d44b</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3833062%2F868dcadf-16d4-42a2-ad3d-a93b59c257c1.png</url>
      <title>Forem: Panav Mhatre</title>
      <link>https://forem.com/panav_mhatre_732271d2d44b</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/panav_mhatre_732271d2d44b"/>
    <language>en</language>
    <item>
      <title>I Let Claude Write the Whole Thing. Here's What I Learned the Hard Way.</title>
      <dc:creator>Panav Mhatre</dc:creator>
      <pubDate>Sun, 10 May 2026 14:30:39 +0000</pubDate>
      <link>https://forem.com/panav_mhatre_732271d2d44b/i-let-claude-write-the-whole-thing-heres-what-i-learned-the-hard-way-e69</link>
      <guid>https://forem.com/panav_mhatre_732271d2d44b/i-let-claude-write-the-whole-thing-heres-what-i-learned-the-hard-way-e69</guid>
      <description>&lt;p&gt;Six months ago I shipped a project that I was proud of. Claude wrote about 80% of the code. It worked great on launch day.&lt;/p&gt;

&lt;p&gt;Three weeks later I couldn't add a single feature without something else breaking.&lt;/p&gt;

&lt;p&gt;This wasn't a prompt quality problem. I was writing detailed prompts. I was being specific about what I wanted. The issue was something subtler, and once I understood it, I changed how I work completely.&lt;/p&gt;

&lt;p&gt;Here's what I figured out.&lt;/p&gt;

&lt;h2&gt;
  
  
  The real problem isn't the AI. It's the handoff.
&lt;/h2&gt;

&lt;p&gt;When you work with Claude on a project, there's an invisible handoff happening at every step: you're transferring the responsibility for design decisions to the model, and the model is making those decisions based on what's locally coherent — not what's globally maintainable.&lt;/p&gt;

&lt;p&gt;Every session, Claude starts fresh. It doesn't know why your auth module was structured a certain way last Tuesday. It doesn't know that you made a specific tradeoff in your database layer because of a constraint you discovered three sessions ago. It fills in those gaps with plausible guesses.&lt;/p&gt;

&lt;p&gt;Individually, those guesses are fine. Accumulated over 30 sessions? You have a codebase that works but that nobody — including you — fully understands.&lt;/p&gt;

&lt;p&gt;That's not AI failure. That's a workflow problem.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I changed: session hygiene before anything else
&lt;/h2&gt;

&lt;p&gt;The thing that helped most wasn't better prompts. It was treating each Claude session as a handoff between two contractors who don't share memory.&lt;/p&gt;

&lt;p&gt;Before I start any session that touches existing code, I now write a brief context block. Not a massive doc. Just 5–10 lines covering:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What module I'm working on&lt;/li&gt;
&lt;li&gt;What it's allowed to do (and what it's explicitly not responsible for)&lt;/li&gt;
&lt;li&gt;Any constraints that aren't visible from the code itself&lt;/li&gt;
&lt;li&gt;What "done" looks like for this session&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This sounds bureaucratic. In practice it takes two minutes and it changes the quality of what Claude produces significantly. Not because the model suddenly became smarter — but because it stopped having to guess at things I should have told it.&lt;/p&gt;
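
&lt;p&gt;For concreteness, here's roughly what one of those blocks looks like. The module, paths, and constraint below are placeholders; the shape is what matters:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;Working on: the export module (src/export/), nothing else this session.
Allowed to: read invoice records and produce a CSV string.
Not responsible for: fetching data, auth, retries, or writing files.
Constraint you can't see in the code: invoice IDs repeat across tenants, so never key on them alone.
Done when: exportInvoices() returns a CSV with a header row and the existing tests still pass.
&lt;/code&gt;&lt;/pre&gt;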

&lt;h2&gt;
  
  
  The confidence trap
&lt;/h2&gt;

&lt;p&gt;Here's the part I wish someone had told me earlier: Claude is confident by default.&lt;/p&gt;

&lt;p&gt;When it doesn't know something, it doesn't usually say "I'm not sure how this fits into your architecture." It builds something that fits a reasonable interpretation of what you asked. That output looks right. It passes your quick review. You move on.&lt;/p&gt;

&lt;p&gt;The cost doesn't show up until you're three features deeper and the whole thing starts fighting itself.&lt;/p&gt;

&lt;p&gt;The fix is building in a verification step that isn't "does this look okay to me right now." It's closer to: "does this follow the structural contract this module was supposed to have?" That's a different question, and it requires you to have that contract written down somewhere before you generate the code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scope containment is a superpower
&lt;/h2&gt;

&lt;p&gt;The other shift that helped: I stopped asking Claude to "implement feature X" and started asking it to "implement the data layer for feature X, nothing else."&lt;/p&gt;

&lt;p&gt;Smaller scope, tighter output, easier review. When each session has a single clearly bounded job, the integration work becomes predictable. When sessions are open-ended, you get code that assumes things about adjacent parts of your system — assumptions you won't discover until something breaks in production.&lt;/p&gt;

&lt;p&gt;This isn't about distrusting the model. It's about recognizing that you're the one who holds the system-level view, and acting like it.&lt;/p&gt;

&lt;h2&gt;
  
  
  One practical place to start
&lt;/h2&gt;

&lt;p&gt;If any of this resonates, the most useful thing you can do today isn't switching tools or rewriting prompts. It's starting a &lt;code&gt;PROJECT_CONTEXT.md&lt;/code&gt; file in your repo.&lt;/p&gt;

&lt;p&gt;In it, describe what your project does, what each major module is responsible for, and what the rules of the road are: error-handling conventions, naming patterns, what should never happen in each layer.&lt;/p&gt;
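
&lt;p&gt;A skeleton is plenty to start with. Mine looks roughly like this; the module names and rules below are invented for illustration, so swap in your own:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# PROJECT_CONTEXT.md

## What this is
One paragraph on what the product does and who it's for.

## Modules
- api/     : HTTP routes only, no business logic
- core/    : business rules, never imports from api/
- storage/ : all database access lives here

## Rules of the road
- Errors: core/ throws typed errors, api/ translates them to HTTP codes
- Naming: camelCase functions, PascalCase types
- Never: touch the database directly from api/ handlers
&lt;/code&gt;&lt;/pre&gt;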

&lt;p&gt;Then, at the start of every Claude session that touches existing code, paste the relevant section.&lt;/p&gt;

&lt;p&gt;You'll notice the difference within one or two sessions.&lt;/p&gt;




&lt;p&gt;I put together a free pack around this kind of workflow structure — starter templates, a shipping checklist, and some prompt scaffolding for first-time Claude builders. It's free, no email required, no upsell. If you're working through this problem on a solo project, it might save you a few frustrating weeks: &lt;a href="https://panavy.gumroad.com/l/skmaha" rel="noopener noreferrer"&gt;Ship With Claude — Starter Pack&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you've figured out other ways to handle the continuity problem with Claude sessions, I'd genuinely like to hear them. This is one of those things where everyone seems to be figuring it out independently and not sharing much.&lt;/p&gt;

</description>
      <category>refinehackathon</category>
      <category>claude</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Why Your Claude Code Rots (And How to Stop It Before It Starts)</title>
      <dc:creator>Panav Mhatre</dc:creator>
      <pubDate>Sat, 09 May 2026 04:11:08 +0000</pubDate>
      <link>https://forem.com/panav_mhatre_732271d2d44b/why-your-claude-code-rots-and-how-to-stop-it-before-it-starts-om</link>
      <guid>https://forem.com/panav_mhatre_732271d2d44b/why-your-claude-code-rots-and-how-to-stop-it-before-it-starts-om</guid>
      <description>&lt;p&gt;There's a pattern I've seen repeat across dozens of solo projects and small teams:&lt;/p&gt;

&lt;p&gt;Week 1: Claude generates something. It works. You're flying.&lt;/p&gt;

&lt;p&gt;Week 4: You need to add a feature. You open the codebase and feel faintly ill. You can't tell what's load-bearing and what's scaffolding. You try prompting Claude again. It makes things worse.&lt;/p&gt;

&lt;p&gt;This isn't a model problem. It's a workflow problem — and it happens so early you usually can't see it coming.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Actually Goes Wrong
&lt;/h2&gt;

&lt;p&gt;Most people assume the issue is prompt quality. If their Claude output is messy, they think: "I need better prompts."&lt;/p&gt;

&lt;p&gt;That's not quite right.&lt;/p&gt;

&lt;p&gt;Prompts are requests. What breaks things is the &lt;em&gt;structure you give those requests&lt;/em&gt; — or the absence of one.&lt;/p&gt;

&lt;p&gt;Here's the specific failure mode I see most often:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You ship Claude's first-pass output without establishing a coherent structure first.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Claude is excellent at filling in a shape. It's much weaker at &lt;em&gt;designing&lt;/em&gt; the shape autonomously when it has no context about your constraints, your project's specific invariants, or what "good" looks like for your codebase.&lt;/p&gt;

&lt;p&gt;The result: you get code that compiles, passes your quick test, and ships — but whose internal logic is idiosyncratic. Each module makes slightly different assumptions. There's no consistent error-handling pattern. The naming is half yours, half whatever Claude defaulted to.&lt;/p&gt;

&lt;p&gt;This is fine for a weekend hack. It becomes a serious liability the moment you try to build on top of it.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Debt Is Invisible at First
&lt;/h2&gt;

&lt;p&gt;What makes this especially tricky is that AI-generated technical debt doesn't look like debt.&lt;/p&gt;

&lt;p&gt;Traditional tech debt is ugly. You can see it. It slows you down immediately.&lt;/p&gt;

&lt;p&gt;AI-assisted debt often looks &lt;em&gt;clean&lt;/em&gt;. The functions are named sensibly. There are docstrings. Unit tests exist. But the underlying structure was never designed — it was generated. And generated structure degrades fast under change.&lt;/p&gt;

&lt;p&gt;The tell: every new feature requires you to fight the existing code rather than extend it. Or you find yourself completely re-explaining the codebase to Claude in every new session because it has no idea what your own project is doing.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Fix Isn't More Prompts — It's More Structure
&lt;/h2&gt;

&lt;p&gt;The builders I've watched who actually ship maintainable AI-assisted code all do some version of the same thing:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;They design the interface before they generate the implementation.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before they ask Claude to write code, they've written (in plain English or pseudocode):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What this module is responsible for&lt;/li&gt;
&lt;li&gt;What it is &lt;em&gt;not&lt;/em&gt; responsible for&lt;/li&gt;
&lt;li&gt;What the inputs and outputs look like&lt;/li&gt;
&lt;li&gt;What invariants must hold&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Claude then works inside that container. It fills in a shape you've designed, rather than designing a shape from scratch.&lt;/p&gt;
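
&lt;p&gt;In code, that container can be as small as a typed interface with the contract written as comments. A rough TypeScript sketch, with names invented for illustration:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// invoiceExport.ts - contract written down before any implementation is generated.
//
// Responsible for: turning a list of invoices into a CSV string.
// Not responsible for: fetching invoices, auth, or writing files.
// Invariant: the output always has a header row, even for an empty list.

export interface Invoice {
  id: string;
  customer: string;
  amountCents: number;
}

export interface InvoiceExporter {
  // Pure function: same input, same output, no I/O.
  toCsv(invoices: Invoice[]): string;
}
&lt;/code&gt;&lt;/pre&gt;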

&lt;p&gt;This isn't new engineering advice. It's just that with AI-assisted coding, skipping it has become so easy — and so costly — that it's worth spelling out explicitly.&lt;/p&gt;




&lt;h2&gt;
  
  
  Session Context Is Its Own Problem
&lt;/h2&gt;

&lt;p&gt;The other major source of rot: Claude doesn't remember your project between sessions.&lt;/p&gt;

&lt;p&gt;Every conversation starts fresh. If you don't have a compact, accurate description of your system — something Claude can absorb quickly — you're re-inventing context every time. And if you do that informally ("just tell it what's going on"), Claude builds a picture from your description that may drift from reality.&lt;/p&gt;

&lt;p&gt;A few things that help:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Keep a PROJECT.md that describes your app's architecture in 2-3 paragraphs. Update it as you build. Feed it to Claude at the start of sessions.&lt;/li&gt;
&lt;li&gt;When you add a new module, write its "contract" before generating code: what it does, what it doesn't do, what error cases it handles.&lt;/li&gt;
&lt;li&gt;After any significant refactor, update PROJECT.md before the session ends.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These feel like overhead. They're actually the minimum viable documentation for AI-assisted work — and they pay for themselves in saved debugging time within weeks.&lt;/p&gt;




&lt;h2&gt;
  
  
  Trust Your Output, But Verify the Structure
&lt;/h2&gt;

&lt;p&gt;Claude is confident. That's part of why it's useful. But confidence doesn't mean correctness, and it definitely doesn't mean &lt;em&gt;coherence with your existing system&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;The single question I ask most often when reviewing AI-assisted code:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Could I explain why this is structured this way — not what it does, but &lt;em&gt;why it's shaped like this&lt;/em&gt;?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If the answer is no, there's a problem. Not necessarily a bug, but a structural choice you don't understand and can't maintain.&lt;/p&gt;

&lt;p&gt;That's the moment to pause and refactor before moving forward, not after three more features are built on top of it.&lt;/p&gt;




&lt;h2&gt;
  
  
  A Free Resource
&lt;/h2&gt;

&lt;p&gt;I've been thinking about this problem for a while and put together a small free starter pack — prompts, a checklist, and a workflow structure for going from idea to MVP with Claude in a way that doesn't leave you with a mess.&lt;/p&gt;

&lt;p&gt;It's aimed at CS students, interns, and first-time builders who are shipping with Claude for the first time and want to do it right from the start: &lt;a href="https://panavy.gumroad.com/l/skmaha" rel="noopener noreferrer"&gt;Ship With Claude — Starter Pack&lt;/a&gt; (free, no upsell).&lt;/p&gt;

&lt;p&gt;If you're already experienced, you probably have most of this figured out. If you're just starting, it might save you a few weeks of figuring it out the hard way.&lt;/p&gt;




&lt;p&gt;What patterns have you found that keep AI-assisted codebases healthy? I'm genuinely curious what's working for others — drop a comment.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>productivity</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Your Claude-generated code works. Then it doesn't. Here's why.</title>
      <dc:creator>Panav Mhatre</dc:creator>
      <pubDate>Wed, 06 May 2026 23:24:07 +0000</pubDate>
      <link>https://forem.com/panav_mhatre_732271d2d44b/your-claude-generated-code-works-then-it-doesnt-heres-why-380n</link>
      <guid>https://forem.com/panav_mhatre_732271d2d44b/your-claude-generated-code-works-then-it-doesnt-heres-why-380n</guid>
      <description>&lt;p&gt;I've talked to a lot of developers who use Claude daily and describe the same arc:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"It worked great for the first week. Then the codebase got confusing. Now every new feature breaks something."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This isn't a Claude problem. It's a workflow problem — and once you see it, you can't unsee it.&lt;/p&gt;




&lt;h2&gt;
  
  
  The real issue: AI collapses the gap between "working" and "right"
&lt;/h2&gt;

&lt;p&gt;When you write code by hand, there's a natural forcing function. The act of typing out logic makes you think through it. You notice when something doesn't fit. You slow down at the edges of your understanding.&lt;/p&gt;

&lt;p&gt;With AI assistance, that friction disappears. Claude generates something that compiles, passes a few manual checks, and looks right. You move on.&lt;/p&gt;

&lt;p&gt;Three weeks later, you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Two components doing the same thing, named differently&lt;/li&gt;
&lt;li&gt;Utility functions scattered across files with no clear ownership&lt;/li&gt;
&lt;li&gt;A module that started as a "quick helper" and is now load-bearing&lt;/li&gt;
&lt;li&gt;Zero test coverage on the parts that matter most&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;None of this was the AI's fault. The AI did exactly what you asked. The problem is what you didn't ask — and what you didn't define before you started.&lt;/p&gt;




&lt;h2&gt;
  
  
  What's actually happening under the hood
&lt;/h2&gt;

&lt;p&gt;Claude is a prediction engine operating on the context you give it. If your context is fuzzy — a vague feature description, an implied architecture, an assumption you never stated — it fills that fuzziness in with something plausible.&lt;/p&gt;

&lt;p&gt;"Plausible" is not the same as "aligned with your system."&lt;/p&gt;

&lt;p&gt;When you prompt Claude without first being explicit about:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What already exists and shouldn't change&lt;/li&gt;
&lt;li&gt;What the target state looks like&lt;/li&gt;
&lt;li&gt;What constraints apply (naming, patterns, file structure)&lt;/li&gt;
&lt;li&gt;What "done" actually means&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;...you're essentially asking it to make architectural decisions for you. Sometimes that's fine. Often, it drifts.&lt;/p&gt;




&lt;h2&gt;
  
  
  The shift that actually helps
&lt;/h2&gt;

&lt;p&gt;The most effective Claude users I've seen treat it less like an oracle and more like a contractor.&lt;/p&gt;

&lt;p&gt;You wouldn't hand a contractor a vague brief and walk away. You'd spend time upfront making sure they understand the building's existing structure, what you want added, what they're not allowed to touch, and how you'll know the work is done.&lt;/p&gt;

&lt;p&gt;That's the mental shift.&lt;/p&gt;

&lt;p&gt;Before starting any feature with Claude, I now answer four questions explicitly:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;What exists?&lt;/strong&gt; (Which files are relevant. What the current behavior is.)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;What should change?&lt;/strong&gt; (The specific delta — not a vague description of the end state.)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;What must not change?&lt;/strong&gt; (Explicit constraints. Adjacent code, interfaces, conventions.)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;What does done look like?&lt;/strong&gt; (How will I verify this worked?)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When I brief Claude at this level, output quality goes up significantly. More importantly, the output fits — it doesn't create new confusion to clean up later.&lt;/p&gt;
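
&lt;p&gt;Written out, a brief at that level is only a few lines. Here's an invented example (the file and hook names are placeholders):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;Change: add a CSV export button to the reports page.

1. What exists: ReportsPage.tsx renders a table from useReportData(); there is no export path today.
2. What should change: a new Export button downloads the currently displayed rows as CSV. Nothing else.
3. What must not change: useReportData(), the table markup, the routes, the existing tests.
4. Done when: clicking Export in the browser saves a .csv whose rows match the table exactly.
&lt;/code&gt;&lt;/pre&gt;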




&lt;h2&gt;
  
  
  The hidden cost of prompt speed
&lt;/h2&gt;

&lt;p&gt;There's a seductive productivity loop with AI tools: prompt → accept → move on. It feels like velocity. And short-term, it is.&lt;/p&gt;

&lt;p&gt;The debt shows up later. You're not adding features anymore — you're managing the inconsistency of a codebase that was shaped by dozens of half-specified prompts.&lt;/p&gt;

&lt;p&gt;This is what I'd call &lt;strong&gt;AI-assisted drift&lt;/strong&gt;: the codebase grows, but coherence shrinks. You end up spending more time context-switching and reading AI-generated code you don't fully trust than you would have if you'd been more deliberate from the start.&lt;/p&gt;




&lt;h2&gt;
  
  
  A practical checklist before your next AI coding session
&lt;/h2&gt;

&lt;p&gt;Before you open Claude for your next feature, try this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Write one sentence describing the exact change, not the desired outcome&lt;/li&gt;
&lt;li&gt;[ ] List 2-3 files that are relevant&lt;/li&gt;
&lt;li&gt;[ ] Name at least one thing that must not change&lt;/li&gt;
&lt;li&gt;[ ] Define what you'll check to know it worked&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It takes 3 minutes. It saves hours of cleanup.&lt;/p&gt;




&lt;h2&gt;
  
  
  Free resource if this resonates
&lt;/h2&gt;

&lt;p&gt;I put together a starter pack around this approach — prompt templates, a shipping checklist, and a reusable project structure for keeping Claude-assisted work maintainable over time.&lt;/p&gt;

&lt;p&gt;It's completely free (no email required, no upsell): &lt;strong&gt;&lt;a href="https://panavy.gumroad.com/l/skmaha" rel="noopener noreferrer"&gt;Ship With Claude — Starter Pack&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It's aimed at solo builders and developers who ship real things with Claude and want their projects to stay coherent over time — not just in the first sprint.&lt;/p&gt;




&lt;p&gt;What patterns have you noticed in your own AI-assisted workflow? I'm curious whether the "briefing" approach resonates or if you've found different solutions to the drift problem.&lt;/p&gt;

</description>
      <category>claude</category>
      <category>ai</category>
      <category>productivity</category>
      <category>webdev</category>
    </item>
    <item>
      <title>The Hidden Cost of AI-Generated Code: Why Your Claude Project Becomes Unmaintainable (And How to Fix It)</title>
      <dc:creator>Panav Mhatre</dc:creator>
      <pubDate>Tue, 05 May 2026 04:06:55 +0000</pubDate>
      <link>https://forem.com/panav_mhatre_732271d2d44b/the-hidden-cost-of-ai-generated-code-why-your-claude-project-becomes-unmaintainable-and-how-to-4l89</link>
      <guid>https://forem.com/panav_mhatre_732271d2d44b/the-hidden-cost-of-ai-generated-code-why-your-claude-project-becomes-unmaintainable-and-how-to-4l89</guid>
      <description>&lt;p&gt;You start a project with Claude. The first week is incredible — features land fast, the code runs, you feel like a 10x developer.&lt;/p&gt;

&lt;p&gt;Then, somewhere around week three or four, something quietly shifts. You ask Claude to modify a function and it breaks two other things. You ask it to add a feature and the solution doesn't fit the patterns already in your codebase. You open a file you "wrote" a month ago and realize you can't fully explain what it does.&lt;/p&gt;

&lt;p&gt;This isn't a prompt problem. It's a workflow problem — and it's one that almost nobody talks about.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Real Cost of AI-Assisted Speed
&lt;/h2&gt;

&lt;p&gt;Here's what's actually happening: Claude is remarkably good at producing code that works &lt;em&gt;right now&lt;/em&gt;. It's less naturally optimized for code that stays coherent &lt;em&gt;over time&lt;/em&gt; as your project grows and changes.&lt;/p&gt;

&lt;p&gt;When you move fast without a clear structure, you accumulate what I call &lt;strong&gt;AI debt&lt;/strong&gt; — not just technical debt, but comprehension debt. Code you don't fully understand, architecture decisions that weren't made so much as generated, modules that are internally consistent but don't fit together coherently.&lt;/p&gt;

&lt;p&gt;The warning signs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You've started re-prompting the same kinds of fixes repeatedly&lt;/li&gt;
&lt;li&gt;Claude's suggestions are getting longer and more convoluted&lt;/li&gt;
&lt;li&gt;You feel like you're fighting the codebase instead of building on it&lt;/li&gt;
&lt;li&gt;Onboarding anyone else (or your future self) feels impossible&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Why This Happens: The Handoff Problem
&lt;/h2&gt;

&lt;p&gt;Most developers using Claude fall into one of two failure modes:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. The Oracle Mode:&lt;/strong&gt; Treating Claude as a black box that produces output you ship without fully understanding. Fast in the short term, catastrophic for maintainability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. The Prompt Fatigue Loop:&lt;/strong&gt; Crafting increasingly elaborate prompts trying to control Claude's output, burning time and context on workarounds instead of building.&lt;/p&gt;

&lt;p&gt;Both patterns share a root cause: there's no clear protocol for &lt;em&gt;what you own&lt;/em&gt; and &lt;em&gt;what you're delegating&lt;/em&gt;. When that boundary is fuzzy, comprehension decays across sessions.&lt;/p&gt;




&lt;h2&gt;
  
  
  A More Sustainable Approach
&lt;/h2&gt;

&lt;p&gt;This isn't about prompting better — it's about designing a better working relationship with the model. Here's what actually helps:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Establish ownership before you generate
&lt;/h3&gt;

&lt;p&gt;Before you ask Claude to write something, be specific about what architectural decisions are already made. Don't let the model make structural choices implicitly — those are yours to make explicitly.&lt;/p&gt;

&lt;p&gt;Bad: "Build me a user authentication system"&lt;br&gt;&lt;br&gt;
Better: "Using our existing Express middleware pattern and the session structure in auth.js, add email/password login. The session shape should match what's in types.ts."&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Verify understanding, not just output
&lt;/h3&gt;

&lt;p&gt;The question isn't "does this code work?" — it's "do I understand why it works?" If you can't explain a function to a colleague in plain English, you don't own it yet. Run it past Claude: "Explain what this does in plain English, then tell me what would break if X changed."&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Keep a project brief Claude can reference
&lt;/h3&gt;

&lt;p&gt;One of the highest-leverage things you can do: maintain a short BRIEF.md in your project root. It describes the architecture decisions, the conventions, the parts that are intentionally weird. Paste it at the start of each new session. This simple habit dramatically reduces drift between sessions.&lt;/p&gt;

&lt;p&gt;A brief doesn't need to be long — a few paragraphs covering: what the project does, what the main patterns are, and what NOT to change. That's enough to anchor most sessions.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Distinguish between exploration and production
&lt;/h3&gt;

&lt;p&gt;Some Claude sessions are explorations — prototyping, trying ideas, seeing what's possible. Others are production sessions — shipping real code into your system. These require different modes.&lt;/p&gt;

&lt;p&gt;For exploration: give Claude a lot of latitude. Run fast.&lt;br&gt;&lt;br&gt;
For production: be explicit about constraints, verify outputs carefully, update your brief when architectural decisions get made.&lt;/p&gt;

&lt;p&gt;Mixing these two modes without being intentional about it is one of the biggest sources of maintainability problems.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Treat the context window as a resource
&lt;/h3&gt;

&lt;p&gt;Claude's context is limited. If you're loading a whole codebase into context and asking for a change, you're probably doing it wrong. Instead, think about what &lt;em&gt;minimal context&lt;/em&gt; the model needs to make a good decision. This forces you to understand your own code better — and usually produces better output from Claude too.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Meta-Lesson
&lt;/h2&gt;

&lt;p&gt;The developers I've seen get the most out of Claude long-term aren't the ones with the best prompts. They're the ones with the clearest picture of their own system.&lt;/p&gt;

&lt;p&gt;AI tools give you speed. They can't give you comprehension — that has to come from you. But if you design your workflow around maintaining that comprehension, you get both.&lt;/p&gt;




&lt;h2&gt;
  
  
  A Free Resource
&lt;/h2&gt;

&lt;p&gt;I put together a starter pack for exactly this problem: a workflow + checklist + prompt templates built around maintaining clarity and ownership while using Claude. It's free, no upsell.&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://panavy.gumroad.com/l/skmaha" rel="noopener noreferrer"&gt;Ship With Claude — Starter Pack (free)&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It's aimed at developers and solo builders who are already using Claude and want to stop feeling like they're losing control of their projects. Nine pages, practically oriented, no fluff.&lt;/p&gt;




&lt;p&gt;If you've hit any of these patterns — or found other strategies that work — I'd genuinely love to hear about it in the comments. This is one of those areas where the community's collective experience is way ahead of what's written down anywhere.&lt;/p&gt;

</description>
      <category>claude</category>
      <category>ai</category>
      <category>webdev</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Your Claude-generated code is already out of date. Here's why.</title>
      <dc:creator>Panav Mhatre</dc:creator>
      <pubDate>Mon, 04 May 2026 04:05:59 +0000</pubDate>
      <link>https://forem.com/panav_mhatre_732271d2d44b/your-claude-generated-code-is-already-out-of-date-heres-why-fo1</link>
      <guid>https://forem.com/panav_mhatre_732271d2d44b/your-claude-generated-code-is-already-out-of-date-heres-why-fo1</guid>
      <description>&lt;p&gt;You asked Claude to build a feature. It worked. You shipped it.&lt;/p&gt;

&lt;p&gt;Six weeks later, you're adding something related, and nothing makes sense anymore. The code is technically correct but completely opaque. You can't remember why anything was structured this way. Claude can't figure it out either — it starts guessing, and the guesses start breaking things.&lt;/p&gt;

&lt;p&gt;This is the scenario I keep seeing. And it's not really a Claude problem. It's a workflow problem.&lt;/p&gt;

&lt;h2&gt;
  
  
  The real issue: you're generating before you've defined
&lt;/h2&gt;

&lt;p&gt;When you prompt Claude without first establishing the structure of your system, a few things happen:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Claude fills in ambiguity with plausible-sounding assumptions&lt;/li&gt;
&lt;li&gt;You accept those assumptions because the code works&lt;/li&gt;
&lt;li&gt;The assumptions pile up across sessions&lt;/li&gt;
&lt;li&gt;You end up with a codebase that reflects Claude's guesses, not your actual design intent&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The code isn't wrong. But it's nobody's code. No one owns the decisions, so no one can maintain them.&lt;/p&gt;

&lt;h2&gt;
  
  
  What actually breaks
&lt;/h2&gt;

&lt;p&gt;Here's what this looks like in practice:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Session drift.&lt;/strong&gt; Each conversation with Claude starts fresh. Without a stable reference document — something that defines your architecture, naming conventions, what's in scope and what isn't — Claude reinvents your system every time. Slightly differently each time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;False confidence from passing tests.&lt;/strong&gt; Claude writes tests for the code it just wrote. Those tests pass. But they're testing the implementation, not your intent. You don't catch the drift until something breaks in production or a new feature is impossible to fit cleanly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prompt fatigue.&lt;/strong&gt; You find yourself writing longer and longer context blocks at the start of every session, trying to re-explain your own codebase. This is the tax you pay for not investing in structure upfront.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hidden coupling.&lt;/strong&gt; Claude tends to solve the immediate problem. Cleanly, often. But it doesn't have a strong incentive to consider how this component interacts with the one you'll build three weeks from now. You get local clarity, systemic blur.&lt;/p&gt;

&lt;h2&gt;
  
  
  What actually helps
&lt;/h2&gt;

&lt;p&gt;The fix isn't better prompts. It's better scaffolding before you prompt.&lt;/p&gt;

&lt;p&gt;A few things that make a real difference:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A project brief Claude can reference.&lt;/strong&gt; Not a giant spec. A short document: what this is, what it's not, the key architectural decisions already made, what "done" looks like for the current phase. Two pages is plenty. This becomes the anchor for every session.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Explicit scope per session.&lt;/strong&gt; Before generating anything, tell Claude what you're building in this session and what it should leave alone. This alone reduces a huge amount of unintended drift.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Verify, don't just accept.&lt;/strong&gt; After Claude generates something non-trivial, ask it to explain the tradeoffs it made. Not to praise or critique it — just to surface whether the assumptions it made are actually the assumptions you wanted. You'll be surprised how often they're not.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Maintain a decisions log.&lt;/strong&gt; A simple markdown file where you note: "decided to use X instead of Y because Z." Claude can't remember between sessions, but you can keep the memory alive.&lt;/p&gt;
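
&lt;p&gt;Mine is nothing fancy. The entries below are invented, but the format is the whole trick: the decision plus the reason, one line each:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# DECISIONS.md

- SQLite over Postgres: single-user tool, one file to back up.
- Session token lives in an httpOnly cookie, not localStorage, to shrink the XSS surface.
- Retries belong to the job runner, not to individual tasks, so tasks stay pure.
&lt;/code&gt;&lt;/pre&gt;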

&lt;h2&gt;
  
  
  The underlying shift
&lt;/h2&gt;

&lt;p&gt;The most useful way I've found to think about this: Claude is excellent at execution within a defined context. It's poor at defining the context itself.&lt;/p&gt;

&lt;p&gt;When you skip the definition step and go straight to execution, you're not really saving time — you're just deferring the cost to later, when it's more expensive to fix.&lt;/p&gt;

&lt;p&gt;This is less of a Claude complaint and more of a workflow design principle that's always been true: clarity before implementation reduces rework. AI just makes the cost of skipping that step arrive faster and harder.&lt;/p&gt;




&lt;p&gt;I put together a free starter pack around exactly this — prompts, a project brief template, a shipping checklist, and a reusable session structure I use when building with Claude. It's aimed at solo builders and students who are shipping their first real projects with AI assistance and want to avoid the structural debt that makes things fall apart later.&lt;/p&gt;

&lt;p&gt;It's completely free: &lt;a href="https://panavy.gumroad.com/l/skmaha" rel="noopener noreferrer"&gt;Ship With Claude — Starter Pack&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;No upsell required to get the core material. Download it, try it on one project, and see if it changes how you work.&lt;/p&gt;




&lt;p&gt;Would be curious what others have found helps with this — particularly around keeping context consistent across sessions. Drop a comment if you've found something that works.&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>beginners</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Your Claude-generated code isn't the problem. Your workflow is.</title>
      <dc:creator>Panav Mhatre</dc:creator>
      <pubDate>Thu, 30 Apr 2026 04:03:39 +0000</pubDate>
      <link>https://forem.com/panav_mhatre_732271d2d44b/your-claude-generated-code-isnt-the-problem-your-workflow-is-1p5b</link>
      <guid>https://forem.com/panav_mhatre_732271d2d44b/your-claude-generated-code-isnt-the-problem-your-workflow-is-1p5b</guid>
      <description>&lt;p&gt;I've watched a lot of developers go through the same arc with Claude.&lt;/p&gt;

&lt;p&gt;First session: it's incredible. You ship a feature in an hour you would have spent a day on.&lt;/p&gt;

&lt;p&gt;Third week: something's off. The codebase is getting harder to reason about. You ask Claude to extend something and it makes three changes when you wanted one. You're not sure if you can trust the output without reading all of it. You start copy-pasting less and reviewing more — but now you're reviewing code you didn't write and don't fully understand.&lt;/p&gt;

&lt;p&gt;By month two: you've built something that works, mostly, but you don't feel like you own it. Changing one part breaks another. You spend more time fixing Claude's fixes than building new things.&lt;/p&gt;

&lt;p&gt;Sound familiar?&lt;/p&gt;

&lt;p&gt;The instinct is to blame the prompts. Write better prompts. Add more context. Inject personality into the system message.&lt;/p&gt;

&lt;p&gt;But in my experience, the prompt is rarely the problem.&lt;/p&gt;

&lt;h2&gt;
  
  
  The real issue: no structure beneath the collaboration
&lt;/h2&gt;

&lt;p&gt;When you code alone, you have a system — even if it's informal. You know what a module is responsible for. You make judgment calls about tradeoffs. You hold the architecture in your head.&lt;/p&gt;

&lt;p&gt;When Claude enters the picture, that system often disappears. You describe what you want, Claude generates it, you accept the output, and move on. The judgment calls that used to live in your head now live... nowhere, really.&lt;/p&gt;

&lt;p&gt;The result is what I'd call &lt;strong&gt;AI-generated debt&lt;/strong&gt;: code that runs, passes your quick check, but was never organized around a set of principles that make it easy to maintain or extend. It's not technical debt in the classical sense — it's more like borrowed confidence. The code looks fine until you need to change it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Three patterns that actually help
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Make decisions before you delegate&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before asking Claude to implement anything, write out what the constraints are — not just what you want, but what trade-offs you're willing to accept. Speed vs readability. DRY vs explicitness. If you can't articulate that, Claude will pick defaults that may not match your project's direction.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Verify behavior, not syntax&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Most developers review Claude's output the way you'd review your own code: scanning for obvious mistakes, checking that it compiles. But Claude's mistakes tend to be structural, not syntactic. The function looks correct. The edge case it doesn't handle is the one that will matter in three weeks. Build a habit of asking "what would have to be true for this to break?" instead of "does this look right?"&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Treat your CLAUDE.md or project context file like documentation you'd ship&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The session-by-session memory loss is a real problem, but the bigger issue is that most builders don't have a stable written model of their own project. If you were onboarding a new developer, what would they need to know to make changes without breaking things? That document is what you should be giving Claude every session — and maintaining it as the project evolves is your job, not Claude's.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this matters more as the project grows
&lt;/h2&gt;

&lt;p&gt;Small projects can absorb drift. A 200-line script can be held in your head even if the AI generated most of it. But somewhere around the point where you have multiple files, multiple concerns, and multiple sessions, the invisible contract between you and Claude starts to matter a lot.&lt;/p&gt;

&lt;p&gt;The developers I see getting the most out of Claude long-term aren't the ones with the best prompts. They're the ones who've figured out how to keep themselves in the loop — as architects, as verifiers, as the person who still decides what the thing should be.&lt;/p&gt;




&lt;p&gt;I put together a free starter pack for builders who are hitting this wall. It includes a workflow template, a shipping checklist, and a reusable project structure designed to keep your sessions coherent and your codebase recoverable. No upsell, no email required — just a 9-page PDF you can apply to your next project.&lt;/p&gt;

&lt;p&gt;If you're a CS student, intern, or first-time indie builder trying to ship something real with Claude, it's at: &lt;strong&gt;&lt;a href="https://panavy.gumroad.com/l/skmaha" rel="noopener noreferrer"&gt;https://panavy.gumroad.com/l/skmaha&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Curious what patterns others have found useful here — especially for keeping longer-running projects from drifting.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>productivity</category>
      <category>beginners</category>
    </item>
    <item>
      <title>The Hidden Cost of AI-Assisted Coding: When Your Codebase Becomes a Black Box</title>
      <dc:creator>Panav Mhatre</dc:creator>
      <pubDate>Wed, 29 Apr 2026 04:04:46 +0000</pubDate>
      <link>https://forem.com/panav_mhatre_732271d2d44b/the-hidden-cost-of-ai-assisted-coding-when-your-codebase-becomes-a-black-box-5f47</link>
      <guid>https://forem.com/panav_mhatre_732271d2d44b/the-hidden-cost-of-ai-assisted-coding-when-your-codebase-becomes-a-black-box-5f47</guid>
      <description>&lt;p&gt;There's a specific kind of technical debt that doesn't show up in your linter, doesn't trigger a failing test, and doesn't announce itself until the third sprint — when you need to change something the AI wrote and realize you have no idea how it actually works.&lt;/p&gt;

&lt;p&gt;I've started calling it &lt;strong&gt;AI opacity debt&lt;/strong&gt;, and I think it's becoming one of the bigger hidden costs of building with AI assistants.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Pattern
&lt;/h2&gt;

&lt;p&gt;You're building fast. Claude or Copilot writes a chunk of logic. It works. Tests pass. You ship it.&lt;/p&gt;

&lt;p&gt;Three weeks later, a requirement changes. You open that module and find code that's technically correct but structurally alien — it solves the problem from the angle the AI was aiming at, not from the angle your system needs. Changing one thing requires understanding all of it, and understanding all of it takes longer than you expected because you never built that understanding in the first place.&lt;/p&gt;

&lt;p&gt;This is the compounding effect nobody warns you about: &lt;strong&gt;speed upfront, confusion later&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where It Actually Comes From
&lt;/h2&gt;

&lt;p&gt;Here's the thing — the code quality isn't usually the issue. AI assistants are remarkably good at writing syntactically clean, logically sound code. The problem is structural ownership.&lt;/p&gt;

&lt;p&gt;When you type a feature yourself, you build a map in your head: the edge cases you considered, the tradeoffs you made, the invariants you assumed. When AI generates it, that map exists in the model's latent space and then evaporates. You're left with the artifact, not the reasoning.&lt;/p&gt;

&lt;p&gt;Over time, a codebase built this way starts to feel like a collection of correct answers to the wrong questions.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Prompt Treadmill
&lt;/h2&gt;

&lt;p&gt;A lot of developers compensate by prompting more. When something breaks, they describe the bug and ask the AI to fix it. This works, right up until it doesn't — because you're patching output you don't fully understand with more output you don't fully understand.&lt;/p&gt;

&lt;p&gt;The feedback loop tightens: less understanding, more prompting, less ownership, more fragility.&lt;/p&gt;

&lt;p&gt;I've watched solo builders hit this wall hard. They make fast initial progress, then around week 4-6 their velocity collapses because every change requires re-prompting several other pieces that depended on assumptions they never consciously made.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Actually Helps
&lt;/h2&gt;

&lt;p&gt;The fix isn't fewer AI tools. It's changing your relationship to the output.&lt;/p&gt;

&lt;p&gt;A few things that help in practice:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Before you generate, define the seams.&lt;/strong&gt; Know which parts of your system are structural (data models, API contracts, module boundaries) and never let AI decide those for you. Give those to the model as constraints, not as things to figure out.&lt;/p&gt;
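
&lt;p&gt;Concretely, the seams can be a handful of types you paste into the prompt as fixed constraints rather than open questions. A hypothetical TypeScript example:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// seams.ts - structural decisions the model receives as givens, not as things to design.

// Data model: owned by you.
export interface Order {
  id: string;
  status: "pending" | "paid" | "shipped";
  totalCents: number;
}

// Module boundary: the only surface the rest of the app is allowed to call.
export interface OrdersModule {
  getOrder(id: string): Order | undefined;
  markPaid(id: string): Order;
}
&lt;/code&gt;&lt;/pre&gt;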

&lt;p&gt;&lt;strong&gt;Review for reasoning, not just correctness.&lt;/strong&gt; When AI writes something you'd accept, ask: can I explain &lt;em&gt;why&lt;/em&gt; this works? If not, either ask the model to explain it, or rewrite the part you don't understand. The goal isn't perfect code — it's maintained understanding.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Slow down at integration points.&lt;/strong&gt; AI is excellent at writing isolated components. It's weaker at understanding how components interact over time. Slow down whenever you're stitching things together and make those decisions yourself.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Keep a decision log.&lt;/strong&gt; Even two sentences per significant choice: "Using optimistic updates here because latency matters more than consistency in this flow." This is the map the AI isn't building for you.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Note on Free Resources
&lt;/h2&gt;

&lt;p&gt;If you're a student, intern, or solo builder just getting started with Claude, I put together a free starter pack — prompts, workflow structure, and a shipping checklist — specifically designed to help you avoid these traps from the beginning rather than debugging them six weeks in.&lt;/p&gt;

&lt;p&gt;No upsell, no signup wall. Just a practical starting point: &lt;a href="https://panavy.gumroad.com/l/skmaha" rel="noopener noreferrer"&gt;Ship With Claude — Starter Pack&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Bottom Line
&lt;/h2&gt;

&lt;p&gt;AI-assisted development is genuinely fast. But fast and understood are different things. The builders who get the most out of these tools over time are the ones who stay in the driver's seat — using AI to accelerate execution, not to replace thinking.&lt;/p&gt;

&lt;p&gt;The goal is a codebase you can explain. That's what makes it yours.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>productivity</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Why Your Claude-Generated Code Becomes a Nightmare to Maintain (And How to Fix It)</title>
      <dc:creator>Panav Mhatre</dc:creator>
      <pubDate>Tue, 28 Apr 2026 04:04:28 +0000</pubDate>
      <link>https://forem.com/panav_mhatre_732271d2d44b/why-your-claude-generated-code-becomes-a-nightmare-to-maintain-and-how-to-fix-it-46mk</link>
      <guid>https://forem.com/panav_mhatre_732271d2d44b/why-your-claude-generated-code-becomes-a-nightmare-to-maintain-and-how-to-fix-it-46mk</guid>
      <description>&lt;p&gt;There's a pattern I see constantly in teams that have adopted Claude for coding: the first few weeks feel magical. Features ship fast. PRs go out. Everyone's excited.&lt;/p&gt;

&lt;p&gt;Then around week 6 or 8, things start getting weird.&lt;/p&gt;

&lt;p&gt;A refactor breaks three unrelated things. A bug fix introduces two new ones. The codebase has grown quickly but understanding it feels harder than ever. Nobody quite knows why a piece of code does what it does — or what side effects touching it might cause.&lt;/p&gt;

&lt;p&gt;This isn't a bug in Claude. It's a workflow design problem.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Hidden Cost of "Just Ask Claude"
&lt;/h2&gt;

&lt;p&gt;When you treat Claude as a pure code generator — describe feature, get code, ship it — you're making a trade. You're exchanging long-term comprehension for short-term speed.&lt;/p&gt;

&lt;p&gt;Claude will write code that works. But working code and maintainable code are not the same thing, and Claude doesn't automatically optimize for your ability to reason about it six weeks later.&lt;/p&gt;

&lt;p&gt;The output is shaped by your inputs. If your inputs are mostly "make this work," you'll get code that works. If you never ask Claude to explain its architectural decisions, justify trade-offs, or flag things you might need to revisit, those considerations just don't happen.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Actually Goes Wrong
&lt;/h2&gt;

&lt;p&gt;Three failure modes show up repeatedly:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Implicit assumptions buried in the implementation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Claude will pick one of many valid approaches and implement it. That choice encodes assumptions about your data, your usage patterns, your constraints. If you didn't surface those assumptions in the conversation, they're invisible in the code — until they break.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Local correctness, global incoherence&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Each individual piece of code Claude generates can look great in isolation. But without someone (or something) holding the global picture, the pieces start working against each other. Abstractions don't compose. Naming conventions drift. Logic gets duplicated in slightly different ways.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. False confidence from passing tests&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Claude can write tests for the code it just wrote. Those tests will pass. But tests written by the same process that wrote the code tend to test the code as it was written, not as it should behave. Edge cases that weren't considered in generation aren't considered in testing either.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Different Mental Model
&lt;/h2&gt;

&lt;p&gt;The shift that actually helps is this: stop thinking of Claude as a coding machine and start thinking of it as an incredibly fast, context-limited pair programmer.&lt;/p&gt;

&lt;p&gt;A good pair programmer doesn't just write code — they think out loud, question your assumptions, surface trade-offs, and occasionally say "wait, have we considered what happens when X?" Claude can do all of that too, but only if you create space for it.&lt;/p&gt;

&lt;p&gt;In practice, this means restructuring how you engage:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Before generating&lt;/strong&gt;: Describe not just what you want, but the constraints that matter — performance, readability, testability, the parts of this that will change. Ask Claude to flag design decisions you should know about.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;During generation&lt;/strong&gt;: Keep tasks scoped. If the context grows large enough that Claude has to "remember" things from 50 messages ago, you've lost the ability to audit the reasoning. Smaller scopes = more legible outputs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;After generation&lt;/strong&gt;: Before moving on, ask Claude to explain what it just built as if you were onboarding someone new to it. You'll surface assumptions faster than any code review.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Verification as a first-class step&lt;/strong&gt;: Don't ask Claude to verify its own work with tests it writes. Use external checks. Run the code. Test the boundaries. Claude's self-assessment is useful signal, but it's not a substitute for independent verification.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Compounding Problem
&lt;/h2&gt;

&lt;p&gt;None of this matters much on a one-off script. But on a real product you're planning to ship and iterate on, the compounding effect is brutal.&lt;/p&gt;

&lt;p&gt;AI-generated code that you don't deeply understand accumulates in exactly the same way technical debt does — silently, gradually, until one day the cost of moving forward exceeds the cost of stopping to clean up.&lt;/p&gt;

&lt;p&gt;The difference is that AI debt accrues faster, because the generation speed lets you move past understanding before it catches up with you.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where to Start
&lt;/h2&gt;

&lt;p&gt;If any of this rings true, the practical starting point is simpler than you might think:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Stop treating long Claude sessions as a feature. Context length is a liability when you lose track of what was decided and why.&lt;/li&gt;
&lt;li&gt;Add one review step per session: after Claude generates something, before you ship it, write down what it does in your own words. If you can't, the understanding isn't there yet.&lt;/li&gt;
&lt;li&gt;Separate generation from verification. Don't ask Claude to check Claude's work without an independent step in between.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These aren't advanced techniques — they're workflow habits that preserve your ability to own what you've built.&lt;/p&gt;




&lt;p&gt;If you're hitting this wall and want a structured starting point, I put together a free resource called &lt;strong&gt;Ship With Claude — Starter Pack&lt;/strong&gt; that covers the workflow patterns, task scoping strategies, and verification approaches I use when building with Claude seriously.&lt;/p&gt;

&lt;p&gt;It's free, no upsell: &lt;a href="https://panavy.gumroad.com/l/skmaha" rel="noopener noreferrer"&gt;https://panavy.gumroad.com/l/skmaha&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Built for developers who are past the "wow it works" phase and into the "why is this so hard to maintain" phase.&lt;/p&gt;




&lt;p&gt;What's your experience been? Are there workflow patterns that have helped you keep Claude-generated code comprehensible over time? I'm curious what's working for others.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Why Your AI-Generated Code Works But Your Project Doesn't</title>
      <dc:creator>Panav Mhatre</dc:creator>
      <pubDate>Mon, 27 Apr 2026 04:04:23 +0000</pubDate>
      <link>https://forem.com/panav_mhatre_732271d2d44b/why-your-ai-generated-code-works-but-your-project-doesnt-28gc</link>
      <guid>https://forem.com/panav_mhatre_732271d2d44b/why-your-ai-generated-code-works-but-your-project-doesnt-28gc</guid>
      <description>&lt;p&gt;I've noticed a pattern in how developers describe their AI coding experience.&lt;/p&gt;

&lt;p&gt;First month: "This is incredible. I'm shipping 3x faster."&lt;/p&gt;

&lt;p&gt;Month three: "I've been staring at this codebase for an hour and I genuinely don't understand what I built."&lt;/p&gt;

&lt;p&gt;If that second part sounds familiar, you're not alone — and it's not a prompting problem.&lt;/p&gt;

&lt;h2&gt;
  
  
  The actual issue
&lt;/h2&gt;

&lt;p&gt;When you write code yourself, you're doing two things simultaneously: solving the problem and building a mental model of the solution. The struggle is part of how the understanding gets formed.&lt;/p&gt;

&lt;p&gt;When an AI writes the code for you, you get the artifact without that process. The code can be correct, well-structured, even elegant — and you can still end up with a system you don't really understand.&lt;/p&gt;

&lt;p&gt;This matters because software development is mostly maintenance. Features get changed. Bugs appear in production. Requirements shift. All of that requires you to reason about the system, not just run it.&lt;/p&gt;

&lt;p&gt;If your mental model is shallow, every change becomes expensive.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where it actually breaks down
&lt;/h2&gt;

&lt;p&gt;Here are the three places I see AI-generated codebases fall apart:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. The context grows but the structure doesn't&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You start a project small. You prompt for feature after feature. Claude obliges. But AI output isn't automatically coherent across a session — each prompt fills in the immediate gap without necessarily reinforcing the overall architecture. After 40 features, you have a working product held together by coincidence.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. You optimize for passing tests, not for understanding&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;"Does it work?" becomes the only validation. But passing tests doesn't mean you understand &lt;em&gt;why&lt;/em&gt; it works. When something breaks in a way that doesn't trigger a test, you're stuck.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. The naming is confident but wrong&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AI-generated code tends to have plausible-sounding names for everything. Functions, variables, modules — all named with authority. The problem is that plausible and accurate aren't the same. Misleading names at scale create serious cognitive overhead.&lt;/p&gt;

&lt;h2&gt;
  
  
  What actually helps
&lt;/h2&gt;

&lt;p&gt;This isn't an argument against using AI to build. It's an argument for building your workflow around keeping your own thinking in the loop.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Before you prompt, write a short brief.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Not for Claude — for you. Two or three sentences: what is this component supposed to do, what are its boundaries, what should it definitely &lt;em&gt;not&lt;/em&gt; do? This forces you to have an opinion before you see output. That opinion is the seed of your mental model.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Review like you wrote it.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Don't skim AI output looking for bugs. Read it like you're trying to understand a colleague's code. If you can't explain a function to yourself in plain language, that's a signal — either refactor until you can, or ask Claude to explain what it actually did and why.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Build a working vocabulary for your project.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Early on, define the nouns. What's a "user" vs a "member" vs an "account" in your system? Claude will name things consistently once you establish the terms. Without that, it'll improvise — and you'll spend hours later untangling what everything actually means.&lt;/p&gt;
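
&lt;p&gt;One way to pin the nouns down is a short glossary of types at the top of your brief. A made-up example:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Glossary: each of these nouns means exactly one thing in this project.

// A User is a login identity. It can belong to several Accounts.
export interface User { id: string; email: string; }

// An Account is the billing entity that owns data and a plan.
export interface Account { id: string; plan: string; }

// A Member is a User's role inside one Account. Never call this a "user".
export interface Member { userId: string; accountId: string; role: "admin" | "viewer"; }
&lt;/code&gt;&lt;/pre&gt;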

&lt;p&gt;&lt;strong&gt;Think in phases, not in one big prompt.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Structure → Logic → Polish. Get the shape right before filling in the details. This keeps you making architectural decisions instead of delegating them.&lt;/p&gt;

&lt;h2&gt;
  
  
  The deeper point
&lt;/h2&gt;

&lt;p&gt;The best AI-assisted builders I know aren't faster prompters. They've just figured out which decisions to keep for themselves.&lt;/p&gt;

&lt;p&gt;The speed gains are real. But they compound when you stay in control of the system's structure — not just its output.&lt;/p&gt;




&lt;p&gt;If you're building with Claude and finding that projects get messy or hard to maintain over time, I put together a free starter pack around this workflow — prompts, checklists, and structure templates:&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://panavy.gumroad.com/l/skmaha" rel="noopener noreferrer"&gt;Ship With Claude — Starter Pack (free)&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;No upsell pressure. It's genuinely free. Try it on one project and see if it helps.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;What's your experience been? Have you found approaches that help you stay oriented in AI-assisted codebases?&lt;/em&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Why Your Claude-Generated Code Becomes Unmaintainable (And It's Not the Prompts)</title>
      <dc:creator>Panav Mhatre</dc:creator>
      <pubDate>Sun, 26 Apr 2026 04:04:31 +0000</pubDate>
      <link>https://forem.com/panav_mhatre_732271d2d44b/why-your-claude-generated-code-becomes-unmaintainable-and-its-not-the-prompts-1b50</link>
      <guid>https://forem.com/panav_mhatre_732271d2d44b/why-your-claude-generated-code-becomes-unmaintainable-and-its-not-the-prompts-1b50</guid>
      <description>&lt;p&gt;You've been using Claude for a few months now. Maybe longer. And somewhere along the way, something shifted.&lt;/p&gt;

&lt;p&gt;It started fast. Claude wrote boilerplate, fixed bugs, scaffolded components. It felt like having a senior dev on demand. Then the codebase started getting harder to change. New features broke old ones. You'd ask Claude to modify something and it would produce confident, syntactically correct code that subtly ignored how the rest of the system worked.&lt;/p&gt;

&lt;p&gt;You blamed the prompts. You wrote more detailed prompts. You still got messy output.&lt;/p&gt;

&lt;p&gt;Here's what I've learned after using Claude seriously for over a year of building: &lt;strong&gt;the problem almost never starts with the prompts.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The Real Issue: Treating Output as Outcome
&lt;/h2&gt;

&lt;p&gt;The biggest mistake I see builders make with Claude (and I made it too) is treating generated code as a finished product rather than a draft that requires judgment.&lt;/p&gt;

&lt;p&gt;When a human engineer writes code, there's an implicit understanding of the whole system. They know about the existing patterns, the team conventions, the technical debt in certain modules. Their output carries that context.&lt;/p&gt;

&lt;p&gt;Claude doesn't have that context unless you explicitly provide it. And even when you do, it's working from a frozen snapshot, not a living understanding of your codebase.&lt;/p&gt;

&lt;p&gt;So when you ask Claude to "add a feature to the user dashboard," it does exactly that — optimizing locally for the task you gave it, without awareness of the broader system state. The result looks right. It often works. But it may be quietly inconsistent with how the rest of the system is built.&lt;/p&gt;

&lt;p&gt;Over time, these local-optimal decisions compound into global disorder.&lt;/p&gt;




&lt;h2&gt;
  
  
  The False Confidence Trap
&lt;/h2&gt;

&lt;p&gt;There's a specific failure mode I call &lt;strong&gt;false confidence from AI output&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Claude writes code that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;passes your immediate test&lt;/li&gt;
&lt;li&gt;looks clean and well-commented&lt;/li&gt;
&lt;li&gt;follows reasonable patterns (just not yours)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So you accept it. You ship it. You move on.&lt;/p&gt;

&lt;p&gt;Three months later, someone (probably you) tries to extend that feature. Now you're debugging something that was never quite right — just quietly wrong in a way that didn't matter until it did.&lt;/p&gt;

&lt;p&gt;This isn't Claude being bad at coding. It's Claude being good at writing &lt;em&gt;plausible&lt;/em&gt; code without the system-level awareness needed to write &lt;em&gt;correct-for-your-context&lt;/em&gt; code.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Actually Helps
&lt;/h2&gt;

&lt;p&gt;I've settled on a few structural habits that make Claude dramatically more useful over long-horizon projects:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Scope before you generate&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before you ask Claude to write anything significant, describe the constraint space, not just the desired outcome. What must NOT change? What existing patterns should this follow? Where does this new code slot into the existing architecture?&lt;/p&gt;

&lt;p&gt;This forces you to think about the system first, and gives Claude the context it actually needs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Treat Claude's output as a first draft, always&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Even when the output looks perfect, read it as if you were reviewing a junior engineer's PR. Ask: does this fit the existing patterns? Does it introduce new dependencies or abstractions that aren't justified? Would I write this differently if I knew the whole codebase?&lt;/p&gt;

&lt;p&gt;This isn't skepticism for its own sake — it's how you catch the 10% of cases where the output is quietly wrong.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Define what "done" looks like before you start&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One of the most common sources of AI-assisted debt is ambiguous task scope. When you ask Claude to "refactor the auth module," it will make judgment calls about what to change. Some of those calls may not align with your actual goals.&lt;/p&gt;

&lt;p&gt;Being explicit about scope boundaries — "don't change the API interface, only the internal implementation" — leads to much more predictable and usable output.&lt;/p&gt;
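
&lt;p&gt;In code terms, the boundary can be as small as marking what's frozen. A toy TypeScript sketch (hypothetical names) of a session scoped to "internals only":&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// passwordReset.ts
// Session scope: improve the internals of requestPasswordReset only.
// Off-limits this session: the exported signature, the returned shape, and the data layer.

export type ResetResult = { ok: boolean; message: string };

// Public contract: callers in the dashboard and the mobile API depend on this signature.
export function requestPasswordReset(email: string): ResetResult {
  // Internals: free to restructure, as long as the signature and returned shape stay identical.
  if (!email.includes("@")) {
    return { ok: false, message: "Invalid email address." };
  }
  // ...queue the reset email here...
  return { ok: true, message: "If that address exists, a reset link is on its way." };
}
&lt;/code&gt;&lt;/pre&gt;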

&lt;p&gt;&lt;strong&gt;4. Version by intent, not just by diff&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When you accept a chunk of AI-generated code, write a brief comment or commit message that captures &lt;em&gt;why&lt;/em&gt; you accepted it and what the intent was. Future-you (or your teammate) will thank you when debugging.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Underlying Pattern
&lt;/h2&gt;

&lt;p&gt;All of these habits share something in common: they're about &lt;strong&gt;maintaining your own clarity&lt;/strong&gt; as the developer, rather than outsourcing decisions to Claude.&lt;/p&gt;

&lt;p&gt;Claude is excellent at execution. It's not designed to hold your architectural vision. That's your job. And when builders lose that distinction — treating Claude as a co-architect rather than an executor — the codebase slowly starts to drift from any coherent design.&lt;/p&gt;

&lt;p&gt;The builders I've seen get the most long-term value from Claude aren't the ones with the most elaborate prompts. They're the ones who stay in the driver's seat, use Claude for leverage, and maintain a clear standard for what acceptable output looks like.&lt;/p&gt;




&lt;h2&gt;
  
  
  A Free Resource
&lt;/h2&gt;

&lt;p&gt;I put together a short starter pack covering the workflow structure, scoping patterns, and checklists I actually use when building with Claude. It's aimed at builders who've moved past "wow this is fast" and into "why is my codebase turning into a mess."&lt;/p&gt;

&lt;p&gt;It's completely free — no email capture, no upsell required to get the core content.&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://panavy.gumroad.com/l/skmaha" rel="noopener noreferrer"&gt;Ship With Claude — Starter Pack (free)&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you're a student, intern, or first-time indie builder just getting started, it should save you some of the trial-and-error I went through.&lt;/p&gt;




&lt;p&gt;What patterns have you developed for keeping Claude output maintainable? I'd be curious what others have found — especially on larger or longer-lived projects.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>The Hidden Debt in AI-Assisted Code (And How to Stop Accumulating It)</title>
      <dc:creator>Panav Mhatre</dc:creator>
      <pubDate>Sat, 25 Apr 2026 04:04:07 +0000</pubDate>
      <link>https://forem.com/panav_mhatre_732271d2d44b/the-hidden-debt-in-ai-assisted-code-and-how-to-stop-accumulating-it-4kh0</link>
      <guid>https://forem.com/panav_mhatre_732271d2d44b/the-hidden-debt-in-ai-assisted-code-and-how-to-stop-accumulating-it-4kh0</guid>
      <description>&lt;p&gt;You've probably had this experience: you ask Claude to write a feature, it produces something that looks completely reasonable, you paste it in, tests pass — and then three weeks later you're staring at that code trying to remember why it works that way.&lt;/p&gt;

&lt;p&gt;Not because the code is wrong. Because you don't own it.&lt;/p&gt;

&lt;p&gt;That's the quiet problem with a lot of AI-assisted development right now. The velocity is real. The debt is real too — it just doesn't show up for a while.&lt;/p&gt;

&lt;h2&gt;
  
  
  What AI debt actually looks like
&lt;/h2&gt;

&lt;p&gt;Traditional technical debt is usually about shortcuts: skipped tests, tight coupling, a quick fix that became permanent. You know it's there because you made the tradeoff consciously.&lt;/p&gt;

&lt;p&gt;AI debt is sneakier. The code looks clean. It often passes review. But no one on the team — including the person who "wrote" it — can walk through it confidently. When something breaks at 2am, the fastest path to a fix is asking Claude to debug code that Claude wrote, which Claude no longer has context for.&lt;/p&gt;

&lt;p&gt;You end up with a codebase that runs but that nobody fully understands. That's not a prompting problem. It's a workflow problem.&lt;/p&gt;

&lt;h2&gt;
  
  
  Three patterns that cause it
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Scope creep per session&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The bigger and more sprawling your request, the more Claude has to infer. "Build me an auth system with refresh tokens, rate limiting, and RBAC" produces a lot of code that makes a lot of decisions you didn't explicitly make. Some of those decisions are fine. Some will surprise you later.&lt;/p&gt;

&lt;p&gt;The fix isn't shorter prompts — it's narrower scope per session. One behavior, one contract, one concern. If you can't describe what the function must NOT do, the scope is still too wide.&lt;/p&gt;
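
&lt;p&gt;One way to force that narrowness is to write the contract down before asking for any code. A hypothetical TypeScript sketch, picking one behavior out of the auth example above (all names invented):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// refreshToken.ts
// Scope for this session: ONE behavior out of the auth work, not the whole system.
//
// rotateRefreshToken exchanges a valid refresh token for a new token pair.
//
// It must NOT:
//   - create users, sessions, or roles (RBAC is a separate session)
//   - apply rate limiting (that lives in middleware, not here)
//   - change the shape of TokenPair, which the client already depends on

export type TokenPair = {
  accessToken: string;
  refreshToken: string;
  expiresInSeconds: number;
};

// Contract only for now. Implementation is the next step, once this shape is agreed.
export declare function rotateRefreshToken(currentRefreshToken: string): TokenPair;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The "must NOT" list is the part that keeps the session from sprawling.&lt;/p&gt;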

&lt;p&gt;&lt;strong&gt;2. Treating output as a solution instead of a draft&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When something works, there's a strong pull to move on. But "it works" and "I understand it well enough to maintain it" are different bars. Reading AI-generated code with an ownership mindset — asking "what would I change if requirements shifted?" — surfaces assumptions you didn't make consciously.&lt;/p&gt;

&lt;p&gt;The habit shift: after you get code that works, spend five minutes asking Claude to explain its decisions. The goal isn't to verify correctness; it's to build your own model of what's actually there.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. No reusable project structure&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Most AI-assisted projects start from scratch every session. No shared context about conventions, no established patterns, no agreed-on boundaries between layers. So Claude reinvents slightly differently each time, and over weeks, your codebase accumulates several slightly different opinions about how to handle errors, or structure API calls, or name things.&lt;/p&gt;

&lt;p&gt;The fix: write down your project's conventions once, early. Not a full spec — just the things Claude shouldn't decide for itself. Then include it in relevant sessions.&lt;/p&gt;
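
&lt;p&gt;A conventions file doesn't have to be prose. Here's a hypothetical TypeScript version (the names and API host are made up) that pins down one error shape and one way to call the API, so later sessions extend the same opinion instead of inventing a new one:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// conventions.ts: decided once, early. Pasted into any session that touches errors or API calls.

// Convention 1: every failure that crosses a module boundary carries this shape,
// whether it's returned or thrown. No ad-hoc error objects.
export type AppError = {
  code: "not_found" | "forbidden" | "validation" | "unexpected";
  message: string;
};

// Convention 2: all HTTP calls go through this helper, so the base URL, headers,
// and error mapping live in exactly one place.
export async function apiFetch(path: string, init?: RequestInit) {
  const response = await fetch("https://api.example.test" + path, init);
  if (!response.ok) {
    const error: AppError = { code: "unexpected", message: "API call failed: " + path };
    throw error;
  }
  return response.json();
}
&lt;/code&gt;&lt;/pre&gt;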

&lt;h2&gt;
  
  
  What actually helps
&lt;/h2&gt;

&lt;p&gt;None of this requires new tools or better prompts. It's closer to workflow design:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Define before generating.&lt;/strong&gt; Write what the function does, what it must not do, and what the caller assumes. Give Claude something to be correct about, not just something to fill in.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Keep context scoped.&lt;/strong&gt; One conversation per concern. When context wanders across files and responsibilities, coherence drops.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Review for ownership, not just correctness.&lt;/strong&gt; Run the mental test: could I explain this to a teammate? Could I debug it without AI help? If not, that's the gap to close before moving on.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Establish reusable structure early.&lt;/strong&gt; A short conventions doc, a standard file layout, a checklist of what needs to exist before you ship. These become project memory that lives outside any single conversation.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The speed trap
&lt;/h2&gt;

&lt;p&gt;Here's the thing nobody says out loud: AI tools make the beginning of a project feel incredibly fast, which creates pressure to skip the structural work that makes the &lt;em&gt;middle and end&lt;/em&gt; of a project manageable. The gap shows up around week three or month two, when the codebase is big enough that context doesn't fit cleanly, and changes start having unexpected side effects.&lt;/p&gt;

&lt;p&gt;The builders who get sustained value from AI assistance tend to treat the generated code as a starting point, not a destination. They stay in the loop on decisions. They keep sessions narrow. They build projects in layers, with explicit contracts between them.&lt;/p&gt;

&lt;p&gt;It's less about prompting better and more about preserving your ability to understand and extend what gets built.&lt;/p&gt;




&lt;p&gt;I've been pulling these habits together into a free resource — &lt;strong&gt;&lt;a href="https://panavy.gumroad.com/l/skmaha" rel="noopener noreferrer"&gt;Ship With Claude — Starter Pack&lt;/a&gt;&lt;/strong&gt; — with workflow templates, a shipping checklist, and reusable project structure for going from idea to MVP without accumulating a pile of code you can't maintain. It's genuinely free, no upsell required to use it.&lt;/p&gt;

&lt;p&gt;If you've been running into any of these patterns, I'd be curious what you've found helps. The workflow side of AI-assisted development feels underexplored compared to the prompting side.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>claude</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Your Claude-generated code works. You just can't maintain it.</title>
      <dc:creator>Panav Mhatre</dc:creator>
      <pubDate>Fri, 24 Apr 2026 04:05:18 +0000</pubDate>
      <link>https://forem.com/panav_mhatre_732271d2d44b/your-claude-generated-code-works-you-just-cant-maintain-it-38a5</link>
      <guid>https://forem.com/panav_mhatre_732271d2d44b/your-claude-generated-code-works-you-just-cant-maintain-it-38a5</guid>
      <description>&lt;p&gt;There's a specific kind of frustration I see a lot from developers using Claude for real projects.&lt;/p&gt;

&lt;p&gt;Not "the AI wrote broken code." The code works fine, at least at first.&lt;/p&gt;

&lt;p&gt;The problem is: three weeks later, you can't touch it.&lt;/p&gt;

&lt;p&gt;Functions are 200 lines long. State is managed in three different ways in the same file. There's no clear separation between business logic and UI. You ask Claude to add one small feature and it rewrites half the component, breaking two things to fix one.&lt;/p&gt;

&lt;p&gt;This isn't a prompt quality problem. It's a structural problem — and it comes from how most people use Claude by default.&lt;/p&gt;




&lt;h2&gt;
  
  
  The speed trap
&lt;/h2&gt;

&lt;p&gt;When you're moving fast with AI assistance, the feedback loop feels incredible. You describe what you want. You get working code. You ship it. You describe the next thing.&lt;/p&gt;

&lt;p&gt;The problem is you're not accumulating clarity — you're accumulating complexity. Every session adds a new layer. Claude doesn't have a global view of your project architecture. It fills in gaps based on what it can see in the current context window.&lt;/p&gt;

&lt;p&gt;Over time, the codebase starts to look like it was written by a dozen different developers, none of whom talked to each other.&lt;/p&gt;




&lt;h2&gt;
  
  
  The false confidence problem
&lt;/h2&gt;

&lt;p&gt;Here's what makes this worse: it works.&lt;/p&gt;

&lt;p&gt;You run it. It does the thing. Tests pass (if you wrote any). You move on.&lt;/p&gt;

&lt;p&gt;What you don't see is that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The abstraction level is inconsistent&lt;/li&gt;
&lt;li&gt;There are implicit dependencies between modules that nobody documented&lt;/li&gt;
&lt;li&gt;The "state" for a feature lives in three places&lt;/li&gt;
&lt;li&gt;The naming conventions shifted partway through and nobody reconciled them&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Claude will happily build on top of all of this and make it worse.&lt;/p&gt;




&lt;h2&gt;
  
  
  What actually helps
&lt;/h2&gt;

&lt;p&gt;The shift that made the biggest difference for me wasn't better prompts. It was treating each Claude session with the same discipline I'd apply to working with a capable-but-new contractor.&lt;/p&gt;

&lt;p&gt;Specifically:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Give Claude an explicit scope boundary before starting.&lt;/strong&gt;&lt;br&gt;
Not "add user authentication" but "we're adding auth to the user module only — here's the current structure, here are the interfaces we're using, don't touch the data layer."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Break work into verification checkpoints.&lt;/strong&gt;&lt;br&gt;
Don't hand Claude a 2-hour task and come back to review. Decompose into smaller pieces where you can check the output before it becomes load-bearing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Write the structure before you write the feature.&lt;/strong&gt;&lt;br&gt;
If Claude is about to create a new module, have it write the interface/contract first and get that right before any implementation. It's much easier to fix a bad interface before there's code depending on it.&lt;/p&gt;
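
&lt;p&gt;Concretely, "interface first" can be as small as this (TypeScript, invented module): the only artifact of the first step is a contract you can argue about before any implementation exists.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// notifications.contract.ts
// Step one of the session: agree on this interface. It's much cheaper to argue about
// these ten lines than about 400 lines of generated implementation.

export interface NotificationSender {
  // Queue a notification for delivery; returns the queued notification's id.
  send(userId: string, subject: string, body: string): string;

  // Cancel a queued notification; returns true if something was actually cancelled.
  cancel(notificationId: string): boolean;
}
&lt;/code&gt;&lt;/pre&gt;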

&lt;p&gt;&lt;strong&gt;4. Treat "refactor this" as a high-risk request.&lt;/strong&gt;&lt;br&gt;
Claude refactors aggressively, and the result usually looks correct. But looking correct and being behaviorally equivalent aren't the same thing. If you're not reviewing refactors carefully, you're accumulating silent semantic drift.&lt;/p&gt;
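
&lt;p&gt;Here's the kind of drift I mean, in a deliberately tiny, made-up TypeScript example:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Before: a missing user is treated as a programmer error and fails loudly.
function displayName(user?: { name?: string }): string {
  if (!user) {
    throw new Error("displayName called without a user");
  }
  if (!user.name) {
    return "Anonymous";
  }
  return user.name;
}

// After an aggressive "clean-up" refactor: shorter, compiles, passes the happy-path tests.
// But a missing user now silently becomes "Anonymous" instead of throwing, so the bug
// that used to surface immediately now hides somewhere downstream.
function displayNameRefactored(user?: { name?: string }): string {
  return user?.name ?? "Anonymous";
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Diff-wise it looks like pure cleanup. Behaviorally, it removed an error path.&lt;/p&gt;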

&lt;p&gt;&lt;strong&gt;5. Reset context intentionally.&lt;/strong&gt;&lt;br&gt;
If your session has gotten long and tangled, starting a new one with a clean summary of where things are is usually better than trying to steer a confused long conversation.&lt;/p&gt;




&lt;h2&gt;
  
  
  The thing nobody says out loud
&lt;/h2&gt;

&lt;p&gt;A lot of "Claude got worse" complaints I've seen recently are actually "my project structure accumulated enough debt that Claude can no longer navigate it without making things worse."&lt;/p&gt;

&lt;p&gt;That's not a model problem. That's a workflow problem. And it's fixable — but it requires being intentional about how you work with AI tools, not just what you ask them to do.&lt;/p&gt;




&lt;h2&gt;
  
  
  One more thing
&lt;/h2&gt;

&lt;p&gt;I put together a free starter pack specifically for builders who want to use Claude to ship real projects without the codebase becoming a maintenance nightmare: &lt;a href="https://panavy.gumroad.com/l/skmaha" rel="noopener noreferrer"&gt;Ship With Claude — Starter Pack&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It covers workflow structure, prompt templates, and a shipping checklist — the things that are hard to find because everyone writes about prompts but not about the layer underneath them. It's free ($0), no upsell required to get value from it.&lt;/p&gt;

&lt;p&gt;Happy to dig into any of this in the comments — especially if you've found patterns that work or patterns that reliably blow up.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>webdev</category>
      <category>beginners</category>
    </item>
  </channel>
</rss>
