<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Lars Faye | Confident Coding</title>
    <description>The latest articles on Forem by Lars Faye | Confident Coding (@larsfaye).</description>
    <link>https://forem.com/larsfaye</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3804618%2F29d9b6d4-7936-4e5b-b683-4774f2a3b8b6.png</url>
      <title>Forem: Lars Faye | Confident Coding</title>
      <link>https://forem.com/larsfaye</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/larsfaye"/>
    <language>en</language>
    <item>
      <title>Agentic Coding is a Trap | Remaining vigilant about cognitive debt and atrophy.</title>
      <dc:creator>Lars Faye | Confident Coding</dc:creator>
      <pubDate>Thu, 30 Apr 2026 01:47:47 +0000</pubDate>
      <link>https://forem.com/larsfaye/agentic-coding-is-a-trap-remaining-vigilant-about-cognitive-debt-and-atrophy-2bo8</link>
      <guid>https://forem.com/larsfaye/agentic-coding-is-a-trap-remaining-vigilant-about-cognitive-debt-and-atrophy-2bo8</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;"AI does the coding, and the human in the loop is the orchestrator"&lt;/em&gt; &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is the sentiment currently being hyped across the industry: traditional coding is all but dead, and Spec Driven Development (SDD) is the future. You generate a plan and disconnect from writing any code. The agents know better and handle all the implementation. You are there as &lt;em&gt;the expert&lt;/em&gt;: to provide "good taste", review the outputs, and constantly steer the agent(s) to execute the plan you meticulously put together.&lt;/p&gt;

&lt;p&gt;The workflow takes many shapes at this point, but in general, it is a process where someone defines the project's requirements (simultaneously at a micro &lt;em&gt;and&lt;/em&gt; macro level), generates a plan, and then &lt;a href="https://blog.quent.in/blog/2026/03/09/one-more-prompt-the-dopamine-trap-of-agentic-coding/" rel="noopener noreferrer"&gt;pulls the slot machine lever&lt;/a&gt; over and over, iterating and reiterating with often &lt;em&gt;multiple&lt;/em&gt; agent instances until it's done. All the while, a growing distance opens between the "orchestrator" and the code being generated and committed.&lt;/p&gt;

&lt;p&gt;Coding agents are helpful and powerful, but there are already some quantifiable trade-offs that need to be discussed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;An increase in the complexity of the surrounding systems to mitigate the increased ambiguity of AI's non-determinism.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Atrophying skills for a wide swath of the population.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Vendor lock-in for individuals and entire teams (Claude Code outages have already brought entire teams to a standstill).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fluctuating and increasing costs to access the tools. An employee's cost is fixed; tokens are a constantly moving target.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Success with this approach to coding agents hinges on a crucial element: only a skilled developer, thinking critically and comfortable operating at the architectural level, can spot issues in the thousands of lines of generated code &lt;em&gt;before&lt;/em&gt; they become a problem. &lt;/p&gt;

&lt;p&gt;Yet, in an ironic twist of fate, it's the individual's critical thinking skills and cognitive clarity that AI tooling has now been proven to &lt;a href="https://margaretstorey.com/blog/2026/02/18/cognitive-debt-revisited/" rel="noopener noreferrer"&gt;impact negatively&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Not Just Another "Abstraction"
&lt;/h2&gt;

&lt;p&gt;A common refrain we hear in the community is that programmers are just "moving up the stack" and into a different type of abstraction. Whether or not these tools are really an abstraction layer in the first place is not a settled matter; a higher level of ambiguity is not a higher level of abstraction.&lt;/p&gt;

&lt;p&gt;If we put that to the side, though, it is true that programmers tend to be wary of new languages and new ways of programming. When FORTRAN was released, programmers were skeptical of it, too. The claims were similar: it was likely to introduce more bugs and instability, and writing assembly directly was more efficient. Later, there would be discourse around compilers introducing too much "magic" into the process. These were normative arguments around a fear of what &lt;em&gt;might&lt;/em&gt; be lost if these new technologies were embraced.&lt;/p&gt;

&lt;p&gt;The difference today is that those previous fears were speculative and theoretical. In just the few short years that AI tooling has existed, we are already seeing significant impacts. And not just among &lt;em&gt;junior developers&lt;/em&gt;; even those with a decade (or more) of experience are reporting them:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg1bvbvi4fknxan0ee687.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg1bvbvi4fknxan0ee687.png" alt=" " width="765" height="537"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn83tk4vvvukqjvhcxacz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn83tk4vvvukqjvhcxacz.png" alt=" " width="752" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhrj4vh7wra5fqrkjp9it.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhrj4vh7wra5fqrkjp9it.png" alt=" " width="770" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhd9u4vb2bql3ydwm8rt1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhd9u4vb2bql3ydwm8rt1.png" alt=" " width="770" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Junior developers face an even steeper climb, as we curtail their hands-on work with code and replace it with reviewing generated code. Reviewing code is important, but it's only 50% of the learning process, at best. Without the friction and challenges that come with working in code directly, their ability to learn is seriously diminished.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdwrvynf5ul5j8dzk9uga.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdwrvynf5ul5j8dzk9uga.png" alt=" " width="765" height="498"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8vjeef32lgfh6oib887f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8vjeef32lgfh6oib887f.png" alt=" " width="752" height="464"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Studying this phenomenon takes time, so gathering anecdotal evidence is important for a real-time view of the situation. But it has also been studied formally, and there are &lt;a href="https://www.media.mit.edu/publications/your-brain-on-chatgpt/" rel="noopener noreferrer"&gt;numerous&lt;/a&gt; &lt;a href="https://www.404media.co/microsoft-study-finds-ai-makes-human-cognition-atrophied-and-unprepared-3/" rel="noopener noreferrer"&gt;reports&lt;/a&gt; reinforcing that this is a real phenomenon.&lt;/p&gt;

&lt;h3&gt;
  
  
  It actually is different this time.
&lt;/h3&gt;

&lt;p&gt;When a C++ developer moved to Java or Python, they didn't complain of brain fog. When a sysadmin moved to AWS, they didn't feel like they were losing their ability to understand networking.&lt;/p&gt;

&lt;p&gt;A Senior Engineer losing their coding edge and becoming "rusty" over time as they move into managerial roles and practice coding less is not a new phenomenon. This was the natural progression of expertise: an engineer with &lt;strong&gt;decades&lt;/strong&gt; of coding and friction logged had the time to solidify those skills into wisdom. And they could apply that wisdom when their job became less about syntax and more about higher-level architectural decisions. Not only are those individuals exceedingly rare, but we won't get the next wave of seniors if we all abdicate the friction of writing, problem-solving, and debugging. &lt;/p&gt;

&lt;p&gt;What is happening right now is a trend where developers who've never had that longevity, or the 30+ years of friction that produced that deep understanding, are being moved into higher-level workflows where managing the AI agents demands the very skills the senior engineer took decades to obtain. &lt;/p&gt;

&lt;p&gt;However, Senior Engineers aren't immune, either. &lt;a href="https://simonwillison.net/2026/Feb/15/cognitive-debt/" rel="noopener noreferrer"&gt;Simon Willison&lt;/a&gt;, a developer with nearly 30 years' experience, has reported not having a &lt;em&gt;"firm mental model of what [the applications] can do and how they work, which means each additional feature becomes harder to reason about"&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The "Skilled" Orchestrator Problem
&lt;/h2&gt;

&lt;p&gt;Buried in a recent &lt;a href="https://www.anthropic.com/research/how-ai-is-transforming-work-at-anthropic#and-less-hands-on-practice" rel="noopener noreferrer"&gt;study&lt;/a&gt; by Anthropic was a surprisingly honest moment addressing the risks of engaging with coding agents on a regular basis:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;One reason that the atrophy of coding skills is concerning is the “paradox of supervision” [...] effectively using Claude requires supervision, and supervising Claude requires the very coding skills that &lt;u&gt;may atrophy from AI overuse&lt;/u&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Sandor Nyako, &lt;a href="https://www.businessinsider.com/leaders-worry-about-skill-atrophy-due-to-ai-adoption-2025-10#8786254a-3fe1-4407-98a2-7c2eaac66b6f" rel="noopener noreferrer"&gt;Director of Software Engineering at LinkedIn&lt;/a&gt;, who oversees 50 engineers, has noticed the atrophy proliferating throughout the organization and has asked his team not to use AI for &lt;em&gt;"tasks that require critical thinking or problem-solving."&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"To grow skills, people need to go through hardship. They need to develop the muscle to think through problems," he said. "How would someone question if AI is accurate if they don't have critical thinking?"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;There is also the question of what constitutes "overuse". We already have evidence, both &lt;a href="https://www.anthropic.com/research/AI-assistance-coding-skills" rel="noopener noreferrer"&gt;data-driven&lt;/a&gt; and &lt;a href="https://www.youtube.com/watch?v=pzkwn3hu1Cc" rel="noopener noreferrer"&gt;anecdotal&lt;/a&gt;, that these skills can atrophy and dissipate rather quickly (within months in some cases). &lt;/p&gt;

&lt;p&gt;This is the contradiction that has many AI boosters talking out of both sides of their mouths: the use of coding agents is actively diminishing the very skills needed to effectively manage them. &lt;/p&gt;

&lt;h2&gt;
  
  
  LLMs accelerate the wrong parts.
&lt;/h2&gt;

&lt;p&gt;Contrary to the narrative currently being espoused, we didn't necessarily &lt;em&gt;need&lt;/em&gt; to write code faster, especially code we didn't fully understand, and particularly in swaths so huge we couldn't review them in reasonable time frames. &lt;/p&gt;

&lt;h3&gt;
  
  
  Before AI, a (good) developer's priority list might look like:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Understanding of the code and its relation to the codebase&lt;/li&gt;
&lt;li&gt;If the code is aligned with the documented and efficient standards&lt;/li&gt;
&lt;li&gt;As few lines of code as needed to accomplish the goal (while maintaining readability)&lt;/li&gt;
&lt;li&gt;Turnaround time&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Agentic coding, and LLMs in general, &lt;em&gt;completely invert this list&lt;/em&gt;.
&lt;/h3&gt;

&lt;p&gt;Their capabilities, and the way they're used, focus on speed: increasing the amount of code that can be generated in a given time frame. Speed is a natural byproduct of high aptitude; when it's forced instead, it leads to lower accuracy. The integration of these tools rarely focuses on deeper understanding or conciseness.&lt;/p&gt;

&lt;p&gt;Can they be used that way? Yes, with determination, they certainly can be.&lt;/p&gt;

&lt;p&gt;Are they? No, not really; forced mandates and &lt;a href="https://www.pymnts.com/artificial-intelligence-2/2026/ai-adoption-is-being-measured-in-tokens-but-the-metric-falls-short-experts-say/" rel="noopener noreferrer"&gt;hype around token usage across organizations&lt;/a&gt; demonstrate as much.&lt;/p&gt;

&lt;h2&gt;
  
  
  Coding === Planning
&lt;/h2&gt;

&lt;p&gt;There is a divide between developers that isn't highlighted often enough: &lt;em&gt;some of us plan, and think, better with code&lt;/em&gt;. Thinking and working in code isn't just meaningless drudgery; it forces you to think about things on a technical level that involves everything from security to performance to user experience to maintainability. &lt;/p&gt;

&lt;p&gt;In a recent &lt;a href="https://youtu.be/IGsbARhERqc?t=501" rel="noopener noreferrer"&gt;interview&lt;/a&gt; discussing "Spec Driven Development", Dax, the creator of OpenCode &lt;em&gt;(an open-source coding agent, no less)&lt;/em&gt;, said:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“When working on something new or something challenging, &lt;u&gt;me typing out code is the process by which I figure out what we should even be doing&lt;/u&gt;. &lt;br&gt;&lt;br&gt;
I have a really tough time just sitting there, writing out a giant spec on exactly how the feature should work. &lt;br&gt;&lt;br&gt;I like writing out types. I like writing out how some of the functions might play together. I like playing with folder structure to see what the different concepts should be. And this is all stuff that I think most people—most programmers—have always done. I don't really see a good reason why I would stop that personally, because it's how I figure out what to do.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;What you &lt;em&gt;say&lt;/em&gt; is often not what you &lt;em&gt;mean&lt;/em&gt;, and LLMs fill in ambiguity with assumptions (or hallucinations), which leads to &lt;strong&gt;more&lt;/strong&gt; review, &lt;strong&gt;more&lt;/strong&gt; agent revisions, &lt;strong&gt;more&lt;/strong&gt; tokens burned, and &lt;strong&gt;more&lt;/strong&gt; disconnection from what is being created. Conversely, you can marvel at the most beautiful, unambiguous, perfectly structured prompt you've ever written, and the LLM can still output a hallucinated method, because it is fundamentally a next-token-prediction engine, not a compiler. You cannot replace a deterministic system with a probabilistic one and expect zero ambiguity. &lt;/p&gt;

&lt;p&gt;Even the most &lt;a href="https://www.youtube.com/watch?v=cv6rwHTGT5w" rel="noopener noreferrer"&gt;AI-enthusiastic senior developers&lt;/a&gt; are starting to see this disconnection as a looming and growing issue. &lt;/p&gt;

&lt;h2&gt;
  
  
  Vendor Lock-In
&lt;/h2&gt;

&lt;p&gt;When I was browsing LinkedIn during the recent Claude outage, I noticed numerous &lt;a href="https://www.linkedin.com/pulse/claudes-outages-show-dark-side-ai-productivity-total-system-lam-osbjc/" rel="noopener noreferrer"&gt;posts&lt;/a&gt; highlighting that certain developers and engineering teams were at a standstill. Their workflows, and their own coding abilities, had &lt;em&gt;already&lt;/em&gt; reached a point of being largely dependent on these vendors. What used to be a skill they could execute with just a keyboard and a text editor suddenly required a subscription to an AI model provider. &lt;/p&gt;

&lt;h3&gt;
  
  
  You can't predict your token cost.
&lt;/h3&gt;

&lt;p&gt;Model providers are &lt;a href="https://www.theverge.com/ai-artificial-intelligence/917380/ai-monetization-anthropic-openai-token-economics-revenue" rel="noopener noreferrer"&gt;heavily subsidized&lt;/a&gt;, and the models themselves are built on shifting sands. Every new model release follows the same pattern: high benchmarks, followed by hype, followed by the reality of usage, with everyone complaining that the model has been "nerfed" and that they're burning through 2x-3x as many tokens to get the same job done. &lt;/p&gt;

&lt;p&gt;You &lt;em&gt;know&lt;/em&gt; how much your employees cost; you have &lt;em&gt;no idea&lt;/em&gt; what your token costs will be day to day, month to month, year to year. If your entire team uses agentic coding as the default, your expense account will need to remain highly nimble. As Primeagen said recently: &lt;em&gt;"when you use these fully agentic workflows, the &lt;a href="https://www.youtube.com/watch?v=_vB0PDzaa7I&amp;amp;t=3299s" rel="noopener noreferrer"&gt;model providers essentially own you&lt;/a&gt;"&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;It's not unreasonable to play this pattern forward: we could be creating an industry where you &lt;em&gt;need&lt;/em&gt; to pay for token consumption to accomplish what used to be the product of your own critical thinking and problem-solving abilities. This would resemble a type of "vendor lock-in", but for an entire industry's skillset (and I'm sure the model providers are gleefully &lt;a href="https://giphy.com/gifs/giphyqa-g0JP0HG6zF0o8" rel="noopener noreferrer"&gt;rubbing their hands&lt;/a&gt; in anticipation of that). The financial, and intellectual, rug-pull could come at any moment, and local LLMs are nowhere near ready to scale to absorb that level of usage.&lt;/p&gt;

&lt;p&gt;This isn't theoretical conjecture; &lt;a href="https://www.youtube.com/watch?v=5HaQnIPrfKk" rel="noopener noreferrer"&gt;it's being reported on right now&lt;/a&gt;. Even the model providers themselves are bringing it to light. Yet another Anthropic &lt;a href="https://www.anthropic.com/research/AI-assistance-coding-skills" rel="noopener noreferrer"&gt;study&lt;/a&gt; showed a precipitous &lt;strong&gt;47% drop-off&lt;/strong&gt; in debugging skills:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Incorporating AI aggressively into the workplace—especially in software engineering—inevitably comes with trade-offs...developers may lean on AI to deliver quick results at the expense of building critical skills—most notably, the ability to debug when things go wrong.” &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;There's a way to avoid all of this, of course. LLMs are a powerhouse technological advancement, and when used responsibly, they can be a stellar tool for learning and upskilling. They enable me to dive deeper and wider into concepts and techniques, expanding my understanding and enabling exploration of new ideas that used to be more arduous and time-consuming to experiment with. This is where I think they will offer the industry the most long-term value.&lt;/p&gt;

&lt;h2&gt;
  
  
  My Approach: Demote AI's role
&lt;/h2&gt;

&lt;p&gt;I'm certainly not advocating for typing code out manually. Programmers have always been looking for ways to &lt;em&gt;create&lt;/em&gt; code without having to &lt;em&gt;write&lt;/em&gt; code. This is why we even have &lt;a href="https://code.visualstudio.com/docs/languages/emmet" rel="noopener noreferrer"&gt;Emmet&lt;/a&gt;, autocomplete, and snippets in the first place. Even COBOL was designed to encapsulate more instructions with less writing by using "English-like" words such as &lt;code&gt;MOVE&lt;/code&gt; and &lt;code&gt;WRITE&lt;/code&gt;. jQuery's motto was &lt;a href="https://brand.jquery.org/logos/" rel="noopener noreferrer"&gt;"write less, do more"&lt;/a&gt;. LLMs are another addition to this array of code generation tools.&lt;/p&gt;

&lt;p&gt;What I am advocating for, though, is leveraging LLMs and coding agents as secondary processes, in a way that doesn't sacrifice the individual's skills at the altar of productivity. You can flip the script and lean on them to brainstorm the planning parts of the process while staying actively engaged throughout implementation, delegating to them on an as-needed basis. You can leverage the productivity gains and mitigate the comprehension debt. &lt;/p&gt;

&lt;h3&gt;
  
  
  My daily workflow:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;I use LLMs to help generate specs and plans, while &lt;em&gt;I facilitate the implementation&lt;/em&gt;. This is an inversion of the "orchestration" workflow; I am still manually coding anywhere from 20% to 100%, depending on the task.&lt;/li&gt;
&lt;li&gt;I often write pseudo-code when I engage with the models, closing the distance between the request and the generated code.&lt;/li&gt;
&lt;li&gt;I use the models as &lt;a href="https://cheewebdevelopment.com/dont-vibe-code-delegate-responsible-development-with-llms/" rel="noopener noreferrer"&gt;delegation utilities&lt;/a&gt; for ad-hoc code generation and interactive documentation, as well as research tools so that I can constantly ask questions, iterate, refactor, and gain clarity around my approaches.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;I never generate more than I can review in a sitting&lt;/em&gt;. If it's too much to review, I slow down and split the task up, manually refactoring where needed to ensure a comprehensive understanding of the end result.&lt;/li&gt;
&lt;li&gt;I never ask an LLM or agent to implement something that I've never done before or couldn't do on my own, except perhaps purely for educational or tutorial purposes (and often discarded afterwards).&lt;/li&gt;
&lt;/ul&gt;
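&lt;p&gt;As a hypothetical sketch of that pseudo-code step (the names and constants here are made up purely for illustration), a request to the model might look something like:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Prompt: "Implement this in TypeScript, following the project's existing conventions."
//
// validateUpload(file):
//   if file.size exceeds MAX_UPLOAD_BYTES, reject with "file too large"
//   if file.mimeType is not in ALLOWED_TYPES, reject with "unsupported type"
//   otherwise return { sanitizedName, storageKey }
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The model fills in syntax and boilerplate, but the shape of the solution is already mine, which keeps the review small and the understanding intact.&lt;/p&gt;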

&lt;p&gt;If I had to TL;DR this list, it would be: Use them like the Ship's Computer, not Data. &lt;br&gt;&lt;br&gt;
&lt;em&gt;(any Star Trek fans should get the reference)&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  I'm not going faster, but I'm doing better quality work.
&lt;/h3&gt;

&lt;p&gt;The productivity gains from these models are real, &lt;em&gt;and so is the friction and understanding that come from engaging with the work on a tangible and frequent basis&lt;/em&gt;. &lt;/p&gt;

&lt;p&gt;Despite the countless failed attempts to democratize coding without understanding coding, we're faced with the reality that you &lt;strong&gt;cannot&lt;/strong&gt; understand code without engaging with it. And it's become clear that if you stop engaging with it and writing it, you &lt;em&gt;can&lt;/em&gt; lose touch with that understanding, which will in turn make you a less capable orchestrator in the first place, rendering this phase of AI coding a strange and needlessly stressful interlude.&lt;/p&gt;

&lt;h3&gt;
  
  
  Perhaps I am worrying too much, but history contains lessons.
&lt;/h3&gt;

&lt;p&gt;This all feels familiar, like another large experiment we're running on ourselves. We went through a similar period with the introduction of social media, without understanding the long-term implications, and we're now faced with attention deficits (amongst many other issues) on a wide scale. &lt;/p&gt;

&lt;p&gt;This time, we're gambling with something much riskier.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“People who go all in on AI agents now are guaranteeing their obsolescence. If you outsource all your thinking to computers, you stop upskilling, learning, and becoming more competent.”  &lt;br&gt;  &lt;br&gt; – Jeremy Howard, creator of &lt;a href="https://www.fast.ai/about.html" rel="noopener noreferrer"&gt;fast.ai&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;Full article can be read at: &lt;a href="https://larsfaye.com/articles/agentic-coding-is-a-trap" rel="noopener noreferrer"&gt;https://larsfaye.com/articles/agentic-coding-is-a-trap&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>llm</category>
      <category>agents</category>
    </item>
    <item>
      <title>My AI workflow seems to be the opposite of what the industry is encouraging, and I don't care.</title>
      <dc:creator>Lars Faye | Confident Coding</dc:creator>
      <pubDate>Sun, 29 Mar 2026 19:32:00 +0000</pubDate>
      <link>https://forem.com/larsfaye/my-ai-workflow-seems-to-be-the-opposite-of-what-the-industry-is-encouraging-and-i-dont-care-14il</link>
      <guid>https://forem.com/larsfaye/my-ai-workflow-seems-to-be-the-opposite-of-what-the-industry-is-encouraging-and-i-dont-care-14il</guid>
      <description>&lt;p&gt;The general consensus is that you should spec out your project, requirements, generate a bullet-proof plan, and then implement it via some kind of agent workflow. I've attempted this numerous times, and yes, it works...sort of. It can produce an application to the spec, but the main issue I continuously run into is that it's really hard to think about all the nuances and caveats ahead of time, and its not until I see something start to come together where I realize I need to think about things differently. Any ambiguity and the LLMs fills in with assumptions (or hallucinations).&lt;/p&gt;

&lt;p&gt;I can just keep iterating with the agents, but it's just more token usage, more code churn, more disconnection from the codebase, and more potential complexity, as I need to trust the agent to refactor appropriately. It can be done, but I personally find it exhausting and rather annoying.&lt;/p&gt;

&lt;p&gt;Lately, I've started to do the opposite, which I'm sure the AI bros would balk at: I use the LLM to generate the plan, and I do the implementation. Especially starting from scratch, deciding how to architect and plan an app out can be challenging, and I often like to see examples of other architectures to help me decide how I want to structure things. In the past, I'd often look for starter repos as inspiration and then begin putting things together after getting some initial direction.&lt;/p&gt;

&lt;p&gt;With these AI tools, I can work with them to tailor an architecture and structure that suits my exact needs, plan the entire app, and then...&lt;strong&gt;build it myself&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;Sure, I use these AI tools alongside to delegate tasks and implement features on an as-needed basis, but it's highly incremental, and I'm still "manually" coding a good 50% of the project.&lt;/p&gt;

&lt;p&gt;This flies in the face of how I think the industry is hyping things, and it's seemingly the opposite of the &lt;em&gt;"let AI do the coding, you only do the planning &amp;amp; review"&lt;/em&gt; workflow, but I &lt;strong&gt;really&lt;/strong&gt; don't care. Anytime I've done that, I could feel the atrophy of my critical thinking setting in within a few days, and I felt like I had &lt;em&gt;inherited&lt;/em&gt; a codebase rather than helped &lt;em&gt;create&lt;/em&gt; one.&lt;/p&gt;

&lt;p&gt;Thinking in and working through the project in code isn't just drudgery; it forces you to think about things on a technical level that involves everything from security to performance to user experience to maintainability. Trying to do that while staying in the "natural language" mindset doesn't get specific enough, and specificity is absolutely essential to doing this work successfully.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>programming</category>
      <category>productivity</category>
    </item>
    <item>
      <title>I'm thinking of putting together a course that focuses on webdev troubleshooting and debugging.</title>
      <dc:creator>Lars Faye | Confident Coding</dc:creator>
      <pubDate>Tue, 24 Mar 2026 21:00:48 +0000</pubDate>
      <link>https://forem.com/larsfaye/im-thinking-of-putting-together-a-course-that-focuses-on-frontend-troubleshooting-and-debugging-2aph</link>
      <guid>https://forem.com/larsfaye/im-thinking-of-putting-together-a-course-that-focuses-on-frontend-troubleshooting-and-debugging-2aph</guid>
      <description>&lt;p&gt;I've been in the industry a while (back when tables were used for layout) and I've learned most of what I know through reverse engineering and breaking things/putting back together. I've always had a knack for it, and have helped a lot of developers over the years with tips and tricks I picked up along the way. I've had instances where I've found the solution in minutes that other developers were spending hours on. It's not like I was a better developer, it just seemed I had a process and mental framework whereas they would get overwhelmed on where to start.&lt;/p&gt;

&lt;p&gt;My theory is: if developers can be more confident in their ability to troubleshoot problems, they're less likely to feel imposter syndrome. I find I'm at my happiest when I'm being helpful and working with other developers, so I'm finally moving on something I've wanted to do for over a decade and putting the course together.&lt;/p&gt;

&lt;p&gt;I'm working on the content, and I'm still proving the concept out, so I'm curious what you guys think. I want to focus on frontend workflows, although IMO, debugging skills are pretty universal.&lt;/p&gt;

&lt;p&gt;Landing page: &lt;a href="https://confident-coding.com/" rel="noopener noreferrer"&gt;https://confident-coding.com/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>beginners</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Everyone Can Delegate Now | AI is enabling every knowledge worker to learn management skills.</title>
      <dc:creator>Lars Faye | Confident Coding</dc:creator>
      <pubDate>Mon, 23 Mar 2026 23:30:32 +0000</pubDate>
      <link>https://forem.com/larsfaye/everyone-can-delegate-now-ai-is-enabling-every-knowledge-worker-to-learn-management-skills-ci1</link>
      <guid>https://forem.com/larsfaye/everyone-can-delegate-now-ai-is-enabling-every-knowledge-worker-to-learn-management-skills-ci1</guid>
      <description>&lt;h2&gt;
  
  
  Delegating was once reserved for managerial positions.
&lt;/h2&gt;

&lt;p&gt;The ability to effectively delegate and outsource tasks was the primary role of a manager, who had, ideally, already spent time "in the trenches" doing the work they now assign to others.&lt;/p&gt;

&lt;p&gt;Compiling data and generating reports, creating slideshow presentations, implementing features per client requirements...this delegation process flowed from the project managers and was distributed across the team.&lt;/p&gt;

&lt;p&gt;Delegation was a skill that had to be honed and refined. You would move up the chain and take on higher-level work, &lt;em&gt;orchestrating&lt;/em&gt;, rather than &lt;em&gt;doing&lt;/em&gt;. &lt;/p&gt;

&lt;h2&gt;
  
  
  AI tools can empower individuals to learn delegation.
&lt;/h2&gt;

&lt;p&gt;Because of their general ease of use, AI tools have proliferated quickly and we're still trying to pin down exactly what role they can effectively play, and how integrated they should be. The major shift that I have observed in the workplace, however, is that their introduction has created an opportunity where now &lt;strong&gt;everyone has the ability to leverage them for delegation&lt;/strong&gt;, no matter the type or scope of task.&lt;/p&gt;

&lt;p&gt;The ability to delegate is an empowering skill to practice. It offers a sense of freedom to the individual who might be facing down an exhaustive list of tasks: they don't have to do it &lt;em&gt;all&lt;/em&gt; themselves. They can learn how to manage a workload efficiently because there's a way to relieve some of the pressure. &lt;/p&gt;

&lt;p&gt;Tasks that once had to be completed by an individual (e.g. populating a spreadsheet, drafting an email) can now be outsourced to a system which can &lt;em&gt;loosely emulate human interaction&lt;/em&gt;. AI tooling has zero &lt;em&gt;interface&lt;/em&gt; learning curve. Anybody can type into a chatbox. Just upload your request, requirements, and documents, and get a response. A draft of a report could be generated while you are on your lunch break, or article topics could be researched while you respond to emails.&lt;/p&gt;

&lt;p&gt;Using AI tooling for delegation is different from delegating directly to humans, but there are core similarities: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Analyzing one's own workload and learning how to prioritize effectively&lt;/li&gt;
&lt;li&gt;Breaking up large tasks into smaller, actionable pieces&lt;/li&gt;
&lt;li&gt;Deciding what to complete oneself versus what to assign to "someone else"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the past, all these tasks would sit on someone's desk until they were able to get to them. If there wasn't enough time in the day, the work didn't get done (and the backlog grew).&lt;/p&gt;

&lt;h2&gt;
  
  
  AI delegation is a skill that needs to be learned.
&lt;/h2&gt;

&lt;p&gt;While it's an empowering skill, delegating doesn't come naturally to everyone, and we should not conflate the ease of use of the tool with mastery over delegation as a skill. Delegating to AI in particular comes with some extra "gotchas", as well:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;I've witnessed individuals not providing enough guidance and context, resulting in half-baked and incorrect results &lt;em&gt;(this isn't all that different with humans, either!)&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I've seen people blindly using responses without verification because they are delegating work &lt;em&gt;outside their domain of expertise&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I've worked with people who didn't understand the nature of context window limitations and how to break their task up into smaller pieces, eventually being forced to abandon it and start over&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So, while the &lt;em&gt;interface&lt;/em&gt; for an AI tool does not have a learning curve, these tools have an element of unpredictability &amp;amp; unreliability, and the responsible party needs to heavily scrutinize the outputs. If someone is not careful, it could end up being more time-consuming than doing the task manually (and they end up being more of a "micro-manager" than a "delegator").&lt;/p&gt;

&lt;p&gt;Now that &lt;em&gt;everyone&lt;/em&gt; has a potential &lt;em&gt;delegatee&lt;/em&gt; they can assign work to, this is a skill that will need to be learned by everyone who is in a knowledge work role. The ability to offload some of their responsibilities and tasks and mitigate some overhead &lt;strong&gt;can teach essential lessons around management and leadership&lt;/strong&gt;.  &lt;/p&gt;

&lt;h2&gt;
  
  
  Efficiency breeds expectation.
&lt;/h2&gt;

&lt;p&gt;Still, there are legitimate concerns that it &lt;a href="https://www.theregister.com/2026/02/11/ai_makes_employees_work_harder/" rel="noopener noreferrer"&gt;won't reduce workload&lt;/a&gt;. When personal computers were rolled out into office spaces in the 70s/80s, the role of the secretary, with the narrow discipline of answering phones and taking messages, transformed to include word processing, electronic filing, spreadsheet management, and print queues. This resulted in a higher workload and &lt;em&gt;more&lt;/em&gt; skills to learn.&lt;/p&gt;

&lt;p&gt;Giving every individual the ability to delegate their work could create a situation where the increased amount of the 'resource' (an individual's productivity) ends up being consumed at an equal rate, negating any efficiency gains or resulting in &lt;a href="https://fortune.com/2026/02/10/ai-future-of-work-white-collar-employees-technology-productivity-burnout-research-uc-berkeley/" rel="noopener noreferrer"&gt;burnout&lt;/a&gt;. &lt;/p&gt;

&lt;h2&gt;
  
  
  Delegation is likely to evolve the workforce.
&lt;/h2&gt;

&lt;p&gt;The ability for workers to delegate is a skill that is going to have to be continuously refined over the years to come. For the employer, it should &lt;em&gt;not be an excuse&lt;/em&gt; to merely shoehorn more work into a workday, but to reduce stress on the individual &lt;em&gt;and to create space to do better work&lt;/em&gt;. &lt;/p&gt;

&lt;p&gt;For the employee, whether a manager or a team member, individuals need to be mindful about what they are offloading. If someone delegates &lt;em&gt;all&lt;/em&gt; of their workload to others (or to AI tools), they will end up in a "placeholder" role, rendering their own position all the more easy to replace (and creating their own potential &lt;a href="https://news.harvard.edu/gazette/story/2025/11/is-ai-dulling-our-minds/" rel="noopener noreferrer"&gt;cognitive atrophy&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;These tools also offer tremendous opportunity in shaping how we can not just get &lt;em&gt;more&lt;/em&gt; work done, but get &lt;em&gt;better&lt;/em&gt; work done, as the ability to delegate can hone critical thinking, build managerial skills, and give every individual a sense of personal power. &lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Your Indispensable Value in the AI Era</title>
      <dc:creator>Lars Faye | Confident Coding</dc:creator>
      <pubDate>Sat, 21 Mar 2026 18:23:37 +0000</pubDate>
      <link>https://forem.com/larsfaye/your-indispensable-value-in-the-ai-era-427g</link>
      <guid>https://forem.com/larsfaye/your-indispensable-value-in-the-ai-era-427g</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;"In a world where the cost of answers is dropping to zero, the value of the question becomes everything."&lt;br&gt;
&lt;cite&gt;— Brit Cruise, the &lt;a href="https://www.youtube.com/watch?v=dcolM6W5Odc" rel="noopener noreferrer"&gt;AI Paradox&lt;/a&gt;&lt;/cite&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;LLMs and their adjacent AI tools have provided us with something truly novel: the ability to ask anything, at any time, and receive an answer. We might have thought Google was playing this role, until these tools showed us that what we had before was a ubiquitous digital encyclopedia, and what we have now resembles a librarian who will &lt;em&gt;attempt&lt;/em&gt; to answer any question, regardless of the complexity or the absurdity.&lt;/p&gt;

&lt;p&gt;When you have a tool that has &lt;em&gt;all the answers&lt;/em&gt;, what value do you have to bring? &lt;/p&gt;

&lt;p&gt;It turns out: a whole lot. More than you probably thought, too.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Question is the Value
&lt;/h2&gt;

&lt;p&gt;If I were asked by someone to describe what it's like to be a developer &lt;em&gt;(or programmer, or coder, whatever you identify with)&lt;/em&gt;, I would describe it as &lt;strong&gt;a state of being in which one is ceaselessly asking questions:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Why doesn't this work?&lt;/li&gt;
&lt;li&gt;Why &lt;strong&gt;&lt;em&gt;does&lt;/em&gt;&lt;/strong&gt; this work?&lt;/li&gt;
&lt;li&gt;Is there a better way to approach this?&lt;/li&gt;
&lt;li&gt;How can I build this feature? &lt;/li&gt;
&lt;li&gt;Should I refactor this code?&lt;/li&gt;
&lt;li&gt;What happens if I change X?&lt;/li&gt;
&lt;li&gt;How does it behave when I move Y?&lt;/li&gt;
&lt;li&gt;What happens if I remove Z?&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Turning Blue to Purple
&lt;/h2&gt;

&lt;p&gt;Many moons ago, circa 2010, I was hired to build an eCommerce site using &lt;a href="https://www.cs-cart.com/" rel="noopener noreferrer"&gt;CS Cart&lt;/a&gt;, a platform that was still in its infancy (eCommerce itself was not that old at that point). During the checkout process, I got an obscure MySQL error, and I wasn't very experienced with SQL at the time, much less anything specific to CS Cart.&lt;/p&gt;

&lt;p&gt;Off to Google I went, hoping to find an easy answer &lt;em&gt;(spoiler: nope)&lt;/em&gt;. I quickly turned every blue link I found to purple, with no clear resolution. So, I kept asking. &lt;/p&gt;

&lt;p&gt;Each time I reformulated the question, I would get slightly different results, which led to refining the question further. I was piecing together a jigsaw puzzle without the picture, and each piece showed me the shape of the next piece I should sift through the box for. Each reformulation of the question was another potential shape that could fit.&lt;/p&gt;

&lt;p&gt;Eventually, after creating page after page of purple links, I gathered together enough pieces to formulate the &lt;em&gt;actual question that I needed to answer&lt;/em&gt;. And once I had that question, the answer was immediate and self-evident. After a solid day of searching (punctuated with many walks around the block), I found the answer! &lt;/p&gt;

&lt;p&gt;Or, more accurately: I found &lt;strong&gt;the&lt;/strong&gt; question.&lt;/p&gt;

&lt;p&gt;The answer was the result of the labor, it was the outcome. As Brit Cruise also said in his scintillating video, &lt;a href="https://youtu.be/dcolM6W5Odc?t=850" rel="noopener noreferrer"&gt;finding the question itself is the work&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Nothing Is Becoming Simpler
&lt;/h2&gt;

&lt;p&gt;From the introduction of GPT 3.5 to the latest models and tooling, answers are now abundant. Whether or not they are the &lt;em&gt;correct answers&lt;/em&gt;...that is a problem that remains unchanged even with the latest frontier models. And it's a problem that is only solvable by those who know the power of asking the right questions.&lt;/p&gt;

&lt;p&gt;I realize that with modern troubleshooting tooling and the assistance of AI models, that particular debugging session would likely have been resolved differently, and more quickly. This is a wonderful turn of events; this is how progress works. That is applying an old problem to new tooling, though. The problems we face now are very different, and &lt;em&gt;the challenges we face at any point in time scale to the complexity of the industry.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Programming and development are far more complicated today than they were in 2010, if only due to the sheer number of abstractions we've created for even the simplest features. &lt;/p&gt;

&lt;p&gt;I recently found a great article by Paul Herbert who demonstrates this so effectively: &lt;br&gt; &lt;a href="https://paulmakeswebsites.com/writing/shadcn-radio-button/" rel="noopener noreferrer"&gt;The Incredible Overcomplexity of the Shadcn Radio Button&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For example, troubleshooting a Docker issue is orders of magnitude more complex than anything I encountered in 2010, because the state of technology to support something like Docker did not exist then, but it does now. &lt;/p&gt;

&lt;p&gt;As the tools grow in capability, the complexity grows. And as things get more complex, we encounter novel problems. We'll need more people who can approach these novel problems and help solve them: people who know how to ask questions, how to research, and how to formulate new questions to navigate new territory.&lt;/p&gt;

&lt;h2&gt;
  
  
  AI Cannot Ask the Question For You
&lt;/h2&gt;

&lt;p&gt;It's been a few years now, and it's quite apparent that LLMs and AI tools are &lt;em&gt;not&lt;/em&gt; going to simplify anything about what it means to program and create software. Peruse OpenAI's &lt;a href="https://openai.com/index/harness-engineering/" rel="noopener noreferrer"&gt;Harness engineering&lt;/a&gt; write-up, and it's clear that the new way to approach programming still rests on the same fundamentals as programming always has, but with more abstractions between you and the result, and with potentially higher complexity due to the sheer volume of code that can be generated in shorter amounts of time. That will lead to more complex software, which leads to more complex issues. Software is not a static industry, and &lt;em&gt;we constantly scale the capabilities of the software to the capabilities of the tools&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;What remains is the most valuable discipline that you can cultivate: how to ask effective, productive questions. And once you receive an answer to that question, how to take that answer and reformulate the next question, performing this process recursively until clarity comes. This is the process of "critical thinking", but that phrase handwaves away the mechanics of what it means to do just that. Being able to distill a general question into a highly specific one involves relentless scrutiny, ongoing experimentation, and being comfortable residing in a state of unknowing for an unspecified amount of time. &lt;strong&gt;This is the true job description of the programmer.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AI models cannot think for you, and they cannot formulate the question for you. They can certainly be a tool that helps you on your research path, but they are still bound by their training data, and they cannot escape the weighted dice that determine the paths leading to their outputs. Their ability to generate answers is unmatched, but those answers are highly sensitive to the original input &amp;amp; context, and fundamentally untrustworthy by their very nature. That's OK, because they still have tremendous value, but &lt;em&gt;only&lt;/em&gt; when someone is present and capable of sifting through the noise, distilling the truth, and verifying the answer (or perhaps, the next question).&lt;/p&gt;

&lt;h2&gt;
  
  
  The Answers are Cheap Now
&lt;/h2&gt;

&lt;p&gt;As Brit Cruise stated in a beautifully succinct manner: in an era where AI tooling has dropped the cost of answers to near zero, the value resides in asking the right questions. And while AI tools have exacerbated this situation, I would postulate that the value has &lt;strong&gt;always&lt;/strong&gt; been in asking the right questions.&lt;/p&gt;

&lt;p&gt;And when you have an &lt;em&gt;infinite answer machine&lt;/em&gt;, your ability to ask good questions &lt;em&gt;becomes infinitely more valuable&lt;/em&gt;.&lt;/p&gt;




&lt;p&gt;View the article at: &lt;a href="https://larsfaye.com/articles/the-question-is-the-work" rel="noopener noreferrer"&gt;https://larsfaye.com/articles/the-question-is-the-work&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Interested in learning how to ask better questions? Check out &lt;a href="https://confident-coding.com" rel="noopener noreferrer"&gt;Confident Coding&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>programming</category>
      <category>development</category>
    </item>
    <item>
      <title>Your AI generated code is "almost right", and that is actually WORSE than it being "wrong".</title>
      <dc:creator>Lars Faye | Confident Coding</dc:creator>
      <pubDate>Fri, 20 Mar 2026 15:13:02 +0000</pubDate>
      <link>https://forem.com/larsfaye/your-ai-generated-code-is-almost-right-and-that-is-actually-worse-than-it-being-wrong-59og</link>
      <guid>https://forem.com/larsfaye/your-ai-generated-code-is-almost-right-and-that-is-actually-worse-than-it-being-wrong-59og</guid>
      <description>&lt;ul&gt;
&lt;li&gt;"Almost right" will make it past reviews. &lt;/li&gt;
&lt;li&gt;"Almost right" will pass tests and linters&lt;/li&gt;
&lt;li&gt;"Almost right" will make it in your codebase, and wait for the right mix of reasons to create a potential catastrophe.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Yes, AI tools enhance your work and empower you to "punch above your weight". But you also need discipline and practice, and you should give yourself permission to slow down and learn what is happening at a deeper level.&lt;/p&gt;

&lt;p&gt;While the industry is pushing relentlessly for handing over control to “agents,” I propose a more measured approach, and recommend that the default mode when working with LLMs should always be scrutiny and skepticism. The trust needs to be earned, not granted.&lt;/p&gt;

&lt;p&gt;When LLMs work in areas where the training data is robust and plentiful, and the requests are clearly architected with proper context, they have a fairly high accuracy rate. Nevertheless, the real work happens in the nuance and the details, and these models are notorious for introducing application-breaking issues through seemingly innocuous additions or changes. Every response deserves a "trust, but verify" approach.&lt;/p&gt;

&lt;p&gt;Anthropic themselves support this approach:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.anthropic.com/research/how-ai-is-transforming-work-at-anthropic#trust-but-verify" rel="noopener noreferrer"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A security engineer highlighted the importance of experience when Claude proposed a solution that was “really smart in the dangerous way, the kind of thing a very talented junior engineer might propose.” That is, it was something that could only be recognized as problematic by users with judgment and experience.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Only by knowing how to code, and by practicing your coding on a regular basis (and your debugging, &lt;a href="//confident-coding.com"&gt;which I'm starting a course on&lt;/a&gt;), will you learn the skills to catch those "almost right" solutions these models provide, vet them properly, and ensure you're not pushing a time bomb up to your repo! 💣&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>programming</category>
      <category>llm</category>
    </item>
  </channel>
</rss>
