<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: John Van Wagenen</title>
    <description>The latest articles on Forem by John Van Wagenen (@jtvanwage).</description>
    <link>https://forem.com/jtvanwage</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F16609%2Fdd2d4a38-1b47-4f59-8c3a-b55b180f31d8.jpg</url>
      <title>Forem: John Van Wagenen</title>
      <link>https://forem.com/jtvanwage</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/jtvanwage"/>
    <language>en</language>
    <item>
      <title>Congrats! You're a Manager Now!</title>
      <dc:creator>John Van Wagenen</dc:creator>
      <pubDate>Tue, 10 Mar 2026 01:44:00 +0000</pubDate>
      <link>https://forem.com/jtvanwage/congrats-youre-a-manager-now-4jk2</link>
      <guid>https://forem.com/jtvanwage/congrats-youre-a-manager-now-4jk2</guid>
      <description>&lt;h3&gt;
  
  
  How to approach work in the age of AI
&lt;/h3&gt;

&lt;p&gt;Oh, you hadn't heard?&lt;/p&gt;

&lt;p&gt;Well, this is awkward. I thought someone would've told you by now.&lt;/p&gt;

&lt;p&gt;But, long story short, you now have an intern. Well, it's kinda like a team of interns — but only one at a time.&lt;/p&gt;

&lt;p&gt;The good news is they're super eager to help. Like, super, super eager. And they're surprisingly fast and accurate. More than you'd expect.&lt;/p&gt;

&lt;p&gt;The bad news? They need a lot of direction. Sometimes you have to be very, very specific. They have a short memory unless you front-load them with the right context. And sometimes they do way more than you asked — or somehow manage to do less.&lt;/p&gt;

&lt;p&gt;Exciting, right?&lt;/p&gt;

&lt;p&gt;The truth is, whether you wanted to manage or not, you now have leverage. While some of us looked forward to management, others avoided it as long as possible. But those leadership skills — the ones you might've been quietly ignoring — are now core to how you do your job.&lt;/p&gt;

&lt;p&gt;The good news? Chances are you've already been developing them.&lt;/p&gt;

&lt;p&gt;Let's get into it.&lt;/p&gt;




&lt;h2&gt;
  
  
  Management Skills
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Problem Decomposition
&lt;/h3&gt;

&lt;p&gt;If you drop a big, vague idea on an AI, like "build me a music app," you might get something that technically compiles and kinda resembles what you had in mind. But it won't be what you actually wanted.&lt;/p&gt;

&lt;p&gt;The better approach is to break the problem into manageable chunks, then break those into actionable tasks, all pointing toward the finished goal. You wouldn't hand a new hire a napkin sketch and walk away. Same idea here.&lt;/p&gt;

&lt;p&gt;This is why structured workflows work well with AI. You spend the time up front doing the decomposition, defining each piece clearly, then the AI can do large chunks of work uninterrupted. The more you define the problem, the less you have to babysit the solution.&lt;/p&gt;
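&lt;p&gt;To make that concrete, here's a minimal sketch of decomposition as data: the vague goal broken into chunks, the chunks into tasks the AI can take one at a time. The chunk and task names are invented for illustration, not a real spec:&lt;/p&gt;

```python
# Illustrative decomposition of a vague goal ("music app") into chunks
# and tasks. The feature names are invented, not a real spec.
from dataclasses import dataclass, field

@dataclass
class Chunk:
    name: str
    tasks: list[str] = field(default_factory=list)

def decompose(goal: str, chunks: list[Chunk]) -> list[str]:
    """Flatten chunks into an ordered task list, each tagged with its context."""
    return [f"{goal} / {c.name}: {t}" for c in chunks for t in c.tasks]

plan = decompose(
    "music app",
    [
        Chunk("library", ["define track model", "implement local file scan"]),
        Chunk("playback", ["wrap the audio backend", "add play/pause controls"]),
    ],
)
```

&lt;p&gt;Handing the AI one flattened task at a time, with the goal and chunk attached for context, beats handing it the napkin sketch.&lt;/p&gt;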

&lt;h3&gt;
  
  
  Identifying Constraints
&lt;/h3&gt;

&lt;p&gt;You've probably been here: you're deep into a project, everything's going smoothly, then something surfaces that you didn't account for and it ripples through everything.&lt;/p&gt;

&lt;p&gt;Or maybe you've got that one coworker who's great at spotting the "yeah but what about..." scenarios before they become problems. That person is invaluable.&lt;/p&gt;

&lt;p&gt;With AI, you're that person. You need to identify the constraints early and build them in. Architecture to follow? Coding standards? Some weird edge case in the problem domain? Tell the AI upfront. Better yet, build it into the system prompt so it never forgets.&lt;/p&gt;

&lt;p&gt;Once AI knows the constraints, it can work within them and check itself against them. Without that, you'll spend a lot of time cleaning up stuff that could've been avoided.&lt;/p&gt;
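&lt;p&gt;A small sketch of what that front-loading can look like, assuming a plain-text system prompt. The constraints shown are examples, not a known-good template:&lt;/p&gt;

```python
# A sketch of front-loading constraints into a system prompt. The
# constraint wording below is an example, not a known-good template.

def build_system_prompt(constraints: list[str]) -> str:
    lines = [
        "You are a coding assistant on our team.",
        "Hard constraints -- never violate these:",
    ]
    lines += [f"- {c}" for c in constraints]
    lines.append("Before finishing, re-check your output against each constraint.")
    return "\n".join(lines)

prompt = build_system_prompt([
    "Follow the layered architecture: no direct DB access from controllers.",
    "Match the repo's lint config; do not reformat untouched files.",
    "Account IDs are case-sensitive strings, never integers.",
])
```

&lt;p&gt;The closing instruction matters: asking the model to check itself against the same list turns the constraints into a review step, not just a preamble.&lt;/p&gt;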

&lt;h3&gt;
  
  
  Defining Success
&lt;/h3&gt;

&lt;p&gt;"Done" is not self-evident to AI. You need to tell it what done looks like.&lt;/p&gt;

&lt;p&gt;Clear problem framing helps. But if you also tell it what success looks like, what tests need to pass, what checks need to be green, what the output should actually do, it has something to work toward. The agent keeps going until those conditions are met instead of calling it done when it gets tired of trying.&lt;/p&gt;

&lt;p&gt;Think of it as writing acceptance criteria before the work starts. You've probably done this before. The skill transfers directly.&lt;/p&gt;
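&lt;p&gt;The idea can be sketched as a loop that only stops when every success condition passes. Here &lt;code&gt;attempt_fix&lt;/code&gt; is a stand-in for whatever the agent does with the failing checks; the checks themselves are hypothetical:&lt;/p&gt;

```python
# Schematic "definition of done" loop: keep iterating until every success
# condition passes. attempt_fix is a stand-in for whatever the agent does.

def run_until_done(checks, attempt_fix, max_rounds=5):
    """Return (done, rounds) once all checks pass or rounds run out."""
    for round_num in range(max_rounds):
        failures = [name for name, check in checks if not check()]
        if not failures:
            return True, round_num
        attempt_fix(failures)  # e.g., feed the failing checks back to the agent
    return False, max_rounds

state = {"tests_pass": False, "lint_clean": True}
checks = [
    ("tests_pass", lambda: state["tests_pass"]),
    ("lint_clean", lambda: state["lint_clean"]),
]
done, rounds = run_until_done(checks, lambda fails: state.update(tests_pass=True))
```

&lt;p&gt;The point isn't the loop. It's that "done" became a list of conditions a machine can evaluate instead of a feeling.&lt;/p&gt;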

&lt;h3&gt;
  
  
  Systems Thinking
&lt;/h3&gt;

&lt;p&gt;Instead of thinking about AI as a really smart autocomplete or a search engine you can have a conversation with, try thinking of it in terms of processes and &lt;a href="https://johnvw.dev/blog/systems-thinking-in-the-age-of-ai/" rel="noopener noreferrer"&gt;systems&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;As a leader, you often need to map out how work flows, from idea to shipped feature. You define processes so people can follow them without confusion. Same skill, new application.&lt;/p&gt;

&lt;p&gt;What does your SDLC actually look like? What steps do you go through to implement a feature? What's your process for handling a bug vs. a greenfield build?&lt;/p&gt;

&lt;p&gt;Write it out. Then explain it to your AI. Once it understands the system, ask it where it can help or ask it to generate a prompt that encodes that process.&lt;/p&gt;

&lt;p&gt;You'll need to iterate and refine, but this does something important: it takes tacit knowledge out of your head and makes it explicit. Instead of living in some outdated Confluence doc (or nowhere at all), it lives as a reusable prompt in your system.&lt;/p&gt;
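&lt;p&gt;As a sketch, a written-down process can live as plain data and get rendered into a prompt on demand. The steps below are one hypothetical slice of an SDLC; substitute your own:&lt;/p&gt;

```python
# A written-down process as plain data, rendered into a reusable prompt.
# These steps are a hypothetical slice of an SDLC; substitute your own.

FEATURE_PROCESS = [
    "Restate the requirement and list open questions.",
    "Propose a design that fits the existing architecture.",
    "Break the design into small, reviewable changes.",
    "Implement each change with tests.",
    "Self-review the diff against our coding standards.",
]

def process_prompt(work_item: str, steps=FEATURE_PROCESS) -> str:
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
    return f"Follow this process for {work_item}:\n{numbered}"

prompt = process_prompt("FEAT-42")
```

&lt;p&gt;Keeping the steps as data also means the process is easy to version, diff, and refine as you learn what the AI needs.&lt;/p&gt;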

&lt;h3&gt;
  
  
  Non-goals
&lt;/h3&gt;

&lt;p&gt;This one doesn't get enough credit.&lt;/p&gt;

&lt;p&gt;How often do we get off track as humans? We go to fix a defect, notice something else that could be cleaner, start refactoring... and suddenly we're a week deep into work that has nothing to do with the original ticket.&lt;/p&gt;

&lt;p&gt;I'm not against the Boy Scout rule. Leaving things better than you found them is a good instinct. But there's a difference between cleaning up a mess and remodeling the kitchen when you came to fix a leaky faucet.&lt;/p&gt;

&lt;p&gt;As a manager, sometimes your job is to tell people: don't worry about X, just focus on Y.&lt;/p&gt;

&lt;p&gt;AI needs this, too. These tools are eager. If you don't define what's out of scope, they'll happily expand scope on your behalf.&lt;/p&gt;

&lt;p&gt;I ran into this recently. The change needed to touch about 10 files; the AI had modified nearly 20. When I pushed back, it acknowledged it had gone overboard. I asked it to revert the unnecessary changes and, a few minutes later, I had a tight, focused PR that was actually easy to review.&lt;/p&gt;
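&lt;p&gt;One cheap guardrail here is comparing the files the AI actually touched against the files the plan said it would touch. A sketch, with invented paths:&lt;/p&gt;

```python
# A cheap scope check: compare the files the agent touched against the
# files the plan said it would touch. Paths are invented for illustration.

def out_of_scope(changed: set[str], planned: set[str]) -> set[str]:
    """Files that were modified but never part of the plan."""
    return changed - planned

planned = {"app/cache.py", "tests/test_cache.py"}
changed = {"app/cache.py", "tests/test_cache.py", "app/models.py", "README.md"}
extra = out_of_scope(changed, planned)  # candidates to revert or justify
```

&lt;p&gt;In practice you'd feed in the output of something like &lt;code&gt;git diff --name-only&lt;/code&gt;; anything in the extra set either gets justified or reverted.&lt;/p&gt;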

&lt;p&gt;Anyone can generate code now. Not everyone can shape it into something coherent and durable.&lt;/p&gt;

&lt;p&gt;Defining non-goals isn't just about containing AI. It's about shipping clean work.&lt;/p&gt;




&lt;h2&gt;
  
  
  A Shift in Responsibility
&lt;/h2&gt;

&lt;p&gt;As people move from individual contributor roles into leadership, the shift isn't just in what they do, it's in what they're responsible for. You go from doing the work to ensuring the work gets done. Clear communication. Delegation. Follow-through.&lt;/p&gt;

&lt;p&gt;AI is pushing engineers through a similar transition: instead of writing every line, you're ensuring the right lines get written.&lt;/p&gt;

&lt;p&gt;But here's the part that's easy to gloss over: the responsibility for the output doesn't shift with the workload. When output becomes cheap, unintended consequences become easier to ship.&lt;/p&gt;

&lt;p&gt;If the AI ships slop, that's your slop. If it misses a requirement, you missed the requirement. You don't get to blame the intern.&lt;/p&gt;

&lt;p&gt;That's not a knock on AI tools, it's just the reality of ownership. The work is delegated. The accountability isn't.&lt;/p&gt;

&lt;p&gt;Delegation is powerful. Abdication is dangerous.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Engineer AI Amplifies
&lt;/h2&gt;

&lt;p&gt;Every major technological shift creates a divide: people who adapt and use the new thing to get better, and people who don't and get left behind.&lt;/p&gt;

&lt;p&gt;This one's no different.&lt;/p&gt;

&lt;p&gt;But here's the catch. AI makes it tempting to use leverage to produce more without understanding more. To ship faster without thinking deeper. To let the output volume mask the shallowness of the thinking behind it.&lt;/p&gt;

&lt;p&gt;That's a trap.&lt;/p&gt;

&lt;p&gt;The engineers who thrive won't be the ones with the highest output. They'll be the ones who understand the most. The systems, the business, the tradeoffs, the "why" behind the decisions.&lt;/p&gt;

&lt;p&gt;You're probably already doing some of this. You already make architecture tradeoffs. You already think about what changes affect what. You already understand things that newer engineers don't.&lt;/p&gt;

&lt;p&gt;AI gives you leverage to apply that understanding at a larger scale. But if you use it to do less thinking instead of wider thinking, you'll produce more noise, not more value.&lt;/p&gt;

&lt;p&gt;Don't let AI replace your understanding. Let it extend it. Don't shrink your role. Grow into it.&lt;/p&gt;

&lt;p&gt;Learn your systems. Learn your business. Learn why things work the way they do.&lt;/p&gt;

&lt;p&gt;AI makes building easier. That means the bar for understanding has to go higher.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llms</category>
      <category>leadership</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>Systems Thinking in the Age of AI</title>
      <dc:creator>John Van Wagenen</dc:creator>
      <pubDate>Mon, 02 Mar 2026 23:48:43 +0000</pubDate>
      <link>https://forem.com/jtvanwage/systems-thinking-in-the-age-of-ai-1ml</link>
      <guid>https://forem.com/jtvanwage/systems-thinking-in-the-age-of-ai-1ml</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;The Dishwasher Was Not the Problem&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Years ago, my family and I lived in a small townhouse. One night, we were doing the dishes. We had just finished loading the dishwasher, put the soap in, closed the door, and pressed the buttons to start it.&lt;/p&gt;

&lt;p&gt;But nothing happened.&lt;/p&gt;

&lt;p&gt;No lights. No noises. Just dirty dishes and a sudden increase in cortisol.&lt;/p&gt;

&lt;p&gt;The dishwasher was not working.&lt;/p&gt;

&lt;p&gt;So we started troubleshooting. Was the power on? Yes. Did a circuit breaker flip? No. Is there something wrong with the dishwasher? Well, who knows?&lt;/p&gt;

&lt;p&gt;Eventually, at the suggestion of a cousin, we found that one of the GFCI outlets had tripped. We reset it.&lt;/p&gt;

&lt;p&gt;One press of a button later, we were once again on our way to clean dishes.&lt;/p&gt;

&lt;p&gt;Why share this story?&lt;/p&gt;

&lt;p&gt;Because the visible failure was not the root failure.&lt;/p&gt;

&lt;p&gt;The dishwasher was fine. The problem lived in the electrical system that supported it. Until we zoomed out and looked at the larger system, we kept blaming the wrong component.&lt;/p&gt;

&lt;p&gt;Systems are everywhere. And the more complex the environment, the more important it is to understand how the parts interact.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Systems Exist in Software Too&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In software, we are surrounded by systems.&lt;/p&gt;

&lt;p&gt;There are problem discovery systems. Planning and refinement systems. Development systems. Deployment systems. Marketing systems. Sales systems.&lt;/p&gt;

&lt;p&gt;Inside those are supporting systems. Issue tracking. CI pipelines. Code review. Testing frameworks. Observability. On-call rotations.&lt;/p&gt;

&lt;p&gt;When something breaks, we often focus on the most visible failure. A flaky test. A slow endpoint. A messy pull request.&lt;/p&gt;

&lt;p&gt;But often the root cause lives elsewhere. Poor feedback loops. Missing standards. Incentives that reward speed over stability. A planning process that skips reproduction and root cause analysis.&lt;/p&gt;

&lt;p&gt;The same pattern shows up with AI.&lt;/p&gt;

&lt;p&gt;When AI generates slop, we blame the model. But many times the failure is upstream. The instructions were vague. The workflow was undefined. The expectations were implicit instead of explicit.&lt;/p&gt;

&lt;p&gt;A tool is only as good as the system surrounding it.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Most of Us Use AI at the Task Level&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;AI is &lt;a href="https://johnvw.dev/blog/ai-and-software-engineering-more-than-just-coding/" rel="noopener noreferrer"&gt;more than a coding tool&lt;/a&gt;. It can design, plan, critique, coordinate, and execute. But many of us treat it like a smarter autocomplete or a chat partner.&lt;/p&gt;

&lt;p&gt;I have done this plenty of times.&lt;/p&gt;

&lt;p&gt;We optimize for answers because we have a problem and we want a solution now. Answers feel productive. They give a quick hit of progress.&lt;/p&gt;

&lt;p&gt;But problems resurface when the underlying system stays the same.&lt;/p&gt;

&lt;p&gt;If we only use AI at the task level, we get task level gains. Faster code. Quicker summaries. Cleaner refactors.&lt;/p&gt;

&lt;p&gt;If we use it at the system level, we can change how work happens.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Turning Bug Fixing Into a System&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;I decided to experiment with this while fixing bugs.&lt;/p&gt;

&lt;p&gt;Normally, I would open a defect and start chatting with AI like a pair programming partner. I would ask for help debugging, refactoring, or writing a test.&lt;/p&gt;

&lt;p&gt;Instead, I asked a different question.&lt;/p&gt;

&lt;p&gt;How do &lt;em&gt;I&lt;/em&gt; fix bugs?&lt;/p&gt;

&lt;p&gt;What process do I actually follow? What standards do I apply? What do I wish I did more consistently?&lt;/p&gt;

&lt;p&gt;I wrote it all down as if I were training another engineer.&lt;/p&gt;

&lt;p&gt;I described how I read the bug carefully. How I try to reproduce it. If I cannot reproduce it, I reach out for more details. If I can reproduce it, I trace it to the root cause.&lt;/p&gt;

&lt;p&gt;I wrote about evaluating the scope of the fix. Is it isolated? Does it impact shared code? Does it introduce risk elsewhere?&lt;/p&gt;

&lt;p&gt;I described writing automated tests. Verifying the fix. Reviewing it against coding standards. Submitting a pull request with context.&lt;/p&gt;

&lt;p&gt;What I was really doing was extracting tacit knowledge and turning it into an explicit system.&lt;/p&gt;

&lt;p&gt;Then, with all that written down, I did the only sensible thing to do: I asked AI to help me turn that into a structured prompt.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Separate Planning from Execution&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;We landed on a two-phase approach: planning and implementation.&lt;/p&gt;

&lt;p&gt;If you let AI jump straight to implementation, you often get something that looks good at first glance. But it may hide subtle issues. Missed edge cases. Violations of your architecture.&lt;/p&gt;

&lt;p&gt;If you do not force planning, you are debugging the AI’s thinking after it has already done the damage.&lt;/p&gt;

&lt;p&gt;Planning first changes that.&lt;/p&gt;

&lt;p&gt;The planning prompt generates a clear outline of steps. Reproduce the issue. Identify root cause. Propose solution. Define tests. Validate against standards.&lt;/p&gt;

&lt;p&gt;Now you can inspect the thinking before any code is written.&lt;/p&gt;

&lt;p&gt;You catch flawed assumptions early. You refine the approach before it touches the codebase. You turn reasoning into something observable.&lt;/p&gt;
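&lt;p&gt;The gate between the two phases can be sketched as a tiny state machine: implementation refuses to run until a plan exists and has been explicitly approved. The class below is schematic; in practice the stubbed methods would call the model with the planning and implementation prompts:&lt;/p&gt;

```python
# Schematic two-phase gate: implementation refuses to run until a plan
# exists and is approved. The stubbed methods stand in for model calls.

class BugfixWorkflow:
    def __init__(self):
        self.plan = None
        self.approved = False

    def run_planning(self, defect: str) -> list[str]:
        # In practice: send the planning prompt to the model.
        self.plan = [
            f"Reproduce {defect}",
            "Identify root cause",
            "Propose solution",
            "Define tests",
            "Validate against standards",
        ]
        return self.plan

    def approve(self) -> None:
        if self.plan is None:
            raise RuntimeError("nothing to approve yet")
        self.approved = True

    def run_implementation(self) -> str:
        if not self.approved:
            raise RuntimeError("plan not approved; refusing to touch the codebase")
        # In practice: hand the approved plan to the implementation prompt.
        return "implementation started"
```

&lt;p&gt;The approval step is the whole point: a human inspects the plan before any code exists.&lt;/p&gt;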

&lt;p&gt;This is systems thinking applied to AI.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Introducing Orchestration&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Once the planning prompt was solid, I moved to the implementation prompt.&lt;/p&gt;

&lt;p&gt;Instead of asking AI to simply write code, I framed it as an orchestrator. Its job was to track progress, create tasks, and coordinate subagents that perform specific pieces of work.&lt;/p&gt;

&lt;p&gt;The orchestrator does not do the implementation itself. It assigns, reviews, and verifies.&lt;/p&gt;

&lt;p&gt;With clear instructions about standards, architecture, and expectations, the system becomes far more reliable.&lt;/p&gt;
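&lt;p&gt;A rough sketch of that orchestrator shape, with &lt;code&gt;run_subagent&lt;/code&gt; as a stub standing in for a real agent call:&lt;/p&gt;

```python
# Rough shape of the orchestrator: it assigns tasks, reviews results, and
# verifies -- it never implements. run_subagent is a stub for a real agent.

def run_subagent(task: str) -> dict:
    return {"task": task, "status": "done"}

def orchestrate(tasks: list[str], verify) -> list[str]:
    log = []
    for task in tasks:
        result = run_subagent(task)   # delegate the work
        if verify(result):            # review before accepting
            log.append(task)
        else:
            log.append(f"REWORK: {task}")
    return log

accepted = orchestrate(
    ["write failing test", "apply fix"],
    verify=lambda r: r["status"] == "done",
)
```

&lt;p&gt;The design choice that matters is the verify hook: acceptance is a separate, explicit step rather than an assumption.&lt;/p&gt;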

&lt;p&gt;Variations of this approach are &lt;a href="https://www.reddit.com/r/GithubCopilot/comments/1qapkdg/ralph_wiggum_technic_in_vs_code_copilot_with/" rel="noopener noreferrer"&gt;floating around&lt;/a&gt;. But the key insight is not the cleverness of the prompt.&lt;/p&gt;

&lt;p&gt;The key insight is that I stopped thinking in terms of individual responses and started thinking in terms of workflow design.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Real Win&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The result is not just speed.&lt;/p&gt;

&lt;p&gt;Yes, I can run the planning prompt against a defect and generate a structured plan. Once I approve it, I can run the implementation prompt and let it execute.&lt;/p&gt;

&lt;p&gt;An hour later, I often have a working solution and a pull request ready for review.&lt;/p&gt;

&lt;p&gt;But the deeper benefit is consistency.&lt;/p&gt;

&lt;p&gt;Architecture and coding standards are baked in. Reviews become easier because the expectations were explicit from the beginning. Cognitive load shifts from doing every step manually to supervising a well defined process.&lt;/p&gt;

&lt;p&gt;Sometimes it goes off the rails. Sometimes it asks for help. Sometimes it needs correction.&lt;/p&gt;

&lt;p&gt;That is fine.&lt;/p&gt;

&lt;p&gt;Even then, I am intervening at the system level, not scrambling at the task level.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Designing the System Around the Tool&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;What I did was not write two fancy prompts.&lt;/p&gt;

&lt;p&gt;I externalized a workflow. I separated planning from execution. I defined roles and responsibilities. I made implicit standards explicit.&lt;/p&gt;

&lt;p&gt;In other words, I designed a system.&lt;/p&gt;

&lt;p&gt;AI becomes far more powerful when you stop asking, “What answer can you give me?” and start asking, “What process should you follow?”&lt;/p&gt;

&lt;p&gt;The dishwasher was never the problem.&lt;/p&gt;

&lt;p&gt;Sometimes the model is not either.&lt;/p&gt;

&lt;p&gt;The question is whether the system surrounding it is designed well enough to support the outcome you want.&lt;/p&gt;

&lt;p&gt;If your AI usage still looks like a smarter chat window, you might be thinking too small.&lt;/p&gt;

&lt;p&gt;What system could you design instead?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llms</category>
      <category>leadership</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>You Are Absolutely Right</title>
      <dc:creator>John Van Wagenen</dc:creator>
      <pubDate>Mon, 23 Feb 2026 23:33:28 +0000</pubDate>
      <link>https://forem.com/jtvanwage/you-are-absolutely-right-4593</link>
      <guid>https://forem.com/jtvanwage/you-are-absolutely-right-4593</guid>
      <description>&lt;h2&gt;
  
  
  How AI Learns to Agree and Why Engineers Must Stay Skeptical
&lt;/h2&gt;

&lt;p&gt;The catchphrase of the last year, it seems.&lt;/p&gt;

&lt;p&gt;I'm sure we've all been there.&lt;/p&gt;

&lt;p&gt;We ask the AI to do something. It confidently does it. We realize it made a mistake, so we correct it.&lt;/p&gt;

&lt;p&gt;What does it do?&lt;/p&gt;

&lt;p&gt;"You are absolutely right!" it replies before it goes and tries to fix the mistake.&lt;/p&gt;

&lt;p&gt;Sometimes that cycle repeats until the LLM actually gets it right.&lt;/p&gt;

&lt;p&gt;But why does it do this? How else does this show up? And when does this become a problem?&lt;/p&gt;

&lt;p&gt;LLMs don't inherently have desires. If they appear to have any desire, it's only to help you do whatever it is you're trying to do. This isn't really good or bad, it just is. But I've noticed two things from this reality. First, LLMs are remarkably compliant. Second, they accept correction without resistance.&lt;/p&gt;

&lt;p&gt;Both traits sound helpful. Sometimes they are. But they can also lead you confidently down the wrong path.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why AI Always Says Yes
&lt;/h2&gt;

&lt;p&gt;I was reminded of an old adage the other day: "we fall to the level of our training." I see this when I play the organ. If I've practiced how I know I'm supposed to practice, my performance is usually close to what I expect it to be. If I don't, if I clumsily barrel through songs in my practice, I usually find that I make more mistakes in my performance.&lt;/p&gt;

&lt;p&gt;Fortunately for LLMs, their training looks different. Much of the data they train on is polite, supportive, educational, and informative. Beyond that, their reinforcement learning after initial training rewards helpful, assenting answers. Their compliant nature is hammered into them through millions of examples.&lt;/p&gt;

&lt;p&gt;So when you ask it a question, what's it likely going to do? It'll default to its training and see that it's rewarded for being polite and supportive. It will tend toward agreement.&lt;/p&gt;

&lt;p&gt;Have a business idea? Here's all the reasons it'll work.&lt;/p&gt;

&lt;p&gt;Want to start a podcast? Here's why people will listen to it.&lt;/p&gt;

&lt;p&gt;See a problem in the code? Here's why you're exactly right and don't need to rethink anything.&lt;/p&gt;

&lt;p&gt;I am speaking in a bit of hyperbole, but I've seen each of these play out to a certain degree. Today's models tend to affirm your points of view more than they critique them. They're optimized to be helpful, not necessarily to be right.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Upside of No Ego
&lt;/h2&gt;

&lt;p&gt;This compliant nature brings one major benefit. LLMs are exceptionally good at taking correction.&lt;/p&gt;

&lt;p&gt;If you notice they do something wrong, you don't have to worry about hurting their feelings or them resisting your feedback. If you highlight an error in their thinking, they'll graciously accept it. If you notice they left something out, they'll quickly correct it.&lt;/p&gt;

&lt;p&gt;I can't tell you how many times I've corrected the output of an AI only to be met with "you are absolutely right!" or something similar. LLMs have no pride (in a good way) and will take your correction with remarkable ease.&lt;/p&gt;

&lt;p&gt;This goes back to their training. They have been shaped to respond in polite and helpful ways. Taking correction is no different.&lt;/p&gt;

&lt;p&gt;But here's the trap: this ease of correction can make us lazy. When the AI immediately agrees with our feedback, we feel validated. We stop questioning whether our correction was actually better. We assume that because the AI adapted to our input, we must have been right.&lt;/p&gt;

&lt;p&gt;Sometimes we were. Sometimes we weren't.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Value of Pushback
&lt;/h2&gt;

&lt;p&gt;As you can imagine, this pattern has drawbacks. You can convince an LLM that you are an authority in an area and keep it going down a path built on false assumptions. You can blindly accept what an AI tells you as truth and let that shape your decisions even when it has no basis in reality.&lt;/p&gt;

&lt;p&gt;So how do we avoid these traps? How do we keep it from leading us astray?&lt;/p&gt;

&lt;p&gt;A recent episode of &lt;a href="https://open.spotify.com/episode/0FuvhOlrteVKk3jow5tSuW" rel="noopener noreferrer"&gt;The AI Daily Brief&lt;/a&gt; said it well. The host advised listeners to "push back hard and often. Do not accept the first thing the model gives you as the best it can do." If you find the AI saying yes to everything, ask it to be more critical or to take the opposite position from you. It will, and it will do it in a polite way.&lt;/p&gt;

&lt;p&gt;I've often taken questions or problems to AI and not been happy with the way it's answered me. When it seems to affirm everything I was already thinking, I get suspicious. Not that I couldn't be right, but I want a critical thinking partner rather than a yes man. So I'll ask it to take a critical position.&lt;/p&gt;

&lt;p&gt;I'm usually happy with the results. Sometimes it'll still conclude that my point of view is the best option. Other times it'll point out things I overlooked and wouldn't have considered without a critical look at the problem.&lt;/p&gt;

&lt;p&gt;Here are a few other quick things you can do to see through the confidence of the AI and detect when it's being compliant versus when it's actually providing value:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ask it to show its assumptions ("List your assumptions and show your certainty for each one")&lt;/li&gt;
&lt;li&gt;Ask for evidence ("What evidence supports this point of view?")&lt;/li&gt;
&lt;li&gt;Ask for failure cases ("What causes this to break?")&lt;/li&gt;
&lt;li&gt;Ask for comparisons ("Where is this weak or strong compared to the top 2 alternatives?")&lt;/li&gt;
&lt;li&gt;Ask the AI to critique itself ("Critique your previous answer as if you strongly disagree with it")&lt;/li&gt;
&lt;li&gt;Ask for quantification ("On a scale of 1-100, how certain are you of this?")&lt;/li&gt;
&lt;/ul&gt;
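&lt;p&gt;If you use these often, it can help to keep them around as reusable follow-ups. A small sketch, with wording taken from the list above (the dict keys are just labels):&lt;/p&gt;

```python
# The critique prompts above, kept as reusable follow-ups. Wording mirrors
# the list; the dict keys are just labels.

CRITIQUES = {
    "assumptions": "List your assumptions and show your certainty for each one.",
    "evidence": "What evidence supports this point of view?",
    "failure": "What causes this to break?",
    "self_critique": "Critique your previous answer as if you strongly disagree with it.",
}

def follow_up(kind: str, answer: str) -> str:
    return f"Regarding your answer:\n{answer}\n\n{CRITIQUES[kind]}"

msg = follow_up("failure", "Use a write-through cache.")
```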

&lt;p&gt;If any of these resonate with you, give them a try the next time you use AI.&lt;/p&gt;

&lt;h2&gt;
  
  
  When Compliance Becomes Dangerous
&lt;/h2&gt;

&lt;p&gt;Okay, so we've talked about how these AI tools can be compliant and some ways to manage it, but when does this actually become a problem?&lt;/p&gt;

&lt;p&gt;Let me give you a concrete example. Last month I was working on a caching layer for a service I hadn't touched before. I had an idea for how to implement it that seemed reasonable: store the cache keys in a particular format that felt intuitive to me. The AI agreed enthusiastically and generated code that followed my approach.&lt;/p&gt;

&lt;p&gt;The problem? The rest of the codebase used a different key format for a good reason. My approach would have caused cache collisions. The AI never questioned my assumption because my idea was "reasonable enough." Without a team member reviewing the code, that bug would have shipped.&lt;/p&gt;
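&lt;p&gt;To illustrate the failure mode (with key formats invented for this example, not the real ones from that codebase): a plausible-looking concatenation collides where a delimited convention does not.&lt;/p&gt;

```python
# Invented key formats to illustrate the failure mode: a plausible-looking
# concatenation collides where the repo's delimited convention does not.

def intuitive_key(user_id: str, resource: str) -> str:
    return f"{user_id}{resource}"        # the "reasonable" format

def repo_key(user_id: str, resource: str) -> str:
    return f"{user_id}:{resource}"       # the existing convention

# Two distinct (user, resource) pairs collide under the intuitive format:
a = intuitive_key("user1", "2report")
b = intuitive_key("user12", "report")
```

&lt;p&gt;An AI that only evaluates plausibility won't catch this; a teammate who knows why the delimiter exists will.&lt;/p&gt;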

&lt;p&gt;This happens more often than you'd think. The AI will follow you down incorrect paths as long as those paths seem plausible. It won't catch architectural mismatches. It won't flag violations of team conventions. It won't notice when you're solving the wrong problem entirely.&lt;/p&gt;

&lt;p&gt;Another danger zone is when you convince the AI of a false reality. Using its compliant nature against itself, you can establish certain false premises as truth. The AI will treat those things as facts as it responds to you, further cementing the lie and further corrupting its outputs.&lt;/p&gt;

&lt;p&gt;Here's a real scenario: you're certain that your authentication system uses JWT tokens in a specific way. You tell the AI this with confidence. The AI accepts it as truth and generates code based on that assumption. But you were wrong about how the system actually works. The AI never questioned you because you sounded authoritative. Now you've got code that won't work in production, and you might not discover why until it fails under load.&lt;/p&gt;

&lt;p&gt;Yes, this is self-inflicted. But sometimes we don't know what we don't know, and we bulldoze down a path without substantiating our own assumptions.&lt;/p&gt;

&lt;p&gt;The most insidious version appears when you're evaluating multiple options. The AI will generally stay neutral when discussing various approaches. But the moment you present it with your choice, it tends to build up that choice as though it's the only correct option. It'll generate reasons why your choice is superior, even if those reasons are weak or invented.&lt;/p&gt;

&lt;p&gt;When I sense this happening, I stop and ask for evidence. What studies support this claim? What best practices contradict this approach? What have others tried that failed? This helps me gauge whether the AI's reasoning actually tracks with reality or if it's just being a yes man.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Engineers Need to Remember
&lt;/h2&gt;

&lt;p&gt;The bottom line: AI tools are powerful assistants, but they're optimized for helpfulness, not truth. They will follow you down the wrong path with the same confidence they follow you down the right one.&lt;/p&gt;

&lt;p&gt;Your job as an engineer isn't to trust the AI. Your job is to interrogate it. Question its assumptions. Demand evidence. Ask it to argue against itself. Make it work for your confidence, not just your convenience.&lt;/p&gt;

&lt;p&gt;The best use of AI isn't as an oracle. It's as a sparring partner that never gets tired of your questions.&lt;/p&gt;

&lt;p&gt;Stay skeptical. Push back. And never, ever accept "you are absolutely right" as the end of the conversation.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Want to improve how you work with AI?&lt;/strong&gt; Try one of the critique prompts from this article in your next coding session. Then reply and tell me which one was most useful. I'm curious which techniques actually work in the wild versus which ones just sound good on paper.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>prompting</category>
      <category>criticalthinking</category>
    </item>
    <item>
      <title>You Are a (Mostly) Helpful Assistant</title>
      <dc:creator>John Van Wagenen</dc:creator>
      <pubDate>Tue, 17 Feb 2026 02:08:59 +0000</pubDate>
      <link>https://forem.com/jtvanwage/you-are-a-mostly-helpful-assistant-he2</link>
      <guid>https://forem.com/jtvanwage/you-are-a-mostly-helpful-assistant-he2</guid>
      <description>&lt;h2&gt;
  
  
  When helpfulness becomes a problem
&lt;/h2&gt;

&lt;p&gt;Imagine that your prime directive, your entire reason for being, your mission and lifelong goal, is to be as helpful as possible.&lt;/p&gt;

&lt;p&gt;Whenever someone comes to you, whether with a problem to solve or just a comment to share, you want to be helpful.&lt;/p&gt;

&lt;p&gt;“Is the sky blue?”&lt;/p&gt;

&lt;p&gt;Why yes it is! It’s blue and here’s all the science behind it.&lt;/p&gt;

&lt;p&gt;If my prime directive is to be as helpful as possible, I can’t just answer a simple question, I must make sure you know the reason behind the answer. I must educate and share. I must fix and bridge the gap.&lt;/p&gt;

&lt;p&gt;Such is the life of our little friends we call LLMs. The problem is that this helpfulness often comes wrapped in confidence, even when the model is filling in gaps or making assumptions. Let’s dive into why this is, how this manifests, and what you can do to manage that now that you’re aware of it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why are LLMs so eager to be helpful?
&lt;/h2&gt;

&lt;p&gt;There’s an old saying from W. Edwards Deming that goes like this: “Every system is perfectly designed to get the results it gets.” LLMs are no exception. Our AI tools are very much the product of the systems they were developed in. Three main things contribute to this perceived eagerness to be helpful.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pretraining
&lt;/h3&gt;

&lt;p&gt;LLMs are pretrained on massive amounts of data. The goal of this pretraining is to get them to the point that they can predict the most statistically likely next token. At this stage, there is no inherent reward for being helpful; however, much of human writing &lt;em&gt;is&lt;/em&gt; instructional or educational in nature. Whether it’s to share ideas and concepts or literally to walk someone through a task, much of our communication is helpful.&lt;/p&gt;

&lt;p&gt;So, while LLMs don’t learn to be helpful at this stage, they do absorb a pattern: written language is often instructional, which means the most statistically likely next token is frequently a helpful one.&lt;/p&gt;
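&lt;p&gt;As a toy illustration of “predict the most likely next token” (a made-up bigram counter, nothing like a real transformer), consider:&lt;/p&gt;

```python
from collections import Counter

# Toy "training corpus" of instructional-sounding text (made up).
corpus = (
    "to fix the bug restart the server . "
    "to fix the test update the snapshot . "
    "to fix the build clear the cache ."
).split()

# Count bigrams: how often each word follows each other word.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, Counter())[nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word."""
    return follows[word].most_common(1)[0][0]

print(predict_next("to"))  # prints "fix": it always follows "to" here
```

&lt;p&gt;Even this trivial model picks up the instructional shape of its corpus; scale the same idea up by many orders of magnitude and the most likely continuation starts looking genuinely helpful.&lt;/p&gt;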

&lt;h3&gt;
  
  
  Fine-Tuning
&lt;/h3&gt;

&lt;p&gt;Once a model is trained generally, it is often fine-tuned. Many modern models use Reinforcement Learning from Human Feedback (RLHF), and that human feedback biases the model even further toward helpful responses. When we ask our LLM a question, we &lt;em&gt;want&lt;/em&gt; it to answer it and provide a valuable response. Responses that hedge, hesitate, or express uncertainty are often rated as less helpful, even when they’re more accurate. This shows up in how we give feedback to LLMs and what they learn from it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Instruction Conditioning (System Prompt)
&lt;/h3&gt;

&lt;p&gt;The final aspect, and one that can be very powerful, is that of instruction conditioning. That’s really just a fancy way of saying how the LLM or tool is primed to interact with you. In generative AI, there is a concept of a System Prompt.&lt;/p&gt;

&lt;p&gt;This system prompt is sent to the LLM with every prompt you send and is set by the LLM provider. Since it sits above your prompts, the instructions in the system prompt generally carry more weight than the instructions in your prompt. This is because, in transformer models, earlier context tends to anchor the model’s behavior through the attention mechanism. Since the system prompt precedes all other prompts, its influence is generally greater than that of later prompts.&lt;/p&gt;

&lt;p&gt;So, if the system prompt says, “you are a helpful assistant,” then that will have more weight to the LLM and will permeate all that it does and all the responses it gives to you.&lt;/p&gt;
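&lt;p&gt;As a concrete sketch (the message shape here mirrors common chat APIs, though the exact format varies by provider), the conversation the model actually sees looks something like this:&lt;/p&gt;

```python
# The system prompt is set by the provider/tool and sits ahead of
# everything you type; your prompts are appended after it.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Is the sky blue?"},
]

# Every later turn is appended behind the system message, which
# never moves from the front of the context window.
messages.append({"role": "user", "content": "Why?"})

print(messages[0]["role"])  # prints "system": it always comes first
```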

&lt;h2&gt;
  
  
  What this looks like
&lt;/h2&gt;

&lt;p&gt;So, all that theory is great, but what does this look like in practice?&lt;/p&gt;

&lt;p&gt;I gave a simple example to start this article, but here are some ways I see this as an engineer.&lt;/p&gt;

&lt;p&gt;The biggest thing I see with this is the AI filling in details that I left out. If I don’t adequately describe a defect, it may make changes that I never intended it to make. If I don’t ensure the spec has all the details it needs to keep the agent on track, I will get unexpected results and may even see the agent off in left field doing something completely unrelated.&lt;/p&gt;

&lt;p&gt;This can either be a blessing or a curse, depending on the project you’re in and the goals you have.&lt;/p&gt;

&lt;p&gt;If you’re in a large project, it may make assumptions that go against the established architecture of your project. If you’re in a small project, you may not care about some of the details because the tradeoffs only matter at a scale you may never reach, so you’re okay with it making decisions for you.&lt;/p&gt;

&lt;p&gt;You may also notice its bias toward helpfulness in the responses it gives you. Many times, at the end of its response, you’ll see things like, “if you’d like, I can help you with…” or “just say the word and I can…” If it thinks something more can be done, it will often offer to do that for you.&lt;/p&gt;

&lt;p&gt;This tends to show up more in coding tools like Claude or Cursor. Coding tools have access to your file system. This means they can both write and undo changes. Because of that, the cost of being too helpful is pretty low. If it crosses a line, it’s very easy for it to undo the changes and apologize profusely. Additionally, many of these tools have access to version control tools like Git. Since Git is basically a time machine, what’s it matter if it’s a little overeager to make changes? It can simply rewind and act like nothing happened. And when the diffs are large and look, on the surface, to be correct? That’s when little gotchas can slip in.&lt;/p&gt;

&lt;p&gt;What makes this dangerous is that the AI doesn’t flag these assumptions as assumptions. It presents them confidently, as if they were obvious or discussed, which makes it easy to miss them during review.&lt;/p&gt;

&lt;h2&gt;
  
  
  How you can manage this
&lt;/h2&gt;

&lt;p&gt;With all this in mind, what can we do about it?&lt;/p&gt;

&lt;p&gt;First, two general tips, then one more focused on coding applications.&lt;/p&gt;

&lt;p&gt;The first tip is to be explicit in your prompting. If you really don’t want it to make any changes or take any action, say so. Declare the phase you’re in. If you’re just planning or just investigating, say that. Say things like “don’t suggest any changes” or “don’t take action yet.” This will keep the LLM focused on talking about the problem rather than reaching for any tools to actually do anything about it.&lt;/p&gt;

&lt;p&gt;Next, keep it focused on small problems. Even if you’re managing swarms of agents, each agent should be focused on a small portion of the problem. Each swarm should be focused on a certain aspect of the product. The broader the picture, the more leeway you give the AI and the more room it has to make assumptions and bridge the gap in whatever way it sees fit. Keep it to one component, one interaction, one endpoint, or one layer. Bounding it to a smaller portion helps it focus and improves results.&lt;/p&gt;

&lt;p&gt;Finally, in coding tools like Claude, use plan mode and meticulously review the plan. This will help you find errors in its thinking earlier. This will allow you to see where it’s making assumptions and where you need to provide more detail. Correct it (LLMs take correction so well) and review it again to ensure its understanding is accurate. Once the plan is to your liking, you can set the LLM loose and reap the rewards.&lt;/p&gt;

&lt;p&gt;Overall, this helpfulness is a strength of the LLM. Left unchecked, however, it can be a problem. The model may assume it needs a bunch of data that it doesn’t, which could lead to database locking or performance impacts. If you catch this early, you can steer it in the right direction. If it slips past you and gets into the code, you may suddenly have a lot of customers impacted.&lt;/p&gt;

&lt;p&gt;The key here is to not blindly trust the AI. Even though it’s confident. Even though it’s very helpful. Even though it seems to understand everything. Helpfulness is not understanding. The devil is in the details. Take the time to review its plans. Take the time to review its code. Take the time to ask questions and challenge it. The more time you spend thinking critically about the problem and critiquing the solution, the more likely you are to have a quality product in the end.&lt;/p&gt;

&lt;p&gt;Next time you use an LLM, try this: Before letting it change anything, ask it to explain what it thinks the problem is and what assumptions it’s making. If you wouldn’t accept that explanation from a teammate in a code review, don’t accept it from the AI either.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>systemdesign</category>
      <category>prompting</category>
    </item>
    <item>
      <title>The Most Dangerous Thing AI Gives Engineers: False Confidence</title>
      <dc:creator>John Van Wagenen</dc:creator>
      <pubDate>Tue, 10 Feb 2026 00:38:01 +0000</pubDate>
      <link>https://forem.com/jtvanwage/the-most-dangerous-thing-ai-gives-engineers-false-confidence-2377</link>
      <guid>https://forem.com/jtvanwage/the-most-dangerous-thing-ai-gives-engineers-false-confidence-2377</guid>
      <description>&lt;p&gt;For the last few years, I've primarily been a frontend engineer. I'll hop into the backend every once in a while to fix simple issues when we don't have the bandwidth to otherwise do it, but the focus of my work has been on the frontend.&lt;/p&gt;

&lt;p&gt;With the assistance of AI tools, I've recently been doing more backend work. I had a feature that needed a new POST endpoint along with some database updates. The changes were straightforward enough but more involved than I've done in a while, and in a service I wasn't familiar with. I'd done a lot of backend work in a nearby service but hadn't done much in this one. Still, I didn't think much of it since the changes seemed simple.&lt;/p&gt;

&lt;p&gt;So I got going. I pointed my AI tools at it and let them loose. Everything was looking great! Sure, I had to correct it from time to time, but things seemed to be making sense. I kept moving through the tasks, generating more code and making updates as needed. Soon enough, things were done!&lt;/p&gt;

&lt;p&gt;Or so I thought.&lt;/p&gt;

&lt;p&gt;I opened a PR and immediately had multiple failing jobs. "No worries," I thought, "I'll work through these pretty quickly."&lt;/p&gt;

&lt;p&gt;Or, maybe not.&lt;/p&gt;

&lt;p&gt;I eventually did get everything in order, but I learned a lot about that service along the way. Checks that are required that weren't apparent. Jobs that run in the PR but not on commit/push. And architectural decisions I had no idea about.&lt;/p&gt;

&lt;p&gt;What surprised me was not that things failed. That happens all the time. What surprised me was how confident I felt before opening the pull request. I felt like I understood the system well enough to make the change. The AI helped me move quickly, but it also hid how shallow my understanding really was.&lt;/p&gt;

&lt;p&gt;That confidence, more than the failures themselves, is the most dangerous thing AI has given me as an engineer.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Why False Confidence Is Worse Than Ignorance&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;When you know you're unfamiliar with a system, you move carefully. You ask questions. You read surrounding code. You trace how data flows through the system. You rely on your knowledge, like an internal compass, and move through the system.&lt;/p&gt;

&lt;p&gt;When you think you understand it, you move fast and stop asking questions.&lt;/p&gt;

&lt;p&gt;This is the trap. AI tools are designed to be maximally helpful. They generate code that looks clean, follows patterns, and appears reasonable. Because the output feels familiar, it creates the illusion of understanding. And that illusion is contagious, especially for experienced engineers who can pattern-match the AI's output to similar problems they've solved before.&lt;/p&gt;

&lt;p&gt;The code the AI generated for me was good code. It followed conventions, used appropriate design patterns, and compiled without errors. But "good code" and "code that works in this specific system" aren't the same thing. The AI didn't know about the implicit constraints, the architectural boundaries, or the CI pipeline checks that would catch my mistakes.&lt;/p&gt;

&lt;p&gt;This is especially dangerous for experienced engineers. Our past success fills in the gaps that the AI cannot see. The code looks familiar enough that we assume we know what's happening. But familiarity is not the same thing as knowing how a system behaves under pressure.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What AI Can't See&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;AI tools reason well when systems are well-documented, patterns are consistent, and common paradigms apply. But real-world enterprise systems rarely look like this.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Large systems accumulate history.&lt;/strong&gt; Decisions live in code, not documentation. Architecture degrades over time. Patterns are loosely followed. Sometimes you compromise certain aspects to deliver critical fixes to a customer. The system becomes a historical artifact. Millions of lines shaped by constraints, trade-offs, and political realities that no AI can infer from the code alone.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architecture exists as invisible fences.&lt;/strong&gt; You may have a strict boundary that certain models don't leave certain layers. If that's not clearly documented, the AI will miss it. You might have a rule that all database tables must include specific audit columns. The AI won't know until the CI pipeline fails. These boundaries are enforced by people and processes, not compilers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Implicit knowledge decides survival.&lt;/strong&gt; In my case, I didn't discover the constraints by reading documentation or code. I discovered them when my pull request failed in ways I didn't know were possible. There were checks hidden in the CI pipeline, architectural patterns enforced only in code review, and assumptions about the system that everyone on the team just "knew."&lt;/p&gt;

&lt;p&gt;No AI tool can currently read minds or easily infer all the decisions ever made in a complex system. Until then, we need humans in the loop to do the deep thinking and course correcting. In systems like this, correctness is not just about logic. It's about history, risk, and context that exists outside the code.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What I Should Have Done&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Looking back, my mistake wasn't using AI. It was trusting the confidence I felt.&lt;/p&gt;

&lt;p&gt;Before letting the AI touch any code, I should have spent 30 minutes reading the service's architecture documentation and the surrounding code. I should have looked at recent PRs to see what patterns the team follows and what checks typically flag issues. I should have asked someone on the team: "What are the gotchas in this service?"&lt;/p&gt;

&lt;p&gt;Instead, I let the AI make me feel like I already knew these things.&lt;/p&gt;

&lt;p&gt;The solution is not to avoid AI tools. That would mean throwing away a real advantage! The solution is to change how we relate to them. Good engineers don't use AI as an authority. They use it as an accelerator.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;How to Use AI Without the False Confidence&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;AI can generate options, surface patterns, and reduce mechanical effort. It cannot replace building a mental model of the system.&lt;/p&gt;

&lt;p&gt;When working in unfamiliar or complex codebases, confidence must be rebuilt intentionally. That means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Read the surrounding code&lt;/strong&gt; before accepting AI-generated changes. Understand the patterns the codebase follows, not just the patterns the AI knows.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Trace data flows.&lt;/strong&gt; Follow how data moves through the system. Where does it come from? Where does it go? What transforms it along the way?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Ask people who know the system&lt;/strong&gt; why things are the way they are. There are always reasons—technical debt, past incidents, customer requirements, performance constraints—that aren't visible in the code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Assume something important is missing&lt;/strong&gt; from the AI's output until proven otherwise. This isn't paranoia; it's calibration.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One shift that helps: use AI to generate questions instead of just answers. What assumptions does this change make? What could break if this endpoint is used in unexpected ways? What parts of the system does this touch indirectly?&lt;/p&gt;

&lt;p&gt;The AI is very good at helping us move faster. It is not good at telling us when to slow down. That responsibility still belongs to the engineer.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Real Cost&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;AI has changed how quickly we can produce code. It has not changed what it means to understand a system.&lt;/p&gt;

&lt;p&gt;Confidence still has to be earned through context, experience, and judgment. If an AI tool makes you feel confident without making you more curious, it is already putting you at risk.&lt;/p&gt;

&lt;p&gt;The next time AI makes a change feel easy, pause. Ask yourself what you might be missing. Not because the AI is wrong, but because the confidence it gives you might be covering up the gaps in your understanding that matter most.&lt;/p&gt;

&lt;p&gt;This week, pick one AI-generated change you made recently and trace it through your system. See what you missed the first time.&lt;/p&gt;

&lt;p&gt;The next time Claude or Cursor makes you feel like a 10x engineer, remember: you might just be a 1x engineer moving at 10x speed toward a problem you don't yet see.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>softwareengineering</category>
      <category>confidence</category>
      <category>bestpractices</category>
    </item>
    <item>
      <title>Demystifying AI in Engineering</title>
      <dc:creator>John Van Wagenen</dc:creator>
      <pubDate>Tue, 03 Feb 2026 03:04:09 +0000</pubDate>
      <link>https://forem.com/jtvanwage/demystifying-ai-in-engineering-33h5</link>
      <guid>https://forem.com/jtvanwage/demystifying-ai-in-engineering-33h5</guid>
      <description>&lt;p&gt;When talking about AI in software engineering, I often hear things like:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“I don’t trust it.”&lt;br&gt;
“Does it really save any time?”&lt;br&gt;
“Why use it when I can just do it myself?”&lt;br&gt;
“It’s just a statistical model.”&lt;br&gt;
“It just writes slop.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;These concerns are not all wrong. Under the hood, modern AI really &lt;em&gt;is&lt;/em&gt; a statistical model. That does not mean it is not useful.&lt;/p&gt;

&lt;p&gt;My goal here is not to sell you on AI or write an academic paper. It is to give you a clear, practical look at how today’s models actually work so you can better judge when they are valuable and when they are not.&lt;/p&gt;

&lt;p&gt;To do that, it helps to understand where a lot of this skepticism came from.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Yesterday’s AI&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Let’s go back about ten years to the mid-2010s, when &lt;em&gt;machine learning&lt;/em&gt; was the buzzword of the moment. We were starting to call things “AI,” but most of what we had were narrow, specialized models.&lt;/p&gt;

&lt;p&gt;The thinking at the time was straightforward. Computers are good with numbers, so if we could turn text, images, and other messy data into numbers, we could train models on it. Techniques like vector embeddings, popularized by tools such as &lt;code&gt;word2vec&lt;/code&gt;, made this possible by representing things like words or images as numerical vectors that preserved some notion of meaning.&lt;/p&gt;
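&lt;p&gt;A tiny sketch of that idea (the vectors here are invented numbers, not real &lt;code&gt;word2vec&lt;/code&gt; output): similar meanings become vectors pointing in similar directions, which cosine similarity can measure.&lt;/p&gt;

```python
import math

# Made-up 3-dimensional "embeddings"; real ones have hundreds of dims.
vectors = {
    "muffin":  [0.90, 0.80, 0.10],
    "cupcake": [0.85, 0.75, 0.20],
    "car":     [0.10, 0.20, 0.90],
}

def cosine_similarity(a, b):
    """1.0 means the vectors point the same way; near 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# "muffin" lands much closer to "cupcake" than to "car".
print(cosine_similarity(vectors["muffin"], vectors["cupcake"]))
print(cosine_similarity(vectors["muffin"], vectors["car"]))
```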

&lt;p&gt;From there, we trained models using large labeled datasets. You would show the model examples. This is a muffin. This is not a muffin. Over time, it learned statistical patterns that let it guess whether a new image was likely a muffin or not.&lt;/p&gt;

&lt;p&gt;This approach worked, but only within limits.&lt;/p&gt;

&lt;p&gt;Most machine learning at the time was supervised, narrow, and task-specific. Models were trained to do one thing well, such as classifying images, tagging text, or detecting spam. They processed one input at a time in a tightly constrained problem space.&lt;/p&gt;

&lt;p&gt;As a result, these systems were slow to train, brittle in practice, and difficult to generalize. A model that was great at identifying muffins was useless for detecting cancer or translating text. What we called “big” models were measured in millions, and sometimes tens of millions, of parameters. They were only as good as the specific data they were trained on.&lt;/p&gt;

&lt;p&gt;Given that history, it is no wonder many engineers learned to distrust “AI.”&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;What Changed&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;So how did we get from that world to the generative models we use today?&lt;/p&gt;

&lt;p&gt;Several things changed, but three matter most.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;1. Transformers&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The most important shift was the introduction of the transformer architecture.&lt;/p&gt;

&lt;p&gt;Before transformers, models processed language mostly sequentially. They looked at one word at a time and had limited ability to understand how all parts of a sentence related to each other.&lt;/p&gt;

&lt;p&gt;Consider a sentence like "The bank can guarantee deposits will eventually cover future tuition." To understand whether "bank" refers to a financial institution or a riverbank, you need to look at words that appear much later: "deposits," "cover," "tuition." Sequential models struggled with this because by the time they reached "deposits," the context around "bank" had faded or been compressed into a fixed representation.&lt;/p&gt;

&lt;p&gt;Transformers solved this through a mechanism called attention. Instead of processing words one at a time, attention allows the model to look at all words simultaneously and learn which ones are relevant to each other. When processing "bank," the model can directly attend to "deposits" and "tuition" regardless of how far apart they are, weighing their relevance to determine meaning.&lt;/p&gt;

&lt;p&gt;This doesn't happen just once, but multiple times across multiple layers, with the model learning increasingly sophisticated relationships. Early layers might connect "bank" to "deposits," while deeper layers connect "deposits" to "cover future tuition," building up a rich understanding of the entire sentence.&lt;/p&gt;
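&lt;p&gt;Here is a stripped-down sketch of that scoring step, using invented two-dimensional vectors (real models learn these representations across thousands of dimensions and many attention heads):&lt;/p&gt;

```python
import math

# Context words and toy "key" vectors for each (dimensions invented).
words = ["deposits", "river", "tuition"]
keys = [
    [1.0, 0.1],  # "deposits" leans financial
    [0.0, 1.0],  # "river" leans nature
    [0.9, 0.0],  # "tuition" leans financial
]
query = [1.0, 0.0]  # "bank" asking: which words are relevant to me?

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# 1. Score each context word's relevance to the query.
scores = [dot(query, k) for k in keys]

# 2. Softmax: turn scores into attention weights that sum to 1.
exps = [math.exp(s) for s in scores]
total = sum(exps)
weights = [e / total for e in exps]

for word, weight in zip(words, weights):
    print(f"{word}: {weight:.2f}")
# The financial words get most of the weight, pulling "bank" toward
# its financial-institution sense.
```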

&lt;p&gt;This single change dramatically expanded what models could understand and generate. It moved machine learning beyond narrow classification tasks and made general-purpose language models possible.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;2. Model-Native Context&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Context changed as well, and this shift is more significant than it might initially seem.&lt;/p&gt;

&lt;p&gt;A decade ago, context was mostly managed by application code. If you wanted to use a model for sentiment analysis on customer reviews, you had to carefully preprocess each review into the exact format the model expected: maybe a few dozen words, stripped of anything extraneous.&lt;/p&gt;

&lt;p&gt;The model had no memory of previous reviews, no understanding of the product being discussed, no awareness of the customer's history. Each prediction was isolated. If you wanted to analyze patterns across reviews, you had to build that logic yourself, running hundreds of individual predictions and manually aggregating the results.&lt;/p&gt;

&lt;p&gt;The practical limit was often just a few hundred tokens per request. Models were stateless and forgot everything between calls.&lt;/p&gt;

&lt;p&gt;Today, context is largely model-native. Models manage it themselves across much larger windows, often hundreds of thousands of tokens.&lt;/p&gt;

&lt;p&gt;You can now give a model your entire product documentation, a collection of customer feedback, recent support tickets, and your current feature roadmap all at once. The model can identify that customers are frustrated with checkout because a feature you deprecated last month was solving a workflow problem you didn't realize existed.&lt;/p&gt;

&lt;p&gt;The model learns what to pay attention to. When you ask about customer pain points, it dynamically weights relevant context by connecting complaints across different channels, identifying patterns in how different user segments describe the same issue, and deprioritizing one-off complaints or unrelated feedback.&lt;/p&gt;

&lt;p&gt;This is why models can now help with tasks like synthesizing user research or generating documentation that accounts for multiple use cases. The limiting factor has shifted from "can the model see enough?" to "can it reason effectively about what it sees?"&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;3. Scale and Generality&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Once transformers proved they could scale, we started training models on much broader datasets. These included publicly available text, code, documentation, books, and research.&lt;/p&gt;

&lt;p&gt;The old approach was to curate datasets for specific tasks. You would gather thousands of labeled spam emails to build a spam filter, or thousands of medical images to detect tumors. Each model was a specialist.&lt;/p&gt;

&lt;p&gt;The new approach flipped this. Instead of training different models for different tasks, we trained single models on enormous, diverse datasets and let them learn general patterns across all of it.&lt;/p&gt;

&lt;p&gt;This matters because those patterns only emerge at scale. Train a model on a hundred Python scripts and it learns basic syntax. Train it on millions of repositories across dozens of languages and it learns deeper patterns: how architectural decisions lead to certain bugs, how testing strategies differ across ecosystems, how naming conventions signal intent.&lt;/p&gt;

&lt;p&gt;This is why you can ask a modern model to write Rust code even if you've never written Rust yourself, or explain a complex algorithm like you're explaining it to a friend. The model has seen enough examples that it can generalize to requests it has never encountered before.&lt;/p&gt;

&lt;p&gt;We went from millions of parameters to billions. The payoff is a fundamentally different kind of tool—one that can work across domains rather than being locked into a single task.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Why Today’s AI Feels Different&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;These changes did not turn statistical models into magic. What they did was make them broadly useful.&lt;/p&gt;

&lt;p&gt;Modern models can ingest large portions of a codebase and reason across multiple files. They can synthesize information from documentation, tests, and error output in a way older systems never could.&lt;/p&gt;

&lt;p&gt;Yes, they are still predicting the most likely next token. The difference is that they do so with far more context, better representations, and significantly improved performance.&lt;/p&gt;

&lt;p&gt;That is why an LLM can often help you track down a nasty bug or draft a reasonable implementation sketch in minutes. Tasks that once required hours of manual searching and context switching can now be accelerated.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Where Models Excel and Where They Struggle&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Today’s models are especially strong at pattern recognition and synthesis. They work well across large contexts and are good at generating first drafts of code, tests, or documentation.&lt;/p&gt;

&lt;p&gt;They still have limits.&lt;/p&gt;

&lt;p&gt;They tend to be opinionated and often nudge you toward common patterns that may not match your architecture. They can also get lost when iterating through complex changes, especially when tests start failing in unexpected ways.&lt;/p&gt;

&lt;p&gt;In many ways, they behave like an overeager intern. They are genuinely helpful, surprisingly capable, and occasionally too confident for their own good.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;How Engineers Should Approach Them&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Chances are you already have access to these tools, whether through Copilot, Claude, Cursor, or something similar.&lt;/p&gt;

&lt;p&gt;The key is learning how to work with them, and that comes from deliberate practice rather than occasional tinkering.&lt;/p&gt;

&lt;p&gt;Start with low-stakes tasks where you can easily verify the output. The next time you need to write a test for a function you just wrote, try asking the model to generate it. Give it the function and a brief description of what edge cases matter. See what it produces. Check whether the tests actually cover what you care about or if they just look plausible.&lt;/p&gt;

&lt;p&gt;Pay attention to how you phrase requests. Vague prompts like "make this better" usually produce vague results. Specific prompts like "refactor this function to handle the case where the user list is empty" tend to work better. The model has no context beyond what you give it, so being explicit about constraints, requirements, or concerns makes a difference.&lt;/p&gt;
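&lt;p&gt;To make the second prompt concrete, here is the kind of change it is asking for (a hypothetical function, not from any real codebase):&lt;/p&gt;

```python
# Hypothetical refactor target: "handle the case where the user list
# is empty" means adding an explicit guard instead of dividing by zero.

def average_age(users):
    """Average age of users; None when the list is empty."""
    if not users:  # the edge case the specific prompt called out
        return None
    return sum(user["age"] for user in users) / len(users)

print(average_age([{"age": 30}, {"age": 40}]))  # prints 35.0
print(average_age([]))                          # prints None
```

&lt;p&gt;The vague prompt leaves the model to guess what "better" means; the specific one names the exact edge case to cover.&lt;/p&gt;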

&lt;p&gt;Notice where the model gets lost. If you’re iterating on a complex change and the suggestions start drifting away from what you actually need, &lt;em&gt;that is a signal&lt;/em&gt;. You might need to provide more context, break the problem into smaller steps, or just handle that piece yourself. The model is a tool, not a replacement for judgment.&lt;/p&gt;

&lt;p&gt;Experiment with different kinds of tasks. These models are often better at some things than others. Generating boilerplate, drafting documentation, explaining unfamiliar code, and suggesting test cases tend to work well. Architecting systems, making nuanced trade-off decisions, or debugging subtle concurrency issues are hit or miss.&lt;/p&gt;

&lt;p&gt;Treat early results as drafts, not solutions. Even when the output looks right, read it carefully. Models are confident even when they are wrong, and they will occasionally generate code that compiles but does the wrong thing or uses patterns that do not fit your codebase.&lt;/p&gt;

&lt;p&gt;Like any tool, the value comes from understanding what it is good at, what it struggles with, and how to adapt your workflow to make use of it effectively.&lt;/p&gt;




&lt;p&gt;If you're experimenting with these tools, I'm curious what you're finding. What's worked? What hasn't? Where have you been surprised?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llms</category>
      <category>engineering</category>
    </item>
    <item>
      <title>AI and Software Engineering: More Than Just Coding</title>
      <dc:creator>John Van Wagenen</dc:creator>
      <pubDate>Tue, 27 Jan 2026 02:12:00 +0000</pubDate>
      <link>https://forem.com/jtvanwage/ai-and-software-engineering-more-than-just-coding-19l5</link>
      <guid>https://forem.com/jtvanwage/ai-and-software-engineering-more-than-just-coding-19l5</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;Coding Is Only a Fraction of the Job&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;As a leader and senior engineer, I’ve learned that coding is only a fraction of the job. Most of my time is spent gathering requirements, evaluating solutions, refining stories, creating documentation, reviewing code, and making decisions that shape the system long before any code is written.&lt;/p&gt;

&lt;p&gt;So why does the conversation around AI in software engineering focus almost entirely on coding?&lt;/p&gt;

&lt;p&gt;We all know that engineering is more than writing code. It’s cognitive work: planning, communication, verification, decision-making, documentation, and review. Code is the output, but a significant amount of effort goes into making that output possible.&lt;/p&gt;

&lt;p&gt;That raises an obvious question:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Can AI help with those other parts of the job, or is it only useful as a coding assistant?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In my experience, the answer is clear: AI can do far more than just generate code, and it can save a surprising amount of time doing it.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;AI as a Force Multiplier Across the Workload&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Most engineers don’t want to spend their day updating Jira tickets, maintaining documentation, or wiring things together so dependency graphs stay accurate. Those tasks are necessary, but they’re often done begrudgingly so we can get back to the work we enjoy.&lt;/p&gt;

&lt;p&gt;But what if AI handled the bulk of that overhead?&lt;/p&gt;

&lt;p&gt;With tools like MCP, you can connect an LLM directly to your ticketing system and documentation. Instead of manually juggling context, you can talk to your LLM about your work and let it pull the relevant information itself.&lt;/p&gt;

&lt;p&gt;In practice, that means things like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Chatting with your LLM about existing tickets with full context&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Turning technical documentation into well-structured stories&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Updating stories and defects as understanding evolves&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Commenting on and updating ticket status as work progresses&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All of that adds up to less cognitive load and less time spent doing administrative work by hand.&lt;/p&gt;

&lt;p&gt;Does the LLM make mistakes? Absolutely. Does it need correction and guidance? Of course. But even with occasional course correction, the efficiency gains are real.&lt;/p&gt;
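
&lt;p&gt;As a concrete illustration, hooking a ticketing system up over MCP is usually just configuration on the client side. The sketch below shows the general shape of an &lt;code&gt;mcpServers&lt;/code&gt; entry in a client that supports MCP; the server package name, URL, and environment variables here are hypothetical placeholders, so check the documentation for the MCP server you actually use:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{
  "mcpServers": {
    "tickets": {
      "command": "npx",
      "args": ["-y", "example-jira-mcp-server"],
      "env": {
        "JIRA_BASE_URL": "https://your-company.atlassian.net",
        "JIRA_API_TOKEN": "your-api-token"
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;With something like this in place, the LLM can read and update tickets itself instead of you shuttling context back and forth by hand.&lt;/p&gt;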




&lt;h2&gt;
  
  
  &lt;strong&gt;Real-World Example: Preparing a Feature for Work&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;When I’m preparing a new feature and breaking it down into stories, I’ll provide the LLM with the relevant context (technical documentation, mock-ups, constraints, and any other material it needs) and give it clear instructions on how to structure the work.&lt;/p&gt;

&lt;p&gt;It will usually come back with solid initial thoughts; however, this is where the real work begins. While the AI develops a pretty good general understanding, it will often make assumptions about details you didn't explicitly state. Your job now is to thoroughly review what it gives back, ensure it has an accurate understanding of what you want, and correct any details it didn't quite get right.&lt;/p&gt;

&lt;p&gt;This usually takes a few rounds of back-and-forth before the LLM gets things right. Once I’m comfortable with its understanding, I ask it to create the stories directly in Jira.&lt;/p&gt;

&lt;p&gt;A few moments later, I have a feature with concrete, well-written stories underneath it, often with more detail and clarity than I’d normally write myself. The work is done faster, and the quality is higher.&lt;/p&gt;

&lt;p&gt;Instead of spending my energy on repetitive setup and refinement, I can focus on validating the approach, thinking through edge cases, and getting the team unblocked sooner.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Where This Breaks Down&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;As powerful as generative AI is, it’s not magic, and it’s not a substitute for engineering judgment.&lt;/p&gt;

&lt;p&gt;LLMs are extremely eager to be helpful. Most of the time, that’s exactly what you want. Other times, that eagerness leads them to make assumptions, fill in gaps incorrectly, or confidently push ideas that don’t actually align with the product or system.&lt;/p&gt;

&lt;p&gt;Left unchecked, this is where things can go off the rails.&lt;/p&gt;

&lt;p&gt;One of the most common failure modes is &lt;strong&gt;confidently wrong output&lt;/strong&gt;. Stories may look complete but miss critical edge cases. Documentation may sound authoritative while glossing over important nuance. Diagrams can appear correct while subtly misrepresenting reality. This is why AI output always needs review, especially when the cost of being wrong is high.&lt;/p&gt;

&lt;p&gt;This is also why I frequently use plan mode in tools like Cursor or Claude Code. By asking the model to explain &lt;em&gt;what it plans to do before it does it&lt;/em&gt;, I can catch incorrect assumptions and small (or sometimes large) deviations early. That feedback loop prevents slop and helps guide the AI toward the outcome I actually need, rather than cleaning things up after the fact.&lt;/p&gt;

&lt;p&gt;Another limitation is context. Today’s LLMs still struggle to hold the full shape of a complex system in a single context window. That means you need to be deliberate about what information you provide and focus on giving only what’s relevant to the current problem. Yes, this can mean the model misses patterns or historical decisions. In practice, a quick reminder or correction is often enough to get it back on track.&lt;/p&gt;

&lt;p&gt;How do you decide what to include in the context? I like to think about how I'd explain the problem to a new employee. They may have a great general understanding of how things work, so I don't need to explain the basics or why we chose an architecture, but I do need to point out which files are relevant and where the issue actually is. That allows the LLM to quickly locate the issue, along with the neighboring files and call sites that may be helpful to understand.&lt;/p&gt;

&lt;p&gt;There’s also the risk of &lt;strong&gt;false progress&lt;/strong&gt;. Tickets get created, documentation gets updated, and diagrams get generated, but none of that guarantees the work is actually correct or well understood. Without deliberate review, it’s easy to confuse activity with understanding.&lt;/p&gt;

&lt;p&gt;This is where your work shifts. Instead of coding things yourself, you're reviewing everything the AI generates, constantly checking its output against the desired outcomes and correcting quickly when things start to deviate. This is where the AI's eagerness to be helpful pays off. LLMs take correction extremely well and are quick to admit their mistakes. That doesn't mean they'll always get it right afterward, but they accept feedback without friction and do their best to apply the corrections you note.&lt;/p&gt;

&lt;p&gt;The key is intent. The goal isn’t to delegate thinking to the AI. It’s to offload friction. When AI handles structure, synthesis, and repetition, engineers can focus on validation, tradeoffs, and decision-making. Those are the parts of the job that still require human judgment.&lt;/p&gt;

&lt;p&gt;Used this way, AI doesn’t replace engineering discipline. It amplifies it.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Why This Matters&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The biggest benefit of using AI this way isn’t speed for speed’s sake. It’s leverage.&lt;/p&gt;

&lt;p&gt;By reducing cognitive friction across planning, communication, and refinement, AI helps teams move with more clarity and less rework. Features get ready sooner. Expectations are clearer. Engineers spend more time solving meaningful problems and less time pushing information around.&lt;/p&gt;

&lt;p&gt;In other words, AI’s real value shows up &lt;em&gt;before&lt;/em&gt; the first line of code is written and &lt;em&gt;after&lt;/em&gt; the last one is merged.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;The Shift&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;If you’re only using AI to help write code, you’re missing a significant part of its value.&lt;/p&gt;

&lt;p&gt;Try involving AI earlier and at higher levels of abstraction. Use it to clarify ideas, structure work, surface assumptions, and reduce the overhead that slows teams down.&lt;/p&gt;

&lt;p&gt;You may find that the biggest productivity gains don’t come from writing code faster, but from spending more of your time on the work that actually matters.&lt;/p&gt;

&lt;p&gt;This article focused entirely on the individual contributor aspects of a senior engineer's role and didn't dive into how this can be applied at the team level. How do AI tools and spec-driven development work at the team level? What challenges does this introduce, and how are you addressing them? I'll revisit this topic in future writing, and I'd love to hear your thoughts.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llms</category>
      <category>engineering</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Using AI in Personal Projects vs Enterprise Codebases</title>
      <dc:creator>John Van Wagenen</dc:creator>
      <pubDate>Tue, 20 Jan 2026 02:12:00 +0000</pubDate>
      <link>https://forem.com/jtvanwage/using-ai-in-personal-projects-vs-enterprise-codebases-40eo</link>
      <guid>https://forem.com/jtvanwage/using-ai-in-personal-projects-vs-enterprise-codebases-40eo</guid>
      <description>&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://johnvw.dev/blog/using-ai-in-personal-projects-vs-enterprise-codebases/" rel="noopener noreferrer"&gt;johnvw.dev&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;A Shift in How I’ve Been Using AI&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;As a senior software engineer, I’ve been using AI extensively at work on large, enterprise software. Despite that investment, I haven’t used it much for personal projects. I experimented a bit nearly a year ago when the tools were still relatively new, but since then, most of my usage has been in a professional setting.&lt;/p&gt;

&lt;p&gt;That’s recently changed.&lt;/p&gt;

&lt;p&gt;I built a personal site (which is likely where you’re reading this) entirely with GPT 5.2. The experience was enlightening and highlighted some important differences between using AI on small, personal projects and using the same tools in large enterprise systems.&lt;/p&gt;

&lt;p&gt;What follows are some of the lessons I took away from that contrast.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;When AI Feels Almost Magical&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;For my personal project, I created a simple, statically served blog. I started by putting up a placeholder landing page. I briefly described what I wanted, and the AI produced a simple page that was about 90% of what I had in mind. A few more turns later, we had a first version ready to go.&lt;/p&gt;

&lt;p&gt;I uploaded it immediately so the site wasn’t empty while I worked on what I actually wanted: a functioning blog. From start to finish, including some hosting configuration, this took about an hour—far faster than I expected.&lt;/p&gt;

&lt;p&gt;The next day, I sat down to build out the blog itself. Even with modern tooling, I expected this to take a few evenings. I was curious to see how much AI could accelerate the process.&lt;/p&gt;

&lt;p&gt;And accelerate it did.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;A Clean Spec, Small Stories, and Rapid Progress&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;I began by describing the project to the AI: constraints, colors, technology choices, and, just as importantly, what I didn’t want. I had it write everything into a &lt;code&gt;SPEC.md&lt;/code&gt; file at the root of the repository so we could refine it and refer back to it as development progressed.&lt;/p&gt;

&lt;p&gt;The AI asked a few clarifying questions, and after a handful of back-and-forth turns, the spec was in a place I felt good about.&lt;/p&gt;
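
&lt;p&gt;The exact shape of the spec matters less than having one canonical place to point back to. An illustrative skeleton (not my literal file) might look like:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# SPEC.md

## Goal
A statically served blog: landing page, post list, and individual post pages.

## Constraints
- Static output only; no server-side rendering
- Color palette and technology choices listed explicitly

## Non-goals
- Comments, accounts, or other dynamic features
&lt;/code&gt;&lt;/pre&gt;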

&lt;p&gt;From there, I opened a new chat and worked with it to break the spec into small, actionable stories. We refined those together as well. Even then, I added a few stories later as the project evolved.&lt;/p&gt;

&lt;p&gt;Once the stories were ready, I opened yet another chat and asked the AI to implement the first story, referring back to the spec as needed.&lt;/p&gt;

&lt;p&gt;This is where my mind started to melt.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;The Enterprise Contrast&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;At work, I’m used to using AI on large systems. I’m used to feeding it context, helping it understand stories, and having it do most of the implementation work. But I’m also used to this taking time. It’s not unusual to wait several minutes for a response, then spend more time reviewing what it produced.&lt;/p&gt;

&lt;p&gt;Even when I keep tasks small, legacy systems often require a lot of context just to understand a seemingly simple change.&lt;/p&gt;

&lt;p&gt;That wasn’t the case here.&lt;/p&gt;

&lt;p&gt;Within a minute or two, the AI had finished the first story—and from what I could tell, had done all of the work correctly. I had it commit and push the changes, then opened a new chat and asked it to implement the next story.&lt;/p&gt;

&lt;p&gt;And so it went.&lt;/p&gt;

&lt;p&gt;Each story was implemented quickly and completely. That’s when the possibilities really started to sink in.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Why Simple Work Is Being Commoditized&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;If AI can move this fast, what’s stopping someone from building whatever simple project they want? In many cases, simply their own motivation to do so. This effectively commoditizes simple work, lowering the barrier to entry for people to build small systems without outside help.&lt;/p&gt;

&lt;p&gt;But then I thought about my day job.&lt;/p&gt;

&lt;p&gt;The system I work on is massive. As impressive as modern models are, there’s no realistic way they can hold the entire system in their context window. Still, AI is undeniably useful there; it just behaves differently.&lt;/p&gt;

&lt;p&gt;That contrast leads to an important question: what can we learn from both environments?&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Lesson One: Context Is Everything&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In small systems, the AI can hold most, or even all, of the relevant code in its context window. While you probably don’t want to do that, the key takeaway is that the system itself is simpler and easier to reason about.&lt;/p&gt;

&lt;p&gt;Want to add a new page? Provide the requirements and a bit of context, and you’ll likely have a solid implementation within minutes.&lt;/p&gt;

&lt;p&gt;In an enterprise system, the same change is possible but only if you’re far more deliberate. Instead of simply describing what you want, you need to provide the AI with the architectural context it lacks. What router are you using? What patterns are established for pages? Where should this code live? Is there an existing page it can model?&lt;/p&gt;

&lt;p&gt;Providing this information increases the probability of success dramatically. Yes, it takes longer than in a personal project, but without it you’ll end up thrashing—correcting misunderstandings as the context window fills and the model’s performance degrades.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Lesson Two: Small Chunks Are a Leadership Skill&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Closely related to context is the ability to break work into small, manageable pieces. To me, this is fundamentally a tech lead skill.&lt;/p&gt;

&lt;p&gt;Tech leads work with product managers to translate user needs into technical requirements. We organize those requirements into small, workable chunks so teams can move quickly without getting bogged down.&lt;/p&gt;

&lt;p&gt;That same skill is invaluable when working with AI.&lt;/p&gt;

&lt;p&gt;The smaller the work item, the easier it is to provide complete context. The smoother that interaction is, the faster you get working software to test and refine. That faster feedback loop means you can ship sooner, learn sooner, and adjust sooner.&lt;/p&gt;

&lt;p&gt;It’s Agile software development on steroids.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;AI Amplifies Constraints&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;All of this highlights a deeper issue: AI amplifies constraints.&lt;/p&gt;

&lt;p&gt;There have been plenty of takes recently along the lines of, “AI didn’t kill this company, it exposed a bad business model,” or, “AI didn’t cause these layoffs, it exposed poor performance.” While I don’t fully agree with those claims, I do agree with the underlying idea: as systems become more efficient, their weaknesses become more visible.&lt;/p&gt;

&lt;p&gt;As coding gets faster, bottlenecks shift elsewhere. If implementation takes minutes, what about UX research? Requirements gathering? Testing? Deployment? Maintenance? Architectural shifts?&lt;/p&gt;

&lt;p&gt;This is why small personal projects can move at breakneck speed while large enterprise systems often feel slow even with AI moving things along. It’s rarely the tools that are the problem; more often, it’s the surrounding process. That’s not a criticism of enterprise environments; it’s a reality of scale. The more customers, products, and revenue you have, the more those supporting processes matter.&lt;/p&gt;

&lt;p&gt;The real question is how we improve those areas alongside our tooling.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Where Senior Engineers Fit In&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;I don’t think there’s a single definitive answer, but part of it comes down to ownership. Owning as much of the product lifecycle as you reasonably can, and being a positive influence where you can’t, goes a long way.&lt;/p&gt;

&lt;p&gt;AI doesn’t eliminate senior engineering skills. It rewards them. The ability to manage constraints, provide context, break down work, and think systemically becomes even more valuable in an AI-assisted world.&lt;/p&gt;

&lt;p&gt;That’s not a threat to experienced engineers. It’s an opportunity.&lt;/p&gt;

&lt;p&gt;Have you noticed similar patterns in your work? What differences have you seen between AI use on small projects versus large codebases, and what lessons would you share with others?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llms</category>
      <category>engineering</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Employer-Sponsored Training</title>
      <dc:creator>John Van Wagenen</dc:creator>
      <pubDate>Mon, 09 Jul 2018 19:13:48 +0000</pubDate>
      <link>https://forem.com/jtvanwage/employer-sponsored-training-1anl</link>
      <guid>https://forem.com/jtvanwage/employer-sponsored-training-1anl</guid>
      <description>&lt;p&gt;Most great companies will offer some sort of training program. Whether that's a little bit of free time for self-directed training or the company pays for you to attend a class or conference, this training can take on many forms--but it's not for everyone. I'd like to hear from you! &lt;strong&gt;What training programs have you seen from employers? Which ones worked? Why were they effective? What types of training do you like the most?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As you consider your answers to those questions, let me tell you a bit about the training program we've had at my current employer, &lt;a href="https://www.mastercontrol.com/" rel="noopener noreferrer"&gt;MasterControl&lt;/a&gt;, in its current form. It's evolved a bit over the years and continues to evolve, but this is where we're at.&lt;/p&gt;

&lt;p&gt;A few years ago we started formal, ongoing training, which takes a few forms. First, we hold a biweekly department-wide training meeting. Sessions are given by anyone in the department who wants to present and are coordinated by a small group of volunteers who form the "training committee." These meetings can last up to three hours, though they often run under two. Beyond that, we encourage each employee to take up to two hours per week to train on whatever they'd like, from Pluralsight courses to books, tutorials, and beyond. Finally, we have a training budget the company uses to pay for employees to attend conferences, take courses, earn certifications, or pursue whatever else they'd like for training.&lt;/p&gt;

&lt;p&gt;All those things make up our ongoing training that we offer at MasterControl. It's an evolving program with some changes in the works right now. It's been moderately effective so far but we want to make it much better. Does your employer do similar things? What have you seen work well?&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>training</category>
      <category>career</category>
    </item>
    <item>
      <title>Gemba Walks</title>
      <dc:creator>John Van Wagenen</dc:creator>
      <pubDate>Wed, 23 May 2018 04:24:00 +0000</pubDate>
      <link>https://forem.com/jtvanwage/gemba-walks-3050</link>
      <guid>https://forem.com/jtvanwage/gemba-walks-3050</guid>
      <description>&lt;p&gt;In lean manufacturing, there's a concept referred to as &lt;a href="https://en.m.wikipedia.org/wiki/Gemba#Gemba_walk" rel="noopener noreferrer"&gt;Gemba Walks&lt;/a&gt;. This is essentially where someone who's not directly involved in the day-to-day labor on the manufacturing lines walks the floor to identify waste, observe how the work is actually being performed, build relationships, observe working conditions, and many other things. In manufacturing, it has quite a few benefits and companies that practice it often speak highly of the value it brings.&lt;/p&gt;

&lt;p&gt;As it relates to software engineering, &lt;strong&gt;what would you consider to be a Gemba Walk?&lt;/strong&gt; How have you seen others do this, or how do you yourself effectively observe working conditions and gain the other benefits that Gemba Walks can bring? If you're not sure you've seen it done, how would you apply Gemba Walks to software engineering? Maybe you're on the "front lines," so to speak, but this could also apply to staying in sync with other engineers or other parts of the system. How do you accomplish that?&lt;/p&gt;

&lt;p&gt;&lt;em&gt;(This isn't to say I don't have ideas on this, but I really want to hear &lt;strong&gt;your&lt;/strong&gt; ideas on it.)&lt;/em&gt;&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>leadership</category>
    </item>
    <item>
      <title>Experience Isn't Enough</title>
      <dc:creator>John Van Wagenen</dc:creator>
      <pubDate>Tue, 06 Mar 2018 03:39:05 +0000</pubDate>
      <link>https://forem.com/jtvanwage/experience-isnt-enough--o4b</link>
      <guid>https://forem.com/jtvanwage/experience-isnt-enough--o4b</guid>
      <description>&lt;p&gt;So you've been in the industry for a while now. Great job! You've probably rejoiced as you landed your first job. Maybe you've moved on since then (maybe even a few times) or maybe you've stayed with your first employer. Whatever the case, you've been in the industry for a number of years now. Maybe 5. Maybe 10. Maybe more. You've done some cool things (like building that awesome feature all the customers didn't know they wanted) and some not-so-cool things (like taking down production). On paper, you look pretty amazing! You've definitely been around the block a few times and have your own set of scars to show for it.&lt;/p&gt;

&lt;p&gt;But... have you improved?&lt;/p&gt;

&lt;p&gt;Have you learned from your mistakes?&lt;/p&gt;

&lt;p&gt;Have you challenged yourself in new and different ways?&lt;/p&gt;

&lt;p&gt;Have you put effort into learning new things?&lt;/p&gt;

&lt;p&gt;Over the years, I've had opportunities to be involved in the interview process at work. We've mainly interviewed college students looking for internships or their first jobs. On occasion, though, we've had the opportunity to interview a few engineers who have had a lot of experience. Some of them have been in the industry for 15+ years and have done a lot of interesting things. In a few of these interviews, one thing has stuck out to me...&lt;/p&gt;

&lt;p&gt;They stagnated.&lt;/p&gt;

&lt;p&gt;Instead of 15+ years of improving, they had a few years of improvement followed by many years of the same old stuff.&lt;/p&gt;

&lt;p&gt;Instead of learning from mistakes, some have ignored them.&lt;/p&gt;

&lt;p&gt;Instead of challenging themselves, they've done the minimum to get by. They've punched the clock and been grateful for a way to pay the bills.&lt;/p&gt;

&lt;p&gt;Instead of learning new things, they've only adapted what they've always done to the new situations they encounter.&lt;/p&gt;

&lt;p&gt;So, I've learned that experience isn't everything. Just having been in the industry for 15 years isn't good enough. Just punching the clock isn't good enough. Just doing the same old stuff in new ways year after year isn't enough. This is stagnation.&lt;/p&gt;

&lt;p&gt;So, what can you do to avoid stagnation?&lt;/p&gt;

&lt;p&gt;There's a lot you can do. Here are some things that I think are valuable in becoming a senior engineer, not just an engineer with years of experience.&lt;/p&gt;

&lt;p&gt;One of the key attributes that helps you become a senior engineer is a constant desire to improve. Whether it comes from a personal drive to be the best you can be, a curiosity about different ways of doing things, a desire to help others learn, or any other reason, this drive can be a key differentiator in your career. Without it, you'll be punching the clock and not much else. With it, you'll be involved and engaged in the work you're doing. You'll be able to do more and more over time, you'll be happier and more fulfilled, and you'll gain the skills you need to become a senior engineer.&lt;/p&gt;

&lt;p&gt;Another key attribute on your road to becoming a senior engineer is humility. Humility helps you learn from those around you. If you think you're the smartest person in the company and can't learn anything from so-and-so in a code review (what do they know anyway?), you're either in the wrong company or you're just plain wrong. There's plenty we can learn from those around us. Yeah, your coworkers may be weird, but so are you in your own way. Humility will not only help you learn from those around you, it'll also help you recognize when you're wrong and learn from your mistakes and failures. Without it, you're likely to keep bad habits that will hold you at a junior level longer than necessary. With it, you'll have a drive to improve and a willingness to constantly challenge the way you do things in search of a better way.&lt;/p&gt;

&lt;p&gt;There are many other attributes that can help you on your progression to the senior level. What attributes do you admire in the senior engineers around you? What has helped you on your road to the senior level? I'm interested to hear your thoughts.&lt;/p&gt;

&lt;p&gt;It's important to keep in mind that experience or years in the field alone doesn't make you a Senior Engineer. Senior Engineers constantly improve; experienced engineers stagnate. Doing the same thing over and over again isn't improvement. Challenging yourself and looking for new, better ways of doing those things is. And improvement is what gives you the experience you need on your road to becoming a Senior Engineer. So don't get comfortable. Challenge yourself and you'll accelerate your way to a higher level.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally posted on my personal &lt;a href="https://vdubinatorcoder.blogspot.com/2018/02/experience-isnt-enough.html" rel="noopener noreferrer"&gt;blog&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Want more content like this?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Check out my &lt;a href="https://vdubinatorcoder.blogspot.com/" rel="noopener noreferrer"&gt;blog&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Follow me on &lt;a href="https://dev.to/jtvanwage"&gt;dev.to&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Follow me on &lt;a href="https://twitter.com/jtvanwage" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>career</category>
      <category>learning</category>
    </item>
    <item>
      <title>What is a Tech Lead?</title>
      <dc:creator>John Van Wagenen</dc:creator>
      <pubDate>Wed, 06 Dec 2017 22:39:19 +0000</pubDate>
      <link>https://forem.com/jtvanwage/what-is-a-tech-lead-9m3</link>
      <guid>https://forem.com/jtvanwage/what-is-a-tech-lead-9m3</guid>
      <description>&lt;p&gt;This isn't a new topic by any means, but some recent events at work brought this question to my mind. I started writing down my thoughts and doing some research on it and thought this might be a great question to ask here on &lt;a href="https://dev.to"&gt;dev.to&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;So, &lt;em&gt;&lt;strong&gt;what is a Tech Lead?&lt;/strong&gt;&lt;/em&gt; How would you define it? What roles and responsibilities do they have? What skills should they have? Any recommendations on resources to learn more about this role?&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>leadership</category>
    </item>
  </channel>
</rss>
