<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Brian Zimbelman</title>
    <description>The latest articles on Forem by Brian Zimbelman (@brian_zimbelman).</description>
    <link>https://forem.com/brian_zimbelman</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3924896%2F6ad65999-b7ca-45d4-8a7d-62704e5c5d39.png</url>
      <title>Forem: Brian Zimbelman</title>
      <link>https://forem.com/brian_zimbelman</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/brian_zimbelman"/>
    <language>en</language>
    <item>
      <title>Your AI Coding Assistant Is Not Enough</title>
      <dc:creator>Brian Zimbelman</dc:creator>
      <pubDate>Wed, 13 May 2026 10:24:46 +0000</pubDate>
      <link>https://forem.com/brian_zimbelman/your-ai-coding-assistant-is-not-enough-3cbo</link>
      <guid>https://forem.com/brian_zimbelman/your-ai-coding-assistant-is-not-enough-3cbo</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcawnbq3njbn30qo7g186.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcawnbq3njbn30qo7g186.png" alt="Your AI Coding Assistant Is Not Enough" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is Article 1 of Beyond the Coding Assistant, a multi-part series on AI-assisted software engineering at enterprise scale. The full series is available free of any paywall at&lt;/em&gt; &lt;a href="https://articles.zimetic.com/" rel="noopener noreferrer"&gt;&lt;em&gt;https://articles.zimetic.com&lt;/em&gt;&lt;/a&gt;&lt;em&gt;. Previously:&lt;/em&gt; &lt;a href="https://articles.zimetic.com/beyond-the-coding-assistant-a-series-on-ai-assisted-software-engineering/" rel="noopener noreferrer"&gt;&lt;em&gt;Beyond the Coding Assistant — A New Series&lt;/em&gt;&lt;/a&gt;&lt;em&gt;. Coming next: Article 2 — Why AI Tools Make Some Teams Slower.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;Pick a developer. Pick a Tuesday. Break down their workday.&lt;/p&gt;

&lt;p&gt;How many of those hours were actually in the editor? How many were in tickets, dashboards, Slack, meetings, and review? Published survey data on developer time allocation isn't subtle on this question. Stripe's &lt;em&gt;Developer Coefficient&lt;/em&gt; report found that roughly 42% of a typical developer's week goes to addressing technical debt and fixing bad code, with only a minority of the week left for the kind of new-feature coding the marketing pictures show (&lt;a href="https://stripe.com/files/reports/the-developer-coefficient.pdf" rel="noopener noreferrer"&gt;Stripe, 2018&lt;/a&gt;). Other analyses of how engineers spend their time put hands-on coding in a similar range — a few hours a day at best, not the bulk of the week.&lt;/p&gt;

&lt;p&gt;The AI coding tools helped with those few hours. They did very little for the other five or six. And once you take into account everyone else on the team — and all the non-development tasks in the development &lt;em&gt;process&lt;/em&gt; that the coding tools simply don't touch — even the best-case scenario only improves a fraction of the team's time. To realize the enormous gains these tools could deliver, we need to rethink the fundamentals of how we build software in this new era. Some of what was helping us is now holding us back.&lt;/p&gt;

&lt;h2&gt;What coding assistants actually do well&lt;/h2&gt;

&lt;p&gt;Let's be clear about the starting point. The current generation of AI coding assistants is impressive, and the breakthroughs are genuine. Inside a single repository, with a human driving, with a well-scoped task, they produce usable code at speeds that would have seemed absurd three years ago. Smart engineers have built real workarounds for the tools' limitations — scripts, custom harnesses, carefully tuned prompts, agents, and multi-session workflows that extend the tools' reach further than the vendors originally imagined.&lt;/p&gt;

&lt;p&gt;None of what follows is a takedown of those tools. The argument is that they alone are not enough. They were designed for a specific context — editor, session, one developer, one repo, one well-bounded change — and that context is not where most of the engineering work in real organizations actually happens.&lt;/p&gt;

&lt;h2&gt;The iceberg&lt;/h2&gt;

&lt;p&gt;Shipping software is mostly &lt;em&gt;not&lt;/em&gt; coding. It is requirements gathering, stakeholder conversations, design docs, architecture reviews, feature flag wiring, secrets provisioning, CI pipeline updates, deploy runbooks, dashboard setup, alert tuning, incident response, post-mortems, migration plans, and deprecation notices. It is the scheduled meeting that becomes a Slack thread that becomes an RFC that becomes a backlog ticket that becomes, eventually, a short coding session and a pull request.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnh123k80kdjpa6nwuz2q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnh123k80kdjpa6nwuz2q.png" alt="Your AI Coding Assistant Is Not Enough" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here's another test. Ask an engineer to take the next ticket off their queue and complete it end-to-end inside a single Docker container — code, build, run, validate, deploy. For 99 of every 100 real tickets in a real enterprise codebase, the answer is no. They need multiple repos. Several services running in dev or staging. Credentials to external systems. Documentation scattered across Confluence and a few other internal sources. And usually a conversation or two with product, QA, or a teammate who knows the corner of the system where the bug lives.&lt;/p&gt;

&lt;p&gt;The current tooling treats the session as the unit of work and leaves everything outside the session — which is to say, most of the work — to the engineer.&lt;/p&gt;

&lt;h2&gt;The rest of the team&lt;/h2&gt;

&lt;p&gt;Engineering is a team sport. The primary focus of coding agents has been the engineer-writing-code role. That makes sense as a starting point; it's where the highest-volume, most-bounded, most-codifiable work lives. Plenty of clever people have built workarounds to extend that focus to other parts of the team — agents that draft tickets, agents that summarize Slack threads, agents that turn a paragraph of intent into a Figma flow — but these are workarounds, not the design center of the tools, and a human is still monitoring the agent at every step, giving detailed instructions and often repeating them.&lt;/p&gt;

&lt;p&gt;The next generation of tooling has to make those other roles first-class citizens of the process, not afterthoughts. If the goal is &lt;em&gt;team&lt;/em&gt; throughput, then concentrating all the AI investment on one role is ineffective. It's the local optimum of "make the developer faster" rather than the global optimum of "help the team ship more." A tool that only helps developers can only move the bottleneck to whichever role is next in the handoff.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fujk84dvtpo6ogyhu45gv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fujk84dvtpo6ogyhu45gv.png" alt="Your AI Coding Assistant Is Not Enough" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This isn't a hypothetical claim, either. There's already industry data showing that some teams, after rolling out AI coding assistants without changing the rest of their development process, have actually seen overall delivery slow down — the front-end coding step gets faster while review, testing, and coordination start to choke. We will dig into the data behind that observation in more detail in the next article.&lt;/p&gt;

&lt;h2&gt;A walk through the lifecycle&lt;/h2&gt;

&lt;p&gt;Every team in every company has a development lifecycle. Some teams write theirs down explicitly. Others make it up as they go. Some are formal and governance-heavy. Others are loose and improvisational. The names vary — ideation, design, specification, implementation, configuration, deployment, refinement, monitoring, debugging, retirement, and others — and so do the boundaries between phases. None of that matters very much for this argument.&lt;/p&gt;

&lt;p&gt;What does matter is the coverage pattern. Today's AI tooling concentrates almost entirely in the implementation phase, and even there it misses most of the coordination work between developers and the people they hand off to and from. Whatever phases an organization uses, the next generation of tooling has to support the &lt;em&gt;entire&lt;/em&gt; lifecycle if AI is going to deliver on the team-level promise people keep making for it. Anything less is a tool playing in one corner of a much bigger problem.&lt;/p&gt;

&lt;h2&gt;The engineer as orchestrator&lt;/h2&gt;

&lt;p&gt;Current coding tools require the engineer to drive at a low level — prompt by prompt, session by session. Different tools have different mechanisms (chat, autocomplete, slash commands, terminal CLIs, IDE plugins) but the underlying interaction is still pretty manual. The engineer asks. The tool responds. The engineer reviews. The engineer decides what to do next. Repeat.&lt;/p&gt;

&lt;p&gt;This was an excellent way to start. When the tools were new and went off the rails easily, tight engineer-in-the-loop control was exactly the right design. Since then, several things have improved. We've learned how to keep the tools in line — better prompts, better guardrails, better evaluation. The underlying models have improved at producing quality work. The interfaces have grown new affordances. And yet the &lt;em&gt;fundamental shape&lt;/em&gt; of the interaction hasn't changed very much. The engineer is still the orchestrator, asking the tool to perform every step, granting every permission, and reminding the tool of context the tool ought to have remembered on its own.&lt;/p&gt;

&lt;p&gt;A whole sub-industry has grown up around helping the tools do the right thing the first time — custom agents, hooks, prompt libraries, role definitions, project context files, MCP servers, on and on. These help. They don't address the fundamental shape of the problem, which is that the tools should be capable of running a process largely on their own, with clear, well-defined points where the engineer's judgment is needed — and only at those points does the engineer have to step in. Until that shape changes, the engineer remains the bottleneck for everything that happens around the coding step.&lt;/p&gt;

&lt;p&gt;That's not a small cost. Research on interrupted work and context switching is consistent and long-standing: it takes around 23 minutes to fully regain deep focus after an interruption (&lt;a href="https://ics.uci.edu/~gmark/chi08-mark.pdf" rel="noopener noreferrer"&gt;Mark et al., CHI 2008&lt;/a&gt;), and the workflow most engineers have with AI tools today is essentially a context-switch generator. Recent measurement work from METR has shown experienced developers running roughly 19% &lt;em&gt;slower&lt;/em&gt; at real work in some controlled conditions, in part because of the cognitive overhead of constant prompting and review (&lt;a href="https://www.augmentcode.com/guides/why-ai-coding-tools-make-experienced-developers-19-slower-and-how-to-fix-it" rel="noopener noreferrer"&gt;Augment Code summary&lt;/a&gt;). The "AI fatigue" conversation that has emerged in 2025 and 2026 is the engineer's-eye view of the same phenomenon (&lt;a href="https://www.cerbos.dev/blog/productivity-paradox-of-ai-coding-assistants" rel="noopener noreferrer"&gt;Cerbos&lt;/a&gt;; &lt;a href="https://www.zensoftware.cloud/articles/ai-fatigue-in-development-why-constant-ai-assistance-can-wear-you-down" rel="noopener noreferrer"&gt;ZEN Software&lt;/a&gt;).&lt;/p&gt;

&lt;h2&gt;Why single-session tooling hits a ceiling&lt;/h2&gt;

&lt;p&gt;The session is the wrong unit of work. If all we cared about was the code, a session would be fine: start one, tell the agent to code something up, end it, and walk away with the code. But we care about more than the code. We care about designs, architectures, tests, QA processes, security and performance reviews, and on and on.&lt;/p&gt;

&lt;p&gt;A work item — the thing that actually gets shipped — persists across many sessions, many agents, many repos, and many days. If the tool's unit is the session, the unwritten assumption is that &lt;em&gt;humans will glue the sessions together&lt;/em&gt; into something coherent. They do, and that gluing is where the time goes.&lt;/p&gt;

&lt;p&gt;This isn't just an industry observation; it has academic backing. Researchers at MIT Sloan and Microsoft argue in &lt;em&gt;Chaining Tasks, Redefining Work: A Theory of AI Automation&lt;/em&gt; that AI's biggest impact comes from reshaping entire workflows — how tasks are sequenced, grouped, and handed off — rather than from speeding up any single task in isolation. Their concept of "task chaining" — clustering AI-friendly steps so AI executes them as a continuous sequence — is exactly the gap that session-bound tooling can't close on its own. They also point out that every handoff between AI and human carries coordination cost: review, validation, adjustment. End-to-end workflows minimize those handoffs; task-level workflows accumulate them. The session-bound coding assistant is structurally a handoff machine (&lt;a href="https://mitsloan.mit.edu/ideas-made-to-matter/how-ai-reshaping-workflows-and-redefining-jobs" rel="noopener noreferrer"&gt;MIT Sloan: How AI is reshaping workflows and redefining jobs&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;Amdahl's Law is the right rhetorical anchor here. If the part of the job you're speeding up is 20% of the total, the ceiling on your overall speedup is low no matter how fast you make that part. Even a 10× improvement on the coding step lifts whole-job throughput by only about 1.2× when coding was 20% of the job to begin with. The published data on developer time allocation has been consistently in that range for years. The math is not friendly to "make the coding step faster and call it a day."&lt;/p&gt;
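&lt;p&gt;The arithmetic behind that claim is easy to check. A minimal sketch in Python (the 20% coding share and 10x speedup are the example numbers above, not measurements):&lt;/p&gt;

```python
def amdahl_speedup(fraction: float, factor: float) -> float:
    """Amdahl's Law: overall speedup when only `fraction` of the
    job is accelerated by `factor`."""
    return 1.0 / ((1.0 - fraction) + fraction / factor)

# Coding is ~20% of the job; the assistant makes that step 10x faster.
print(round(amdahl_speedup(0.20, 10.0), 2))  # -> 1.22

# Even an infinitely fast coding step caps out near 1 / 0.80 = 1.25x.
print(round(amdahl_speedup(0.20, 1e9), 2))   # -> 1.25
```

&lt;p&gt;The ceiling is set entirely by the 80% of the job the tool never touches.&lt;/p&gt;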

&lt;h2&gt;Practices, structure, and the SDLC as the differentiator&lt;/h2&gt;

&lt;p&gt;There's one observation that keeps recurring across the team-level studies: the teams that genuinely benefit from AI tools tend to share a cluster of practices. Fast feedback loops. Clear testing standards. Documentation discipline. Shared conventions for how agents are prompted, what context they're given, and what they're expected to return. Architectural ownership. Small, well-bounded work items.&lt;/p&gt;

&lt;p&gt;That cluster of practices is what an SDLC actually &lt;em&gt;is&lt;/em&gt;, whether or not anyone wrote it down. The teams that have one — explicit or implicit — are the ones absorbing AI tools well. The teams that don't are the ones that struggle. And once again: if we simply take our existing practices and try to shoehorn AI coding workflows into them, we will find that they do not fit. It is the square-peg, round-hole problem.&lt;/p&gt;

&lt;h2&gt;What changes if we treat the whole lifecycle as the unit&lt;/h2&gt;

&lt;p&gt;If the work item, not the session, is the unit of work — and if the tooling supports the entire lifecycle, not just the implementation phase — several things shift at once. Coordination becomes a first-class concept rather than a human chore. Artifacts become durable across phases rather than ephemeral within a session. Review gates become part of the workflow rather than a separate meeting. Costs become attributable. Roles beyond the developer get genuine support.&lt;/p&gt;

&lt;p&gt;That is the frame shift this series argues for. The productivity gains from better autocomplete are largely tapped. The next order of magnitude is in orchestration across phases, not generation within one — and not just generation for one role out of many.&lt;/p&gt;

&lt;h2&gt;Coming next&lt;/h2&gt;

&lt;p&gt;In &lt;em&gt;Why AI Tools Make Some Teams Slower&lt;/em&gt;, the team-level data point this article kept hinting at gets the spotlight. DORA's 2024 &lt;em&gt;State of DevOps&lt;/em&gt; report found a paradox: AI adoption increased individual productivity but was associated with declines in delivery throughput and stability. The teams losing on that trade are losing for structural reasons, not because the tools are bad — and naming those structural reasons is the setup for everything that comes after.&lt;/p&gt;




&lt;h2&gt;Sources&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://mitsloan.mit.edu/ideas-made-to-matter/how-ai-reshaping-workflows-and-redefining-jobs" rel="noopener noreferrer"&gt;How AI is reshaping workflows and redefining jobs — Kristin Burnham, MIT Sloan Ideas Made to Matter, April 2026&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://peymanshahidi.github.io/assets/pdf/chaining_tasks_ai_automation.pdf" rel="noopener noreferrer"&gt;Chaining Tasks, Redefining Work: A Theory of AI Automation — Shahidi, Demirer, Horton, Immorlica, Lucier (paper PDF)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://stripe.com/files/reports/the-developer-coefficient.pdf" rel="noopener noreferrer"&gt;The Developer Coefficient — Stripe, 2018&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ics.uci.edu/~gmark/chi08-mark.pdf" rel="noopener noreferrer"&gt;The Cost of Interrupted Work — Gloria Mark et al., CHI 2008&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.augmentcode.com/guides/why-ai-coding-tools-make-experienced-developers-19-slower-and-how-to-fix-it" rel="noopener noreferrer"&gt;Why AI Coding Tools Make Experienced Developers 19% Slower — Augment Code&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.cerbos.dev/blog/productivity-paradox-of-ai-coding-assistants" rel="noopener noreferrer"&gt;The Productivity Paradox of AI Coding Assistants — Cerbos&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.zensoftware.cloud/articles/ai-fatigue-in-development-why-constant-ai-assistance-can-wear-you-down" rel="noopener noreferrer"&gt;AI Fatigue in Software Development — ZEN Software&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>softwareengineering</category>
      <category>series</category>
    </item>
    <item>
      <title>Beyond the Coding Assistant: A Series on AI-Assisted Software Engineering</title>
      <dc:creator>Brian Zimbelman</dc:creator>
      <pubDate>Mon, 11 May 2026 10:58:30 +0000</pubDate>
      <link>https://forem.com/brian_zimbelman/beyond-the-coding-assistant-a-series-on-ai-assisted-software-engineering-2lkh</link>
      <guid>https://forem.com/brian_zimbelman/beyond-the-coding-assistant-a-series-on-ai-assisted-software-engineering-2lkh</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flvxvlu0s7pt1aed9k7ln.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flvxvlu0s7pt1aed9k7ln.png" alt="Beyond the Coding Assistant: A Series on AI-Assisted Software Engineering" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is the introduction to Beyond the Coding Assistant, a multi-part series on AI-assisted software engineering at enterprise scale. The full series is available free of any paywall at &lt;a href="https://articles.zimetic.com" rel="noopener noreferrer"&gt;https://articles.zimetic.com&lt;/a&gt;. Coming next: Article 1 — &lt;a href="https://articles.zimetic.com/your-ai-coding-assistant-is-not-enough/" rel="noopener noreferrer"&gt;Your AI Coding Assistant Is Not Enough&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;The last few years of AI-assisted development have been remarkable. Coding assistants have crossed real quality bars. Engineers can now produce working code, in unfamiliar languages, against unfamiliar systems, at speeds that would have looked like science fiction in 2022. There are real productivity gains, real new affordances, and a real shift in what an individual developer can do in an afternoon.&lt;/p&gt;

&lt;p&gt;And yet — when the conversation turns to the &lt;em&gt;team&lt;/em&gt; and the &lt;em&gt;organization&lt;/em&gt; — the picture is more complicated. The dramatic gains many leaders were promised haven't shown up on every team. Some teams ship more. Some teams ship the same. Some teams have actually gotten &lt;em&gt;slower&lt;/em&gt;, with the AI helping at the keystroke while the wider delivery metrics regress.&lt;/p&gt;

&lt;p&gt;That gap, between what's possible at the keystroke and what's actually showing up in delivery, is what this series is about. The question I want to ask, and try to answer over the next several articles, is simple: what has changed, and what changes could take us so much farther than where current AI coding assistants have brought us?&lt;/p&gt;

&lt;h2&gt;A state-of-affairs question, not a tooling complaint&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F526gzivhjrtntlk2j41p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F526gzivhjrtntlk2j41p.png" alt="Beyond the Coding Assistant: A Series on AI-Assisted Software Engineering" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It would be easy to frame this series as a critique of current tools. That would also be wrong. The current generation of coding assistants is genuinely excellent at what it does. The problem isn't that the tools are bad. The problem is that the tools are designed for a single role on a much larger team, doing a small part of a very large multi-step process.&lt;/p&gt;

&lt;p&gt;Software is shipped by teams — developers and product managers, designers, QA engineers, technical writers, program managers, security reviewers, compliance reviewers, devops, and more! Most of those people either don't write any code, or don't spend most of their time writing code. If the goal is &lt;em&gt;team&lt;/em&gt; throughput rather than individual keystroke speed, optimizing one role's tooling is only going to get you so far. Instead, this series is looking at how we build tools and restructure the team and its workflows for the dynamics that are here today and will be here in the near future.&lt;/p&gt;

&lt;p&gt;There's also a structural part of the story that's only now becoming visible. Token economics are shifting. The "burn-through-it" approach that worked when tokens were essentially free is getting expensive. The teams that have built disciplined development practices around AI tools are pulling away from the teams that haven't. None of this is anyone's fault, exactly. It's the natural moment in a maturing technology when the next set of questions starts to bite.&lt;/p&gt;

&lt;h2&gt;Where the series goes&lt;/h2&gt;

&lt;p&gt;I'll work through this in four parts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Part I — The Shifting Landscape.&lt;/strong&gt; Why now. The crafting of code is the visible tip of the iceberg, and I'd like to address both it and the rest of the iceberg — much of which isn't touched by current AI tools. Recent studies show that many teams are actually getting slower with AI, so we will look at that data and why it's happening. Then the economic shifts that are forcing the question of how to make this work at a sustainable cost. And finally, the quality-speed-cost trilemma that frames everything after.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Part II — Reframing the Problem.&lt;/strong&gt; If the team is getting slower with our current SDLC and processes, are they the right processes going forward? One of the key commitments of the Agile Manifesto was continuous improvement of our processes, so we owe it to ourselves to ask whether processes built for a much longer coding step are still the right ones today. We'll also look at better ways to use coding agents as they take on more and more of the SDLC. One of these changes is the true implementation of multi-pass workflows. We will discuss why different kinds of work need different workflow shapes, and the benefits of specialized agents over generic workers. And then we will look at how organizations might manage compute and AI resources in pools that engineers can draw on in a more economical and controlled manner.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Part III — Design Principles.&lt;/strong&gt; The coming economic changes are going to make cost a first-class constraint. They are also going to require us to manage the project's context in different ways than we have had to in the past, so that we can get optimal performance and cost effectiveness out of our agents. Managing our shared resources gets more complicated as we start to have pools of agents updating things in parallel. And of course, we need to make sure we are doing all of this in ways where the engineers are not being overburdened, and the entire team gets to come along in a meaningful way.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Part IV — The Road Ahead.&lt;/strong&gt; I will share some of what I'm building and why I think it will move us forward. But let me reassure you: this series is not a sales pitch for that tool. It is more my way of sharing my thoughts on the state of the industry as we make this monumental transition, and what I think is happening. Feel free to skip Part IV if you are not interested in my tool, but please don't skip the discussion about how our industry is changing.&lt;/p&gt;

&lt;p&gt;Each piece stands alone. Read in order, they build a cumulative argument: the next frontier of AI-assisted development is &lt;em&gt;lifecycle orchestration&lt;/em&gt;, not better code generation — and it has to serve the whole team, not just the engineer.&lt;/p&gt;

&lt;h2&gt;A note on tone, and an invitation&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr0nrfv0hcecq1xfa7p0m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr0nrfv0hcecq1xfa7p0m.png" alt="Beyond the Coding Assistant: A Series on AI-Assisted Software Engineering" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These are my thoughts, based on what I'm seeing and hearing across the industry. They're claims, not conclusions. I have opinions and I'm going to defend them, but the whole point of publishing in public is to sharpen the ideas against readers who disagree. Your feedback is welcome, and desired.&lt;/p&gt;

&lt;p&gt;I'm also building a tool that applies these ideas. I'll describe it in Part IV, and I'll keep references to it brief in the meantime. The series is about the ideas first; the tool is one way to test them. My goal is to get you thinking about these ideas and these changes, and to start a dialog so we can all learn and grow in a meaningful way.&lt;/p&gt;

&lt;h2&gt;How to follow along&lt;/h2&gt;

&lt;p&gt;I will be publishing three articles a week (Monday, Wednesday, and Friday), with the goal of having the entire series published within four weeks from start to finish. Feel free to drop by on a regular basis, or to sign up for notifications when articles are published. A hosted landing page at &lt;a href="https://articles.zimetic.com/" rel="noopener noreferrer"&gt;https://articles.zimetic.com/&lt;/a&gt; lists all of the articles in this series, including the expected publication dates of the upcoming pieces. That's the home for the free, paywall-free reading copy of the series. Bookmark it if you want to follow along; subscribe if your platform of choice supports it.&lt;/p&gt;

&lt;p&gt;Coming next: &lt;em&gt;&lt;a href="https://articles.zimetic.com/your-ai-coding-assistant-is-not-enough/" rel="noopener noreferrer"&gt;Your AI Coding Assistant Is Not Enough&lt;/a&gt;&lt;/em&gt;. The iceberg of non-coding work, why current tools concentrate almost all their value on a fraction of the engineering job, and why "make the developer faster" is a local optimum that's already running out of room.&lt;/p&gt;

&lt;p&gt;If you've read this far, thank you. Please join us for the ride.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>softwareengineering</category>
      <category>series</category>
    </item>
  </channel>
</rss>
