<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Carl Hembrough</title>
    <description>The latest articles on Forem by Carl Hembrough (@carl_hembrough_4ff217c2f1).</description>
    <link>https://forem.com/carl_hembrough_4ff217c2f1</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1794007%2F5c188f19-e600-4f87-93f2-1af578a2f79b.png</url>
      <title>Forem: Carl Hembrough</title>
      <link>https://forem.com/carl_hembrough_4ff217c2f1</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/carl_hembrough_4ff217c2f1"/>
    <language>en</language>
    <item>
      <title>Pair Programming in the AI Era</title>
      <dc:creator>Carl Hembrough</dc:creator>
      <pubDate>Mon, 16 Feb 2026 15:39:12 +0000</pubDate>
      <link>https://forem.com/carl_hembrough_4ff217c2f1/pair-programming-in-the-ai-era-52bb</link>
      <guid>https://forem.com/carl_hembrough_4ff217c2f1/pair-programming-in-the-ai-era-52bb</guid>
<description>&lt;h2&gt;A guide to faster delivery &lt;em&gt;and&lt;/em&gt; stronger developers&lt;/h2&gt;

&lt;p&gt;Artificial intelligence has become a constant part of developers’ workflows. For many teams it is already woven into daily work—generating snippets, fixing bugs, writing tests, and explaining unfamiliar APIs.&lt;/p&gt;

&lt;p&gt;But as AI becomes more capable, other questions emerge:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Is pair programming still effective with AI tooling?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;strong&gt;How do we use AI effectively without quietly eroding the skills that make us good developers?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This article offers a practical framework: when to pair with AI, when to pair with humans, how to keep learning, and how to prevent fast output from turning into shallow understanding.&lt;/p&gt;

&lt;p&gt;It’s written for teams (especially leads and seniors) establishing norms, but the same principles work for individual developers optimizing their workflow.&lt;/p&gt;




&lt;h2&gt;TL;DR: Key takeaways&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;AI accelerates execution. Humans elevate understanding.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;The risk isn’t using AI—it’s &lt;strong&gt;becoming passive&lt;/strong&gt; and stopping the mental work that builds expertise.&lt;/li&gt;
&lt;li&gt;Don’t ask “Should we use AI?” Ask: &lt;strong&gt;What’s the goal (learning vs speed) and what’s the risk level (low vs high)?&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Treat AI output as &lt;strong&gt;untrusted until verified&lt;/strong&gt;: tests, invariants, edge cases, and security checks.&lt;/li&gt;
&lt;li&gt;Protect the skills that must not fade: debugging models, system design, testing strategy, security reasoning, code reading, performance intuition, and soft skills.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;AI as a new kind of pair programmer&lt;/h2&gt;

&lt;p&gt;If you use AI tools today, you’re already pairing—just not with a human.&lt;/p&gt;

&lt;p&gt;AI is excellent at:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;generating boilerplate and glue code&lt;/li&gt;
&lt;li&gt;drafting tests and documentation&lt;/li&gt;
&lt;li&gt;explaining unfamiliar APIs and error messages&lt;/li&gt;
&lt;li&gt;proposing refactors and alternative implementations&lt;/li&gt;
&lt;li&gt;helping you get unstuck quickly&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But AI isn’t a full replacement for human pairing. Humans bring what models still struggle with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;shared context and domain nuance&lt;/li&gt;
&lt;li&gt;mentorship and teaching&lt;/li&gt;
&lt;li&gt;design intuition and trade-off judgement&lt;/li&gt;
&lt;li&gt;the willingness to challenge assumptions&lt;/li&gt;
&lt;li&gt;accountability for decisions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;AI accelerates execution. Humans elevate understanding.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;The balance problem: speed vs passivity&lt;/h2&gt;

&lt;p&gt;AI can make us faster—but it can also make us passive.&lt;/p&gt;

&lt;p&gt;If we let AI handle every routine task, we risk losing the muscle memory of coding and the deeper capabilities that sit behind it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;spotting subtle bugs&lt;/li&gt;
&lt;li&gt;reasoning about state, concurrency, and failure modes&lt;/li&gt;
&lt;li&gt;designing maintainable systems&lt;/li&gt;
&lt;li&gt;understanding why something works (not just that it works)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To be fair, skill atrophy isn’t always bad. We don’t memorise phone numbers any more. Most of us don’t write assembly. We routinely offload work to tools—and that’s progress.&lt;/p&gt;

&lt;p&gt;So the real question isn’t “Will skills fade?”&lt;br&gt;&lt;br&gt;
It’s: &lt;strong&gt;which skills can fade safely, and which ones are non-negotiable?&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;A relevant signal from recent research and commentary&lt;/h3&gt;

&lt;p&gt;Some recent writing on AI coding assistance (including discussion of Anthropic-related research) points to a recurring pattern: AI can help people complete tasks faster, while reducing conceptual understanding and debugging performance afterwards—especially for less experienced developers.&lt;br&gt;&lt;br&gt;
Source: &lt;a href="https://www.rafay99.com/blog/ai-coding-assistance-skill-atrophy-anthropic-research/" rel="noopener noreferrer"&gt;https://www.rafay99.com/blog/ai-coding-assistance-skill-atrophy-anthropic-research/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Whether or not the exact numbers generalise to your context, the pattern is worth taking seriously: &lt;strong&gt;delegation style matters&lt;/strong&gt;. If AI replaces thinking, skills decay; if AI supports thinking, skills can grow.&lt;/p&gt;




&lt;h2&gt;Skills that should not fade&lt;/h2&gt;

&lt;p&gt;You can offload a lot to tools, but there are core skills that remain essential—especially when systems break, requirements shift, or stakes rise.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Protect these:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Debugging mental models&lt;/strong&gt;&lt;br&gt;
Reading logs, tracing state, isolating variables, reasoning about causality.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;System design and trade-offs&lt;/strong&gt;&lt;br&gt;
Boundaries, reliability, scalability, data integrity, operability.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security reasoning&lt;/strong&gt;&lt;br&gt;
Authn/authz, injection risks, secrets handling, least privilege, threat modelling basics.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Testing strategy&lt;/strong&gt; (not just writing tests)&lt;br&gt;
What to test, why it matters, where bugs hide, and what failures look like.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Code reading and comprehension&lt;/strong&gt;&lt;br&gt;
Navigating large codebases, understanding intent, detecting subtle regressions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance intuition&lt;/strong&gt;&lt;br&gt;
Profiling habits, complexity awareness, and knowing what to measure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Soft skills&lt;/strong&gt;&lt;br&gt;
Explaining intent, asking good questions, giving and receiving feedback, disagreeing constructively, and maintaining psychological safety during pairing and review.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These skills don’t develop automatically from shipping output. They develop from &lt;strong&gt;practice in reasoning, explanation, and correction&lt;/strong&gt;—especially debugging.&lt;/p&gt;




&lt;h2&gt;Goal × Risk&lt;/h2&gt;

&lt;p&gt;Instead of debating AI in the abstract, decide based on two questions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;What’s the goal right now?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Learning and mentorship&lt;/li&gt;
&lt;li&gt;Delivery speed and throughput&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;What’s the risk level?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Low risk: internal tool, simple UI, non-critical automation&lt;/li&gt;
&lt;li&gt;High risk: security, payments, permissions, data migrations, concurrency, regulated systems&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;The 2×2 matrix&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;A) Learning + Low Risk&lt;/strong&gt; (ideal practice zone)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Default pairing mode:&lt;/strong&gt; Solo or human pairing (optional).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI role:&lt;/strong&gt; Coach and critic, not primary author.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Required safeguards:&lt;/strong&gt; Form a hypothesis and plan first; use AI to critique and suggest alternatives; verify with small tests.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Success criterion:&lt;/strong&gt; You can explain the design and trade-offs in your own words.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;B) Learning + High Risk&lt;/strong&gt; (mentored rigour)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Default pairing mode:&lt;/strong&gt; Human pairing (recommended).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI role:&lt;/strong&gt; Option generator and test/edge-case assistant.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Required safeguards:&lt;/strong&gt; Human explanation, design review, and strong tests; explicitly discuss failure modes and what could go wrong.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Success criterion:&lt;/strong&gt; The developer can justify correctness and risk handling, not just produce working code.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;C) Speed + Low Risk&lt;/strong&gt; (automation sweet spot)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Default pairing mode:&lt;/strong&gt; Mostly solo.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI role:&lt;/strong&gt; Primary accelerator for drafts, boilerplate, refactors, and documentation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Required safeguards:&lt;/strong&gt; Run tests and linting; keep diffs small enough to review; do quick edge-case reasoning.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Success criterion:&lt;/strong&gt; Fast delivery with low review overhead and no recurring regressions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;D) Speed + High Risk&lt;/strong&gt; (trust boundaries)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Default pairing mode:&lt;/strong&gt; Human pairing (recommended).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI role:&lt;/strong&gt; Generate options, draft tests, and surface edge cases.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Required safeguards:&lt;/strong&gt; Humans own design decisions, threat modelling, and correctness arguments (invariants); review standards are stricter than normal.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Success criterion:&lt;/strong&gt; You can defend the approach under scrutiny (security, data integrity, reliability), not just demonstrate that it “seems to work.”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A useful rule of thumb: high delegation plus low verification creates fragility. Moderate delegation plus strong verification creates leverage.&lt;/p&gt;




&lt;h2&gt;AI during human pair programming: when it helps vs when it hurts&lt;/h2&gt;

&lt;p&gt;Introducing AI into a human pairing session changes the dynamic—sometimes productively, sometimes destructively.&lt;/p&gt;

&lt;h3&gt;When AI tends to hurt pairing&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;It disrupts shared mental modelling&lt;/strong&gt;&lt;br&gt;
Pairing works because two humans build a common understanding. AI can inject answers before the pair has aligned on the problem.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;It removes productive struggle&lt;/strong&gt;&lt;br&gt;
Some struggle is where learning happens, especially for juniors.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;It creates a third-wheel dynamic&lt;/strong&gt;&lt;br&gt;
One person drives, one watches, and AI becomes the main source of solutions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;It can amplify senior dominance&lt;/strong&gt;&lt;br&gt;
If the senior uses AI as a confidence amplifier, the junior can disengage.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;It can reduce psychological safety&lt;/strong&gt;&lt;br&gt;
People may feel pressured to accept suggestions or embarrassed to ask questions.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;When AI tends to help pairing&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Mechanical acceleration&lt;/strong&gt;&lt;br&gt;
Boilerplate, API lookups, test scaffolding, docs—let AI do it so humans can focus on reasoning.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rapid option generation&lt;/strong&gt;&lt;br&gt;
AI can offer multiple approaches worth discussing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Neutral tie-breaker for factual disputes&lt;/strong&gt;&lt;br&gt;
Syntax and library facts can be checked quickly (but still verified).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Unblocking when stuck&lt;/strong&gt;&lt;br&gt;
AI can provide a new angle when both partners stall.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;A practical middle ground: time-boxed consults&lt;/h3&gt;

&lt;p&gt;Instead of treating AI as all-on or all-off, try norms like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Humans first:&lt;/strong&gt; spend 5 to 10 minutes forming the model, constraints, and plan.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;2-minute AI consult:&lt;/strong&gt; ask a targeted question (not “solve this whole thing”).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Humans decide:&lt;/strong&gt; the pair chooses and can explain the approach.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI for mechanics:&lt;/strong&gt; use AI to draft code and tests after the reasoning is settled.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you want to explicitly protect junior growth, add one more rule:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Junior explains first:&lt;/strong&gt; before asking AI, the junior states their hypothesis and plan (even if it’s incomplete). Then AI can be used to refine it.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;When to pair with humans (human to human)&lt;/h2&gt;

&lt;p&gt;Human pairing becomes a strategic tool where humans consistently outperform AI:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Architecture and system design&lt;/strong&gt;&lt;br&gt;
Defining boundaries, trade-offs, operational concerns, long-term maintainability.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Complex problem-solving&lt;/strong&gt;&lt;br&gt;
Debugging multi-layer issues, concurrency, distributed systems, unclear requirements.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mentorship and onboarding&lt;/strong&gt;&lt;br&gt;
Teaching design thinking and helping juniors build durable mental models.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;High-stakes review&lt;/strong&gt;&lt;br&gt;
Permissions, payments, security-sensitive logic, migrations, business-critical workflows.&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;When to pair with AI (human to AI)&lt;/h2&gt;

&lt;p&gt;AI shines as a multiplier for tasks where speed and breadth matter:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Routine implementation&lt;/strong&gt;&lt;br&gt;
CRUD, glue code, scaffolding, documentation, repetitive refactors.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Early drafting&lt;/strong&gt;&lt;br&gt;
First drafts of functions, outlines, interface sketches, example implementations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Debugging and exploration support&lt;/strong&gt;&lt;br&gt;
Explaining errors, suggesting hypotheses, generating test cases.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Learning support&lt;/strong&gt;&lt;br&gt;
Summaries, examples, and API explanations—especially when you already have a goal.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A subtle but important detail: if AI assistance disproportionately harms debugging skill, then it’s not enough to use AI to fix the error. Use it to generate hypotheses and tests while you still practise the debugging loop.&lt;/p&gt;




&lt;h2&gt;Verification: how to use AI without getting burned&lt;/h2&gt;

&lt;p&gt;AI can be helpful and wrong at the same time. Treat outputs as untrusted until verified.&lt;/p&gt;

&lt;h3&gt;A lightweight verification checklist&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Run tests&lt;/strong&gt; (and add tests that would fail if the AI is wrong)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ask for edge cases:&lt;/strong&gt; “What inputs break this?” “What about null, empty, timeout?”&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Extract invariants:&lt;/strong&gt; “What must always be true after this function runs?”&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security sanity check&lt;/strong&gt; (especially for auth, permissions, data handling)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prefer small diffs&lt;/strong&gt; over pasting large chunks blindly&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Explain it in your own words&lt;/strong&gt; before merging&lt;/li&gt;
&lt;/ul&gt;
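&lt;p&gt;To make the first three checklist items concrete, here is a minimal sketch in Python of what “tests that would fail if the AI is wrong” can look like. The function and its name (&lt;code&gt;apply_discount&lt;/code&gt;) are hypothetical stand-ins for AI-drafted code, not output from any particular tool:&lt;/p&gt;

```python
# Hypothetical AI-drafted function under review (names are illustrative).
def apply_discount(price, pct):
    """Return price reduced by pct percent, with pct clamped to 0..100."""
    if price is None or pct is None:
        raise ValueError("price and pct are required")
    pct = max(0.0, min(100.0, pct))
    return round(price * (1.0 - pct / 100.0), 2)

def test_invariants_and_edge_cases():
    # Invariants: the result is never negative and never exceeds the input price.
    for price, pct in [(100.0, 0.0), (100.0, 50.0), (100.0, 100.0), (19.99, 33.0)]:
        result = apply_discount(price, pct)
        assert result >= 0.0
        assert price >= result  # a discount must never increase the price
    # Edge cases: null-ish inputs must fail loudly, not silently return junk.
    for bad_args in [(None, 10.0), (100.0, None)]:
        try:
            apply_discount(*bad_args)
            raise AssertionError("expected ValueError")
        except ValueError:
            pass

test_invariants_and_edge_cases()
print("ok")
```

&lt;p&gt;The point is the shape of the checks, not the domain: state invariants that must always hold after the function runs, then probe the null/empty/extreme inputs the AI was never asked about.&lt;/p&gt;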

&lt;p&gt;If the tool makes you feel done before you can explain it, you are not done.&lt;/p&gt;

&lt;h3&gt;Where you should be extra strict&lt;/h3&gt;

&lt;p&gt;Be cautious about using AI as the primary author in areas like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;authentication and authorisation&lt;/li&gt;
&lt;li&gt;money, billing, accounting&lt;/li&gt;
&lt;li&gt;concurrency and threading&lt;/li&gt;
&lt;li&gt;database migrations and data integrity&lt;/li&gt;
&lt;li&gt;cryptography and secrets management&lt;/li&gt;
&lt;li&gt;anything regulated or legally sensitive&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In these areas, AI can still help, but humans must own the reasoning and the risks.&lt;/p&gt;




&lt;h2&gt;A practical philosophy for today&lt;/h2&gt;

&lt;p&gt;Use AI to accelerate, not replace. Stay mentally engaged. Keep learning. Pair with humans strategically. If you’re not pairing with humans, use AI as a first-pass reviewer and a source of feedback on your PRs—but still do the reasoning work yourself.&lt;/p&gt;




&lt;h2&gt;Final thought&lt;/h2&gt;

&lt;p&gt;AI is changing software development quickly. We can embrace the benefits without losing the craft, but only if we’re intentional.&lt;/p&gt;

&lt;p&gt;The goal isn’t to resist AI or surrender to it.&lt;br&gt;&lt;br&gt;
It’s to build a partnership where &lt;strong&gt;speed doesn’t replace understanding&lt;/strong&gt;, and where the next generation of developers becomes stronger, not just faster.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>learning</category>
      <category>productivity</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
