<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Nijat</title>
    <description>The latest articles on Forem by Nijat (@namrastanov).</description>
    <link>https://forem.com/namrastanov</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3887884%2F6acfac8e-d12b-47da-b909-7dc098f693ad.jpg</url>
      <title>Forem: Nijat</title>
      <link>https://forem.com/namrastanov</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/namrastanov"/>
    <language>en</language>
    <item>
      <title>PR Review Time Is Up 441% — The Real Cost of AI-Accelerated Development</title>
      <dc:creator>Nijat</dc:creator>
      <pubDate>Sat, 25 Apr 2026 12:05:39 +0000</pubDate>
      <link>https://forem.com/code-board/pr-review-time-is-up-441-the-real-cost-of-ai-accelerated-development-1ho6</link>
      <guid>https://forem.com/code-board/pr-review-time-is-up-441-the-real-cost-of-ai-accelerated-development-1ho6</guid>
      <description>&lt;h2&gt;The Numbers Don't Lie&lt;/h2&gt;

&lt;p&gt;The AI Engineering Report 2026 analyzed telemetry from 22,000 developers across more than 4,000 teams. The headline metrics look impressive: epics completed per developer are up 66%, task throughput is up 34%, and PR merge rates are climbing.&lt;/p&gt;

&lt;p&gt;But dig one layer deeper and the picture shifts dramatically.&lt;/p&gt;

&lt;p&gt;Median time in PR review is up 441%. Average time spent in code review is up nearly 200%. Pull request sizes have grown 51%. And 31% more PRs are merging with zero review — not by policy, but because reviewers can't keep pace with the volume.&lt;/p&gt;

&lt;p&gt;The report calls this pattern the "Acceleration Whiplash."&lt;/p&gt;

&lt;h2&gt;The Bottleneck Has Moved&lt;/h2&gt;

&lt;p&gt;For years, the constraint in software delivery was writing code. AI has largely removed that constraint. Developers are producing more code, faster, across more contexts than ever before.&lt;/p&gt;

&lt;p&gt;But the rest of the pipeline — review, testing, validation, incident response — was designed for human-paced output. AI has flooded that system with volume it was never built to absorb.&lt;/p&gt;

&lt;p&gt;The result: bugs per developer are up 54%. Incidents per PR have more than tripled. The probability that any given code change causes a production problem has increased dramatically.&lt;/p&gt;

&lt;p&gt;Meanwhile, the industry median cycle time has dropped from 11 days in 2020 to under 7 days in 2026. The biggest driver? AI-assisted code review and better async practices. So the teams that have invested in review infrastructure are pulling ahead, while teams that haven't are drowning in unreviewed code.&lt;/p&gt;

&lt;h2&gt;The Review Problem Is the Real Problem&lt;/h2&gt;

&lt;p&gt;High-performing teams review PRs within 4 hours. If your average exceeds 24 hours, that's likely your biggest hidden bottleneck — and it cascades through your entire development process.&lt;/p&gt;

&lt;p&gt;The solution isn't to skip review or rubber-stamp AI-generated code. It's to get smarter about where review effort goes. Not every PR carries the same risk. A one-line config change and a 500-line refactor touching authentication logic should not receive the same level of scrutiny.&lt;/p&gt;

&lt;p&gt;This is where tools like automated risk scoring, AI-assisted review triage, and unified PR dashboards earn their keep. Code Board's PR Risk Score, for example, uses heuristics like diff size, CI status, and sensitive file modifications to help teams focus reviewer attention where it matters most.&lt;/p&gt;

&lt;h2&gt;What Matters Now&lt;/h2&gt;

&lt;p&gt;The data is clear: AI makes teams faster at producing code. It does not automatically make teams faster at shipping quality software. The gap between those two things is where engineering discipline lives.&lt;/p&gt;

&lt;p&gt;Track your review times. Monitor your PR sizes. Know your rework rate. These aren't vanity metrics — they're the early warning signals that tell you whether your AI-driven speed is real or hollow.&lt;/p&gt;
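&lt;p&gt;None of these require heavyweight tooling to start. A minimal sketch, assuming hypothetical PR records with &lt;code&gt;opened_at&lt;/code&gt;, &lt;code&gt;merged_at&lt;/code&gt;, and &lt;code&gt;lines_changed&lt;/code&gt; fields (your provider's API will use different names):&lt;/p&gt;

```python
from datetime import datetime
from statistics import median

# Hypothetical PR records; real data would come from your provider's API.
prs = [
    {"opened_at": "2026-04-01T09:00", "merged_at": "2026-04-02T15:00", "lines_changed": 420},
    {"opened_at": "2026-04-03T10:00", "merged_at": "2026-04-03T12:30", "lines_changed": 35},
    {"opened_at": "2026-04-05T08:00", "merged_at": "2026-04-08T09:00", "lines_changed": 910},
]

def hours_open(pr):
    """Hours between opening and merging a PR."""
    opened = datetime.fromisoformat(pr["opened_at"])
    merged = datetime.fromisoformat(pr["merged_at"])
    return (merged - opened).total_seconds() / 3600

median_review_hours = median(hours_open(pr) for pr in prs)
median_pr_size = median(pr["lines_changed"] for pr in prs)
print(f"median time open: {median_review_hours:.1f}h, median size: {median_pr_size} lines")
```

&lt;p&gt;Run that weekly and you have a trend line; the trend matters more than any single number.&lt;/p&gt;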

&lt;p&gt;Writing code was never the hard part. Making sure it's good enough to ship always has been.&lt;/p&gt;

</description>
      <category>codereview</category>
      <category>engineeringmetrics</category>
      <category>aidevelopment</category>
      <category>dorametrics</category>
    </item>
    <item>
      <title>The Review Bottleneck: Why More AI Code Means Slower Teams in 2026</title>
      <dc:creator>Nijat</dc:creator>
      <pubDate>Fri, 24 Apr 2026 18:02:33 +0000</pubDate>
      <link>https://forem.com/code-board/the-review-bottleneck-why-more-ai-code-means-slower-teams-in-2026-1e5n</link>
      <guid>https://forem.com/code-board/the-review-bottleneck-why-more-ai-code-means-slower-teams-in-2026-1e5n</guid>
      <description>&lt;h2&gt;The Bottleneck Moved&lt;/h2&gt;

&lt;p&gt;AI coding tools promised faster development. They delivered — sort of. Developers using AI complete 21% more tasks and merge 98% more pull requests. But PR review time has increased 91%. The bottleneck didn't disappear. It relocated from writing code to verifying it.&lt;/p&gt;

&lt;p&gt;LinearB's 2026 analysis of 8.1 million pull requests across 4,800+ organizations found that developers feel 20% faster but are actually 19% slower. That's a 39-point gap between perceived and actual productivity.&lt;/p&gt;

&lt;h2&gt;The Numbers Are Stark&lt;/h2&gt;

&lt;p&gt;Sonar's 2026 State of Code Developer Survey of 1,100+ developers confirmed that AI accounts for 42% of all committed code — a number developers expect to reach 65% by 2027. Yet 96% of developers say they don't fully trust AI-generated code to be functionally correct, and only 48% report that they always verify it before committing.&lt;/p&gt;

&lt;p&gt;The Pragmatic Engineer's 2026 survey described a specific archetype — the "Builder" — who is most overwhelmed by reviewing AI-generated code from colleagues. Some teams are seeing 30 PRs per day with only six reviewers. That ratio is unsustainable no matter how you look at it.&lt;/p&gt;

&lt;p&gt;Meanwhile, Lightrun's 2026 report found that 43% of AI-generated code changes require manual debugging in production even after passing QA and staging. Zero percent of surveyed engineering leaders described themselves as "very confident" that AI-generated code will behave correctly once deployed.&lt;/p&gt;

&lt;h2&gt;What Actually Helps&lt;/h2&gt;

&lt;p&gt;The teams handling this well share a few traits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;They enforce PR size limits regardless of generation speed.&lt;/strong&gt; AI makes it trivially easy to produce massive diffs. That doesn't mean reviewers can absorb them.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;They use AI-assisted review as a first filter.&lt;/strong&gt; The emerging "review sandwich" approach — AI catches surface-level issues first, humans focus on architecture and business logic — reduces human review time by 30-50% according to GitHub's internal data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;They track review metrics as seriously as shipping metrics.&lt;/strong&gt; If you measure deployment frequency but not time-to-review or review queue depth, you're flying blind on half the pipeline.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;They centralize visibility across repos.&lt;/strong&gt; When PRs are scattered across dozens of repositories and providers, stale reviews become invisible. Tools like Code Board exist specifically to aggregate PRs into a single view with risk scoring, so teams can spot bottlenecks before they compound.&lt;/li&gt;
&lt;/ul&gt;
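&lt;p&gt;The first trait on that list is the easiest to automate. A minimal sketch of a size gate that could run as a CI step; the 400-line threshold and the numstat-style input are assumptions to tune, not a published standard:&lt;/p&gt;

```python
# Hypothetical threshold; tune per team. The point is that the limit is
# enforced mechanically in CI, not left to reviewer discretion.
MAX_CHANGED_LINES = 400

def total_changed(numstat_lines):
    """Sum added plus deleted lines from tab-separated
    'added deleted path' records (the 'git diff --numstat' format)."""
    total = 0
    for line in numstat_lines:
        added, deleted, _path = line.split("\t")
        if added.isdigit() and deleted.isdigit():  # binary files show '-'
            total += int(added) + int(deleted)
    return total

def check(numstat_lines):
    """Exit-code-style result: 0 if within limit, 1 if the PR is too big."""
    changed = total_changed(numstat_lines)
    if changed > MAX_CHANGED_LINES:
        print(f"FAIL: {changed} changed lines (limit {MAX_CHANGED_LINES})")
        return 1
    print(f"OK: {changed} changed lines")
    return 0

check(["120\t40\tsrc/api/handlers.py", "310\t55\tsrc/api/models.py"])
```

&lt;p&gt;Wire something like this into CI and the limit stops depending on reviewer goodwill.&lt;/p&gt;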

&lt;h2&gt;The Real Question&lt;/h2&gt;

&lt;p&gt;The teams that are shipping well in 2026 aren't the ones generating the most code. They're the ones whose review culture and tooling adapted to match their new output velocity. If your PR volume doubled this year but your review process didn't change at all, that's where your next problem is hiding.&lt;/p&gt;

</description>
      <category>codereview</category>
      <category>aiassisteddevelopment</category>
      <category>engineeringproductivity</category>
      <category>pullrequests</category>
    </item>
    <item>
      <title>Why PR Risk Scoring Matters More Than PR Count</title>
      <dc:creator>Nijat</dc:creator>
      <pubDate>Wed, 22 Apr 2026 09:02:55 +0000</pubDate>
      <link>https://forem.com/namrastanov/why-pr-risk-scoring-matters-more-than-pr-count-4n97</link>
      <guid>https://forem.com/namrastanov/why-pr-risk-scoring-matters-more-than-pr-count-4n97</guid>
      <description>&lt;h2&gt;The Problem With Treating All PRs Equally&lt;/h2&gt;

&lt;p&gt;Every engineering team has a review queue. And in most setups, that queue is a flat list sorted by time. A one-line documentation fix sits next to a 600-line database migration, and they get the same visual treatment.&lt;/p&gt;

&lt;p&gt;This creates two failure modes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;High-risk PRs sit too long&lt;/strong&gt; because reviewers skim past them in favor of easier reviews.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Low-risk PRs get over-reviewed&lt;/strong&gt; because there's no signal telling anyone they're safe.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Both waste time. Both hurt throughput. And neither gets better by just adding more reviewers.&lt;/p&gt;

&lt;h2&gt;What Makes a PR "Risky"?&lt;/h2&gt;

&lt;p&gt;Risk isn't subjective — or at least, it doesn't have to be. There are concrete, measurable signals that correlate strongly with the likelihood of a PR causing problems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Diff size&lt;/strong&gt;: Larger changes have more surface area for bugs. This is well-documented in research on code review effectiveness.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CI pipeline status&lt;/strong&gt;: A failing build is an obvious red flag, but a PR with &lt;em&gt;no&lt;/em&gt; CI run at all is arguably worse: it means nobody knows whether the change even builds.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Merge conflicts&lt;/strong&gt;: Active conflicts mean the PR is drifting from the base branch. The longer it sits, the harder the merge.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sensitive file modifications&lt;/strong&gt;: Changes to infrastructure configs, authentication logic, database schemas, or deployment manifests carry outsized blast radius compared to their line count.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;None of these signals alone tell the full story. Combined, they paint a useful picture.&lt;/p&gt;

&lt;h2&gt;Scoring Instead of Guessing&lt;/h2&gt;

&lt;p&gt;The idea behind PR risk scoring is simple: assign a numeric score (say, 0-100) to every PR based on these heuristics, and surface it before anyone opens the diff.&lt;/p&gt;

&lt;p&gt;This lets teams triage intelligently. A PR scoring 85 gets senior eyes immediately. A PR scoring 12 can be approved with a quick glance.&lt;/p&gt;
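&lt;p&gt;To make the idea concrete, here is a minimal scoring sketch. The weights, thresholds, and field names are illustrative assumptions, not Code Board's actual formula:&lt;/p&gt;

```python
# Illustrative heuristics only; a real product tunes these weights empirically.
SENSITIVE_HINTS = ("auth", "migration", "deploy", ".tf", "Dockerfile", "schema")

def risk_score(pr):
    """Return a 0-100 risk score from a few cheap signals."""
    score = 0
    # Diff size: bigger changes carry more surface area for bugs.
    score += min(40, pr["lines_changed"] // 25)
    # CI status: failing is bad; no run at all is arguably worse.
    if pr["ci_status"] == "failed":
        score += 20
    elif pr["ci_status"] == "none":
        score += 30
    # Active merge conflicts mean the PR is drifting from its base branch.
    if pr["has_conflicts"]:
        score += 15
    # Sensitive paths have outsized blast radius relative to line count.
    if any(hint in path for path in pr["files"] for hint in SENSITIVE_HINTS):
        score += 15
    return min(score, 100)

pr = {"lines_changed": 600, "ci_status": "none", "has_conflicts": False,
      "files": ["db/schema.rb", "app/models/user.py"]}
print(risk_score(pr))  # large diff, no CI run, schema change: high score
```

&lt;p&gt;Even a crude score like this is enough to sort a review queue by risk instead of by arrival time.&lt;/p&gt;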

&lt;p&gt;At Code Board, we built this directly into the unified PR dashboard. Every PR that appears on your board — regardless of whether it comes from GitHub or GitLab — gets an automatic risk score. No configuration needed, no manual tagging.&lt;/p&gt;

&lt;h2&gt;The Bigger Point&lt;/h2&gt;

&lt;p&gt;Review quality isn't about volume. It's about allocation. The best engineering teams don't review more — they review smarter. Risk scoring is one of the simplest ways to make that shift, and it's surprising how few tools treat it as a first-class feature.&lt;/p&gt;

&lt;p&gt;Stop reviewing PRs in the order they arrived. Start reviewing them in the order they matter.&lt;/p&gt;

</description>
      <category>codereview</category>
      <category>engineeringmanagement</category>
      <category>developerproductivity</category>
    </item>
    <item>
      <title>Why CI Failure Investigation Is Still a Manual Time Sink in 2026</title>
      <dc:creator>Nijat</dc:creator>
      <pubDate>Wed, 22 Apr 2026 07:58:18 +0000</pubDate>
      <link>https://forem.com/code-board/why-ci-failure-investigation-is-still-a-manual-time-sink-in-2026-53d1</link>
      <guid>https://forem.com/code-board/why-ci-failure-investigation-is-still-a-manual-time-sink-in-2026-53d1</guid>
      <description>&lt;h2&gt;The real cost of a red pipeline&lt;/h2&gt;

&lt;p&gt;When a CI pipeline fails, the actual fix usually takes a few minutes. The investigation that precedes it? That's where the time goes.&lt;/p&gt;

&lt;p&gt;Every developer knows the routine: the build goes red, you click through to the logs, you scroll past pages of dependency installation and environment setup, you locate the actual error, and then you mentally trace it back to your changes. It's not intellectually challenging work. It's just slow, repetitive, and surprisingly draining when it happens multiple times a day.&lt;/p&gt;

&lt;h2&gt;Why this problem persists&lt;/h2&gt;

&lt;p&gt;CI systems are designed to run pipelines, not to help you understand failures. The logs they produce are comprehensive by design — they capture everything so that edge cases are debuggable. But that comprehensiveness works against you in the common case, where the failure is straightforward and buried under noise.&lt;/p&gt;

&lt;p&gt;Most teams develop informal coping strategies. Senior developers learn to &lt;code&gt;Ctrl+F&lt;/code&gt; for specific keywords. Teams write wrapper scripts that format output. Some add custom error messages to their test suites. These all help at the margins, but the fundamental problem remains: you're doing pattern matching and root cause analysis manually, every single time.&lt;/p&gt;
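&lt;p&gt;That ritual is easy to sketch in code, which is exactly why doing it by hand feels wasteful. A toy version of the keyword scan (the pattern list is illustrative; real logs need a far richer set):&lt;/p&gt;

```python
import re

# A few common failure signatures; a real tool needs a far richer set.
ERROR_PATTERNS = re.compile(r"(?i)\b(?:error|failed|exception|traceback)\b")

def first_failures(log_text, limit=3):
    """Return the first few (line number, text) pairs that match a
    failure signature, skipping the setup noise that dominates CI logs."""
    hits = []
    for lineno, line in enumerate(log_text.splitlines(), start=1):
        if ERROR_PATTERNS.search(line):
            hits.append((lineno, line.strip()))
            if len(hits) == limit:
                break
    return hits

log = """Installing dependencies...
Collecting requests
Running tests...
FAILED tests/test_auth.py::test_login - AssertionError: 401 != 200
1 failed, 82 passed in 14.2s
"""
for lineno, line in first_failures(log):
    print(f"line {lineno}: {line}")
```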

&lt;h2&gt;Where AI actually helps&lt;/h2&gt;

&lt;p&gt;This is one of the areas where AI delivers concrete, measurable value — not hype, just utility. Parsing structured log output, identifying error patterns, and correlating them with code changes is exactly the kind of repetitive analytical task that language models handle well.&lt;/p&gt;

&lt;p&gt;The key is connecting the CI output to the actual diff. Knowing that a test failed is step one. Knowing &lt;em&gt;which lines you changed&lt;/em&gt; caused that failure is step two, and that's where most of the investigation time lives.&lt;/p&gt;
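&lt;p&gt;Step two can be sketched just as simply, at file granularity: intersect the files named in the failure output with the files the PR touched. A toy version, assuming you already have both lists in hand:&lt;/p&gt;

```python
import re

# Matches path-like tokens in a traceback or test report, e.g. 'src/auth.py'.
# The extension list is an illustrative assumption.
PATH_RE = re.compile(r"[\w./-]+\.(?:py|js|ts|go|rb)")

def suspect_files(failure_text, changed_files):
    """Files that appear both in the failure output and in the PR diff;
    these are the first places to look."""
    mentioned = set(PATH_RE.findall(failure_text))
    return sorted(mentioned.intersection(changed_files))

failure = "FAILED tests/test_auth.py -- raised in src/auth/session.py line 88"
changed = ["src/auth/session.py", "src/billing/invoice.py", "README.md"]
print(suspect_files(failure, changed))
```

&lt;p&gt;Line-level correlation takes more work, but even this file-level intersection removes most of the scrolling.&lt;/p&gt;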

&lt;p&gt;Code Board's CI Failure Intelligence does exactly this — it reads failing CI logs, identifies the root cause, and maps it to specific changes in your pull request, often suggesting a fix with a code snippet. It's not magic. It's just automating the mechanical part of a workflow that developers repeat hundreds of times a year.&lt;/p&gt;

&lt;h2&gt;The bigger picture&lt;/h2&gt;

&lt;p&gt;Developer productivity isn't just about writing code faster. It's about reducing the friction around everything else — reviews, debugging, context switching. CI failure investigation is one of those friction points that's easy to overlook because each individual instance feels small. But the aggregate cost across a team is significant.&lt;/p&gt;

&lt;p&gt;If your engineering org tracks cycle time or lead time, slow CI debugging is contributing to those numbers more than you might think. It's worth treating as a problem to solve, not just a fact of life.&lt;/p&gt;

</description>
      <category>cicd</category>
      <category>devrel</category>
      <category>engineeringproductivity</category>
    </item>
    <item>
      <title>Why Pull Requests Go Stale — And Why It's a Visibility Problem, Not a People Problem</title>
      <dc:creator>Nijat</dc:creator>
      <pubDate>Mon, 20 Apr 2026 22:21:49 +0000</pubDate>
      <link>https://forem.com/code-board/why-pull-requests-go-stale-and-why-its-a-visibility-problem-not-a-people-problem-343h</link>
      <guid>https://forem.com/code-board/why-pull-requests-go-stale-and-why-its-a-visibility-problem-not-a-people-problem-343h</guid>
      <description>&lt;h2&gt;The Pattern&lt;/h2&gt;

&lt;p&gt;Every engineering team that works across more than a dozen repositories eventually hits the same wall: a pull request sits open for days — sometimes weeks — not because anyone rejected it, but because no one saw it.&lt;/p&gt;

&lt;p&gt;The developer who opened it assumed the assigned reviewer got the notification. The reviewer was heads-down in a different project and missed it in a flood of GitHub emails. The engineering manager had no reason to check that specific repository on that specific day.&lt;/p&gt;

&lt;p&gt;The PR wasn't blocked. It was invisible.&lt;/p&gt;

&lt;h2&gt;Why This Keeps Happening&lt;/h2&gt;

&lt;p&gt;Modern engineering teams distribute work across dozens or hundreds of repositories. Microservices architectures, monorepo-to-polyrepo migrations, and multi-team ownership structures all contribute to a world where no single person has a complete mental model of where active work is happening.&lt;/p&gt;

&lt;p&gt;GitHub and GitLab notifications are designed for individual contributors tracking their own work. They're not designed to give anyone — developer, reviewer, or manager — a cross-repo view of what's open and aging.&lt;/p&gt;

&lt;p&gt;Some teams build Slack integrations or custom bots. These work for a while, but they tend to become another source of noise. When a channel posts 30 PR updates a day, people mute it. The signal-to-noise ratio collapses.&lt;/p&gt;

&lt;h2&gt;The Actual Problem&lt;/h2&gt;

&lt;p&gt;This isn't about lazy reviewers or careless developers. It's a tooling gap. The default experience for managing PRs across multiple repositories provides no aggregated view, no age tracking, and no prioritization.&lt;/p&gt;

&lt;p&gt;Without that, you're relying on individual vigilance to compensate for a systemic blind spot. That doesn't scale.&lt;/p&gt;

&lt;h2&gt;What Good Looks Like&lt;/h2&gt;

&lt;p&gt;The fix is straightforward in concept: aggregate every open PR into a single view, sorted by age and risk, with enough context to act without clicking through to each repo.&lt;/p&gt;
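&lt;p&gt;The aggregation itself is not complicated. A minimal sketch, assuming open PRs have already been fetched from each provider's API into plain records (field names are hypothetical):&lt;/p&gt;

```python
from datetime import datetime, timezone

STALE_AFTER_DAYS = 3  # hypothetical threshold; tune to your team's SLA

def days_open(pr, now):
    """Whole days since the PR was opened."""
    opened = datetime.fromisoformat(pr["opened_at"])
    return (now - opened).days

def triage_board(prs, now):
    """All open PRs across repos and providers, oldest first,
    with stale ones flagged so they are impossible to miss."""
    rows = []
    for pr in sorted(prs, key=lambda p: p["opened_at"]):
        age = days_open(pr, now)
        flag = "STALE" if age >= STALE_AFTER_DAYS else "ok"
        rows.append(f"[{flag:5}] {age:2}d  {pr['repo']}#{pr['number']}  {pr['title']}")
    return rows

now = datetime(2026, 4, 20, tzinfo=timezone.utc)
prs = [
    {"repo": "api", "number": 481, "opened_at": "2026-04-19T10:00:00+00:00", "title": "Fix rate limiter"},
    {"repo": "web", "number": 92, "opened_at": "2026-04-12T09:00:00+00:00", "title": "Checkout redesign"},
]
for row in triage_board(prs, now):
    print(row)
```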

&lt;p&gt;This is the core idea behind tools like &lt;a href="https://code-board.com" rel="noopener noreferrer"&gt;Code Board&lt;/a&gt;, which pulls PRs from GitHub and GitLab into one Kanban board with automatic risk scoring and staleness tracking. But regardless of what tool you use, the principle matters: &lt;strong&gt;visibility should be a default, not a side quest.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;The Takeaway&lt;/h2&gt;

&lt;p&gt;If PRs are going stale on your team, resist the urge to blame individuals. Look at the system instead. Ask whether your tooling gives anyone a complete picture of what's waiting for review across all your repositories.&lt;/p&gt;

&lt;p&gt;If the answer is no, that's where the problem lives.&lt;/p&gt;

</description>
      <category>codereview</category>
      <category>engineeringmanagement</category>
      <category>developerproductivity</category>
    </item>
    <item>
      <title>Review Latency Is a Visibility Problem, Not a People Problem</title>
      <dc:creator>Nijat</dc:creator>
      <pubDate>Sun, 19 Apr 2026 21:41:27 +0000</pubDate>
      <link>https://forem.com/code-board/review-latency-is-a-visibility-problem-not-a-people-problem-14d3</link>
      <guid>https://forem.com/code-board/review-latency-is-a-visibility-problem-not-a-people-problem-14d3</guid>
      <description>&lt;h2&gt;The Real Bottleneck in Your Development Cycle&lt;/h2&gt;

&lt;p&gt;Most engineering teams think they have a review speed problem. They set SLA targets, schedule dedicated review blocks, and nag people in Slack. But the patterns I saw across many teams this week point to a different conclusion: the bottleneck is rarely how fast someone reviews a PR. It's how long it takes before anyone even notices the PR exists.&lt;/p&gt;

&lt;h2&gt;The Notification Graveyard&lt;/h2&gt;

&lt;p&gt;GitHub and GitLab both send notifications. Email pings, in-app badges, Slack integrations. And yet PRs still sit for hours — sometimes days — before getting a first review.&lt;/p&gt;

&lt;p&gt;Why? Because the signal-to-noise ratio is terrible. A developer working across multiple repositories might get dozens of notifications per day. Email notifications get filtered. Slack messages scroll past. The GitHub notification tab becomes a graveyard of unread items nobody will ever revisit.&lt;/p&gt;

&lt;p&gt;The PR was there. The notification was sent. But nobody actually &lt;em&gt;saw&lt;/em&gt; it with enough context to act.&lt;/p&gt;

&lt;h2&gt;Process Won't Save You&lt;/h2&gt;

&lt;p&gt;The natural response is to add more process. Mandatory reviewer assignments. Daily standup check-ins about open PRs. Automated reminders. These things can help at the margins, but they're treating symptoms, not the cause.&lt;/p&gt;

&lt;p&gt;The cause is fragmentation. When your team's work lives across 15 repositories, two git platforms, and three communication tools, there is no single place where someone can glance and understand what needs attention right now.&lt;/p&gt;

&lt;h2&gt;Visibility Is the Multiplier&lt;/h2&gt;

&lt;p&gt;The teams that ship fastest aren't necessarily the ones with the most disciplined review habits. They're the ones where open PRs are impossible to miss.&lt;/p&gt;

&lt;p&gt;This can look different depending on your setup. Some teams use dashboards. Some use Kanban boards for PRs. At Code Board, we built a unified view across repos and providers with risk scoring specifically because we kept seeing this pattern — the biggest wins come from surfacing the right PR to the right person at the right time.&lt;/p&gt;

&lt;p&gt;But regardless of tooling, the principle holds: if you want to reduce cycle time, don't start by asking people to review faster. Start by making sure they see what needs reviewing.&lt;/p&gt;

&lt;h2&gt;What to Do This Week&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Measure your team's time-to-first-review, not just time-to-merge&lt;/li&gt;
&lt;li&gt;Check how many PRs sat for more than 24 hours without any reviewer interaction&lt;/li&gt;
&lt;li&gt;Ask your team honestly: do you always know when there's a PR waiting for you?&lt;/li&gt;
&lt;/ul&gt;
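&lt;p&gt;The first two checks take a few lines once you export PR events. A sketch, assuming hypothetical records where &lt;code&gt;first_review_at&lt;/code&gt; is &lt;code&gt;None&lt;/code&gt; when nobody has interacted yet:&lt;/p&gt;

```python
from datetime import datetime
from statistics import median

def hours_to_first_review(pr):
    """Hours from open to first reviewer interaction, or None if untouched."""
    if pr["first_review_at"] is None:
        return None
    opened = datetime.fromisoformat(pr["opened_at"])
    first = datetime.fromisoformat(pr["first_review_at"])
    return (first - opened).total_seconds() / 3600

def report(prs):
    """Median wait, plus PRs that waited over a day or were never touched."""
    waits = [h for h in map(hours_to_first_review, prs) if h is not None]
    over_24h = sum(1 for h in waits if h > 24) + sum(
        1 for pr in prs if pr["first_review_at"] is None
    )
    return {"median_hours": median(waits), "over_24h_or_untouched": over_24h}

prs = [
    {"opened_at": "2026-04-13T09:00", "first_review_at": "2026-04-13T11:00"},
    {"opened_at": "2026-04-13T10:00", "first_review_at": "2026-04-15T10:00"},
    {"opened_at": "2026-04-14T08:00", "first_review_at": None},
]
print(report(prs))
```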

&lt;p&gt;The answers will tell you whether you have a discipline problem or a visibility problem. In most cases, it's the latter.&lt;/p&gt;

</description>
      <category>codereview</category>
      <category>devrel</category>
      <category>engineeringmanagement</category>
    </item>
  </channel>
</rss>
