<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Prakhar Singh</title>
    <description>The latest articles on Forem by Prakhar Singh (@prakharsingh_17).</description>
    <link>https://forem.com/prakharsingh_17</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3926778%2F3a92c02a-fd96-41a9-b097-c20d2501809f.jpeg</url>
      <title>Forem: Prakhar Singh</title>
      <link>https://forem.com/prakharsingh_17</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/prakharsingh_17"/>
    <language>en</language>
    <item>
      <title>Evaluating LLM code reviewers: an offline harness for precision, recall, and routing</title>
      <dc:creator>Prakhar Singh</dc:creator>
      <pubDate>Wed, 13 May 2026 19:12:40 +0000</pubDate>
      <link>https://forem.com/prakharsingh_17/evaluating-llm-code-reviewers-an-offline-harness-for-precision-recall-and-routing-880</link>
      <guid>https://forem.com/prakharsingh_17/evaluating-llm-code-reviewers-an-offline-harness-for-precision-recall-and-routing-880</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;If you cannot measure it, you cannot route it. Why offline evaluation is the difference between a code reviewer that improves over time and one the team dismisses within a sprint.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Chat evaluations are vibes-based: thumbs-up on "was this helpful?" measured against no particular ground truth. Code review needs something stricter. A reviewer that flags five real bugs and one bogus warning is useful; one that flags one real bug and five bogus warnings is dismissed within a sprint. Offline evaluation answers the question before the reviewer ships. It tells you which model to route a given change to, when to escalate, and whether the system is getting better or worse over time. Without it, every routing decision is a guess.&lt;/p&gt;

&lt;h2&gt;Building the evaluation set&lt;/h2&gt;

&lt;p&gt;Start with past pull requests that carry human accept/reject outcomes. This is your ground truth. Filter aggressively: comments the author dismissed within seconds, comments where the reviewer later admitted they were wrong, comments on code that has since been deleted. What remains is a set of (diff, finding, accept/reject) triples where the human label is trustworthy enough to score against.&lt;/p&gt;

&lt;p&gt;Three slices determine whether the set is useful. &lt;strong&gt;Change type&lt;/strong&gt;: a model that catches null-safety regressions perfectly may miss concurrency bugs entirely, and vice versa. If your eval set is 90% style nits, the score tells you nothing about correctness. &lt;strong&gt;File ownership&lt;/strong&gt;: different teams write different code, and an evaluator that scores well on backend services may crater on frontend components. &lt;strong&gt;Language&lt;/strong&gt;: a Python reviewer handles types as optional annotations; a TypeScript reviewer treats them as structural contracts. A single aggregate score hides per-slice failures. Slice and score separately.&lt;/p&gt;
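
&lt;p&gt;A minimal sketch of what these triples can look like in code, with hypothetical field names; each example carries the slice attributes so that scoring can be grouped per slice rather than on the aggregate.&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-python"&gt;from collections import defaultdict
from dataclasses import dataclass

@dataclass
class EvalExample:
    # One (diff, finding, accept/reject) triple plus the attributes we slice on.
    diff: str
    finding: str
    label: str          # "accept" or "reject", from the human outcome
    change_type: str    # e.g. "null-safety", "concurrency", "style"
    owner_team: str     # e.g. "backend", "frontend"
    language: str       # e.g. "python", "typescript"

def slice_examples(examples):
    """Group the eval set by (change_type, owner_team, language) so each slice
    is scored separately instead of hiding failures in an aggregate number."""
    slices = defaultdict(list)
    for ex in examples:
        slices[(ex.change_type, ex.owner_team, ex.language)].append(ex)
    return slices
&lt;/code&gt;&lt;/pre&gt;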

&lt;h2&gt;Scoring: precision, recall, and the dimensions that matter&lt;/h2&gt;

&lt;p&gt;Precision and recall trade off against each other. In code review, precision matters more than recall. A missed real bug is an opportunity cost. A bogus flag is a trust cost, and trust collapses non-linearly: two or three bad comments in a single pull request are enough for a developer to start dismissing the bot reflexively, and once that habit forms, the reviewer's signal-to-noise ratio becomes irrelevant because nobody is reading it. Target recall above 0.7 and precision above 0.85 before any output reaches a developer.&lt;/p&gt;
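
&lt;p&gt;A sketch of the scoring and the release gate under those targets, assuming each surfaced finding has already been matched against the labeled ground truth for a slice.&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-python"&gt;def precision_recall(emitted, accepted_truth):
    """emitted: set of finding ids the reviewer produced for a slice.
    accepted_truth: set of finding ids humans accepted as real issues."""
    true_positives = len(emitted.intersection(accepted_truth))
    precision = true_positives / len(emitted) if emitted else 0.0
    recall = true_positives / len(accepted_truth) if accepted_truth else 0.0
    return precision, recall

def passes_gate(precision, recall, min_precision=0.85, min_recall=0.7):
    # Nothing from a slice that fails either target reaches a developer.
    return precision &gt;= min_precision and recall &gt;= min_recall
&lt;/code&gt;&lt;/pre&gt;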

&lt;p&gt;&lt;strong&gt;Multi-tier labeling.&lt;/strong&gt; Not all findings are created equal, and collapsing everything into accept/reject loses signal. A three-tier scheme works better in practice: &lt;strong&gt;Hard Reject&lt;/strong&gt; for factually wrong or harmful findings, &lt;strong&gt;Soft Reject&lt;/strong&gt; for valid-but-low-value suggestions (style nits, marginal improvements, technically-correct-but-low-priority), and &lt;strong&gt;Accept&lt;/strong&gt; for good catches. Three tiers let you compute precision at different strictness levels: Hard-Reject-only precision captures the rate of genuinely harmful false positives, while Soft+Hard Reject precision captures developer tolerance more broadly. The two numbers tell different stories, and both matter for calibration.&lt;/p&gt;
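
&lt;p&gt;The two precision numbers fall out of the tier counts directly; a minimal sketch:&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-python"&gt;def tiered_precision(labels):
    """labels: one tier per surfaced finding:
    "accept", "soft_reject", or "hard_reject"."""
    total = len(labels)
    if total == 0:
        return 0.0, 0.0
    accepts = labels.count("accept")
    hard_rejects = labels.count("hard_reject")
    # Hard-Reject-only precision: only harmful findings count as false positives.
    hard_only = (total - hard_rejects) / total
    # Soft+Hard Reject precision: anything short of an accept counts against it.
    soft_and_hard = accepts / total
    return hard_only, soft_and_hard
&lt;/code&gt;&lt;/pre&gt;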

&lt;p&gt;&lt;strong&gt;Self-consistency over N samples.&lt;/strong&gt; Run the same diff through the reviewer multiple times. If it produces different findings each time, the model is underspecified for the task. Low self-consistency correlates with high false-positive rate in production, and it is a cheaper signal to measure than full precision/recall against ground truth. Track it per model version and per slice.&lt;/p&gt;
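
&lt;p&gt;One way to measure it, as a sketch: mean pairwise Jaccard overlap between the finding sets from repeated runs, assuming a hypothetical &lt;code&gt;review&lt;/code&gt; callable that returns normalized finding strings.&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-python"&gt;from itertools import combinations

def self_consistency(review, diff, n=5):
    """Run the same diff n times and score how much the runs agree.
    1.0 means the reviewer says the same thing every time; values near 0
    mean the model is underspecified for this diff."""
    runs = [frozenset(review(diff)) for _ in range(n)]
    scores = []
    for a, b in combinations(runs, 2):
        union = a.union(b)
        scores.append(len(a.intersection(b)) / len(union) if union else 1.0)
    return sum(scores) / len(scores) if scores else 1.0
&lt;/code&gt;&lt;/pre&gt;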

&lt;p&gt;&lt;strong&gt;Severity-aware precision.&lt;/strong&gt; A bogus "use const instead of let" suggestion is an eye-roll. A bogus security or null-dereference claim is a trust-destroyer. Weighted precision, where false positives are scored by their potential impact rather than counted equally, tracks closer to actual developer tolerance than raw precision. Label severity on the evaluation set (critical, medium, low) and weight false positives accordingly: a false-critical costs 10x a false-low in the weighted score. The number that predicts whether your reviewer stays in the loop is almost never raw precision.&lt;/p&gt;
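
&lt;p&gt;A sketch of the weighted score; the 10x critical-to-low ratio comes from above, while the medium weight is an assumed tuning knob.&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-python"&gt;SEVERITY_WEIGHT = {"critical": 10.0, "medium": 3.0, "low": 1.0}  # medium is assumed

def weighted_precision(findings):
    """findings: list of (accepted: bool, severity: str) for surfaced comments.
    False positives are charged by severity, so a false critical costs 10x
    a false low in the final score."""
    credit = 0.0
    charged = 0.0
    for accepted, severity in findings:
        weight = SEVERITY_WEIGHT[severity]
        charged += weight
        if accepted:
            credit += weight
    return credit / charged if charged else 0.0
&lt;/code&gt;&lt;/pre&gt;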

&lt;p&gt;&lt;strong&gt;Confidence calibration.&lt;/strong&gt; The reviewer should know when it does not know. A comment emitted with low confidence should be suppressed by the routing layer rather than surfaced with a disclaimer. Surfacing it anyway is tempting (more coverage) but the disclaimer carries no weight with a developer who already distrusts the tool. Calibrate a threshold on the offline eval set: what is the lowest confidence score at which precision stays above 0.85? Discard everything below it.&lt;/p&gt;
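
&lt;p&gt;The calibration itself is a small sweep over the offline set: try cutoffs from low to high and keep the lowest one at which precision over the surviving findings still clears the target.&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-python"&gt;def calibrate_threshold(scored_findings, min_precision=0.85):
    """scored_findings: list of (confidence, accepted) pairs from the offline
    eval set. Returns the lowest confidence cutoff at which precision over the
    surviving findings stays at or above min_precision, or None if no cutoff
    gets there."""
    for cutoff in sorted({conf for conf, _ in scored_findings}):
        kept = [accepted for conf, accepted in scored_findings if conf &gt;= cutoff]
        if kept and sum(kept) / len(kept) &gt;= min_precision:
            return cutoff
    return None
&lt;/code&gt;&lt;/pre&gt;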

&lt;h2&gt;From evaluation to routing&lt;/h2&gt;

&lt;p&gt;Offline evaluation is not a one-time gate. It is the mechanism that drives routing decisions in production. A classification router sends simple changes to a cheap fast model and complex changes to a frontier model, but the classification policy itself needs evaluation: what threshold defines "complex"? A fallback chain escalates from cheap to expensive when self-consistency drops, but the escalation threshold needs evaluation too. Both thresholds are hyperparameters, and offline eval is how you tune them.&lt;/p&gt;

&lt;p&gt;Evaluation-driven A/B routing ties this together. Maintain an offline evaluation set, score every model variant against it on the relevant slices, and route production traffic to whichever variant scores highest per slice. When a new model ships, the evaluation set tells you whether it is an upgrade or a regression before any user sees it. When a slice degrades, traffic shifts back automatically. This is the only routing strategy that adapts to model updates without manual intervention.&lt;/p&gt;

&lt;p&gt;Ensemble disagreement is itself a routing signal. When a cheap model signs off on a change but a frontier model flags something, the disagreement is worth surfacing regardless of which model is "correct." Disagreement rate between model pairs, tracked over time on the eval set, often catches regressions faster than raw precision shifts: if two models that agreed 95% of the time last week now agree 80%, something changed, and the eval set alone may not tell you what.&lt;/p&gt;
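
&lt;p&gt;A sketch of the per-slice scoreboard behind that routing, with hypothetical variant names: the highest-scoring variant owns each slice, and pairwise agreement is tracked alongside precision as the early-warning signal.&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-python"&gt;def best_variant_per_slice(scores):
    """scores: dict mapping (variant, slice) to an offline eval score.
    Returns which variant should receive each slice's production traffic."""
    best = {}
    for (variant, slc), score in scores.items():
        if slc not in best or score &gt; best[slc][1]:
            best[slc] = (variant, score)
    return {slc: variant for slc, (variant, _) in best.items()}

def agreement_rate(findings_a, findings_b):
    """Share of eval diffs on which two variants emit the same finding set.
    A week-over-week drop here is an early regression signal."""
    same = sum(1 for a, b in zip(findings_a, findings_b) if set(a) == set(b))
    return same / len(findings_a) if findings_a else 1.0

# Hypothetical example: the "concurrency" slice routes to whichever variant
# scores higher on it in the current offline run.
scores = {("cheap-v3", "concurrency"): 0.78, ("frontier-v2", "concurrency"): 0.91}
print(best_variant_per_slice(scores))   # {'concurrency': 'frontier-v2'}
&lt;/code&gt;&lt;/pre&gt;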

&lt;p&gt;This evaluation harness feeds the routing decisions described in the companion article on &lt;a href="https://prakharsingh.github.io/notes/agentic-code-review/" rel="noopener noreferrer"&gt;agentic code review in production&lt;/a&gt;: offline evaluation scores drive the model router and fallback chain thresholds directly.&lt;/p&gt;

&lt;h2&gt;The closed feedback loop&lt;/h2&gt;

&lt;p&gt;The offline evaluation set decays. The codebase evolves, old patterns become obsolete, new patterns emerge. Every accepted or dismissed comment in production must feed back into the ground truth set. A dismissed comment becomes a negative example: same diff, same finding, but ground truth equals reject. Next time the model proposes something similar, the offline eval catches it before a developer sees it. An accepted comment becomes a positive example and reinforces the pattern.&lt;/p&gt;
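
&lt;p&gt;The mechanics are small enough to sketch; the field names are hypothetical, and the timestamp is kept so newer examples can later outweigh stale ones.&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-python"&gt;def record_outcome(eval_set, diff, finding, dismissed, labeled_at, slice_attrs):
    """Fold a production outcome back into the ground-truth set: a dismissal
    becomes a negative example, an accept becomes a positive one."""
    eval_set.append({
        "diff": diff,
        "finding": finding,
        "label": "reject" if dismissed else "accept",
        "labeled_at": labeled_at,   # e.g. a datetime.date
        **slice_attrs,              # change_type, owner_team, language, ...
    })
&lt;/code&gt;&lt;/pre&gt;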

&lt;p&gt;Retrieval-augmented generation over the repository's past review threads can surface similar past comments, making it easier to spot when the model is proposing a finding that a human reviewer already dismissed under slightly different wording.&lt;/p&gt;
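
&lt;p&gt;A low-tech stand-in for that retrieval, as a sketch: token-overlap similarity against past review comments, so a candidate finding whose closest neighbor was dismissed can be flagged. A production system would use embeddings, but the shape of the check is the same.&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-python"&gt;def token_set(text):
    return set(text.lower().split())

def closest_past_comment(candidate, past_comments):
    """past_comments: list of (comment_text, was_dismissed). Returns the most
    similar past comment by Jaccard overlap on tokens, plus its score, so the
    caller can suppress findings whose nearest neighbor was already dismissed."""
    cand = token_set(candidate)
    best, best_score = None, 0.0
    for text, was_dismissed in past_comments:
        other = token_set(text)
        union = cand.union(other)
        score = len(cand.intersection(other)) / len(union) if union else 0.0
        if score &gt; best_score:
            best, best_score = (text, was_dismissed), score
    return best, best_score
&lt;/code&gt;&lt;/pre&gt;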

&lt;p&gt;This feedback loop is where most teams underinvest. They build the eval set once, ship the reviewer, and treat the eval as a static artifact. The reviewer plateaus, the team stops trusting it, and the project is shelved. The loop is what separates a reviewer that improves release over release from one that is disabled within a quarter. Without it, the false-positive rate is whatever the underlying model happens to produce. With it, the rate trends down per release.&lt;/p&gt;

&lt;h2&gt;Open challenges&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Ground truth drift.&lt;/strong&gt; An evaluation set built from last year's pull requests scores last year's patterns. As the codebase adds new modules, changes languages, or adopts new frameworks, the ground truth ages out. Periodic re-labeling, sampled from recent production dismissals and accepts, keeps the set relevant. Freshness weighting (recent examples count more than stale ones) is a lighter-weight alternative.&lt;/p&gt;
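
&lt;p&gt;Freshness weighting can be as small as an exponential decay on the example's label date; the half-life below is an assumed tuning knob, not a recommendation.&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-python"&gt;from datetime import date

def freshness_weight(labeled_at, today=None, half_life_days=180):
    """An example labeled half_life_days ago counts half as much as one
    labeled today."""
    today = today or date.today()
    return 0.5 ** ((today - labeled_at).days / half_life_days)

def freshness_weighted_precision(examples, today=None):
    """examples: list of (labeled_at: date, accepted: bool) for surfaced findings."""
    credit = charged = 0.0
    for labeled_at, accepted in examples:
        w = freshness_weight(labeled_at, today)
        charged += w
        if accepted:
            credit += w
    return credit / charged if charged else 0.0
&lt;/code&gt;&lt;/pre&gt;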




&lt;p&gt;&lt;em&gt;This article was originally published on &lt;a href="https://prakharsingh.github.io/notes/evaluating-llm-code-reviewers/" rel="noopener noreferrer"&gt;prakharsingh.github.io/notes/evaluating-llm-code-reviewers/&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>llm</category>
      <category>codereview</category>
      <category>evaluation</category>
      <category>ai</category>
    </item>
    <item>
      <title>Agentic code review in production: orchestration, evaluation, and the cost of being wrong</title>
      <dc:creator>Prakhar Singh</dc:creator>
      <pubDate>Tue, 12 May 2026 09:29:37 +0000</pubDate>
      <link>https://forem.com/prakharsingh_17/agentic-code-review-in-production-orchestration-evaluation-and-the-cost-of-being-wrong-3090</link>
      <guid>https://forem.com/prakharsingh_17/agentic-code-review-in-production-orchestration-evaluation-and-the-cost-of-being-wrong-3090</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;What "agentic" actually buys you over a linter, why single-model approaches stall, and why false positives — not raw model capability — determine whether the system stays in the loop.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;Agentic&lt;/em&gt; has become a marketing flag, but in code review it carries a precise technical meaning: the system, not the user, decides which tools to invoke against a change, in what order, and how to weight their findings. A linter runs a fixed pipeline. A single-pass language-model reviewer reads the diff and emits comments end-to-end. An agentic reviewer chooses between a compiler, a type checker, a test runner, a secret scanner, a static analyzer, and one or more language-model calls — then arbitrates their disagreements before surfacing a review comment.&lt;/p&gt;

&lt;p&gt;The model is one tool among several. The system's value is in the arbitration policy that decides which findings reach the developer.&lt;/p&gt;

&lt;h2&gt;The orchestration problem&lt;/h2&gt;

&lt;p&gt;Single-model approaches stall on three axes that pull in different directions: accuracy, latency, and cost. A frontier model gives the strongest multi-step reasoning on a non-trivial change but typically adds several seconds of latency and an order-of-magnitude cost premium per call; a small open-weights model returns in under a second but misses subtle invariants. Three routing strategies cover most of the production space:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Task-classification routing.&lt;/strong&gt; A lightweight classifier (a smaller model or a rules layer) decides which downstream model handles a request. Style nits, dead-code removal, and import-order checks go to a fast cheap model; logic changes and concurrency reasoning go to a stronger one. This works as long as the classifier is calibrated; misclassification lands hard-reasoning prompts on under-powered models and produces confident-sounding nonsense.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fallback chains.&lt;/strong&gt; Try the cheap model first; if self-consistency across N samples is low — or a cheap verifier disagrees — escalate. This is robust against classifier drift but doubles cost on the long tail.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Evaluation-driven A/B routing.&lt;/strong&gt; Maintain an offline evaluation set of past pull requests with human accept/reject outcomes; score model variants on precision and recall against that ground truth and route traffic to whichever variant scores highest on the relevant slice. This is the only strategy that adapts when a model is updated.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In practice production systems combine all three: classify, fall back on low confidence, and let offline evaluations reshape weights every release cycle.&lt;/p&gt;
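
&lt;p&gt;A sketch of how the three strategies might compose, with hypothetical model names and thresholds: the offline scores pick a starting variant for the slice, and low self-consistency on the specific diff escalates it.&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-python"&gt;def route_review(diff, classify, review_with, self_consistency, slice_scores,
                 escalation_threshold=0.6):
    """classify(diff) returns a slice label; review_with(model, diff) returns
    findings; self_consistency(model, diff) returns agreement in [0, 1];
    slice_scores maps (model, slice) to offline eval precision."""
    slc = classify(diff)

    # Evaluation-driven choice: start with the variant that scores highest
    # on this slice in the offline eval set.
    candidates = [m for (m, s) in slice_scores if s == slc]
    model = max(candidates, key=lambda m: slice_scores[(m, slc)]) if candidates else "frontier"

    # Fallback chain: escalate when the chosen model is not self-consistent
    # on this particular diff.
    consistent = self_consistency(model, diff) &gt;= escalation_threshold
    if not consistent and model != "frontier":
        model = "frontier"   # hypothetical strongest variant in the chain

    return model, review_with(model, diff)
&lt;/code&gt;&lt;/pre&gt;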

&lt;h2&gt;Grounding with static analysis and retrieval&lt;/h2&gt;

&lt;p&gt;A pure language-model review hallucinates fixes — proposing API calls that do not exist, citing version-specific behavior incorrectly, suggesting refactors that break other call sites the model never saw. Two anchors push the hallucination rate down.&lt;/p&gt;

&lt;p&gt;First, deterministic static analyzers run in parallel with the language model. Type errors, null dereferences, missing &lt;code&gt;await&lt;/code&gt;, unused imports — these are cheap, deterministic, and not worth a model call. The agent uses their output as ground truth and frames its review around facts the static analyzer surfaced, not facts the model invented.&lt;/p&gt;
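
&lt;p&gt;One possible shape for that merge, as a sketch: analyzer findings pass through as facts, and model findings on lines the analyzer already flagged are folded into them rather than surfaced twice.&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-python"&gt;def merge_findings(analyzer_findings, model_findings):
    """Both arguments are lists of dicts with "file", "line", "message";
    model findings also carry "confidence". Analyzer output is treated as
    ground truth."""
    anchored = {(f["file"], f["line"]) for f in analyzer_findings}
    merged = [dict(f, source="analyzer", confidence=1.0) for f in analyzer_findings]
    for f in model_findings:
        if (f["file"], f["line"]) in anchored:
            continue   # the deterministic finding already covers this line
        merged.append(dict(f, source="model"))
    return merged
&lt;/code&gt;&lt;/pre&gt;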

&lt;p&gt;Second, retrieval-augmented generation over the repository itself: prior review threads, commit messages, and the project's design documents. Most code review observations are not novel. The same patterns get flagged across files — null-safety regressions, missing index migrations, inconsistent error wrapping. Retrieving prior review comments scoped to the touched files, modules, or owners shifts the model from generic best-practice advice to comments that match the codebase's established conventions.&lt;/p&gt;

&lt;h2&gt;False positives as the dominant cost&lt;/h2&gt;

&lt;p&gt;Developer trust in an automated reviewer collapses non-linearly: a handful of bad comments is usually enough for the team to start dismissing the bot reflexively. The arithmetic is unforgiving: a 5% false-positive rate at twenty review comments per pull request is one bogus flag per PR. Within a sprint, the team stops reading the bot's output.&lt;/p&gt;

&lt;p&gt;Three controls keep the rate manageable:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Confidence thresholding&lt;/strong&gt; — never surface a comment below a calibrated threshold, even if the model is willing to speak.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dedup against historical dismissals&lt;/strong&gt; — if a reviewer dismissed an analogous comment six months ago, the same shape of comment on the same file is suspect this time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A closed feedback loop&lt;/strong&gt; — every dismissed or accepted comment becomes training signal for the next routing decision and threshold update.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The third is where most teams underinvest. Without the loop the false-positive rate is whatever the underlying model happens to produce. With it, the rate trends down per release.&lt;/p&gt;
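
&lt;p&gt;The second control, dedup against historical dismissals, can start smaller than it sounds; a sketch using a crude normalized "shape" key per file. A real system would use embeddings or rule identifiers, but the lookup is the same.&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-python"&gt;import re

def comment_shape(file_path, comment):
    """Crude normalization: lowercase, collapse tokens containing digits and
    whitespace runs, so differently worded versions of the same complaint
    collapse to one key."""
    text = re.sub(r"[A-Za-z_]*\d+[A-Za-z_]*", "_", comment.lower())
    text = re.sub(r"\s+", " ", text).strip()
    return (file_path, text)

def is_previously_dismissed(file_path, comment, dismissal_log):
    """dismissal_log: set of shapes built from comments reviewers dismissed."""
    return comment_shape(file_path, comment) in dismissal_log
&lt;/code&gt;&lt;/pre&gt;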

&lt;h2&gt;Compliance as a routing constraint&lt;/h2&gt;

&lt;p&gt;Compliance is not a bolt-on check. It belongs at the same layer as task classification — a first-class routing input, not a separate stage tacked on at the end.&lt;/p&gt;

&lt;p&gt;Code touching regulated data — protected health information, payment card numbers, EU resident identifiers — has to route differently. GDPR shapes both transfer (no diffs leaving the controller's processors without a Data Processing Agreement) and retention (logged prompts and completions are themselves processing activity). HIPAA obligations — Business Associate Agreements and minimum-necessary access — determine which model endpoints are eligible to process diffs containing PHI. PCI-DSS controls dictate cardholder-data redaction before model invocation. SOC 2 controls dictate operational guarantees on the reviewer service itself. Bolting any of this on after the fact produces gaps that surface during the first audit, not during development.&lt;/p&gt;
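
&lt;p&gt;Treated as a routing input, compliance becomes a filter over the endpoint pool before any accuracy or cost decision runs. The endpoint registry and data-class labels below are illustrative assumptions, not a statement about any particular provider.&lt;/p&gt;

&lt;pre&gt;&lt;code class="language-python"&gt;# Hypothetical endpoint registry: which contractual and technical controls
# each model endpoint has in place.
ENDPOINTS = {
    "frontier-api":    {"dpa_signed": True, "baa_signed": False, "region": "us"},
    "eu-hosted":       {"dpa_signed": True, "baa_signed": False, "region": "eu"},
    "vpc-self-hosted": {"dpa_signed": True, "baa_signed": True,  "region": "us"},
}

def eligible_endpoints(data_classes):
    """data_classes: labels attached to the diff by an upstream scanner,
    e.g. {"phi"}, {"pci"}, {"eu_personal_data"}. Compliance narrows the pool
    first; accuracy and cost routing choose among whatever is left."""
    eligible = []
    for name, props in ENDPOINTS.items():
        if "phi" in data_classes and not props["baa_signed"]:
            continue   # HIPAA: no PHI to an endpoint without a BAA
        if "eu_personal_data" in data_classes and not props["dpa_signed"]:
            continue   # GDPR: transfers require a Data Processing Agreement
        if "pci" in data_classes:
            continue   # PCI-DSS: redact cardholder data upstream, never route it raw
        eligible.append(name)
    return eligible
&lt;/code&gt;&lt;/pre&gt;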

&lt;h2&gt;Closing&lt;/h2&gt;

&lt;p&gt;Agentic code review is a coordination system with a language model embedded in it, not a language model with tools attached. The hard problems are not in the model — they are in the routing, the grounding, the evaluation, and the feedback loops that decide what the system does next time. Teams that treat the model as the product underinvest in everything that actually determines whether the product stays in use.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://prakharsingh.github.io/notes/agentic-code-review/" rel="noopener noreferrer"&gt;prakharsingh.github.io/notes/agentic-code-review&lt;/a&gt; on 12 May 2026. I'm &lt;a href="https://prakharsingh.github.io/" rel="noopener noreferrer"&gt;Prakhar Singh&lt;/a&gt;, Founding Engineer at Devzy AI building an agentic AI system for automated code review across CLI, PR, and IDE surfaces.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>codereview</category>
      <category>llm</category>
      <category>devtools</category>
    </item>
  </channel>
</rss>
