<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Erik Israni</title>
    <description>The latest articles on Forem by Erik Israni (@erik_israni).</description>
    <link>https://forem.com/erik_israni</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3785527%2F7ed99075-4d52-41c8-9b92-63988783fc33.png</url>
      <title>Forem: Erik Israni</title>
      <link>https://forem.com/erik_israni</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/erik_israni"/>
    <language>en</language>
    <item>
      <title>Why the Best Open Source Teams Treat Writing, Testing, and Reviewing as Three Separate Jobs</title>
      <dc:creator>Erik Israni</dc:creator>
      <pubDate>Thu, 05 Mar 2026 15:18:44 +0000</pubDate>
      <link>https://forem.com/erik_israni/why-the-best-open-source-teams-treat-writing-testing-and-reviewing-as-three-separate-jobs-4beg</link>
      <guid>https://forem.com/erik_israni/why-the-best-open-source-teams-treat-writing-testing-and-reviewing-as-three-separate-jobs-4beg</guid>
      <description>&lt;p&gt;You adopted AI to move faster, and it worked. Output went up. Features that used to take days landed in hours. Contributors shipped more. And then something broke: &lt;strong&gt;AI accelerated the part of development that creates work for reviewers, but didn't do anything about review itself.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The result is what I call the &lt;strong&gt;AI Throughput Gap&lt;/strong&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Traditional workflow:
Write -----&amp;gt; Test -----&amp;gt; Review

AI-assisted workflow:
AI Writes Faster ---&amp;gt; PR Volume Explodes ---&amp;gt; Review Becomes the Bottleneck
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The symptoms are familiar:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;PRs pile up faster than any one person can clear them&lt;/li&gt;
&lt;li&gt;Edge cases slip through because reviewers are moving too fast&lt;/li&gt;
&lt;li&gt;Contributors wait days for feedback and go quiet&lt;/li&gt;
&lt;li&gt;You merge something that looked fine and spend the next week fielding issues&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This isn't a productivity problem. It's a tooling problem. Somewhere along the way, the idea took hold that if AI can help you write the code, it can handle the rest too. Review included.&lt;/p&gt;

&lt;p&gt;It can't. Not the same way. Understanding why is the difference between shipping confidently and shipping slop.&lt;/p&gt;




&lt;h2&gt;
  
  
  What "Agentic" AI Is Actually Optimized For
&lt;/h2&gt;

&lt;p&gt;When you're building something (describing a feature, watching an AI scaffold a component, iterating on logic), the model is operating in &lt;strong&gt;creation mode&lt;/strong&gt;. Its job is to generate, move forward, and produce the next thing based on what you want.&lt;/p&gt;

&lt;p&gt;That's powerful. But it's directional. The model is optimized for momentum.&lt;/p&gt;

&lt;p&gt;Reviewing code is the opposite. Instead of moving forward, you're scanning laterally across an entire change, looking for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What's missing&lt;/li&gt;
&lt;li&gt;What's fragile&lt;/li&gt;
&lt;li&gt;What works in isolation but breaks under load&lt;/li&gt;
&lt;li&gt;What conflicts with something three files away&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The questions are fundamentally different:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Creation mode asks:&lt;/strong&gt; &lt;em&gt;Does this do what it's supposed to do?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Review mode asks:&lt;/strong&gt; &lt;em&gt;What does this do that we didn't intend? What did we forget? What's going to bite us six months from now?&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;These are different cognitive jobs. And that difference is exactly why context matters so much, which is what makes collapsing them into one tool so costly.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Hidden Cost of One-Session Review
&lt;/h2&gt;

&lt;p&gt;When an AI helps you write a feature, it absorbs your intent. It knows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What you were trying to build&lt;/li&gt;
&lt;li&gt;The decisions you made along the way&lt;/li&gt;
&lt;li&gt;The tradeoffs you rationalized in real time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That context is valuable while building. It becomes a liability when reviewing.&lt;/p&gt;

&lt;p&gt;A good code reviewer doesn't know what you &lt;em&gt;meant&lt;/em&gt; to do. They only see what you &lt;em&gt;actually did&lt;/em&gt;. The distance between intent and implementation is exactly where bugs live, where security gaps hide, where forgotten edge cases sit quietly waiting.&lt;/p&gt;

&lt;p&gt;Using the same AI session that wrote your code to review it is like proofreading your own writing immediately after finishing it. Your brain fills in what should be there. You miss what isn't.&lt;/p&gt;

&lt;p&gt;Human engineering teams figured this out long ago. That's why code review exists as a discipline separate from implementation. AI teams are relearning the same lesson. And the solution is the same: &lt;strong&gt;separate the jobs.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The Three Jobs Framework
&lt;/h2&gt;

&lt;p&gt;Your development workflow is three genuinely distinct jobs, each with its own goal and failure mode.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Writing&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Goal: Produce working code that implements intent&lt;/li&gt;
&lt;li&gt;Failure mode: Building the wrong thing, or in a way that's hard to maintain&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Testing&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Goal: Confirm the code does what it claims&lt;/li&gt;
&lt;li&gt;Failure mode: False confidence from tests that pass but miss the cases that matter&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Reviewing&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Goal: Find what writing and testing missed&lt;/li&gt;
&lt;li&gt;Failure mode: A clean merge on code that causes problems nobody saw coming&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What only review catches:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Security vulnerabilities&lt;/li&gt;
&lt;li&gt;Performance edge cases&lt;/li&gt;
&lt;li&gt;Style inconsistencies that become technical debt&lt;/li&gt;
&lt;li&gt;Architectural drift that compounds over time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Collapsing these into one tool or one session doesn't make the workflow efficient. It makes each job worse. The teams shipping reliably with AI aren't using it to do everything at once. They're using it to do each job better, separately.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI doesn't replace good process. It amplifies it when you separate the jobs.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Why This Matters More in Open Source
&lt;/h2&gt;

&lt;p&gt;For a closed product, a bad merge is a bad day. You fix it, ship a patch, move on.&lt;/p&gt;

&lt;p&gt;In open source, the blast radius is everyone who depends on you.&lt;/p&gt;

&lt;p&gt;OSS maintainers are already stretched thin:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Often one or two people managing contributions from dozens&lt;/li&gt;
&lt;li&gt;Reviewing PRs from contributors you've never met&lt;/li&gt;
&lt;li&gt;Holding context across a codebase that keeps growing&lt;/li&gt;
&lt;li&gt;Every merge carries downstream consequences for users who built on your project&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A security vulnerability in a popular library doesn't just affect your users. It affects their users. A performance regression in a widely adopted package ripples outward in ways that are hard to track and harder to undo.&lt;/p&gt;

&lt;p&gt;And yet OSS maintainers are typically the least resourced to handle this well. No dedicated QA. No security review team. Just you, your contributors, and whatever time you can carve out.&lt;/p&gt;

&lt;p&gt;AI increased code throughput for everyone. Maintainers still have the same number of hours. That gap has to close somewhere, and right now it's closing on your review queue.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Purpose-Built AI Review Actually Looks Like
&lt;/h2&gt;

&lt;p&gt;A coding assistant and a code reviewer are doing completely different jobs. &lt;a href="https://app.kilo.ai/code-reviews" rel="noopener noreferrer"&gt;Kilo's Code Reviewer&lt;/a&gt; doesn't pick up where your last prompt left off. It reads a completed change fresh, the way a senior engineer would.&lt;/p&gt;

&lt;p&gt;Here's what that looks like in practice:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Reads diffs, not prompt history.&lt;/strong&gt; It sees only what changed, with no prior context about what you intended.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compares against codebase patterns.&lt;/strong&gt; It flags when a change drifts from established patterns in the rest of the project.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Runs security checks automatically.&lt;/strong&gt; Each PR gets checked for common vulnerabilities before it touches your main branch.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Runs on every PR, automatically.&lt;/strong&gt; External contributor submissions get a thorough first pass before you ever have to look at them.&lt;/li&gt;
&lt;/ul&gt;
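
&lt;p&gt;As a rough sketch of what "runs on every PR" means in practice, here's a hypothetical GitHub Actions trigger. The action name and its inputs are placeholders for illustration, not Kilo's actual configuration:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# .github/workflows/review.yml (hypothetical; action name and inputs are placeholders)
name: AI Code Review
on:
  pull_request:                 # fires on every PR, including external contributions
    types: [opened, synchronize]
jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: example-org/ai-review-action@v1   # placeholder reviewer action
        with:
          fail-on: security                     # placeholder input: block merge on security findings
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The point of wiring review into the &lt;code&gt;pull_request&lt;/code&gt; event rather than invoking it manually is that the first pass happens before any human looks at the change.&lt;/p&gt;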

&lt;p&gt;For maintainers, that translates to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Issues caught before merge, not after&lt;/li&gt;
&lt;li&gt;Contributors getting faster, more substantive feedback, which keeps them engaged&lt;/li&gt;
&lt;li&gt;Consistent review quality even when you're offline or heads-down on something else&lt;/li&gt;
&lt;li&gt;A review queue that doesn't require you to be everywhere at once&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The Right Tool for Each Job
&lt;/h2&gt;

&lt;p&gt;The goal isn't more AI in your workflow for its own sake. It's the right tool for each job, applied where it actually helps.&lt;/p&gt;

&lt;p&gt;You've already seen what's possible when a tool is matched to the task. Review is just the next job in line, and it deserves the same intentionality.&lt;/p&gt;

&lt;p&gt;The teams that look back on this period and feel good about how they built won't be the ones who used AI the most. They'll be the ones who used it most clearly.&lt;/p&gt;




&lt;h2&gt;
  
  
  Free Tooling for Open Source Maintainers
&lt;/h2&gt;

&lt;p&gt;The most critical software in the ecosystem shouldn't lose to commercial projects because it can't afford the tooling. That's why we built the Kilo OSS Sponsorship Program.&lt;/p&gt;

&lt;p&gt;We're already supporting over 280 open source projects with access to Kilo's full platform, including credits for the Code Reviewer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Three sponsorship tiers based on project size and maturity:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tier&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;th&gt;Who It's For&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Seed&lt;/td&gt;
&lt;td&gt;$9K/year&lt;/td&gt;
&lt;td&gt;Early-stage or smaller OSS projects&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Growth&lt;/td&gt;
&lt;td&gt;$24K/year&lt;/td&gt;
&lt;td&gt;Established projects with active contributors&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Premier&lt;/td&gt;
&lt;td&gt;$48K/year&lt;/td&gt;
&lt;td&gt;High-impact projects with broad adoption&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;No strings attached. Takes about 2 minutes to apply.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://kilo.ai/oss" rel="noopener noreferrer"&gt;Apply for the Kilo OSS Sponsorship Program →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you're maintaining something people depend on, this is for you.&lt;/p&gt;

</description>
      <category>codereview</category>
      <category>agents</category>
      <category>ai</category>
      <category>testing</category>
    </item>
    <item>
      <title>We're Sponsoring Open Source Projects with Free AI Code Reviews (and Up to $200/mo in Credits)</title>
      <dc:creator>Erik Israni</dc:creator>
      <pubDate>Wed, 25 Feb 2026 16:41:01 +0000</pubDate>
      <link>https://forem.com/erik_israni/were-sponsoring-open-source-projects-with-free-ai-code-reviews-and-up-to-200mo-in-credits-1ecf</link>
      <guid>https://forem.com/erik_israni/were-sponsoring-open-source-projects-with-free-ai-code-reviews-and-up-to-200mo-in-credits-1ecf</guid>
      <description>&lt;h2&gt;
  
  
  What We Built and Why
&lt;/h2&gt;

&lt;p&gt;Kilo is an open-source agentic coding platform. That means IDE extensions for VS Code and JetBrains, a CLI, cloud agents, the works. We process a lot of code. And one of the things we built that we're most proud of is our Code Review feature: automated, intelligent reviews that run on your pull requests and flag issues across performance, security, style, and test coverage.&lt;/p&gt;

&lt;p&gt;We use it ourselves. Our contributors use it. And we kept thinking: the projects that would benefit most from this are the ones with the least resources to pay for it.&lt;/p&gt;

&lt;p&gt;So we built a sponsorship program. 280 participants and counting!&lt;/p&gt;




&lt;h2&gt;
  
  
  How It Works
&lt;/h2&gt;

&lt;p&gt;The &lt;strong&gt;Kilo Open Source Sponsorship Program&lt;/strong&gt; provides qualifying open-source projects with free access to Kilo's tooling, starting with Code Reviews and scaling up based on your project's size and activity.&lt;/p&gt;

&lt;p&gt;There are three tiers:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Seed&lt;/strong&gt; — For smaller and early-stage projects. 5 Enterprise seats + automated Code Reviews on your public PRs. Worth about $9,000/year.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Growth&lt;/strong&gt; — For large, active projects with regular contributors. 15 Enterprise seats + $20–100/month in Kilo Credits + Code Reviews. Worth about $27,000/year.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Premier&lt;/strong&gt; — For high-visibility projects with significant communities. 25 Enterprise seats + $200/month in Kilo Credits + Code Reviews. Worth about $48,000/year.&lt;/p&gt;

&lt;p&gt;Tier placement happens on a rolling basis: we review your application and place you based on repository activity, community size, and project visibility.&lt;/p&gt;




&lt;h2&gt;
  
  
  What You Actually Get
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Automated Code Reviews on your public PRs.&lt;/strong&gt; Kilo shows up in your pull request review flow and reviews for performance, security, style, and test coverage. It's not a static code analysis tool. It reads the code, understands the context, and gives you the kind of feedback you'd expect from a senior engineer on your team: one who has access to endless coffee, never gets tired, and never misses a PR.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enterprise access.&lt;/strong&gt; All accepted projects get full Enterprise seats. That means advanced team features, usage analytics, and priority support for everyone working on your project.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kilo Credits (Growth and Premier tiers).&lt;/strong&gt; Credits you can use across any of Kilo's tools: the IDE extensions, CLI, cloud agents, and app builder. You pay per token at provider cost, with no markup.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Access to free models.&lt;/strong&gt; All tiers include access to free models at no cost. Currently that's Grok Code Fast 1, Mistral Devstral 2, and KAT-Coder-Pro V1.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Visibility.&lt;/strong&gt; We'll feature your project on our website and potentially in marketing materials. If you want your project to get in front of Kilo's 1M+ users, this is one way to do that.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A direct support channel.&lt;/strong&gt; Selected projects get a dedicated channel to talk to Kilo team members directly. Not a ticket queue.&lt;/p&gt;




&lt;h2&gt;
  
  
  What We Ask In Return
&lt;/h2&gt;

&lt;p&gt;Three things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Your project must be open source with a public repository and a valid OSS license (MIT or equivalent). &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You enable Code Reviews on your public PRs, so Kilo appears in your review flow.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You consent to being featured in Kilo's OSS showcase: website features, marketing materials, links to public PRs using Code Reviews.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That's it. No vendor lock-in, no exclusivity requirement, no "you must tweet about us" clause.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why We're Doing This
&lt;/h2&gt;

&lt;p&gt;Honestly? A few reasons.&lt;/p&gt;

&lt;p&gt;Open source is infrastructure. A lot of the code that runs the internet is maintained by people working nights and weekends with no budget for tooling. That's always felt like an oversight in how the tech industry thinks about sustainability.&lt;/p&gt;

&lt;p&gt;We also think AI-powered code review is genuinely better when it runs on real, diverse, production open source code. Your contributors write differently than our internal team does. Your PR patterns are different. Your codebases have different histories and constraints. Every project that runs Kilo Code Reviews helps make the tool sharper for the whole community.&lt;/p&gt;

&lt;p&gt;And yes, we want more developers to know Kilo exists. We think the best way to do that is to be useful first.&lt;/p&gt;




&lt;h2&gt;
  
  
  Who Should Apply
&lt;/h2&gt;

&lt;p&gt;If your project checks these boxes, you're a good candidate:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Public repository with an active OSS license&lt;/li&gt;
&lt;li&gt;Regular PRs being opened (doesn't have to be high-volume, just active)&lt;/li&gt;
&lt;li&gt;Maintainers who would actually use the tooling&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We're reviewing applications on a rolling basis. Applying doesn't guarantee acceptance; we'll reach out directly to the projects we accept.&lt;/p&gt;




&lt;h2&gt;
  
  
  Apply
&lt;/h2&gt;

&lt;p&gt;👉 &lt;a href="https://kilo.ai/oss" rel="noopener noreferrer"&gt;kilo.ai/oss&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Questions? Drop into our &lt;a href="https://discord.gg/MURhHhNND4" rel="noopener noreferrer"&gt;Discord&lt;/a&gt; or email &lt;a href="mailto:hi@kilocode.ai"&gt;hi@kilocode.ai&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;And if you're an OSS maintainer who's skeptical of sponsored programs in general: fair. We'd rather you check out the code (&lt;a href="https://github.com/Kilo-Org/kilocode" rel="noopener noreferrer"&gt;github.com/Kilo-Org/kilocode&lt;/a&gt;) and make your own judgment than take our word for it.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I'm Erik, one of Kilo's community managers. I spend most of my time in our &lt;a href="https://discord.gg/MURhHhNND4" rel="noopener noreferrer"&gt;Discord&lt;/a&gt; working with developers who are building with AI and supporting the open source projects that power the ecosystem. If you're maintaining an OSS project or just want to talk shop, come find me.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Kilo is the open-source agentic coding platform: IDE extensions for VS Code and JetBrains, a CLI, cloud agents, and more. 1M users, 21T+ tokens processed, #1 app on OpenRouter. &lt;a href="http://kilo.ai/" rel="noopener noreferrer"&gt;kilo.ai&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>codereview</category>
      <category>agents</category>
    </item>
  </channel>
</rss>
