<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: ushironoko</title>
    <description>The latest articles on Forem by ushironoko (@ushironoko).</description>
    <link>https://forem.com/ushironoko</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F184327%2Fd968ef00-0d18-470f-8afb-2775f95fdc1d.jpeg</url>
      <title>Forem: ushironoko</title>
      <link>https://forem.com/ushironoko</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/ushironoko"/>
    <language>en</language>
    <item>
      <title>How octorus Renders 300K Lines of Diff at High Speed</title>
      <dc:creator>ushironoko</dc:creator>
      <pubDate>Fri, 13 Feb 2026 07:03:59 +0000</pubDate>
      <link>https://forem.com/ushironoko/how-octorus-renders-300k-lines-of-diff-at-high-speed-h4p</link>
      <guid>https://forem.com/ushironoko/how-octorus-renders-300k-lines-of-diff-at-high-speed-h4p</guid>
      <description>&lt;p&gt;&lt;a href="https://dev.to/ushironoko/octorus-a-rust-built-tui-tool-where-ai-autonomously-reviews-fixes-code-while-you-browse-pr-diffs-5gh9"&gt;In a previous post&lt;/a&gt;, I introduced my TUI tool. This time, I'd like to talk about the performance optimizations behind &lt;a href="https://github.com/ushironoko/octorus" rel="noopener noreferrer"&gt;octorus&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;What Do We Mean by "Fast"?&lt;/h2&gt;

&lt;p&gt;"Fast" can mean many things. Even just for rendering, there's initial display speed, syntax highlighting speed, scroll smoothness (fps), and more.&lt;/p&gt;

&lt;p&gt;Perceived speed and internal speed aren't always the same. No matter how much you optimize with zero-copy or caching, if the PR is massive, the API call becomes the bottleneck. And without rendering-level optimizations, the UI can freeze entirely.&lt;/p&gt;

&lt;p&gt;In octorus, I push internal optimizations as far as possible while also applying web-app-style thinking (FCP / LCP / INP) to the TUI.&lt;/p&gt;

&lt;h2&gt;Core Concept&lt;/h2&gt;

&lt;p&gt;The fundamental approach is to asynchronously build and consume caches based on the current display state. By maintaining &lt;strong&gt;5 layers of caching&lt;/strong&gt;, the perceived initial display time approaches 0ms, while also improving fps and minimizing allocations.&lt;/p&gt;

&lt;h3&gt;Session Cache for PR Data&lt;/h3&gt;

&lt;p&gt;PR data is fetched via &lt;code&gt;gh api&lt;/code&gt; when a PR is opened. The fetched diff and comment data are cached in memory. This cache remains valid for the entire octorus session — even when switching between PRs. Each PR is fetched only once.&lt;/p&gt;

&lt;p&gt;The cache isn't unlimited; when the maximum entry count is reached, the oldest entries are evicted.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ushironoko/octorus/blob/main/src/cache.rs#L98-L104" rel="noopener noreferrer"&gt;src/cache.rs#L98-L104&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Background Processing for Diff Cache Construction&lt;/h3&gt;

&lt;p&gt;When a PR is opened and data arrives, diff parsing and syntax highlighting begin asynchronously in the background. By the time the user actually opens a diff, most of the work is already done — making the perceived latency effectively 0ms. This is the single biggest win for user experience.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ushironoko/octorus/blob/main/src/app.rs#L792-L831" rel="noopener noreferrer"&gt;src/app.rs#L792-L831&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;DiffCache Construction&lt;/h3&gt;

&lt;p&gt;The background processing aims to finish before the user opens a diff, but what happens when the diff is enormous — hundreds of thousands of lines — or when the language has complex syntax (like Haskell), making highlighting significantly heavier? Blocking the user until processing completes would be a terrible experience.&lt;/p&gt;

&lt;p&gt;To solve this, octorus can display diffs in a plain (unhighlighted) state while highlighting is still in progress. Once highlighting completes, the view seamlessly transitions to the fully highlighted version. Here's an example with a 300K-line diff:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7kcex2ez6tq2ijajj8p1.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7kcex2ez6tq2ijajj8p1.gif" alt=" " width="720" height="599"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The cache exists in two locations: the &lt;strong&gt;active display cache&lt;/strong&gt; (the file currently being viewed) and the &lt;strong&gt;prefetched standby store&lt;/strong&gt; (pre-built in the background).&lt;/p&gt;

&lt;p&gt;When a file is selected, the system first checks if the active cache can be reused (e.g., the user is just scrolling within the same file). If not, it pulls from the standby store (if prefetching finished in time). If neither is available, it builds the cache on the spot.&lt;/p&gt;

&lt;p&gt;When switching from File A to File B, the &lt;code&gt;diff_cache&lt;/code&gt; is replaced with File B's cache. File A's cache remains in the standby store, so switching back to File A hits the standby store and restores instantly.&lt;/p&gt;

&lt;p&gt;This cache is scoped per PR. Unlike the session-level API cache, it's discarded when switching PRs. Since octorus also supports opening a PR directly by number, this design keeps the overall behavior consistent — &lt;code&gt;diff_cache&lt;/code&gt; is bound to a single PR's lifetime.&lt;/p&gt;
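&lt;p&gt;Put together, the lookup order reads roughly like this. A simplified sketch: &lt;code&gt;DiffCache&lt;/code&gt; and &lt;code&gt;Viewer&lt;/code&gt; here are stand-in types, not octorus's real ones:&lt;/p&gt;

```rust
use std::collections::HashMap;

struct DiffCache {
    file: String, // stand-in for parsed + highlighted diff lines
}

struct Viewer {
    active: Option<DiffCache>,           // file currently on screen
    standby: HashMap<String, DiffCache>, // prefetched per-file caches
}

impl Viewer {
    fn open(&mut self, file: &str) -> &DiffCache {
        // Stage 1: reuse the active cache (same file, e.g. just scrolling).
        let reuse = matches!(&self.active, Some(c) if c.file == file);
        if !reuse {
            // Stage 2: pull a prefetched cache from the standby store.
            let cache = self
                .standby
                .remove(file)
                // Stage 3: neither available, build on the spot.
                .unwrap_or_else(|| DiffCache { file: file.to_string() });
            // Keep the replaced cache around so switching back is instant.
            if let Some(prev) = self.active.replace(cache) {
                self.standby.insert(prev.file.clone(), prev);
            }
        }
        self.active.as_ref().unwrap()
    }
}
```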

&lt;h3&gt;Efficient Highlighting via CST + Semantic Boundary-Level Interning&lt;/h3&gt;

&lt;p&gt;So far I've covered caching of diff data itself. Now let's talk about optimizing the highlighting process.&lt;/p&gt;

&lt;p&gt;Each line in &lt;code&gt;DiffCache&lt;/code&gt; is stored as a sequence of styled &lt;code&gt;Span&lt;/code&gt;s. If each Span naively held a &lt;code&gt;String&lt;/code&gt;, every occurrence of the same token would trigger a separate allocation. To avoid this, I adopted &lt;a href="https://crates.io/crates/lasso" rel="noopener noreferrer"&gt;&lt;code&gt;lasso::Rodeo&lt;/code&gt;&lt;/a&gt;, a string interner.&lt;/p&gt;

&lt;p&gt;An interner returns the same reference for identical strings. So even if &lt;code&gt;let&lt;/code&gt; appears hundreds of times, only one copy exists in memory.&lt;/p&gt;
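&lt;p&gt;The mechanics can be sketched with a toy interner over &lt;code&gt;std&lt;/code&gt; — a stand-in for &lt;code&gt;lasso::Rodeo&lt;/code&gt;, which is faster and arena-backed:&lt;/p&gt;

```rust
use std::collections::HashMap;

/// 4-byte key, analogous to lasso's `Spur`.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Key(u32);

#[derive(Default)]
struct Interner {
    map: HashMap<String, Key>,
    strings: Vec<String>,
}

impl Interner {
    /// Identical strings always map to the same key: "let" is stored once
    /// no matter how many times it appears across the diff.
    fn intern(&mut self, s: &str) -> Key {
        if let Some(&k) = self.map.get(s) {
            return k; // already interned, no new allocation
        }
        let k = Key(self.strings.len() as u32);
        self.strings.push(s.to_string());
        self.map.insert(s.to_string(), k);
        k
    }

    fn resolve(&self, k: Key) -> &str {
        &self.strings[k.0 as usize]
    }
}
```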

&lt;p&gt;A typical &lt;code&gt;String&lt;/code&gt; takes 24 bytes (pointer + length + capacity) plus ~8 bytes for the highlight style — about 32 bytes total. A &lt;code&gt;lasso::Rodeo&lt;/code&gt; reference (&lt;code&gt;Spur&lt;/code&gt;) is just 4 bytes.&lt;/p&gt;

&lt;p&gt;This reduces not only per-Span size but also eliminates duplication. For a 1,000-line diff where &lt;code&gt;let&lt;/code&gt; appears 200 times:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;code&gt;String&lt;/code&gt;&lt;/th&gt;
&lt;th&gt;&lt;code&gt;Rodeo&lt;/code&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Reference × 200&lt;/td&gt;
&lt;td&gt;24 B × 200 = 4,800 B&lt;/td&gt;
&lt;td&gt;4 B × 200 = 800 B&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;String body&lt;/td&gt;
&lt;td&gt;3 B × 200 = 600 B&lt;/td&gt;
&lt;td&gt;3 B × 1 = 3 B&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Total&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;5,400 B&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;803 B&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;However, the effectiveness of interning depends heavily on granularity. Interning entire lines yields near-zero deduplication; interning individual characters makes the management overhead dominate.&lt;/p&gt;

&lt;p&gt;The key insight is to &lt;strong&gt;reuse &lt;a href="https://github.com/tree-sitter/tree-sitter" rel="noopener noreferrer"&gt;tree-sitter&lt;/a&gt; captures&lt;/strong&gt; as the interning boundary. octorus parses source code extracted from diffs using tree-sitter (the same engine used in Zed, Helix, and other editors).&lt;/p&gt;

&lt;p&gt;tree-sitter parses source code into a CST (Concrete Syntax Tree) and returns captures like &lt;code&gt;@keyword&lt;/code&gt;, &lt;code&gt;@function.call&lt;/code&gt;, etc. These correspond precisely to semantic units of programming languages (&lt;code&gt;fn&lt;/code&gt;, &lt;code&gt;let&lt;/code&gt;, &lt;code&gt;{&lt;/code&gt;, ...) — making them an ideal granularity for interning.&lt;/p&gt;

&lt;p&gt;In other words, &lt;strong&gt;tree-sitter provides both the style information for highlighting and the optimal split boundaries for interning&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;During initial cache construction, a Rodeo for plain diff is initialized, and tokens like &lt;code&gt;+&lt;/code&gt; and &lt;code&gt;-&lt;/code&gt; are interned with their fixed colors. This is what enables the "display plain first" behavior mentioned earlier. Meanwhile, a highlighted Rodeo is built in the background.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ushironoko/octorus/blob/main/src/ui/diff_view.rs#L32-L91" rel="noopener noreferrer"&gt;src/ui/diff_view.rs#L32-L91&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since Rodeo internally uses an arena allocator, there's no need for individual drops — freeing the arena frees all interned strings at once. Furthermore, the Rodeo is moved into &lt;code&gt;DiffCache&lt;/code&gt;, so it's bound to the DiffCache's lifetime. When the cache is dropped, all interning data is cleanly released. The fact that tree-sitter parse/query frequency equals Rodeo cache construction frequency is another nice alignment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ushironoko/octorus/blob/main/src/app.rs#L88-L99" rel="noopener noreferrer"&gt;src/app.rs#L88-L99&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;Resolving Overlapping Captures&lt;/h4&gt;

&lt;p&gt;tree-sitter captures are returned per syntax tree node, so parent and child nodes can overlap in range. For example, in &lt;code&gt;#[derive(Debug, Clone)]&lt;/code&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;@attribute&lt;/code&gt; covers the entire range &lt;code&gt;[0..23)&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;@constructor&lt;/code&gt; individually captures &lt;code&gt;Debug [9..14)&lt;/code&gt; and &lt;code&gt;Clone [16..21)&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If captures were processed naively from the start, &lt;code&gt;@attribute&lt;/code&gt;'s style would advance the cursor to position 23, and the inner &lt;code&gt;@constructor&lt;/code&gt; captures would be missed entirely.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[0..23)  @attribute            "#[derive(Debug, Clone)]"  ← style applied to entire range
[1..2)   @punctuation.bracket  "["                         ← nested
[8..9)   @punctuation.bracket  "("                         ← nested
[9..14)  @constructor          "Debug"                     ← nested
[16..21) @constructor          "Clone"                     ← nested
[21..22) @punctuation.bracket  ")"                         ← nested
[22..23) @punctuation.bracket  "]"                         ← nested
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The solution: generate independent &lt;strong&gt;start/end events&lt;/strong&gt; for each capture, sort them by position, and sweep left-to-right in a single pass. Active captures are managed on a stack, so the innermost (most specific) style always takes priority.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(0,  start, @attribute)
(9,  start, @constructor) ← takes priority
(14, end,   @constructor)
             ↓ falls back to @attribute
(16, start, @constructor) ← takes priority again
(21, end,   @constructor)
(23, end,   @attribute)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The time complexity is O(m log m) where m is the number of captures — independent of line length. For minified JS with extremely long lines, this scales only with capture count, not byte length. A naive byte-map approach would require O(n) memory and traversal for line length n, so the gap widens with longer lines.&lt;/p&gt;
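&lt;p&gt;The sweep can be sketched as follows. Simplified: captures are &lt;code&gt;(start, end, style)&lt;/code&gt; triples and the output is a list of non-overlapping styled ranges; the real code additionally interns the resulting spans:&lt;/p&gt;

```rust
// Each capture yields independent start/end events; sorting them and sweeping
// left-to-right with a stack keeps the innermost capture on top. Cost is
// O(m log m) in the number of captures m, independent of line length.
#[derive(Clone, Copy)]
enum Ev {
    End,   // sorts before Start at the same position
    Start,
}

fn resolve_styles(
    captures: &[(usize, usize, &'static str)],
) -> Vec<(usize, usize, &'static str)> {
    let mut events = Vec::with_capacity(captures.len() * 2);
    for (i, &(start, end, _)) in captures.iter().enumerate() {
        events.push((start, Ev::Start, i));
        events.push((end, Ev::End, i));
    }
    events.sort_by_key(|&(pos, ev, _)| (pos, ev as usize));

    let mut stack: Vec<usize> = Vec::new(); // active captures, innermost last
    let mut spans = Vec::new();
    let mut cursor = 0;
    for (pos, ev, idx) in events {
        // Emit the pending range under whichever capture is currently innermost.
        if pos > cursor {
            if let Some(&top) = stack.last() {
                spans.push((cursor, pos, captures[top].2));
            }
            cursor = pos;
        }
        match ev {
            Ev::Start => stack.push(idx),
            Ev::End => stack.retain(|&i| i != idx),
        }
    }
    spans
}
```

&lt;p&gt;Running this on the &lt;code&gt;@attribute&lt;/code&gt;/&lt;code&gt;@constructor&lt;/code&gt; example above yields alternating attribute/constructor ranges, with the inner captures winning inside their spans.&lt;/p&gt;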

&lt;h3&gt;Parser and Query Caching&lt;/h3&gt;

&lt;p&gt;Some languages don't map 1:1 from file extension to a single parser/highlight query. Vue and Svelte are prime examples — their Single File Components combine HTML, JS, and CSS in one file.&lt;/p&gt;

&lt;p&gt;This means highlighting a single file requires initializing 3 parsers/queries. If a PR contains 50 &lt;code&gt;.vue&lt;/code&gt; or &lt;code&gt;.svelte&lt;/code&gt; files, that's 150 initializations.&lt;/p&gt;

&lt;p&gt;To solve this, once a parser/query is created, it's stored in a &lt;code&gt;ParserPool&lt;/code&gt; cache shared across all files. No matter how many files there are, only 3 initializations are needed. Given that some query compilations involve nearly 100KB of data, this is a non-trivial optimization.&lt;/p&gt;
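&lt;p&gt;The pool boils down to a lazily filled map keyed by language. A sketch — the stand-in &lt;code&gt;Parser&lt;/code&gt; struct replaces the real tree-sitter parser plus its compiled query:&lt;/p&gt;

```rust
use std::collections::HashMap;

// Stand-in for a tree-sitter parser plus its compiled highlight query.
struct Parser {
    lang: String,
}

// Shared across all files: each language pays its (potentially ~100KB)
// query compilation cost exactly once.
struct ParserPool {
    parsers: HashMap<String, Parser>,
    init_count: usize, // counts how many expensive initializations ran
}

impl ParserPool {
    fn new() -> Self {
        Self { parsers: HashMap::new(), init_count: 0 }
    }

    fn get(&mut self, lang: &str) -> &Parser {
        if !self.parsers.contains_key(lang) {
            self.init_count += 1; // expensive: load grammar, compile query
            self.parsers.insert(lang.to_string(), Parser { lang: lang.to_string() });
        }
        &self.parsers[lang]
    }
}
```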

&lt;h2&gt;Other Optimizations&lt;/h2&gt;

&lt;p&gt;Beyond the multi-layer cache, several smaller optimizations contribute to the overall experience.&lt;/p&gt;

&lt;h3&gt;Viewport-Restricted Rendering&lt;/h3&gt;

&lt;p&gt;octorus uses the &lt;a href="https://github.com/ratatui/ratatui" rel="noopener noreferrer"&gt;ratatui&lt;/a&gt; crate for TUI rendering.&lt;/p&gt;

&lt;p&gt;Rather than rendering all lines, only the visible range is sliced and passed to ratatui. Pre-rendering transformations (Span → Line conversion) and Rodeo string lookups are also limited to this range. Simple, but more directly impactful on perceived performance than something like ParserPool.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ushironoko/octorus/blob/main/src/ui/diff_view.rs#L644-L655" rel="noopener noreferrer"&gt;src/ui/diff_view.rs#L644-L655&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Lazy Composition of Comment Markers&lt;/h3&gt;

&lt;p&gt;Comment data is intentionally excluded from the cache. In octorus, comments are fetched after the diff data, so their markers are overlaid at render time via iterator composition.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ushironoko/octorus/blob/main/src/ui/diff_view.rs#L520-L529" rel="noopener noreferrer"&gt;src/ui/diff_view.rs#L520-L529&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As a result, comment markers appear slightly after the diff viewer opens (noticeable on very large diffs).&lt;/p&gt;
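&lt;p&gt;The composition pattern looks roughly like this. A sketch — the marker text and data shapes are invented for illustration:&lt;/p&gt;

```rust
use std::collections::HashMap;

// Comment markers are not baked into the diff cache; they're overlaid lazily,
// per visible line, while rendering. Nothing is allocated for lines that are
// never drawn.
fn render_with_markers<'a>(
    visible_lines: &'a [&'a str],
    first_visible: usize,
    comment_counts: &'a HashMap<usize, usize>, // absolute line -> comment count
) -> impl Iterator<Item = String> + 'a {
    visible_lines.iter().enumerate().map(move |(offset, line)| {
        match comment_counts.get(&(first_visible + offset)) {
            Some(n) => format!("[{n}] {line}"),
            None => format!("    {line}"),
        }
    })
}
```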

&lt;h3&gt;No Moves Between Cache Construction and Rendering&lt;/h3&gt;

&lt;p&gt;The Rodeo is moved into DiffCache during cache construction, but after that, everything through rendering is purely borrowed. As mentioned earlier, since the Rodeo is owned by DiffCache, dropping the cache drops all interning data — guaranteeing no lifetime leaks across the entire pipeline. This is less of an "optimization" and more of a strength of Rust's ownership system.&lt;/p&gt;

&lt;h2&gt;Closing Thoughts&lt;/h2&gt;

&lt;p&gt;By choosing Rust, octorus has been able to introduce optimizations incrementally. None of the techniques described here were introduced all at once — they were spread across dozens of PRs. The ability to start with a naive implementation for correctness and layer in zero-copy and multi-stage caching later is a testament to Rust's scalability.&lt;/p&gt;

&lt;p&gt;Beyond raw speed, octorus also features &lt;strong&gt;AI-Rally&lt;/strong&gt;, a powerful AI-assisted review capability. Give it a try!&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://github.com/ushironoko/octorus" rel="noopener noreferrer"&gt;github.com/ushironoko/octorus&lt;/a&gt;&lt;/p&gt;

</description>
      <category>rust</category>
      <category>tui</category>
      <category>performance</category>
      <category>treesitter</category>
    </item>
    <item>
      <title>octorus: A Rust-built TUI tool where AI autonomously reviews &amp; fixes code while you browse PR diffs</title>
      <dc:creator>ushironoko</dc:creator>
      <pubDate>Fri, 13 Feb 2026 07:03:39 +0000</pubDate>
      <link>https://forem.com/ushironoko/octorus-a-rust-built-tui-tool-where-ai-autonomously-reviews-fixes-code-while-you-browse-pr-diffs-5gh9</link>
      <guid>https://forem.com/ushironoko/octorus-a-rust-built-tui-tool-where-ai-autonomously-reviews-fixes-code-while-you-browse-pr-diffs-5gh9</guid>
      <description>&lt;p&gt;&lt;a href="https://github.com/ushironoko/octorus" rel="noopener noreferrer"&gt;https://github.com/ushironoko/octorus&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I used to rely on &lt;a href="https://github.com/pwntester/octo.nvim" rel="noopener noreferrer"&gt;octo.nvim&lt;/a&gt; for reviewing PRs, but after switching to &lt;a href="https://helix-editor.com/" rel="noopener noreferrer"&gt;Helix&lt;/a&gt;, I needed a TUI-based PR viewer as a replacement. Nothing quite fit, so I built one myself. It supports inline review comments and syntax highlighting on diffs, which alone makes it pretty handy.&lt;/p&gt;

&lt;p&gt;But in modern development workflows, it's increasingly common to have AI write code and then use skills or commands to review it. So I also built a feature called &lt;strong&gt;AI-Rally&lt;/strong&gt; — kick it off with a single keystroke, and two AI agents take turns reviewing and fixing code until they're satisfied.&lt;/p&gt;

&lt;p&gt;You can have AI-Rally running in the background while you browse the PR diff and leave your own reviews. It also picks up bot reviews from services like CodeRabbit and Copilot and addresses them accordingly.&lt;/p&gt;

&lt;h2&gt;Usage&lt;/h2&gt;

&lt;p&gt;Install:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cargo &lt;span class="nb"&gt;install &lt;/span&gt;octorus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Initialize:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;or init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;or &lt;span class="nt"&gt;--repo&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;org_name&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;/&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;repo_name&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--pr&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;pr_number&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If that's too much hassle, you can simply run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;or
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This launches the PR diff viewer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm0cnsfgq2xrzu1lc9qi1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm0cnsfgq2xrzu1lc9qi1.png" alt=" " width="800" height="562"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With octorus, you can browse PR diffs, leave inline comments/suggestions, and view all comments at a glance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6hgi5co6di06ge6keofm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6hgi5co6di06ge6keofm.png" alt="inline comments" width="800" height="597"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Comments on the PR are listed separately — review comments and discussion comments are displayed in distinct sections.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs8stl53sqii1yes1pd0e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs8stl53sqii1yes1pd0e.png" alt="comment list" width="800" height="165"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So far it's a straightforward diff viewer. Here's where it gets interesting — press &lt;code&gt;A&lt;/code&gt;. Two AI agents split into a &lt;strong&gt;reviewer&lt;/strong&gt; and a &lt;strong&gt;reviewee&lt;/strong&gt;, and they start a back-and-forth cycle of reviewing and fixing (up to the configured &lt;code&gt;max_iterations&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feo809y8z0a8hftyrs4pl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feo809y8z0a8hftyrs4pl.png" alt="AI-Rally in action" width="800" height="861"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Press &lt;code&gt;b&lt;/code&gt; to send the rally to the background so you can continue your own review. As the agents finish reviewing or fixing, they automatically post comments and push commits to the PR. You can customize this behavior through custom prompts.&lt;/p&gt;

&lt;h2&gt;Configuration&lt;/h2&gt;

&lt;p&gt;Running &lt;code&gt;or init&lt;/code&gt; generates a config file and custom prompt markdown files under &lt;code&gt;~/.config/octorus/&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Here's the default configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="py"&gt;editor&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"vi"&lt;/span&gt; &lt;span class="c"&gt;# Editor launched for writing PR comments (nvim, hx, etc.)&lt;/span&gt;

&lt;span class="nn"&gt;[diff]&lt;/span&gt;
&lt;span class="py"&gt;theme&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"base16-ocean.dark"&lt;/span&gt; &lt;span class="c"&gt;# Syntax highlighting theme for the diff viewer&lt;/span&gt;

&lt;span class="nn"&gt;[keybindings]&lt;/span&gt;
&lt;span class="py"&gt;approve&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"a"&lt;/span&gt;
&lt;span class="py"&gt;request_changes&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"r"&lt;/span&gt;
&lt;span class="py"&gt;comment&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"c"&lt;/span&gt;
&lt;span class="py"&gt;suggestion&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"s"&lt;/span&gt;

&lt;span class="nn"&gt;[ai]&lt;/span&gt;
&lt;span class="py"&gt;reviewer&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"claude"&lt;/span&gt; &lt;span class="c"&gt;# claude or codex — must be installed on your machine&lt;/span&gt;
&lt;span class="py"&gt;reviewee&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"claude"&lt;/span&gt;
&lt;span class="py"&gt;max_iterations&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;
&lt;span class="py"&gt;timeout_secs&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;600&lt;/span&gt;

&lt;span class="c"&gt;# Custom prompt directory (default: ~/.config/octorus/prompts/)&lt;/span&gt;
&lt;span class="c"&gt;# prompt_dir = "/custom/path/to/prompts"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Both Claude and Codex are launched in headless mode. This means &lt;a href="https://docs.anthropic.com/en/docs/agents-and-tools/claude-code/overview" rel="noopener noreferrer"&gt;Claude Code&lt;/a&gt; or &lt;a href="https://github.com/openai/codex" rel="noopener noreferrer"&gt;Codex CLI&lt;/a&gt; must be installed on your machine. Personally, I use Codex as the reviewer and Claude Code as the reviewee.&lt;/p&gt;

&lt;p&gt;Custom prompts are generated at:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/.config/octorus/prompts/
├── reviewer.md    # Prompt for the reviewer agent
├── reviewee.md    # Prompt for the reviewee agent
└── rereview.md    # Prompt for re-review iterations
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Closing thoughts&lt;/h2&gt;

&lt;p&gt;I'd occasionally see people on X (Twitter) having AI agents reply to each other, and I always thought it'd be cool to apply that to code review. When I happened to build a TUI diff viewer, it turned out to be the perfect home for this idea. I'm sure many people have built similar workflows with custom skills, but I like that octorus lets you start one just by pressing &lt;code&gt;A&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;If you find it interesting, I'd appreciate a star on GitHub — or even better, just give it a try.&lt;/p&gt;

&lt;p&gt;Fun fact: the AI-Rally finished its run while I was writing this article. Convenient, right?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F83bzkl8cmhjht5crhjrq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F83bzkl8cmhjht5crhjrq.png" alt="AI-Rally completed" width="800" height="839"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you're interested, the following article explains octorus's diff rendering performance in detail.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/ushironoko/how-octorus-renders-300k-lines-of-diff-at-high-speed-h4p"&gt;https://dev.to/ushironoko/how-octorus-renders-300k-lines-of-diff-at-high-speed-h4p&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>rust</category>
      <category>tui</category>
      <category>cli</category>
    </item>
  </channel>
</rss>
