<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: huoru</title>
    <description>The latest articles on Forem by huoru (@huoru).</description>
    <link>https://forem.com/huoru</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F251582%2F443f776f-8a81-4a3b-b8dc-1381d83eb1c5.jpg</url>
      <title>Forem: huoru</title>
      <link>https://forem.com/huoru</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/huoru"/>
    <language>en</language>
    <item>
      <title>Last month I watched Claude Code confidently rebuild a Redis queue my team had abandoned three weeks earlier.

The decision lived in a Slack thread, two PR comments, and three engineers' heads. Code search saw the files. It couldn't see the decision.</title>
      <dc:creator>huoru</dc:creator>
      <pubDate>Thu, 07 May 2026 02:40:35 +0000</pubDate>
      <link>https://forem.com/huoru/last-month-i-watched-claude-code-confidently-rebuild-a-redis-queue-my-team-had-abandoned-three-5895</link>
      <guid>https://forem.com/huoru/last-month-i-watched-claude-code-confidently-rebuild-a-redis-queue-my-team-had-abandoned-three-5895</guid>
      <description>&lt;p&gt;&lt;a href="https://dev.to/huoru/we-have-code-review-we-need-intent-review-1i38"&gt;We Have Code Review. We Need Intent Review.&lt;/a&gt; (9 min read)&lt;/p&gt;</description>
      <category>ai</category>
      <category>claude</category>
      <category>productivity</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>We Have Code Review. We Need Intent Review.</title>
      <dc:creator>huoru</dc:creator>
      <pubDate>Thu, 07 May 2026 02:22:52 +0000</pubDate>
      <link>https://forem.com/huoru/we-have-code-review-we-need-intent-review-1i38</link>
      <guid>https://forem.com/huoru/we-have-code-review-we-need-intent-review-1i38</guid>
      <description>&lt;h1&gt;We Have Code Review. We Need Intent Review.&lt;/h1&gt;

&lt;p&gt;Last month I watched Claude Code confidently rebuild a Redis queue that my team had abandoned three weeks earlier.&lt;/p&gt;

&lt;p&gt;The repo had &lt;code&gt;redis.go&lt;/code&gt;, TODOs scattered around, and &lt;code&gt;redis&lt;/code&gt; configured in &lt;code&gt;docker-compose.yml&lt;/code&gt;. Claude saw all of this and reasonably wanted to finish what looked like a half-built feature.&lt;/p&gt;

&lt;p&gt;The problem: we'd already decided not to use Redis. Replication lag had been causing duplicate billing events, and the team made the call to rip it out. That decision lived in a Slack thread, a couple of PR comments, and the heads of three engineers.&lt;/p&gt;

&lt;p&gt;Code search saw the Redis files. It couldn't see the decision.&lt;/p&gt;

&lt;p&gt;This is a specific failure mode I keep running into, and the more I look at it, the more I think we're missing an entire layer of review in our AI-assisted workflows.&lt;/p&gt;

&lt;p&gt;We have code review. We need intent review.&lt;/p&gt;

&lt;h2&gt;The Problem Isn't What the AI Wrote&lt;/h2&gt;

&lt;p&gt;Most discussions about AI coding focus on the output: did it produce good code? Was the architecture sound? Are the tests passing?&lt;/p&gt;

&lt;p&gt;But the failure mode I'm describing is different. The agent isn't writing &lt;em&gt;bad&lt;/em&gt; code. It's writing &lt;em&gt;reasonable&lt;/em&gt; code based on the wrong historical premise.&lt;/p&gt;

&lt;p&gt;Think about how a senior engineer behaves when they touch unfamiliar code. They don't just read the function signature and start typing. They:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check the commit history to understand why it was written&lt;/li&gt;
&lt;li&gt;Look for related PRs that touched this area&lt;/li&gt;
&lt;li&gt;Ask the team in Slack: "anyone know why we did it this way?"&lt;/li&gt;
&lt;li&gt;Pause before changing patterns that look intentional but unfamiliar&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This isn't because senior engineers are smarter than AI agents. It's because they have &lt;strong&gt;humility toward unfamiliar code&lt;/strong&gt;. They've been burned enough times to know that what looks like dead weight is sometimes load-bearing, and what looks like a half-built feature is sometimes an explicitly abandoned approach.&lt;/p&gt;

&lt;p&gt;AI agents don't have this humility. They're optimized for agreement, not interrogation. If the code looks half-built, the agreeable thing to do is finish it. If a TODO looks reasonable, the agreeable thing is to address it. The agent extends rather than questions.&lt;/p&gt;

&lt;p&gt;This works fine for greenfield code. It breaks badly in any codebase older than a few weeks.&lt;/p&gt;

&lt;h2&gt;Why Existing Solutions Don't Solve This&lt;/h2&gt;

&lt;p&gt;When I describe this problem, people usually point to existing tools. Each of these has a real role, but none of them fully addresses what I'm calling intent review.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AGENTS.md / CLAUDE.md&lt;/strong&gt; — These work for the not-todos you can anticipate. You can write "don't use Redis" once you know Redis is dead. But what about the decision you'll make next week? Or the one your colleague made yesterday? AGENTS.md only covers the past you've already documented. New decisions don't write themselves into the file.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ADRs / RFCs&lt;/strong&gt; — Heavy enough that most teams stop maintaining them after the first quarter. Even when maintained, they're written for human readers, not for agents querying contextually before a code change.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Wiki / Notion / Confluence&lt;/strong&gt; — Documentation drifts from code. The wiki page from six months ago still says "we use Redis." Agents don't naturally query wikis the way they query code, and even if they did, they'd find a free-form page that doesn't structure decisions vs. risks vs. rejected alternatives.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PR descriptions&lt;/strong&gt; — Buried in GitHub. Searchable in theory, ignored in practice. Agents don't proactively pull PR descriptions for related code regions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agent harnesses with built-in memory&lt;/strong&gt; — Tools like jcode capture session memory automatically and inject it into future sessions. This works within the harness. But the moment you switch agents, change tools, or have a teammate who uses a different setup, the memory is gone. You're locked into the harness.&lt;/p&gt;

&lt;p&gt;Each of these is a partial answer to a real problem. None of them gives an agent reliable, structured access to your team's actual historical decisions before it touches unfamiliar code.&lt;/p&gt;

&lt;h2&gt;Code Review vs. Intent Review&lt;/h2&gt;

&lt;p&gt;Code review answers: &lt;strong&gt;"Is this change well-implemented?"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Reviewers look at the diff. They check correctness, style, test coverage. They might catch a bug or suggest a better pattern. This is essential, and it works.&lt;/p&gt;

&lt;p&gt;But code review is a poor place to ask: &lt;strong&gt;"Is this change well-conceived in light of what the team already decided?"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That question requires the reviewer to know the entire decision history of the codebase. To remember why a particular pattern exists. To recall that Redis was abandoned three weeks ago for replication reasons. Most reviewers don't have this context. Even the original author often doesn't, six months later.&lt;/p&gt;

&lt;p&gt;Intent review is a different kind of review. It happens &lt;strong&gt;before&lt;/strong&gt; code is written, not after. It asks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What does the team already know about this area?&lt;/li&gt;
&lt;li&gt;What decisions were made, and why?&lt;/li&gt;
&lt;li&gt;What was considered and rejected, and for what reasons?&lt;/li&gt;
&lt;li&gt;What risks were identified that haven't been mitigated yet?&lt;/li&gt;
&lt;li&gt;What architectural claims are load-bearing?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For a human engineer, this kind of review happens informally—a quick Slack message, a glance at the commit log, a 30-second conversation with the person who wrote the original code.&lt;/p&gt;

&lt;p&gt;For an AI agent, it doesn't happen at all. There's no equivalent to "let me ask the team." The agent reads the current state of the code and infers from there. The historical signal isn't accessible.&lt;/p&gt;

&lt;h2&gt;What Intent Review Looks Like&lt;/h2&gt;

&lt;p&gt;Intent review needs three properties to actually work:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Decisions need to be structured, not free-form.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A wiki page that says "we decided to use JWT" is fine for human reading. An agent needs to know: what was decided, what was rejected, why, what risks were identified, what files this touches, what subsystems it affects, what claims about the architecture were load-bearing.&lt;/p&gt;

&lt;p&gt;Free-form prose hides this structure. An agent has to re-parse the entire document every time. Structured records let agents query specifically: "show me decisions affecting auth that mention session handling."&lt;/p&gt;
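&lt;p&gt;To make "structured" concrete, here is a minimal sketch in Python. The field names (&lt;code&gt;area&lt;/code&gt;, &lt;code&gt;decided&lt;/code&gt;, &lt;code&gt;rejected&lt;/code&gt;, &lt;code&gt;mentions&lt;/code&gt;) are illustrative assumptions on my part, not any tool's actual schema:&lt;/p&gt;

```python
# Illustrative sketch only: the record shape and field names are
# hypothetical, chosen to show why structure beats free-form prose.
from dataclasses import dataclass, field

@dataclass
class Decision:
    area: str                                      # e.g. "auth", "queueing"
    decided: str                                   # what was chosen
    rejected: list = field(default_factory=list)   # alternatives ruled out
    why: str = ""                                  # rationale
    mentions: list = field(default_factory=list)   # free-text keywords

def query(decisions, area, keyword):
    """Return decisions for an area whose keywords mention `keyword`."""
    return [d for d in decisions
            if d.area == area and any(keyword in m for m in d.mentions)]

log = [
    Decision("auth", "use JWT", rejected=["server-side sessions"],
             why="stateless scaling", mentions=["session handling"]),
    Decision("queueing", "drop Redis", rejected=["Redis streams"],
             why="replication lag caused duplicate billing events",
             mentions=["billing", "queue"]),
]

# "show me decisions affecting auth that mention session handling"
hits = query(log, "auth", "session")
```

&lt;p&gt;A free-form wiki page can answer the same question only by an agent re-reading the whole page; the structured form answers it with one filter.&lt;/p&gt;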

&lt;p&gt;&lt;strong&gt;2. Decisions need to live near the code, not in a separate system.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Wiki pages drift from code. Notion pages get forgotten. Slack threads get buried. The only thing that reliably stays connected to code is git itself.&lt;/p&gt;

&lt;p&gt;If decisions live in git, they survive everything that affects code: clones, forks, branches, time. Six months from now when the agent is touching this area, the decision is right there—same place as the code, fetched together, queryable in the same workflow.&lt;/p&gt;
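&lt;p&gt;As a rough sketch of what git-native storage can look like (illustrative, and not necessarily Mainline's real mechanism; only the &lt;code&gt;refs/notes/mainline/intents&lt;/code&gt; ref name comes from the layout described later), git notes under a dedicated ref already give you commit-attached records that survive clones and fetches:&lt;/p&gt;

```python
# Sketch: store a structured intent record as a git note under a
# dedicated ref, so it travels with the code. The record shape is a
# hypothetical example, not an actual schema.
import json
import subprocess

INTENTS_REF = "refs/notes/mainline/intents"

def sh(*args, cwd):
    """Run a command in `cwd` and return its stdout."""
    return subprocess.run(args, cwd=cwd, check=True,
                          capture_output=True, text=True).stdout

def seal_intent(repo, commit, record):
    # -f overwrites an existing note; real sealing would supersede instead.
    sh("git", "notes", f"--ref={INTENTS_REF}", "add", "-f",
       "-m", json.dumps(record), commit, cwd=repo)

def read_intent(repo, commit):
    return json.loads(
        sh("git", "notes", f"--ref={INTENTS_REF}", "show", commit, cwd=repo))
```

&lt;p&gt;Because the record rides on a notes ref, pushing and fetching that ref is all the synchronization a teammate's agent needs.&lt;/p&gt;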

&lt;p&gt;&lt;strong&gt;3. The query must happen automatically, before the change.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If the agent has to be reminded "check the wiki first," it won't. If decisions live in a sidebar that requires explicit lookup, agents skip it. The query has to be part of the agent's normal workflow—the same way it greps for symbol definitions before refactoring.&lt;/p&gt;

&lt;p&gt;This is the hard part. An agent that already runs &lt;code&gt;grep&lt;/code&gt; and &lt;code&gt;read_file&lt;/code&gt; before editing should also run something like &lt;code&gt;intent_context&lt;/code&gt; before editing. The friction of looking up decisions has to be lower than the friction of guessing.&lt;/p&gt;

&lt;h2&gt;What I'm Building&lt;/h2&gt;

&lt;p&gt;I've been working on this with a tool called Mainline. It records team intents and decisions as structured records in git itself, where any agent can query them before making changes.&lt;/p&gt;

&lt;p&gt;Architecturally:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;refs/heads/_mainline/actor/&amp;lt;id&amp;gt;   # per-developer append-only log
refs/notes/mainline/intents       # links between commits and intents
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;



&lt;p&gt;Each sealed intent contains:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;summary.what&lt;/code&gt; and &lt;code&gt;summary.why&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;decisions[]&lt;/code&gt; with rationale and rejected alternatives&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;risks[]&lt;/code&gt; with mitigations&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;fingerprint&lt;/code&gt; covering touched files, subsystems, and architectural claims&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Before an agent changes code, it runs something like &lt;code&gt;mainline context auth&lt;/code&gt; and pulls structured records about past decisions affecting the auth area. After completing work, it seals a new intent with what was decided, what was considered, and what risks remain.&lt;/p&gt;

&lt;p&gt;A few design choices that took me a while to settle on:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Process-based CLI, not a daemon.&lt;/strong&gt; I considered a background daemon that would auto-capture activity. I looked at how that played out in similar tools (git-ai's daemon hits issues with macOS sleep, socket handling, zombie processes). Git itself is already a battle-tested protocol for this kind of thing. I didn't need to reinvent it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Intent-level, not line-level.&lt;/strong&gt; Line-level attribution (which file, which line, by whom) sounds appealing but is fragile. A formatter run, a &lt;code&gt;git mv&lt;/code&gt;, a copy-paste, an &lt;code&gt;--amend&lt;/code&gt;—any of these can break line-level tracking. Intent is task-level. Text transformations don't change the semantic meaning of "we decided X for reason Y."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Explicit seal, not automatic capture.&lt;/strong&gt; Auto-capture sounds magical, but it produces a lot of noise: every keystroke and tool call gets recorded, and querying that later is harder than it looks. Explicit sealing has friction—the agent has to actively summarize what happened—but the resulting record is high-signal.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Append-only and immutable.&lt;/strong&gt; Sealed intents can't be edited, only superseded. The original "what we thought at the time" stays intact. If we changed our minds, we add a new intent that supersedes the old one. The history of how thinking evolved is preserved, not overwritten.&lt;/p&gt;
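&lt;p&gt;The supersede semantics can be sketched in a few lines (a toy shape I made up for illustration; the real records carry more fields):&lt;/p&gt;

```python
# Illustrative sketch: sealed intents are immutable; changing your mind
# appends a new record pointing back at the one it supersedes, so both
# survive in the log. IDs here are just list indices.
intents = []

def seal(what, supersedes=None):
    record = {"id": len(intents), "what": what, "supersedes": supersedes}
    intents.append(record)          # append-only: never edit in place
    return record["id"]

def current(intent_id):
    """Follow supersede links forward to the latest record."""
    latest = intent_id
    for r in intents:               # IDs only grow, so one pass suffices
        if r["supersedes"] == latest:
            latest = r["id"]
    return intents[latest]

old = seal("use Redis for the job queue")
new = seal("drop Redis; replication lag duplicated billing events",
           supersedes=old)
```

&lt;p&gt;Querying the old intent resolves forward to the latest thinking, while the original "what we thought at the time" record stays intact.&lt;/p&gt;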

&lt;p&gt;The tool is open source (Apache 2.0) and currently in private beta. It's at &lt;a href="https://mainline.sh" rel="noopener noreferrer"&gt;mainline.sh&lt;/a&gt; if you want to look. I'm not pitching here—the point of this post is the idea, not the tool. If you build a different implementation of intent review that works better, I'd be genuinely interested in seeing it.&lt;/p&gt;

&lt;h2&gt;Where This Fits&lt;/h2&gt;

&lt;p&gt;There's a lot of energy right now in what people are calling compound engineering—the idea that each completed task should make the next one easier through accumulated learning. Tools like learning loops in Claude Code, the work Every is doing, and the Ralph Loop pattern all gesture at the same insight: AI coding workflows should accumulate knowledge instead of resetting every session.&lt;/p&gt;

&lt;p&gt;I think compound engineering is heading the right direction, but it's missing a foundational layer. Most current implementations of "compound" rely on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;.claude/&lt;/code&gt; files that humans maintain manually&lt;/li&gt;
&lt;li&gt;Plain-text plans committed to the repo&lt;/li&gt;
&lt;li&gt;Vector memory inside an agent harness&lt;/li&gt;
&lt;li&gt;Retrospectives that update config files&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These work, but they share a property: the knowledge is captured somewhere &lt;em&gt;adjacent&lt;/em&gt; to git, not inside it. Switch tools, switch agents, switch teammates, and the compounding stops.&lt;/p&gt;

&lt;p&gt;Intent review—done right—is the git-native foundation that compound engineering needs. If decisions live in git refs and notes, then any agent, any tool, any teammate can participate in the compounding. The knowledge isn't trapped in one harness.&lt;/p&gt;

&lt;h2&gt;What I'm Still Figuring Out&lt;/h2&gt;

&lt;p&gt;I've been dogfooding Mainline for about a month with a small team. Some things have surprised me.&lt;/p&gt;

&lt;p&gt;The friction isn't where I expected. I thought engineers would resist writing structured seal records. They mostly didn't—it turns out the agent does the writing, and engineers just review and adjust. The actual friction is teaching agents when to seal. Too eager and you get a flood of trivial intents. Too conservative and important decisions go unrecorded.&lt;/p&gt;

&lt;p&gt;The benefit shows up later than I expected. The first week, Mainline feels like overhead. By week three, you start hitting moments where you ask "why did we do this?" and the answer is right there in the intent log. By week six, agents start using past intents as context without being asked. The compounding is real, but it's not immediate.&lt;/p&gt;

&lt;p&gt;Cross-actor coordination is harder than single-user. When it's just me, sealed intents are mostly notes-to-self. When two engineers work in the same codebase, intents become a coordination protocol. We had to add explicit conflict detection (when two intents claim incompatible architectural changes) because otherwise we'd discover the conflict at PR review time, which is too late.&lt;/p&gt;
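&lt;p&gt;The conflict check itself is simple once intents carry structured claims. A toy version (the &lt;code&gt;claims&lt;/code&gt; field and its shape are my own invention for illustration):&lt;/p&gt;

```python
# Illustrative sketch of intent-level conflict detection: an intent
# "claims" (subsystem, decision) pairs, and two intents conflict when
# they claim different decisions for the same subsystem.
def conflicts(intent_a, intent_b):
    a, b = dict(intent_a["claims"]), dict(intent_b["claims"])
    return sorted(s for s in a.keys() & b.keys() if a[s] != b[s])

mine   = {"actor": "alice", "claims": [("queueing", "postgres-backed jobs")]}
theirs = {"actor": "bob",   "claims": [("queueing", "redis streams")]}
```

&lt;p&gt;Running this at seal time, instead of waiting for PR review, is what surfaces the incompatibility while both changes are still cheap to redirect.&lt;/p&gt;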

&lt;p&gt;I don't think I've solved intent review. I think I've built one specific implementation of an idea that the field needs to develop further. Different teams will need different implementations. The structured-records-in-git approach is one path; others will work too.&lt;/p&gt;

&lt;h2&gt;What This Means for AI Coding&lt;/h2&gt;

&lt;p&gt;Code review is fundamental to software engineering. We don't ship code without review, even when the author is excellent. We don't trust raw output, even from senior engineers. We have a layer of structured human attention applied to every change.&lt;/p&gt;

&lt;p&gt;AI agents are now writing significant fractions of new code in many teams. They're not getting reviewed less—if anything, the review burden is going up, because reviewing AI-generated code is often more cognitively demanding than reviewing human code. The reviewer can't ask the AI "why did you do it this way?" the same way they'd ask a colleague. The intent isn't legible.&lt;/p&gt;

&lt;p&gt;Intent review isn't a replacement for code review. It's a complementary layer. Code review catches bugs in implementation. Intent review catches bugs in framing—the agent solving the wrong problem, finishing abandoned work, contradicting recent decisions, ignoring identified risks.&lt;/p&gt;

&lt;p&gt;Without intent review, every code review has to do double duty: check the implementation &lt;em&gt;and&lt;/em&gt; verify the framing. Most of the time, framing checks fail silently. The reviewer doesn't know that Redis was abandoned three weeks ago, or that this auth pattern was deliberately avoided, or that this risk was already identified and mitigated elsewhere. They approve the code, and the broken framing ships.&lt;/p&gt;

&lt;p&gt;I don't think this is a problem we can solve with better prompts or larger context windows or more capable models. It's a structural problem about where institutional knowledge lives. Right now it lives in heads, Slack, and PR comments—places agents don't reliably look. We need it to live somewhere agents do reliably look. For most teams, that means git itself.&lt;/p&gt;

&lt;p&gt;We have code review. We need intent review.&lt;/p&gt;

&lt;p&gt;If you're seeing this same failure mode in your team, I'd be curious to hear how you're handling it—or whether you've decided it's not a real problem. Either is useful to know.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Mainline is at &lt;a href="https://mainline.sh" rel="noopener noreferrer"&gt;mainline.sh&lt;/a&gt;. Source on &lt;a href="https://github.com/mainline-org/mainline" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;. Apache 2.0. Currently in private beta with a small group. If you have a 5+ person team using AI coding heavily and want to try it, reach out.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>claude</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>I built Hika - an AI-powered PKM tool that thinks differently about knowledge exploration</title>
      <dc:creator>huoru</dc:creator>
      <pubDate>Sun, 02 Feb 2025 04:17:02 +0000</pubDate>
      <link>https://forem.com/huoru/i-built-hika-an-ai-powered-pkm-tool-that-thinks-differently-about-knowledge-exploration-gch</link>
      <guid>https://forem.com/huoru/i-built-hika-an-ai-powered-pkm-tool-that-thinks-differently-about-knowledge-exploration-gch</guid>
      <description>&lt;p&gt;Hey PKM community! I'm one of the creators of Hika, a new AI-powered knowledge search tool, and I wanted to share why we built it and our thoughts on the future of Personal Knowledge Management.&lt;/p&gt;

&lt;h1&gt;Why We Built Hika&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://hika.fyi?utm=redditPKM" rel="noopener noreferrer"&gt;https://hika.fyi?utm=redditPKM&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As a heavy user of AI products and someone deeply interested in knowledge management tools, I noticed that while there are many AI search tools out there, they all feel surprisingly similar. Most importantly, they weren't really helping me think better or understand topics more deeply.&lt;/p&gt;

&lt;p&gt;The main issues I kept running into were:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AI models weren't really understanding what I was trying to learn&lt;/li&gt;
&lt;li&gt;Links and images were being thrown at me without actually helping me understand&lt;/li&gt;
&lt;li&gt;When I got interested in a specific part of an answer, there was no way to dig deeper&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;How Hika Is Different&lt;/h1&gt;

&lt;p&gt;Instead of following the usual Perplexity-style format (text + links + related questions + images), we decided to approach it differently:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Paragraph-Level Exploration&lt;/strong&gt;: We split answers into segments because you might be interested in several different aspects of a response. Each segment can be explored further with follow-up questions or deeper dives.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Visual Knowledge Mapping&lt;/strong&gt;: We use charts and diagrams not just to illustrate points, but to provide a completely different perspective on the information. This gives you an immediate "big picture" view while also highlighting connections between concepts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Depth Over Convenience&lt;/strong&gt;: Rather than providing "lazy answers," we focus on giving you multiple "clues" for multidimensional thinking about a single question.&lt;/li&gt;
&lt;/ol&gt;

&lt;h1&gt;Our Philosophy&lt;/h1&gt;

&lt;p&gt;We don't believe AI will completely replace human thinking anytime soon (or that it should). When people think about complex topics, they naturally organize information in a network-like structure, extracting and processing useful information according to their own unique standards. This personal processing can't be quantified, which is why current AI search tools can't give everyone a satisfying answer.&lt;/p&gt;

&lt;p&gt;That's why instead of trying to give complete, one-shot answers, we focused on making Hika really good at deep information exploration. In other words, we want to make your thinking process smarter and more efficient, not replace it.&lt;/p&gt;

&lt;h1&gt;Early Days&lt;/h1&gt;

&lt;p&gt;We're still in the early stages of development, and we know there are many areas to improve - performance, user experience, and functionality. We're actively working on iterations and improvements.&lt;/p&gt;

&lt;h1&gt;Looking for Your Thoughts&lt;/h1&gt;

&lt;p&gt;As fellow PKM enthusiasts, I'd love to hear your thoughts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How do you see AI fitting into your knowledge management process?&lt;/li&gt;
&lt;li&gt;What features would make your PKM workflow more effective?&lt;/li&gt;
&lt;li&gt;What are your biggest pain points with current AI search tools?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you're interested in trying Hika or want to share feedback, you can find us at &lt;a href="https://hika.fyi?utm=redditPKM" rel="noopener noreferrer"&gt;https://hika.fyi?utm=redditPKM&lt;/a&gt;. We're actively working on improvements and really value community input!&lt;/p&gt;

</description>
      <category>pkm</category>
    </item>
    <item>
      <title>Tired of AI Giving You 'Lazy Answers'? Here's Our Solution</title>
      <dc:creator>huoru</dc:creator>
      <pubDate>Fri, 20 Dec 2024 14:05:26 +0000</pubDate>
      <link>https://forem.com/huoru/tired-of-ai-giving-you-lazy-answers-heres-our-solution-1o4j</link>
      <guid>https://forem.com/huoru/tired-of-ai-giving-you-lazy-answers-heres-our-solution-1o4j</guid>
      <description>&lt;h1&gt;Hika AI: Rethinking Knowledge Search&lt;/h1&gt;

&lt;p&gt;Hika AI is an AI-powered "knowledge search tool" designed to help you quickly extend your knowledge in related fields or dive deep into key points of a question through Hika's "different perspectives," providing more comprehensive results when searching for information.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://hika.fyi/" rel="noopener noreferrer"&gt;hika search&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Why We Created Hika&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F61dz8oapwf6oe3wvlb3b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F61dz8oapwf6oe3wvlb3b.png" alt=" " width="800" height="471"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My partner and I are heavy users of AI products and have long been interested in knowledge and information-based products. When using AI search products, we found that despite the abundance of options, most are very similar and often feel inadequate. The reasons for this are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The underlying models aren't smart enough to truly understand our intentions. &lt;/li&gt;
&lt;li&gt;The answers provided contain many links and images, but these don't necessarily help us better understand the question. &lt;/li&gt;
&lt;li&gt;When we're interested in a particular part of the answer, we can't delve deeper.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We realized that these issues stem from the industry's shared assumptions about how AI and humans should interact, assumptions that have become the consensus for this entire category of products. So we wanted to try a different approach.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyhtytyvtt9d7xz06w9z0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyhtytyvtt9d7xz06w9z0.png" alt=" " width="800" height="484"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;How Hika Differs from Other AI Search Tools&lt;/h2&gt;

&lt;p&gt;Most existing AI search products follow the Perplexity template: a text answer plus links, related questions, and images. They read like content feeds, with exploration focused more on the form of the answer than on the substance of the question itself. Hika's approach differs in three ways:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fahx8rf7d6kpc8x3gw6ib.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fahx8rf7d6kpc8x3gw6ib.png" alt=" " width="800" height="443"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Enhanced Text Answers&lt;/em&gt;: We keep text as the main body of the answer and add targeted, professional knowledge sources, so that the basic Q&amp;amp;A remains solid.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Paragraph-Based Exploration&lt;/em&gt;: We divide answers into paragraphs because you're likely to be interested in several points within a single answer, which are usually distributed across different paragraphs. Hika can further explore these paragraphs interactively, providing associated follow-up questions or more in-depth answers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Graphical Explanations&lt;/em&gt;: We provide graphical explanations. Compared to text statements, charts themselves offer a different angle of interpretation. Hika's charts can extend related knowledge points while giving you an immediate "global perspective."&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo4fy12hii6aaahkfkya9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo4fy12hii6aaahkfkya9.png" alt=" " width="800" height="279"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, we aim to provide you with multiple dimensions of thinking "clues" for a single question, rather than a one-step "lazy answer."&lt;/p&gt;

&lt;h2&gt;Our Philosophy&lt;/h2&gt;

&lt;p&gt;We don't believe AI can fully replace human intelligence in the short term. When thinking through a problem, people organize information in a web-like structure, extracting and processing what is useful before reaching a conclusion. Everyone's processing method and standards are unique and can't be quantified, which is why current AI search products can't provide satisfactory answers to everyone.&lt;/p&gt;

&lt;p&gt;Therefore, we don't aim for one-step answers. Instead, Hika focuses on deep-diving into information, making it far more efficient to build that web of information yourself. In other words, with Hika your own thinking becomes smarter and more efficient, rather than being outsourced.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Note&lt;/em&gt;: Hika is still in the early stages of development and has rough edges in performance, user experience, and workflows. We will keep iterating on it, and we are eager to hear your voice, whether in agreement or criticism.&lt;/p&gt;

&lt;p&gt;We'd be glad to have you along for Hika's growth, and we hope you enjoy using it!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>rag</category>
      <category>webdev</category>
    </item>
  </channel>
</rss>
