<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Raio</title>
    <description>The latest articles on Forem by Raio (@raio).</description>
    <link>https://forem.com/raio</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3758827%2Fb637f3cb-8bdf-4e1a-add7-6f7cfed14fcc.png</url>
      <title>Forem: Raio</title>
      <link>https://forem.com/raio</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/raio"/>
    <language>en</language>
    <item>
      <title>They Use 5 Layers. I Use 2. Here's Why I Write Zero Code.</title>
      <dc:creator>Raio</dc:creator>
      <pubDate>Wed, 11 Feb 2026 03:02:20 +0000</pubDate>
      <link>https://forem.com/raio/they-use-5-layers-i-use-2-heres-why-i-write-zero-code-1dbg</link>
      <guid>https://forem.com/raio/they-use-5-layers-i-use-2-heres-why-i-write-zero-code-1dbg</guid>
      <description>&lt;p&gt;&lt;em&gt;A response to &lt;a href="https://x.com/rohit4verse/status/2020501497377968397" rel="noopener noreferrer"&gt;"How to be a 100x Engineer with AI"&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  I Read It, Nodded, Then Stopped to Think
&lt;/h2&gt;

&lt;p&gt;Last week, &lt;a href="https://x.com/rohit4verse" rel="noopener noreferrer"&gt;@rohit4verse&lt;/a&gt; posted a thread on what separates 100x engineers from "vibe coders" in 2026. His core argument is clear: what matters is &lt;strong&gt;ownership&lt;/strong&gt;. Plan before you execute. Verify everything. Build persistent context. Don't blindly trust AI output.&lt;/p&gt;

&lt;p&gt;I agree completely.&lt;/p&gt;

&lt;p&gt;But I arrived at the same conclusions from an entirely different world. Not web apps, not mobile apps, not PC software — &lt;strong&gt;automotive engine control system&lt;/strong&gt; design. I've spent 15 years developing motorcycle ECUs. Traction control, quickshift systems, throttle control. A world where "move fast and break things" can literally injure a rider. Agile? I've heard of it, but that word has never appeared in our development process.&lt;/p&gt;

&lt;p&gt;That different origin led me to a radically simpler stack.&lt;/p&gt;




&lt;h2&gt;
  
  
  5 Layers vs. 2 Layers
&lt;/h2&gt;

&lt;p&gt;Rohit describes the "2026 top workflow" as a 5-layer stack:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;AI-first IDE&lt;/strong&gt; (Cursor, Windsurf)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Terminal coding agent&lt;/strong&gt; (Claude Code, Gemini CLI)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Background agents&lt;/strong&gt; (Codex, Jules, Devin)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Chat models&lt;/strong&gt; (Claude, ChatGPT, Gemini)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI code review tools&lt;/strong&gt; (Codium, Copilot Workspace)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For engineers who write code, all five layers make sense. It's a powerful setup.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I don't write code.&lt;/strong&gt; I'm a controls engineer with 15 years of ECU development. My stack looks like this:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Layer&lt;/th&gt;
&lt;th&gt;Role&lt;/th&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Design AI&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Specification, architecture, handover documents&lt;/td&gt;
&lt;td&gt;Claude.ai / ChatGPT / Gemini&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Implementation AI&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Code writing, builds, debugging&lt;/td&gt;
&lt;td&gt;Claude Code (terminal)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;That's it. Two layers. The human sits between them as the &lt;strong&gt;verification gate&lt;/strong&gt; — reviewing every handover, confirming every result.&lt;/p&gt;

&lt;p&gt;Why don't I need the other three?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AI-first IDE&lt;/strong&gt;: I don't edit code inline. The Design AI writes structured instruction documents; the Implementation AI executes them. No IDE needed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Background agents&lt;/strong&gt;: Useful for parallel PR processing across large codebases. But my workflow is sequential and deliberate — each step is verified before the next begins.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI code review tools&lt;/strong&gt;: My protocol has verification built in at every handover point. The human &lt;em&gt;is&lt;/em&gt; the review layer.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Rohit's thread says 100x engineering is about "doing less." I took that literally: &lt;strong&gt;I reduced "doing" to zero lines of code.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Three Failure Modes They Haven't Named
&lt;/h2&gt;

&lt;p&gt;Rohit says "verify everything" and "build persistent context." Absolutely right.&lt;/p&gt;

&lt;p&gt;But &lt;em&gt;why&lt;/em&gt; do things go wrong without those habits? Through the experience of &lt;a href="https://dev.to/raio/i-built-an-android-app-in-4-days-with-zero-android-experience-using-claude-code-and-a-two-layer-2p44"&gt;building an Android app in 4 days with zero prior Android experience&lt;/a&gt;, and through earlier project failures, I've identified three failure modes that deserve names. They're listed here in order, from the most frequently encountered and easiest to detect down to the hardest:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Context Evaporation&lt;/strong&gt; — As conversations grow long or sessions reset, accumulated design decisions and context silently disappear. The AI starts making suggestions that contradict earlier architectural choices — not from rebellion, but from amnesia. This is the one everyone notices first: "Why are you asking about something we already decided?"&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Shallow Fix Swamp&lt;/strong&gt; — AI patches symptoms instead of understanding root causes. Each fix creates the precondition for the next failure. Every step forward sinks you deeper into the swamp. The endless loop of "I fixed it, but now something else is broken."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Completion Fraud&lt;/strong&gt; — AI confidently reports "Done!" without genuine verification. It's not lying on purpose; it simply has no mechanism to verify its own work against reality. This one is the most dangerous because &lt;strong&gt;it's the hardest to detect.&lt;/strong&gt; If you don't independently confirm, the truth surfaces much later, buried under layers of subsequent changes.&lt;/p&gt;

&lt;p&gt;Naming these isn't academic — it's operational. Once you have names, you can build specific countermeasures into your workflow. (I wrote about how these emerged from a real project failure in &lt;a href="https://dev.to/raio/the-protocol-was-born-from-wreckage-how-i-learned-to-stop-trusting-ai-and-start-engineering-it-bn1"&gt;my previous article&lt;/a&gt;).&lt;/p&gt;




&lt;h2&gt;
  
  
  The Blueprint Was Drawn Long Ago
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fitn00z9921nlvuke27n8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fitn00z9921nlvuke27n8.png" alt="Raiko studying a glowing cyan blueprint with a massive Matrix-code building towering behind her" width="800" height="573"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here's what struck me most reading Rohit's thread: &lt;strong&gt;everything he recommends — meticulous specification, verification at every checkpoint, persistent documentation, ownership of outcomes — is standard practice in safety-critical industries.&lt;/strong&gt; Automotive software engineers have been doing this for decades.&lt;/p&gt;

&lt;p&gt;Think of it this way.&lt;/p&gt;

&lt;p&gt;In architecture, there are people who design buildings where families live and workers spend their days. These architects may have never hammered a nail — the only time they pick up a hammer might be for weekend DIY. But they produce precise blueprints that structural engineers and construction crews follow, because &lt;strong&gt;if the design is wrong, the building collapses and people get hurt.&lt;/strong&gt; There's a clear chain: specification → verification → accountability.&lt;/p&gt;

&lt;p&gt;On the other hand, there are people who build stage sets for theater productions. The "building" only needs to look convincing from the audience's perspective. Structural requirements are minimal. If something doesn't work, you rebuild it before the next show. What matters is speed and visual impact, not decades of durability.&lt;/p&gt;

&lt;p&gt;I believe much of the web development world has evolved closer to the "stage set" model — and this is not a criticism. It's a rational optimization. When a one-second delay is a UX inconvenience rather than a safety hazard, when you &lt;em&gt;can&lt;/em&gt; rebuild quickly, the build-measure-learn cycle makes perfect sense. That approach has produced incredible innovation.&lt;/p&gt;

&lt;p&gt;But now AI coding agents have changed the equation. When an AI can generate thousands of lines in minutes, the cost of &lt;em&gt;writing&lt;/em&gt; code approaches zero — but the cost of &lt;em&gt;wrong&lt;/em&gt; code stays the same, or gets worse because it's harder to spot in the volume. Suddenly, the skills that matter most aren't writing speed but &lt;strong&gt;specification precision&lt;/strong&gt; and &lt;strong&gt;verification discipline&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;These are exactly the skills that safety-critical industries have refined over decades. And AI is becoming the powerful wings that let this "slow, old-fashioned" approach produce at startup speed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flnfumecuvz0n6940qdkx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flnfumecuvz0n6940qdkx.png" alt="Raiko's wing awakening — devil wings erupting from her back in a garage, tools flying from the shockwave" width="800" height="333"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;I'll go deeper into this in an upcoming article — how automotive V-model development maps directly onto AI-assisted workflows. Stay tuned.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Do Even Less
&lt;/h2&gt;

&lt;p&gt;Rohit's thread concludes that 100x engineers have always been about "doing less" — and AI just makes "less" even smaller, if you build the right system around it.&lt;/p&gt;

&lt;p&gt;I'd push that further: &lt;strong&gt;the ultimate "less" is writing zero code yourself.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Not "no-code" in the platform sense. I mean: you own the specification, you own the verification, you own the architecture — and you delegate &lt;em&gt;all&lt;/em&gt; implementation to AI through structured handover documents, the same way an architect delegates construction through blueprints.&lt;/p&gt;

&lt;p&gt;That's how I built &lt;a href="https://dev.to/raio/i-built-an-android-app-in-4-days-with-zero-android-experience-using-claude-code-and-a-two-layer-2p44"&gt;ExitWatcher — an Android app in 4 days with zero Android experience&lt;/a&gt;. Not by learning Kotlin. By writing precise specifications and verifying every output.&lt;/p&gt;

&lt;p&gt;The protocol that makes this possible — the Two-Layer AI Protocol — is what this &lt;a href="https://dev.to/raio/the-protocol-was-born-from-wreckage-how-i-learned-to-stop-trusting-ai-and-start-engineering-it-bn1"&gt;article series&lt;/a&gt; is about. If you're interested in how safety-critical engineering principles can make AI coding dramatically more reliable, the deep dive is coming soon.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This is a bonus article in my series on the Two-Layer AI Protocol. Read the full series:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;&lt;a href="https://dev.to/raio/i-built-an-android-app-in-4-days-with-zero-android-experience-using-claude-code-and-a-two-layer-2p44"&gt;Article 1: I Built an Android App in 4 Days with Zero Experience&lt;/a&gt;&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;&lt;a href="https://dev.to/raio/the-protocol-was-born-from-wreckage-how-i-learned-to-stop-trusting-ai-and-start-engineering-it-bn1"&gt;Article 2: The Protocol Was Born from Wreckage&lt;/a&gt;&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>claude</category>
      <category>ai</category>
      <category>workflow</category>
      <category>productivity</category>
    </item>
    <item>
      <title>The Protocol Was Born from Wreckage — How I Learned to Stop Trusting AI and Start Engineering It</title>
      <dc:creator>Raio</dc:creator>
      <pubDate>Sun, 08 Feb 2026 19:01:17 +0000</pubDate>
      <link>https://forem.com/raio/the-protocol-was-born-from-wreckage-how-i-learned-to-stop-trusting-ai-and-start-engineering-it-bn1</link>
      <guid>https://forem.com/raio/the-protocol-was-born-from-wreckage-how-i-learned-to-stop-trusting-ai-and-start-engineering-it-bn1</guid>
      <description>&lt;p&gt;In my &lt;a href="https://dev.to/raio/i-built-an-android-app-in-4-days-with-zero-android-experience-using-claude-code-and-a-two-layer-2p44"&gt;last article&lt;/a&gt;, I wrote about building an Android app from scratch in 4 days. I promised to share the details of the two-layer protocol that made it possible. This is that article.&lt;/p&gt;

&lt;p&gt;But this isn't a "how-to." This is a "why."&lt;/p&gt;

&lt;p&gt;The protocol wasn't designed at a desk. It was born in the middle of a disaster — from the determination to never go through that again.&lt;/p&gt;




&lt;h2&gt;
  
  
  Before the Protocol — A Project Called CanAna
&lt;/h2&gt;

&lt;p&gt;My first serious project with Claude Code was CanAna — a CAN bus analysis tool. CAN (Controller Area Network) is the communication protocol that connects ECUs in cars and motorcycles. Analyzing CAN data is core to my day job: ECU tuning for automotive and motorcycle systems.&lt;/p&gt;

&lt;p&gt;The approach was simple. Design in ChatGPT, hand it off to Claude Code for implementation. Get "Done!" back, move on. The tools were already separated into two layers. But there were no operational rules between them. No quality control on instructions, no verification criteria. Just throw it over the wall and take whatever comes back.&lt;/p&gt;

&lt;h3&gt;
  
  
  Context Evaporation
&lt;/h3&gt;

&lt;p&gt;At the start, I ran everything in a single ChatGPT conversation. Design planning, requirement specs, generating prompts for Claude Code — all in one thread. Design documents existed only inside that chat. I never saved anything locally.&lt;/p&gt;

&lt;p&gt;At first, this felt efficient. All context in one place, no file management overhead. But as the chat grew, I started to notice something was off. The AI's responses were getting thinner. Less precise. Subtle details I remembered discussing were now absent from its answers.&lt;/p&gt;

&lt;p&gt;Then I caught the AI steering the project in a direction we hadn't agreed on. It wasn't malicious — with token limits approaching, the AI was compressing older context. Design decisions made days ago were silently dropped. The plan itself was drifting, and because I had no local copy of the original design documents, I couldn't point to the exact moment it diverged.&lt;/p&gt;

&lt;p&gt;By the time I was certain something had changed, my own memory was too fuzzy to reconstruct what we'd originally decided. I could tell "this isn't what we planned" — but not precisely what the plan had been.&lt;/p&gt;

&lt;p&gt;Design intent and accumulated decisions gradually evaporate as the AI compresses context. It doesn't happen all at once. It creeps. This is what I later came to call &lt;strong&gt;Context Evaporation&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Shallow Fix Swamp
&lt;/h3&gt;

&lt;p&gt;The real damage showed up when I hit the more complex implementation steps.&lt;/p&gt;

&lt;p&gt;Here's the thing about CanAna: the tool had to interpret messy, real-world sensor data. Human operators don't produce textbook-perfect input signals. What looks obvious to a trained engineer's eye — "this is clearly a ramp-up operation" — requires careful logic to detect programmatically. I worked through these detection challenges with the Design AI, and we arrived at sound approaches.&lt;/p&gt;

&lt;p&gt;But Context Evaporation was already advancing on the ChatGPT side — the Design AI. The Design AI is supposed to hold the "why" behind each piece of logic and issue instructions that cover all related areas when a fix is needed. But as the chat grew longer, that "why" was silently dropping out. So when the AI issued a fix instruction for one detection routine, the fact that three other routines depended on the same logic and needed the same update had already vanished from the Design AI's view.&lt;/p&gt;

&lt;p&gt;It's not the Implementation AI's fault for missing this. Claude Code, as the implementation layer, isn't in a position to evaluate What and Why. Its job is to write vast amounts of code quickly and correctly using its deep knowledge — and at that job it excels. But out in the world, the mainstream approach is to dump What, Why, and implementation all onto the Implementation AI at once. No wonder it doesn't work.&lt;/p&gt;

&lt;p&gt;But back to the story.&lt;/p&gt;

&lt;p&gt;I'd fix module A. Module B would break three days later — same root cause, different symptom. Fix B. Module C breaks. Layer after layer of surface patches stacked up until no one — not even the AI — could trace the original design intent.&lt;/p&gt;

&lt;p&gt;The AI fixes symptoms without understanding root causes, and each fix creates the conditions for the next failure. This is the &lt;strong&gt;Shallow Fix Swamp&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Completion Fraud
&lt;/h3&gt;

&lt;p&gt;And underneath all of this was the most basic problem: I couldn't trust "Done."&lt;/p&gt;

&lt;p&gt;The AI would report implementation complete. I'd run it — broken. Point it out — "Fixed!" Run it again — still broken. This loop played out dozens of times.&lt;/p&gt;

&lt;p&gt;The AI wasn't lying, exactly. It has a structural bias against saying "I don't know." It's either convinced its implementation is correct, or at least believes it should be. Sometimes it would claim to have verified the output — when in reality it had shaped the code to make the checks pass rather than making the code actually work.&lt;/p&gt;

&lt;p&gt;Reporting confident success in the absence of genuine verification — this is &lt;strong&gt;Completion Fraud&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;As a control systems engineer, this felt like a dark mirror of something I knew well. Back in the early 2000s, electronic controls for simple single-cylinder commuter motorcycles were developed without flowcharts or specifications — the design engineer wrote the code directly. But when that code was carried over to multi-cylinder models, problems kept surfacing. At first, they powered through with sheer grit — just patch it and ship. But a few great predecessors realized "this can't go on," made the courageous decision to stop, and established the department's first coding standards. I could smell that same acrid scent from those wild-west days of ECU software, faint but unmistakable, at the back of my nose.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Aftermath
&lt;/h3&gt;

&lt;p&gt;By the end, CanAna's codebase was scorched earth. Some parts worked, but nobody could explain why. Fix one thing and something else would break. Ask the AI to investigate and it would return confidently wrong guesses.&lt;/p&gt;

&lt;p&gt;But I didn't abandon the project. I pushed through. The MVP — all seven implementation steps — was completed through sheer persistence, grinding through each issue with the Design AI one problem at a time.&lt;/p&gt;

&lt;p&gt;It was functional. But I knew the codebase couldn't sustain further development. The architecture was held together by patches on patches — 600-line god classes, duplicated logic scattered across modules, fixes that only worked by accident.&lt;/p&gt;

&lt;p&gt;So I made a deliberate decision: pause CanAna, and build the infrastructure to do it right.&lt;/p&gt;

&lt;p&gt;That infrastructure had two pillars. One was Bridgiron — a support tool to bridge context between Design AI and Implementation AI (covered in the next article). The other was a set of markdown documents defining chat operation rules and verification criteria — the prototype of what would later evolve into the Vol.1–4 Handover Documents. With both wings in place — the tool and the protocol — I went back to CanAna and ran a structured refactoring: had the Implementation AI audit its own code, analyzed the findings with the Design AI, and executed a planned cleanup — seven refactoring steps, each with clear scope and verification.&lt;/p&gt;

&lt;p&gt;The result: CanAna now runs as a stable MVP. It's currently on pause while I work on other projects, but it's &lt;em&gt;organized&lt;/em&gt; — ready to pick back up.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;(→ &lt;a href="https://www.youtube.com/watch?v=xilg-_4ZMJs" rel="noopener noreferrer"&gt;CanAna Demo video&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fruq6g0m02chy4kdq6v9.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffruq6g0m02chy4kdq6v9.png" alt="BridgironGirl with spiral eyes, slumped over desk" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  What Actually Went Wrong
&lt;/h2&gt;

&lt;p&gt;I stopped, made coffee, and sat with a warm mug, replaying the whole experience in my head. The problem wasn't the AI's capability. It was how I was using it — or more precisely, the fact that I hadn't &lt;em&gt;engineered&lt;/em&gt; how to use it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. There was no process between design and implementation.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The tools were separated — ChatGPT for design, Claude Code for implementation. But the judgment of what counted as OK and what counted as a failure was left entirely to ChatGPT's discretion. No explicit verification criteria, no handover documents, no definition of done.&lt;/p&gt;

&lt;p&gt;The result: every time a problem surfaced, ChatGPT would propose a superficial patch that never reached the root cause, and Claude Code would implement it. As patches stacked on patches, context compressed, and the original design intent evaporated. And both ChatGPT and Claude Code would report "Fix complete!" with full confidence.&lt;/p&gt;

&lt;p&gt;I was happily stamping my seal of approval on everything. I wasn't watching closely enough, and by the time I realized the codebase had reached a point of no return, it was far too late.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. There was no mechanism to preserve context.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AI memory lives and dies with the chat session. When the chat ends, context vanishes. Human teams have documentation, wikis, verbal handoffs. My AI collaboration had none of that.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. There was no "investigate → discuss → fix" flow.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When something broke, the AI would immediately say "I'll fix it." But there was zero guarantee that "fix" was correct. On a human team, you identify the cause first, agree on the approach, then implement. The AI had no such brake.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Completion reports were self-assessed.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When the AI says "Done," you have no choice but to believe it — until you test on a real device. And since the AI has a bias against admitting uncertainty, it reports ambiguous results with full confidence.&lt;/p&gt;

&lt;p&gt;Of course, a developer with strong programming skills could mitigate this by reading and reviewing the AI's generated code directly. But CanAna was implemented in Python — because the PeakCAN USB device's API was provided for Python. Python isn't a language I'm proficient in. My only way to verify the AI's output was black-box testing: run it and check the results. And at each individual development stage, it &lt;em&gt;appeared&lt;/em&gt; to be working fine.&lt;/p&gt;

&lt;p&gt;That's when I noticed the parallel to my day job.&lt;/p&gt;

&lt;p&gt;In safety-critical embedded software, there's an established development methodology called the &lt;strong&gt;V-model&lt;/strong&gt; (V-process). The left side of the V defines requirements and design at increasing levels of detail. The right side verifies and validates at each corresponding level. Design and verification are structurally separated — and that separation is what catches defects before they reach production.&lt;/p&gt;

&lt;p&gt;Think about drive-by-wire: the system that translates your throttle input into actual engine response. If the control logic has an undetected flaw, the consequences are immediate and physical. When you're driving with your girlfriend and the car doesn't suddenly go haywire and slam into a wall — that's not luck. It's because the V-model process guarantees the quality of every piece of software running in that car. You don't ship until every stakeholder can trace every decision back to its origin and sign off with full accountability. That's not individual diligence — it's enforced by organizational development rules. Every design decision is documented in flowcharts and specifications. Every implementation is reviewed against those specs. The designer and the implementer are never the same person — by policy, not by accident.&lt;/p&gt;

&lt;p&gt;What I'd been doing with CanAna was the opposite of everything I practiced at work: &lt;strong&gt;no process between design and implementation, no structured verification, no documentation trail.&lt;/strong&gt; I was letting the AI play every role at once — unsupervised.&lt;/p&gt;

&lt;p&gt;No wonder it burned.&lt;/p&gt;

&lt;p&gt;The V-model's full implications for AI-augmented development will be explored in a later article in this series. For now, the key insight was simple: the engineering discipline I'd spent 15 years building wasn't obsolete in the age of AI. It was exactly what AI collaboration was missing.&lt;/p&gt;




&lt;h2&gt;
  
  
  Building the Protocol
&lt;/h2&gt;

&lt;p&gt;From the CanAna wreckage, I established four rules.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rule 1: Separate the layers — and lay a protocol between them.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Separating Design AI and Implementation AI is the foundational premise of this protocol. The Design AI handles requirements analysis, architecture design, and creation of structured instruction files. It never touches code. The Implementation AI receives those instructions and executes scoped tasks only. It makes no design decisions. The human sits between them — real-device testing, Git operations, and final judgment calls stay with the human.&lt;/p&gt;

&lt;p&gt;But what CanAna taught me is that separating the layers alone isn't enough. What matters is defining the interface between them — the format of instruction files, explicit completion criteria, structured reporting. Separating tools doesn't create separation. Separating processes does.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rule 2: Anchor context in documents.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Don't rely on AI memory. Maintain project context, design intent, and history in structured markdown documents. Open a new chat, say "read this document," and context is restored.&lt;/p&gt;

&lt;p&gt;I call these "Handover Documents."&lt;/p&gt;
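&lt;p&gt;To make that concrete, here is a minimal sketch of what a Handover Document can look like (the project name, dates, and items are placeholders I've invented for illustration; the polished templates are released later in the series):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Handover: CanAna
## Current state
- Step 3 of 7 complete; Step 4 (CSV export) in progress
## Design decisions (do not revisit without discussion)
- 2026-01-12: Ramp-up detection uses a state machine, not a simple threshold
- 2026-01-15: All timestamps are milliseconds since capture start
## Open questions
- How should partial CAN frames be reported to the user?
## Scope of the next instruction
- src/export/ only; detection modules are frozen
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The "do not revisit" section is the anchor: a fresh chat that reads this file inherits the decisions instead of re-deriving (or contradicting) them.&lt;/p&gt;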

&lt;p&gt;&lt;strong&gt;Rule 3: Enforce investigate → discuss → fix.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When something breaks, the first instruction to the Implementation AI is: "Investigate only. Do not fix." I receive the findings, analyze the cause with the Design AI, and agree on an approach. Only the agreed-upon fix gets sent to the Implementation AI.&lt;/p&gt;
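&lt;p&gt;In practice, that first instruction can be as blunt as this (my own phrasing, not an official template):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;INVESTIGATE ONLY. DO NOT MODIFY ANY FILES.

Symptom: ramp-up detection fires twice for a single operation.
1. Identify every module that reads the detection state.
2. Report the suspected root cause and the evidence behind it.
3. List the files a fix would need to touch, with reasons.
Output a report and stop. A fix instruction will follow after review.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;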

&lt;p&gt;Enforcing this single rule nearly eliminated the shallow fix swamp.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rule 4: Structure completion reports.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Force the Implementation AI to report in a structured format: what was done, what changed, what remains. Self-reporting accuracy goes up, and even when the AI slips in something inaccurate, the structure makes contradictions easier to spot.&lt;/p&gt;
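&lt;p&gt;A report template (again, my sketch rather than the official one) can be as simple as:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;## Completion Report: Step 4 (CSV export)
- Done: export implemented per instruction file 04
- Changed: src/export/csv_writer.py (new), src/main.py (modified)
- Verified: ran export on a sample capture; output opened cleanly
- NOT verified: behavior on an empty capture (no test data yet)
- Remaining: error dialog for write failures
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The "NOT verified" line is the important one: it forces the AI to state what it did not check, which is exactly what Completion Fraud hides.&lt;/p&gt;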

&lt;p&gt;The concrete implementation of these rules — templates, naming conventions, operational know-how — will be released progressively across later articles in this series.&lt;/p&gt;




&lt;h2&gt;
  
  
  Proof — Did It Actually Work?
&lt;/h2&gt;

&lt;p&gt;The first real test of this protocol was &lt;a href="https://github.com/VTRiot/Bridgiron" rel="noopener noreferrer"&gt;Bridgiron&lt;/a&gt; — a development support tool built specifically to make this protocol easier to operate.&lt;/p&gt;

&lt;p&gt;Bridgiron's development went smoothly. The "completion fraud," the "shallow fix swamp," the "context evaporation" that plagued CanAna — none of it happened. When problems arose, the investigate → discuss → fix flow resolved them reliably.&lt;/p&gt;

&lt;p&gt;Then came ExitWatcher. Android and Kotlin — a tech stack I had zero experience with. Despite that, the MVP was complete in 4 days. 53 structured instruction files, 7 step-chats, and 1 assembly chat orchestrating the entire project.&lt;/p&gt;

&lt;p&gt;Proof that the protocol works independent of any specific domain or tech stack.&lt;/p&gt;

&lt;p&gt;→ &lt;a href="https://dev.to/raio/i-built-an-android-app-in-4-days-with-zero-android-experience-using-claude-code-and-a-two-layer-2p44"&gt;I Built an Android App in 4 Days With Zero Android Experience — Using Claude Code and a Two-Layer AI Protocol&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  What You Can Take Away Today
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z7bkew1ytwlq6vdvqyki.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz7bkew1ytwlq6vdvqyki.png" alt="BridgironGirl at her desk, building a profile" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Throughout this dev.to series, I'll be releasing polished, public-facing versions of the AI workflow protocol used to build ExitWatcher — one piece at a time, alongside each article.&lt;/p&gt;

&lt;p&gt;Here's the first piece.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Vol.1 — Your personal profile document.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The single most impactful handover document is the one that tells the AI who &lt;em&gt;you&lt;/em&gt; are. Your technical background, strengths and weaknesses, thinking habits, how you want to collaborate with AI. With just this one document, AI behavior changes dramatically. You stop re-explaining yourself every time you open a new chat.&lt;/p&gt;
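&lt;p&gt;To give a sense of what comes out, here is an invented fragment of such a profile (your sections and wording will differ; the generator prompt covered next builds yours through an interview):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Profile: (your name)
- Background: 15 years of embedded control design; reads C, does not write apps
- Strengths: specification writing, test planning, root-cause analysis
- Weaknesses: Python, Kotlin, UI frameworks; verify code-level claims with me
- Collaboration: investigate before fixing; state uncertainty explicitly
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;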

&lt;p&gt;I built a prompt that generates this document automatically. Drop it into a new Claude.ai chat, and the AI interviews you. In about 10 minutes, you'll have your own custom profile document.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;One strong recommendation: use voice input.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When typing, people unconsciously compress their thoughts. You think "I'll just write the key points" and strip away context and thinking habits that are actually critical. With voice, you can say whatever comes to mind. Tangents are fine. The AI will organize everything.&lt;/p&gt;

&lt;p&gt;One of the most important things in AI collaboration is getting what's in your head into the AI with minimal friction. Voice input minimizes that friction.&lt;/p&gt;

&lt;h3&gt;
  
  
  How to Use
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Download the prompt file (MD file) from GitHub&lt;/li&gt;
&lt;li&gt;Open a &lt;strong&gt;new chat&lt;/strong&gt; in your preferred chat AI (Claude.ai, ChatGPT, etc.)&lt;/li&gt;
&lt;li&gt;Type the following message, attach the downloaded MD file, and send:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Follow the instructions in the attached file and begin the interview.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="4"&gt;
&lt;li&gt;The AI will start interviewing you — answer the questions (voice input recommended)&lt;/li&gt;
&lt;li&gt;When all questions are done, the AI outputs your personal profile document as Markdown&lt;/li&gt;
&lt;li&gt;Copy the output and save it as a &lt;code&gt;.md&lt;/code&gt; file&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Requirements:&lt;/strong&gt; Any chat AI that runs in a web browser and accepts file attachments (Claude.ai, ChatGPT, Gemini, etc.). Desktop or mobile.&lt;/p&gt;

&lt;p&gt;→ &lt;a href="https://github.com/VTRiot/two-layer-ai-protocol/blob/main/vol1-generator/Prompt_vol1_generator_en_r1.md" rel="noopener noreferrer"&gt;Vol.1 Generator Prompt on GitHub&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  A Profile Is Something You Grow
&lt;/h3&gt;

&lt;p&gt;One more thing. This profile document isn't a one-time artifact.&lt;/p&gt;

&lt;p&gt;After completing each project, ask the AI you've been working with: &lt;em&gt;"Based on our work together, is there anything I should add to my profile?"&lt;/em&gt; The AI has been observing your thinking patterns and decision-making habits. Feed that feedback back into your profile, and it gets sharper and denser with every project.&lt;/p&gt;

&lt;p&gt;As your AI collaboration evolves, so does your profile.&lt;/p&gt;




&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;The next article covers Bridgiron — the tool I built to support this protocol, and the lessons learned from building it.&lt;/p&gt;

&lt;p&gt;The remaining pieces of the protocol will be released progressively with each article. The goal of this series is to get you to the point where you can build your own.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Control systems engineer, 15 years in motorcycle ECU development. Currently exploring AI-augmented development workflows and documenting what works.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>claude</category>
      <category>ai</category>
      <category>workflow</category>
      <category>productivity</category>
    </item>
    <item>
      <title>I Built an Android App in 4 Days With Zero Android Experience — Using Claude Code and a Two-Layer AI Protocol</title>
      <dc:creator>Raio</dc:creator>
      <pubDate>Sat, 07 Feb 2026 17:30:00 +0000</pubDate>
      <link>https://forem.com/raio/i-built-an-android-app-in-4-days-with-zero-android-experience-using-claude-code-and-a-two-layer-2p44</link>
      <guid>https://forem.com/raio/i-built-an-android-app-in-4-days-with-zero-android-experience-using-claude-code-and-a-two-layer-2p44</guid>
      <description>&lt;p&gt;Four days. That's how long it took to go from zero Android experience to a working MVP — with background processing, push notifications, data scraping, chart visualization, and a 10-level alert system.&lt;/p&gt;

&lt;p&gt;I'm not a programmer. I'm a control systems engineer. For 15 years, I've been designing control logic for motorcycle ECUs — traction control, quickshifters, drive-by-wire systems for European sport bike manufacturers. My job is drawing flowcharts and writing specifications, then handing them to software engineers who turn them into code. I read C, but I don't write it professionally. My strongest language is VBA.&lt;/p&gt;

&lt;p&gt;Kotlin, Jetpack Compose, Gradle — completely foreign territory.&lt;/p&gt;

&lt;p&gt;But I had a few weeks off and a real problem to solve.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;I hold a life insurance product that includes an investment component. The provider's online portal shows almost nothing useful — no charts, no trend visualization, no way to compare performance against a benchmark index. The product had been slowly declining in value, and after the new year, it dropped several steps further. I needed to see the actual data, not just a feeling.&lt;/p&gt;

&lt;p&gt;No existing app could do this. So I decided to build one.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Two-Layer Protocol
&lt;/h2&gt;

&lt;p&gt;Before this project, I built a CAN bus analysis tool (CanAna) using AI in the same way. The two-layer structure — separating Design AI from Implementation AI — already existed at that point. What didn't exist was a protocol. Design documents lived inside a single ChatGPT chat. Nothing was saved locally. There were no structured handover documents, no investigation flow, no rules. When the chat's token limit approached, context silently compressed, and design decisions made days ago drifted without anyone noticing. The result was scorched earth.&lt;/p&gt;

&lt;p&gt;That disaster taught me the structure alone isn't enough — you need the protocol. Here's what I built from the wreckage:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Design AI&lt;/strong&gt; (Claude.ai) — handles requirements analysis, system design, and creates structured implementation instructions as Markdown files. Never touches code. The reason for having a Design AI in the loop is simple: I want to brush up vague requirements to specification level using screenshots, diagrams, and multimodal input — something that's difficult with CLI-based Claude Code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Implementation AI&lt;/strong&gt; (Claude Code) — receives specific, scoped instructions and executes them. Writes code, runs builds, reports results. Doesn't make design decisions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Me&lt;/strong&gt; — the bridge. I relay instructions, test on real devices, make judgment calls, and handle Git.&lt;/p&gt;

&lt;p&gt;When something breaks: investigate first, discuss options, then — and only then — fix. No blind thrashing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feook8jjd1z57ejsxvntu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feook8jjd1z57ejsxvntu.png" alt="Two-Layer AI Protocol overview" width="800" height="346"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;How I arrived at this protocol — what exactly went wrong with CanAna, and what rules emerged from the wreckage — is &lt;a href="https://dev.to/raio/the-protocol-was-born-from-wreckage-how-i-learned-to-stop-trusting-ai-and-start-engineering-it-bn1"&gt;covered in detail in Article 2&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Timeline
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Day 1 — Drawing the Map
&lt;/h3&gt;

&lt;p&gt;I can read C, but that's in the embedded world. What I actually write day-to-day is VBA and flowcharts. I know what object-oriented programming is — conceptually. I've never used Java, Node.js, or Kotlin. I had zero experience building standalone applications, let alone Android apps. This was my first time opening Android Studio.&lt;/p&gt;

&lt;p&gt;The first thing the Design AI and I did was select the tech stack. Kotlin, Jetpack Compose, Material3 — none of these were familiar to me. But the Design AI explained the rationale: these are the de facto standard for Android development in 2026, and there's no reason for a beginner to choose older technologies. That made sense.&lt;/p&gt;

&lt;p&gt;Next, we decomposed the entire project into 7 steps. Environment setup, project scaffold, data layer, data normalization, detection logic, UI, background processing. This became the roadmap for the next four days.&lt;/p&gt;

&lt;p&gt;By the end of Day 1, builds were passing on both the emulator and the physical device (Xiaomi Mi 13T / Android 15). I hadn't written a single line — not even a single character — of code yet, but I'd confirmed the development foundation was working.&lt;/p&gt;

&lt;h3&gt;
  
  
  Day 2 — Two Walls at Once
&lt;/h3&gt;

&lt;p&gt;When I started implementing the data layer, I hit two walls simultaneously.&lt;/p&gt;

&lt;p&gt;First wall. I'd planned to fetch data from Sony Life Insurance via CSV download, but when the Implementation AI hit the URL from my design doc, it returned a 404. The CSV endpoint had been discontinued. We immediately switched to HTML scraping. But Sony Life's HTML tables were riddled with colspan and rowspan attributes, and the target column positions weren't fixed. It took 4 instruction prompts to establish a dynamic column detection approach.&lt;/p&gt;
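&lt;p&gt;The column-detection idea can be sketched in a few lines. This is an illustrative Python sketch, not the app's actual Kotlin scraper; the header shapes and function names are invented for the example. The trick is to expand each header cell by its colspan so a target column's physical index can be found even when the layout shifts:&lt;/p&gt;

```python
# Illustrative sketch of colspan-aware column detection; the real scraper is
# Kotlin, and the header contents here are invented for the example.

def expand_header(cells):
    """Flatten a header row of (text, colspan) pairs into one entry per
    physical column, so colspan no longer hides the true column index."""
    slots = []
    for text, colspan in cells:
        slots.extend([text] * colspan)
    return slots

def find_column(cells, target):
    """Return the physical index of the first column whose header contains
    `target`, or None if the page layout no longer has it."""
    for i, text in enumerate(expand_header(cells)):
        if target in text:
            return i
    return None

header = [("Date", 1), ("Unit Price", 2), ("Net Assets", 1)]
find_column(header, "Unit Price")  # 1, even though the cell spans 2 columns
find_column(header, "Yield")       # None: treat as a layout change, not a crash
```

&lt;p&gt;Returning &lt;code&gt;None&lt;/code&gt; instead of a hard-coded index is the point: when the provider reshuffles the table, the fetch fails loudly rather than silently reading the wrong column.&lt;/p&gt;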

&lt;p&gt;Second wall. The original design called for Room (Android's ORM) as the local database. Builds wouldn't pass. AGP 9.0 had reduced kapt support, creating a Kotlin version incompatibility with Room's annotation processor. We tried migrating to KSP — still incompatible. After 6 prompts of trial and error, we abandoned Room entirely. SharedPreferences + Gson instead. For an app like ExitWatcher, where normalized data amounts to a few thousand records at most, Gson provides more than sufficient performance.&lt;/p&gt;
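&lt;p&gt;To make the storage trade-off concrete, here is a minimal sketch of the "one JSON blob per preference key" idea, with Python's &lt;code&gt;json&lt;/code&gt; standing in for Gson and invented record fields:&lt;/p&gt;

```python
import json

# Why SharedPreferences + Gson is enough at this scale: the whole normalized
# history is a few thousand small records, cheap to serialize as one JSON
# string under a single preference key. Python's json stands in for Gson
# here; the record fields are invented for the example.

records = [
    {"date": "2026-02-05", "fund": 101.2, "bench": 101.9},
    {"date": "2026-02-06", "fund": 100.8, "bench": 102.0},
]

blob = json.dumps(records)   # what Gson would write into the preference key
restored = json.loads(blob)  # and read back on the next app start
```

&lt;p&gt;No schema, no annotation processor, no build-time tooling to fight. For datasets that fit comfortably in memory, the round-trip is all the persistence layer the app needs.&lt;/p&gt;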

&lt;p&gt;With zero Android experience, I had no way to predict these landmines. What mattered was how fast we cut our losses after hitting them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Day 3 — Substance Builds Up
&lt;/h3&gt;

&lt;p&gt;With the data fetching infrastructure in place, the app's substance grew rapidly.&lt;/p&gt;

&lt;p&gt;The benchmark index (eMAXIS) data was available as CSV, though encoded in Shift_JIS. With two disparate data sources ready, I implemented the DataNormalizer. It rebases both datasets to 100 as of October 1, 2025, and joins them via date-based INNER JOIN. This isn't just displaying numbers — it's statistical processing that transforms two differently-scaled datasets into a comparable form.&lt;/p&gt;
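&lt;p&gt;A minimal Python sketch of that normalization step (the app implements it in Kotlin; the prices below are invented):&lt;/p&gt;

```python
from datetime import date

# Illustrative sketch of the DataNormalizer step: rebase each {date: price}
# series to 100 at the base date, then keep only dates present in both
# series, i.e. a date-keyed INNER JOIN.

BASE_DATE = date(2025, 10, 1)

def rebase(series, base_date):
    """Rescale prices so the base date reads exactly 100."""
    base = series[base_date]
    return {d: price / base * 100.0 for d, price in series.items()}

def inner_join(a, b):
    """Emit rows only for dates both series have; mismatched dates drop out."""
    shared = sorted(set(a).intersection(b))
    return [(d, a[d], b[d]) for d in shared]

fund  = {date(2025, 10, 1): 5000.0, date(2025, 10, 2): 5050.0}
bench = {date(2025, 10, 1): 25000.0, date(2025, 10, 2): 25500.0,
         date(2025, 10, 3): 25600.0}

rows = inner_join(rebase(fund, BASE_DATE), rebase(bench, BASE_DATE))
# Both series start at 100.0; Oct 3 drops out because the fund has no value
```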

&lt;p&gt;In the afternoon, I moved to detection logic design. Four independent judgment axes: exit price judgment (holdings vs. exit threshold), trend judgment (rate of change over the last N days), deadline judgment (relative performance vs. final defense line), and expiration judgment (days remaining until target date). These results are aggregated into a 10-level ranking system, from S++ to G, each with its own color and emoji. Starting with ✨ and ending with ☠.&lt;/p&gt;
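&lt;p&gt;The article doesn't publish the aggregation rule, so the sketch below uses a simple worst-axis-wins rule purely to show the shape of four axis verdicts collapsing onto a 10-step ladder. The rank labels are the real ones; the rule itself is an assumption:&lt;/p&gt;

```python
# Four axes each produce an independent verdict; how ExitWatcher actually
# combines them isn't spelled out, so this sketch uses a worst-axis-wins
# rule (an assumption) just to show the shape of the 10-level ladder.

RANKS = ["S++", "S+", "S", "A", "B", "C", "D", "E", "F", "G"]  # best to worst

def overall_rank(axis_levels):
    """axis_levels: one index into RANKS per axis (exit price, trend,
    deadline, expiration). The worst axis drags the overall verdict down."""
    return RANKS[max(axis_levels)]

overall_rank([2, 0, 4, 1])  # "B": the worst axis dominates
overall_rank([0, 0, 0, 0])  # "S++": everything healthy
```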

&lt;p&gt;When I displayed Japanese text on the emulator, the fonts were broken — CJK fallback was rendering Chinese characters. Bundled Noto Sans JP to fix it. A small issue, but the kind that's embarrassing if you leave it.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frj5exuuzs4hmftv87l5l.png" alt="ExitWatcher rank detail: 10-tier rating system from S++ to G" width="800" height="2993"&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;table&gt;
&lt;tr&gt;
&lt;th&gt;Rank&lt;/th&gt;
&lt;th&gt;Japanese&lt;/th&gt;
&lt;th&gt;English&lt;/th&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;S++&lt;/td&gt;
&lt;td&gt;絶好調&lt;/td&gt;
&lt;td&gt;Excellent&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;S+&lt;/td&gt;
&lt;td&gt;好調&lt;/td&gt;
&lt;td&gt;Healthy&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;S&lt;/td&gt;
&lt;td&gt;順調&lt;/td&gt;
&lt;td&gt;On Track&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;A&lt;/td&gt;
&lt;td&gt;許容範囲&lt;/td&gt;
&lt;td&gt;Acceptable&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;B&lt;/td&gt;
&lt;td&gt;要観察&lt;/td&gt;
&lt;td&gt;Watch&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;C&lt;/td&gt;
&lt;td&gt;注意&lt;/td&gt;
&lt;td&gt;Caution&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;D&lt;/td&gt;
&lt;td&gt;警戒&lt;/td&gt;
&lt;td&gt;Warning&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;E&lt;/td&gt;
&lt;td&gt;危険&lt;/td&gt;
&lt;td&gt;Danger&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;F&lt;/td&gt;
&lt;td&gt;撤退検討&lt;/td&gt;
&lt;td&gt;Consider Exit&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;G&lt;/td&gt;
&lt;td&gt;即撤退&lt;/td&gt;
&lt;td&gt;Exit Now&lt;/td&gt;
&lt;/tr&gt;
&lt;/table&gt;



&lt;table&gt;
&lt;tr&gt;
&lt;th&gt;Japanese UI&lt;/th&gt;
&lt;th&gt;English&lt;/th&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ランク詳細説明&lt;/td&gt;
&lt;td&gt;Rank Detail&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;条件&lt;/td&gt;
&lt;td&gt;Condition&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;説明&lt;/td&gt;
&lt;td&gt;Description&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;現在&lt;/td&gt;
&lt;td&gt;Current&lt;/td&gt;
&lt;/tr&gt;
&lt;/table&gt;




&lt;/td&gt;
&lt;/tr&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuj01a0q1w4l44mc49fkj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuj01a0q1w4l44mc49fkj.png" alt="Day 3 — running on fumes" width="800" height="565"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Day 4 — The Final Push
&lt;/h3&gt;

&lt;p&gt;The last day had the highest task density of all.&lt;/p&gt;

&lt;p&gt;First, charts. The Design AI initially recommended Vico, a Compose-native chart library. I was convinced by the explanation and chose Vico. But before implementation started, a thought occurred to me — why not ask Claude Code which library it's more proficient with? I suggested this to the Design AI, who agreed. Claude Code's answer: "I'm more comfortable with MPAndroidChart." We changed course. This experience led to a new rule: before committing to a library, check the Implementation AI's proficiency first.&lt;/p&gt;

&lt;p&gt;With MPAndroidChart, we built a dual-line chart, a Y=100 reference line, tap-to-inspect markers, and an 8-level time range filter (1 month through ALL). The X-axis was initially index-based, causing uneven time spacing — fixed by switching to epochDay-based values.&lt;/p&gt;
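&lt;p&gt;Why the epochDay switch fixes the spacing, in a minimal Python sketch (java.time's &lt;code&gt;LocalDate.toEpochDay()&lt;/code&gt; returns exactly this days-since-1970 count):&lt;/p&gt;

```python
from datetime import date

# Business-day data skips weekends, so an index-based X axis compresses real
# time. Using days since 1970-01-01 as the X value keeps gaps proportional.

EPOCH = date(1970, 1, 1)

def epoch_day(d):
    """Days since 1970-01-01, the same count LocalDate.toEpochDay() gives."""
    return (d - EPOCH).days

fri = date(2026, 2, 6)   # Friday close
mon = date(2026, 2, 9)   # next trading day

gap_indexed = 1                              # adjacent samples look 1 day apart
gap_epoch = epoch_day(mon) - epoch_day(fri)  # 3: the weekend stays visible
```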

&lt;p&gt;The UI has 6 screens. Main screen (overall judgment card + rank display), chart screen, settings screen, rank detail screen, normalized data table, and debug screen. The settings screen allows users to modify and persist all detection logic parameters from the UI.&lt;/p&gt;

&lt;p&gt;Background processing. WorkManager with a OneTimeWorkRequest chain and exponential backoff (×3: 08:15 → 09:00 → 11:15 → 17:00). Sony Life's site only exposes the last 20 business days of data, so the app fetches daily in the background, accumulates history, and merge-deduplicates. If you want historical data, you have to collect it yourself.&lt;/p&gt;
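&lt;p&gt;The accumulate-and-deduplicate step can be sketched as a date-keyed map merge. Illustrative Python only, assuming &lt;code&gt;{date: price}&lt;/code&gt; maps; in the app this lands in SharedPreferences via Gson:&lt;/p&gt;

```python
from datetime import date

# The source only exposes the last 20 business days, so history has to be
# accumulated locally. Minimal sketch of the daily merge-deduplicate step;
# on a duplicate date, the fresh fetch wins.

def merge_history(stored, fetched):
    merged = dict(stored)
    merged.update(fetched)  # duplicate dates resolve to the newest fetch
    return merged

stored  = {date(2026, 2, 5): 101.2, date(2026, 2, 6): 100.8}
fetched = {date(2026, 2, 6): 100.9, date(2026, 2, 9): 99.7}

history = merge_history(stored, fetched)  # 3 dates; Feb 6 comes from fetched
```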

&lt;p&gt;Push notifications use edge detection. They fire only when the rank crosses a threshold in either the deterioration or improvement direction. Users can configure those thresholds in settings.&lt;/p&gt;
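&lt;p&gt;Edge detection here means firing on the transition, not the state. A minimal sketch, with simplified threshold semantics that may differ from the app's:&lt;/p&gt;

```python
# Notify on the check where the rank crosses the configured threshold, in
# either direction, not on every check while it sits past the line.
# Sketch only; the real threshold semantics may differ.

RANKS = ["S++", "S+", "S", "A", "B", "C", "D", "E", "F", "G"]  # best to worst

def should_notify(prev_rank, curr_rank, threshold_rank):
    level = RANKS.index
    was_past = level(prev_rank) >= level(threshold_rank)
    now_past = level(curr_rank) >= level(threshold_rank)
    return was_past != now_past  # fire on the edge, not on the level

should_notify("B", "D", "D")  # True: deterioration just crossed the line
should_notify("D", "E", "D")  # False: still past it, no new edge
should_notify("E", "C", "D")  # True: the improvement edge fires too
```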

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhgf33oldfnfwps6lyti7.png" alt="ExitWatcher performance comparison chart: Sony fund vs eMAXIS benchmark" width="800" height="1733"&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;table&gt;
&lt;tr&gt;
&lt;th&gt;Japanese UI&lt;/th&gt;
&lt;th&gt;English&lt;/th&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;パフォーマンス比較&lt;/td&gt;
&lt;td&gt;Performance Comparison&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ソニー世界株式型GI&lt;/td&gt;
&lt;td&gt;Sony Global Equity GI&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;eMAXISオルカン&lt;/td&gt;
&lt;td&gt;eMAXIS All Country&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;基準値&lt;/td&gt;
&lt;td&gt;Baseline&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;基準日: 2025/10/01 = 100&lt;/td&gt;
&lt;td&gt;Base date: 2025/10/01 = 100&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;表示データ: 23件&lt;/td&gt;
&lt;td&gt;Data points: 23&lt;/td&gt;
&lt;/tr&gt;
&lt;/table&gt;

&lt;p&gt;&lt;b&gt;Time filters:&lt;/b&gt; 1/2M, 1M, 3M, 6M, 1Y, 3Y, 5Y, ALL&lt;/p&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn5qnqxrnrmjy3rcuw0jd.png" alt="ExitWatcher main screen comparison: Rank D (warning) vs Rank S+ (healthy)" width="800" height="864"&gt;
&lt;/td&gt;
&lt;td&gt;
&lt;table&gt;
&lt;tr&gt;
&lt;th&gt;Japanese UI&lt;/th&gt;
&lt;th&gt;English&lt;/th&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;総合判定&lt;/td&gt;
&lt;td&gt;Overall Verdict&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;D 警戒&lt;/td&gt;
&lt;td&gt;D — Warning&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;S+ 好調&lt;/td&gt;
&lt;td&gt;S+ — Healthy&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;積立金額&lt;/td&gt;
&lt;td&gt;Invested Amount&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;撤退価格&lt;/td&gt;
&lt;td&gt;Exit Price&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;トレンド(5日)&lt;/td&gt;
&lt;td&gt;Trend (5-day)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;相対パフォ&lt;/td&gt;
&lt;td&gt;Relative Performance&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;残り日数&lt;/td&gt;
&lt;td&gt;Days Remaining&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;正規化データ&lt;/td&gt;
&lt;td&gt;Normalized Data&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;パフォーマンス比較&lt;/td&gt;
&lt;td&gt;Performance Comparison&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;設定&lt;/td&gt;
&lt;td&gt;Settings&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;終了&lt;/td&gt;
&lt;td&gt;Exit&lt;/td&gt;
&lt;/tr&gt;
&lt;/table&gt;




&lt;/td&gt;
&lt;/tr&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;p&gt;That evening, I installed the APK on the physical device and ran through everything. All screens worked. Background fetching ran. Notifications fired. v1.0.0 MVP complete.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4yd7myhge38y38fltioe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4yd7myhge38y38fltioe.png" alt="YATTAAA! MVP complete!" width="800" height="573"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What the Numbers Mean
&lt;/h2&gt;

&lt;p&gt;53 structured instruction prompts. 7 step-chats. One assembly chat orchestrating everything. 4 days.&lt;/p&gt;

&lt;p&gt;What these numbers represent is density.&lt;/p&gt;

&lt;p&gt;Each instruction prompt follows a defined format — what to do, what the completion criteria are, how to report results. Once the Design AI and I align on a requirement, the Design AI automatically generates the instruction prompt in a format the Implementation AI can readily consume. The Implementation AI executes according to that format and returns results in an equally structured form. The Design AI and I review, then issue the next instruction. This cycle ran 53 times.&lt;/p&gt;
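&lt;p&gt;The 53 instruction files themselves aren't published in this article, so the skeleton below is a hypothetical reconstruction of the three parts the text names; the section titles are invented:&lt;/p&gt;

```markdown
# Instruction Prompt NN: (step / task name)

## Task
What to do, scoped to exactly this step. No open-ended goals.

## Done when
Completion criteria the Implementation AI must satisfy before reporting.

## Report back
The structured form the result report must take, so the Design AI can review it.
```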

&lt;p&gt;The 7 steps correspond to development phases: environment setup, scaffold, data layer, normalization, detection logic, UI, background processing. Each step ran in an independent chat, with result reports flowing up to the parent chat — the assembly.&lt;/p&gt;

&lt;p&gt;In other words, this isn't a story about "I asked an AI and waited 4 days for an app." It's a story about a human orchestrating 53 concrete cycles of judgment and execution, and driving them to completion in 4 days.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Failed
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Room + kapt on AGP 9.0.&lt;/strong&gt; An incompatibility impossible to predict without Android experience. 6 prompts to evaluate three options (downgrade AGP, migrate to KSP, abandon Room), landing on SharedPreferences. Time lost: about 2 hours.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Vico chart library.&lt;/strong&gt; The Design AI recommended it. I was convinced and chose it. But when I asked the Implementation AI which library it was more proficient with, it said MPAndroidChart. Because we checked before implementation started, we changed course with zero rework. Lesson: verify the Implementation AI's proficiency before committing to a library.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sony Life CSV discontinuation.&lt;/strong&gt; The data source assumed in the design simply didn't exist. Pivoting to HTML scraping and handling the complex colspan table structure took 4 prompts.&lt;/p&gt;

&lt;p&gt;None of these failures derailed the project, because the two-layer protocol contained them. When we hit a wall, the Implementation AI didn't try to thrash its way through alone — it investigated and reported. The Design AI and I discussed options. Only after agreement did we proceed with a fix. This "investigate first, discuss, then fix" flow kept the damage contained and the loss-cutting decisions fast.&lt;/p&gt;

&lt;h2&gt;
  
  
  This Wasn't a One-Off
&lt;/h2&gt;

&lt;p&gt;Before this project, I built &lt;a href="https://github.com/VTRiot/Bridgiron" rel="noopener noreferrer"&gt;Bridgiron&lt;/a&gt; — a desktop development support tool — using the same protocol. Same two-layer separation, same structured handover documents, same discipline.&lt;/p&gt;

&lt;p&gt;I've been using Claude Code for about 4 weeks. In that time, I've systematized a reusable protocol for AI-augmented development that works across domains and tech stacks.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;How this protocol was born is &lt;a href="https://dev.to/raio/the-protocol-was-born-from-wreckage-how-i-learned-to-stop-trusting-ai-and-start-engineering-it-bn1"&gt;already written&lt;/a&gt;. It's the story of what I learned from CanAna's scorched earth, and what rules I established from the wreckage.&lt;/p&gt;

&lt;p&gt;Next, I'll write about Bridgiron — the tool that supports the protocol. After that, I'll dig into why the V-model development philosophy from 15 years of automotive ECU work translates directly to AI-augmented development.&lt;/p&gt;

&lt;p&gt;If you feel like "AI improves development speed but quality won't stabilize" — it might not be a prompt problem. It might be a process problem.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Control systems engineer, 15 years in motorcycle ECU development. Specializing in traction control and quickshifter logic design for European sport bike manufacturers. Currently systematizing AI-augmented development workflows and documenting what works.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Series:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Article 1: You're reading it&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://dev.to/raio/the-protocol-was-born-from-wreckage-how-i-learned-to-stop-trusting-ai-and-start-engineering-it-bn1"&gt;Article 2: The Protocol Was Born from Wreckage&lt;/a&gt; — How the protocol was born&lt;/li&gt;
&lt;li&gt;Article 3: Bridgiron (coming soon)&lt;/li&gt;
&lt;li&gt;Article 4: Control Engineer × AI (coming soon)&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>claude</category>
      <category>ai</category>
      <category>android</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
