<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Wrought</title>
    <description>The latest articles on Forem by Wrought (@usewrought).</description>
    <link>https://forem.com/usewrought</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3832297%2Fbeed702f-533b-4732-9612-03c5d1933bff.png</url>
      <title>Forem: Wrought</title>
      <link>https://forem.com/usewrought</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/usewrought"/>
    <language>en</language>
    <item>
      <title>I Built a Real-Time Artemis II 3D Tracker in One Session — Here's the Engineering Pipeline That Made It Possible</title>
      <dc:creator>Wrought</dc:creator>
      <pubDate>Fri, 03 Apr 2026 22:14:47 +0000</pubDate>
      <link>https://forem.com/usewrought/i-built-a-real-time-artemis-ii-3d-tracker-in-one-session-heres-the-engineering-pipeline-that-1h11</link>
      <guid>https://forem.com/usewrought/i-built-a-real-time-artemis-ii-3d-tracker-in-one-session-heres-the-engineering-pipeline-that-1h11</guid>
      <description>&lt;p&gt;On April 1, 2026, four astronauts launched aboard Orion on Artemis II — humanity's first crewed voyage beyond low Earth orbit since Apollo 17 in 1972.&lt;/p&gt;

&lt;p&gt;I wanted to track it. Not on a static NASA page. Not on someone else's stream overlay. I wanted an interactive 3D visualization with real telemetry, in my browser, that I built myself.&lt;/p&gt;

&lt;p&gt;Six hours later, all in one afternoon, I had one. Live at &lt;a href="https://artemis-tracker-murex.vercel.app" rel="noopener noreferrer"&gt;artemis-tracker-murex.vercel.app&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh75apxj5r0uvek29gm1d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh75apxj5r0uvek29gm1d.png" alt="Plan view of the Artemis II tracker showing the full free-return trajectory from Earth to Moon" width="800" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;47 files. ~8,000 lines of TypeScript. 15 unit tests. 5 serverless API proxies. Degree-8 Lagrange interpolation at 60fps. An AI mission chatbot. Deep Space Network status. Deployed on Vercel.&lt;/p&gt;

&lt;p&gt;Built in a single session using &lt;a href="https://claude.ai/code" rel="noopener noreferrer"&gt;Claude Code&lt;/a&gt; with a structured engineering pipeline called &lt;a href="https://wrought-web.vercel.app" rel="noopener noreferrer"&gt;Wrought&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This post isn't about "look what AI can do." It's about what happens when you give an AI agent &lt;strong&gt;engineering discipline&lt;/strong&gt; instead of just a prompt.&lt;/p&gt;




&lt;h2&gt;What the App Does&lt;/h2&gt;

&lt;p&gt;ARTEMIS is a real-time 3D mission tracker that combines three NASA data sources into one interactive visualization:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;OEM Ephemeris Files&lt;/strong&gt; from NASA's AROW system — actual spacecraft state vectors (position and velocity) at 4-minute intervals, interpolated to 60fps using Lagrange polynomials&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deep Space Network XML&lt;/strong&gt; — live antenna status from Goldstone, Canberra, and Madrid&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;JPL Horizons API&lt;/strong&gt; — Moon position in the same J2000 reference frame as the spacecraft data&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The result: you can watch Orion move along its trajectory in real time, see its speed, distance from Earth, distance to the Moon, and which ground stations are currently talking to it.&lt;/p&gt;
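&lt;p&gt;The interpolation step can be sketched roughly as follows; this is an illustrative version, not the app's actual code, and the &lt;code&gt;StateVector&lt;/code&gt; shape is assumed:&lt;/p&gt;

```typescript
// Illustrative Lagrange interpolation over ephemeris samples (hypothetical types).
interface StateVector {
  epochMs: number;                       // sample time, ms since epoch
  position: [number, number, number];    // km, J2000 Earth-centered
}

// Evaluate the Lagrange polynomial through `samples` at time `t` for one axis.
function lagrangeAxis(samples: StateVector[], t: number, axis: 0 | 1 | 2): number {
  let sum = 0;
  for (let i = 0; i < samples.length; i++) {
    let basis = 1;
    for (let j = 0; j < samples.length; j++) {
      if (j === i) continue;
      basis *= (t - samples[j].epochMs) / (samples[i].epochMs - samples[j].epochMs);
    }
    sum += basis * samples[i].position[axis];
  }
  return sum;
}
```

&lt;p&gt;Degree 8 means nine samples per evaluation; because the polynomial passes through every sample exactly, the 4-minute gaps between state vectors are bridged smoothly.&lt;/p&gt;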

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhzx3ymrhsh2vo20jz26i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhzx3ymrhsh2vo20jz26i.png" alt="Earth view with Orion's trajectory arcing toward the Moon" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There's also an AI chatbot powered by Gemini 2.5 Flash. Common questions like "How long is the mission?" resolve instantly via client-side quick-answer buttons — no API call needed. Free-text questions are answered by the LLM from a curated knowledge base embedded in the system prompt.&lt;/p&gt;
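&lt;p&gt;In sketch form, the quick-answer path is just a lookup that short-circuits the API; the question and answer strings below are illustrative, not the app's actual copy:&lt;/p&gt;

```typescript
// Hypothetical sketch of the quick-answer path: exact-match lookup first,
// fall through to the LLM only when nothing matches.
const quickAnswers: Record<string, string> = {
  "How long is the mission?": "About 10 days.",
};

function tryQuickAnswer(question: string): string | null {
  return quickAnswers[question] ?? null; // null means: send the question to the API
}
```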

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Finox52pop3k388vsnv39.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Finox52pop3k388vsnv39.png" alt="AI mission chatbot with quick-answer buttons" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;The Problem with "Vibe Coding"&lt;/h2&gt;

&lt;p&gt;Every week, someone posts "I built X in 20 minutes with AI." And every week, the comments are the same: &lt;em&gt;Does it have tests? How's the error handling? What happens when the API is down? Did you actually read the code?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;These are fair questions. The dirty secret of AI-assisted speed runs is that most of them produce code that works for a demo and breaks in production. The AI generates plausible code. You accept it. Ship it. Move on.&lt;/p&gt;

&lt;p&gt;The issue isn't speed — it's the absence of process. No design review. No architecture decision records. No code review. No root cause analysis when something goes wrong. Just "prompt → code → deploy → pray."&lt;/p&gt;

&lt;p&gt;I wanted to show there's a better way.&lt;/p&gt;




&lt;h2&gt;The Pipeline&lt;/h2&gt;

&lt;p&gt;Wrought is a structured engineering pipeline I built for Claude Code. It enforces a specific sequence of skills for every significant piece of work:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Finding → Research → Design → Blueprint → Implementation → Code Review&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Each stage produces a documented artifact. Each artifact feeds the next stage. The AI agent can't skip ahead — the pipeline is the process.&lt;/p&gt;

&lt;p&gt;Here's how it played out for ARTEMIS.&lt;/p&gt;

&lt;h3&gt;Stage 1: Finding&lt;/h3&gt;

&lt;p&gt;Every task starts with a &lt;strong&gt;Findings Tracker&lt;/strong&gt; — a structured document that captures what you're building and why, and tracks the work through every stage.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Finding: Interactive Artemis II Live Visualization
Type: Gap
Severity: High
Rationale: Artemis II launched 2026-04-01. No unified interactive tracker exists.
  NASA data is scattered across OEM files, XML feeds, and on-demand APIs.
  Mission window is ~10 days — time-sensitive opportunity.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This isn't bureaucracy. It's cross-session memory. If I stop working and come back tomorrow, the tracker tells me exactly where I left off and what decisions were already made.&lt;/p&gt;

&lt;h3&gt;Stage 2: Research&lt;/h3&gt;

&lt;p&gt;Before building a chatbot, I needed to decide &lt;em&gt;how&lt;/em&gt; to build it. The research skill evaluated three approaches:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Approach&lt;/th&gt;
&lt;th&gt;Pros&lt;/th&gt;
&lt;th&gt;Cons&lt;/th&gt;
&lt;th&gt;Verdict&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;FAQ Bot (pattern matching)&lt;/td&gt;
&lt;td&gt;Zero cost, instant&lt;/td&gt;
&lt;td&gt;Can't handle novel questions&lt;/td&gt;
&lt;td&gt;Too rigid&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;System Prompt + LLM&lt;/td&gt;
&lt;td&gt;Simple, full knowledge in context&lt;/td&gt;
&lt;td&gt;Per-query API cost&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Selected&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;RAG (vector search)&lt;/td&gt;
&lt;td&gt;Scales to large corpora&lt;/td&gt;
&lt;td&gt;Massive overengineering for ~3K tokens of facts&lt;/td&gt;
&lt;td&gt;Overkill&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The entire Artemis II knowledge base — mission timeline, crew bios, spacecraft specs, orbital mechanics — fits in about 3,000 tokens. That's smaller than this blog post. Building a RAG pipeline with embeddings, a vector database, and chunking strategy for 3,000 tokens of content would have been absurd.&lt;/p&gt;
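&lt;p&gt;That's the appeal of the selected approach: it fits in a dozen lines. A minimal sketch, with placeholder facts and a hypothetical &lt;code&gt;buildChatRequest&lt;/code&gt; helper:&lt;/p&gt;

```typescript
// Illustrative system-prompt stuffing: the whole knowledge base rides along in
// the system prompt; there is no retrieval step. Facts shown are placeholders.
const KNOWLEDGE_BASE = `Artemis II: crewed free-return mission around the Moon.
Launched 2026-04-01 aboard Orion. Mission window roughly 10 days.`;

interface ChatMessage { role: "user" | "assistant"; content: string; }

function buildChatRequest(history: ChatMessage[], question: string) {
  return {
    system: `Answer only from the mission facts below.\n\n${KNOWLEDGE_BASE}`,
    messages: [...history, { role: "user" as const, content: question }],
  };
}
```

&lt;p&gt;Every request carries the full ~3,000-token corpus; at that size the "waste" is cheaper than operating a retrieval stack.&lt;/p&gt;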

&lt;p&gt;The research also surfaced that Gemini 2.5 Flash has a genuinely free tier (15 requests per minute, 1,000 per day, no credit card). Claude would have been higher quality, but the $0 budget constraint made Gemini the pragmatic choice.&lt;/p&gt;

&lt;h3&gt;Stage 3: Design&lt;/h3&gt;

&lt;p&gt;The design stage evaluated four architecture options with weighted scoring:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Option&lt;/th&gt;
&lt;th&gt;Stack&lt;/th&gt;
&lt;th&gt;Score&lt;/th&gt;
&lt;th&gt;Why Not&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;A: Vite + R3F&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Vite, React, React Three Fiber, Vercel&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;8.6/10&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Selected&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;B: Vite + 2D&lt;/td&gt;
&lt;td&gt;Vite, React, Canvas 2D&lt;/td&gt;
&lt;td&gt;6.2/10&lt;/td&gt;
&lt;td&gt;No depth perception for 3D trajectory&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;C: Next.js + R3F&lt;/td&gt;
&lt;td&gt;Next.js, React, R3F&lt;/td&gt;
&lt;td&gt;7.8/10&lt;/td&gt;
&lt;td&gt;SSR adds hydration complexity for a pure client app&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;D: Vanilla Three.js&lt;/td&gt;
&lt;td&gt;Three.js, no framework&lt;/td&gt;
&lt;td&gt;5.4/10&lt;/td&gt;
&lt;td&gt;Manual state management, no HMR for scene&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The design document also specified the data pipeline for all four NASA sources, the coordinate system (J2000 Earth-centered, 1 unit = 10,000 km), the HUD layout, and camera presets.&lt;/p&gt;
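&lt;p&gt;That coordinate convention amounts to a single division. A minimal sketch of the stated scale (the helper name is mine, not the app's):&lt;/p&gt;

```typescript
// Stated convention: J2000 Earth-centered frame, 1 scene unit = 10,000 km.
const KM_PER_UNIT = 10_000;

function toSceneUnits(kmVec: [number, number, number]): [number, number, number] {
  return [kmVec[0] / KM_PER_UNIT, kmVec[1] / KM_PER_UNIT, kmVec[2] / KM_PER_UNIT];
}
// At this scale Earth's radius (~6,371 km) is ~0.64 units and the Moon's mean
// distance (~384,400 km) is ~38 units, both comfortably inside float precision.
```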

&lt;p&gt;Key insight: &lt;strong&gt;Next.js was actively wrong for this project.&lt;/strong&gt; There's no content to server-render. No SEO to optimize. No dynamic routes. It's a WebGL canvas that talks to APIs. Vite gives you sub-second HMR and a smaller bundle without the hydration tax.&lt;/p&gt;

&lt;h3&gt;Stage 4: Blueprint&lt;/h3&gt;

&lt;p&gt;The blueprint translated the design into an implementation spec: 48 files across 8 phases, with acceptance criteria for each phase. It specified the exact file structure, the interfaces for the OEM parser and interpolator, the Zustand store shape, and the serverless proxy signatures.&lt;/p&gt;

&lt;p&gt;This is where the pipeline pays dividends. By the time implementation starts, the AI agent has:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A clear architecture to follow (not just a vague prompt)&lt;/li&gt;
&lt;li&gt;Specific interfaces to implement (not ad-hoc decisions mid-code)&lt;/li&gt;
&lt;li&gt;An explicit scope boundary (what to build AND what not to build)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Stage 5: Implementation&lt;/h3&gt;

&lt;p&gt;One iteration. All 15 tests passing. Build succeeds. Deployed to Vercel.&lt;/p&gt;

&lt;p&gt;That's not AI magic — that's the upstream work paying off. When the design document specifies "Zustand store with scalar selectors to avoid re-render storms" and the blueprint defines the exact store interface, the implementation becomes an execution problem, not a design problem.&lt;/p&gt;

&lt;h3&gt;Stage 6: Code Review (This Is Where It Gets Interesting)&lt;/h3&gt;

&lt;p&gt;After implementation, the pipeline runs a &lt;strong&gt;multi-agent code review&lt;/strong&gt; called &lt;code&gt;/forge-review&lt;/code&gt;. Four specialized agents review the code in parallel:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Complexity Analyst&lt;/strong&gt; — algorithmic time/space complexity&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Paradigm Enforcer&lt;/strong&gt; — consistency within files&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Efficiency Sentinel&lt;/strong&gt; — performance anti-patterns&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Structure Reviewer&lt;/strong&gt; — access patterns vs. data structure selection&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The first review found &lt;strong&gt;5 critical issues&lt;/strong&gt;:&lt;/p&gt;

&lt;h4&gt;Critical 1: 60fps Re-Render Storm&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;The HUD reads spacecraft state as an object from Zustand.
Zustand creates a new object reference every update.
React re-renders all HUD components every frame.
At 60fps, that's 60 full React reconciliation passes per second.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The fix: DataDriver writes position to a shared &lt;code&gt;useRef&lt;/code&gt; for the 3D scene (zero React overhead), and throttles Zustand store updates to 4Hz for the HUD. Each HUD card uses a scalar selector (&lt;code&gt;state =&amp;gt; state.spacecraft.speed&lt;/code&gt;) instead of selecting the whole object.&lt;/p&gt;
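&lt;p&gt;The throttling half of that fix can be sketched framework-free; the 250ms constant follows from the 4Hz figure above, and the helper itself is hypothetical:&lt;/p&gt;

```typescript
// Hypothetical sketch: write to a ref every frame, push to the store at 4Hz
// so HUD components re-render at most four times per second.
const THROTTLE_MS = 250; // 4Hz
let lastStoreUpdate = -Infinity;

function onFrame(
  nowMs: number,
  speed: number,
  updateStore: (s: number) => void,
  ref: { current: number },
): void {
  ref.current = speed;                       // 3D scene reads this: zero React overhead
  if (nowMs - lastStoreUpdate >= THROTTLE_MS) {
    lastStoreUpdate = nowMs;
    updateStore(speed);                      // HUD sees at most 4 updates per second
  }
}
```

&lt;p&gt;The scalar-selector half is independent: subscribing to &lt;code&gt;state =&amp;gt; state.spacecraft.speed&lt;/code&gt; means a card only re-renders when that one number actually changes.&lt;/p&gt;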

&lt;h4&gt;Critical 2: O(n) Linear Scan in Hot Path&lt;/h4&gt;

&lt;p&gt;The Lagrange interpolator was doing a linear search through all state vectors to find the nearest data point. With a 3,232-line OEM file at 60fps, that's ~194,000 comparisons per second.&lt;/p&gt;

&lt;p&gt;The fix: binary search. O(log n). The data is already sorted by epoch.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Binary search for closest data point&lt;/span&gt;
&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;lo&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;hi&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;vectors&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;while &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;lo&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nx"&gt;hi&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;mid&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;lo&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;hi&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;vectors&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;mid&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;epochMs&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nx"&gt;t&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nx"&gt;lo&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;mid&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="nx"&gt;hi&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;mid&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;Critical 3: Per-Frame Memory Allocations&lt;/h4&gt;

&lt;p&gt;The interpolator was calling &lt;code&gt;.slice()&lt;/code&gt; and &lt;code&gt;.map()&lt;/code&gt; inside the hot loop — allocating new arrays every frame. At 60fps, that's 120+ garbage-collected arrays per second.&lt;/p&gt;

&lt;p&gt;The fix: module-level reusable buffer, direct array indexing instead of &lt;code&gt;.map()&lt;/code&gt;, and accepting &lt;code&gt;epochMs&lt;/code&gt; as a number instead of creating Date objects.&lt;/p&gt;
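&lt;p&gt;A minimal sketch of the buffer-reuse pattern (names are illustrative):&lt;/p&gt;

```typescript
// Hypothetical sketch of the allocation fix: one module-level buffer,
// refilled in place every frame instead of sliced into a fresh array.
const WINDOW = 9; // degree-8 Lagrange uses 9 sample points
const epochWindow = new Float64Array(WINDOW);

function fillEpochWindow(epochs: Float64Array, start: number): Float64Array {
  for (let i = 0; i < WINDOW; i++) epochWindow[i] = epochs[start + i];
  return epochWindow; // same array every call: nothing for the GC to collect
}
```

&lt;p&gt;This is safe precisely because the browser's main thread is single-threaded; the re-review flagged the shared buffer and accepted it on those grounds.&lt;/p&gt;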

&lt;h4&gt;Critical 4: Stale Closure in useChat&lt;/h4&gt;

&lt;p&gt;The chat hook captured &lt;code&gt;messages&lt;/code&gt; in a closure at mount time. When a user sent a second message, the API call used the stale initial array — losing the conversation history.&lt;/p&gt;

&lt;p&gt;The fix: &lt;code&gt;useRef&lt;/code&gt; tracking the latest messages array, with &lt;code&gt;useCallback&lt;/code&gt; reading from the ref instead of the closure.&lt;/p&gt;
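&lt;p&gt;Stripped of React specifics, the ref pattern looks like this (a hypothetical, framework-free illustration):&lt;/p&gt;

```typescript
// Read the latest history through a mutable ref instead of a value captured
// when the callback was created; the ref always points at the current array.
function makeChat() {
  const messagesRef: { current: string[] } = { current: [] };
  const send = (text: string): string[] => {
    const history = [...messagesRef.current, text]; // always current, never stale
    messagesRef.current = history;
    return history;
  };
  return { send };
}
```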

&lt;h4&gt;Critical 5: StrictMode Double-Mount Breaking Polls&lt;/h4&gt;

&lt;p&gt;A &lt;code&gt;fetchedRef&lt;/code&gt; guard prevented re-fetching in StrictMode's double-mount cycle, but also broke cleanup — orphaning intervals and timeouts.&lt;/p&gt;

&lt;p&gt;The fix: remove the ref guard entirely. Use &lt;code&gt;AbortController&lt;/code&gt; for cancellation. Clean up all intervals and timeouts in the effect's return function.&lt;/p&gt;
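&lt;p&gt;The same pattern, sketched without React (hypothetical names; the returned teardown function plays the role of the effect's return value):&lt;/p&gt;

```typescript
// Hypothetical sketch of the polling cleanup: AbortController cancels any
// in-flight request, and every timer is cleared by the returned teardown.
function startPolling(url: string, intervalMs: number, onData: (d: unknown) => void): () => void {
  const controller = new AbortController();
  const poll = () =>
    fetch(url, { signal: controller.signal })
      .then(r => r.json())
      .then(onData)
      .catch(() => { /* aborted or network error: ignore */ });
  poll();
  const id = setInterval(poll, intervalMs);
  return () => {
    controller.abort();   // cancel any request still in flight
    clearInterval(id);    // no orphaned timers after unmount
  };
}
```

&lt;p&gt;Under StrictMode this simply runs twice: mount, immediate teardown, mount again. Because teardown is complete, the double cycle is harmless.&lt;/p&gt;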




&lt;h3&gt;The Re-Review&lt;/h3&gt;

&lt;p&gt;After fixing all five criticals through an RCA (root cause analysis) cycle, the pipeline ran a second code review:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Severity&lt;/th&gt;
&lt;th&gt;First Review&lt;/th&gt;
&lt;th&gt;Re-Review&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Critical&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;0&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Warning&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Suggestion&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The two remaining warnings were both benign edge cases (a module-level buffer that's safe in single-threaded browsers, and a ref timing gap that's correctly compensated for). Zero criticals.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This is the pipeline's value proposition.&lt;/strong&gt; Without the review stage, those five bugs would have shipped. The linear scan and re-render storm would have made the app noticeably janky on mobile. The stale closure would have broken multi-turn chat. And you'd never know until users complained.&lt;/p&gt;




&lt;h2&gt;The Numbers&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Source files&lt;/td&gt;
&lt;td&gt;47&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Lines of code&lt;/td&gt;
&lt;td&gt;~8,000&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Unit tests&lt;/td&gt;
&lt;td&gt;15&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Implementation iterations&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Critical bugs caught&lt;/td&gt;
&lt;td&gt;5 (all fixed)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Build time&lt;/td&gt;
&lt;td&gt;~2 seconds&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Bundle size&lt;/td&gt;
&lt;td&gt;149KB app + 1.1MB Three.js/R3F&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;NASA data sources&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Serverless proxies&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Pipeline artifacts&lt;/td&gt;
&lt;td&gt;12 documents&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Production deploys&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;What I Learned&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Upstream design eliminates downstream churn.&lt;/strong&gt; The implementation completed in one iteration because the blueprint answered most design questions in advance. "What shape is the Zustand store?" isn't a question you want the AI deciding mid-implementation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Code review catches real bugs, not style nits.&lt;/strong&gt; The forge-review found a 60fps re-render storm and an O(n) hot path — genuine performance issues that would have shipped silently without the review stage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. RAG is the new microservices.&lt;/strong&gt; Not everything needs it. A 3,000-token knowledge base doesn't need embeddings, vector search, and a retrieval pipeline. System prompt stuffing is boring and effective.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. The audit trail is the product.&lt;/strong&gt; Every decision — why Vite over Next.js, why Gemini over Claude, why Lagrange over Runge-Kutta — is documented in the pipeline artifacts. Six months from now, when someone asks "why did you build it this way?", the answer exists in the design doc, not in someone's memory.&lt;/p&gt;




&lt;h2&gt;Try It&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Live app&lt;/strong&gt;: &lt;a href="https://artemis-tracker-murex.vercel.app" rel="noopener noreferrer"&gt;artemis-tracker-murex.vercel.app&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Source code&lt;/strong&gt;: &lt;a href="https://github.com/fluxforgeai/ARTEMIS" rel="noopener noreferrer"&gt;github.com/fluxforgeai/ARTEMIS&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Wrought pipeline&lt;/strong&gt;: &lt;a href="https://wrought-web.vercel.app" rel="noopener noreferrer"&gt;wrought-web.vercel.app&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The full pipeline artifacts — finding, research, design, blueprint, reviews, RCAs — are all in the repo's &lt;code&gt;docs/&lt;/code&gt; directory. The process is as open as the code.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Built with &lt;a href="https://claude.ai/code" rel="noopener noreferrer"&gt;Claude Code&lt;/a&gt; + &lt;a href="https://wrought-web.vercel.app" rel="noopener noreferrer"&gt;Wrought&lt;/a&gt; by &lt;a href="https://fluxforge.ai" rel="noopener noreferrer"&gt;FluxForge AI&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>space</category>
      <category>react</category>
    </item>
  </channel>
</rss>
