<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Ola Prøis</title>
    <description>The latest articles on Forem by Ola Prøis (@olaproeis).</description>
    <link>https://forem.com/olaproeis</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3724322%2F96d511c9-a50a-4075-8979-ee72afc2d03b.jpg</url>
      <title>Forem: Ola Prøis</title>
      <link>https://forem.com/olaproeis</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/olaproeis"/>
    <language>en</language>
    <item>
      <title>Is Dev.to a community or a club? I'm leaving, but first, let me explain.</title>
      <dc:creator>Ola Prøis</dc:creator>
      <pubDate>Sun, 22 Mar 2026 15:58:12 +0000</pubDate>
      <link>https://forem.com/olaproeis/is-devto-a-community-or-a-club-im-leaving-but-first-let-me-explain-37ah</link>
      <guid>https://forem.com/olaproeis/is-devto-a-community-or-a-club-im-leaving-but-first-let-me-explain-37ah</guid>
      <description>&lt;p&gt;After some months of posting on dev.to, I'm moving on. Not out of frustration (well, maybe a little), but because I think this platform has a structural problem worth naming.&lt;/p&gt;

&lt;p&gt;I write about a range of topics and have joined a few of the challenges. My posts take real effort; they're not "10 JavaScript tips" listicles. And yet, week after week, the same handful of authors dominate the trending section. The weekly top badges? Same names. The homepage? You already know who'll be there.&lt;/p&gt;

&lt;p&gt;I don't think those authors are doing anything wrong. But I do think the platform's reaction-based ranking creates a feedback loop: popular authors get more visibility → more reactions → more visibility. New or niche voices don't stand a chance unless they game the same system.&lt;/p&gt;

&lt;p&gt;Dev.to markets itself as an inclusive community, but from where I'm standing, it's started to feel more like a club where membership is measured in follower count.&lt;/p&gt;

&lt;p&gt;Some honest questions before I go:&lt;/p&gt;

&lt;p&gt;Does anyone else experience this, especially writing in less mainstream areas like Rust, systems, or DevOps?&lt;/p&gt;

&lt;p&gt;Is there a better platform for technical writing that actually surfaces quality content? (Hashnode? Lobsters? A personal blog?)&lt;/p&gt;

&lt;p&gt;I'm not here to burn bridges. I genuinely hope the platform evolves. But for now, I'm putting my energy into spaces that feel less like shouting into a void.&lt;/p&gt;

</description>
      <category>devto</category>
      <category>discuss</category>
    </item>
    <item>
      <title>I Built a Quantum Circuit Simulator Without Understanding Quantum Physics</title>
      <dc:creator>Ola Prøis</dc:creator>
      <pubDate>Tue, 17 Mar 2026 13:12:51 +0000</pubDate>
      <link>https://forem.com/olaproeis/i-built-a-quantum-circuit-simulator-without-understanding-quantum-physics-m5g</link>
      <guid>https://forem.com/olaproeis/i-built-a-quantum-circuit-simulator-without-understanding-quantum-physics-m5g</guid>
      <description>&lt;p&gt;A few days ago I set myself a weird challenge. I'm not a quantum physicist. I'm not a Rust developer. But I wanted to know: &lt;strong&gt;can you build something genuinely useful in a technical domain you don't personally understand, by directing AI carefully and being honest about what you don't know?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The answer, I think, is yes. But it's more complicated than that.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Gap I Saw
&lt;/h2&gt;

&lt;p&gt;I was looking at quantum computing tools and noticed something odd. Every visual quantum circuit editor is either a web app, a Python library that generates static plots, or a cloud-based service. Nothing just runs as a native binary on your desktop. No install process. No Python environment to manage. No browser required.&lt;/p&gt;

&lt;p&gt;For someone learning quantum computing, that friction matters. You want to drag a gate onto a wire, see the Bloch sphere update in real time, step through the circuit gate by gate, and understand what's happening. You shouldn't need to &lt;code&gt;pip install&lt;/code&gt; anything or log into IBM's cloud.&lt;/p&gt;

&lt;p&gt;So I decided to build that. Even though I don't really understand quantum mechanics.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;Visual Editor&lt;/th&gt;
&lt;th&gt;Native Desktop&lt;/th&gt;
&lt;th&gt;Open Source&lt;/th&gt;
&lt;th&gt;Real-time Sim&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;KetGrid&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Qiskit Composer&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;❌ (web)&lt;/td&gt;
&lt;td&gt;Partial&lt;/td&gt;
&lt;td&gt;Cloud&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Quirk&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;❌ (web)&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅ (JS)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;QPanda&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  How It Was Built
&lt;/h2&gt;

&lt;p&gt;KetGrid is written entirely in Rust using &lt;a href="https://github.com/emilk/egui" rel="noopener noreferrer"&gt;egui&lt;/a&gt; for the GUI. &lt;strong&gt;The entire codebase was generated through AI-assisted development.&lt;/strong&gt; I provided the direction, the architecture decisions, the requirements, and the review. The AI wrote the code.&lt;/p&gt;

&lt;p&gt;I want to be completely transparent about this. It's in the README. It's not something I'm trying to hide, and I think it's actually part of what makes this interesting as an experiment.&lt;/p&gt;

&lt;p&gt;The architecture ended up as a clean Cargo workspace with three crates:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvsv9x2fdxgntdtrq09s3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvsv9x2fdxgntdtrq09s3.png" alt="KetGrid Architecture: three-crate Cargo workspace" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;ketgrid-core&lt;/code&gt;&lt;/strong&gt; handles the circuit data model, gate definitions, and serialization formats&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;ketgrid-sim&lt;/code&gt;&lt;/strong&gt; runs the state vector simulation engine with Rayon parallelism and gate fusion&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;ketgrid-gui&lt;/code&gt;&lt;/strong&gt; is the egui application: the editor, visualizations, and everything you interact with&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The sim crate uses &lt;a href="https://github.com/rayon-rs/rayon" rel="noopener noreferrer"&gt;rayon&lt;/a&gt; for parallelization on larger circuits and &lt;a href="https://nalgebra.org/" rel="noopener noreferrer"&gt;nalgebra&lt;/a&gt; for complex matrix operations.&lt;/p&gt;
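&lt;p&gt;To make that concrete: at its core, a state-vector engine walks the amplitude array in pairs and applies a 2×2 matrix per single-qubit gate. Here's a deliberately simplified, std-only sketch of that inner loop. The names are mine, not KetGrid's, and it uses plain &lt;code&gt;(f64, f64)&lt;/code&gt; tuples instead of nalgebra's complex types, with no Rayon parallelism:&lt;/p&gt;

```rust
// Simplified sketch of a state-vector update for one single-qubit gate.
// Amplitudes are (re, im) tuples; a real engine would use proper complex
// types and parallel iteration over the amplitude pairs.

type C = (f64, f64);

fn cmul(a: C, b: C) -> C {
    (a.0 * b.0 - a.1 * b.1, a.0 * b.1 + a.1 * b.0)
}

fn cadd(a: C, b: C) -> C {
    (a.0 + b.0, a.1 + b.1)
}

/// Apply a 2x2 gate matrix [[m00, m01], [m10, m11]] to `target` qubit.
fn apply_gate(state: &mut [C], gate: [[C; 2]; 2], target: usize) {
    let stride: usize = 1 << target;
    for i in 0..state.len() {
        // Visit each (|..0..>, |..1..>) amplitude pair exactly once.
        if i & stride == 0 {
            let a0 = state[i];
            let a1 = state[i + stride];
            state[i] = cadd(cmul(gate[0][0], a0), cmul(gate[0][1], a1));
            state[i + stride] = cadd(cmul(gate[1][0], a0), cmul(gate[1][1], a1));
        }
    }
}

fn main() {
    // Hadamard on |0>: both amplitudes become 1/sqrt(2).
    let h = 1.0 / 2f64.sqrt();
    let hadamard = [[(h, 0.0), (h, 0.0)], [(h, 0.0), (-h, 0.0)]];
    let mut state = vec![(1.0, 0.0), (0.0, 0.0)];
    apply_gate(&mut state, hadamard, 0);
    println!("{:?}", state);
}
```

&lt;p&gt;The sim crate's Rayon parallelism and gate fusion layer on top of this same basic pattern.&lt;/p&gt;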




&lt;h2&gt;
  
  
  What It Does Right Now
&lt;/h2&gt;

&lt;p&gt;As of &lt;strong&gt;v0.1.0&lt;/strong&gt;, released today:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmnd0dt4hgn1g25cpc0kn.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmnd0dt4hgn1g25cpc0kn.gif" alt="KetGrid Demo" width="1583" height="1181"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Circuit Editor&lt;/strong&gt;: Drag gates from a palette onto qubit wires and see the state vector update in real time at 60fps. Right-click context menus, undo/redo (100 operations deep), and keyboard shortcuts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bloch Spheres&lt;/strong&gt;: A Bloch sphere for each qubit that updates as you build, computed from the reduced density matrix.&lt;/p&gt;
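&lt;p&gt;For the curious, the single-qubit pure-state case is simple enough to sketch: the Bloch coordinates are the Pauli expectation values of the state a|0⟩ + b|1⟩. This illustrative, std-only snippet (not KetGrid's code) computes them; the app's reduced-density-matrix version also handles entangled qubits, which land inside the sphere:&lt;/p&gt;

```rust
// Bloch vector (x, y, z) of a pure single-qubit state a|0> + b|1>,
// with a and b given as (re, im) tuples.
fn bloch_vector(a: (f64, f64), b: (f64, f64)) -> (f64, f64, f64) {
    // rho_01 = a * conj(b)
    let re01 = a.0 * b.0 + a.1 * b.1;
    let im01 = a.1 * b.0 - a.0 * b.1;
    let x = 2.0 * re01;  // Tr(rho * sigma_x)
    let y = -2.0 * im01; // Tr(rho * sigma_y)
    let z = (a.0 * a.0 + a.1 * a.1) - (b.0 * b.0 + b.1 * b.1); // Tr(rho * sigma_z)
    (x, y, z)
}

fn main() {
    let h = 1.0 / 2f64.sqrt();
    // |+> points along +x; |0> points along +z.
    println!("{:?}", bloch_vector((h, 0.0), (h, 0.0)));
    println!("{:?}", bloch_vector((1.0, 0.0), (0.0, 0.0)));
}
```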

&lt;p&gt;&lt;strong&gt;Step-Through Mode&lt;/strong&gt;: Advance one gate column at a time with playback controls to watch the quantum state evolve. This is where the learning happens.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Probability Visualization&lt;/strong&gt;: Phase-aware histogram coloring with toggleable amplitude tables. Entanglement between qubits is shown directly on the wires themselves.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;21 Built-In Examples&lt;/strong&gt;: Bell states, quantum teleportation, Grover's algorithm, QFT, Deutsch-Jozsa, Bernstein-Vazirani, Simon's, superdense coding, and the three-qubit Shor error correction code. All searchable from a built-in browser.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Export&lt;/strong&gt;: OpenQASM 2.0, Qiskit Python code, SVG circuit diagrams, and a native JSON format.&lt;/p&gt;
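&lt;p&gt;OpenQASM 2.0 export is conceptually the simplest of these: walk the gate list and emit one statement per gate. A minimal, hypothetical sketch (the gate enum and function are illustrative, not KetGrid's actual exporter):&lt;/p&gt;

```rust
// Illustrative OpenQASM 2.0 emitter for a tiny gate set.
enum Gate {
    H(usize),         // Hadamard on a qubit
    Cx(usize, usize), // CNOT (control, target)
}

fn to_qasm(n_qubits: usize, gates: &[Gate]) -> String {
    let mut out = String::from("OPENQASM 2.0;\ninclude \"qelib1.inc\";\n");
    out.push_str(&format!("qreg q[{}];\n", n_qubits));
    for g in gates {
        match g {
            Gate::H(q) => out.push_str(&format!("h q[{}];\n", q)),
            Gate::Cx(c, t) => out.push_str(&format!("cx q[{}],q[{}];\n", c, t)),
        }
    }
    out
}

fn main() {
    // A Bell-state circuit: H on q0, then CNOT q0 -> q1.
    print!("{}", to_qasm(2, &[Gate::H(0), Gate::Cx(0, 1)]));
}
```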

&lt;p&gt;The current ceiling is around &lt;strong&gt;14 qubits&lt;/strong&gt; in real time. GPU acceleration is planned for v0.3 to push that to 25+.&lt;/p&gt;
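&lt;p&gt;The ceiling comes straight from memory and compute scaling: a state vector holds 2^n complex amplitudes, so every extra qubit doubles the footprint. A quick back-of-the-envelope, assuming two &lt;code&gt;f64&lt;/code&gt;s per amplitude:&lt;/p&gt;

```rust
// State-vector memory grows as 2^n: each added qubit doubles it.
// Assumes 16 bytes per amplitude (two f64s: real and imaginary parts).
fn state_vector_bytes(n_qubits: u32) -> u64 {
    (1u64 << n_qubits) * 16
}

fn main() {
    println!("14 qubits: {} KB", state_vector_bytes(14) / 1024);        // 256 KB
    println!("25 qubits: {} MB", state_vector_bytes(25) / 1024 / 1024); // 512 MB
}
```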




&lt;h2&gt;
  
  
  What I'm Genuinely Unsure About
&lt;/h2&gt;

&lt;p&gt;Here's where I need to be honest. &lt;strong&gt;I don't know if this is useful.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I &lt;em&gt;think&lt;/em&gt; the teaching and learning use case is real. Step-through mode with live Bloch spheres is something I haven't seen anywhere else in a native offline app. The 14-qubit ceiling is fine for Bell states, teleportation, Grover, and QFT.&lt;/p&gt;

&lt;p&gt;But I don't know if the people who would benefit from this will find it, or whether the qubit ceiling is a dealbreaker for them, or whether I'm missing obvious things that would make it more useful.&lt;/p&gt;




&lt;h2&gt;
  
  
  Where I Want to Take It
&lt;/h2&gt;

&lt;p&gt;The roadmap has three major workstreams after v0.1.0:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fimzk274fxp3t0oxkbuce.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fimzk274fxp3t0oxkbuce.png" alt="KetGrid Roadmap - three parallel workstreams" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  GPU Acceleration (Workstream A)
&lt;/h3&gt;

&lt;p&gt;Push from ~14 to 25+ qubits using &lt;a href="https://wgpu.rs/" rel="noopener noreferrer"&gt;wgpu&lt;/a&gt; compute shaders. Cross-platform by default: Vulkan on Linux, DX12 on Windows, Metal on macOS. The crossover point where GPU beats CPU is around 13-15 qubits.&lt;/p&gt;

&lt;h3&gt;
  
  
  Quantum Phenomena Visualization (Workstream B)
&lt;/h3&gt;

&lt;p&gt;Make quantum mechanics &lt;em&gt;visceral&lt;/em&gt;. Animated amplitude flow showing how gate operations redistribute probability across basis states. Measurement collapse as a dramatic visual event. Entanglement propagation: "spooky action at a distance" made visible. A combined "Quantum Playground" mode for circuits ≤6 qubits.&lt;/p&gt;

&lt;h3&gt;
  
  
  Interactive Bell Inequality Experiment (Workstream C)
&lt;/h3&gt;

&lt;p&gt;This is the one I'm most excited about. The idea is a mode where you run a &lt;strong&gt;CHSH test&lt;/strong&gt;, one of the most important experiments in quantum physics. You collect measurement outcomes across multiple shots, watch the S-value rise, and see it &lt;strong&gt;cross the classical limit of 2&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;That crossing is the empirical proof that quantum mechanics cannot be explained by classical hidden variables.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;"Classical physics says S cannot exceed 2. You just measured S = 2.7. Welcome to quantum mechanics."&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;No existing tool lets you run this interactively and watch it happen. I don't know enough about this domain to know if I'm overestimating how compelling that would be. That's partly why I'm writing this.&lt;/p&gt;
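&lt;p&gt;For anyone wondering what the mode would actually compute: the CHSH statistic S is built from four correlation values, one per pair of measurement settings, each estimated from coincidence counts. A minimal sketch with illustrative names (not KetGrid code):&lt;/p&gt;

```rust
// Correlation E for one setting pair, from coincidence counts:
// (n_pp, n_pm, n_mp, n_mm) = counts of outcomes (+,+), (+,-), (-,+), (-,-).
fn correlation(n_pp: u32, n_pm: u32, n_mp: u32, n_mm: u32) -> f64 {
    let total = (n_pp + n_pm + n_mp + n_mm) as f64;
    (n_pp as f64 + n_mm as f64 - n_pm as f64 - n_mp as f64) / total
}

// CHSH statistic from the four setting pairs (a,b), (a,b'), (a',b), (a',b').
// Classically |S| <= 2; quantum mechanics allows up to 2*sqrt(2) ~ 2.83.
fn chsh_s(e_ab: f64, e_abp: f64, e_apb: f64, e_apbp: f64) -> f64 {
    (e_ab - e_abp + e_apb + e_apbp).abs()
}

fn main() {
    // Perfectly correlated counts give E = 1.
    println!("E = {}", correlation(50, 0, 0, 50));
    // Ideal quantum correlations at the standard CHSH angles are +-1/sqrt(2),
    // so S = 4/sqrt(2) = 2*sqrt(2), past the classical limit of 2.
    let e = 1.0 / 2f64.sqrt();
    println!("S = {}", chsh_s(e, -e, e, e));
}
```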




&lt;h2&gt;
  
  
  The AI Development Angle
&lt;/h2&gt;

&lt;p&gt;I want to say something about the process because I think it's worth discussing.&lt;/p&gt;

&lt;p&gt;Building software in a domain you don't understand forces you to think very carefully about what you're asking for. You can't rely on instinct. You have to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Describe behavior precisely&lt;/strong&gt;: no hand-waving&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Review output for correctness&lt;/strong&gt;: even when you can't fully evaluate the implementation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Be honest about what you're missing&lt;/strong&gt;: and design around that honesty&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There were things the AI got wrong that I didn't catch until much later. There are probably things in the current codebase that experienced Rust developers would consider bad patterns. That's part of the journey and I'm open about it.&lt;/p&gt;

&lt;p&gt;What surprised me is how far you can get with this approach. The circuit editor works. The simulation is real. The export formats are correct. &lt;strong&gt;The thing I built actually does what I intended.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  If You Want to Try It
&lt;/h2&gt;

&lt;p&gt;KetGrid is &lt;strong&gt;MIT licensed&lt;/strong&gt;, fully open source, and there are pre-built binaries for Windows, macOS, and Linux on the releases page. No install process, just download and run.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Or build from source&lt;/span&gt;
git clone https://github.com/OlaProeis/KetGrid.git
&lt;span class="nb"&gt;cd &lt;/span&gt;KetGrid
cargo run &lt;span class="nt"&gt;--release&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; ketgrid-gui
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you're a quantum computing researcher, a physics educator, or a Rust developer, and you try it and have opinions, I'd really like to hear them. I'm specifically looking for feedback on whether the tool is actually useful in its current state, or whether specific roadmap items need to happen first before it's worth your time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/OlaProeis/KetGrid" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; | &lt;a href="https://github.com/OlaProeis/KetGrid/releases" rel="noopener noreferrer"&gt;Releases&lt;/a&gt; | &lt;a href="https://github.com/OlaProeis/KetGrid/blob/main/ROADMAP.md" rel="noopener noreferrer"&gt;Roadmap&lt;/a&gt;&lt;/p&gt;

</description>
      <category>rust</category>
      <category>ai</category>
      <category>opensource</category>
      <category>programming</category>
    </item>
    <item>
      <title>From AI Studio to Cursor: Building Archlyze with Google Gemini</title>
      <dc:creator>Ola Prøis</dc:creator>
      <pubDate>Wed, 04 Mar 2026 14:43:48 +0000</pubDate>
      <link>https://forem.com/olaproeis/from-ai-studio-to-cursor-building-archlyze-with-google-gemini-4522</link>
      <guid>https://forem.com/olaproeis/from-ai-studio-to-cursor-building-archlyze-with-google-gemini-4522</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/mlh-built-with-google-gemini-02-25-26"&gt;Built with Google Gemini: Writing Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built with Google Gemini
&lt;/h2&gt;

&lt;p&gt;I built &lt;strong&gt;Archlyze&lt;/strong&gt;, a browser-based AI code analyzer that lets you paste in source code and get an instant, structured analysis: what the code does, potential issues, security concerns, and architectural patterns.&lt;/p&gt;

&lt;p&gt;The entire thing runs without a traditional backend. Gemini &lt;em&gt;is&lt;/em&gt; the backend. The API is called directly from the browser, and I was surprised by how capable and practical that turned out to be for a tool like this. It felt like a different mental model: instead of building a server to orchestrate AI calls, I just... talked directly to the model from the client.&lt;/p&gt;

&lt;p&gt;Archlyze is also getting a significant update today: an &lt;strong&gt;Executive Briefing&lt;/strong&gt; feature, a full on-demand presentation layer powered by Gemini that transforms raw code analysis into a management-ready deck. It includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Executive Translation&lt;/strong&gt;: Gemini rewrites the analysis in plain, jargon-free language for non-technical stakeholders&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Code Walkthrough&lt;/strong&gt;: Side-by-side view with full source and plain English explanations, section by section&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Architecture Diagram&lt;/strong&gt;: Auto-generated Mermaid.js flowchart rendered live in the presentation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security Scorecard&lt;/strong&gt;: Visual risk heatmap with a 0–100 score and per-component health table&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multilingual Briefings&lt;/strong&gt;: Configurable output language (English, Norwegian, Japanese, and others)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Export&lt;/strong&gt;: Download as Markdown, PowerPoint (.pptx), or PDF&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Editable&lt;/strong&gt;: Toggle edit mode to tweak the Markdown before exporting&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It runs entirely on-demand and doesn't touch the standard analysis pipeline.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;👉 &lt;a href="https://dev.to/olaproeis/building-an-ai-code-analyzer-with-google-ai-studio-and-finishing-it-in-cursor-388g"&gt;Read my original build post here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flh9w5n8msi299xpba0y2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flh9w5n8msi299xpba0y2.png" alt="Code Analysis" width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxhu5cd4anltg9ttzd91n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxhu5cd4anltg9ttzd91n.png" alt="Deck" width="800" height="520"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://archlyze.vercel.app/" rel="noopener noreferrer"&gt;https://archlyze.vercel.app/&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Learned
&lt;/h2&gt;

&lt;p&gt;The biggest lesson was about &lt;strong&gt;workflow switching&lt;/strong&gt;. I started the project entirely in Google AI Studio, iterating on the React setup through a lot of trial and error, probably 10+ attempts before things clicked. That repetition wasn't wasted, though; years of working with Cursor have taught me that iteration is just part of the process, not a sign something is wrong.&lt;/p&gt;

&lt;p&gt;At a certain point, when the project had good momentum but was growing in complexity, I switched to Cursor. Not because AI Studio failed me, but because I needed more control over session state and context. Cursor gives me tighter, more predictable context windows and better tooling for managing a growing codebase. The two tools ended up being complementary rather than competing: AI Studio for exploration and prototyping, Cursor for refinement and complexity management.&lt;/p&gt;

&lt;p&gt;The other surprise was how naturally Gemini fits as a "zero-backend" architecture. For the right kind of project, especially developer tools and internal utilities, calling the model directly from the client is a completely legitimate approach, not a hack.&lt;/p&gt;

&lt;h2&gt;
  
  
  Google Gemini Feedback
&lt;/h2&gt;

&lt;p&gt;Honestly impressed, especially early on. Gemini was fast, coherent, and produced genuinely useful output right out of the gate for a project like this.&lt;/p&gt;

&lt;p&gt;Where I ran into friction: &lt;strong&gt;context management over long sessions&lt;/strong&gt;. AI Studio feels designed around a long, continuous conversation, and that works well up to a point. But as the project grew and my prompting got more layered and less structured, I started losing confidence in whether the model was still tracking the full picture. I'm not sure if it was the model, my prompting style, or the lack of a rules/system-prompt setup, but it felt like complexity was the ceiling.&lt;/p&gt;

&lt;p&gt;For focused, well-scoped tasks? Excellent. For managing a multi-session project with evolving requirements? I'd want more control, custom instructions, explicit context resets, and better visibility into what the model is "holding." That's ultimately why I moved to Cursor for the harder parts.&lt;/p&gt;

&lt;p&gt;But the core capability is strong. The speed and quality on the initial build genuinely impressed me, and using it as a live backend for a browser app is an approach I'll absolutely use again when it fits.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>geminireflections</category>
      <category>gemini</category>
    </item>
    <item>
      <title>DEV.to Weekend Challenge Submission - forgeStat</title>
      <dc:creator>Ola Prøis</dc:creator>
      <pubDate>Sun, 01 Mar 2026 19:23:34 +0000</pubDate>
      <link>https://forem.com/olaproeis/devto-weekend-challenge-submission-forgestat-1l6</link>
      <guid>https://forem.com/olaproeis/devto-weekend-challenge-submission-forgestat-1l6</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/weekend-2026-02-28"&gt;DEV Weekend Challenge: Community&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The Community
&lt;/h2&gt;

&lt;p&gt;I built &lt;strong&gt;forgeStat&lt;/strong&gt; for the open-source community, the maintainers, contributors, and users who make GitHub the heartbeat of collaborative software development.&lt;/p&gt;

&lt;p&gt;If you've ever maintained an open-source project, you know the struggle: you juggle GitHub's web UI, email notifications, third-party analytics tools, and browser tabs just to answer basic questions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is my project growing?&lt;/li&gt;
&lt;li&gt;Are issues piling up faster than I can close them?&lt;/li&gt;
&lt;li&gt;Who's contributing and how active are they?&lt;/li&gt;
&lt;li&gt;Are there security vulnerabilities I missed?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This tool is for the solo maintainer burning the midnight oil, the team managing hundreds of repos, and the curious developer who wants to understand any project's health at a glance. The terminal is our natural habitat; why leave it just to check GitHub?&lt;/p&gt;




&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;forgeStat&lt;/strong&gt; is a real-time GitHub repository dashboard that runs entirely in your terminal. It gives you a single-screen view of everything happening in a repository: stars, issues, PRs, contributors, releases, velocity metrics, and security alerts, all without leaving the command line.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Features
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;8 Real-Time Metric Panels&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Stars&lt;/strong&gt;: Sparkline charts showing 30-day, 90-day, and 1-year trends with milestone predictions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Issues&lt;/strong&gt;: Open issues grouped by label, sortable by age and activity&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pull Requests&lt;/strong&gt;: Open, draft, ready, and merged counts with average merge time&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Contributors&lt;/strong&gt;: Top contributors by commits + new contributors tracking&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Releases&lt;/strong&gt;: Release history with publish dates and average release intervals&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Velocity&lt;/strong&gt;: Weekly opened vs closed/merged metrics (4/8/12 week views)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security&lt;/strong&gt;: Dependabot vulnerability alerts broken down by severity&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CI Status&lt;/strong&gt;: GitHub Actions success rate and recent run history&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Repository Health Score (0-100)&lt;/strong&gt;&lt;br&gt;
A comprehensive grade based on four dimensions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Activity (25%): Commit velocity, PR merge rate, CI success&lt;/li&gt;
&lt;li&gt;Community (25%): Contributor diversity, new contributors, issue engagement&lt;/li&gt;
&lt;li&gt;Maintenance (25%): Release cadence, security alerts, health files&lt;/li&gt;
&lt;li&gt;Growth (25%): Star trends, forks, watchers&lt;/li&gt;
&lt;/ul&gt;
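&lt;p&gt;As a sketch, the final grade is just an equal-weight blend of the four dimension scores. This is illustrative (the real work is in the scoring functions behind each dimension, and forgeStat's exact formula may differ):&lt;/p&gt;

```rust
// Equal-weight 0-100 health score from four 0-100 dimension scores,
// mirroring the 25% weights described above (illustrative sketch).
fn health_score(activity: f64, community: f64, maintenance: f64, growth: f64) -> f64 {
    0.25 * activity + 0.25 * community + 0.25 * maintenance + 0.25 * growth
}

fn main() {
    // A repo strong on activity but weak on growth still lands mid-range.
    println!("{}", health_score(90.0, 70.0, 80.0, 40.0));
}
```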

&lt;p&gt;&lt;strong&gt;Interactive TUI Experience&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Zoom mode for full-screen panel details (Enter)&lt;/li&gt;
&lt;li&gt;Mini-map for bird's-eye overview (m)&lt;/li&gt;
&lt;li&gt;Fuzzy finder for quick repo switching (f)&lt;/li&gt;
&lt;li&gt;Diff mode to compare snapshots (d)&lt;/li&gt;
&lt;li&gt;Command palette with Vim-style commands (:)&lt;/li&gt;
&lt;li&gt;Mouse support with draggable panel resizing&lt;/li&gt;
&lt;li&gt;6 built-in themes + custom theme support&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pong mini-game during loading&lt;/strong&gt; - Play while fetching large repos (&amp;gt;5k stars)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;CLI Output Modes&lt;/strong&gt;&lt;br&gt;
When you don't need the full TUI:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;--json&lt;/code&gt; for complete data export&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--summary&lt;/code&gt; for compact status checks&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--report&lt;/code&gt; for markdown health reports&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--watchlist&lt;/code&gt; for multi-repo dashboards&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--compare&lt;/code&gt; for side-by-side repo comparison&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Loading Screen Experience&lt;/strong&gt;&lt;br&gt;
Fetching data from GitHub can take time, especially for popular repos. The loading screen turns this wait into a delightful experience:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Twinkling starfield background&lt;/strong&gt; with subtle animations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real-time progress tracking&lt;/strong&gt; showing which endpoint is being fetched and page-by-page progress for star history&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Animated cyan border&lt;/strong&gt; with pulsing glow effect&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pong mini-game&lt;/strong&gt; automatically appears for repos with &amp;gt;5,000 stars. Use ↑/↓ to play against the AI while you wait!&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auto-refresh every 10 minutes&lt;/strong&gt; keeps your data fresh while you leave the app running&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For huge repos like &lt;code&gt;torvalds/linux&lt;/code&gt; or &lt;code&gt;facebook/react&lt;/code&gt;, you'll get a warning: "⚠ This repo has 182.2k stars, loading may take a while!" &lt;br&gt;
Then you can challenge the AI to a game of Pong.&lt;/p&gt;




&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjgmbwppm0ko2o6qnyfs0.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjgmbwppm0ko2o6qnyfs0.gif" alt="forgeStat demo" width="760" height="493"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Quick Demo Commands
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Interactive TUI - the full experience&lt;/span&gt;
forgeStat ratatui-org/ratatui

&lt;span class="c"&gt;# Try a large repo to see the Pong game while loading!&lt;/span&gt;
forgeStat microsoft/vscode

&lt;span class="c"&gt;# Quick summary for CI pipelines&lt;/span&gt;
forgeStat facebook/react &lt;span class="nt"&gt;--summary&lt;/span&gt;

&lt;span class="c"&gt;# Export data for analysis&lt;/span&gt;
forgeStat microsoft/vscode &lt;span class="nt"&gt;--json&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; vscode-metrics.json

&lt;span class="c"&gt;# Compare two competing projects&lt;/span&gt;
forgeStat react &lt;span class="nt"&gt;--compare&lt;/span&gt; vue

&lt;span class="c"&gt;# Multi-repo dashboard&lt;/span&gt;
forgeStat &lt;span class="nt"&gt;--watchlist&lt;/span&gt; microsoft/vscode,rust-lang/rust
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  The 8 Metric Panels
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbhm5el2zoc02a7pcsgcl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbhm5el2zoc02a7pcsgcl.png" alt="forgeStat's 8 metric panels" width="800" height="515"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Code
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Repository&lt;/strong&gt;: &lt;a href="https://github.com/olaproeis/forgeStat" rel="noopener noreferrer"&gt;https://github.com/olaproeis/forgeStat&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Built with &lt;strong&gt;Rust&lt;/strong&gt; using:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;ratatui&lt;/code&gt; for the terminal UI&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;octocrab&lt;/code&gt; for GitHub API&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;tokio&lt;/code&gt; for async runtime&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;serde&lt;/code&gt; for serialization&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  How I Built It
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The Stack
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;th&gt;Technology&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Language&lt;/td&gt;
&lt;td&gt;Rust 1.74+&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;TUI Framework&lt;/td&gt;
&lt;td&gt;ratatui&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GitHub API&lt;/td&gt;
&lt;td&gt;octocrab + reqwest&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Async Runtime&lt;/td&gt;
&lt;td&gt;tokio&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CLI Parsing&lt;/td&gt;
&lt;td&gt;clap&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Config/Cache&lt;/td&gt;
&lt;td&gt;serde + toml + serde_json&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Architecture
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmivzq06mb0yk7ueiurbd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmivzq06mb0yk7ueiurbd.png" alt="forgeStat architecture" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh5kikidaxdiq47oqmoe6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh5kikidaxdiq47oqmoe6.png" alt="Data flow pipeline" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Technical Decisions
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Cache-First Architecture&lt;/strong&gt;: Data is cached locally with a 15-minute TTL. This enables:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Instant startup when revisiting repos&lt;/li&gt;
&lt;li&gt;Full offline mode support&lt;/li&gt;
&lt;li&gt;Respect for GitHub's rate limits&lt;/li&gt;
&lt;/ul&gt;
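&lt;p&gt;The freshness check at the heart of this is small; here's a minimal sketch of the idea in TypeScript for illustration (forgeStat itself implements it in Rust, and the names here are made up):&lt;/p&gt;

```typescript
// Cache-first metric lookup with a 15-minute TTL, sketched in TypeScript for
// illustration (forgeStat itself implements this in Rust; names are made up).
const TTL_MS = 15 * 60 * 1000;
const cache = new Map(); // repo -> { fetchedAtMs, data }

function cachedOrNull(repo: string, nowMs: number) {
  const entry = cache.get(repo);
  if (!entry) return null;
  const age = nowMs - entry.fetchedAtMs;
  // floor(age / TTL) === 0 exactly when the entry is within one TTL window
  return Math.floor(age / TTL_MS) === 0 ? entry.data : null;
}

function store(repo: string, data: unknown, nowMs: number) {
  cache.set(repo, { fetchedAtMs: nowMs, data });
}
```

&lt;p&gt;On a cache miss or a stale entry, the caller refetches from the API and stores the fresh result; offline mode simply serves whatever is cached regardless of age.&lt;/p&gt;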

&lt;p&gt;&lt;strong&gt;Parallel Fetching&lt;/strong&gt;: All 8 metrics are fetched concurrently using &lt;code&gt;tokio::join!&lt;/code&gt;, making the most of the async runtime.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Modular TUI&lt;/strong&gt;: Each feature gets its own file in &lt;code&gt;tui/app/&lt;/code&gt;, which keeps the codebase maintainable as features grow.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Zero-Config by Default&lt;/strong&gt;: Works out of the box with sensible defaults. Optional GitHub token unlocks higher rate limits and private repo access.&lt;/p&gt;

&lt;h3&gt;
  
  
  Challenges Faced
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Rate Limiting&lt;/strong&gt;: GitHub's 60 req/hour for unauthenticated requests required smart caching and batching strategies.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;TUI Layout Complexity&lt;/strong&gt;: 8 panels that need to work on any terminal size, with resizable borders, zoom states, and a mini-map overlay. Solved with ratatui's constraint system and careful state management.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Large Repository Loading Times&lt;/strong&gt;: Repos with 100k+ stars can take 1-2 minutes to fetch due to GitHub API pagination. Rather than showing a boring spinner, I built an engaging loading screen with a twinkling starfield background, animated progress bars, and a playable &lt;strong&gt;Pong mini-game&lt;/strong&gt; that appears automatically for large repos. Users can pass the time playing while their data loads!&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  What I'd Do Differently
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Start with a more robust testing strategy from day one&lt;/li&gt;
&lt;li&gt;Consider a plugin architecture for custom metrics&lt;/li&gt;
&lt;li&gt;Plan for internationalization earlier (date formats, etc.)&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Try It Out
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Installation (note: not every method is tested; cloning and building from source may be most reliable)
&lt;/h3&gt;

&lt;h3&gt;
  
  
  Windows (PowerShell)
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;iwr https://github.com/OlaProeis/forgeStat/releases/latest/download/install.ps1 -UseBasicParsing | iex
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  macOS / Linux
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl -fsSL https://github.com/OlaProeis/forgeStat/releases/latest/download/install.sh | bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Homebrew (macOS/Linux)
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew tap olaproeis/tap
brew install forgeStat
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Cargo (any platform after clone)
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# from the root of the cloned repository
cargo install --path .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Or download directly from releases:
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/OlaProeis/forgeStat/releases/latest" rel="noopener noreferrer"&gt;https://github.com/OlaProeis/forgeStat/releases/latest&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Closing Thoughts
&lt;/h2&gt;

&lt;p&gt;The terminal is where developers live. Bringing GitHub insights there feels natural. I hope this tool helps others keep their projects healthy and their communities thriving.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Star the repo&lt;/strong&gt; ⭐ if you find it useful, and &lt;strong&gt;open an issue&lt;/strong&gt; if you have ideas for improvement. Note that it's untested on macOS and Linux!&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Thanks for reading! Built with ❤️ for the open-source community.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>weekendchallenge</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Dotfiles Coach Update: Interactive TUI and RAG Search for Shell History</title>
      <dc:creator>Ola Prøis</dc:creator>
      <pubDate>Mon, 16 Feb 2026 11:43:53 +0000</pubDate>
      <link>https://forem.com/olaproeis/dotfiles-coach-update-interactive-tui-and-rag-search-for-shell-history-p06</link>
      <guid>https://forem.com/olaproeis/dotfiles-coach-update-interactive-tui-and-rag-search-for-shell-history-p06</guid>
      <description>&lt;p&gt;A couple of weeks ago I posted about &lt;a href="https://dev.to/olaproeis/dotfiles-coach-your-shell-history-is-full-of-automation-gold-you-just-dont-know-it-yet-4g52"&gt;Dotfiles Coach&lt;/a&gt;, a CLI tool that analyses your shell history and uses GitHub Copilot CLI to generate aliases and functions based on your actual workflow.&lt;/p&gt;

&lt;p&gt;Since then I've added two features that I think make the tool a lot more useful, and I wanted to share them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Search: find forgotten commands by describing them
&lt;/h2&gt;

&lt;p&gt;Ever typed a really useful command three weeks ago and now have no idea what it was? Instead of scrolling through thousands of history lines or piping through grep, you just describe what you're looking for:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;dotfiles-coach search &lt;span class="s2"&gt;"that docker compose command"&lt;/span&gt;
dotfiles-coach search &lt;span class="s2"&gt;"kubernetes pods"&lt;/span&gt;
dotfiles-coach search &lt;span class="s2"&gt;"ffmpeg convert"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It tokenizes your query and every command in your history, then scores them with a weighted mix of exact keyword overlap, fuzzy Levenshtein matching, substring detection, frequency, and recency. The top results come back ranked in a table with scores and usage counts.&lt;/p&gt;
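&lt;p&gt;To make that concrete, here's a minimal sketch of the exact-plus-fuzzy part of the mix (the weights and helpers are illustrative, not the tool's actual values; the real scorer also folds in substring, frequency, and recency signals):&lt;/p&gt;

```typescript
// Illustrative exact + fuzzy scorer. Weights (2.0 / 1.0) are hypothetical,
// not the values dotfiles-coach actually uses.
function levenshtein(a: string, b: string): number {
  // dp[i][j] = edit distance between a[0..i) and b[0..j)
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (const [i, ca] of [...a].entries()) {
    for (const [j, cb] of [...b].entries()) {
      dp[i + 1][j + 1] = Math.min(
        dp[i][j + 1] + 1,               // deletion
        dp[i + 1][j] + 1,               // insertion
        dp[i][j] + (ca === cb ? 0 : 1)  // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

function score(query: string, command: string): number {
  const qTokens = query.toLowerCase().split(/\s+/).filter(Boolean);
  const cTokens = command.toLowerCase().split(/\s+/).filter(Boolean);
  // Exact keyword overlap, weighted heaviest
  const exact = qTokens.filter((t) => cTokens.includes(t)).length;
  // Fuzzy: best Levenshtein similarity per query token, summed
  const fuzzy = qTokens
    .map((t) => Math.max(...cTokens.map((c) => 1 - levenshtein(t, c) / Math.max(t.length, c.length))))
    .reduce((a, b) => a + b, 0);
  return exact * 2.0 + fuzzy * 1.0;
}
```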

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff6jsesjes6fo94vsl05x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff6jsesjes6fo94vsl05x.png" alt="Search Pipeline" width="800" height="441"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The whole thing runs 100% locally. No network, no Copilot needed for the core search. But if you add &lt;code&gt;--explain&lt;/code&gt;, it sends just the top result (scrubbed of secrets) to Copilot for a plain-English explanation of what the command does and when you'd use it.&lt;/p&gt;

&lt;p&gt;This quickly became my favorite feature. It's the kind of thing you don't think you need until you use it once, and then you use it all the time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Interactive TUI: review suggestions before applying
&lt;/h2&gt;

&lt;p&gt;The original version dumped all Copilot suggestions to your terminal at once. That works, but when you have 20+ suggestions it's hard to decide which ones to keep and which to skip.&lt;/p&gt;

&lt;p&gt;Now you can add &lt;code&gt;--interactive&lt;/code&gt; to either &lt;code&gt;suggest&lt;/code&gt; or &lt;code&gt;apply&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;dotfiles-coach suggest &lt;span class="nt"&gt;--interactive&lt;/span&gt;
dotfiles-coach apply &lt;span class="nt"&gt;--interactive&lt;/span&gt; &lt;span class="nt"&gt;--dry-run&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This launches a terminal UI built with ink (React for CLIs) where you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scroll through suggestions one at a time&lt;/li&gt;
&lt;li&gt;Press Enter to mark one for apply, Space to ignore it&lt;/li&gt;
&lt;li&gt;Press &lt;code&gt;e&lt;/code&gt; to open the code in your editor, tweak it, and come back&lt;/li&gt;
&lt;li&gt;Press &lt;code&gt;a&lt;/code&gt; to approve everything at once&lt;/li&gt;
&lt;li&gt;Press &lt;code&gt;q&lt;/code&gt; when you're done&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The TUI re-renders after each editor session so you can keep reviewing. If you're piping output or running in CI, it falls back to non-interactive mode automatically.&lt;/p&gt;

&lt;h2&gt;
  
  
  Updated pipeline
&lt;/h2&gt;

&lt;p&gt;The workflow diagram now includes search as its own branch:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0fwl14d0ai5oinn9bny6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0fwl14d0ai5oinn9bny6.png" alt="Dotfiles Coach Workflow" width="800" height="233"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And the internal architecture shows how search sits alongside the suggest pipeline, sharing the same parser layer but going straight to the tokenizer/scorer without touching Copilot:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcdvtd0re8zzjo0lvo9q2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcdvtd0re8zzjo0lvo9q2.png" alt="Dotfiles Coach Architecture" width="800" height="993"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Updated numbers
&lt;/h2&gt;

&lt;p&gt;The test suite grew quite a bit with these additions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;425 automated tests&lt;/strong&gt; (up from 291), across 22 test files&lt;/li&gt;
&lt;li&gt;102 tests just for the search scorer (tokenization, fuzzy matching, edge cases)&lt;/li&gt;
&lt;li&gt;The interactive TUI has TTY detection and graceful fallback, tested in both modes&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Try it out
&lt;/h2&gt;

&lt;p&gt;Everything works with the bundled sample data, no real history or Copilot subscription needed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/OlaProeis/dotfiles-coach.git
&lt;span class="nb"&gt;cd &lt;/span&gt;dotfiles-coach
npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; npm run build &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; npm &lt;span class="nb"&gt;link&lt;/span&gt;

&lt;span class="c"&gt;# Search (100% local)&lt;/span&gt;
dotfiles-coach search &lt;span class="s2"&gt;"docker"&lt;/span&gt; &lt;span class="nt"&gt;--shell&lt;/span&gt; bash &lt;span class="nt"&gt;--history-file&lt;/span&gt; tests/fixtures/sample_bash_history.txt

&lt;span class="c"&gt;# Suggest with interactive TUI&lt;/span&gt;
dotfiles-coach suggest &lt;span class="nt"&gt;--interactive&lt;/span&gt; &lt;span class="nt"&gt;--shell&lt;/span&gt; bash &lt;span class="nt"&gt;--history-file&lt;/span&gt; tests/fixtures/sample_bash_history.txt &lt;span class="nt"&gt;--min-frequency&lt;/span&gt; 1

&lt;span class="c"&gt;# Apply with interactive review (dry run)&lt;/span&gt;
dotfiles-coach apply &lt;span class="nt"&gt;--interactive&lt;/span&gt; &lt;span class="nt"&gt;--dry-run&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you don't have Copilot installed, set &lt;code&gt;DOTFILES_COACH_USE_MOCK_COPILOT=1&lt;/code&gt; first and the mock client will return realistic sample suggestions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Repo:&lt;/strong&gt; &lt;a href="https://github.com/OlaProeis/dotfiles-coach" rel="noopener noreferrer"&gt;github.com/OlaProeis/dotfiles-coach&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hope these additions make the tool more useful. Happy to answer any questions in the comments.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>githubchallenge</category>
      <category>cli</category>
      <category>githubcopilot</category>
    </item>
    <item>
      <title>Building an AI Code Analyzer with Google AI Studio (And Finishing It in Cursor)</title>
      <dc:creator>Ola Prøis</dc:creator>
      <pubDate>Wed, 11 Feb 2026 18:20:59 +0000</pubDate>
      <link>https://forem.com/olaproeis/building-an-ai-code-analyzer-with-google-ai-studio-and-finishing-it-in-cursor-388g</link>
      <guid>https://forem.com/olaproeis/building-an-ai-code-analyzer-with-google-ai-studio-and-finishing-it-in-cursor-388g</guid>
      <description>&lt;p&gt;&lt;em&gt;This post is my submission for &lt;a href="https://dev.to/deved/build-apps-with-google-ai-studio"&gt;DEV Education Track: Build Apps with Google AI Studio&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Archlyze&lt;/strong&gt; is a browser-only SPA that uses Google Gemini to analyze source code (Rust, Python, JS/TS, Go, and more): it extracts components and dependencies, flags issues with severity, suggests fixes and unit tests, and generates flowcharts/UML/data-flow diagrams via Gemini. We started from a detailed prompt in AI Studio (refined with Perplexity and Gemini), then finished and extended the app in Cursor with folder import, &lt;code&gt;.gitignore&lt;/code&gt; parsing, session history, model selection, and Markdown export.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fspxtrllv30yueqr9xd3k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fspxtrllv30yueqr9xd3k.png" alt="Features" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ljxh1yjx514m0xe8agv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ljxh1yjx514m0xe8agv.png" alt="Screenshot" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note: I ran out of free-tier credits, so I don't have a screenshot with generated images yet; I'll upload one to the GitHub repo when I do. Apparently it can take anywhere from moments to a few weeks to get billing enabled on a Google API account.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://archlyze.vercel.app/" rel="noopener noreferrer"&gt;https://archlyze.vercel.app/&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  My Experience
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Starting in AI Studio.&lt;/strong&gt; We began by drafting the concept and prompt with different models (Perplexity and Gemini), then took it into Google AI Studio to build the first version. We hit a recurring issue early on: &lt;strong&gt;React and tooling versions&lt;/strong&gt;. The generated app kept failing to run or build, and we had to iterate many times (10+ attempts) before we got a working setup.&lt;/p&gt;

&lt;h4&gt;
  
  
  The Prompt:
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Build 'RustFlow Analyzer'—a web app for comprehensive Rust file analysis. Users upload a .rs file (max 500 lines) or paste code. 
Gemini 1.5 Pro performs deep analysis: extract all functions, structs, traits, impls; map dependencies and call relationships; identify ownership patterns, error handling, and common anti-patterns (unnecessary clones, unwrap() abuse, missing lifetimes).

Display results in three panels:

Code View: Original code with syntax highlighting (Monaco editor), clickable line numbers to jump to explanations

Analysis Panel: Collapsible sections for each major code block (functions/structs) with plain-English explanations, detected issues marked with warning/error badges, and best-practice suggestions

Visual Panel: Use Imagen to generate architecture diagrams—function call graphs (boxes + arrows showing invocation flow), struct relationship diagrams (ownership/borrowing visualized with solid/dashed lines), and module dependency trees. Include toggle buttons for diagram types (flowchart, UML, data flow). Style: professional developer docs aesthetic, Rust brand colors (orange #CE422B, dark gray), clean minimalist lines.

Features: file upload (.rs), smart truncation warning if &amp;gt;500 lines, 'Regenerate Diagram' button for each section, export full report as HTML with embedded images, dark/light theme, example files (basic HTTP server, CLI parser, async tokio app). Add 'Share Analysis' to generate unique URL with results cached. Loading states with progress indicators, error handling for invalid Rust syntax or oversized files. Deploy-ready SPA with responsive mobile layout
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Evolving the idea.&lt;/strong&gt; Once the base was stable, we shifted from Rust-only to multi-language support and added a lot of features: folder import, &lt;code&gt;.gitignore&lt;/code&gt;-aware filtering, session history, dependency detection, issue severity, auto-fix, unit test generation, and diagram types. Doing this in Cursor (with the same codebase) felt natural: AI Studio gave us the initial structure and Gemini integration; Cursor helped us refactor, extend, and keep the code consistent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Structured output and “thinking” models.&lt;/strong&gt; The biggest technical takeaway was &lt;strong&gt;relying on a strict JSON schema&lt;/strong&gt; for analysis. Without explicit &lt;code&gt;required&lt;/code&gt; fields and clear descriptions in the &lt;code&gt;responseSchema&lt;/code&gt;, Gemini 2.5 sometimes returned minimal or inconsistent JSON. Once we defined the schema properly, the analysis results became reliable enough to drive the UI (components, issues, dependencies). I also learned that Gemini 2.5’s “thinking” phase means longer response times, so we added &lt;strong&gt;scaled timeouts&lt;/strong&gt; (e.g. 60s for small files, 180s for large ones) and loading states so the app doesn’t feel broken while the model is reasoning.&lt;/p&gt;
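&lt;p&gt;For anyone who hasn't used &lt;code&gt;responseSchema&lt;/code&gt;: the fix amounts to declaring every field the UI depends on as &lt;code&gt;required&lt;/code&gt;, with clear descriptions. A hedged sketch of the shape (field names here are illustrative, not Archlyze's actual schema):&lt;/p&gt;

```typescript
// Illustrative responseSchema shape: every field the UI reads is listed in
// "required" and carries a description. Field names are made up for this sketch.
const analysisSchema = {
  type: "object",
  properties: {
    components: {
      type: "array",
      items: {
        type: "object",
        properties: {
          name: { type: "string", description: "Component identifier" },
          kind: { type: "string", description: "function, struct, class, ..." },
        },
        required: ["name", "kind"],
      },
    },
    issues: {
      type: "array",
      items: {
        type: "object",
        properties: {
          message: { type: "string", description: "Plain-English issue summary" },
          severity: { type: "string", description: "info, warning, or error" },
        },
        required: ["message", "severity"],
      },
    },
  },
  required: ["components", "issues"],
};
```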

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffn7u37jolxvidav6h057.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffn7u37jolxvidav6h057.png" alt="architecture" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Surprises.&lt;/strong&gt; (1) How much the app could do &lt;strong&gt;without a backend&lt;/strong&gt;: an API key in the browser, direct calls to Gemini, and LocalStorage for settings and theme kept the architecture simple. (2) Image generation (&lt;code&gt;gemini-2.5-flash-&lt;/code&gt;…).&lt;/p&gt;

&lt;p&gt;Overall, the track was a solid way to go from “idea + prompt” to a real SPA: AI Studio for fast prototyping and Gemini integration, then Cursor to harden and extend it into something we’d actually use.&lt;/p&gt;

&lt;p&gt;Repo:&lt;br&gt;
&lt;a href="https://github.com/OlaProeis/Archlyze" rel="noopener noreferrer"&gt;https://github.com/OlaProeis/Archlyze&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Try it:&lt;br&gt;
&lt;a href="https://archlyze.vercel.app/" rel="noopener noreferrer"&gt;https://archlyze.vercel.app/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>deved</category>
      <category>learngoogleaistudio</category>
      <category>ai</category>
      <category>gemini</category>
    </item>
    <item>
      <title>Dotfiles Coach: Your Shell History is Full of Automation Gold (You Just Don't Know It Yet)</title>
      <dc:creator>Ola Prøis</dc:creator>
      <pubDate>Sun, 08 Feb 2026 14:21:45 +0000</pubDate>
      <link>https://forem.com/olaproeis/dotfiles-coach-your-shell-history-is-full-of-automation-gold-you-just-dont-know-it-yet-4g52</link>
      <guid>https://forem.com/olaproeis/dotfiles-coach-your-shell-history-is-full-of-automation-gold-you-just-dont-know-it-yet-4g52</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/github-2026-01-21"&gt;GitHub Copilot CLI Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/OlaProeis/dotfiles-coach" rel="noopener noreferrer"&gt;Dotfiles Coach&lt;/a&gt;&lt;/strong&gt; is a CLI tool that mines your shell history for repeated patterns and uses GitHub Copilot CLI to generate smart aliases, functions, and safety improvements -- tailored to &lt;em&gt;your&lt;/em&gt; actual workflow.&lt;/p&gt;

&lt;p&gt;Every developer types the same commands hundreds of times. &lt;code&gt;git add . &amp;amp;&amp;amp; git commit -m "..." &amp;amp;&amp;amp; git push&lt;/code&gt;. &lt;code&gt;docker compose up -d &amp;amp;&amp;amp; docker compose logs -f&lt;/code&gt;. &lt;code&gt;cd ~/projects/thing &amp;amp;&amp;amp; npm run dev&lt;/code&gt;. We all know we &lt;em&gt;should&lt;/em&gt; create aliases and shell functions for these, but who has the time to audit their own history?&lt;/p&gt;

&lt;p&gt;Dotfiles Coach does it for you:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;analyze&lt;/code&gt;&lt;/strong&gt; -- Reads your shell history (Bash, Zsh, or PowerShell) and identifies the most repeated command patterns using frequency analysis with Levenshtein-based grouping. It also flags dangerous commands like &lt;code&gt;rm -rf&lt;/code&gt; without safeguards, &lt;code&gt;chmod 777&lt;/code&gt;, or &lt;code&gt;sudo&lt;/code&gt; with wildcards. This step is 100% local -- no network, no AI.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;suggest&lt;/code&gt;&lt;/strong&gt; -- Takes the top patterns, scrubs all secrets locally, and sends them to &lt;code&gt;gh copilot suggest&lt;/code&gt; to generate shell-specific aliases, functions, and one-liners. This is where Copilot shines -- it understands your workflow context and produces suggestions that actually make sense for &lt;em&gt;your&lt;/em&gt; habits.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;apply&lt;/code&gt;&lt;/strong&gt; -- Writes approved suggestions to a dedicated shell config file (&lt;code&gt;~/.dotfiles_coach_aliases.sh&lt;/code&gt; or &lt;code&gt;~/.dotfiles_coach_profile.ps1&lt;/code&gt;). It creates backups automatically, supports dry-run previews, and &lt;strong&gt;never&lt;/strong&gt; auto-sources anything -- you stay in control.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;report&lt;/code&gt;&lt;/strong&gt; -- Generates a shareable Markdown or JSON report combining analysis results and Copilot suggestions. Great for documentation or team sharing.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Why I built this
&lt;/h3&gt;

&lt;p&gt;I realized I was typing &lt;code&gt;git status&lt;/code&gt;, &lt;code&gt;git add .&lt;/code&gt;, &lt;code&gt;git commit -m&lt;/code&gt; as three separate commands dozens of times a day. I wanted something that looks at &lt;em&gt;my&lt;/em&gt; real behavior and suggests automation that fits &lt;em&gt;me&lt;/em&gt; -- and I wanted Copilot to be the brain behind those suggestions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Privacy-first design
&lt;/h3&gt;

&lt;p&gt;The part I'm most proud of: &lt;strong&gt;mandatory secret scrubbing&lt;/strong&gt;. Before any shell history data touches Copilot, it passes through 13 regex-based filters that catch passwords, API tokens, SSH keys, AWS credentials, GitHub/GitLab/npm tokens, Bearer headers, URLs with embedded credentials, &lt;code&gt;npm config set&lt;/code&gt; auth commands, Base64 blobs, and more. Every match is replaced with &lt;code&gt;[REDACTED]&lt;/code&gt;. This layer cannot be disabled -- it's architecturally enforced, not opt-in.&lt;/p&gt;
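&lt;p&gt;To illustrate the approach (these three regexes are simplified stand-ins, not the shipped filters):&lt;/p&gt;

```typescript
// Illustrative scrubber: three simplified stand-ins for the 13 shipped filters.
const SECRET_PATTERNS = [
  /ghp_[A-Za-z0-9]{36}/g,        // GitHub personal access tokens
  /AKIA[0-9A-Z]{16}/g,           // AWS access key IDs
  /Bearer\s+[A-Za-z0-9._-]+/gi,  // Authorization: Bearer headers
];

function scrubSecrets(line: string): string {
  // Every match is replaced; there is no flag to skip this step.
  return SECRET_PATTERNS.reduce((acc, pattern) => acc.replace(pattern, "[REDACTED]"), line);
}
```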

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7swt7ldlmsisr127nuog.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7swt7ldlmsisr127nuog.png" alt="Privacy Flow" width="800" height="264"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The numbers
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;291 automated tests&lt;/strong&gt; across 20 test files&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;13 secret-scrubbing patterns&lt;/strong&gt; (all unit tested)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-shell support&lt;/strong&gt; -- Bash, Zsh, and PowerShell with auto-detection&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;3-tier response parser&lt;/strong&gt; for Copilot output (JSON fences, raw JSON, regex fallback)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Zero API tokens required&lt;/strong&gt; -- piggybacks on your existing &lt;code&gt;gh auth&lt;/code&gt; session&lt;/li&gt;
&lt;/ul&gt;
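&lt;p&gt;The 3-tier response parser idea can be sketched like this (a simplified illustration, not the tool's actual implementation):&lt;/p&gt;

```typescript
// The 3-tier parse: fenced JSON, then raw JSON, then a regex fallback.
// Simplified illustration; the real parser is more defensive.
const FENCE_RE = new RegExp("`".repeat(3) + "(?:json)?\\n([\\s\\S]*?)" + "`".repeat(3));

function parseCopilotOutput(raw: string): object | null {
  // Tier 1: JSON inside a fenced code block
  const fence = raw.match(FENCE_RE);
  if (fence) {
    try { return JSON.parse(fence[1]); } catch { /* fall through */ }
  }
  // Tier 2: the whole payload as raw JSON
  try { return JSON.parse(raw); } catch { /* fall through */ }
  // Tier 3: regex fallback, taking the outermost brace-delimited span
  const braces = raw.match(/\{[\s\S]*\}/);
  if (braces) {
    try { return JSON.parse(braces[0]); } catch { /* not JSON after all */ }
  }
  return null;
}
```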

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;GitHub repo:&lt;/strong&gt; &lt;a href="https://github.com/OlaProeis/dotfiles-coach" rel="noopener noreferrer"&gt;https://github.com/OlaProeis/dotfiles-coach&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  How it works
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0fwl14d0ai5oinn9bny6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0fwl14d0ai5oinn9bny6.png" alt="Dotfiles Coach Workflow" width="800" height="233"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Quick start
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install&lt;/span&gt;
git clone https://github.com/OlaProeis/dotfiles-coach.git
&lt;span class="nb"&gt;cd &lt;/span&gt;dotfiles-coach
npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; npm run build

&lt;span class="c"&gt;# 1. Analyze your shell history (100% offline)&lt;/span&gt;
node dist/cli.js analyze

&lt;span class="c"&gt;# 2. Get Copilot-powered suggestions&lt;/span&gt;
node dist/cli.js suggest

&lt;span class="c"&gt;# 3. Preview what would be written (dry-run)&lt;/span&gt;
node dist/cli.js apply &lt;span class="nt"&gt;--dry-run&lt;/span&gt;

&lt;span class="c"&gt;# 4. Apply suggestions to a file&lt;/span&gt;
node dist/cli.js apply

&lt;span class="c"&gt;# 5. Generate a report&lt;/span&gt;
node dist/cli.js report &lt;span class="nt"&gt;--output&lt;/span&gt; report.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Try it without Copilot (mock mode)
&lt;/h3&gt;

&lt;p&gt;You can test the full pipeline without a Copilot subscription using bundled fixture data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# PowerShell&lt;/span&gt;
&lt;span class="nv"&gt;$env&lt;/span&gt;:DOTFILES_COACH_USE_MOCK_COPILOT &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"1"&lt;/span&gt;
node dist/cli.js analyze &lt;span class="nt"&gt;--shell&lt;/span&gt; bash &lt;span class="nt"&gt;--history-file&lt;/span&gt; tests/fixtures/sample_bash_history.txt &lt;span class="nt"&gt;--min-frequency&lt;/span&gt; 1
node dist/cli.js suggest &lt;span class="nt"&gt;--shell&lt;/span&gt; bash &lt;span class="nt"&gt;--history-file&lt;/span&gt; tests/fixtures/sample_bash_history.txt &lt;span class="nt"&gt;--min-frequency&lt;/span&gt; 1
node dist/cli.js apply &lt;span class="nt"&gt;--dry-run&lt;/span&gt;

&lt;span class="c"&gt;# Bash/Zsh&lt;/span&gt;
&lt;span class="nv"&gt;DOTFILES_COACH_USE_MOCK_COPILOT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1 node dist/cli.js suggest &lt;span class="nt"&gt;--shell&lt;/span&gt; bash &lt;span class="nt"&gt;--history-file&lt;/span&gt; tests/fixtures/sample_bash_history.txt &lt;span class="nt"&gt;--min-frequency&lt;/span&gt; 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Architecture
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcdvtd0re8zzjo0lvo9q2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcdvtd0re8zzjo0lvo9q2.png" alt="Dotfiles Coach Architecture" width="800" height="993"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The pipeline flows through clearly separated layers: &lt;strong&gt;parsers&lt;/strong&gt; (one per shell) feed into &lt;strong&gt;analyzers&lt;/strong&gt; (frequency + safety), which are scrubbed by the &lt;strong&gt;secret scrubber&lt;/strong&gt;, then passed to the &lt;strong&gt;Copilot client&lt;/strong&gt; (real or mock), and finally formatted by &lt;strong&gt;formatters&lt;/strong&gt; (table, markdown, JSON) or written safely by &lt;strong&gt;file operations&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tech stack
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Area&lt;/th&gt;
&lt;th&gt;Choice&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Runtime&lt;/td&gt;
&lt;td&gt;Node.js 18+ (ESM)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Language&lt;/td&gt;
&lt;td&gt;TypeScript (strict mode)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CLI framework&lt;/td&gt;
&lt;td&gt;&lt;code&gt;commander&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Terminal UI&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;chalk&lt;/code&gt;, &lt;code&gt;ora&lt;/code&gt;, &lt;code&gt;boxen&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Copilot integration&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;execa&lt;/code&gt; wrapping &lt;code&gt;gh copilot suggest&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;String similarity&lt;/td&gt;
&lt;td&gt;&lt;code&gt;fast-levenshtein&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;File I/O&lt;/td&gt;
&lt;td&gt;&lt;code&gt;fs-extra&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Tests&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;vitest&lt;/code&gt; (291 tests)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  My Experience with GitHub Copilot CLI
&lt;/h2&gt;

&lt;h3&gt;
  
  
  How Copilot CLI powers the tool itself
&lt;/h3&gt;

&lt;p&gt;Dotfiles Coach doesn't use an npm SDK or REST API for Copilot -- there isn't one for Copilot CLI. Instead, it wraps &lt;code&gt;gh copilot suggest&lt;/code&gt; as a child process via &lt;code&gt;execa&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;execa&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;gh&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;copilot&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;subcommand&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;timeout&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="nx"&gt;_000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;GH_PROMPT_DISABLED&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;1&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The tool builds structured prompts from your history patterns (frequency data, command sequences, shell type) and sends them to Copilot. Copilot's response is then parsed through a 3-tier strategy:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;JSON extraction from markdown fences&lt;/strong&gt; -- Copilot often wraps structured output in fenced &lt;code&gt;json&lt;/code&gt; code blocks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Raw JSON detection&lt;/strong&gt; -- Sometimes it returns bare JSON arrays&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Regex fallback&lt;/strong&gt; -- For conversational responses, we extract suggestions via pattern matching&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This approach was born out of necessity: Copilot CLI's output format isn't guaranteed to be machine-readable, so the parser had to be resilient.&lt;/p&gt;
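&lt;p&gt;A minimal sketch of that 3-tier fallback (the suggestion shape and the tier-3 alias pattern are my illustrative assumptions, not the tool's actual code):&lt;/p&gt;

```typescript
// Tiered parsing for Copilot CLI output: try strict formats first,
// fall back to pattern matching. (\x60 in the regex is a backtick.)
// The Suggestion shape and tier-3 regex are assumptions for illustration.
type Suggestion = { alias: string; command: string };

function parseCopilotOutput(raw: string): Suggestion[] {
  // Tier 1: JSON wrapped in a markdown code fence.
  const fenced = raw.match(/\x60{3}(?:json)?\s*([\s\S]*?)\x60{3}/);
  if (fenced) {
    try { return JSON.parse(fenced[1]); } catch { /* fall through */ }
  }
  // Tier 2: the whole response is bare JSON.
  try {
    const parsed = JSON.parse(raw);
    if (Array.isArray(parsed)) return parsed;
  } catch { /* fall through */ }
  // Tier 3: regex fallback for conversational answers,
  // e.g. a line like: alias gs="git status"
  const suggestions: Suggestion[] = [];
  for (const m of raw.matchAll(/alias (\w+)="([^"]+)"/g)) {
    suggestions.push({ alias: m[1], command: m[2] });
  }
  return suggestions;
}
```

&lt;p&gt;Each tier only fires if the stricter one above it fails, so well-formed responses take the fast path and chatty ones still yield something usable.&lt;/p&gt;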

&lt;h3&gt;
  
  
  How Copilot CLI helped during development
&lt;/h3&gt;

&lt;p&gt;Beyond being &lt;em&gt;in&lt;/em&gt; the tool, Copilot CLI was my constant companion &lt;em&gt;building&lt;/em&gt; the tool. Some examples:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Designing the secret scrubber:&lt;/strong&gt;&lt;br&gt;
I used &lt;code&gt;gh copilot suggest&lt;/code&gt; to brainstorm regex patterns for detecting secrets in shell history. Copilot caught edge cases I hadn't considered -- like AWS access keys always starting with &lt;code&gt;AKIA&lt;/code&gt;, or GitLab tokens starting with &lt;code&gt;glpat-&lt;/code&gt;. The final scrubber has 13 battle-tested patterns.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Shell compatibility:&lt;/strong&gt;&lt;br&gt;
When implementing PowerShell history parsing (which stores plain-text commands in &lt;code&gt;ConsoleHost_history.txt&lt;/code&gt; via PSReadLine, a different location and convention than Bash/Zsh &lt;code&gt;~/.bash_history&lt;/code&gt;), Copilot CLI helped me understand the format differences and platform-specific path resolution logic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Safety rules:&lt;/strong&gt;&lt;br&gt;
The dangerous pattern detection module flags commands like &lt;code&gt;rm -rf /&lt;/code&gt; without &lt;code&gt;-i&lt;/code&gt;, &lt;code&gt;chmod 777&lt;/code&gt;, &lt;code&gt;sudo&lt;/code&gt; with wildcards, and unquoted variable expansion in &lt;code&gt;rm&lt;/code&gt; commands. Copilot helped me think through edge cases -- like catching &lt;code&gt;rm -rfi&lt;/code&gt; (which &lt;em&gt;does&lt;/em&gt; have &lt;code&gt;-i&lt;/code&gt;) versus &lt;code&gt;rm -rf&lt;/code&gt; (which doesn't).&lt;/p&gt;

&lt;h3&gt;
  
  
  What surprised me
&lt;/h3&gt;

&lt;p&gt;The biggest surprise was how well Copilot CLI handles &lt;em&gt;context&lt;/em&gt;. When I feed it a set of command patterns like:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git add . (47 times)
git commit -m "..." (45 times)
git push origin main (38 times)
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;It doesn't just suggest three separate aliases -- it recognizes the &lt;em&gt;sequence&lt;/em&gt; and suggests a single &lt;code&gt;gcp&lt;/code&gt; function that does all three with a commit message argument. That kind of workflow-aware intelligence is what makes this tool genuinely useful rather than just a fancy &lt;code&gt;alias&lt;/code&gt; generator.&lt;/p&gt;

&lt;h3&gt;
  
  
  The testing story
&lt;/h3&gt;

&lt;p&gt;With 291 tests across 20 files, testability was a core design goal. The &lt;code&gt;MockCopilotClient&lt;/code&gt; returns canned fixture data, which means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Every test runs without network access&lt;/li&gt;
&lt;li&gt;CI/CD doesn't need Copilot credentials&lt;/li&gt;
&lt;li&gt;Contributors can run the full suite without a subscription&lt;/li&gt;
&lt;li&gt;The mock is toggled by a single env var: &lt;code&gt;DOTFILES_COACH_USE_MOCK_COPILOT=1&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This pattern made development incredibly fast -- I could iterate on the suggestion formatting, caching, and apply logic without hitting Copilot every time.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Try it out:&lt;/strong&gt; &lt;a href="https://github.com/OlaProeis/dotfiles-coach" rel="noopener noreferrer"&gt;github.com/OlaProeis/dotfiles-coach&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Your shell history is full of automation gold. Let Copilot help you find it.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>githubchallenge</category>
      <category>cli</category>
      <category>githubcopilot</category>
    </item>
    <item>
      <title>I Built a Full Project Management App in 2 days Using Claude 4.6</title>
      <dc:creator>Ola Prøis</dc:creator>
      <pubDate>Sat, 07 Feb 2026 11:40:56 +0000</pubDate>
      <link>https://forem.com/olaproeis/i-built-a-full-project-management-app-in-2-days-using-claude-47-1e1g</link>
      <guid>https://forem.com/olaproeis/i-built-a-full-project-management-app-in-2-days-using-claude-47-1e1g</guid>
<description>&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; I built Ironpad, a local-first, file-based project &amp;amp; knowledge management system using Rust, Vue 3, and AI-assisted development in about 2 days. The code is open source. Here's how it went.&lt;/p&gt;




&lt;h2&gt;
  
  
  What is Ironpad?
&lt;/h2&gt;

&lt;p&gt;Ironpad is a personal project management and note-taking system where &lt;strong&gt;files are the database&lt;/strong&gt;. Every note, task, and project is a plain Markdown file with YAML frontmatter. No cloud, no vendor lock-in, no proprietary formats.&lt;/p&gt;
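&lt;p&gt;Concretely, a task might look like this on disk (the exact field names here are hypothetical, shown only to illustrate the frontmatter idea):&lt;/p&gt;

```markdown
---
id: task-0042
title: Write release notes
status: open
due: 2026-02-14
tags: [docs]
created: 2026-02-07T11:40:00Z
---

Draft the v0.2.0 release notes and link the changelog.
```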

&lt;p&gt;You can edit your data in Ironpad's browser UI, or open the same files in VS Code, Obsidian, Vim, whatever you prefer. Changes sync in real-time via WebSocket. Everything is versioned automatically with Git.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fywer4sqbolpq8dffd24h.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fywer4sqbolpq8dffd24h.jpg" alt="Ironpad Screenshot" width="800" height="465"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Key features:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;WYSIWYG Markdown editor&lt;/strong&gt; (Milkdown, ProseMirror-based) with formatting toolbar&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Project management&lt;/strong&gt; with tasks, subtasks, tags, due dates, and recurrence&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Calendar view&lt;/strong&gt; with color-coded task urgency&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dashboard&lt;/strong&gt; showing all projects with active task summaries&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Git integration&lt;/strong&gt; — auto-commit, diff viewer, push/fetch, conflict detection&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real-time sync&lt;/strong&gt; — edit in VS Code, see changes instantly in the browser&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Full-text search&lt;/strong&gt; powered by ripgrep (Ctrl+K)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dark theme&lt;/strong&gt; by default, 5 MB binary, ~20 MB RAM, sub-second startup&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;GitHub:&lt;/strong&gt; &lt;a href="https://github.com/OlaProeis/ironPad" rel="noopener noreferrer"&gt;github.com/OlaProeis/ironPad&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  The Tech Stack
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fty71yugr0uz1he91tjl7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fty71yugr0uz1he91tjl7.png" alt="Tech stack diagram" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Layer&lt;/th&gt;
&lt;th&gt;Technology&lt;/th&gt;
&lt;th&gt;Why&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Backend&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Rust + Axum 0.8 + Tokio&lt;/td&gt;
&lt;td&gt;Performance, safety, tiny binary&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Frontend&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Vue 3 + Vite + TypeScript&lt;/td&gt;
&lt;td&gt;Composition API, fast dev experience&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Editor&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Milkdown (ProseMirror)&lt;/td&gt;
&lt;td&gt;WYSIWYG markdown rendering&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;State&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Pinia&lt;/td&gt;
&lt;td&gt;Clean, minimal state management&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Data&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Markdown + YAML frontmatter&lt;/td&gt;
&lt;td&gt;Human-readable, editor-agnostic&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Version Control&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Git (via git2 crate)&lt;/td&gt;
&lt;td&gt;Automatic history for everything&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Search&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;ripgrep&lt;/td&gt;
&lt;td&gt;Battle-tested, sub-100ms results&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Real-time&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;WebSocket&lt;/td&gt;
&lt;td&gt;Instant sync with external editors&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Why Rust instead of Electron?
&lt;/h3&gt;

&lt;p&gt;This was a deliberate choice. Here's the comparison:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flbm62d2rgvaaa0tlhqvi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flbm62d2rgvaaa0tlhqvi.png" alt="Electron vs Ironpad comparison" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Electron App&lt;/th&gt;
&lt;th&gt;Ironpad (Rust)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Bundle size&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;150–300 MB&lt;/td&gt;
&lt;td&gt;~5 MB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;RAM usage&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;200–500 MB&lt;/td&gt;
&lt;td&gt;~20 MB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Startup&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;2–5 seconds&lt;/td&gt;
&lt;td&gt;&amp;lt; 500ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Browser&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Bundled Chromium&lt;/td&gt;
&lt;td&gt;Your system browser&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Distribution&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Complex installer&lt;/td&gt;
&lt;td&gt;Single executable&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Every user already has a browser. Why bundle another one?&lt;/p&gt;

&lt;p&gt;The Rust backend serves an API; the Vue frontend runs in whatever browser you already use. Double-click the executable, it opens your browser, you're working. That's it.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Development Process: Built With AI
&lt;/h2&gt;

&lt;p&gt;Here's where it gets interesting. &lt;strong&gt;Ironpad was built entirely using AI-assisted development.&lt;/strong&gt; Not just autocomplete: the architecture, the PRD, the implementation, the debugging. All of it.&lt;/p&gt;

&lt;p&gt;I call the approach &lt;strong&gt;Open Method&lt;/strong&gt;: not just open source code, but an open development process. It's documented in the repo under &lt;a href="https://github.com/OlaProeis/ironPad/tree/main/docs/ai-workflow" rel="noopener noreferrer"&gt;&lt;code&gt;docs/ai-workflow/&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg5zcud539ah9vsn1tp7e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg5zcud539ah9vsn1tp7e.png" alt="AI development workflow" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The 6-Phase Workflow
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Phase 1: Multi-AI Consultation&lt;/strong&gt;&lt;br&gt;
Before writing a single line of code, I discussed the idea with multiple AI models: Claude for architecture, Perplexity for library research, Gemini for second opinions. Five minutes getting different perspectives saves hours of rework.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Example:&lt;/em&gt; When designing the task system, one model suggested storing tasks as checkboxes in a single &lt;code&gt;tasks.md&lt;/code&gt; file. Another pointed out that individual task files with frontmatter would be more flexible and avoid concurrent edit conflicts. Individual files were the right call.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Phase 2: PRD Creation&lt;/strong&gt;&lt;br&gt;
We wrote a detailed Product Requirements Document covering everything: features, API design, data models, edge cases, and explicitly what's &lt;em&gt;not&lt;/em&gt; in scope. The PRD went through 3 versions, incorporating feedback about concurrency control, file watching, git conflict handling, and frontmatter automation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Phase 3: Task Decomposition&lt;/strong&gt;&lt;br&gt;
This was a lighter project with a new model, so I decided to skip Task Master and just write a checklist document to test how much the AI could handle in one go. I was impressed!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Phase 4: Context Loading&lt;/strong&gt;&lt;br&gt;
AI models have training cutoffs, so I used:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Context7&lt;/strong&gt; (MCP tool) to pull current documentation for Axum, Vue 3, and Milkdown&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ai-context.md&lt;/strong&gt; — a lean ~100-line cheat sheet telling the AI how code should fit the codebase&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Phase 5: Implementation&lt;/strong&gt;&lt;br&gt;
Build features in focused sessions. Test. Update checklist. Repeat.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Phase 6: Verification&lt;/strong&gt;&lt;br&gt;
The AI writes the code. I verify the product. Run it, click the buttons, try the edge cases. Never trust "this should work."&lt;/p&gt;
&lt;h3&gt;
  
  
  Tools Used
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;Role&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Cursor IDE&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Primary development environment (VS Code fork with AI integration)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Claude Opus 4.5/4.6&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Architecture, implementation, debugging&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Perplexity AI&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Library research, version checking&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Google Gemini&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Second opinions, catching blind spots&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;strong&gt;Context7&lt;/strong&gt; (MCP)&lt;/td&gt;
&lt;td&gt;Up-to-date library documentation&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;


&lt;h2&gt;
  
  
  The 200K → 1M Context Window Shift
&lt;/h2&gt;

&lt;p&gt;Midway through development, Claude's context window jumped from 200K to 1M tokens. This was the single biggest change in the project's workflow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvpzebclpux52nx9yaqi8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvpzebclpux52nx9yaqi8.png" alt="Context window comparison" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Before: 200K tokens (Claude Opus 4.5)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Could hold ~3-5 files at once&lt;/li&gt;
&lt;li&gt;Had to split features into micro-tasks&lt;/li&gt;
&lt;li&gt;Required handover documents between every task&lt;/li&gt;
&lt;li&gt;Cross-file bugs were hard to find&lt;/li&gt;
&lt;li&gt;~15-20 min overhead per task (context setup)&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  After: 1M tokens (Claude Opus 4.6)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Entire codebase (80+ files) fits in one context&lt;/li&gt;
&lt;li&gt;Full features implemented in single sessions&lt;/li&gt;
&lt;li&gt;Handovers only needed between days&lt;/li&gt;
&lt;li&gt;Cross-file bugs found automatically&lt;/li&gt;
&lt;li&gt;~0 min overhead per task&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  The Codebase Audit
&lt;/h3&gt;

&lt;p&gt;The clearest demonstration: I loaded the entire Ironpad codebase into a single context and asked "what's wrong?"&lt;/p&gt;

&lt;p&gt;The AI found &lt;strong&gt;16 issues&lt;/strong&gt;, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Auto-commit silently broken&lt;/strong&gt; — a flag never set to &lt;code&gt;true&lt;/code&gt; anywhere. Finding this required reading &lt;code&gt;main.rs&lt;/code&gt;, &lt;code&gt;git.rs&lt;/code&gt;, and every route handler simultaneously.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Operator precedence bug&lt;/strong&gt; — &lt;code&gt;0 &amp;gt; 0&lt;/code&gt; evaluated before &lt;code&gt;??&lt;/code&gt; in JavaScript&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Missing atomic writes&lt;/strong&gt; — only 1 of 8 write paths used the safe atomic pattern&lt;/li&gt;
&lt;/ul&gt;
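&lt;p&gt;The atomic-write pattern the audit was checking for can be sketched like this. Ironpad's backend is Rust, but the idea is identical in Node/TypeScript: write a temp file first, then rename it over the target, since rename is atomic on POSIX filesystems and a crash mid-write never leaves a half-written note on disk.&lt;/p&gt;

```typescript
// Atomic write: a reader never observes a partially written file,
// because rename() replaces the target in a single step.
import { renameSync, writeFileSync, readFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

function atomicWrite(path: string, contents: string): void {
  const tmp = path + ".tmp";
  writeFileSync(tmp, contents); // a crash here leaves the target untouched
  renameSync(tmp, path);        // atomically replaces the target
}

// Usage: write then read back a note in the system temp directory.
const notePath = join(tmpdir(), "ironpad-demo-note.md");
atomicWrite(notePath, "# Demo note\n");
const roundTrip = readFileSync(notePath, "utf8");
```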

&lt;p&gt;14 of 16 issues were fixed in a single session. Zero compilation errors introduced.&lt;/p&gt;

&lt;p&gt;This kind of comprehensive audit simply wasn't possible at 200K tokens.&lt;/p&gt;


&lt;h2&gt;
  
  
  What Worked (and What Didn't)
&lt;/h2&gt;
&lt;h3&gt;
  
  
  What worked well
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. PRD-first development&lt;/strong&gt;&lt;br&gt;
The single highest-leverage activity. The AI produces dramatically better code when it knows exactly what success looks like. Time spent on the PRD pays off 10x during implementation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Rust's strict compiler&lt;/strong&gt;&lt;br&gt;
Rust is excellent for AI-assisted development because the compiler catches entire categories of bugs before runtime. With dynamic languages, bugs hide until production. With Rust, &lt;code&gt;cargo check&lt;/code&gt; is a mechanical verification pass that eliminates memory safety, type mismatches, and missing error handling in one step.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. The ai-context.md pattern&lt;/strong&gt;&lt;br&gt;
A lean (~100 line) architectural cheat sheet that tells the AI how to write code for this specific codebase. Without it, the AI invents new patterns. With it, code consistently matches existing conventions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Fresh chats over long conversations&lt;/strong&gt;&lt;br&gt;
Context accumulates noise. By the third task in a single chat, the AI references irrelevant earlier context. Starting fresh with a focused handover produced consistently better results.&lt;/p&gt;
&lt;h3&gt;
  
  
  What didn't work
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Trusting "this should work"&lt;/strong&gt;&lt;br&gt;
The AI confidently says this every single time. Without exception. Early on I'd take its word and move on. Then things would break two features later.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Fix:&lt;/em&gt; Test everything yourself. Click the buttons. Try the edge cases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Vague requirements&lt;/strong&gt;&lt;br&gt;
"Add search" produces mediocre results. "Add full-text search with ripgrep, triggered by Ctrl+K, showing filename and matching line with context, limited to 5 matches per file" produces excellent results.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Over-engineering&lt;/strong&gt;&lt;br&gt;
The AI tends to add abstractions and generalization you don't need yet. It builds for a future that may never come.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Fix:&lt;/em&gt; Explicitly state YAGNI. Call it out. "Simplify this" works surprisingly well.&lt;/p&gt;


&lt;h2&gt;
  
  
  The Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbf9z7igejqklg2gg1ipp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbf9z7igejqklg2gg1ipp.png" alt="Architecture diagram" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The design is intentionally simple:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;User launches executable
         ↓
   Rust Backend (Axum)
   ├── REST API (notes, projects, tasks, git, search)
   ├── WebSocket server (real-time sync)
   ├── File watcher (external edit detection)
   └── Git auto-commit (60s batching)
         ↓
   Vue 3 Frontend (in your browser)
   ├── Milkdown WYSIWYG editor
   ├── Dashboard, Calendar, Task views
   ├── Pinia state management
   └── WebSocket client
         ↓
   Plain Markdown files on disk
   (editable with any tool)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Core design decisions:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Files are the database.&lt;/strong&gt; No SQLite, no IndexedDB. The filesystem is the source of truth.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backend owns metadata.&lt;/strong&gt; IDs, timestamps, and frontmatter are auto-managed. Users never manually edit metadata.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;External editing is a first-class citizen.&lt;/strong&gt; The file watcher detects changes from VS Code/Obsidian/Vim and syncs them to the browser UI in real-time via WebSocket.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Git for everything.&lt;/strong&gt; Auto-commit every 60 seconds, manual commit with custom messages, full diff viewer built into the UI.&lt;/li&gt;
&lt;/ul&gt;
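&lt;p&gt;The external-edit sync above can be sketched from the browser side. The &lt;code&gt;/ws&lt;/code&gt; endpoint is real, but the message shape &lt;code&gt;{ type, path }&lt;/code&gt;, the event name, and the port are my assumptions; the handler is kept pure so it can be tested without opening a socket:&lt;/p&gt;

```typescript
// Decide whether the note currently open in the editor should reload
// when the backend's file watcher reports a change.
type FileEvent = { type: string; path: string };

function shouldReload(raw: string, openPath: string): boolean {
  const event: FileEvent = JSON.parse(raw);
  if (event.type !== "file_changed") return false;
  return event.path === openPath;
}

// Wiring, in the browser (illustrative):
//   const ws = new WebSocket("ws://localhost:3000/ws");
//   ws.onmessage = (msg) => {
//     if (shouldReload(msg.data, currentNotePath)) refetchNote();
//   };
```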

&lt;h3&gt;
  
  
  API surface (29 endpoints):
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Notes:       GET/POST /api/notes, GET/PUT/DEL /api/notes/:id
Projects:    GET/POST /api/projects, GET/PUT /api/projects/:id
Tasks:       GET/POST /api/projects/:id/tasks, GET/PUT/DEL per task
             PUT toggle, PUT meta, GET /api/tasks (cross-project)
Daily notes: GET/POST /api/daily, /api/daily/today, /api/daily/:date
Assets:      POST upload, GET serve
Search:      GET /api/search?q=
Git:         status, commit, push, fetch, log, diff, remote, conflicts
WebSocket:   /ws (real-time file change notifications)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  The Future of Ironpad
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzs28uuhvzo4a99zhbfzb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzs28uuhvzo4a99zhbfzb.png" alt="Roadmap" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Coming in v0.2.0
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Task comments &amp;amp; activity log&lt;/strong&gt; — date-stamped entries per task, with the latest comment shown as a summary in the task list&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Recurring tasks on calendar&lt;/strong&gt; — daily/weekly/monthly tasks expanded across the calendar grid&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  On the horizon (v0.3.x)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Calendar drag-and-drop rescheduling&lt;/li&gt;
&lt;li&gt;Week and day calendar views&lt;/li&gt;
&lt;li&gt;Sort task list by due date or priority&lt;/li&gt;
&lt;li&gt;Improved overdue indicators&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Longer term
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Quick-add task from anywhere&lt;/li&gt;
&lt;li&gt;Bulk task actions&lt;/li&gt;
&lt;li&gt;Task templates&lt;/li&gt;
&lt;li&gt;Cross-project tag filtering&lt;/li&gt;
&lt;li&gt;Kanban board view&lt;/li&gt;
&lt;li&gt;Backlinks between notes&lt;/li&gt;
&lt;li&gt;Graph view of note connections&lt;/li&gt;
&lt;li&gt;Export to PDF/HTML&lt;/li&gt;
&lt;li&gt;Custom themes&lt;/li&gt;
&lt;li&gt;Mobile-friendly responsive layout&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What will NOT happen
&lt;/h3&gt;

&lt;p&gt;Ironpad will stay local-first. No cloud sync service, no user accounts, no SaaS. If you want remote access, push your data folder to a Git remote. That's by design, not a missing feature.&lt;/p&gt;




&lt;h2&gt;
  
  
  Try It / Contribute
&lt;/h2&gt;

&lt;p&gt;Ironpad is MIT licensed and open source.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Quick start:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Download from &lt;a href="https://github.com/OlaProeis/ironPad/releases" rel="noopener noreferrer"&gt;Releases&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Run the executable&lt;/li&gt;
&lt;li&gt;Your browser opens — start working&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Or build from source:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/OlaProeis/ironPad.git
&lt;span class="nb"&gt;cd &lt;/span&gt;ironPad/backend &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; cargo run    &lt;span class="c"&gt;# API server&lt;/span&gt;
&lt;span class="nb"&gt;cd &lt;/span&gt;ironPad/frontend &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; npm run dev &lt;span class="c"&gt;# Dev server&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The complete AI development workflow documentation is in &lt;a href="https://github.com/OlaProeis/ironPad/tree/main/docs/ai-workflow" rel="noopener noreferrer"&gt;&lt;code&gt;docs/ai-workflow/&lt;/code&gt;&lt;/a&gt; — including the PRD, method, tools, and lessons learned.&lt;/p&gt;

&lt;p&gt;If you have ideas or find bugs, &lt;a href="https://github.com/OlaProeis/ironPad/issues" rel="noopener noreferrer"&gt;open an issue&lt;/a&gt;. PRs welcome.&lt;/p&gt;




&lt;h2&gt;
  
  
  Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GitHub:&lt;/strong&gt; &lt;a href="https://github.com/OlaProeis/ironPad" rel="noopener noreferrer"&gt;github.com/OlaProeis/ironPad&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Previous article:&lt;/strong&gt; &lt;a href="https://dev.to/olaproeis/the-ai-development-workflow-i-actually-use-549i"&gt;The AI Development Workflow I Actually Use&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI workflow docs:&lt;/strong&gt; &lt;a href="https://github.com/OlaProeis/ironPad/tree/main/docs/ai-workflow" rel="noopener noreferrer"&gt;docs/ai-workflow/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;em&gt;Built with Rust, Vue, and a lot of AI conversations. The tools keep getting better, but the process of using them well still matters.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>showdev</category>
      <category>rust</category>
      <category>productivity</category>
      <category>opensource</category>
    </item>
    <item>
      <title>From 200K to 1M: How Claude Opus 4.6 Changed My AI Development Workflow Overnight</title>
      <dc:creator>Ola Prøis</dc:creator>
      <pubDate>Fri, 06 Feb 2026 00:15:11 +0000</pubDate>
      <link>https://forem.com/olaproeis/from-200k-to-1m-how-claude-opus-46-changed-my-ai-development-workflow-overnight-36gl</link>
      <guid>https://forem.com/olaproeis/from-200k-to-1m-how-claude-opus-46-changed-my-ai-development-workflow-overnight-36gl</guid>
      <description>&lt;p&gt;&lt;em&gt;A follow-up to &lt;a href="https://dev.to/olaproeis/the-ai-development-workflow-i-actually-use-549i"&gt;The AI Development Workflow I Actually Use&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;I wrote about my AI development workflow a couple of weeks ago. Task Master for structured tasks, Context7 for current docs, handover documents between fresh chats, multiple AI perspectives before coding. That workflow shipped working software.&lt;/p&gt;

&lt;p&gt;Today, a significant part of that workflow became optional.&lt;/p&gt;

&lt;p&gt;Claude Opus 4.6 launched in Cursor on February 5th, 2026, with a 1 million token context window. I've been using Opus 4.5 with its 200K limit for months. The jump to 1M isn't an incremental improvement. It changes what's possible in a single conversation.&lt;/p&gt;

&lt;p&gt;This is what happened when I tested it on a real project.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Project: ironPad
&lt;/h2&gt;

&lt;p&gt;ironPad is a local-first, file-based project management system I've been building with AI. Rust backend (Axum), Vue 3 frontend, markdown files as the database, Git integration for versioning. It's a real application, not a demo.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tech stack:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Backend&lt;/strong&gt;: Rust, Axum 0.8, Tokio, git2, notify (file watching)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Frontend&lt;/strong&gt;: Vue 3, Vite, TypeScript, Pinia, Milkdown (WYSIWYG editor)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data&lt;/strong&gt;: Plain Markdown files with YAML frontmatter&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real-time&lt;/strong&gt;: WebSocket sync between UI and filesystem&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The codebase has around 80 files across backend and frontend. Not massive, but too large for 200K context.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Old Way: 200K Context
&lt;/h2&gt;

&lt;p&gt;With Opus 4.5 and 200K tokens, my workflow for this project looked like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Break big features into 3-5 tasks&lt;/strong&gt; — because the AI can only hold a few files at once&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Write handover documents&lt;/strong&gt; between each chat — so the next session knows what happened&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Carefully select which files to show&lt;/strong&gt; — can't load everything, so I'd pick the 3-5 most relevant files&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Repeat context-setting&lt;/strong&gt; every session — paste the handover, re-explain the architecture, point to the right files&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It worked. I shipped features. But the handover system carried friction: it was my solution to a constraint. A good solution, but still a workaround.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fplxw53j45ewbx1g0dfsh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fplxw53j45ewbx1g0dfsh.png" alt="AI context" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  The New Way: 1M Context
&lt;/h2&gt;

&lt;p&gt;Today I opened a fresh chat with Opus 4.6 and said: &lt;em&gt;"Load the entire codebase into your context and analyze it."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;That's it. No carefully selected files. No handover document. No context-setting preamble.&lt;/p&gt;

&lt;p&gt;The AI proceeded to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;List the entire project structure&lt;/strong&gt; — every directory, every file&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Read every single source file&lt;/strong&gt; — all Rust backend code, all Vue components, all stores, all configs, all documentation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hold all of it simultaneously&lt;/strong&gt; — ~80 files, thousands of lines of code, across two languages and multiple frameworks&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Then I asked: &lt;em&gt;"Are there any bugs or improvements we should make?"&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  What It Found
&lt;/h3&gt;

&lt;p&gt;The AI identified &lt;strong&gt;16 issues&lt;/strong&gt; across the entire codebase. Not surface-level stuff. Deep, cross-file bugs that required understanding how multiple components interact:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Real bugs:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Auto-commit was silently broken&lt;/strong&gt; — The background task checked a &lt;code&gt;pending_changes&lt;/code&gt; flag, but nothing in the entire codebase ever set it to &lt;code&gt;true&lt;/code&gt;. Auto-commits never fired. This is the kind of bug that requires reading &lt;code&gt;main.rs&lt;/code&gt;, &lt;code&gt;git.rs&lt;/code&gt;, and every route handler to spot. No single file reveals the problem.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;JavaScript operator precedence bug&lt;/strong&gt; — &lt;code&gt;remote.value?.ahead ?? 0 &amp;gt; 0&lt;/code&gt; parses as &lt;code&gt;remote.value?.ahead ?? (0 &amp;gt; 0)&lt;/code&gt; because &lt;code&gt;??&lt;/code&gt; binds more loosely than &lt;code&gt;&amp;gt;&lt;/code&gt;, so the push/pull buttons showed the wrong state.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Port binding race condition&lt;/strong&gt; — The server checked if a port was available, dropped the connection, then tried to bind again. Another process could grab the port in between.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Own saves triggering "external edit" dialogs&lt;/strong&gt; — Only one of eight write paths called &lt;code&gt;mark_file_saved()&lt;/code&gt;. The file watcher would detect the app's own saves and pop up "File changed externally. Reload?" for task and project saves.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Architectural improvements:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Non-atomic writes risking data corruption in 3 route files&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;confirm()&lt;/code&gt; blocking the UI thread&lt;/li&gt;
&lt;li&gt;WebSocket reconnect using fixed delay instead of exponential backoff&lt;/li&gt;
&lt;li&gt;120 lines of duplicated task parsing logic&lt;/li&gt;
&lt;li&gt;Missing CORS middleware&lt;/li&gt;
&lt;li&gt;No path traversal validation on asset endpoints&lt;/li&gt;
&lt;li&gt;Debug &lt;code&gt;console.log&lt;/code&gt; left in production code&lt;/li&gt;
&lt;/ul&gt;
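
&lt;p&gt;For context, the non-atomic-write fix follows a standard pattern. The sketch below is mine, not ironPad's actual &lt;code&gt;atomic_write()&lt;/code&gt;: write to a temporary file in the same directory, flush, then rename over the target.&lt;/p&gt;

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

// Sketch of the atomic-write pattern (not ironPad's actual code).
// rename() is atomic on the same filesystem, so a crash mid-write
// leaves the old file intact instead of a truncated one.
fn atomic_write(path: &Path, contents: &[u8]) -> std::io::Result<()> {
    let tmp = path.with_extension("tmp");
    let mut file = fs::File::create(&tmp)?;
    file.write_all(contents)?;
    file.sync_all()?; // ensure bytes hit disk before the rename
    fs::rename(&tmp, path)?;
    Ok(())
}
```

&lt;p&gt;Because readers only ever see the old file or the new one, the file watcher and any concurrent reads never observe a half-written document.&lt;/p&gt;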
&lt;h3&gt;
  
  
  What It Fixed
&lt;/h3&gt;

&lt;p&gt;I said: &lt;em&gt;"Can you fix all of these please?"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In one session, the AI:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Rewrote the auto-commit system to simply try committing every 60 seconds (the existing &lt;code&gt;commit_all()&lt;/code&gt; already handled "no changes" gracefully)&lt;/li&gt;
&lt;li&gt;Fixed the port binding by returning the &lt;code&gt;TcpListener&lt;/code&gt; directly instead of dropping and rebinding&lt;/li&gt;
&lt;li&gt;Made &lt;code&gt;atomic_write()&lt;/code&gt; public and switched all write paths to use it (which also solved the &lt;code&gt;mark_file_saved()&lt;/code&gt; problem automatically)&lt;/li&gt;
&lt;li&gt;Added frontmatter helper functions and deduplicated the task parsing code&lt;/li&gt;
&lt;li&gt;Replaced the blocking &lt;code&gt;confirm()&lt;/code&gt; with a non-blocking notification banner&lt;/li&gt;
&lt;li&gt;Added CORS, path validation, exponential backoff&lt;/li&gt;
&lt;li&gt;Fixed the operator precedence bug&lt;/li&gt;
&lt;/ul&gt;
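
&lt;p&gt;The port fix is the textbook repair for a check-then-bind race: keep the listener you already bound instead of dropping it and binding again. A minimal sketch (my own helper name, not ironPad's):&lt;/p&gt;

```rust
use std::net::TcpListener;

// Race-free port selection (sketch): bind once and hand the listener
// to the server. Probing a port, dropping the socket, and rebinding
// leaves a window where another process can take the port.
fn bind_first_free(ports: impl IntoIterator<Item = u16>) -> Option<TcpListener> {
    ports
        .into_iter()
        .find_map(|p| TcpListener::bind(("127.0.0.1", p)).ok())
}
```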

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F88t61gsir0fsnqz7vpub.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F88t61gsir0fsnqz7vpub.png" alt="Bug-fixes graph" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Result:&lt;/strong&gt; &lt;code&gt;cargo check&lt;/code&gt; passes. Zero lint errors on the frontend. 14 issues fixed, 2 intentionally deferred (one was a large library migration, the other a minor constant duplication across files).&lt;/p&gt;

&lt;p&gt;This was done in a single conversation. No handovers. No task splitting. No lost context.&lt;/p&gt;


&lt;h2&gt;
  
  
  What Actually Changed
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Before: The Handover Tax
&lt;/h3&gt;

&lt;p&gt;With 200K context, every larger task or change carried overhead: it had to be split into smaller tasks, each with its own context-setting.&lt;/p&gt;

&lt;p&gt;That overhead was the cost of the constraint. A good handover system made it manageable, but it was never free.&lt;/p&gt;
&lt;h3&gt;
  
  
  After: Direct Work
&lt;/h3&gt;

&lt;p&gt;With 1M context, the full codebase audit looked like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Time for entire audit + fixes:
  Loading codebase:     ~2 min (AI reads all files)
  Analysis:             ~3 min (AI identifies 16 issues)
  Fixing all issues:    ~15 min (AI applies all fixes)
  Verification:         ~1 min (cargo check + lint)

  Total:                ~20 min
  Overhead:             ~0 min
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The same work with 200K context would have been 5+ separate sessions, each needing its own handover, each limited to the files it could see at once. Some of the cross-file bugs (like the auto-commit issue) might never have been found because no single session would have had both &lt;code&gt;main.rs&lt;/code&gt; and &lt;code&gt;git.rs&lt;/code&gt; and all the route handlers in context simultaneously.&lt;/p&gt;




&lt;h2&gt;
  
  
  Does This Kill the Handover Workflow?
&lt;/h2&gt;

&lt;p&gt;No. But it changes when you need it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Still valuable:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Collaborating with someone who needs to understand what you've done&lt;/li&gt;
&lt;li&gt;Documenting decisions for your future self&lt;/li&gt;
&lt;li&gt;Projects larger than 1M tokens&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;No longer necessary:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Splitting a feature into artificial micro-tasks just to fit context&lt;/li&gt;
&lt;li&gt;Writing handovers between closely related tasks&lt;/li&gt;
&lt;li&gt;Carefully curating which files the AI can see&lt;/li&gt;
&lt;li&gt;Re-explaining architecture every session&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The handover system goes from "required for every task" to "useful for session boundaries." That's a big shift.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Broader Pattern
&lt;/h2&gt;

&lt;p&gt;What I've noticed building ironPad is that each AI capability jump doesn't just make existing tasks faster, it enables tasks that weren't practical before.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A full codebase audit wasn't practical at 200K.&lt;/strong&gt; You could audit individual files, but finding bugs that span the entire system required a human to manually trace connections across files and then describe them to the AI. Now the AI just sees everything.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cross-cutting refactors weren't practical at 200K.&lt;/strong&gt; Changing how atomic writes work across 6 files, while also updating the file watcher integration, while also ensuring frontmatter helpers are available everywhere, that's a single coherent change when you can see all the files. At 200K, it's 3-4 sessions with risk of inconsistency between them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architecture-level reasoning wasn't practical at 200K.&lt;/strong&gt; The auto-commit bug is a perfect example. The &lt;code&gt;AutoCommitState&lt;/code&gt; was created in &lt;code&gt;main.rs&lt;/code&gt;, the &lt;code&gt;mark_changed()&lt;/code&gt; method existed in &lt;code&gt;git.rs&lt;/code&gt;, but no route handler had access to it. Finding that requires understanding the full request flow from HTTP handler through service layer. That's trivial with the whole codebase loaded.&lt;/p&gt;




&lt;h2&gt;
  
  
  What's Next for ironPad
&lt;/h2&gt;

&lt;p&gt;The project is open source. I released it on GitHub 30 minutes ago.&lt;/p&gt;

&lt;p&gt;We're also going &lt;strong&gt;open method&lt;/strong&gt;. Not just the code, but the process. How every feature was built with AI, what prompts worked, what didn't, how the workflow evolved from 200K to 1M context.&lt;/p&gt;

&lt;p&gt;Because the tools keep getting better, but the process of using them well still matters. A 1M context window doesn't help if you don't know what to ask for.&lt;/p&gt;




&lt;h2&gt;
  
  
  Try It Yourself
&lt;/h2&gt;

&lt;p&gt;The core of what worked today:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Load everything.&lt;/strong&gt; Don't curate files. Let the AI see the whole picture.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ask open questions first.&lt;/strong&gt; "What's wrong?" before "Fix this specific thing." The AI found bugs I didn't know existed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Let it work in batches.&lt;/strong&gt; The AI fixed 14 issues in one session because it could see all the dependencies between them.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Verify mechanically.&lt;/strong&gt; &lt;code&gt;cargo check&lt;/code&gt; and lint tools confirm correctness faster than reading every line.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Keep your structured workflow for session boundaries.&lt;/strong&gt; Handovers and PRDs still matter for smaller tasks and bigger projects. They just aren't needed between every micro-task anymore.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The context window went from a limitation you worked around to a space you fill with your entire project. That changes the game.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;ironPad is being built in the open. Follow the project on GitHub:&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/OlaProeis/ironPad" rel="noopener noreferrer"&gt;https://github.com/OlaProeis/ironPad&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>rust</category>
      <category>programming</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Why I Deleted egui’s TextEdit and Wrote a Text Editor From Scratch</title>
      <dc:creator>Ola Prøis</dc:creator>
      <pubDate>Fri, 30 Jan 2026 09:40:26 +0000</pubDate>
      <link>https://forem.com/olaproeis/why-i-deleted-eguis-textedit-and-wrote-a-text-editor-from-scratch-3p55</link>
      <guid>https://forem.com/olaproeis/why-i-deleted-eguis-textedit-and-wrote-a-text-editor-from-scratch-3p55</guid>
      <description>&lt;p&gt;When I started Ferrite, I didn’t plan to write a text editor.&lt;/p&gt;

&lt;p&gt;Ferrite began as a side project: a fast, native Markdown editor built with Rust and egui. I didn’t know Rust particularly well, I definitely didn’t know GUI programming, and I &lt;em&gt;really&lt;/em&gt; didn’t want to reinvent things that already existed.&lt;/p&gt;

&lt;p&gt;egui had a &lt;code&gt;TextEdit&lt;/code&gt; widget. It worked. So I used it.&lt;/p&gt;

&lt;p&gt;For a long time, that decision was absolutely the right one.&lt;/p&gt;

&lt;p&gt;But in Ferrite v0.2.6, I deleted &lt;code&gt;egui::TextEdit&lt;/code&gt; entirely and replaced it with a custom editor written from scratch.&lt;/p&gt;

&lt;p&gt;This is the story of why.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Editor That Was “Good Enough”
&lt;/h2&gt;

&lt;p&gt;Ferrite shipped a lot of features on top of egui’s &lt;code&gt;TextEdit&lt;/code&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;WYSIWYG Markdown editing&lt;/li&gt;
&lt;li&gt;Split view (raw + rendered)&lt;/li&gt;
&lt;li&gt;Minimap&lt;/li&gt;
&lt;li&gt;Syntax highlighting&lt;/li&gt;
&lt;li&gt;Search and replace&lt;/li&gt;
&lt;li&gt;Undo/redo&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At the time, it felt almost magical that this worked at all.&lt;/p&gt;

&lt;p&gt;I wrote previously about shipping Ferrite without really knowing Rust, and that was true here as well. egui’s abstraction let me move fast and focus on &lt;em&gt;features&lt;/em&gt;, not editor internals.&lt;/p&gt;

&lt;p&gt;And for small to medium files, everything was fine.&lt;/p&gt;

&lt;p&gt;That’s the key part: &lt;strong&gt;until it wasn’t&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Bug That Broke the Illusion
&lt;/h2&gt;

&lt;p&gt;The issue that forced my hand was simple to describe and horrifying to observe.&lt;/p&gt;

&lt;p&gt;Opening a &lt;strong&gt;4MB text file&lt;/strong&gt; caused Ferrite to use &lt;strong&gt;1.8GB of RAM&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;An &lt;strong&gt;80MB file&lt;/strong&gt; was effectively unusable.&lt;/p&gt;

&lt;p&gt;This wasn’t a slow leak or a missing &lt;code&gt;drop&lt;/code&gt;. It was architectural.&lt;/p&gt;

&lt;p&gt;After a lot of profiling, the root causes became clear:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The entire document was being cloned every frame&lt;/li&gt;
&lt;li&gt;Undo relied on full snapshots&lt;/li&gt;
&lt;li&gt;Search paths allocated whole-document copies&lt;/li&gt;
&lt;li&gt;Bracket matching scanned the entire buffer repeatedly&lt;/li&gt;
&lt;li&gt;egui’s text model assumed “the whole text exists, every frame”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;None of this is &lt;em&gt;wrong&lt;/em&gt; for an immediate‑mode UI.&lt;/p&gt;

&lt;p&gt;But it is fundamentally incompatible with large files.&lt;/p&gt;

&lt;p&gt;At some point I had to admit something uncomfortable:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I wasn’t dealing with a bug anymore.&lt;br&gt;
I was dealing with the limits of the abstraction I chose.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Why You Can’t Patch Your Way Out of This
&lt;/h2&gt;

&lt;p&gt;I tried to optimize my way out.&lt;/p&gt;

&lt;p&gt;I really did.&lt;/p&gt;

&lt;p&gt;I added guards. Thresholds. Debounces. Hashes. Special‑case logic for “large files”. I fixed one O(N) path only to discover another one hiding behind it.&lt;/p&gt;

&lt;p&gt;But egui’s &lt;code&gt;TextEdit&lt;/code&gt; makes some assumptions that are deeply baked in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The full document is owned by the widget&lt;/li&gt;
&lt;li&gt;Text is a &lt;code&gt;String&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Layout happens eagerly&lt;/li&gt;
&lt;li&gt;Undo is snapshot‑based&lt;/li&gt;
&lt;li&gt;Rendering isn’t viewport‑aware&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Those assumptions are totally reasonable for UI text fields.&lt;/p&gt;

&lt;p&gt;They are &lt;strong&gt;not&lt;/strong&gt; reasonable for a code editor.&lt;/p&gt;

&lt;p&gt;At some point, optimizations stop being improvements and start being denial.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Moment the Plan Changed
&lt;/h2&gt;

&lt;p&gt;Rewriting the editor wasn’t an impulsive decision.&lt;/p&gt;

&lt;p&gt;From early on, I knew there were things I eventually wanted that egui’s &lt;code&gt;TextEdit&lt;/code&gt; simply wasn’t built for. Features like &lt;strong&gt;proper code folding&lt;/strong&gt;, &lt;strong&gt;reliable sync scrolling&lt;/strong&gt;, and deeper editor–preview integration were already pushing against its limits.&lt;/p&gt;

&lt;p&gt;Because of that, a custom editor had been part of the roadmap for &lt;strong&gt;v0.3.0&lt;/strong&gt;. It was a “someday” project, something to tackle once the surrounding features were stable.&lt;/p&gt;

&lt;p&gt;Then the large‑file issue landed.&lt;/p&gt;

&lt;p&gt;Opening a 4MB file using gigabytes of memory wasn’t just inconvenient; it was a hard failure. At that point, the rewrite stopped being a future improvement and became an immediate requirement.&lt;/p&gt;

&lt;p&gt;The question was no longer &lt;em&gt;if&lt;/em&gt; the editor needed to be replaced, but &lt;em&gt;when&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;So I moved the plan forward.&lt;/p&gt;

&lt;p&gt;What was meant to be a v0.3.0 rewrite became the core of &lt;strong&gt;v0.2.6&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It was still intimidating (text editors are iceberg problems), but postponing it would have meant shipping an editor that couldn’t scale or be trusted.&lt;/p&gt;

&lt;p&gt;So I did the thing earlier than planned.&lt;/p&gt;

&lt;p&gt;I deleted the editor and started over.&lt;/p&gt;




&lt;h2&gt;
  
  
  FerriteEditor: What Changed Architecturally
&lt;/h2&gt;

&lt;p&gt;The new editor, &lt;code&gt;FerriteEditor&lt;/code&gt;, is built around a few core ideas.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Rope‑based Text Storage
&lt;/h3&gt;

&lt;p&gt;Instead of storing text as a &lt;code&gt;String&lt;/code&gt;, Ferrite uses the &lt;code&gt;ropey&lt;/code&gt; crate.&lt;/p&gt;

&lt;p&gt;This gives:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;O(log n) inserts and deletes&lt;/li&gt;
&lt;li&gt;Cheap slicing&lt;/li&gt;
&lt;li&gt;Predictable memory usage&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Large files stop being special cases and start being normal input.&lt;/p&gt;




&lt;h3&gt;
  
  
  2. Virtual Scrolling
&lt;/h3&gt;

&lt;p&gt;FerriteEditor only renders what you can see.&lt;/p&gt;

&lt;p&gt;Not “most of the document”. Not “everything and hope it’s fine”.&lt;/p&gt;

&lt;p&gt;Just:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Visible lines&lt;/li&gt;
&lt;li&gt;Plus a small buffer&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This single change unlocked editing &lt;strong&gt;100MB+ files&lt;/strong&gt; smoothly.&lt;/p&gt;
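
&lt;p&gt;The core of virtual scrolling is a little arithmetic. A sketch with a hypothetical helper, assuming a fixed row height:&lt;/p&gt;

```rust
// Which lines are worth laying out? Only the visible window plus a
// small overscan buffer; everything else is skipped entirely.
// (Hypothetical helper; assumes a fixed row height.)
fn visible_lines(
    scroll_y: f32,   // current scroll offset in pixels
    row_h: f32,      // height of one rendered line
    viewport_h: f32, // height of the visible area
    total: usize,    // total lines in the document
    overscan: usize, // extra lines above/below the viewport
) -> (usize, usize) {
    let first = (scroll_y / row_h).floor() as usize;
    let count = (viewport_h / row_h).ceil() as usize + 1;
    let start = first.saturating_sub(overscan);
    let end = (first + count + overscan).min(total);
    (start, end) // half-open range of lines to render
}
```

&lt;p&gt;A 100MB file and a 1KB file now cost the same per frame; only &lt;code&gt;total&lt;/code&gt; differs.&lt;/p&gt;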




&lt;h3&gt;
  
  
  3. Viewport‑Aware Everything
&lt;/h3&gt;

&lt;p&gt;Operations that used to scan the entire document are now windowed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Bracket matching looks at ~200 lines around the cursor&lt;/li&gt;
&lt;li&gt;Syntax highlighting is per‑line, per‑viewport&lt;/li&gt;
&lt;li&gt;Search highlights are capped to what’s visible&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If it’s off‑screen, it doesn’t matter &lt;em&gt;right now&lt;/em&gt;.&lt;/p&gt;
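
&lt;p&gt;Windowed bracket matching illustrates the idea. A simplified sketch (mine, ASCII parens only): scan at most &lt;code&gt;window&lt;/code&gt; bytes forward from an opening paren instead of the whole buffer:&lt;/p&gt;

```rust
// Find the matching ')' for the '(' at open_idx, scanning at most
// `window` bytes. An off-screen match simply returns None; it doesn't
// matter until it scrolls into view. (Sketch; ASCII parens only.)
fn matching_close(text: &[u8], open_idx: usize, window: usize) -> Option<usize> {
    let end = (open_idx + window).min(text.len());
    let mut depth: i32 = 0;
    for i in open_idx..end {
        match text[i] {
            b'(' => depth += 1,
            b')' => {
                depth -= 1;
                if depth == 0 {
                    return Some(i);
                }
            }
            _ => {}
        }
    }
    None
}
```

&lt;p&gt;The scan is bounded by the window size, not the document size, so the per-frame cost stays constant.&lt;/p&gt;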




&lt;h3&gt;
  
  
  4. Operation‑Based Undo
&lt;/h3&gt;

&lt;p&gt;Undo is no longer “clone the document”.&lt;/p&gt;

&lt;p&gt;Instead, edits are stored as operations and grouped over time (500ms).&lt;/p&gt;

&lt;p&gt;For large files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Undo depth is reduced&lt;/li&gt;
&lt;li&gt;Memory stays bounded&lt;/li&gt;
&lt;li&gt;Editing remains responsive&lt;/li&gt;
&lt;/ul&gt;
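
&lt;p&gt;A hypothetical sketch of how the 500ms grouping can work (the names are mine, not Ferrite's):&lt;/p&gt;

```rust
use std::time::{Duration, Instant};

// Operation-based undo sketch: edits are stored as small operations,
// and an operation arriving within 500ms of the previous one joins
// the same undo group instead of starting a new one.
enum Op {
    Insert { at: usize, text: String },
    Delete { at: usize, text: String },
}

struct UndoHistory {
    groups: Vec<Vec<Op>>, // each inner Vec undoes as one unit
    last_edit: Option<Instant>,
}

impl UndoHistory {
    const GROUP_WINDOW: Duration = Duration::from_millis(500);

    fn new() -> Self {
        Self { groups: Vec::new(), last_edit: None }
    }

    fn record(&mut self, op: Op, now: Instant) {
        let start_new = match self.last_edit {
            Some(prev) => now.duration_since(prev) > Self::GROUP_WINDOW,
            None => true,
        };
        if start_new || self.groups.is_empty() {
            self.groups.push(Vec::new());
        }
        self.groups.last_mut().unwrap().push(op);
        self.last_edit = Some(now);
    }
}
```

&lt;p&gt;Each operation stores only the text it touched, so memory scales with the edit history rather than with document size.&lt;/p&gt;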




&lt;h3&gt;
  
  
  5. Caching Where It Matters
&lt;/h3&gt;

&lt;p&gt;Text layout is expensive.&lt;/p&gt;

&lt;p&gt;Ferrite uses an LRU cache for rendered lines (&lt;code&gt;egui&lt;/code&gt; galleys), so scrolling doesn’t recreate text every frame.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Numbers That Actually Matter
&lt;/h2&gt;

&lt;p&gt;Here’s what changed in practice:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Opening a &lt;strong&gt;4MB file&lt;/strong&gt; went from using &lt;strong&gt;around 1.8GB of RAM&lt;/strong&gt; to &lt;strong&gt;roughly 80MB&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Files around &lt;strong&gt;80MB&lt;/strong&gt;, which were previously unusable, now open and edit smoothly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bracket matching&lt;/strong&gt;, which used to scan the entire document every frame, now operates on a small window of roughly &lt;strong&gt;20KB around the cursor&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Undo/redo&lt;/strong&gt; no longer relies on full document snapshots, but on a compact, operation‑based history.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This wasn’t a micro‑optimization release.&lt;/p&gt;

&lt;p&gt;It was a survival release.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Nearly Broke Along the Way
&lt;/h2&gt;

&lt;p&gt;Some things were far harder than expected:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;IME support&lt;/strong&gt;&lt;br&gt;
Handling composition text correctly is non‑optional if you want global users.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Word wrap + cursor math&lt;/strong&gt;&lt;br&gt;
Visual rows vs logical lines is where many editors quietly fail.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;UTF‑8 offsets&lt;/strong&gt;&lt;br&gt;
Byte offsets and character offsets must never be confused. I confused them. More than once.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scrolling accuracy&lt;/strong&gt;&lt;br&gt;
“Scroll to line 3000” sounds easy. It is not.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;None of these problems are visible in screenshots.&lt;/p&gt;

&lt;p&gt;All of them are visible to users.&lt;/p&gt;




&lt;h2&gt;
  
  
  When You Should &lt;em&gt;Not&lt;/em&gt; Rewrite Your Editor
&lt;/h2&gt;

&lt;p&gt;This is important.&lt;/p&gt;

&lt;p&gt;You should not rewrite your editor because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It feels “cleaner”&lt;/li&gt;
&lt;li&gt;You want control&lt;/li&gt;
&lt;li&gt;Someone on the internet told you to&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I only did this because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The old path was fundamentally blocked&lt;/li&gt;
&lt;li&gt;Users were hitting real limits&lt;/li&gt;
&lt;li&gt;Features like multi‑cursor and large‑file support required it&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Rewrites are expensive. This one was too.&lt;/p&gt;




&lt;h2&gt;
  
  
  What This Unlocks Next
&lt;/h2&gt;

&lt;p&gt;With the new editor in place, Ferrite can now do things that were previously impossible or fragile:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Stable multi‑cursor editing&lt;/li&gt;
&lt;li&gt;Reliable large‑file workflows&lt;/li&gt;
&lt;li&gt;Better editor‑preview synchronization&lt;/li&gt;
&lt;li&gt;Predictable memory usage&lt;/li&gt;
&lt;li&gt;Long‑term maintainability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The editor is no longer the thing holding the project back.&lt;/p&gt;




&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;egui’s &lt;code&gt;TextEdit&lt;/code&gt; wasn’t a mistake.&lt;/p&gt;

&lt;p&gt;It let Ferrite exist.&lt;/p&gt;

&lt;p&gt;But software grows, and sometimes the abstractions that helped you move fast become the ones you need to leave behind.&lt;/p&gt;

&lt;p&gt;Writing a text editor from scratch was the hardest part of Ferrite so far.&lt;/p&gt;

&lt;p&gt;It was also the most important.&lt;/p&gt;




&lt;p&gt;If you’ve been following Ferrite since the early days: thank you.&lt;br&gt;
And if you’re building something similar: I hope this saves you a few wrong turns.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://getferrite.dev" rel="noopener noreferrer"&gt;https://getferrite.dev&lt;/a&gt;&lt;/p&gt;

</description>
      <category>programming</category>
      <category>rust</category>
      <category>ai</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Beyond Open Source: Why AI-Assisted Projects Need Open Method</title>
      <dc:creator>Ola Prøis</dc:creator>
      <pubDate>Sat, 24 Jan 2026 13:03:08 +0000</pubDate>
      <link>https://forem.com/olaproeis/beyond-open-source-why-ai-assisted-projects-need-open-method-fc9</link>
      <guid>https://forem.com/olaproeis/beyond-open-source-why-ai-assisted-projects-need-open-method-fc9</guid>
      <description>&lt;p&gt;A few weeks ago, someone on Hacker News called my project "open weights."&lt;/p&gt;

&lt;p&gt;I'd just shared &lt;a href="https://github.com/olaproeis/ferrite" rel="noopener noreferrer"&gt;Ferrite&lt;/a&gt;, a markdown editor with 100% AI-generated Rust code. The source was on GitHub, MIT licensed, the whole deal. But this commenter argued that without sharing the prompts and process that created it, I was essentially doing the AI equivalent of releasing model weights without the training data. The code was visible, but the &lt;em&gt;inputs&lt;/em&gt; weren't.&lt;/p&gt;

&lt;p&gt;At first I pushed back. The code compiles. It runs. Anyone can fork it, modify it, contribute. Isn't that what open source means?&lt;/p&gt;

&lt;p&gt;But the comment stuck with me. And the more I thought about it, the more I realized they had a point.&lt;/p&gt;

&lt;h2&gt;
  
  
  The gap in "open source"
&lt;/h2&gt;

&lt;p&gt;Open source was designed for a world where humans wrote code. The implicit assumption was always: if you can read the code, you can understand how it was made. A skilled developer could look at a function and reverse-engineer the thinking behind it. The code &lt;em&gt;was&lt;/em&gt; the process, more or less.&lt;/p&gt;

&lt;p&gt;(This is idealized, of course. Plenty of open source projects have opaque decision-making - closed maintainer discussions, undocumented tribal knowledge. But the &lt;em&gt;potential&lt;/em&gt; for transparency was there. The code contained enough signal to reconstruct intent.)&lt;/p&gt;

&lt;p&gt;AI-assisted development breaks that assumption.&lt;/p&gt;

&lt;p&gt;When Claude writes a function based on my prompt, the code tells you &lt;em&gt;what&lt;/em&gt; it does, but not &lt;em&gt;why&lt;/em&gt; it exists in that form. Was this the first attempt or the fifteenth? What constraints did I give? What did I reject along the way? What context did the AI have about the rest of the codebase?&lt;/p&gt;

&lt;p&gt;The code is a snapshot. The process is invisible.&lt;/p&gt;

&lt;p&gt;This isn't necessarily a problem for using the software. But it's a problem if you want to learn from it, build on it, or understand the decisions behind it. Traditional open source invited you into the workshop. AI-assisted open source shows you the finished product and locks the door.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fww216zk22ngtit0mj8kz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fww216zk22ngtit0mj8kz.png" alt="graph" width="800" height="394"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What would "open method" look like?
&lt;/h2&gt;

&lt;p&gt;After that HN comment, I went back and documented everything. The actual workflow is now public: how I use multiple AIs for ideation, how I structure Product Requirements Documents, how I break tasks into subtasks, how I write handover prompts between sessions, how I maintain an AI context file so the model knows the codebase patterns.&lt;/p&gt;

&lt;p&gt;The full thing is in the repo under &lt;code&gt;docs/ai-workflow/&lt;/code&gt;. It's not polished. Some of it is rough notes. But it's real.&lt;/p&gt;

&lt;p&gt;And here's what I learned: sharing this stuff is harder than sharing code.&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>ai</category>
      <category>programming</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Beyond Open Source: Why AI-Assisted Projects Need Open Method</title>
      <dc:creator>Ola Prøis</dc:creator>
      <pubDate>Sat, 24 Jan 2026 13:03:07 +0000</pubDate>
      <link>https://forem.com/olaproeis/beyond-open-source-why-ai-assisted-projects-need-open-method-31pl</link>
      <guid>https://forem.com/olaproeis/beyond-open-source-why-ai-assisted-projects-need-open-method-31pl</guid>
      <description>&lt;p&gt;A few weeks ago, someone on Hacker News called my project "open weights."&lt;/p&gt;

&lt;p&gt;I'd just shared &lt;a href="https://github.com/olaproeis/ferrite" rel="noopener noreferrer"&gt;Ferrite&lt;/a&gt;, a markdown editor with 100% AI-generated Rust code. The source was on GitHub, MIT licensed, the whole deal. But this commenter argued that without sharing the prompts and process that created it, I was essentially doing the AI equivalent of releasing model weights without the training data. The code was visible, but the &lt;em&gt;inputs&lt;/em&gt; weren't.&lt;/p&gt;

&lt;p&gt;At first I pushed back. The code compiles. It runs. Anyone can fork it, modify it, contribute. Isn't that what open source means?&lt;/p&gt;

&lt;p&gt;But the comment stuck with me. And the more I thought about it, the more I realized they had a point.&lt;/p&gt;

&lt;h2&gt;The gap in "open source"&lt;/h2&gt;

&lt;p&gt;Open source was designed for a world where humans wrote code. The implicit assumption was always: if you can read the code, you can understand how it was made. A skilled developer could look at a function and reverse-engineer the thinking behind it. The code &lt;em&gt;was&lt;/em&gt; the process, more or less.&lt;/p&gt;

&lt;p&gt;(This is idealized, of course. Plenty of open source projects have opaque decision-making - closed maintainer discussions, undocumented tribal knowledge. But the &lt;em&gt;potential&lt;/em&gt; for transparency was there. The code contained enough signal to reconstruct intent.)&lt;/p&gt;

&lt;p&gt;AI-assisted development breaks that assumption.&lt;/p&gt;

&lt;p&gt;When Claude writes a function based on my prompt, the code tells you &lt;em&gt;what&lt;/em&gt; it does, but not &lt;em&gt;why&lt;/em&gt; it exists in that form. Was this the first attempt or the fifteenth? What constraints did I give? What did I reject along the way? What context did the AI have about the rest of the codebase?&lt;/p&gt;

&lt;p&gt;The code is a snapshot. The process is invisible.&lt;/p&gt;

&lt;p&gt;This isn't necessarily a problem for using the software. But it's a problem if you want to learn from it, build on it, or understand the decisions behind it. Traditional open source invited you into the workshop. AI-assisted open source shows you the finished product and locks the door.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fww216zk22ngtit0mj8kz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fww216zk22ngtit0mj8kz.png" alt="graph" width="800" height="394"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;What would "open method" look like?&lt;/h2&gt;

&lt;p&gt;After that HN comment, I went back and documented everything. The actual workflow is now public: how I use multiple AIs for ideation, how I structure Product Requirements Documents, how I break tasks into subtasks, how I write handover prompts between sessions, how I maintain an AI context file so the model knows the codebase patterns.&lt;/p&gt;

&lt;p&gt;The full thing is in the repo under &lt;code&gt;docs/ai-workflow/&lt;/code&gt;. It's not polished. Some of it is rough notes. But it's real.&lt;/p&gt;

&lt;p&gt;And here's what I learned: sharing this stuff is harder than sharing code.&lt;/p&gt;

&lt;p&gt;Code has conventions. Linters. Tests. There's a shared understanding of what "good" looks like. But AI prompts? There's no standard format. No agreed-upon level of detail. Do I share every failed attempt? Just the successful ones? The iterative refinements?&lt;/p&gt;

&lt;p&gt;I landed on sharing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The workflow documentation&lt;/strong&gt; (how I actually work with AI, step by step)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Historical PRDs&lt;/strong&gt; (the requirements documents I feed to the AI)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Handover prompts&lt;/strong&gt; (what I tell the AI at the start of each session)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Task breakdowns&lt;/strong&gt; (how features get decomposed into implementable chunks)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Is this enough? I don't know. But it's more than just the code.&lt;/p&gt;

&lt;h2&gt;Why this matters beyond my project&lt;/h2&gt;

&lt;p&gt;I keep seeing AI-assisted projects pop up on GitHub. Some are explicit about it, some aren't. And I think we're heading toward a world where a significant chunk of open source code will be AI-generated or AI-assisted.&lt;/p&gt;

&lt;p&gt;If that's true, we need to figure out what transparency means in this context.&lt;/p&gt;

&lt;p&gt;The conversation is already starting. &lt;a href="https://blog.mozilla.ai/ai-generated-code-isnt-cheating-oss-needs-to-talk-about-it/" rel="noopener noreferrer"&gt;Mozilla.ai recently argued&lt;/a&gt; that "without an AI Coding policy that promotes transparency alongside innovation, Open Source codebases are going to struggle." They've implemented PR templates asking contributors to disclose their level of AI usage. &lt;a href="https://coder.com/docs/about/contributing/AI_CONTRIBUTING" rel="noopener noreferrer"&gt;Coder has published formal guidelines&lt;/a&gt; requiring disclosure when AI is the primary author, plus verification evidence that the code actually works.&lt;/p&gt;

&lt;p&gt;This is good progress. But &lt;em&gt;disclosure&lt;/em&gt; - saying "AI wrote this" - is different from &lt;em&gt;method&lt;/em&gt; - sharing how it was written. Both matter, but they serve different purposes. Disclosure helps reviewers calibrate their trust. Method helps others learn, replicate, and build on your approach.&lt;/p&gt;

&lt;p&gt;Traditional open source licenses cover neither. MIT and GPL address distribution, modification, and attribution. They say nothing about documenting your process. There's no legal requirement to share your prompts.&lt;/p&gt;

&lt;p&gt;But there's a difference between legal requirements and community norms. Open source thrived because of a culture of sharing knowledge, not just code. READMEs, contribution guides, architectural decision records, commit messages that explain &lt;em&gt;why&lt;/em&gt; - all of this is technically optional but practically essential.&lt;/p&gt;

&lt;p&gt;I'm arguing we need to extend that culture to AI-assisted development. Not as a legal requirement, but as a community expectation. If you're releasing AI-generated code, consider releasing the method too.&lt;/p&gt;

&lt;p&gt;There's also a selfish reason to do this: &lt;strong&gt;open method protects the author&lt;/strong&gt;. If your code has quirks or unconventional patterns, the method shows why. It proves you guided the AI - that you made architectural decisions, rejected bad suggestions, iterated toward something intentional. Without it, people might assume you just pasted output without thinking. With it, you're demonstrating the craft behind the code.&lt;/p&gt;

&lt;h2&gt;What to call it&lt;/h2&gt;

&lt;p&gt;"Open weights" doesn't quite fit. That term has a specific meaning in ML - releasing model parameters without training data or code. It's about what you're withholding, not what you're sharing.&lt;/p&gt;

&lt;p&gt;"Reproducible development" is closer, but sounds academic. And true reproducibility might be impossible anyway - you can't perfectly reproduce an AI interaction. Run the same prompt twice and you'll get different output. This isn't about deterministic reproduction like a build script. It's about &lt;em&gt;conceptual&lt;/em&gt; reproduction - understanding the approach well enough to build something similar, or to pick up where someone left off.&lt;/p&gt;

&lt;p&gt;I've been thinking of it as &lt;strong&gt;"open method"&lt;/strong&gt; - sharing not just the code, but the process that created it. The prompts, the workflow, the decisions. Enough that someone else could understand not just &lt;em&gt;what&lt;/em&gt; you built, but &lt;em&gt;how&lt;/em&gt; you built it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqtj342m205skwrbfc9tz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqtj342m205skwrbfc9tz.png" alt="graph" width="800" height="890"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This parallels how academia handles research. "Open science" doesn't just mean publishing results. It means sharing data, methodology, analysis code. A paper without methodology gets rejected - you can't just say "trust me, the results are valid." Yet in software, we accept the binary without the lab notes all the time. We're used to it because the code &lt;em&gt;was&lt;/em&gt; the lab notes. With AI-assisted development, that's no longer true.&lt;/p&gt;

&lt;p&gt;Software development with AI needs the same shift academia made: recognizing that outputs alone aren't enough.&lt;/p&gt;

&lt;h2&gt;Practical suggestions&lt;/h2&gt;

&lt;p&gt;If you're releasing AI-assisted code and want to practice "open method":&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Document your workflow.&lt;/strong&gt; Not a polished tutorial, just an honest description of how you actually work. What tools? What prompts? What does iteration look like?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Save your PRDs.&lt;/strong&gt; If you write requirements documents for the AI, keep them. They're the closest thing to "training data" for your specific project.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Keep handover context.&lt;/strong&gt; Whatever you tell the AI at the start of sessions - system prompts, context files, architectural notes - consider making it available.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note significant decisions.&lt;/strong&gt; When you rejected an AI suggestion or chose between approaches, a quick note about why helps future readers.&lt;/p&gt;
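&lt;p&gt;For illustration, such a note can be three lines in a decisions file - this one is invented, not from Ferrite:&lt;/p&gt;

```markdown
## 2026-01-10: Rejected AI-suggested global document cache

- The model proposed a global mutable cache for parsed markdown documents.
- Rejected: it complicates testing and concurrent access.
- Chose instead: pass a cache handle explicitly through the render path.
```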

&lt;p&gt;None of this needs to be perfect. The bar should be "useful to someone trying to understand how this was built," not "publishable academic paper."&lt;/p&gt;

&lt;h2&gt;The tradeoffs&lt;/h2&gt;

&lt;p&gt;I should acknowledge: there are reasons &lt;em&gt;not&lt;/em&gt; to share everything.&lt;/p&gt;

&lt;p&gt;Prompts can reveal proprietary thinking - your secret sauce for getting good results. Sharing failed iterations might expose security vulnerabilities or embarrassing dead ends. Some companies have legitimate IP concerns about their AI workflows.&lt;/p&gt;

&lt;p&gt;This isn't all-or-nothing. You can share your general approach without revealing every prompt. You can document the workflow without exposing sensitive business logic. The goal is enough transparency to be useful, not a livestream of your entire development process.&lt;/p&gt;

&lt;p&gt;But I'd argue the default should shift toward openness, especially for open source projects. If you're already sharing the code, sharing the method is a natural extension.&lt;/p&gt;

&lt;h2&gt;What's still missing&lt;/h2&gt;

&lt;p&gt;I don't think sharing my workflow documentation solves this problem. It's one example. What we actually need are conventions - the way we have conventions for READMEs, for commit messages, for contribution guides.&lt;/p&gt;

&lt;p&gt;What should an "open method" disclosure look like? Is there a standard format? Should it live in the repo, in the PR, somewhere else? How much detail is enough - the final working prompt, or the fifteen failed attempts before it?&lt;/p&gt;

&lt;p&gt;We also lack tooling. We have git for code, but we don't have good version control for chat sessions. My "handover prompts" are markdown files I manually maintain. That works, but it's friction. The first tool that makes capturing AI development context as easy as &lt;code&gt;git commit&lt;/code&gt; will unlock a lot more openness.&lt;/p&gt;

&lt;p&gt;Here's a concrete starting point: what if GitHub repos had an &lt;code&gt;AI_METHOD.md&lt;/code&gt; alongside &lt;code&gt;README.md&lt;/code&gt; and &lt;code&gt;CONTRIBUTING.md&lt;/code&gt;? A standard template that answers: What AI tools were used? What's the general workflow? Where can I find example prompts or PRDs? It's not a perfect solution, but it's a convention - and conventions are how communities coordinate.&lt;/p&gt;
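&lt;p&gt;To make that tangible, one possible skeleton - my own strawman, not an existing standard:&lt;/p&gt;

```markdown
# AI Method

## Tools
(models/assistants used, and roughly for what: generation, review, ideation)

## Workflow
(how work moves from idea to merged code: PRDs, task breakdowns, sessions)

## Human oversight
(what gets reviewed by hand, how output is verified before merging)

## Artifacts
(links to example prompts, PRDs, handover files, session logs)
```

&lt;p&gt;Even a skeleton this thin answers the first questions a reviewer has: what generated the code, how it was steered, and who checked it.&lt;/p&gt;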

&lt;p&gt;I don't have all the answers yet. But I think starting to share, even imperfectly, is how we'll figure it out. The early open source movement didn't have perfect conventions either. They emerged through practice, through people trying things and seeing what worked.&lt;/p&gt;

&lt;p&gt;If AI-assisted development is going to become normal, we need to normalize showing our work. Not because there's anything wrong with using AI, but because transparency builds trust, enables learning, and strengthens the open source ecosystem we all benefit from.&lt;/p&gt;

&lt;p&gt;The code is the output. The method is the craft. Both can be open.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;I'm curious what you think.&lt;/strong&gt; If you were reviewing an AI-assisted PR, what documentation would actually help you trust it? What would you want to see in an "open method" disclosure? Let's figure this out together in the comments.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I wrote more about the specific workflow in &lt;a href="https://dev.to/olaproeis/the-ai-development-workflow-i-actually-use-549i"&gt;The AI Development Workflow I Actually Use&lt;/a&gt;, and the story of building Ferrite in &lt;a href="https://dev.to/olaproeis/i-shipped-an-800-star-markdown-editor-without-knowing-rust-28g6"&gt;I shipped an 800-star Markdown editor without knowing Rust&lt;/a&gt;. The full workflow documentation is in the &lt;a href="https://github.com/olaproeis/ferrite/tree/master/docs/ai-workflow" rel="noopener noreferrer"&gt;Ferrite repo&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>discuss</category>
      <category>opensource</category>
      <category>rust</category>
    </item>
  </channel>
</rss>
