<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: drape dev</title>
    <description>The latest articles on Forem by drape dev (@drape_dev).</description>
    <link>https://forem.com/drape_dev</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3762008%2Fbdea937c-7b72-442f-83fe-89c2c71247c4.png</url>
      <title>Forem: drape dev</title>
      <link>https://forem.com/drape_dev</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/drape_dev"/>
    <language>en</language>
    <item>
      <title>I Built a Mobile IDE With AI Agents - Here's How It Works Under the Hood</title>
      <dc:creator>drape dev</dc:creator>
      <pubDate>Wed, 11 Feb 2026 15:03:14 +0000</pubDate>
      <link>https://forem.com/drape_dev/i-built-a-mobile-ide-with-ai-agents-heres-how-it-works-under-the-hood-36aj</link>
      <guid>https://forem.com/drape_dev/i-built-a-mobile-ide-with-ai-agents-heres-how-it-works-under-the-hood-36aj</guid>
      <description>&lt;p&gt;I've been building &lt;strong&gt;Drape&lt;/strong&gt;, a mobile IDE for iOS that lets you code, preview, and ship apps entirely from your phone. The AI doesn't just suggest code - it's an autonomous agent that reads your project, writes files, runs commands, and iterates.&lt;/p&gt;

&lt;p&gt;In this post I want to share how it works under the hood, the architectural decisions I made, and the challenges of building a real IDE on mobile.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Drape Does
&lt;/h2&gt;

&lt;p&gt;Before diving into the tech, here's what the user experience looks like:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You create a new project (or clone from GitHub)&lt;/li&gt;
&lt;li&gt;You describe what you want: &lt;em&gt;"A dashboard with revenue analytics and a chart"&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;The AI picks a model (Gemini 3.0, GPT, Claude - your choice), reads the project context, and starts coding&lt;/li&gt;
&lt;li&gt;Files appear in the file explorer, code is written, dependencies are installed&lt;/li&gt;
&lt;li&gt;A live preview shows your app running in real time&lt;/li&gt;
&lt;li&gt;You iterate: &lt;em&gt;"Make the chart interactive"&lt;/em&gt; → AI edits the code → preview updates&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;All of this happens on your phone. No laptop, no desktop, no SSH into a remote machine.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fegdzbsnnevfpxfp8zpzp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fegdzbsnnevfpxfp8zpzp.png" alt=" " width="800" height="1422"&gt;&lt;/a&gt;## Architecture Overview&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;┌─────────────────────────────┐
│         iOS App             │
│    (React Native + Expo)    │
│                             │
│  ┌─────────┐ ┌────────────┐ │
│  │ AI Chat │ │  Terminal  │ │
│  │         │ │  (xterm)   │ │
│  ├─────────┤ ├────────────┤ │
│  │  File   │ │    Live    │ │
│  │ Manager │ │  Preview   │ │
│  └────┬────┘ └─────┬──────┘ │
│       │            │        │
└───────┼────────────┼────────┘
        │    SSE     │ WebView
        ▼            ▼
┌─────────────────────────────┐
│       Backend (Node.js)     │
│                             │
│  ┌──────────┐ ┌───────────┐ │
│  │  Agent   │ │    VM     │ │
│  │  Loop    │ │ Manager   │ │
│  └────┬─────┘ └──────┬────┘ │
│       │              │      │
└───────┼──────────────┼──────┘
        │              │
        ▼              ▼
┌──────────────┐ ┌───────────┐
│  AI Models   │ │  Fly.io   │
│ (Gemini/GPT/ │ │ Micro-VMs │
│  Claude)     │ │           │
└──────────────┘ └───────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  The App Layer
&lt;/h3&gt;

&lt;p&gt;Built with &lt;strong&gt;React Native + Expo&lt;/strong&gt; using the new architecture (Fabric). State management is split across multiple &lt;strong&gt;Zustand&lt;/strong&gt; stores:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;chatStore&lt;/strong&gt;: Conversation history, message state&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;tabStore&lt;/strong&gt;: Terminal items per tab (each project gets tabs)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;uiStore&lt;/strong&gt;: UI state (modals, panels, selections)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;workstationStore&lt;/strong&gt;: Project/VM state&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The app has four main views: AI Chat, Terminal, File Manager, and Live Preview. They all share state and update in real time.&lt;/p&gt;
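&lt;p&gt;As an illustration, the shared-store pattern boils down to a few lines of vanilla JS. This is a sketch of the idea, not the real zustand API or Drape's actual stores:&lt;/p&gt;

```javascript
// Minimal publish/subscribe store in the spirit of the zustand pattern.
// Illustrative only - the real app uses the zustand library.
function createStore(initial) {
  let state = initial;
  const listeners = new Set();
  return {
    getState: () => state,
    // Merge a partial update and notify every subscribed view.
    setState(partial) {
      state = { ...state, ...partial };
      listeners.forEach((fn) => fn(state));
    },
    // Views subscribe to react to changes; returns an unsubscribe handle.
    subscribe(fn) {
      listeners.add(fn);
      return () => listeners.delete(fn);
    },
  };
}

// A hypothetical uiStore shared by the chat, terminal and preview views:
const uiStore = createStore({ activePanel: "chat", modalOpen: false });
uiStore.subscribe((s) => console.log("panel:", s.activePanel));
uiStore.setState({ activePanel: "preview" }); // logs "panel: preview"
```

&lt;p&gt;Every view that subscribes sees the same state object, which is what lets a terminal command update the file tree and preview at once.&lt;/p&gt;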

&lt;h3&gt;
  
  
  The Agent Loop
&lt;/h3&gt;

&lt;p&gt;This is the core of the AI experience. It's not a simple request/response - it's an &lt;strong&gt;agentic loop&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;User message
    ↓
Agent receives message + project context
    ↓
┌─→ Model decides next action ──┐
│   (edit_file, run_command,     │
│    read_file, ask_user)        │
│         ↓                      │
│   Execute action on VM         │
│         ↓                      │
│   Observe result               │
│         ↓                      │
└── Need more actions? ←────────┘
         ↓ (done)
    Final response to user
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The agent can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Read files&lt;/strong&gt;: Understands project structure and existing code&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create/edit files&lt;/strong&gt;: Writes new components, modifies existing ones&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Run terminal commands&lt;/strong&gt;: &lt;code&gt;npm install&lt;/code&gt;, &lt;code&gt;git commit&lt;/code&gt;, build scripts&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Check preview&lt;/strong&gt;: Verifies the app is running correctly&lt;/li&gt;
&lt;/ul&gt;
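
&lt;p&gt;Stripped of streaming and error handling, the loop above can be sketched like this. The model and tools here are toy stand-ins, not Drape's actual implementation:&lt;/p&gt;

```javascript
// Illustrative agent loop: the model picks an action, the backend executes
// it against the project VM, and the observation feeds the next decision.
function runAgentLoop(model, tools, userMessage, maxSteps = 10) {
  const transcript = [{ role: "user", content: userMessage }];
  for (let step = 0; step !== maxSteps; step += 1) {
    const action = model(transcript);               // model decides next action
    if (action.type === "done") {
      return action.text;                           // final response to user
    }
    const result = tools[action.tool](action.args); // execute action on VM
    transcript.push({ role: "tool", tool: action.tool, result }); // observe
  }
  throw new Error("agent exceeded step budget");
}

// Toy usage: a "model" that reads one file, then finishes.
const tools = { read_file: ({ path }) => "// contents of " + path };
let turn = 0;
const toyModel = () => {
  turn += 1;
  return turn === 1
    ? { type: "tool", tool: "read_file", args: { path: "App.js" } }
    : { type: "done", text: "App.js reviewed" };
};
```

&lt;p&gt;The step budget matters: without it, a model that keeps choosing tool calls would loop forever.&lt;/p&gt;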

&lt;p&gt;All events stream to the app via &lt;strong&gt;Server-Sent Events (SSE)&lt;/strong&gt;. The app processes them in real time - you see the AI "thinking", then files appearing, terminal commands running, and the preview updating.&lt;/p&gt;
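
&lt;p&gt;To make that concrete, here is a sketch of the frame parsing an SSE client needs. It is simplified - a production client also handles &lt;code&gt;event:&lt;/code&gt;, &lt;code&gt;id:&lt;/code&gt; and retry fields, and this is not Drape's actual code:&lt;/p&gt;

```javascript
// Minimal SSE frame parser. SSE frames are separated by a blank line, and
// each "data: " line carries a payload - here assumed to be JSON events.
function parseSSE(buffer, chunk) {
  const text = buffer + chunk;
  const frames = text.split("\n\n");
  const rest = frames.pop();            // last piece may be an incomplete frame
  const events = [];
  for (const frame of frames) {
    for (const line of frame.split("\n")) {
      if (line.startsWith("data: ")) {
        events.push(JSON.parse(line.slice(6)));
      }
    }
  }
  return { events, rest };              // carry "rest" into the next call
}
```

&lt;p&gt;The leftover buffer is the important part: network chunks don't align with frame boundaries, so a half-received event has to wait for the next chunk.&lt;/p&gt;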

&lt;h3&gt;
  
  
  Cloud VMs
&lt;/h3&gt;

&lt;p&gt;Every project gets its own &lt;strong&gt;Fly.io micro-VM&lt;/strong&gt;. This is a real Linux environment with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Node.js + npm&lt;/li&gt;
&lt;li&gt;Full filesystem&lt;/li&gt;
&lt;li&gt;Network access&lt;/li&gt;
&lt;li&gt;Process management&lt;/li&gt;
&lt;/ul&gt;
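
&lt;p&gt;The per-project mapping is essentially a lazy cache keyed by project. A sketch with a stand-in &lt;code&gt;provision&lt;/code&gt; function - the real Fly.io Machines calls are asynchronous and also handle suspend/resume:&lt;/p&gt;

```javascript
// Hypothetical VM manager: one micro-VM per project, created on first use.
// Provisioning is modelled as a synchronous stand-in to keep the sketch short.
function makeVMManager(provision) {
  const vms = new Map();                        // projectId -> VM handle
  return function getVM(projectId) {
    if (!vms.has(projectId)) {
      vms.set(projectId, provision(projectId)); // first use: create the VM
    }
    return vms.get(projectId);                  // later uses: reuse it
  };
}
```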

&lt;p&gt;Why not WebAssembly or a browser sandbox? Because real projects need real environments. You can't run a Node.js server in WASM (well, not reliably). The VM approach adds some latency but gives you an authentic development experience.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe9mi1caers2o7rthc3ef.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe9mi1caers2o7rthc3ef.png" alt=" " width="800" height="1422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Challenges I Faced
&lt;/h2&gt;
&lt;h3&gt;
  
  
  1. SSE Streaming + React State
&lt;/h3&gt;

&lt;p&gt;The AI agent generates a stream of events: &lt;code&gt;thinking&lt;/code&gt;, &lt;code&gt;text&lt;/code&gt;, &lt;code&gt;tool_start&lt;/code&gt;, &lt;code&gt;tool_input&lt;/code&gt;, &lt;code&gt;tool_complete&lt;/code&gt;, etc. Processing these correctly while keeping React state consistent was the hardest part.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key lesson&lt;/strong&gt;: Never merge events in place - always append new ones. If you mutate an existing array element, the array length doesn't change, so a React ref tracking the "last processed index" never sees anything new to process and silently skips the update.&lt;/p&gt;
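
&lt;p&gt;The pattern looks roughly like this (the names are illustrative, not Drape's actual code):&lt;/p&gt;

```javascript
// Append-only event log: a "last processed" cursor only advances when
// events.length grows, which is why in-place mutation gets silently skipped.
function makeEventLog() {
  const events = [];
  let processed = 0;                   // plays the role of the React ref
  return {
    append(event) {
      events.push(event);              // length grows, so consumers notice
    },
    drainUnprocessed() {
      const fresh = events.slice(processed);
      processed = events.length;       // advance the cursor past what we saw
      return fresh;
    },
  };
}
```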
&lt;h3&gt;
  
  
  2. Fabric View Management
&lt;/h3&gt;

&lt;p&gt;React Native's new architecture (Fabric) tracks native subview indices precisely. If you try to insert a native overlay view into a Fabric-managed view hierarchy, it shifts indices and causes crashes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Attempt to unmount a view which has a different index
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;: Add overlays to &lt;code&gt;UIWindow&lt;/code&gt; directly (outside the Fabric tree) and track position with &lt;code&gt;CADisplayLink&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Multi-Model AI Support
&lt;/h3&gt;

&lt;p&gt;Each model (Gemini, GPT, Claude) has different APIs, capabilities, and quirks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Different message formats&lt;/li&gt;
&lt;li&gt;Different tool calling conventions&lt;/li&gt;
&lt;li&gt;Different streaming formats&lt;/li&gt;
&lt;li&gt;Different context window sizes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The backend abstracts these differences so the frontend just sees a unified event stream.&lt;/p&gt;
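
&lt;p&gt;Conceptually it's one adapter per provider that maps provider-specific stream chunks onto a single event shape. The chunk formats below are deliberately simplified stand-ins, not the real provider APIs:&lt;/p&gt;

```javascript
// Each adapter normalizes a provider-specific chunk into { type, text }.
// The chunk shapes here are simplified illustrations.
const adapters = {
  gemini: (chunk) => ({ type: "text", text: chunk.candidates[0].content }),
  gpt: (chunk) => ({ type: "text", text: chunk.choices[0].delta.content }),
  claude: (chunk) => ({ type: "text", text: chunk.delta.text }),
};

function normalize(provider, chunk) {
  const adapt = adapters[provider];
  if (!adapt) throw new Error("unknown provider: " + provider);
  return adapt(chunk);                 // the frontend only ever sees this shape
}
```

&lt;p&gt;New models then only need a new adapter on the backend; the app itself doesn't change.&lt;/p&gt;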

&lt;h3&gt;
  
  
  4. Mobile UX for Development
&lt;/h3&gt;

&lt;p&gt;Coding on a phone sounds terrible. Making it feel good required:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Smart defaults (AI does the heavy lifting, you guide)&lt;/li&gt;
&lt;li&gt;Split views that actually work on a phone screen&lt;/li&gt;
&lt;li&gt;Quick actions instead of typing everything&lt;/li&gt;
&lt;li&gt;File tree that's easy to navigate with touch&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/5.png" alt="File manager and workspace" width="800" height="400"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Full workspace with file explorer&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Android version&lt;/li&gt;
&lt;li&gt;Collaborative editing&lt;/li&gt;
&lt;li&gt;More AI models&lt;/li&gt;
&lt;li&gt;Plugin system&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Try It
&lt;/h2&gt;

&lt;p&gt;Drape is free to start. Download it and build something.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;drape.info&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I'd love to hear from anyone who's tried mobile development before, or anyone building AI-powered dev tools. What features would make you switch from your laptop?&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Follow me for more posts about building dev tools, AI agents, and mobile development.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>webview</category>
      <category>ai</category>
      <category>mobile</category>
      <category>javascript</category>
    </item>
  </channel>
</rss>
