<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: nmelo</title>
    <description>The latest articles on Forem by nmelo (@nmelo).</description>
    <link>https://forem.com/nmelo</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F267717%2F5edab401-9025-444e-ae1e-81ff46224033.jpeg</url>
      <title>Forem: nmelo</title>
      <link>https://forem.com/nmelo</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/nmelo"/>
    <language>en</language>
    <item>
      <title>I built a native desktop GUI for the beads issue tracker</title>
      <dc:creator>nmelo</dc:creator>
      <pubDate>Fri, 06 Mar 2026 13:30:12 +0000</pubDate>
      <link>https://forem.com/nmelo/i-built-a-native-desktop-gui-for-the-beads-issue-tracker-2c9f</link>
      <guid>https://forem.com/nmelo/i-built-a-native-desktop-gui-for-the-beads-issue-tracker-2c9f</guid>
      <description>&lt;p&gt;&lt;a href="https://github.com/steveyegge/beads" rel="noopener noreferrer"&gt;Beads&lt;/a&gt; is an issue tracker designed for AI-assisted development. Your agents create and manage issues via CLI while you code. It's gotten popular fast — 50K+ users, 30+ community tools.&lt;/p&gt;

&lt;p&gt;I wanted a way to see all my beads issues without opening VS Code. So I built &lt;strong&gt;Beadbox&lt;/strong&gt; — a native desktop app using Tauri v2 + Next.js.&lt;/p&gt;

&lt;h2&gt;What it does&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Epic tree view&lt;/strong&gt; with progress bars — see your project hierarchy at a glance&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real-time sync&lt;/strong&gt; — WebSocket watches your local &lt;code&gt;.beads/&lt;/code&gt; directory, updates hit the UI in seconds&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dependency badges&lt;/strong&gt; — which issues are blocked and by what&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-workspace&lt;/strong&gt; — switch between projects instantly&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Inline editing&lt;/strong&gt; — update status, priority, assignee without leaving the app&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Keyboard navigation&lt;/strong&gt; — j/k to move, Enter to open, Esc to close&lt;/li&gt;
&lt;/ul&gt;
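&lt;p&gt;As a rough illustration of the sync idea (a hypothetical sketch, not Beadbox's actual watcher, which is likely event-based rather than polling), detecting changes under a &lt;code&gt;.beads/&lt;/code&gt; directory can be as simple as diffing mtime snapshots:&lt;/p&gt;

```python
import os

def snapshot(path):
    """Map every file under path to its last-modified time."""
    mtimes = {}
    for root, _dirs, files in os.walk(path):
        for name in files:
            full = os.path.join(root, name)
            mtimes[full] = os.stat(full).st_mtime
    return mtimes

def diff(prev, cur):
    """Paths that were added, modified, or deleted between two snapshots."""
    changed = [p for p in cur if prev.get(p) != cur[p]]
    deleted = [p for p in prev if p not in cur]
    return changed + deleted
```

&lt;p&gt;A real implementation would push each changed path over the WebSocket to the UI instead of returning a list.&lt;/p&gt;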

&lt;h2&gt;The stack&lt;/h2&gt;

&lt;p&gt;Tauri v2 wraps the whole thing as a native app. Rust spawns a Node.js sidecar that runs the Next.js server + a WebSocket server. The WebView points at localhost. The web app doesn't know it's inside a native wrapper.&lt;/p&gt;

&lt;p&gt;Bundle is ~160MB (84MB is Node.js itself). For comparison, a bare Electron app starts around 200MB, and you get native window chrome + system WebView instead of bundled Chromium.&lt;/p&gt;
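&lt;p&gt;For readers unfamiliar with the sidecar pattern: in Tauri v2, bundled external binaries are declared in &lt;code&gt;tauri.conf.json&lt;/code&gt;. A minimal sketch (the binary name here is hypothetical, not Beadbox's actual config):&lt;/p&gt;

```json
{
  "bundle": {
    "externalBin": ["binaries/node-server"]
  }
}
```

&lt;p&gt;Tauri appends the platform target triple to each entry at build time, and the Rust side launches the sidecar through the shell plugin.&lt;/p&gt;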

&lt;h2&gt;Runs everywhere&lt;/h2&gt;

&lt;p&gt;macOS (Apple Silicon + Intel), Linux (.deb + AppImage), and Windows. Code-signed and notarized on macOS.&lt;/p&gt;

&lt;h2&gt;Try it&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew tap beadbox/cask &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; brew &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--cask&lt;/span&gt; beadbox
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or grab a binary from &lt;a href="https://beadbox.app" rel="noopener noreferrer"&gt;beadbox.app&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;No account needed. No cloud. Reads your local beads database directly. Free during beta.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/beadbox/beadbox" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>showdev</category>
      <category>tauri</category>
      <category>nextjs</category>
    </item>
    <item>
      <title>I ship software with 13 AI agents. Here's what that actually looks like</title>
      <dc:creator>nmelo</dc:creator>
      <pubDate>Tue, 03 Mar 2026 15:30:49 +0000</pubDate>
      <link>https://forem.com/nmelo/i-ship-software-with-13-ai-agents-heres-what-that-actually-looks-like-420c</link>
      <guid>https://forem.com/nmelo/i-ship-software-with-13-ai-agents-heres-what-that-actually-looks-like-420c</guid>
      <description>&lt;p&gt;This is my terminal right now.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fasht7ys4w12o2dzimieb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fasht7ys4w12o2dzimieb.png" alt="13 Claude Code agents running in tmux panes" width="800" height="557"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;13 Claude Code agents, each in its own tmux pane, working on the same codebase. Not as an experiment. Not as a flex. This is how I ship software every single day.&lt;/p&gt;

&lt;p&gt;The project is &lt;a href="https://beadbox.app" rel="noopener noreferrer"&gt;Beadbox&lt;/a&gt;, a real-time dashboard for monitoring AI coding agents. It's built by the very agent fleet it monitors. The agents write the code, test it, review it, package it, and ship it. I coordinate.&lt;/p&gt;

&lt;p&gt;If you're running more than two or three agents and wondering how to keep track of what they're all doing, this is what I've landed on after months of iteration. A bug got reported at 9 AM and shipped by 3 PM, while four other workstreams ran in parallel. It doesn't always go smoothly, but the throughput is real.&lt;/p&gt;

&lt;h2&gt;The Roster&lt;/h2&gt;

&lt;p&gt;Every agent has a &lt;code&gt;CLAUDE.md&lt;/code&gt; file that defines its identity, what it owns, what it doesn't, and how it communicates with other agents. These aren't generic "do anything" assistants. Each one has a narrow job and explicit boundaries.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Group&lt;/th&gt;
&lt;th&gt;Agents&lt;/th&gt;
&lt;th&gt;What they own&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Coordination&lt;/td&gt;
&lt;td&gt;super, pm, owner&lt;/td&gt;
&lt;td&gt;Work dispatch, product specs, business priorities&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Engineering&lt;/td&gt;
&lt;td&gt;eng1, eng2, arch&lt;/td&gt;
&lt;td&gt;Implementation, system design, test suites&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Quality&lt;/td&gt;
&lt;td&gt;qa1, qa2&lt;/td&gt;
&lt;td&gt;Independent validation, release gates&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Operations&lt;/td&gt;
&lt;td&gt;ops, shipper&lt;/td&gt;
&lt;td&gt;Platform testing, builds, release execution&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Growth&lt;/td&gt;
&lt;td&gt;growth, pmm, pmm2&lt;/td&gt;
&lt;td&gt;Analytics, positioning, public content&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The key word is &lt;strong&gt;boundaries&lt;/strong&gt;. eng2 can't close issues. qa1 doesn't write code. pmm never touches the app source. Super dispatches work but doesn't implement. The boundaries exist because without them, agents drift. They "help" by refactoring code that didn't need refactoring, or closing issues that weren't verified, or making architectural decisions they're not qualified to make.&lt;/p&gt;
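&lt;p&gt;The boundary idea reduces to an explicit allow-list per role, with everything else refused. A toy sketch (role and action names are illustrative, not the actual protocol):&lt;/p&gt;

```python
# Illustrative allow-list per agent role; anything absent is refused.
ALLOWED = {
    "eng2":  {"claim", "comment", "push"},      # implements, but can't close issues
    "qa1":   {"comment", "verify", "close"},    # validates, never writes code
    "pmm":   {"comment", "publish"},            # content only, never app source
    "super": {"dispatch", "comment", "peek"},   # coordinates, never implements
}

def can(actor: str, action: str) -> bool:
    """True only when the role's allow-list explicitly grants the action."""
    return action in ALLOWED.get(actor, set())
```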

&lt;p&gt;Every &lt;code&gt;CLAUDE.md&lt;/code&gt; starts with an identity paragraph and a boundary section. Here's an abbreviated version of what eng2's looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;## Identity&lt;/span&gt;
Engineer for Beadbox. You implement features, fix bugs, and write tests.
You own implementation quality: the code you write is correct, tested,
and matches the spec.

&lt;span class="gu"&gt;## Boundary with QA&lt;/span&gt;
QA validates your work independently. You provide QA with executable
verification steps. If your DONE comment doesn't let QA verify without
reading source code, it's incomplete.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This pattern scales. When I started with 3 agents, they could share a single loose prompt. At 13, explicit roles and protocols are the difference between coordination and chaos.&lt;/p&gt;

&lt;h2&gt;The Coordination Layer&lt;/h2&gt;

&lt;p&gt;Three tools hold the fleet together.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/steveyegge/beads" rel="noopener noreferrer"&gt;beads&lt;/a&gt;&lt;/strong&gt; is an open-source, local-first issue tracker built for exactly this workflow. Every task is a "bead" with a status, priority, dependencies, and a comment thread. Agents read and write to the same local database through a CLI called &lt;code&gt;bd&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;bd update bb-viet &lt;span class="nt"&gt;--claim&lt;/span&gt; &lt;span class="nt"&gt;--actor&lt;/span&gt; eng2   &lt;span class="c"&gt;# eng2 claims a bug&lt;/span&gt;
bd show bb-viet                           &lt;span class="c"&gt;# see the full spec + comments&lt;/span&gt;
bd comments add bb-viet &lt;span class="nt"&gt;--author&lt;/span&gt; eng2 &lt;span class="s2"&gt;"PLAN: ..."&lt;/span&gt;  &lt;span class="c"&gt;# eng2 posts their plan&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;gn / gp / ga&lt;/strong&gt; are tmux messaging tools. &lt;code&gt;gn&lt;/code&gt; sends a message to another agent's pane. &lt;code&gt;gp&lt;/code&gt; peeks at another agent's recent output (without interrupting them). &lt;code&gt;ga&lt;/code&gt; queues a non-urgent message.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gn &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="nt"&gt;-w&lt;/span&gt; eng2 &lt;span class="s2"&gt;"[from super] You have work: bb-viet. P2."&lt;/span&gt;  &lt;span class="c"&gt;# dispatch&lt;/span&gt;
gp eng2 &lt;span class="nt"&gt;-n&lt;/span&gt; 40                                               &lt;span class="c"&gt;# check progress&lt;/span&gt;
ga &lt;span class="nt"&gt;-w&lt;/span&gt; super &lt;span class="s2"&gt;"[from eng2] bb-viet complete. Pushed abc123."&lt;/span&gt;  &lt;span class="c"&gt;# report back&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;CLAUDE.md protocols&lt;/strong&gt; define escalation paths, communication format, and completion criteria. Every agent knows: claim the bead, comment your plan before coding, run tests before pushing, comment DONE with verification steps, mark ready for QA, report back to super.&lt;/p&gt;

&lt;p&gt;Here's what that looks like in practice. This is a real bead from earlier today: super assigns the task, eng2 comments a numbered plan, eng2 comments DONE with QA verification steps and checked acceptance criteria, super dispatches to QA.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F81qwm8c1gawr5nkoumf4.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F81qwm8c1gawr5nkoumf4.jpg" alt="A bead comment thread showing the full agent workflow" width="800" height="1641"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Super runs a patrol loop every 5-10 minutes: peek at each active agent's output, check bead status, verify the pipeline hasn't stalled. It's like a production on-call rotation, except the services are AI agents and the incidents are "eng2 has been suspiciously quiet for 20 minutes."&lt;/p&gt;
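&lt;p&gt;The core of that patrol is just "did this pane's output change since the last pass?". A hypothetical single-pass sketch (&lt;code&gt;peek&lt;/code&gt; stands in for &lt;code&gt;gp&lt;/code&gt;; this is not the actual super prompt):&lt;/p&gt;

```python
def patrol_pass(agents, peek, last_seen):
    """One patrol pass: return agents whose recent output is unchanged
    since the previous pass (a stall signal), updating last_seen in place."""
    stuck = []
    for agent in agents:
        output = peek(agent)           # e.g. shell out to: gp <agent> -n 40
        if output == last_seen.get(agent):
            stuck.append(agent)
        last_seen[agent] = output
    return stuck
```

&lt;p&gt;Run on a 5-10 minute timer, anything this returns is a candidate for a nudge or a restart.&lt;/p&gt;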

&lt;h2&gt;A Real Day&lt;/h2&gt;

&lt;p&gt;Here's what actually happened on a Wednesday in late February 2026.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;9:14 AM&lt;/strong&gt; — A GitHub user named ericinfins opens Issue #2: they can't connect Beadbox to their remote Dolt server. The app only supports local connections. Owner sees it and flags it for super.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;9:30 AM&lt;/strong&gt; — Super dispatches the work. Arch designs a connection auth flow (TLS toggle, username/password fields, environment variable passing). PM writes the spec with acceptance criteria. Eng picks it up and starts implementing.&lt;/p&gt;

&lt;p&gt;Meanwhile, in parallel:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;PM&lt;/strong&gt; files two bugs discovered during release testing. One is cosmetic: the header badge shows "v0.10.0-rc.7" instead of "v0.10.0" on final builds. The other is platform-specific: the screenshot automation tool returns a blank strip on ARM64 Macs because Apple Silicon renders Tauri's WebView through Metal compositing, and the backing store is empty.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Ops&lt;/strong&gt; root-causes the screenshot bug. The fix is elegant: after capture, check if the image height is suspiciously small (under 50px for a window that should be 800px tall), and fall back to coordinate-based screen capture instead.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Growth&lt;/strong&gt; pulls PostHog data and runs an IP correlation analysis. The finding: Reddit ads have generated 96 clicks and zero attributable retained users. GitHub README traffic converts at 15.8%. This very article exists because of that analysis.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Eng1&lt;/strong&gt;, unblocked by arch's Activity Dashboard design, starts building cross-filter state management and utility functions. 687 tests passing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;QA1&lt;/strong&gt; validates the header badge fix: spins up a test server, uses browser automation to verify the badge renders correctly, checks that 665 unit tests pass, marks PASS.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
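&lt;p&gt;That blank-strip check is easy to sketch: a PNG's height lives in the IHDR chunk, so you can inspect a capture without decoding pixels. (A hypothetical reconstruction of the heuristic, not ops' actual code.)&lt;/p&gt;

```python
import struct

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def png_height(data: bytes) -> int:
    """Height from the IHDR chunk: 8-byte signature, 4-byte length,
    b"IHDR", then width and height as big-endian u32s."""
    if not data.startswith(PNG_SIG):
        raise ValueError("not a PNG")
    _width, height = struct.unpack(">II", data[16:24])
    return height

def should_fall_back(data: bytes, floor: int = 50) -> bool:
    """True when the capture is a suspiciously short strip, meaning we
    should retry with coordinate-based screen capture instead."""
    return png_height(data) < floor
```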

&lt;p&gt;&lt;strong&gt;2:45 PM&lt;/strong&gt; — Shipper merges the release candidate PR, pushes the v0.10.0 tag, and triggers the promote workflow. CI builds artifacts for all 5 platforms (macOS ARM, macOS Intel, Linux AppImage, Linux .deb, Windows .exe). Shipper verifies each artifact, updates release notes on both repos, redeploys the website, and updates the Homebrew cask.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3:12 PM&lt;/strong&gt; — Owner replies on GitHub Issue #2:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Good news: v0.10.0 just shipped with full Dolt server auth support. Update and you should be unblocked.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Bug reported in the morning. Fix shipped by afternoon. And while that was happening, the next feature was already being designed, a different bug was being root-caused, analytics were being analyzed, and QA was independently verifying a separate fix.&lt;/p&gt;

&lt;p&gt;That's not because 13 agents are fast. It's because 13 agents are parallel.&lt;/p&gt;

&lt;p&gt;This is the problem &lt;a href="https://beadbox.app" rel="noopener noreferrer"&gt;Beadbox&lt;/a&gt; solves. Real-time visibility into what your entire agent fleet is doing.&lt;/p&gt;

&lt;h2&gt;What Goes Wrong&lt;/h2&gt;

&lt;p&gt;This is the part most "look at my AI setup" posts leave out.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rate limits hit at high concurrency.&lt;/strong&gt; When 13 agents are all running on the same API account, you burn through tokens fast. On this particular day, super, eng1, and eng2 all hit the rate limit ceiling simultaneously. Everyone stops. You wait. It's the AI equivalent of everyone in the office trying to use the printer at the same time, except the printer costs money per page and there's a page-per-minute cap.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;QA bounces work back.&lt;/strong&gt; This is by design, but it adds cycles. QA rejected a build because the engineer's "DONE" comment didn't include verification steps. The fix worked, but QA couldn't confirm it without reading source code. Back to eng, rewrite the completion comment, back to QA, re-verify. Twenty minutes for what should have been five. The protocol creates friction, but the friction is load-bearing. Every time I've shortcut QA, something broke in production.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Context windows fill up.&lt;/strong&gt; Agents accumulate context over a session. Super has a protocol to send a "save your work" directive at 65% context usage. If you miss the window, the agent loses track of what it was doing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agents get stuck.&lt;/strong&gt; Sometimes an agent hits an error loop and just keeps retrying the same failing command. Super's patrol loop catches this, but only if you're checking frequently enough. I've lost 30 minutes to an agent that was politely failing in silence.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The coordination overhead is real.&lt;/strong&gt; CLAUDE.md files, dispatch protocols, patrol loops, bead comments, completion reports. For a two-agent setup, this is overkill. For 13 agents, it's the minimum viable structure. There's a crossover point around five agents: below it, informal coordination works; above it, you need explicit protocols or you lose track of what's happening.&lt;/p&gt;

&lt;h2&gt;What I've Learned&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Specialization beats generalization.&lt;/strong&gt; 13 focused agents outperform 3 "full-stack" ones. When qa1 only validates and never writes code, it catches things eng missed every single time. When arch only designs and never implements, the designs are cleaner because there's no temptation to shortcut the spec to make implementation easier.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Independent QA is non-negotiable.&lt;/strong&gt; QA has its own repo clone. It tests the pushed code, not the working tree. It doesn't trust the engineer's self-report. This sounds slow. It catches bugs on every release.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You need visibility or the fleet drifts.&lt;/strong&gt; At 5+ agents, you can't track state by switching between tmux panes and running &lt;code&gt;bd list&lt;/code&gt; in your head. You need a dashboard that shows you the dependency tree, which agents are working on what, and which beads are blocked. This is the problem I built Beadbox to solve.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The recursive loop matters.&lt;/strong&gt; The agents build Beadbox. Beadbox monitors the agents. When the agents produce a bug in Beadbox, the fleet catches it through the same QA process that caught every other bug. The tool improves because the team that uses it most is the team that builds it. I'm aware this is either brilliant or the most elaborate Rube Goldberg machine ever constructed. The shipped features suggest the former. My token bill suggests the latter.&lt;/p&gt;

&lt;h2&gt;The Stack&lt;/h2&gt;

&lt;p&gt;If you want to try this yourself, here's what you need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://github.com/steveyegge/beads" rel="noopener noreferrer"&gt;beads&lt;/a&gt;&lt;/strong&gt;: Open-source local-first issue tracker. This is the coordination backbone. Every agent reads and writes to it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://docs.anthropic.com/en/docs/claude-code" rel="noopener noreferrer"&gt;Claude Code&lt;/a&gt;&lt;/strong&gt;: The agent runtime. Each agent is a Claude Code session in a tmux pane with its own CLAUDE.md identity file.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;tmux + gn/gp/ga&lt;/strong&gt;: Terminal multiplexer for running agents side by side. The messaging tools let agents communicate without shared memory.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://beadbox.app" rel="noopener noreferrer"&gt;Beadbox&lt;/a&gt;&lt;/strong&gt;: Real-time visual dashboard that shows you what the fleet is doing. This is what you're reading about.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You don't need all 13 agents to start. Two engineers and a QA agent, coordinated through beads, will change how you think about what a single developer can ship.&lt;/p&gt;

&lt;h2&gt;What's Next&lt;/h2&gt;

&lt;p&gt;The biggest gap in the current setup is answering three questions at a glance: which agents are active, idle, or stuck? Where is work piling up in the pipeline? And what just happened, filtered by the agent or stage I care about?&lt;/p&gt;

&lt;p&gt;Right now that takes a patrol loop and a lot of &lt;code&gt;gp&lt;/code&gt; commands. So we're building a coordination dashboard directly into Beadbox: an agent status strip across the top, a pipeline flow showing where beads are accumulating, and a cross-filtered event feed where clicking an agent or pipeline stage filters everything else to match. All three layers share the same real-time data source. All three update live.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk6aezndsxsplgb0xr7qq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk6aezndsxsplgb0xr7qq.png" alt="Activity Dashboard design mockup" width="800" height="618"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The 13 agents are building it right now. I'll write about it when it ships.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>programming</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
