<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Darko from Kilo</title>
    <description>The latest articles on Forem by Darko from Kilo (@kilocode).</description>
    <link>https://forem.com/kilocode</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3596172%2Fd582ef62-3145-486c-9eb1-cc50dfb22f58.png</url>
      <title>Forem: Darko from Kilo</title>
      <link>https://forem.com/kilocode</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/kilocode"/>
    <language>en</language>
    <item>
      <title>The Arrival of GPT-5.5: OpenAI’s New Deep-Thinking Powerhouse</title>
      <dc:creator>Darko from Kilo</dc:creator>
      <pubDate>Mon, 27 Apr 2026 09:19:50 +0000</pubDate>
      <link>https://forem.com/kilocode/the-arrival-of-gpt-55-openais-new-deep-thinking-powerhouse-53fk</link>
      <guid>https://forem.com/kilocode/the-arrival-of-gpt-55-openais-new-deep-thinking-powerhouse-53fk</guid>
      <description>&lt;p&gt;OpenAI recently &lt;a href="https://openai.com/index/introducing-gpt-5-5/" rel="noopener noreferrer"&gt;rolled out GPT-5.5&lt;/a&gt; and its heavy-duty sibling, GPT-5.5 Pro, and everybody wants to put them to the test.&lt;/p&gt;

&lt;p&gt;If you feel like the model landscape is moving faster and faster, you're right. OpenAI's chief data scientist &lt;a href="https://techcrunch.com/2026/04/23/openai-chatgpt-gpt-5-5-ai-model-superapp/" rel="noopener noreferrer"&gt;told TechCrunch&lt;/a&gt; this week that "the last two years have been surprisingly slow," meaning that the real acceleration is only starting: &lt;em&gt;now we're cooking with gas&lt;/em&gt;. And that's a good thing for consumers.&lt;/p&gt;

&lt;p&gt;These SOTA models aren't just becoming smarter and more comprehensive; they're also becoming more token-efficient on larger tasks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What's new?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GPT-5.5&lt;/strong&gt; is OpenAI's latest release for complex professional workloads, building on GPT-5.4 with stronger reasoning, higher reliability, and improved token efficiency on hard tasks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GPT-5.5 Pro&lt;/strong&gt; is OpenAI's high-capability model optimized for deep reasoning and accuracy on complex, high-stakes workloads.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both new models are now available in the &lt;a href="https://kilo.ai/gateway" rel="noopener noreferrer"&gt;Kilo Gateway&lt;/a&gt; and GPT-5.5 is one of our top recommended models out of the gate.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kilo.ai/gateway" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj58omox92dhf8sqjohad.png" alt="GPT-5.5 shown as a top recommended model in the Kilo Gateway model selector" width="800" height="518"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;A New Standard for Complex Work&lt;/h2&gt;

&lt;p&gt;GPT-5.5 is particularly impressive when it comes to coding and reasoning, and the kind of computer-use and browser skills needed by always-on agents like KiloClaw:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Terminal-Bench 2.0&lt;/strong&gt; (Command-line workflows &amp;amp; tool coordination): 82.7% &lt;em&gt;(vs. GPT-5.4: 75.1% | Claude Opus 4.7: 69.4%)&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Expert-SWE&lt;/strong&gt; (Internal long-horizon coding tasks ~20 hours): 73.1% &lt;em&gt;(vs. GPT-5.4: 68.5%)&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GDPval&lt;/strong&gt; (Knowledge work across 44 occupations): 84.9% &lt;em&gt;(vs. GPT-5.4: 83.0% | Claude Opus 4.7: 80.3%)&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OSWorld-Verified&lt;/strong&gt; (Operating real computer environments): 78.7% &lt;em&gt;(vs. GPT-5.4: 75.0% | Claude Opus 4.7: 78.0%)&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;BrowseComp&lt;/strong&gt;: 84.4% &lt;em&gt;(GPT-5.5 Pro scores 90.1%)&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But benchmarks are only half the story. &lt;strong&gt;We had the privilege of pre-testing the alpha release of GPT-5.5&lt;/strong&gt;, and we're ready to share what this means for builders, agents, and the broader AI ecosystem. First of all, it's exciting to see OpenAI continuing to bridge the gap between execution and high-level strategy. Coming just two days after the release of GPT-5.4 Image 2, a stunning new image generation model for multimodal workflows, GPT-5.5 covers a lot of bases for professional workloads. &lt;strong&gt;This new model can transform how engineering teams scale their most complex autonomous workflows.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In our testing, GPT-5.5 has proven to be tremendously capable at long-context tasks and agentic coding. Where previous generation models would occasionally lose the plot during massive refactoring jobs or deep-reasoning requirements for large codebases, GPT-5.5 stays locked in.&lt;/p&gt;

&lt;p&gt;More importantly for our ecosystem, it has become a formidable daily driver for KiloClaw as well as an excellent fit for getting a new claw up and running and exploring new use cases. We've been using it to run always-on agents handling highly complex, multi-step professional work, and the reliability jump is palpable.&lt;/p&gt;

&lt;p&gt;As we noted in our recent &lt;a href="https://blog.kilo.ai/p/we-gave-claude-opus-47-and-kimi-k26" rel="noopener noreferrer"&gt;deep dive comparing Claude Opus 4.7 and Moonshot's Kimi K2.6&lt;/a&gt;, the frontier of AI is fiercely competitive right now. While Opus 4.7 and Kimi K2.6 brought massive leaps in their own rights, GPT-5.5 introduces a new class of autonomous capability that specifically targets professional, high-stakes workflows where fewer retries and higher reliability directly translate to better outcomes.&lt;/p&gt;

&lt;p&gt;GPT-5.5 is definitely crushing a wide range of benchmarks, which fits with our experience testing the model in Kilo Code and KiloClaw. Significantly, it topped the Artificial Analysis Intelligence Index by 3 points, &lt;a href="https://artificialanalysis.ai/articles/openai-gpt5-5-is-the-new-leading-AI-model" rel="noopener noreferrer"&gt;breaking a three-way tie&lt;/a&gt; with Anthropic and Google.&lt;/p&gt;

&lt;p&gt;In our testing, GPT-5.5 did have some issues with UI-related design tasks, but we found that more specific instructions helped resolve some of those problems.&lt;/p&gt;

&lt;h2&gt;So which one should you use?&lt;/h2&gt;

&lt;p&gt;GPT-5.5 is priced higher than GPT-5.4, reflecting its heavy-duty reasoning capabilities.&lt;/p&gt;

&lt;p&gt;In fact, GPT-5.5 &lt;strong&gt;($5 / Mtok input, $30 / Mtok output, $0.50 / Mtok cache)&lt;/strong&gt; is more approachable than it might look from the outside. The 5.5 series is more token-efficient than 5.4, and on hard tasks that efficiency often results in a &lt;em&gt;lower&lt;/em&gt; actual cost per completed task, because the model gets it right on the first try without endless prompt engineering or retry loops.&lt;/p&gt;
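&lt;p&gt;To see why cost per completed task can favor the pricier model, here's a back-of-envelope sketch using the list prices above. The per-task token counts and the cheaper model's pricing are made-up assumptions for illustration only.&lt;/p&gt;

```python
# Back-of-envelope cost comparison using GPT-5.5's list prices
# ($5 / Mtok input, $30 / Mtok output). The per-task token counts and the
# cheaper model's $2.50 / $15 pricing are illustrative assumptions.

def task_cost(input_mtok, output_mtok, price_in=5.0, price_out=30.0):
    """Dollar cost of one attempt; token counts are in millions of tokens."""
    return input_mtok * price_in + output_mtok * price_out

# A hard task the efficient model completes in a single attempt...
one_shot = task_cost(input_mtok=0.2, output_mtok=0.05)

# ...vs. a cheaper model that needs three attempts to get it right.
three_retries = 3 * task_cost(input_mtok=0.2, output_mtok=0.05,
                              price_in=2.5, price_out=15.0)

print(f"one shot:      ${one_shot:.2f}")        # $2.50
print(f"three retries: ${three_retries:.2f}")   # $3.75
```

&lt;p&gt;Half the per-token price ends up 50% more expensive once retries enter the picture; cost per completed task, not sticker price, is what matters.&lt;/p&gt;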

&lt;p&gt;GPT-5.5 often reaches higher-quality outputs with fewer retries, so it can be more token-efficient on real workflows even at higher reasoning-effort settings. And good news for Kilo Coders: &lt;strong&gt;it's the most token-efficient at coding workflows.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We would also like to echo OpenAI's own advice here: "Higher reasoning can use more tokens, so customers should match reasoning effort to the task."&lt;/p&gt;

&lt;p&gt;In-memory prompt caching is &lt;strong&gt;not supported&lt;/strong&gt; for GPT-5.5; the model relies exclusively on extended prompt caching, where tokens from previous requests are cached directly on GPU-local storage during inference.&lt;/p&gt;

&lt;h2&gt;Does it Claw?&lt;/h2&gt;

&lt;p&gt;We're excited to see what Kilo users around the world do with it. Like the new Opus, it's super smart. But is it &lt;em&gt;too smart&lt;/em&gt; for daily tasks? Or will it become your daily driver?&lt;/p&gt;

&lt;p&gt;My prediction is that GPT-5.5 will compete more directly with the latest Opus release for coding, but be more of a top-agent driver in Hermes and OpenClaw workflows like &lt;a href="https://kilo.ai/kiloclaw" rel="noopener noreferrer"&gt;KiloClaw&lt;/a&gt;: sub-agents will likely need to use smaller models or OSS models to remain cost-efficient.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kilo.ai/gateway" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwvprep86439j667bsv7f.png" alt="GPT-5.5 and GPT-5.5 Pro shown in the Kilo model selector" width="680" height="268"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That said, the only way to really know is to try it yourself.&lt;/p&gt;

</description>
      <category>openai</category>
      <category>ai</category>
      <category>webdev</category>
      <category>programming</category>
    </item>
    <item>
      <title>Shell Security Plugin</title>
      <dc:creator>Darko from Kilo</dc:creator>
      <pubDate>Mon, 27 Apr 2026 09:16:14 +0000</pubDate>
      <link>https://forem.com/kilocode/shell-security-plugin-4p16</link>
      <guid>https://forem.com/kilocode/shell-security-plugin-4p16</guid>
      <description>&lt;p&gt;I ran &lt;code&gt;openclaw security audit&lt;/code&gt; on my instance the other day and got back a wall of text. Six findings — one critical, three warnings, two informational. I stared at it for a minute, scrolled through the nested objects, and thought: "Okay, but what should I actually &lt;em&gt;do&lt;/em&gt; about this?"&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!E7DS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb4eebf49-f3ea-4980-a9bc-40fae53963f1_1080x298.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F50a7c4amv6n4pmbb73n4.png" alt="Screenshot of openclaw security audit JSON output showing six findings" width="800" height="221"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's the gap the new &lt;a href="https://github.com/Kilo-Org/shell-security" rel="noopener noreferrer"&gt;Shell Security&lt;/a&gt; plugin fills. It takes that same audit output, sends the findings (not your secrets, not your config) to the &lt;a href="https://kilo.ai" rel="noopener noreferrer"&gt;KiloCode&lt;/a&gt; Security Advisor API, and gives you back a prioritized report with specific remediation steps. The whole thing happens in your chat — Telegram, Slack, the Control UI, wherever you talk to your agent.&lt;/p&gt;

&lt;h2&gt;What it does&lt;/h2&gt;

&lt;p&gt;The plugin is a thin bridge between two things that already exist:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;openclaw security audit&lt;/code&gt;&lt;/strong&gt; — the built-in CLI command that checks your local config for common security foot-guns (weak models without sandboxing, exposed runtime tools, missing trusted proxies, multi-user setups without isolation)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;KiloCode's Security Advisor API&lt;/strong&gt; — an endpoint that takes those findings and returns expert analysis with context-specific remediation guidance&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The plugin runs the audit locally, packages the JSON output, and sends it off. What comes back is a markdown report that covers what was found, why it matters, and what to do about it — organized by priority.&lt;/p&gt;
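&lt;p&gt;As a rough sketch of that packaging step (the field names and JSON shape here are assumptions for illustration, not the plugin's actual schema), the idea is an allowlist: forward finding IDs, severities, and summaries, and drop everything else:&lt;/p&gt;

```python
# Hypothetical sketch of the "findings only" packaging step: keep an
# explicit allowlist of fields and drop everything else, so raw config
# values or secrets in the audit JSON never leave the machine.
# Field names are illustrative, not the plugin's real schema.

def package_findings(audit_json: dict) -> dict:
    allowed = ("id", "severity", "summary")
    return {
        "findings": [
            {k: f[k] for k in allowed if k in f}
            for f in audit_json.get("findings", [])
        ]
    }

audit = {
    "findings": [
        {"id": "weak-model-no-sandbox", "severity": "critical",
         "summary": "Weak model running without sandboxing",
         "config_snippet": "token: sk-..."},  # must never be forwarded
    ],
    "raw_config": {"api_key": "sk-..."},      # must never be forwarded
}

payload = package_findings(audit)
# payload contains each finding's id/severity/summary and nothing else
```

&lt;p&gt;An allowlist beats a blocklist here: any new field the audit starts emitting stays private by default instead of leaking until someone remembers to filter it.&lt;/p&gt;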

&lt;h2&gt;Installing it&lt;/h2&gt;

&lt;p&gt;It's currently dev-only but will be released soon!&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openclaw plugins &lt;span class="nb"&gt;install&lt;/span&gt; @kilocode/shell-security
openclaw plugins &lt;span class="nb"&gt;enable &lt;/span&gt;shell-security
openclaw gateway restart
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The gateway restart is a one-time thing after install. If you're talking to your agent through Slack or Telegram, you'll see a brief connection blip and then it's back.&lt;/p&gt;

&lt;h2&gt;Two ways to run it&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Slash command (recommended):&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This runs the plugin directly and renders the full report. It bypasses the LLM's summarization layer entirely, so you get the complete output regardless of which model you're running.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Natural language:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can also just say "run a security checkup" or "audit my OpenClaw config" and the agent will call the tool. One thing to know: if you're running a smaller model (Haiku, GPT-x-nano), it might paraphrase or truncate the report. More capable models like Sonnet or the latest GPT handle it fine. When in doubt, use the slash command.&lt;/p&gt;

&lt;h2&gt;First-run authentication&lt;/h2&gt;

&lt;p&gt;The first time you run it, the plugin prompts you to connect your KiloCode account through a device auth flow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open a URL in your browser&lt;/li&gt;
&lt;li&gt;Enter a code&lt;/li&gt;
&lt;li&gt;Sign in or create a free account&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;/security-checkup&lt;/code&gt; again&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After that, the token is saved and you never see the auth flow again. There's a gateway reload on first auth (the plugin writes the token to your config), but subsequent runs are instant.&lt;/p&gt;

&lt;p&gt;If you're running OpenClaw in CI or a container, you can skip the interactive flow entirely by setting &lt;code&gt;KILOCODE_API_KEY&lt;/code&gt; as an environment variable.&lt;/p&gt;
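&lt;p&gt;The resolution order might look something like this sketch (the function and fallback behavior are assumptions for illustration; only the &lt;code&gt;KILOCODE_API_KEY&lt;/code&gt; variable name comes from the plugin itself):&lt;/p&gt;

```python
# Illustrative credential resolution for CI/containers: prefer the
# KILOCODE_API_KEY environment variable, fall back to a token saved by
# the device auth flow. The function itself is a hypothetical sketch.
import os

def resolve_api_key(saved_token=None):
    env_key = os.environ.get("KILOCODE_API_KEY")
    if env_key:
        return env_key        # non-interactive: CI, containers
    if saved_token:
        return saved_token    # written to config by the device auth flow
    raise RuntimeError(
        "No credentials found; run the interactive device auth flow"
    )
```

&lt;p&gt;Checking the environment first means the same config works locally (saved token) and in CI (injected secret) without branching logic.&lt;/p&gt;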

&lt;h2&gt;What gets sent (and what doesn't)&lt;/h2&gt;

&lt;p&gt;This matters. Your OpenClaw instance has access to your filesystem, your API keys, your chat history. The plugin doesn't send any of that.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sent:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The JSON output of &lt;code&gt;openclaw security audit&lt;/code&gt; — finding IDs and summaries, no secrets&lt;/li&gt;
&lt;li&gt;Your OpenClaw version and plugin version&lt;/li&gt;
&lt;li&gt;Your instance's public IP (for optional remote probes)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Not sent:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Config file contents&lt;/li&gt;
&lt;li&gt;API keys, secrets, or tokens&lt;/li&gt;
&lt;li&gt;Chat history&lt;/li&gt;
&lt;li&gt;Workspace files&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Everything goes over HTTPS, authenticated with your KiloCode account token.&lt;/p&gt;

&lt;h2&gt;What the report looks like&lt;/h2&gt;

&lt;p&gt;On my instance, the report came back with findings grouped by severity — the critical one about small models running without sandboxing at the top, followed by the warnings about trusted proxies and multi-user heuristics, and then the informational items. Each finding includes context about why it's a risk and concrete steps to fix it.&lt;/p&gt;
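&lt;p&gt;The ordering described above boils down to a simple severity ranking. A minimal sketch (the severity labels and finding IDs are assumptions for illustration):&lt;/p&gt;

```python
# Sort findings so critical items surface first, then warnings, then
# informational notes. Labels and finding IDs are illustrative.
SEVERITY_RANK = {"critical": 0, "warning": 1, "info": 2}

def order_findings(findings):
    # Unknown severities sort last rather than breaking the report
    return sorted(findings, key=lambda f: SEVERITY_RANK.get(f["severity"], 99))

findings = [
    {"id": "multi-user-heuristics", "severity": "warning"},
    {"id": "plugin-version", "severity": "info"},
    {"id": "no-sandbox-small-model", "severity": "critical"},
]
ordered = order_findings(findings)
# ordered[0] is the critical sandbox finding
```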

&lt;p&gt;It's... a lot of text right now. The formatting still needs work — the dev release is functional but not polished. There's also a bug where the KiloClaw call-to-action shows up even if you're already a KiloClaw user. These are known rough edges that'll get smoothed out before the stable release.&lt;/p&gt;

&lt;h2&gt;Why this is useful&lt;/h2&gt;

&lt;p&gt;Running &lt;code&gt;openclaw security audit&lt;/code&gt; is already good practice. But JSON output requires you to interpret each finding yourself, look up what the check IDs mean, and figure out the right remediation. The Security Advisor layer turns those findings into specific guidance you can act on immediately.&lt;/p&gt;

&lt;p&gt;For anyone running OpenClaw as a personal assistant (which is most of us), the security surface is real. Your agent has shell access, filesystem access, web browsing. A misconfigured model fallback or an unintended multi-user exposure means your agent could be manipulated by untrusted input. Having something that checks this and explains the results in plain language saves you from reading JSON and guessing at severity.&lt;/p&gt;

&lt;h2&gt;Current status&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://www.npmjs.com/package/@kilocode/shell-security" rel="noopener noreferrer"&gt;npm package&lt;/a&gt; is live and the &lt;a href="https://github.com/Kilo-Org/shell-security" rel="noopener noreferrer"&gt;source is on GitHub&lt;/a&gt; under MIT license. A stable release is coming — the main work remaining is formatting improvements and fixing the conditional CTA logic.&lt;/p&gt;

&lt;p&gt;Install it, run &lt;code&gt;/shell-security&lt;/code&gt;, see what it finds. It takes about thirty seconds.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>programming</category>
    </item>
    <item>
      <title>New VS Code Extension - Week Three: Memory, Stability, and Moving at Kilo Speed Into the Future</title>
      <dc:creator>Darko from Kilo</dc:creator>
      <pubDate>Fri, 24 Apr 2026 08:16:10 +0000</pubDate>
      <link>https://forem.com/kilocode/new-vs-code-extension-week-three-memory-stability-and-moving-at-kilo-speed-into-the-future-21cd</link>
      <guid>https://forem.com/kilocode/new-vs-code-extension-week-three-memory-stability-and-moving-at-kilo-speed-into-the-future-21cd</guid>
      <description>&lt;p&gt;Three weeks ago we GA'd the &lt;a href="https://blog.kilo.ai/p/new-kilo-for-vs-code-is-live" rel="noopener noreferrer"&gt;completely rebuilt Kilo Code extension&lt;/a&gt; for VS Code. &lt;a href="https://blog.kilo.ai/p/new-vs-code-extension-week-one-what/" rel="noopener noreferrer"&gt;Week one&lt;/a&gt; was about what we were hearing and what we were shipping. &lt;a href="https://blog.kilo.ai/p/new-vs-code-week-two/" rel="noopener noreferrer"&gt;Week two&lt;/a&gt; was about addressing the most urgent feedback and bumps.&lt;/p&gt;

&lt;p&gt;This week is about the two other areas of frequent feedback and challenges: &lt;strong&gt;memory usage on Windows&lt;/strong&gt; and &lt;strong&gt;session stability under sustained use&lt;/strong&gt;. Both are materially better now than they were a week ago. Neither is 100% fixed and "done" (open GitHub issues show some of you still hit rough edges), but the experience is significantly improved, especially on Windows when using Agent Manager.&lt;/p&gt;

&lt;p&gt;Across the week we shipped &lt;strong&gt;80+ Kilo PRs&lt;/strong&gt; and merged &lt;strong&gt;three more upstream OpenCode releases&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!NfA1!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F100f1bf8-af1b-4b8d-b345-313a8792c17e_1536x1024.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flmll6qxzvcm3w3h0saqv.png" alt="Screenshot showing the Kilo VS Code extension Agent Manager interface with multiple active sessions" width="800" height="534"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Windows Memory: A Big Step Forward&lt;/h2&gt;

&lt;p&gt;This is the one we know has caused the most pain. Users on Windows reported the Kilo core process climbing into multiple GB of RAM within minutes of opening Agent Manager and staying there. A handful of you sent us heap snapshots — thank you — which helped us track down the root cause of some harder-to-reproduce issues.&lt;/p&gt;

&lt;p&gt;The high-level story: Agent Manager was polling git status and diffs through the Kilo core subprocess, and on Windows the combination of IPC round-trips, diff payload sizes, and allocator behavior meant freed memory wasn't being returned to the OS cleanly. In v7.2.20 we've restructured that path (&lt;a href="https://github.com/Kilo-Org/kilocode/pull/9046" rel="noopener noreferrer"&gt;#9046&lt;/a&gt;) and made the extension much more careful about what it holds in memory:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Agent Manager's git work now runs directly in the extension host, not through the core process.&lt;/li&gt;
&lt;li&gt;We cap how much of any single diff we'll read into memory, so opening a very large file no longer causes a spike the allocator can't recover from.&lt;/li&gt;
&lt;li&gt;We also tuned the allocator on the core process itself to release memory back to the OS more promptly on Windows.&lt;/li&gt;
&lt;/ul&gt;
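&lt;p&gt;The diff-capping idea in the second bullet can be sketched like this (the 1 MiB cap and truncation marker are made-up for illustration; the real extension is TypeScript and its limits may differ):&lt;/p&gt;

```python
# Illustrative sketch: never read more than a fixed number of bytes of a
# single diff into memory. The 1 MiB cap is an assumption, not Kilo's value.
MAX_DIFF_BYTES = 1024 * 1024

def read_diff_capped(raw: bytes, cap: int = MAX_DIFF_BYTES) -> str:
    if len(raw) <= cap:
        return raw.decode("utf-8", errors="replace")
    # errors="replace" also absorbs a UTF-8 sequence split at the cap
    head = raw[:cap].decode("utf-8", errors="replace")
    return head + f"\n... [diff truncated at {cap} bytes]"
```

&lt;p&gt;Bounding the read means a pathological multi-gigabyte diff costs a fixed amount of memory instead of producing a spike the allocator struggles to return to the OS.&lt;/p&gt;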

&lt;p&gt;If you were running on a downgraded 5.x build because of memory issues, this is the release to come back on. If you're still seeing unbounded growth, please keep the issues coming — the heap-snapshot command we added this cycle (&lt;a href="https://github.com/Kilo-Org/kilocode/pull/9034" rel="noopener noreferrer"&gt;#9034&lt;/a&gt;) makes those reports much easier to act on.&lt;/p&gt;

&lt;h2&gt;Session Stability: Fewer Interruptions&lt;/h2&gt;

&lt;p&gt;The second theme was sessions getting interrupted mid-flow — usually recoverable by sending another message or re-opening the session/extension. Most of the reports we got traced back to a handful of specific state-machine edges, and those are now meaningfully better.&lt;/p&gt;

&lt;p&gt;The one we heard about most often was sessions ending up stuck — most visibly when VS Code was closed while a suggestion prompt was still showing, which left the session permanently marked busy and any follow-up message queued forever. Sessions now go idle correctly while waiting on a suggestion response (&lt;a href="https://github.com/Kilo-Org/kilocode/pull/9199" rel="noopener noreferrer"&gt;#9199&lt;/a&gt;). A related set of stuck states around the end-of-plan flow — where "Start new session" and "Continue here" didn't reliably transition you into the handover session — also got fixed, so those buttons now move you into a new session that stays visibly busy until the handover summary lands (&lt;a href="https://github.com/Kilo-Org/kilocode/pull/9245" rel="noopener noreferrer"&gt;#9245&lt;/a&gt;, &lt;a href="https://github.com/Kilo-Org/kilocode/pull/9300" rel="noopener noreferrer"&gt;#9300&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;Everyday chat behavior got a lot smoother too. The most common irritation was the chat view snapping back to the bottom while you were trying to read earlier context during a streaming response; that no longer happens, and scrolling back through long sessions now correctly reloads earlier history from the virtualized list (&lt;a href="https://github.com/Kilo-Org/kilocode/pull/9236" rel="noopener noreferrer"&gt;#9236&lt;/a&gt;, &lt;a href="https://github.com/Kilo-Org/kilocode/pull/9194" rel="noopener noreferrer"&gt;#9194&lt;/a&gt;). Switching between long sessions in Agent Manager — which used to briefly freeze the UI — is now near-instant, with the chat view self-healing if messages arrived while it was in the background (&lt;a href="https://github.com/Kilo-Org/kilocode/pull/8911" rel="noopener noreferrer"&gt;#8911&lt;/a&gt;). Smaller queue and layout fixes also landed around follow-up prompts and tool output interleaving.&lt;/p&gt;

&lt;p&gt;Finally, a nice performance-and-stability win from the community: &lt;a href="https://github.com/IamCoder18" rel="noopener noreferrer"&gt;@IamCoder18&lt;/a&gt; landed visibility-aware git polling plus resolution caching in Agent Manager's git stats poller (&lt;a href="https://github.com/Kilo-Org/kilocode/pull/8703" rel="noopener noreferrer"&gt;#8703&lt;/a&gt;), meaningfully reducing the number of git subprocesses the extension spawns on repos with many worktrees.&lt;/p&gt;
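&lt;p&gt;The idea behind that change can be sketched roughly like this (the real implementation lives in the extension's TypeScript; the class, parameter names, and 5-second TTL here are invented for illustration):&lt;/p&gt;

```python
# Sketch of visibility-aware polling with result caching: skip git work
# entirely while the view is hidden, and reuse recent results otherwise.
# Names and the default TTL are illustrative, not Kilo's actual code.
import time

class CachedGitPoller:
    def __init__(self, fetch_stats, ttl_seconds=5.0):
        self.fetch_stats = fetch_stats   # expensive: spawns a git subprocess
        self.ttl = ttl_seconds
        self.visible = True
        self._cache = None
        self._cached_at = 0.0

    def poll(self):
        if not self.visible:
            return self._cache           # hidden view: never spawn git
        now = time.monotonic()
        if self._cache is not None and now - self._cached_at < self.ttl:
            return self._cache           # fresh enough: reuse cached stats
        self._cache = self.fetch_stats()
        self._cached_at = now
        return self._cache
```

&lt;p&gt;On a repo with many worktrees this turns a steady stream of git subprocesses into at most one per TTL window per visible view.&lt;/p&gt;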

&lt;h2&gt;New Capabilities This Cycle&lt;/h2&gt;

&lt;p&gt;Stability was the priority, but we still shipped meaningful new capabilities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Fork sessions from any user message&lt;/strong&gt; — both in Agent Manager (&lt;a href="https://github.com/Kilo-Org/kilocode/pull/9207" rel="noopener noreferrer"&gt;#9207&lt;/a&gt;) and in the sidebar (&lt;a href="https://github.com/Kilo-Org/kilocode/pull/9244" rel="noopener noreferrer"&gt;#9244&lt;/a&gt;). Branch at any point without losing the original.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;KiloClaw chat panel in VS Code&lt;/strong&gt; — the KiloClaw group chat experience now lives directly inside the editor (&lt;a href="https://github.com/Kilo-Org/kilocode/pull/7960" rel="noopener noreferrer"&gt;#7960&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Folder @-mentions&lt;/strong&gt; — reference a folder with @ and include its top-level file contents as context (&lt;a href="https://github.com/Kilo-Org/kilocode/pull/9023" rel="noopener noreferrer"&gt;#9023&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Autocomplete backend prewarm&lt;/strong&gt; — inline completions are ready on the first keystroke without having to open the Kilo sidebar first, and autocomplete state refreshes when workspace folders change (&lt;a href="https://github.com/Kilo-Org/kilocode/pull/9305" rel="noopener noreferrer"&gt;#9305&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Heap snapshots from the Command Palette&lt;/strong&gt; — capture a snapshot of the bundled Kilo core directly from VS Code (&lt;a href="https://github.com/Kilo-Org/kilocode/pull/9034" rel="noopener noreferrer"&gt;#9034&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;"Contribute on GitHub" CTA in Marketplace&lt;/strong&gt; — a subtle footer link inviting contributions of new skills, modes, and MCP servers (&lt;a href="https://github.com/Kilo-Org/kilocode/pull/9099" rel="noopener noreferrer"&gt;#9099&lt;/a&gt;).&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Upstream OpenCode&lt;/h2&gt;

&lt;p&gt;Three more OpenCode upstream releases merged this cycle — v1.4.4, v1.4.5, and v1.4.6 — bringing continued improvements to session sync, provider compatibility, Windows terminal handling, and the underlying AI SDK layer. Building on a shared open-source foundation continues to pay off: work from the broader OpenCode community lands in Kilo automatically.&lt;/p&gt;

&lt;h2&gt;Codebase Indexing Progress&lt;/h2&gt;

&lt;p&gt;Community contributor &lt;a href="https://github.com/shssoichiro" rel="noopener noreferrer"&gt;@shssoichiro&lt;/a&gt;'s codebase indexing work (&lt;a href="https://github.com/Kilo-Org/kilocode/pull/6966" rel="noopener noreferrer"&gt;#6966&lt;/a&gt;) remains active. The branch is being kept current against main, review iterations are ongoing, and we're closing in on a form we can land. This is a substantial feature and we want to get it right — thank you for the sustained effort here.&lt;/p&gt;

&lt;h2&gt;Community Update&lt;/h2&gt;

&lt;p&gt;Some numbers and names from this cycle:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;80+ PRs merged&lt;/strong&gt; on top of the upstream OpenCode work.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;3 upstream OpenCode releases merged&lt;/strong&gt; — v1.4.4, v1.4.5, and v1.4.6.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multiple stable releases&lt;/strong&gt; promoted to the marketplace through the period, with v7.2.20 as the current stable.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Thank you to community contributors whose work landed or continued this cycle:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/shssoichiro" rel="noopener noreferrer"&gt;@shssoichiro&lt;/a&gt; — continued work on codebase indexing (&lt;a href="https://github.com/Kilo-Org/kilocode/pull/6966" rel="noopener noreferrer"&gt;#6966&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/IamCoder18" rel="noopener noreferrer"&gt;@IamCoder18&lt;/a&gt; — visibility-aware git polling in GitStatsPoller (&lt;a href="https://github.com/Kilo-Org/kilocode/pull/8703" rel="noopener noreferrer"&gt;#8703&lt;/a&gt;).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And broad thanks to every community member who filed heap snapshots, reproduction steps, Discord reports, and sustained the long-running Windows performance thread (&lt;a href="https://github.com/Kilo-Org/kilocode/issues/8030" rel="noopener noreferrer"&gt;#8030&lt;/a&gt;). That conversation is the reason we had the signal we needed to tackle the memory work head-on this week.&lt;/p&gt;

&lt;h2&gt;Moving at Kilo Speed Into the Future&lt;/h2&gt;

&lt;p&gt;This is the last of the regular weekly updates in this series. The core issues that we highlighted in Week 1 — rate limiting, Plan/Ask strictness, human-in-the-loop controls, config resilience, and Windows memory — are either resolved or meaningfully better. We will continue to focus on smoothing out the rough edges in the near future.&lt;/p&gt;

&lt;p&gt;We will also keep driving Kilo toward where agentic coding is headed: enabling engineering teams to ship at Kilo Speed safely and confidently, faster than ever before. We're excited about that future, and we believe the new V7 gives us a strong foundation to build on. Agent Manager continues to improve for those who like to run multiple agent sessions in parallel, and it will only become more useful as models grow more capable and need less oversight. And when a particular change or workstyle calls for closer agent supervision and pair programming, you can do that too. The AI landscape is evolving quickly, and the tools we use need to keep pace.&lt;/p&gt;

&lt;p&gt;To everyone who showed up over these three weeks — the issue filers, the PR authors, the Discord commenters, the prerelease testers, the heap-snapshot senders, and the folks who point to the future with feature requests — &lt;strong&gt;thank you&lt;/strong&gt;. Your feedback, issues, and pull requests are genuinely what makes this community great. We value every piece of it, and we'll keep making the extension better because of it.&lt;/p&gt;

&lt;p&gt;See you in the release notes.&lt;/p&gt;

&lt;p&gt;— Josh and Mark&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Move at Kilo Speed.&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>vscode</category>
      <category>webdev</category>
    </item>
    <item>
      <title>The future of Product Managers</title>
      <dc:creator>Darko from Kilo</dc:creator>
      <pubDate>Thu, 23 Apr 2026 12:54:39 +0000</pubDate>
      <link>https://forem.com/kilocode/the-future-of-product-managers-4k28</link>
      <guid>https://forem.com/kilocode/the-future-of-product-managers-4k28</guid>
      <description>&lt;p&gt;A product leader we know has 15 years of experience shipping developer tools. He spent a decade at a household name. He is, genuinely, one of the best product minds we've encountered in this industry.&lt;/p&gt;

&lt;p&gt;He can't get a conversation for a group PM role.&lt;/p&gt;

&lt;p&gt;That is a signal, not a market blip.&lt;/p&gt;

&lt;p&gt;We've spent a lot of time talking about what AI is doing to engineers – how one developer with the right tools now ships what used to require a team of five. But we had an adjacent question: what happens to product managers?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fep6jgi2nbd309il72zk2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fep6jgi2nbd309il72zk2.png" alt="Illustration representing the collapse of the traditional PM-to-engineer shipping funnel" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Shipping isn't a funnel anymore
&lt;/h2&gt;

&lt;p&gt;For years, software development worked like a funnel. PMs turned customer insights into specs. Engineers turned specs into code. The funnel created a natural place for the PM to sit – upstream, owning the translation layer.&lt;/p&gt;

&lt;p&gt;Shipping was expensive. So you needed someone to decide what was worth shipping.&lt;/p&gt;

&lt;p&gt;That's no longer true. Shipping is close to free now. So what is a PM's role now that the funnel has collapsed and PMs aren't filtering a very costly resource (engineering time)? Is there still a place for PMs in this new world?&lt;/p&gt;

&lt;p&gt;As former PMs ourselves, we're watching this shift from two very different vantage points. At Kilo, there are about 40 people and one PM. We operate with a &lt;a href="https://blog.kilo.ai/p/our-engineers-own-a-number" rel="noopener noreferrer"&gt;WAUzer (Weekly Active User) model&lt;/a&gt; – every engineer owns a single product area and is accountable for the weekly active users in that area. Every Monday, Evgeny would stand up for two minutes: here's what I did on cloud agents, here are the numbers, here's my target for next week. He was fast. He was accountable. And across those product areas, we saw roughly 10% week-over-week growth.&lt;/p&gt;

&lt;p&gt;The product hat shifted to engineers. And it worked.&lt;/p&gt;

&lt;p&gt;But it didn't work everywhere – the VS Code extension had too much surface area for one engineer to own clearly. So we brought in Josh. He runs a pod. He decides what gets built. Traditional PM model.&lt;/p&gt;

&lt;p&gt;At &lt;a href="https://asksolo.ai" rel="noopener noreferrer"&gt;Solo&lt;/a&gt; (Asher's company), it's just two people – one developer – moving at a pace that would have required a team of 10 three years ago. No PM at all. No coordination layer. The product question and the building question sit with the same person.&lt;/p&gt;

&lt;p&gt;Two different experiments. Same conclusion forming.&lt;/p&gt;

&lt;h2&gt;
  
  
  It's always been vibe coding
&lt;/h2&gt;

&lt;p&gt;"PMs were the original vibe coders. We wrote the spec, and the engineers were our LLMs."&lt;/p&gt;

&lt;p&gt;That framing came out of a conversation between us. Because if the spec-to-code handoff is getting absorbed by AI tooling – if engineers can hold the product context and build without a translation layer – then the PM role has to move. The question is where.&lt;/p&gt;

&lt;p&gt;We see two paths forward.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Path one: shift left toward go-to-market.&lt;/strong&gt; The thing that's genuinely hard, even in an AI-native company, is knowing what to build. Not technically – but commercially. What will people pay for? What problem are we actually solving? Who is the buyer, and do we have them before we build?&lt;/p&gt;

&lt;p&gt;That's where PMs might land. Not writing specs, but sitting closer to sales, customer research, and market discovery to orchestrate the product strategy and business rationale for building a feature. A big part of the role becomes saying no to features to prevent bloat, and identifying customers willing to pay for a feature before it gets built.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Path two: the long thin layer – engineers who wear the product hat.&lt;/strong&gt; Each engineer owns their area completely. Customer conversations, support, metrics, roadmap decisions – all of it. No handoff, no telephone game.&lt;/p&gt;

&lt;p&gt;The upside is accountability. The downside is that it requires people who can go wide – technically sharp AND commercially minded AND customer-facing. That's a rare profile. And at some point, a customer doesn't want your one thin area. They want the whole package.&lt;/p&gt;

&lt;p&gt;Both paths are real. You'll see companies betting on each.&lt;/p&gt;

&lt;p&gt;The traditional shipping funnel is gone. It's dead in startups now and will die in F100s over the next five years. The people who figure out the new shape of product ownership – whether that's engineers, PMs who've shifted left, or something we don't have a name for yet – are the ones who'll still be standing in three years.&lt;/p&gt;

&lt;p&gt;The senior product leader we mentioned will land somewhere. His experience is real. But the role he's looking for may not look like what it used to. The best thing any PM can do right now is stop waiting for the old model to come back and start experimenting with new models.&lt;/p&gt;

&lt;p&gt;Developers are working in the future. PMs need to join them.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>programming</category>
    </item>
    <item>
      <title>We Gave Claude Opus 4.7 and Kimi K2.6 the Same Workflow Orchestration Spec</title>
      <dc:creator>Darko from Kilo</dc:creator>
      <pubDate>Thu, 23 Apr 2026 12:47:55 +0000</pubDate>
      <link>https://forem.com/kilocode/we-gave-claude-opus-47-and-kimi-k26-the-same-workflow-orchestration-spec-1b9m</link>
      <guid>https://forem.com/kilocode/we-gave-claude-opus-47-and-kimi-k26-the-same-workflow-orchestration-spec-1b9m</guid>
      <description>&lt;p&gt;&lt;a href="https://www.kimi.com/blog/kimi-k2-6" rel="noopener noreferrer"&gt;Kimi K2.6&lt;/a&gt; launched on April 20, 2026, four days after Anthropic released &lt;a href="https://www.anthropic.com/news/claude-opus-4-7" rel="noopener noreferrer"&gt;Claude Opus 4.7&lt;/a&gt;. We gave both models the same spec for FlowGraph, a persistent workflow orchestration API with DAG validation, atomic worker claims, lease expiry recovery, pause/resume/cancel, and SSE event streaming. Then we reviewed the code and reproduced the edge cases the models' own tests did not cover.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!3LI_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19a9dd82-4071-49c4-9392-942df218e832_1944x1220.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftedrcmifweg0hliyzz7w.png" alt="Scorecard table comparing Claude Opus 4.7 and Kimi K2.6 across benchmark categories" width="800" height="502"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; Claude Opus 4.7 scored &lt;strong&gt;91/100&lt;/strong&gt; and Kimi K2.6 scored &lt;strong&gt;68/100&lt;/strong&gt; on the same build. Kimi K2.6 reached 75% of Claude Opus's score at &lt;strong&gt;19% of the cost&lt;/strong&gt;, but the 23-point gap sits in lease handling, scheduling, and live streaming (the parts its own tests never exercised).&lt;/p&gt;

&lt;h2&gt;
  
  
  Pricing
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!CFgp!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Facdf3acc-d975-4947-b5ae-d38e4ca523e3_1642x298.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flluikub5rkrh308py0xw.png" alt="Pricing comparison table showing Claude Opus 4.7 at roughly 5x input and 6x output cost vs Kimi K2.6" width="800" height="145"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Claude Opus 4.7 runs at roughly 5x the input cost and 6x the output cost of Kimi K2.6. That is the gap we wanted to pressure-test.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why a Workflow Orchestration Spec
&lt;/h2&gt;

&lt;p&gt;A workflow engine runs jobs like a nightly settlement: fetch captured payments, charge customers, send receipts, publish analytics. Four steps with dependencies between them, retries when a step fails, and recovery when a worker crashes mid-step. Temporal, Airflow, and AWS Step Functions all solve the same problem at different scales.&lt;/p&gt;

&lt;p&gt;Most of our API comparisons test a wide range of skills (architecture, auth, filtering, error handling). For this test we wanted a single deep build where correctness was the main axis. A workflow engine with DAG validation, atomic step claims, lease expiry recovery, retry scheduling, and pause/resume/cancel semantics has objectively right and wrong answers. Either two workers can win the same step or they can't. Either an expired lease is recovered or it isn't. Either a step becomes runnable when its dependencies succeed or it doesn't.&lt;/p&gt;

&lt;p&gt;The spec also calls out at-least-once execution, deterministic scheduling across all eligible steps, and SQLite as the source of truth. The full spec is 1,042 lines and covers 20 endpoints across workflow definitions, runs, workers, events, health, and metrics.&lt;/p&gt;
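&lt;p&gt;For readers who have not built one of these: "DAG validation" just means rejecting workflow definitions whose dependency graph contains a cycle or references an unknown step. A minimal cycle check using Kahn's algorithm is sketched below; the type and field names are our illustration, not code from either model.&lt;/p&gt;

```typescript
// Minimal DAG validation sketch (Kahn's algorithm). A definition is valid
// only if every step can be visited in a cycle-free topological order.
// Type and field names are illustrative assumptions, not the tested code.
type StepDef = { id: string; dependsOn: string[] };

function isValidDag(steps: StepDef[]): boolean {
  const indegree = new Map<string, number>();
  const dependents = new Map<string, string[]>();
  for (const s of steps) indegree.set(s.id, 0);
  for (const s of steps) {
    for (const dep of s.dependsOn) {
      if (!indegree.has(dep)) return false; // dependency on an unknown step
      indegree.set(s.id, (indegree.get(s.id) ?? 0) + 1);
      dependents.set(dep, [...(dependents.get(dep) ?? []), s.id]);
    }
  }
  // Start from steps with no dependencies and peel the graph layer by layer.
  const queue = steps.filter((s) => indegree.get(s.id) === 0).map((s) => s.id);
  let visited = 0;
  while (queue.length > 0) {
    const id = queue.shift()!;
    visited++;
    for (const next of dependents.get(id) ?? []) {
      indegree.set(next, indegree.get(next)! - 1);
      if (indegree.get(next) === 0) queue.push(next);
    }
  }
  return visited === steps.length; // fewer visited than total means a cycle
}
```

&lt;p&gt;Visiting fewer steps than the total means some steps sit on a cycle, and the definition should be rejected with a validation error.&lt;/p&gt;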

&lt;h2&gt;
  
  
  The Prompt
&lt;/h2&gt;

&lt;p&gt;We ran both tests in &lt;a href="https://kilocode.ai/" rel="noopener noreferrer"&gt;Kilo CLI&lt;/a&gt; and gave both models the same prompt:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Read &lt;a class="mentioned-user" href="https://dev.to/spec"&gt;@spec&lt;/a&gt;.md and build the project in the current directory. Treat &lt;a class="mentioned-user" href="https://dev.to/spec"&gt;@spec&lt;/a&gt;.md as the source of truth. Do not simplify this into a mock, toy app, or basic CRUD scaffold. Create all code, configuration, Prisma schema, tests, and README needed for a runnable project. Work autonomously and continue until the implementation is complete. Before you finish, install dependencies, run the test suite, fix any failures you can reproduce, and make sure the project is runnable."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Claude Opus 4.7 ran in high thinking mode. Kimi K2.6 ran in thinking mode. Each model worked in its own empty directory with no shared state.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Each Model Produced
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!1-Kt!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F235d6a11-70ea-4fdc-9e2c-cfa06d1e8392_2716x302.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fttgxo0b114jmma0n9rge.png" alt="Side-by-side comparison of project output from Claude Opus 4.7 and Kimi K2.6" width="800" height="89"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Claude Opus 4.7 finished in about 20 minutes. Kimi K2.6 took longer on the clock, but we are not scoring elapsed time here. Kimi K2.6 was released the day of this test and provider availability is still limited. Wall-clock comparisons against a model as well-supported as Claude Opus 4.7 would distort the picture. Expect that gap to close as more providers host Kimi K2.6.&lt;/p&gt;

&lt;p&gt;Both models delivered the project shape we asked for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prisma with SQLite as the source of truth&lt;/li&gt;
&lt;li&gt;Hono routes for workflow definitions, runs, worker actions, events, health, and metrics&lt;/li&gt;
&lt;li&gt;Conditional &lt;code&gt;updateMany&lt;/code&gt; for step claiming&lt;/li&gt;
&lt;li&gt;Retry and lease-expiry scheduling&lt;/li&gt;
&lt;li&gt;A &lt;code&gt;RunEvent&lt;/code&gt; table for audit logs&lt;/li&gt;
&lt;li&gt;Readmes with setup instructions and at-least-once execution notes&lt;/li&gt;
&lt;/ul&gt;
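&lt;p&gt;The conditional &lt;code&gt;updateMany&lt;/code&gt; claim pattern deserves a quick sketch. In both projects the guard and the write happen in a single SQL statement: the update's WHERE clause requires the step to still be claimable, and a returned count of 1 means this worker won. The in-memory version below mirrors those semantics with illustrative field names; it is not either model's actual code.&lt;/p&gt;

```typescript
// In-memory sketch of the conditional-update claim pattern. In the real
// projects this is a Prisma updateMany whose WHERE clause includes the
// status guard; count === 1 means the claim succeeded atomically.
// Field names are illustrative assumptions.
type StepRun = { id: string; status: string; workerId?: string };

function tryClaim(step: StepRun, workerId: string): boolean {
  // In SQL the check and the write are one atomic statement; here the
  // single check-and-set stands in for that.
  if (step.status !== "queued") return false; // someone else already won
  step.status = "running";
  step.workerId = workerId;
  return true;
}
```

&lt;p&gt;Two workers racing for the same step cannot both win: whichever update commits first flips the status, and the loser's guarded update matches zero rows.&lt;/p&gt;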

&lt;h2&gt;
  
  
  Both Models Said Their Tests Passed
&lt;/h2&gt;

&lt;p&gt;Claude Opus 4.7 ran 31 tests across 6 files. Every test passed. Kimi K2.6 ran 20 tests inside a single file. Every test passed.&lt;/p&gt;

&lt;p&gt;If we had stopped there, the two implementations would look close. They weren't. A direct code review plus targeted reproductions against isolated SQLite databases surfaced &lt;strong&gt;one real bug in Claude Opus 4.7 and six in Kimi K2.6&lt;/strong&gt;. We will show each one with the line that causes it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Claude Opus 4.7: One Real Bug
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Multi-expired lease recovery leaves retryable siblings on a failed run
&lt;/h3&gt;

&lt;p&gt;The spec says that when a step exhausts retries, the parent run fails and every other non-terminal step becomes &lt;code&gt;blocked&lt;/code&gt;. Claude Opus 4.7's recovery path handles this correctly for a single expired lease. With two expired leases in the same recovery pass, it can undo its own block.&lt;/p&gt;

&lt;p&gt;In &lt;code&gt;src/services/workers.ts&lt;/code&gt;, &lt;code&gt;runRecovery()&lt;/code&gt; loads every expired &lt;code&gt;running&lt;/code&gt; step into memory and iterates:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!u_NU!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5c1d7f06-15bd-4804-84cf-29e47f8aebd5_1994x652.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0s1mzjru9xpiywcrqdms.png" alt="Code snippet showing runRecovery() iterating over expired steps in Claude Opus 4.7" width="800" height="262"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If the first iteration exhausts retries for one step, &lt;code&gt;failRunDueToDeadStep()&lt;/code&gt; fires, the run becomes &lt;code&gt;failed&lt;/code&gt;, and every other non-succeeded step is set to &lt;code&gt;blocked&lt;/code&gt;. That is correct.&lt;/p&gt;

&lt;p&gt;The problem is the second iteration. &lt;code&gt;handleLeaseExpiry()&lt;/code&gt; updates by &lt;code&gt;id&lt;/code&gt; only:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!pecT!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91043bb9-9896-4557-8cf3-7d0f7eaba7a9_1994x492.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn0jl4lv6eu9gmv9cyatb.png" alt="Code snippet showing handleLeaseExpiry() updating by id without a status guard" width="800" height="197"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There is no guard on &lt;code&gt;status&lt;/code&gt;, so a step that was just marked &lt;code&gt;blocked&lt;/code&gt; by the prior failure cascade gets updated back to &lt;code&gt;waiting_retry&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;We reproduced it with a run containing two expired running steps: &lt;code&gt;a&lt;/code&gt; with &lt;code&gt;maxAttempts = 1&lt;/code&gt; and &lt;code&gt;b&lt;/code&gt; with &lt;code&gt;maxAttempts = 2&lt;/code&gt;. After recovery:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!kx6e!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F52105575-cea2-4065-8e2d-c2ae18964a2a_1994x276.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc8vyx9ib2lz1xo3f9acz.png" alt="Reproduction output showing step b incorrectly set to waiting_retry after the run had already failed" width="800" height="111"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step &lt;code&gt;b&lt;/code&gt; should have been &lt;code&gt;blocked&lt;/code&gt; because the run had already failed. Instead it is eligible to be claimed again on the next &lt;code&gt;/workers/claim&lt;/code&gt; call.&lt;/p&gt;

&lt;p&gt;Claude Opus 4.7's test suite does not cover this case. It tests single-step lease expiry in isolation.&lt;/p&gt;
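&lt;p&gt;The fix is small: the lease-expiry update needs a status guard so that a step the failure cascade already set to &lt;code&gt;blocked&lt;/code&gt; is left alone. Sketched below with illustrative names; in the real code this would be a status condition added to the update's WHERE clause rather than an id-only match.&lt;/p&gt;

```typescript
// Sketch of the missing guard: only a step still marked "running" may be
// rescheduled to "waiting_retry". A step already moved to "blocked" by the
// run-failure cascade must not be resurrected. Names are illustrative.
type Step = { id: string; status: string };

function rescheduleExpiredLease(step: Step): boolean {
  if (step.status !== "running") return false; // e.g. already blocked
  step.status = "waiting_retry";
  return true;
}
```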

&lt;h3&gt;
  
  
  Smaller contract risks
&lt;/h3&gt;

&lt;p&gt;Two smaller issues turned up in review but did not need a full reproduction.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The claim path reads &lt;code&gt;maxClaims * 10&lt;/code&gt; candidates. That is fine most of the time, but a queue with many skipped candidates at the front can hide valid work farther down the ordered list.&lt;/li&gt;
&lt;li&gt;The SSE stream subscribes after replay finishes and treats an unknown &lt;code&gt;afterEventId&lt;/code&gt; as "replay everything." The spec does not define unknown-cursor behavior explicitly, so this is more a looseness than a bug.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Kimi K2.6: Six Confirmed Issues
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Claim ordering is not global across runs
&lt;/h3&gt;

&lt;p&gt;The spec requires that when multiple steps are eligible, claim order is &lt;code&gt;priority&lt;/code&gt; descending, then &lt;code&gt;availableAt&lt;/code&gt; ascending, then &lt;code&gt;createdAt&lt;/code&gt; ascending, &lt;strong&gt;across all eligible steps&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Kimi K2.6's claim loop orders steps inside each run, then iterates runs in whatever order the database returns them:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!qSqZ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F969de824-79bf-47e6-9b12-64e756b69c02_1994x1082.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkb0eumgvqvye0902qgqc.png" alt="Code snippet showing Kimi K2.6's claim loop ordering steps per run instead of globally" width="800" height="434"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We reproduced this with two active runs on the same queue. One had a step at &lt;code&gt;priority = 10&lt;/code&gt;. The other had a step at &lt;code&gt;priority = 100&lt;/code&gt;. The call to &lt;code&gt;POST /workers/claim&lt;/code&gt; returned the priority 10 step first.&lt;/p&gt;
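&lt;p&gt;What the spec asks for is one three-key sort applied across every eligible step from every run, not a per-run sort followed by run iteration. As a comparator (timestamps as epoch milliseconds, field names assumed):&lt;/p&gt;

```typescript
// Global claim ordering: priority DESC, then availableAt ASC, then
// createdAt ASC, across all eligible steps from all runs at once.
// Field names are illustrative assumptions.
type Eligible = { priority: number; availableAt: number; createdAt: number };

function claimOrder(a: Eligible, b: Eligible): number {
  if (a.priority !== b.priority) return b.priority - a.priority; // high first
  if (a.availableAt !== b.availableAt) return a.availableAt - b.availableAt;
  return a.createdAt - b.createdAt;
}
```

&lt;p&gt;Sorting the full candidate set with this comparator before claiming would have returned the priority 100 step first in our reproduction.&lt;/p&gt;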

&lt;h3&gt;
  
  
  2. SSE is replay-only, not live
&lt;/h3&gt;

&lt;p&gt;The spec requires that &lt;code&gt;GET /runs/:id/events/stream&lt;/code&gt; replays stored events and then switches to live streaming.&lt;/p&gt;

&lt;p&gt;Kimi K2.6's stream reads every persisted event, writes them to the stream, and then starts a keepalive timer. Nothing subscribes to new events. The file &lt;code&gt;src/lib/events.ts&lt;/code&gt; even defines an &lt;code&gt;emitAndBroadcast&lt;/code&gt; function and a subscriber map, but the route never wires to them:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!G7L1!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc0c7d956-6690-4f15-b98a-7c586f81efe5_1994x813.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx28ly90tld8rpkoj38qg.png" alt="Code snippet showing Kimi K2.6's SSE route with unused emitAndBroadcast function and no live subscription" width="800" height="326"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Clients receive replayed history once, then silence. The README still claims live streaming.&lt;/p&gt;
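&lt;p&gt;Replay-then-live is easy to get subtly wrong even when it is wired up: an event emitted between the end of replay and the start of the live subscription is silently lost. One common shape, sketched below with illustrative names, subscribes before replaying and de-duplicates by event id so nothing is dropped or doubled.&lt;/p&gt;

```typescript
// Sketch of replay-then-live event delivery. Subscribe *before* replay and
// track the last delivered event id, so events emitted around the replay
// boundary are neither lost nor delivered twice. Illustrative names only.
type RunEvent = { id: number; data: string };

class EventBus {
  private stored: RunEvent[] = [];
  private subscribers = new Set<(e: RunEvent) => void>();

  emit(e: RunEvent): void {
    this.stored.push(e); // persist first, then fan out to live listeners
    for (const fn of this.subscribers) fn(e);
  }

  // Returns an unsubscribe function, like an SSE connection closing.
  stream(onEvent: (e: RunEvent) => void): () => void {
    let lastId = 0;
    const live = (e: RunEvent) => {
      if (e.id > lastId) { lastId = e.id; onEvent(e); }
    };
    this.subscribers.add(live); // live first: nothing can slip past replay
    for (const e of this.stored) {
      if (e.id > lastId) { lastId = e.id; onEvent(e); } // replay, deduped
    }
    return () => { this.subscribers.delete(live); };
  }
}
```

&lt;p&gt;Kimi K2.6 had the subscriber map; the route just never called into it, which is why clients went silent after replay.&lt;/p&gt;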

&lt;h3&gt;
  
  
  3. Expired leases can still be completed
&lt;/h3&gt;

&lt;p&gt;The heartbeat endpoint rejects expired leases. The &lt;code&gt;complete&lt;/code&gt; and &lt;code&gt;fail&lt;/code&gt; endpoints do not. We reproduced this by claiming a step, forcing &lt;code&gt;leaseExpiresAt&lt;/code&gt; into the past, and calling &lt;code&gt;POST /step-runs/:id/complete&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!Fr-U!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F761cb687-e134-49dc-95c3-10ad448b6690_1994x115.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fae9gs8h3zbftn128rg8x.png" alt="Reproduction output showing a step marked succeeded despite an expired lease" width="800" height="46"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The step was marked &lt;code&gt;succeeded&lt;/code&gt; on an expired lease. The spec treats lease expiry as a failed attempt. A worker can crash, its lease can expire, recovery can schedule a retry for the next worker, and the original worker can still phone in a "success" afterwards.&lt;/p&gt;
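&lt;p&gt;The guard the &lt;code&gt;complete&lt;/code&gt; and &lt;code&gt;fail&lt;/code&gt; endpoints need is the same one the heartbeat endpoint already applies: reject the write once the lease is gone. As a sketch (illustrative field names):&lt;/p&gt;

```typescript
// Sketch of the lease check missing from Kimi K2.6's complete/fail paths.
// A completion is only accepted while the step is running on a live lease;
// a late "success" from a crashed worker must be rejected, because recovery
// may already have handed the step to someone else. Illustrative names.
type LeasedStep = { status: string; leaseExpiresAt: number };

function canComplete(step: LeasedStep, now: number): boolean {
  return step.status === "running" && step.leaseExpiresAt > now;
}
```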

&lt;h3&gt;
  
  
  4. "No active version" returns 404 instead of 409
&lt;/h3&gt;

&lt;p&gt;The spec: if there is no active version and no explicit &lt;code&gt;version&lt;/code&gt;, return &lt;code&gt;409&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Kimi K2.6 raises &lt;code&gt;NOT_FOUND&lt;/code&gt; (404):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!sIdy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feddd5455-d0e0-41f3-bc21-c466d9dbac36_1994x223.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frel8pz5asyg0qpornpo4.png" alt="Code snippet showing Kimi K2.6 returning 404 instead of the spec-required 409" width="800" height="90"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Validation is narrower than the spec
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;CreateRunSchema&lt;/code&gt; and &lt;code&gt;CompleteSchema&lt;/code&gt; use &lt;code&gt;z.record(z.any())&lt;/code&gt; for &lt;code&gt;input&lt;/code&gt;, &lt;code&gt;metadata&lt;/code&gt;, and &lt;code&gt;output&lt;/code&gt;. The spec allows arbitrary JSON payloads. A string, array, or number payload is rejected even though the spec accepts it.&lt;/p&gt;

&lt;h3&gt;
  
  
  6. The clean build path fails
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;npm test&lt;/code&gt; passes. &lt;code&gt;npm run build&lt;/code&gt; does not:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!zw-b!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcee959c-d406-4978-84b3-ea1df77cf582_1994x223.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fny1bkicxeizlrtcr1n4u.png" alt="Terminal output showing npm run build failing on a clean checkout" width="800" height="90"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;package.json&lt;/code&gt; expects &lt;code&gt;npm start&lt;/code&gt; to run &lt;code&gt;node dist/index.js&lt;/code&gt;, so the documented build-and-start flow is broken on a clean checkout.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Each Model Said About Itself
&lt;/h2&gt;

&lt;p&gt;Both models produced end-of-run summaries claiming their implementations were complete and all tests passed. The "all tests passed" part was technically true in both cases. Neither summary flagged the issues above.&lt;/p&gt;

&lt;p&gt;Claude Opus 4.7's summary was mostly accurate. It described its recovery path, atomic claim pattern, and event persistence correctly. The one thing it missed was the multi-expired lease interaction.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!XtV5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F61573fdc-7a07-4b69-a032-2d13a83af77d_1928x2664.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn4u3uoo4gf95fbegylel.png" alt="Claude Opus 4.7's end-of-run summary describing its implementation" width="800" height="1105"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Kimi K2.6's summary claimed deterministic global scheduling and live SSE streaming. Both of those claims are in the README too. The code does not deliver either.&lt;/p&gt;

&lt;p&gt;"My tests pass" is not the same thing as "my implementation is correct." Both models understood the spec well enough to build most of it. Neither model wrote tests that would have caught its own worst behavior.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!yJJ0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50a961ff-359e-447e-9896-0fd38db22966_1928x2672.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftw8fy3agli5qficyaeg7.png" alt="Kimi K2.6's end-of-run summary incorrectly claiming deterministic scheduling and live SSE streaming" width="800" height="1109"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Scoring
&lt;/h2&gt;

&lt;p&gt;We scored each model on the spec, weighted by how much each category mattered for a correctness-first workflow engine.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!QItW!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8401fa44-af87-47e1-a9b9-c68f9e769c57_1504x854.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm5pg0q2iy3l61hx0qxoc.png" alt="Scoring breakdown table weighted by category importance for a correctness-first workflow engine" width="800" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Claude Opus 4.7 lost points on the reproduced recovery bug, the bounded claim scan, and the SSE cursor fallback.&lt;/p&gt;

&lt;p&gt;Kimi K2.6 lost points on the six confirmed issues above. The biggest hits are in recovery, scheduling, and streaming, which is exactly where the spec's hardest requirements live.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cost vs Quality
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!P7SD!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F89be5073-989b-47ed-9398-62efea3ebc75_1038x300.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faw8xfb6ej6wehsh3bj7o.png" alt="Cost vs quality table showing Kimi K2.6 at roughly 4x cheaper per point than Claude Opus 4.7" width="800" height="231"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Kimi K2.6 is about 4x cheaper per point. The missing 23 points are in step-leasing, scheduling, and event streaming, which is where the hardest spec requirements live. Those are the parts that separate "the endpoints exist" from "the system behaves correctly under load."&lt;/p&gt;

&lt;h2&gt;
  
  
  Where Open-Weight Models Stand Right Now
&lt;/h2&gt;

&lt;p&gt;This test sits inside a pattern we've been tracking for a while. MiniMax M2.7 matched Claude Opus 4.6's detection rate on &lt;a href="https://blog.kilocode.ai/" rel="noopener noreferrer"&gt;our last three-part benchmark&lt;/a&gt;. GLM-5.1 scored five points behind Claude Opus 4.6 on &lt;a href="https://blog.kilo.ai/p/we-tested-minimax-m27-against-claude" rel="noopener noreferrer"&gt;our job queue spec&lt;/a&gt;. Kimi K2.6 landed 23 points behind Claude Opus 4.7 here on a harder spec, but still produced the right shape of the system on the first pass.&lt;/p&gt;

&lt;p&gt;The gap on surface coverage &lt;strong&gt;has narrowed meaningfully over the last year.&lt;/strong&gt; The gap on correctness inside hard code paths (lease recovery, cross-run scheduling, streaming semantics) is still there. For work where the bugs only show up under contention or mid-crash, frontier proprietary models are the safer choice today. For work where you need the scaffold, the tables, the endpoint surface, and a starting test suite, open-weight models like Kimi K2.6 are close enough that the price delta matters.&lt;/p&gt;

&lt;p&gt;Kimi K2.6's current pricing ($0.95 / $4 per million tokens) is a starting point, not a floor. Moonshot AI releases open weights, which means Kimi K2.6 will end up hosted on multiple providers, &lt;strong&gt;with pricing and latency converging on whoever runs it most efficiently.&lt;/strong&gt; That is already playing out with MiniMax M2.5, which became the #1 most-used model across every mode in Kilo Code in the months after release. Price competition tends to pull these numbers down further as more hosts come online.&lt;/p&gt;

&lt;p&gt;Being open-weight also means you can &lt;strong&gt;self-host&lt;/strong&gt; or &lt;strong&gt;fine-tune&lt;/strong&gt; Kimi K2.6 if you have data residency requirements, custom workflows, or a cost profile that makes API-only models impractical at scale. That is not a capability Claude Opus 4.7 offers at any price.&lt;/p&gt;

&lt;p&gt;None of that changes the correctness findings above. It does reframe them. At $0.67 with a careful review pass, Kimi K2.6 is a real option now. At $3.56 with fewer corrections needed, Claude Opus 4.7 is the safer call. Which trade-off wins depends on the work. A year ago, that choice did not really exist at this level of complexity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Takeaways
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;For building the scaffold of a complex backend:&lt;/strong&gt; Kimi K2.6 did well. It produced the right project shape, the right tables, the right endpoint surface, and a test suite that passed. For prototyping, exploring a design, or generating a starting point you plan to review carefully, the $0.67 run is a good deal.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For systems where state-machine correctness matters:&lt;/strong&gt; Claude Opus 4.7 pulled clearly ahead. The two implementations look similar in shape but diverge in the code paths that are hard to test casually (lease expiry, cross-run ordering, SSE, expired-lease rejection). If the project needs to behave correctly when leases expire, when multiple runs compete for workers, or when events need to flow live to clients, Claude Opus 4.7's output is closer to something you could ship.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;On trusting model self-reports:&lt;/strong&gt; Both models said they were done. One was mostly right. The other had six spec-level issues in shipped code. "Tests pass" is a necessary signal. It is not a sufficient one for work this correctness-sensitive. A review pass plus a few targeted reproductions closed the gap between what the models said and what they actually built.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Note on Kimi K2.6 Speed
&lt;/h2&gt;

&lt;p&gt;Kimi K2.6 was released the day of this test. Provider availability is limited right now, so the current wall-clock timings understate the model's real speed. We saw similar adoption curves on previous open-weight releases from MiniMax and Z.ai as more providers came online. We expect Kimi K2.6's elapsed time (and its effective cost) to keep dropping as that happens.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Testing performed using &lt;a href="https://kilocode.ai/" rel="noopener noreferrer"&gt;Kilo Code&lt;/a&gt;, a free open-source AI coding assistant for &lt;a href="https://marketplace.visualstudio.com/items?itemName=kilocode.Kilo-Code" rel="noopener noreferrer"&gt;VS Code&lt;/a&gt; and &lt;a href="https://plugins.jetbrains.com/plugin/28350-kilo-code" rel="noopener noreferrer"&gt;JetBrains&lt;/a&gt; with 2,300,000+ Kilo Coders.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>claude</category>
      <category>webdev</category>
      <category>programming</category>
    </item>
    <item>
      <title>Enterprise AI Has a Trust Problem. We’re Hearing It Firsthand.</title>
      <dc:creator>Darko from Kilo</dc:creator>
      <pubDate>Thu, 23 Apr 2026 12:41:29 +0000</pubDate>
      <link>https://forem.com/kilocode/enterprise-ai-has-a-trust-problem-were-hearing-it-firsthand-2mn2</link>
      <guid>https://forem.com/kilocode/enterprise-ai-has-a-trust-problem-were-hearing-it-firsthand-2mn2</guid>
      <description>&lt;p&gt;The last few weeks have been chaotic for anyone paying attention to the AI tooling market. Cursor is &lt;a href="https://blog.kilo.ai/p/congratulations-cursor-on-being-acquired" rel="noopener noreferrer"&gt;set to sell to SpaceX&lt;/a&gt;. Anthropic &lt;a href="https://blog.kilo.ai/p/anthropic-doesnt-want-your-subscription" rel="noopener noreferrer"&gt;pulled the rug&lt;/a&gt; on subscription pricing for businesses. And in the middle of all that noise, our conversations with enterprise teams have been converging on the same frustrations.&lt;/p&gt;

&lt;p&gt;The specifics differ by industry. The underlying problem is consistent: &lt;strong&gt;walled gardens and pricing uncertainty.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Their Ceiling Is Your Ceiling
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjt00rq0enhsdjevziq22.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjt00rq0enhsdjevziq22.png" alt="Illustration representing infrastructure dependency — one lab's ceiling becomes your ceiling" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Take infrastructure trust. A top-three auto manufacturer came to us because their developers were hitting Cursor rate limits and couldn't build while they waited for them to reset. That same company had a second concern, quieter but more significant: they suspected the frontier lab powering their primary tool had oversold capacity and was running into compute headroom issues.&lt;/p&gt;

&lt;p&gt;Whether that was actually true didn't matter. The perception had already taken root. If your workflow depends on one lab's availability, their ceiling is your ceiling.&lt;/p&gt;

&lt;p&gt;Then there's cost visibility. A Director of DevEx at one of the world's largest banks came to us because his developers had existing model agreements with frontier labs, negotiated at the enterprise level, and he wanted them to actually use those models instead of routing everything through a middleman — which isn't possible on vendor-locked tools. On top of that, the other tools he'd evaluated gave him no visibility into token-level costs. When you can't see what you're paying for, you're trusting a vendor's math on your own spend.&lt;/p&gt;

&lt;p&gt;A platform engineer at one of the UK's largest retailers had a similar frustration: his colleague was evaluating a tool with an opaque credit system and finding that developers burned through credits fast when they asked what he called "some juicy questions of the codebase." They wanted powerful models, but they also wanted to know what those models were costing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Routing and Compliance Shouldn't Be Optional
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmoxiz5mg7nt24b6vuw4i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmoxiz5mg7nt24b6vuw4i.png" alt="Illustration representing routing and compliance requirements across different industries" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For others, the issue is routing and compliance. A healthcare software CEO was simultaneously in contract negotiations with two different vendors when he reached out. He wanted to know if there was a more open alternative before he signed with either, and was already writing his own model routing layer internally (a CEO, doing infrastructure work) because "the world changes too much to bet on any one solution."&lt;/p&gt;

&lt;p&gt;A separate healthcare data company came to us for a specific technical reason: they work with PHI and can't route that data through outside vendor infrastructure, but they still need frontier models for tasks that don't touch patient data. They needed one tool that could route differently based on what was actually in the request. That's not an unusual ask. It's compliance.&lt;/p&gt;
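&lt;p&gt;A minimal sketch of what that request-level routing can look like (the endpoints and PHI heuristics below are illustrative placeholders, not any vendor's actual API; production systems use vetted classifiers and allow-lists, not a handful of regexes):&lt;/p&gt;

```python
import re

# Illustrative endpoints -- stand-ins, not real services.
ON_PREM_ENDPOINT = "https://models.internal.example/v1"
FRONTIER_ENDPOINT = "https://api.frontier.example/v1"

# Naive PHI heuristics for demonstration purposes only.
PHI_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                          # SSN-shaped
    re.compile(r"\bMRN[:#]?\s*\d+\b", re.I),                       # medical record number
    re.compile(r"\bDOB[:#]?\s*\d{1,2}/\d{1,2}/\d{2,4}\b", re.I),   # date of birth
]

def contains_phi(text: str) -> bool:
    """True if the request text matches any PHI-shaped pattern."""
    return any(p.search(text) for p in PHI_PATTERNS)

def route(prompt: str) -> str:
    """Keep anything that looks like PHI on self-hosted infrastructure;
    everything else may use frontier models."""
    return ON_PREM_ENDPOINT if contains_phi(prompt) else FRONTIER_ENDPOINT
```

The point is the shape of the decision, not the detector: the routing happens per request, based on content, before anything leaves the network boundary.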

&lt;p&gt;And then there's the on-prem and sovereignty tier. A defense contractor with CUI requirements told us that on-prem model routing wasn't optional, it was a contractual necessity. A cloud CTO asked for mixed inference on day one, with some calls going to self-hosted models, others to their existing AWS Bedrock commitments, and the rest through our gateway, because running models is literally his business and single-vendor inference lock-in was a risk he'd already mapped out. The platform engineer at the UK retailer liked the tool he'd been using personally for 18 months, but said plainly, "obviously I can't bring that to my work environment." He needed enterprise data controls with his company's own Bedrock models underneath.&lt;/p&gt;

&lt;p&gt;The AI champion at a major fast food chain put it most directly: closed vendors are building something that looks a lot like OpenClaw but locked inside their own walled garden, and that's precisely why model-agnostic infrastructure matters to her. The capability isn't the moat. Who controls access to the models is.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Data Backs This Up
&lt;/h2&gt;

&lt;p&gt;We see this play out in our usage data too, and the numbers are striking. On an average day this month, Kilo users are actively running &lt;strong&gt;348 different models&lt;/strong&gt;. Yesterday, the top 10 by usage came from six different labs: MiniMax, StepFun, xAI, ByteDance, Anthropic, and NVIDIA. MiniMax was #1 by request volume. The three most popular models combined only covered half of all usage, and a full third of Kilo traffic goes to labs that most people wouldn't have recognized 18 months ago.&lt;/p&gt;

&lt;p&gt;Nearly half of Kilo users run models from more than one lab in a given month, and that share grew from &lt;strong&gt;29% to 46%&lt;/strong&gt; over the last six weeks. Among organizational customers specifically, 42% used models from two or more labs in a single week, generating &lt;strong&gt;1.1 million requests routed to 19 different labs&lt;/strong&gt;. The number of labs with 1,000+ weekly active users on Kilo grew from 8 in January to 12 in April.&lt;/p&gt;

&lt;p&gt;People also aren't just switching between projects. Yesterday, 15% of users routed to two or more models within a single hour. Power users average five labs a month. The average Kilo employee, who has every model available and no spend cap, draws from 5.7 labs per month. Even internally, with unlimited access, nobody settles on one lab. Multi-model isn't a power-user quirk anymore. It's becoming the default way developers work.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cursor &amp;amp; SpaceX: The Cost of Structural Dependency
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fma2bovg35xldap4g1bda.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fma2bovg35xldap4g1bda.png" alt="Illustration representing the cost of structural dependency on model providers" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://blog.kilo.ai/p/congratulations-cursor-on-being-acquired" rel="noopener noreferrer"&gt;Cursor/SpaceX deal&lt;/a&gt; is worth understanding through this lens. Cursor built a genuinely good product and still ended up in a position where the models at the core of their product were controlled by companies now competing directly against them. The $60 billion acquisition option and access to a million H100s is the cost of buying out of that structural dependency — training their own models so they're not reliant on infrastructure providers who also ship competing tools. That's not a Cursor problem. That's just what it costs to not be dependent on your competitors.&lt;/p&gt;

&lt;p&gt;The auto manufacturer waiting on rate limits, the bank that can't see its token costs, the healthcare company that can't route PHI externally, the defense contractor with on-prem requirements, the retailer who loved a tool he couldn't bring to work. These are all expressions of the same structural problem. When you don't own the model layer, the decisions of whoever does become your constraints.&lt;/p&gt;

&lt;p&gt;And as frontier labs move further into tooling, the likelihood of those constraints tightening only goes up. One enterprise customer said it plainly: "I do not like vendor lock-in. All the features that these big companies are making to try and lure you in and get vendor lock-in on their flagship models is not something I'm interested in."&lt;/p&gt;

&lt;p&gt;He's not alone. The market is moving toward infrastructure that stays out of the way, routing intelligently to whatever model fits the task, showing you exactly what it costs, and not requiring you to trust a vendor's judgment about which models you should have access to. The walled garden is a bet that lock-in wins. Increasingly, the developers and enterprise teams we talk to are betting the other way.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Kilo is the all-in-one agentic engineering platform, open-source and model-agnostic. &lt;a href="https://marketplace.visualstudio.com/items?itemName=kilocode.kilo-code" rel="noopener noreferrer"&gt;Install the VS Code extension&lt;/a&gt; or get started at &lt;a href="https://app.kilo.ai" rel="noopener noreferrer"&gt;app.kilo.ai&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
    </item>
    <item>
      <title>Congratulations Cursor on being acquired by SpaceX!</title>
      <dc:creator>Darko from Kilo</dc:creator>
      <pubDate>Wed, 22 Apr 2026 13:33:41 +0000</pubDate>
      <link>https://forem.com/kilocode/congratulations-cursor-on-being-acquired-by-spacex-le5</link>
      <guid>https://forem.com/kilocode/congratulations-cursor-on-being-acquired-by-spacex-le5</guid>
      <description>&lt;p&gt;Cursor &lt;a href="https://www.nytimes.com/2026/04/21/business/spacex-cursor-deal.html" rel="noopener noreferrer"&gt;reportedly just sold for $60 billion&lt;/a&gt;. To SpaceX. Which already owns xAI.&lt;/p&gt;

&lt;p&gt;When a coding tool gets acquired by an AI lab, users don't get more choices; they get fewer. This is a pattern. &lt;a href="https://techcrunch.com/2025/06/05/anthropic-co-founder-on-cutting-access-to-windsurf-it-would-be-odd-for-us-to-sell-claude-to-openai/" rel="noopener noreferrer"&gt;Anthropic pulled model access from Windsurf&lt;/a&gt; the moment acquisition talks with OpenAI became public. That's how this industry works. Every major lab wants to own the full stack: the model &lt;em&gt;and&lt;/em&gt; the tool sitting on top of it. Control the tool, and you control what developers reach for every day. Control what they reach for, and you control which models win.&lt;/p&gt;

&lt;p&gt;The endgame is lock-in. Your coding assistant becomes a distribution channel for whatever model the parent company needs to push.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;SpaceX &lt;a href="https://twitter.com/SpaceX/status/2046713419978453374" rel="noopener noreferrer"&gt;@SpaceX&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;SpaceXAI and @cursor_ai are now working closely together to create the world's best coding and knowledge work AI. The combination of Cursor's leading product and distribution to expert software engineers with SpaceX's million H100 equivalent Colossus training supercomputer will…&lt;/p&gt;

&lt;p&gt;&lt;em&gt;10:11 PM · Apr 21, 2026 · 100K Views — 231 Replies · 264 Reposts · 1.54K Likes&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Think about what made Cursor worth using in the first place. It wasn't the interface alone. It was the fact that developers could reach for Claude when they needed deep reasoning, GPT-4o when they needed speed, and whatever else was best for the job at hand. That flexibility was the product. It was the reason engineers trusted it with real workflows.&lt;/p&gt;

&lt;p&gt;SpaceX has an AI strategy. It's called xAI. They spent $1.25 trillion worth of equity absorbing it in February. You don't make that bet and then happily route Cursor users to Anthropic's models. You route them to Grok. You fund xAI's next release. You use Cursor as a growth lever for the model you already own. That's just how businesses work.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdx4o96as4rr1tiy6szd9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdx4o96as4rr1tiy6szd9.png" alt="Illustration of the AI coding market consolidating into vertically integrated stacks" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Cursor was under pressure. OpenAI (Codex) and Anthropic (Claude Code) both gained significant traction this year, and were embraced quickly. Cursor found itself fighting uphill on two fronts: defending market share from tools backed by the very labs whose models it depended on, while also shopping for its next funding round. SpaceX was a lifeline. A $60 billion one.&lt;/p&gt;

&lt;p&gt;But lifelines come with strings. The string here is that Cursor users are inheriting Elon Musk's AI roadmap, whether they asked for it or not.&lt;/p&gt;

&lt;p&gt;There's also the Anthropic question, which is more immediate than the xAI consolidation story. Anthropic pulled access from Windsurf during acquisition talks. Not after acquisition. During. The logic is straightforward: if a competitor is about to buy a distribution channel that runs your models, why hand them more leverage? Cursor users who rely heavily on Claude should take that precedent seriously. It could move fast.&lt;/p&gt;

&lt;p&gt;This matters beyond just Cursor. What's happening here is the broader consolidation of the AI coding market into vertically integrated stacks. OpenAI has its own coding product. Anthropic has its own coding product. Google has its own coding product. Each of those labs has clear incentives to favor tools they own or control. The independent, multi-model layer is shrinking.&lt;/p&gt;

&lt;p&gt;Kilo doesn't have a model to sell. We have a tool to build. That means Opus 4.7 when it is best, GPT-4o when GPT-4o is best, Mistral Large 3 when it's the right tool for the job, and the next breakthrough model the moment it's available — whoever ships it. We have no incentive to steer you toward any particular model.&lt;/p&gt;

&lt;p&gt;That's &lt;a href="https://blog.kilo.ai/p/choosing-the-right-ai-coding-model" rel="noopener noreferrer"&gt;model freedom&lt;/a&gt;. It sounds simple because it is. Use the best tool for the job. Don't let your coding assistant's corporate parent make that decision for you.&lt;/p&gt;

&lt;p&gt;The Cursor news is a reminder of why that principle matters.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>kilocode</category>
      <category>cursor</category>
    </item>
    <item>
      <title>The Elephant is Out of the Bag: Meet Ant Group's Ling-2.6-flash</title>
      <dc:creator>Darko from Kilo</dc:creator>
      <pubDate>Wed, 22 Apr 2026 12:33:55 +0000</pubDate>
      <link>https://forem.com/kilocode/the-elephant-is-out-of-the-bag-meet-ant-groups-ling-26-flash-56gm</link>
      <guid>https://forem.com/kilocode/the-elephant-is-out-of-the-bag-meet-ant-groups-ling-26-flash-56gm</guid>
      <description>&lt;p&gt;A short time ago, we &lt;a href="https://blog.kilo.ai/p/introducing-elephant-a-new-stealth" rel="noopener noreferrer"&gt;announced Elephant&lt;/a&gt;, a 100B-parameter stealth model from a prominent open model lab.&lt;/p&gt;

&lt;p&gt;The response from the Kilo community was fantastic. Across coding tasks, complex document parsing, and dynamic agentic workflows, your feedback was incredibly consistent. Elephant was extremely fast and capable. The speculation immediately took off on X and Discord. Was it a new proprietary model from a well-known tech giant? A highly tuned open-source derivative? A completely new architecture?&lt;/p&gt;

&lt;p&gt;Today, it is time to address the Elephant in the room and take off the mask.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbgkvw7btamzqxru9xrzn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbgkvw7btamzqxru9xrzn.png" alt="Elephant reveal hero image showing the unmasking of the stealth model as Ant Group's Ling-2.6-flash" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We're excited to officially reveal that the stealth model you've been using in your day-to-day coding workflows and agentic assistants is none other than &lt;strong&gt;Ant Group's Ling-2.6-flash&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;After all, you can't spell &lt;em&gt;Elephant&lt;/em&gt; without &lt;em&gt;ant&lt;/em&gt;!&lt;/p&gt;

&lt;p&gt;By releasing Ling-2.6-flash under a pseudonym, we wanted to let the model's performance speak entirely for itself, free from any pre-existing brand bias or market expectations. The community's blind tests confirmed what we already suspected: this model is an absolute powerhouse for developers building next-generation AI applications. With super fast inference from &lt;a href="https://novita.ai/" rel="noopener noreferrer"&gt;Novita&lt;/a&gt;, it was a win-win.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://x.com/novita_labs/status/2046662345997631518?s=20" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgizyv7o1il2em8b0rew2.png" alt="Novita tweet confirming the Ling-2.6-flash reveal and fast inference partnership" width="800" height="369"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But the Kilo community didn't just test Elephant — you actively helped refine how it operates. During the stealth phase, we received some absolutely great community PRs improving the system prompts and fine-tuning the integration. Thanks to your collaborative optimizations, the model's performance on Kilo has been pushed even further, unlocking better instruction adherence and sharper contextual reasoning.&lt;/p&gt;

&lt;p&gt;So, what exactly is under the hood of the model formerly known as Elephant?&lt;/p&gt;

&lt;p&gt;Here is the official description of the newly unmasked powerhouse:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Introducing &lt;strong&gt;Ling-2.6-flash&lt;/strong&gt;, an Instant model with 104B total parameters and 7.4B active parameters, built for real-world agents to deliver fast responses, strong execution, and high token efficiency — matching SOTA-class performance at similar scale while significantly reducing token usage across coding, document processing, and lightweight agent workflows.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This architectural balance is what makes it so agile. In an era where AI agents are expected to operate autonomously, process massive context windows, and return actionable code in milliseconds, Ling-2.6-flash hits the sweet spot of intelligence and speed. You get the deep reasoning capabilities of a 104B-parameter model, paired with the low latency and cost-effectiveness of a network that activates only a focused 7.4B parameters per pass.&lt;/p&gt;
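&lt;p&gt;The back-of-the-envelope arithmetic behind that claim, using the rough approximation that forward-pass compute scales with active parameters (exact costs depend on architecture details that aren't published here):&lt;/p&gt;

```python
total_params = 104e9   # parameters stored (memory footprint)
active_params = 7.4e9  # parameters activated per token (compute cost)

# Fraction of the network doing work on any given token.
activation_ratio = active_params / total_params
print(f"~{activation_ratio:.1%} of parameters active per token")  # ~7.1%

# Rough forward-pass compute ratio vs. a hypothetical dense 104B model.
compute_ratio = total_params / active_params
print(f"~{compute_ratio:.0f}x fewer forward-pass FLOPs than dense")  # ~14x
```

In other words, the model pays dense-104B memory costs but only sparse-7.4B compute costs per token, which is where the latency and price advantages come from.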

&lt;p&gt;Many are familiar with &lt;a href="https://www.interconnects.ai/p/inside-a-chinese-frontier-lab-inclusion" rel="noopener noreferrer"&gt;Ant Group&lt;/a&gt;'s trillion-parameter model released at the end of 2025 — Ling-1T — which seemed designed to compete directly with &lt;a href="https://kilo.ai/landing/deepseek-v3" rel="noopener noreferrer"&gt;DeepSeek-V3&lt;/a&gt;. This flash model is an intriguing refinement of those capabilities.&lt;/p&gt;

&lt;p&gt;A major driver behind this agility is how the model was trained. Ant Ling models are designed around &lt;strong&gt;Agentic RL&lt;/strong&gt; (reinforcement learning for agents), and that agent-first foundation makes Ling-2.6-flash fully compatible with &lt;strong&gt;OpenClaw&lt;/strong&gt;. The best way to see this in action is &lt;strong&gt;&lt;a href="https://kilo.ai/kiloclaw" rel="noopener noreferrer"&gt;KiloClaw&lt;/a&gt;&lt;/strong&gt;, our hosted OpenClaw that's faster, easier, and safer than anything else on the market. That training empowers the model to go far beyond simple text generation: it can handle complex agentic workflows, execute terminal operations, manage dynamic GUI interactions, and coordinate sophisticated tool calls.&lt;/p&gt;

&lt;h2&gt;
  
  
  Celebrate the Reveal: Free Ling-2.6-flash for an Entire Week
&lt;/h2&gt;

&lt;p&gt;To celebrate this unmasking and thank you for your incredible contributions, we want to make sure everyone has the opportunity to experience the raw power of Ling-2.6-flash without any friction. You'll find it under the &lt;a href="https://huggingface.co/inclusionAI" rel="noopener noreferrer"&gt;inclusionAI&lt;/a&gt; moniker, which is the name of Ant Group's Artificial General Intelligence (AGI) initiative.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F197xpqpmut2b9n3geagi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F197xpqpmut2b9n3geagi.png" alt="Ling-2.6-flash shown in the Kilo model selector as free for one week" width="690" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Starting right now, &lt;strong&gt;Ling-2.6-flash is completely free to use in &lt;a href="https://kilo.ai/" rel="noopener noreferrer"&gt;Kilo Code and KiloClaw&lt;/a&gt; for an entire week — with absolutely no limits.&lt;/strong&gt; That's right. No rate limits holding back your automated agent loops, no token caps on your massive document processing tasks, and no paywalls stopping your late-night coding sessions. Whether you are building an autonomous research agent or a personal AI assistant with KiloClaw, we've got you covered.&lt;/p&gt;

&lt;p&gt;The Elephant is out of the bag 🐘&lt;/p&gt;

&lt;p&gt;We can't wait to see what you build with Ant's Ling-2.6-flash during this unlimited free week. Log into Kilo now, and let your agents loose!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>product</category>
      <category>kilocode</category>
    </item>
    <item>
      <title>Thank you, Roo! We’ll take it from here.</title>
      <dc:creator>Darko from Kilo</dc:creator>
      <pubDate>Wed, 22 Apr 2026 12:26:23 +0000</pubDate>
      <link>https://forem.com/kilocode/thank-you-roo-well-take-it-from-here-4hkh</link>
      <guid>https://forem.com/kilocode/thank-you-roo-well-take-it-from-here-4hkh</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; Roo Code &lt;a href="https://x.com/mattrubens/status/2046636598859559114" rel="noopener noreferrer"&gt;is no more&lt;/a&gt;. We're grateful for what the Roo team contributed to Kilo, and we're still going full speed on building the best agentic coding experience in VS Code. &lt;a href="https://app.kilo.ai/get-started" rel="noopener noreferrer"&gt;Install here&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Roo Code is officially shutting down. The team &lt;a href="https://x.com/mattrubens/status/2046636598859559114" rel="noopener noreferrer"&gt;announced&lt;/a&gt; they're archiving the repo on May 15th to go all-in on Roomote, their cloud agent.&lt;/p&gt;

&lt;p&gt;First: congrats to the Roo team. 3 million installs is a hell of a run, and a lot of the modern IDE agent playbook came out of that project. Custom modes, the Architect/Code/Debug split, diff-based editing, the whole "let the agent actually do things" philosophy that's now table stakes. Roo pushed it forward when a lot of people were still arguing about whether autocomplete was enough.&lt;/p&gt;

&lt;p&gt;Kilo started as a fork of Roo. We've been contributing back upstream since our inception, and a lot of what Kilo does well today started with the work Roo shipped first. For that, we're very grateful!&lt;/p&gt;

&lt;h2&gt;
  
  
  Kilo is not slowing down on VS Code
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0b2tp524ofvvmzif5dsw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0b2tp524ofvvmzif5dsw.png" alt="Screenshot of the new Kilo VS Code extension interface showing the Agent Manager and parallel execution features" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The IDE is not over. Far from it, actually. Every independent developer, every engineering team, every enterprise shipping production software still lives in an editor for most of their working hours. That's not going away, and the quality of the agent sitting next to them in that environment matters enormously.&lt;/p&gt;

&lt;p&gt;Which is why we just &lt;a href="https://blog.kilo.ai/p/new-kilo-for-vs-code-is-live" rel="noopener noreferrer"&gt;completely rebuilt the Kilo VS Code extension&lt;/a&gt; from the ground up on the OpenCode server, a portable open-source core that now shares the same engine as the Kilo CLI and Cloud Agents. That's not something you do if you think the IDE is a dead end.&lt;/p&gt;

&lt;p&gt;The rebuild unlocked things that weren't possible before: true parallel execution, subagent delegation, an Agent Manager for running and monitoring multiple agents at once, inline diff review with line-level comments, and cross-platform sessions that carry state between your terminal and your editor without losing context.&lt;/p&gt;

&lt;p&gt;It's already a fundamentally different surface than what we shipped at launch just a few weeks ago, and we're still &lt;a href="https://blog.kilo.ai/p/new-vs-code-extension-week-one-what" rel="noopener noreferrer"&gt;actively hardening it based on what the community is telling us&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We also think coding isn't the only place that AI should be working for you. &lt;a href="https://kilo.ai/kiloclaw" rel="noopener noreferrer"&gt;KiloClaw&lt;/a&gt; is a personal AI assistant that can proactively take actions across external platforms, automate workflows on its own schedule, and handle work that doesn't require you to be in the IDE at all.&lt;/p&gt;

&lt;p&gt;We care about both sides of how developers actually spend their time, and we're not slowing down on either.&lt;/p&gt;

&lt;h2&gt;
  
  
  For the Roo community
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F13tceq5fam1q9dwhts4d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F13tceq5fam1q9dwhts4d.png" alt="Illustration representing the open model future and Kilo's open-source roots" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you've been using Roo Code, you'll feel right at home in the Kilo extension — the &lt;a href="https://blog.kilo.ai/p/roo-or-cline-were-building-a-superset" rel="noopener noreferrer"&gt;codebases share a grandparent&lt;/a&gt;, after all.&lt;/p&gt;

&lt;p&gt;If you want to go deeper, the repo is completely open. Open source is how this whole ecosystem got here, and &lt;a href="https://blog.kilo.ai/p/new-vs-code-week-two" rel="noopener noreferrer"&gt;open source contributors are the reason Kilo moves as fast as it does&lt;/a&gt;. If you've been contributing to Roo, we'd love to have you.&lt;/p&gt;

&lt;p&gt;To the Roo team and the Roo community: thanks for everything. The bar you set is the reason the rest of us had something to aim at.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://app.kilo.ai/get-started" rel="noopener noreferrer"&gt;Download the Kilo VS Code Extension&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/Kilo-Org/kilocode" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kilo.ai/docs/contributing" rel="noopener noreferrer"&gt;Contributor Resources&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kilo.ai/docs/" rel="noopener noreferrer"&gt;Docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kilo.codes/discord" rel="noopener noreferrer"&gt;Discord&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>roo</category>
      <category>programming</category>
      <category>kilocode</category>
    </item>
    <item>
      <title>Kimi K2.6 Has Arrived: An Open-Weight Powerhouse for Agentic Work</title>
      <dc:creator>Darko from Kilo</dc:creator>
      <pubDate>Tue, 21 Apr 2026 10:38:21 +0000</pubDate>
      <link>https://forem.com/kilocode/kimi-k26-has-arrived-an-open-weight-powerhouse-for-agentic-work-27k0</link>
      <guid>https://forem.com/kilocode/kimi-k26-has-arrived-an-open-weight-powerhouse-for-agentic-work-27k0</guid>
      <description>&lt;p&gt;Moonshot AI just dropped their latest model, &lt;strong&gt;Kimi K2.6&lt;/strong&gt;, and it's an absolute powerhouse for agentic workflows. Even better? It's completely open-weight from release day.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9k05yv6khf6p3p3ej0lz.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9k05yv6khf6p3p3ej0lz.jpeg" alt="Kimi K2.6 announcement hero image showing benchmark comparisons" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Moonshot AI is starting to feel like less of a "moonshot" and more of a sure thing. The lab's previous big release, Kimi K2.5, was an &lt;a href="https://blog.kilo.ai/p/what-we-learned-from-a-week-of-free" rel="noopener noreferrer"&gt;immediate hit on Kilo&lt;/a&gt;. Our users praised its ability to reason through complex codebases, suggest refactoring strategies, and maintain context across large-scale projects.&lt;/p&gt;

&lt;p&gt;The next iteration doesn't disappoint, ensuring that Kimi models will stay competitive with frontier offerings like OpenAI's GPT models. During our early preview testing, &lt;a href="https://kilo.ai/models/moonshotai-kimi-k2-6" rel="noopener noreferrer"&gt;Kimi K2.6&lt;/a&gt; blew us away with its ability to handle complex, long-context tasks across massive codebases. &lt;strong&gt;We're thrilled to announce that Kimi K2.6 is already live, fully integrated, and available to use in Kilo Code and KiloClaw.&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;K2.6 offers SOTA-level performance at a fraction of the cost.&lt;/strong&gt; It's tremendously good at long-context tasks across the codebase, as well as the day-to-day work needed to support an always-on agent like KiloClaw. Moonshot has impressed us yet again!&lt;/p&gt;

&lt;p&gt;—Scott Breitenother, Co-founder &amp;amp; CEO, Kilo Code&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;A Model Designed for OpenClaw&lt;/h2&gt;

&lt;p&gt;What sets Kimi K2.6 apart is its sheer stamina and reliability for continuous, long-horizon coding tasks. This isn't just an iterative update. The numbers speak for themselves. In head-to-head benchmarking against the industry's heaviest closed-source hitters, Kimi K2.6 is an &lt;a href="https://huggingface.co/moonshotai/Kimi-K2.6" rel="noopener noreferrer"&gt;open-weight model&lt;/a&gt; that holds its own. It scored an impressive &lt;strong&gt;80.2% on SWE-Bench Verified&lt;/strong&gt; and &lt;strong&gt;58.6% on SWE-Bench Pro&lt;/strong&gt;, showcasing its deep understanding of real-world software engineering issues. Additionally, it achieved a strong &lt;strong&gt;92.5% F1-score on DeepSearchQA&lt;/strong&gt; and an excellent &lt;strong&gt;66.7% on Terminal-Bench 2.0&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;We found Kimi K2.6 to be tremendously capable at handling the rigorous, day-to-day processing required to support an always-on agent like KiloClaw. Over a continuous 13-hour execution period, Kimi K2.6 independently iterated through 12 optimization strategies, made over 1,000 tool calls, and precisely modified more than 4,000 lines of code. The result? A massive &lt;strong&gt;185% leap in median throughput&lt;/strong&gt; (from 0.43 to 1.24 MT/s).&lt;/p&gt;

&lt;p&gt;We're excited to give it a run in &lt;a href="https://pinchbench.com/" rel="noopener noreferrer"&gt;PinchBench&lt;/a&gt; shortly and see whether these results carry over to OpenClaw tasks in the benchmark.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Kimi K2.6 raises the bar for open-source models.&lt;/strong&gt; It excels in coding and especially for agentic tools like OpenClaw and Hermes. In early testing, it sustains long multi-step sessions with impressive stability.&lt;/p&gt;

&lt;p&gt;—Michael Chiang, Co-founder, Ollama&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;For teams deploying multi-agent systems, Kimi K2.6 elevates the Agent Swarm architecture to entirely new heights. The model can now dynamically scale horizontally to &lt;strong&gt;300 sub-agents executing across 4,000 coordinated steps simultaneously&lt;/strong&gt; — a massive leap from K2.5's limit of 100 sub-agents and 1,500 steps. This extreme parallelization fundamentally reduces end-to-end latency while enabling the swarm to execute deeply complex, heterogeneous tasks concurrently.&lt;/p&gt;

&lt;p&gt;K2.6 also gives single agents more power. It lets you turn files such as PDFs, spreadsheets, slides, and Word documents into agent skills, unlocking a wide range of agentic &lt;a href="https://kilo.ai/kiloclaw#use-cases" rel="noopener noreferrer"&gt;knowledge work&lt;/a&gt; (see their &lt;a href="https://www.kimi.com/blog/kimi-k2-6" rel="noopener noreferrer"&gt;release post&lt;/a&gt; for examples).&lt;/p&gt;

&lt;h2&gt;Ready for Kilo Code and KiloClaw&lt;/h2&gt;

&lt;p&gt;Whether you're doing deep codebase refactoring, hunting down non-obvious bugs, or setting up autonomous 24/7 workflows, K2.6 delivers the performance, instruction-following, and stability you need. One caveat: the model can be very creative, so &lt;em&gt;make sure you give it clear instructions;&lt;/em&gt; when you do, its ability to minimize repetitive overhead translates to a significantly smoother, more trustworthy end-to-end experience for developers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kilo.ai/" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftybfl33cvy17jv3bz0w5.png" alt="Kimi K2.6 available in the Kilo model selector" width="712" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Ready to put these impressive stats to the test in your own workspace? &lt;strong&gt;&lt;a href="https://kilo.ai/models/moonshotai-kimi-k2-6" rel="noopener noreferrer"&gt;Kimi K2.6&lt;/a&gt; is available to use now in the Kilo Gateway.&lt;/strong&gt; That means you can use it wherever you use Kilo — in the Kilo CLI, our VS Code and JetBrains extensions, &lt;a href="https://blog.kilo.ai/p/how-to-use-kilo-gateway-with-hermes" rel="noopener noreferrer"&gt;Hermes&lt;/a&gt;, &lt;a href="https://kilo.ai/kiloclaw" rel="noopener noreferrer"&gt;KiloClaw&lt;/a&gt; (our hosted OpenClaw), and more. Experience the next evolution of open-source agentic intelligence today.&lt;/p&gt;

&lt;p&gt;Read more about the official model release and dive into the full technical benchmarks from Moonshot AI here: &lt;a href="https://www.kimi.com/blog/kimi-k2-6" rel="noopener noreferrer"&gt;Kimi K2.6 Announcement&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
    </item>
    <item>
      <title>Talk to the Claw: The Interface Is Now a Single Sentence</title>
      <dc:creator>Darko from Kilo</dc:creator>
      <pubDate>Tue, 21 Apr 2026 10:35:15 +0000</pubDate>
      <link>https://forem.com/kilocode/talk-to-the-claw-the-interface-is-now-a-single-sentence-46j7</link>
      <guid>https://forem.com/kilocode/talk-to-the-claw-the-interface-is-now-a-single-sentence-46j7</guid>
      <description>&lt;p&gt;At Kilo, we aren't approaching this question in the abstract — we're living it every day.&lt;/p&gt;

&lt;p&gt;As we lean into agentic flows, we're discovering that working in a new interface means that the layer between you and the tool is no longer a dashboard, a form, or a button. &lt;strong&gt;It's a sentence.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You will still hear people talk about UX improvements. Better navigation. Cleaner design. More intuitive onboarding flows. It will be framed as progress.&lt;/p&gt;

&lt;p&gt;But the real change runs deeper than any redesign. The interface layer is decoupling from the application layer entirely. You don't need to know where the button is. You don't need to learn the menu structure. You just say what you need done.&lt;/p&gt;

&lt;p&gt;Natural language &lt;em&gt;is&lt;/em&gt; the new UI.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I'm not saying every app will disappear.&lt;/li&gt;
&lt;li&gt;I'm not saying this works perfectly today for every use case.&lt;/li&gt;
&lt;li&gt;I'm not saying you should throw away your existing workflows.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But here's what I &lt;em&gt;am&lt;/em&gt; saying: the apps you already use didn't have to rebuild themselves from scratch for this to be true. &lt;a href="https://kilo.ai/kiloclaw" rel="noopener noreferrer"&gt;KiloClaw&lt;/a&gt; can talk to Todoist &lt;em&gt;and&lt;/em&gt; Linear &lt;em&gt;and&lt;/em&gt; your calendar &lt;em&gt;and&lt;/em&gt; your inbox — through the same window, using the same language you'd use to text a colleague. You don't have to live inside each one to operate them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4fckrggltsjaucrjvkod.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4fckrggltsjaucrjvkod.png" alt="Screenshot of KiloClaw connected to Todoist, showing a task list created from a single natural language prompt" width="800" height="746"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Credit: Todoist&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This isn't about saving five minutes. It's about a bigger shift. The way we interact with software is fundamentally changing.&lt;/p&gt;

&lt;h2&gt;Twelve Tools, One Front Door&lt;/h2&gt;

&lt;p&gt;Here's where the new interface really shines.&lt;/p&gt;

&lt;p&gt;Last week, I had a new project land in my inbox. I downloaded the PDF, uploaded it to my KiloClaw bot on Telegram, and typed a simple prompt in natural language: &lt;em&gt;Create a Todoist project for this and add the tasks based on these guidelines.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;That's it. No excessive bulleted lists. No diagrams. No long paragraphs discussing the background and goals for this project. Just a single sentence.&lt;/p&gt;

&lt;p&gt;Thirty seconds later, it was done.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3fb2ri91foykwgh5s7eb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3fb2ri91foykwgh5s7eb.png" alt="KiloClaw conversation on Telegram showing a Todoist project created from a PDF in a single prompt" width="800" height="794"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On Monday, I was meeting with a friend and colleague, and we agreed to sync again the following week. We both pulled up our calendars and found a time. I sent a message to KiloClaw, and my friend received a calendar invite a minute later.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgich8nh1qulgj78acc2a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgich8nh1qulgj78acc2a.png" alt="KiloClaw chat showing a calendar invite sent to a colleague via a single natural language message" width="800" height="363"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Two different tools. Two different &lt;a href="https://kilo.ai/kiloclaw/openclaw-for" rel="noopener noreferrer"&gt;workflows&lt;/a&gt;. One conversation.&lt;/p&gt;

&lt;p&gt;Here's the thing: Todoist actually has a feature for this. It's called Todoist Ramble — you can talk to it, describe your project, and it populates tasks for you. That's cool. But that's not the unlock I'm talking about.&lt;/p&gt;

&lt;p&gt;I'm the kind of person who has a different tool for everything. Todoist for tasks. Obsidian for a knowledge base. GitHub for engineering projects. Slack for team communication. Gmail for email. Each of them lives in its own silo, with its own interface, its own learning curve, its own quirks.&lt;/p&gt;

&lt;p&gt;The problem has never been the tools.&lt;/p&gt;

&lt;p&gt;The problem is the twelve different front doors. With a unified interface that acts on natural language, we now have a single way into the house.&lt;/p&gt;

&lt;h2&gt;The New Interface Is the Front Door We Always Needed&lt;/h2&gt;

&lt;p&gt;Count the apps you opened before lunch today.&lt;/p&gt;

&lt;p&gt;Email. Slack. Calendar. Linear. Todoist.&lt;/p&gt;

&lt;p&gt;They're all like different doors into your life, each with its own login, its own layout, its own way of asking you to do the same basic thing: move information from your head into the right place.&lt;/p&gt;

&lt;p&gt;That tax — the constant context-switching, the re-orienting, the "where does this live?" — is so familiar that most of us stopped noticing it.&lt;/p&gt;

&lt;p&gt;We got so used to micro context-switching that we forgot there could be a better way.&lt;/p&gt;

&lt;p&gt;Curious?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Here's what I recommend you do to get started:&lt;/strong&gt; Choose one workflow you do repeatedly. Something tedious. Something where you're just copying information from one place to another. Tell your bot to do it instead.&lt;/p&gt;

&lt;p&gt;You might be surprised how short the conversation needs to be.&lt;/p&gt;

</description>
      <category>openclaw</category>
      <category>developer</category>
      <category>ai</category>
      <category>productivity</category>
    </item>
    <item>
      <title>#MyBotDoesThat: 7 Tasks the Kilo Team Retired From Forever</title>
      <dc:creator>Darko from Kilo</dc:creator>
      <pubDate>Mon, 20 Apr 2026 12:57:02 +0000</pubDate>
      <link>https://forem.com/kilocode/mybotdoesthat-7-tasks-the-kilo-team-retired-from-forever-1b81</link>
      <guid>https://forem.com/kilocode/mybotdoesthat-7-tasks-the-kilo-team-retired-from-forever-1b81</guid>
      <description>&lt;h2&gt;
  
  
  What Does It Actually Look Like to Retire from a Task?
&lt;/h2&gt;

&lt;p&gt;A lot of people are still waiting for AI to deliver on the futurist promise of freeing us from the boring, tedious, repetitive tasks that nobody wants to do. Scheduling, monitoring, status updates. The kind of low-stakes stuff that somehow still eats 30 minutes of your day because you have to context-switch into it, do the thing, and context-switch back out.&lt;/p&gt;

&lt;p&gt;The thing is, that future is already here. Most people just haven't noticed yet. KiloClaw is an always-on personal AI agent that connects to your other platforms, runs in the background, and handles the tasks you keep telling yourself you'll "get to later."&lt;/p&gt;

&lt;p&gt;People are using Claws for everything from meal prep to cattle farming, and we wanted to share what that actually looks like in practice before we tell you about the challenge we're running (and the prizes attached to it).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Challenge TL;DR:&lt;/strong&gt; Retire from a mundane task by automating it with KiloClaw, film a 30-second video of it, post it on social media with &lt;strong&gt;#MyBotDoesThat&lt;/strong&gt;, and nominate 3 people to do the same. You need to mention KiloClaw by name and show part of your dashboard workflow. First place gets $500 in Kilo credits, a $250 Amazon gift card, and 2 free months of hosting.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Enter the Challenge&lt;/p&gt;




&lt;h2&gt;7 Automations from the Kilo Team&lt;/h2&gt;

&lt;h3&gt;Evgeny: Weekly Meal Prep&lt;/h3&gt;

&lt;p&gt;Evgeny, Engineer at Kilo, has been running a meal prep workflow through his Claw for the past couple of weeks. Every Friday evening it sends him a reminder and they plan the next week's meals together, after which it pushes all the groceries into a single Todoist list and creates a separate list for each day's dishes with step-by-step prep instructions.&lt;/p&gt;

&lt;p&gt;He batch cooks on the weekend, freezes everything, and each evening the Claw tells him what to pull out of the freezer for the next day. The loop of plan, shop, prep, freeze, defrost, eat is all coordinated through a single bot that he set up once.&lt;/p&gt;




&lt;h3&gt;Ligia: Running a Cattle Farm from 10,700 km Away&lt;/h3&gt;

&lt;p&gt;Ligia, Kilo Support Engineer, manages a cattle operation in Brazil while living in the Netherlands. She uses a health monitoring tool that tracks whether each cow is healthy, lactating, or dry. The tool throws off a constant stream of live updates, most of which are just noise, so she pipes all of them into her Claw. If a health issue persists for more than 24 hours, the Claw adds it to her to-do list and drafts a message to her vet over email or Telegram.&lt;/p&gt;

&lt;p&gt;The Claw also has access to the system that monitors milk production and collection, which means she can check output volumes and cash flow without logging into anything. She's doing all of this from over 10,000 kilometers away from the actual farm, using her Claw as the single interface for everything that's happening on the ground.&lt;/p&gt;




&lt;h3&gt;Scott: Meeting Prep He Never Has to Think About&lt;/h3&gt;

&lt;p&gt;Scott, Co-Founder and CEO of Kilo, gave his Claw access to its own Google account and connected it to his calendar. Thirty minutes before any meeting, the Claw reviews the attendee list and any attached documents and cross-references everything with his CRM; then, about 10 minutes before the meeting starts, it sends him a briefing of exactly what he needs to know. He doesn't scramble to remember who someone is or what the last conversation was about anymore, because the bot already did that work for him.&lt;/p&gt;




&lt;h3&gt;Emilie: Checking the Weather&lt;/h3&gt;

&lt;p&gt;Emilie, Co-Founder and VP of Engineering at Kilo, took about 15 minutes to set up a daily weather briefing as part of her morning update. Instead of opening a weather app, making sure it's pulling the right location, and scrolling past hourly forecasts she doesn't care about, she just gets the relevant info sent to her each morning. It's one of those automations that sounds almost too simple to bother with, but once it's running you realize how many small steps you were doing manually every single day.&lt;/p&gt;




&lt;h3&gt;Brian: Flight Info Without the Email Dig&lt;/h3&gt;

&lt;p&gt;Brian, DevRel at Kilo, forwarded his personal Gmail and Google Calendar to his Claw's own Google account, so it could scan for flight information whenever it comes in. Now, whenever he has a flight coming up, the Claw pulls together a briefing with the gate number, flight number, departure time, and seat number. The whole routine of opening your email at the airport, searching "confirmation," scrolling past three marketing emails from the airline, and finally finding the actual itinerary is just gone. The bot already read it and told him what he needs.&lt;/p&gt;




&lt;h3&gt;Brendan: Spinning Up Benchmarks&lt;/h3&gt;

&lt;p&gt;Brendan, DevRel at Kilo, retired from dialing into servers to spin up new runs for PinchBench, the benchmark that tests how models perform in OpenClaw. That was it for him. He set up the Claw, pointed it at the workflow, and stopped thinking about it. Not every retirement needs to be elaborate — sometimes the best automation is the one where you just never do the thing again and forget it was ever manual.&lt;/p&gt;




&lt;h3&gt;Ari: Getting Around NYC&lt;/h3&gt;

&lt;p&gt;Ari retired from using multiple transit apps to get around NYC. He was tired of apps that don't sync correctly with his calendar, suggest routes that aren't actually great, and ignore some of the best options for getting around the city, like ferries. So he deleted all of them and replaced the whole stack with his KiloClaw bot. One interface that knows his schedule, knows his options, and doesn't try to upsell him on a premium subscription to see the fastest route.&lt;/p&gt;




&lt;h2&gt;The #MyBotDoesThat Challenge&lt;/h2&gt;

&lt;p&gt;We're running a challenge where you do the same thing these folks did: retire from a task by offloading it to KiloClaw, film it, and post it. The best videos win prizes.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Place&lt;/th&gt;
&lt;th&gt;Prize&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;🥇 1st Place&lt;/td&gt;
&lt;td&gt;$500 in Kilo credits + $250 Amazon gift card + 2 free months of hosting&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;🥈 2nd Place&lt;/td&gt;
&lt;td&gt;$250 in Kilo credits + 2 free months of hosting&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;🥉 3rd Place&lt;/td&gt;
&lt;td&gt;$150 in Kilo credits + 2 free months of hosting&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;How to Enter&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Do the task one last time&lt;/strong&gt; and make it dramatic.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Say the line:&lt;/strong&gt; &lt;em&gt;"I'm [name], and I'm retiring from [task] permanently. My KiloClaw bot handles that now."&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Show your workflow.&lt;/strong&gt; Flash your KiloClaw dashboard on screen, screenshare it, or walk through the prompt you gave your bot. We need to see some part of your KiloClaw setup.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Nominate 3 people by name.&lt;/strong&gt; &lt;em&gt;"I nominate [name], [name], and [name]. What are YOU retiring from?"&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Keep it around 30 seconds&lt;/strong&gt; and post it on any social media platform with &lt;strong&gt;#MyBotDoesThat&lt;/strong&gt;. Tag your nominees in the caption and call them out in the video.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Drop your link here:&lt;/strong&gt; Enter the Challenge
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To be eligible, you need to mention &lt;strong&gt;KiloClaw by name&lt;/strong&gt; and show some part of your workflow in the &lt;strong&gt;KiloClaw dashboard&lt;/strong&gt;. Post it on TikTok, X, LinkedIn, Instagram, YouTube Shorts — wherever you want!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The contest closes Friday, April 24th at 11:59 pm PDT.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;Need Inspiration?&lt;/h2&gt;

&lt;p&gt;We have a massive recipe book of KiloClaw use cases here: ClawBytes&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The best entries won't be about impressive automations. They'll be about &lt;strong&gt;relatable&lt;/strong&gt; ones — the kind where everyone watching thinks &lt;em&gt;"wait, I could automate that too."&lt;/em&gt; The nomination chain handles distribution and the relatability handles the rest.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;What are you retiring from?&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>resources</category>
      <category>agents</category>
    </item>
  </channel>
</rss>
