<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Rost</title>
    <description>The latest articles on Forem by Rost (@rosgluk).</description>
    <link>https://forem.com/rosgluk</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3544400%2F04dd81bf-749e-4055-971f-316c0134e76c.jpg</url>
      <title>Forem: Rost</title>
      <link>https://forem.com/rosgluk</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/rosgluk"/>
    <language>en</language>
    <item>
      <title>OpenClaw Rise and Fall — Timeline and Real Reasons Behind the Collapse</title>
      <dc:creator>Rost</dc:creator>
      <pubDate>Tue, 28 Apr 2026 12:50:44 +0000</pubDate>
      <link>https://forem.com/rosgluk/openclaw-rise-and-fall-timeline-and-real-reasons-behind-the-collapse-3nh7</link>
      <guid>https://forem.com/rosgluk/openclaw-rise-and-fall-timeline-and-real-reasons-behind-the-collapse-3nh7</guid>
      <description>&lt;p&gt;OpenClaw did not fail as a product. It lost its fuel.&lt;/p&gt;

&lt;p&gt;What looks like a dramatic boom and collapse is actually something more mechanical and more interesting. OpenClaw was a thin layer on top of a temporary economic advantage in the AI ecosystem. Once that advantage disappeared, so did the attention.&lt;/p&gt;

&lt;p&gt;This article breaks down the exact timeline, the real drivers behind the spike, and why the drop was inevitable.&lt;/p&gt;




&lt;h2&gt;
  
  
  The illusion of product-driven growth
&lt;/h2&gt;

&lt;p&gt;Most people assume OpenClaw grew because it was a great AI agent — and that is only partially true.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.glukhov.org/ai-systems/openclaw/" rel="noopener noreferrer"&gt;OpenClaw&lt;/a&gt; was genuinely useful. It supported more than 50 integrations, worked across Claude, GPT-4o, Gemini, and DeepSeek, and attracted enterprise adoption — Tencent built a platform directly on top of it. But capability alone did not set it apart from comparable alternatives:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cline&lt;/li&gt;
&lt;li&gt;LangChain-based setups&lt;/li&gt;
&lt;li&gt;Other agent wrappers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The real driver was access rather than capability — a distinction that explains the entire arc of OpenClaw's rise and collapse.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;OpenClaw made powerful models cheap to use at scale.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Phase 1. Quiet emergence (November 2025)
&lt;/h2&gt;

&lt;p&gt;The story begins in November 2025, when Peter Steinberger built the first prototype in roughly one hour. He was annoyed that the tool did not exist yet, so he built it, calling it &lt;strong&gt;Clawdbot&lt;/strong&gt; — a nod to Anthropic's Claude, complete with a lobster mascot.&lt;/p&gt;

&lt;p&gt;The first version was practical rather than flashy: an AI agent that could manage calendars, check email, book appointments, and automate computer tasks on the user's behalf. Steinberger shared it in developer communities, and early adopters recognized something promising, though growth at this stage remained slow and organic, with no visibility outside technical circles.&lt;/p&gt;




&lt;h2&gt;
  
  
  Phase 2. The viral ignition (January–February 2026)
&lt;/h2&gt;

&lt;p&gt;The spike began when several forces aligned in quick succession.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Naming drama and forced rebrands
&lt;/h3&gt;

&lt;p&gt;In late January 2026, Anthropic sent Steinberger a trademark notice over "Clawdbot," citing phonetic similarity to "Claude." By his account, Anthropic handled it professionally — but the notice forced a rename. The project became &lt;strong&gt;Moltbot&lt;/strong&gt; for three days, then &lt;strong&gt;OpenClaw&lt;/strong&gt;, and the forced rebranding generated exactly the kind of attention that marketing budgets cannot buy.&lt;/p&gt;




&lt;h3&gt;
  
  
  2. The agent hype wave
&lt;/h3&gt;

&lt;p&gt;The market was already primed for an agent breakthrough:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;autonomous agents were trending across social media and the tech press&lt;/li&gt;
&lt;li&gt;"AI that can act" had become the dominant narrative&lt;/li&gt;
&lt;li&gt;developers were actively searching for tools that could automate complex workflows&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;OpenClaw arrived at exactly the right moment, when demand for this kind of tool was at its highest and the story of autonomous AI agents was capturing mainstream attention.&lt;/p&gt;




&lt;h3&gt;
  
  
  3. The cheap compute loophole
&lt;/h3&gt;

&lt;p&gt;The most decisive factor was a compute pricing loophole that no amount of good engineering could have manufactured deliberately.&lt;/p&gt;

&lt;p&gt;Users discovered that OpenClaw could connect to Claude by grabbing the OAuth token from a Claude Pro or Max subscription and spoofing the authentication headers of Anthropic's own &lt;a href="https://www.glukhov.org/ai-devtools/claude-code/" rel="noopener noreferrer"&gt;Claude Code&lt;/a&gt; client. Instead of paying per token through the API, they effectively got:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;near unlimited agent execution for a fixed monthly cost&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The numbers made this explosive. A Claude Max subscription cost $200 per month, while running equivalent workloads through the API would cost far more — industry analysts estimated a price gap of more than five times, meaning Anthropic was quietly subsidising each heavy OpenClaw user by hundreds of dollars a month.&lt;/p&gt;

&lt;p&gt;This changed behavior instantly:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;developers ran heavy experiments they would never have attempted at API prices&lt;/li&gt;
&lt;li&gt;viral demos flooded social media&lt;/li&gt;
&lt;li&gt;large-scale automation became accessible to solo developers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Nothing in the software changed — the economics did, and that shift alone was enough to ignite a viral adoption curve. By March 2, 2026, the OpenClaw repository had accumulated &lt;strong&gt;247,000 GitHub stars and 47,700 forks&lt;/strong&gt;, reaching 100,000 stars in under 48 hours — a pace widely described as the fastest-growing GitHub project in history.&lt;/p&gt;




&lt;h2&gt;
  
  
  Phase 3. Peak usage and inflated expectations
&lt;/h2&gt;

&lt;p&gt;At peak interest, developers pushed agents to extremes, social media amplified the results, and expectations exploded around what personal AI automation could achieve. An estimated &lt;strong&gt;135,000 OpenClaw instances&lt;/strong&gt; were running simultaneously when Anthropic made its announcement, and one founder described publicly how she had deployed nine separate AI agents to manage her administrative work and personal household logistics.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why do AI tools suddenly become popular and then fade
&lt;/h3&gt;

&lt;p&gt;Because the initial spike is driven by novelty and perceived leverage. Once users test the limits, reality sets in — the tool proves harder to use reliably, and the economic conditions that made it attractive often turn out to be temporary. In OpenClaw's case, the perceived leverage was real but built on borrowed economics that Anthropic had not priced for agentic workloads.&lt;/p&gt;




&lt;h2&gt;
  
  
  The creator leaves for OpenAI (February 2026)
&lt;/h2&gt;

&lt;p&gt;Before the collapse arrived, OpenClaw lost its original architect.&lt;/p&gt;

&lt;p&gt;On February 14–15, 2026, Steinberger announced he was leaving the project to join OpenAI. Sam Altman posted that Steinberger would "drive the next generation of personal agents" at the company, and Steinberger wrote that "teaming up with OpenAI is the fastest way to bring this to everyone." OpenClaw was transferred to an independent open-source foundation with OpenAI's continued support.&lt;/p&gt;

&lt;p&gt;The timing was striking. Anthropic had declined to hire or partner with Steinberger, despite the fact that his tool had become arguably their best free marketing in years — a project built explicitly to showcase how good Claude was. Instead, he went directly to their biggest competitor, taking with him both the project's momentum and its community relationships.&lt;/p&gt;




&lt;h2&gt;
  
  
  Phase 4. The correction begins
&lt;/h2&gt;

&lt;p&gt;Two things started happening at the same time.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Reality of agent limitations
&lt;/h3&gt;

&lt;p&gt;Users who had deployed OpenClaw at scale began encountering its real constraints:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;agents are brittle and fail unpredictably on multi-step tasks&lt;/li&gt;
&lt;li&gt;reliability is inconsistent across different workflows and environments&lt;/li&gt;
&lt;li&gt;setup and maintenance are non-trivial for most users outside technical circles&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These limitations alone would have caused a gradual decline, but OpenClaw did not taper off gradually — it dropped sharply, because a second and more decisive force hit at exactly the same time.&lt;/p&gt;




&lt;h3&gt;
  
  
  2. The economic layer breaks
&lt;/h3&gt;

&lt;p&gt;Anthropic had already run this playbook once. In January 2026, just weeks before OpenClaw peaked, they blocked &lt;a href="https://www.glukhov.org/ai-devtools/opencode/" rel="noopener noreferrer"&gt;&lt;strong&gt;OpenCode&lt;/strong&gt;&lt;/a&gt; — another popular third-party coding client — from using Claude subscription tokens in what was framed as a terms of service violation, not a capacity issue. OpenClaw users had every reason to expect the same treatment, and that moment arrived in April.&lt;/p&gt;

&lt;p&gt;Anthropic then introduced restrictions that closed the loophole entirely:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;third-party tools were blocked from using subscription OAuth tokens&lt;/li&gt;
&lt;li&gt;usage shifted to pay-as-you-go overage billing or full API keys&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This removed the key advantage:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;cheap large-scale execution&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now users faced a very different cost structure:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Before cutoff&lt;/th&gt;
&lt;th&gt;After cutoff&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Monthly plan cost&lt;/td&gt;
&lt;td&gt;$20–$200 (flat)&lt;/td&gt;
&lt;td&gt;$20–$200 + usage&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cost per task&lt;/td&gt;
&lt;td&gt;Effectively $0&lt;/td&gt;
&lt;td&gt;$0.50–$2.00&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;API rate (Sonnet 4.6 input)&lt;/td&gt;
&lt;td&gt;Covered by sub&lt;/td&gt;
&lt;td&gt;$3 per million tokens&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;API rate (Sonnet 4.6 output)&lt;/td&gt;
&lt;td&gt;Covered by sub&lt;/td&gt;
&lt;td&gt;$15 per million tokens&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Increase for heavy users&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;td&gt;10× to 50×&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
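&lt;p&gt;The per-task figures follow directly from the token rates in the table. A back-of-the-envelope sketch (the token counts per task are illustrative assumptions, not measurements):&lt;/p&gt;

```python
# Back-of-the-envelope cost per agent task at the post-cutoff rates from
# the table above (Sonnet 4.6: $3 per million input tokens, $15 per
# million output tokens). Token counts per task are illustrative
# assumptions, not measurements.
INPUT_RATE = 3.00 / 1_000_000    # dollars per input token
OUTPUT_RATE = 15.00 / 1_000_000  # dollars per output token

def task_cost(input_tokens, output_tokens):
    """Dollar cost of one task at pay-as-you-go API rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# A mid-sized agent task: roughly 100k tokens of context read, 20k generated.
print(f"${task_cost(100_000, 20_000):.2f} per task")  # $0.60 per task
```

&lt;p&gt;At that rate, a heavy user running hundreds of tasks per day lands quickly in the 10× to 50× cost increase shown in the table.&lt;/p&gt;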

&lt;h3&gt;
  
  
  What caused the sudden drop in interest in AI agent tools
&lt;/h3&gt;

&lt;p&gt;The answer is straightforward: not a lack of innovation, but the loss of affordable compute. Once the pricing floor disappeared, the incentive to experiment and share disappeared with it, and search interest followed almost immediately.&lt;/p&gt;




&lt;h2&gt;
  
  
  April 4, 2026 — The hard cutoff
&lt;/h2&gt;

&lt;p&gt;On April 4, 2026, at 12 PM Pacific Time, the subscription access ended for all third-party tools.&lt;/p&gt;

&lt;p&gt;Boris Cherny, Head of Claude Code at Anthropic, posted on X that Claude Pro and Max subscriptions would no longer cover usage from third-party tools, effective immediately. An Anthropic spokesperson confirmed that using subscriptions with third-party tools was always against the terms of service, and that those tools were placing "an outsized strain on our systems." Additional context made the timing feel urgent: on April 1, the full source code of Claude Code — 512,000 lines of TypeScript — had leaked through an npm package, exposing exactly how Anthropic's first-party tools authenticated with the backend and making it more pressing to lock down third-party tools that were spoofing those same patterns.&lt;/p&gt;

&lt;p&gt;Anthropic offered a one-time credit equal to one month's subscription fee and a 30% discount on pre-purchased usage bundles to ease the transition. For light users, the credit covered the adjustment period, but for power users running multiple instances the new numbers simply did not work. The effect on activity was immediate:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;experimentation stopped&lt;/li&gt;
&lt;li&gt;viral sharing disappeared&lt;/li&gt;
&lt;li&gt;search interest collapsed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This matches the sharp drop in Google Trends almost perfectly. The full policy mechanics and migration options after the cutoff are covered in &lt;a href="https://www.glukhov.org/ai-systems/openclaw/anthropic-claude-subscription-agent-tools/" rel="noopener noreferrer"&gt;Claude, OpenClaw, and the End of Flat Pricing for Agents&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  OpenAI moves in the opposite direction
&lt;/h2&gt;

&lt;p&gt;On the same day as the Anthropic ban, OpenAI publicly confirmed that ChatGPT Plus, Pro, and Team subscribers were entirely free to use their subscriptions to power OpenClaw through OAuth — including with models like GPT-5.3 Codex for complex coding tasks.&lt;/p&gt;

&lt;p&gt;This was not accidental timing. By hiring Steinberger and explicitly opening their subscription gates, OpenAI positioned themselves as the developer-friendly alternative at the exact moment Anthropic cut off its most active community, securing the loyalty of the developers who were building the next generation of AI tools.&lt;/p&gt;




&lt;h2&gt;
  
  
  Phase 5. Where OpenClaw users actually went
&lt;/h2&gt;

&lt;p&gt;Users did not disappear after the ban — they redistributed across a spectrum of alternatives depending on their technical depth and budget.&lt;/p&gt;

&lt;h3&gt;
  
  
  Direct usage of chat assistants
&lt;/h3&gt;

&lt;p&gt;Many users moved back to direct chat interfaces, trading agent automation for the simplicity and reliability they had given up:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ChatGPT&lt;/li&gt;
&lt;li&gt;Claude UI&lt;/li&gt;
&lt;li&gt;Gemini&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Are AI agents replacing traditional chat assistants
&lt;/h3&gt;

&lt;p&gt;No — for most users, agents add complexity without enough reliability gains. The chat interface remains the default for daily use because it is faster to start, easier to debug when something goes wrong, and requires no infrastructure setup. Agents serve a committed minority of power users, not the general population. The &lt;a href="https://www.glukhov.org/ai-devtools/" rel="noopener noreferrer"&gt;AI developer tools&lt;/a&gt; ecosystem has evolved to fill this gap with tools that sit between raw agents and simple chat, giving developers structured assistance without full agentic overhead.&lt;/p&gt;




&lt;h3&gt;
  
  
  Cheaper model ecosystems
&lt;/h3&gt;

&lt;p&gt;Power users with the technical ability to self-host migrated toward lower-cost alternatives:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Qwen&lt;/li&gt;
&lt;li&gt;DeepSeek&lt;/li&gt;
&lt;li&gt;other low-cost models accessible through &lt;a href="https://www.glukhov.org/llm-hosting/comparisons/hosting-llms-ollama-localai-jan-lmstudio-vllm-comparison/" rel="noopener noreferrer"&gt;Ollama&lt;/a&gt; for fully local setups&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Which models are popular for low-cost AI experimentation
&lt;/h3&gt;

&lt;p&gt;Models that offer lower pricing, fewer usage restrictions, and flexible deployment, including local self-hosting, absorbed the bulk of displaced OpenClaw power users. These ecosystems grew quietly rather than generating public hype, which is why the migration was largely invisible in trend data even as it represented a significant redistribution of compute demand.&lt;/p&gt;




&lt;h3&gt;
  
  
  Alternative agent frameworks
&lt;/h3&gt;

&lt;p&gt;Developers who still needed agent capabilities switched to leaner approaches:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;custom scripts tailored to specific workflows&lt;/li&gt;
&lt;li&gt;lightweight frameworks with fewer dependencies&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.glukhov.org/llm-hosting/" rel="noopener noreferrer"&gt;self-hosted solutions&lt;/a&gt; combining local models with minimal tooling&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The key difference from OpenClaw is that these users optimized for cost and control rather than convenience, and built for sustainability rather than maximum automation at minimum price. This is the pattern common across the &lt;a href="https://www.glukhov.org/ai-systems/" rel="noopener noreferrer"&gt;self-hosted AI systems&lt;/a&gt; ecosystem — provider independence treated as a design requirement, not an afterthought.&lt;/p&gt;




&lt;h2&gt;
  
  
  The overlooked factor — why cost is the real product
&lt;/h2&gt;

&lt;p&gt;The most important insight from OpenClaw's trajectory is that cost functions as the real product in AI adoption.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why is cost important in AI adoption
&lt;/h3&gt;

&lt;p&gt;Because usage scales non-linearly with compute costs. When compute is cheap, experimentation explodes, innovation accelerates, and attention grows because viral sharing becomes economically rational. When compute becomes expensive, usage contracts to serious workflows only, casual users leave, and hype disappears almost overnight — which is precisely why &lt;a href="https://www.glukhov.org/llm-performance/cost-effective-llm-applications/" rel="noopener noreferrer"&gt;token optimization and cost reduction strategies&lt;/a&gt; become critical skills once compute stops being subsidized.&lt;/p&gt;

&lt;p&gt;OpenClaw demonstrated this rule in an unusually clear form: between February and April 2026, the software did not change, but the economics of running it did — and that single shift was enough to collapse the community in a matter of days.&lt;/p&gt;




&lt;h2&gt;
  
  
  OpenClaw was never the core story
&lt;/h2&gt;

&lt;p&gt;OpenClaw functioned as a surface layer on top of more fundamental forces.&lt;/p&gt;

&lt;p&gt;The real story involved three factors operating simultaneously:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;access to Claude models at subscription prices rather than API rates&lt;/li&gt;
&lt;li&gt;a five-to-one pricing mismatch between what users paid and what usage actually cost Anthropic&lt;/li&gt;
&lt;li&gt;a policy correction that had to happen eventually given the scale of that mismatch&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once those underlying conditions changed, any tool that depended on them would show the same pattern — which is exactly why similar tools spiked and declined in lockstep, regardless of their individual quality or feature sets. Anthropic's decision also revealed something strategic: by blocking third-party clients while protecting Claude Code, the company chose to concentrate developer engagement inside its own first-party tooling at a moment when independent communities were iterating faster than any centralized lab.&lt;/p&gt;




&lt;h2&gt;
  
  
  The pattern repeats across AI
&lt;/h2&gt;

&lt;p&gt;OpenClaw's trajectory is not unique — the same cycle has played out repeatedly across the AI ecosystem.&lt;/p&gt;

&lt;p&gt;The same pattern appears in AutoGPT, BabyAGI, and other early agent frameworks that attracted massive attention and then faded as compute costs, reliability limits, or platform restrictions were enforced. The cycle is consistent:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;New capability appears
&lt;/li&gt;
&lt;li&gt;Cheap or free usage emerges
&lt;/li&gt;
&lt;li&gt;Viral experimentation begins
&lt;/li&gt;
&lt;li&gt;Costs or limits are enforced
&lt;/li&gt;
&lt;li&gt;Attention collapses
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each cycle leaves behind a smaller, more committed user base and a clearer understanding of what actually works at scale — which is how progress compounds even through the boom-and-bust pattern.&lt;/p&gt;




&lt;h2&gt;
  
  
  OpenClaw vs Hermes Agent — what the trend data shows
&lt;/h2&gt;

&lt;p&gt;The chart above compares worldwide Google Trends search interest for OpenClaw AI (blue) and Hermes Agent (red) over the past three months. OpenClaw peaked at an index of 100 in mid-March 2026 and collapsed sharply in April after the subscription cutoff. Hermes Agent barely registered during OpenClaw's peak, then gradually picked up interest as OpenClaw faded — reaching an index of around 40 in bursts through April, compared to OpenClaw's average of 49 and Hermes's average of 8.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.glukhov.org/ai-systems/hermes/" rel="noopener noreferrer"&gt;Hermes Agent&lt;/a&gt; is an open-source framework built by Nous Research and released in February 2026. Unlike OpenClaw, which is optimized for broad reactive tool use across many integrations, Hermes is built around a learning loop: it generates reusable skills from successful task completions, refines them through continued use, and maintains a persistent model of the user across sessions. The result is an agent that improves the more it is used on the same task types, rather than approaching each job from the same baseline. It reached 95,600 GitHub stars in its first seven weeks.&lt;/p&gt;

&lt;p&gt;The gap in the chart is significant. OpenClaw's hype surplus did not transfer to Hermes — it evaporated. Casual experimenters who had been running agents cheaply on Claude subscriptions simply left the space rather than migrating to an alternative. The users who did move to Hermes were the committed technical minority who needed persistent, self-hosted automation and were willing to set it up properly — which is exactly the kind of smaller, more sustainable user base that remains after every AI hype cycle collapses. For those users, &lt;a href="https://www.glukhov.org/ai-systems/hermes/production-setup/" rel="noopener noreferrer"&gt;Hermes production setup patterns&lt;/a&gt; are worth exploring.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final takeaway — follow the economics, not the interface
&lt;/h2&gt;

&lt;p&gt;OpenClaw did not rise because it was revolutionary — it rose because it unlocked something temporarily underpriced, and it fell not because it failed as a product but because that pricing advantage was removed by the platform it depended on.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This was not a product lifecycle. It was a pricing event.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Understanding this distinction is critical for predicting the next spike in AI tooling. The same pattern will repeat whenever a new compute subsidy appears, whether through a subscription loophole, a generous free tier, or a new open-weight model that undercuts established pricing. Track where compute is temporarily cheap and you will find the next wave of viral AI tools before the hype arrives.&lt;/p&gt;

</description>
      <category>selfhosting</category>
      <category>llm</category>
      <category>ai</category>
    </item>
    <item>
      <title>Llama-Server Router Mode - Dynamic Model Switching Without Restarts</title>
      <dc:creator>Rost</dc:creator>
      <pubDate>Mon, 27 Apr 2026 11:42:46 +0000</pubDate>
      <link>https://forem.com/rosgluk/llama-server-router-mode-dynamic-model-switching-without-restarts-1h0j</link>
      <guid>https://forem.com/rosgluk/llama-server-router-mode-dynamic-model-switching-without-restarts-1h0j</guid>
      <description>&lt;p&gt;For a long time, &lt;code&gt;llama.cpp&lt;/code&gt; had a glaring limitation:&lt;br&gt;&lt;br&gt;
you could only serve &lt;strong&gt;one model per process&lt;/strong&gt;, and switching meant a restart.&lt;/p&gt;



&lt;p&gt;That era is over.&lt;/p&gt;

&lt;p&gt;Recent updates introduced &lt;strong&gt;router mode&lt;/strong&gt; in &lt;code&gt;llama-server&lt;/code&gt;, bringing something much closer to what people expect from modern local LLM runtimes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;dynamic model loading
&lt;/li&gt;
&lt;li&gt;unloading on demand
&lt;/li&gt;
&lt;li&gt;switching per request
&lt;/li&gt;
&lt;li&gt;no process restart
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In other words: &lt;strong&gt;Ollama-like behavior, but without the training wheels&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;If you are still deciding between local runtimes, cloud APIs, and self-hosted infrastructure, the&lt;br&gt;
&lt;a href="https://www.glukhov.org/llm-hosting/" rel="noopener noreferrer"&gt;LLM hosting overview&lt;/a&gt; is a good starting point.&lt;/p&gt;


&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Router mode requires a recent &lt;code&gt;llama-server&lt;/code&gt; build. Older builds do not ship the router flags (&lt;code&gt;--models-dir&lt;/code&gt;, &lt;code&gt;--models-preset&lt;/code&gt;, and the other &lt;code&gt;--models-*&lt;/code&gt; options).&lt;/p&gt;

&lt;p&gt;For install options (package manager, pre-built binaries, or full source build with CUDA), see the&lt;br&gt;
&lt;a href="https://www.glukhov.org/llm-hosting/llama-cpp/" rel="noopener noreferrer"&gt;llama.cpp quickstart&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Once you have &lt;code&gt;llama-server&lt;/code&gt;, confirm your build supports router mode:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;llama-server &lt;span class="nt"&gt;--help&lt;/span&gt; | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; models
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the &lt;code&gt;--models-dir&lt;/code&gt; and &lt;code&gt;--models-preset&lt;/code&gt; flags appear, you are good. If they are absent, update to a newer build.&lt;/p&gt;

&lt;p&gt;My current output of models-related help:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nt"&gt;-cl&lt;/span&gt;,   &lt;span class="nt"&gt;--cache-list&lt;/span&gt;                     show list of models &lt;span class="k"&gt;in &lt;/span&gt;cache
                                        Prefix/Suffix/Middle&lt;span class="o"&gt;)&lt;/span&gt; as some models prefer this. &lt;span class="o"&gt;(&lt;/span&gt;default: disabled&lt;span class="o"&gt;)&lt;/span&gt;
                                        models with dynamic resolution &lt;span class="o"&gt;(&lt;/span&gt;default: &lt;span class="nb"&gt;read &lt;/span&gt;from model&lt;span class="o"&gt;)&lt;/span&gt;
                                        models with dynamic resolution &lt;span class="o"&gt;(&lt;/span&gt;default: &lt;span class="nb"&gt;read &lt;/span&gt;from model&lt;span class="o"&gt;)&lt;/span&gt;
                                        embedding models &lt;span class="o"&gt;(&lt;/span&gt;default: disabled&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="nt"&gt;--models-dir&lt;/span&gt; PATH                       directory containing models &lt;span class="k"&gt;for &lt;/span&gt;the router server &lt;span class="o"&gt;(&lt;/span&gt;default: disabled&lt;span class="o"&gt;)&lt;/span&gt;
                                        &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;env&lt;/span&gt;: LLAMA_ARG_MODELS_DIR&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="nt"&gt;--models-preset&lt;/span&gt; PATH                    path to INI file containing model presets &lt;span class="k"&gt;for &lt;/span&gt;the router server
                                        &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;env&lt;/span&gt;: LLAMA_ARG_MODELS_PRESET&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="nt"&gt;--models-max&lt;/span&gt; N                          &lt;span class="k"&gt;for &lt;/span&gt;router server, maximum number of models to load simultaneously
                                        &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;env&lt;/span&gt;: LLAMA_ARG_MODELS_MAX&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="nt"&gt;--models-autoload&lt;/span&gt;, &lt;span class="nt"&gt;--no-models-autoload&lt;/span&gt;
                                        &lt;span class="k"&gt;for &lt;/span&gt;router server, whether to automatically load models &lt;span class="o"&gt;(&lt;/span&gt;default:
                                        &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;env&lt;/span&gt;: LLAMA_ARG_MODELS_AUTOLOAD&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  What router mode actually does
&lt;/h2&gt;

&lt;p&gt;Router mode turns &lt;code&gt;llama-server&lt;/code&gt; into a &lt;strong&gt;model dispatcher&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Instead of binding to a single model via &lt;code&gt;-m&lt;/code&gt;, the server:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;starts with no model loaded&lt;/li&gt;
&lt;li&gt;receives a request that names a model&lt;/li&gt;
&lt;li&gt;loads that model if it is not already in memory&lt;/li&gt;
&lt;li&gt;runs inference&lt;/li&gt;
&lt;li&gt;optionally unloads the model after the response, or keeps it warm for the next request&lt;/li&gt;
&lt;/ul&gt;
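&lt;p&gt;In practice, clients talk to the router the same way they talk to any OpenAI-compatible endpoint, naming a preset in the &lt;code&gt;"model"&lt;/code&gt; field. A minimal sketch, assuming the server runs on &lt;code&gt;localhost:8080&lt;/code&gt; and a preset named &lt;code&gt;qwen&lt;/code&gt; is defined:&lt;/p&gt;

```python
import json

# Sketch of a router-mode request, assuming the server exposes its usual
# OpenAI-compatible /v1/chat/completions endpoint on localhost:8080 and a
# preset named "qwen" is defined in the model config. The router loads the
# named model on first use if it is not already in memory.
payload = {
    "model": "qwen",  # must match a preset/section name in the config
    "messages": [
        {"role": "user", "content": "Write a binary search in Python."}
    ],
    "max_tokens": 256,
}

body = json.dumps(payload)
# Sending is omitted so the sketch runs without a live server; in practice
# you would POST `body` to http://localhost:8080/v1/chat/completions with
# a Content-Type of application/json (curl, urllib.request, httpx, ...).
print(body)
```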

&lt;h3&gt;
  
  
  The key idea
&lt;/h3&gt;

&lt;p&gt;You are no longer running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./llama-server &lt;span class="nt"&gt;-m&lt;/span&gt; model.gguf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You are running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./llama-server &lt;span class="nt"&gt;--models&lt;/span&gt; models.ini &lt;span class="nt"&gt;--port&lt;/span&gt; 8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And letting the server decide &lt;strong&gt;what to load and when&lt;/strong&gt;, based on what the client actually requests.&lt;/p&gt;

&lt;p&gt;This matters because it means one persistent process can serve an entire fleet of models, with clients selecting the right one per task — a coding model, a chat model, a summarisation model — without any coordination overhead on your side.&lt;/p&gt;
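&lt;p&gt;The per-task selection can live entirely on the client side. A minimal dispatch sketch (the task-to-model mapping is an illustrative assumption; the identifiers must match section names in your &lt;code&gt;models.ini&lt;/code&gt;):&lt;/p&gt;

```python
# Client-side dispatch over one router process: each request names the
# preset suited to the task. The mapping is an illustrative assumption;
# identifiers must match section names in models.ini.
TASK_MODEL = {
    "code": "qwen",          # coding-tuned model, large context
    "chat": "llama3",        # general-purpose chat
    "summarize": "mistral",  # smaller and cheaper
}

def pick_model(task_type):
    """Return the preset identifier for a task, defaulting to chat."""
    return TASK_MODEL.get(task_type, TASK_MODEL["chat"])

print(pick_model("code"))       # qwen
print(pick_model("translate"))  # llama3 (fallback)
```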




&lt;h2&gt;
  
  
  Configuration: defining your models
&lt;/h2&gt;

&lt;p&gt;This is where things are still a bit raw.&lt;/p&gt;

&lt;p&gt;There is no fully stable official format yet, but current builds support &lt;strong&gt;INI-style model definitions&lt;/strong&gt; via a config file.&lt;/p&gt;

&lt;h3&gt;
  
  
  Example models.ini
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ini"&gt;&lt;code&gt;&lt;span class="nn"&gt;[llama3]&lt;/span&gt;
&lt;span class="py"&gt;model&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;/opt/models/llama-3-8b-instruct.Q5_K_M.gguf&lt;/span&gt;
&lt;span class="py"&gt;ctx-size&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;8192&lt;/span&gt;
&lt;span class="py"&gt;ngl&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;35&lt;/span&gt;
&lt;span class="py"&gt;threads&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;8&lt;/span&gt;

&lt;span class="nn"&gt;[mistral]&lt;/span&gt;
&lt;span class="py"&gt;model&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;/opt/models/mistral-7b-instruct-v0.3.Q4_K_M.gguf&lt;/span&gt;
&lt;span class="py"&gt;ctx-size&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;4096&lt;/span&gt;
&lt;span class="py"&gt;ngl&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;20&lt;/span&gt;
&lt;span class="py"&gt;threads&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;8&lt;/span&gt;

&lt;span class="nn"&gt;[qwen]&lt;/span&gt;
&lt;span class="py"&gt;model&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;/opt/models/qwen2.5-coder-7b-instruct.Q5_K_M.gguf&lt;/span&gt;
&lt;span class="py"&gt;ctx-size&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;16384&lt;/span&gt;
&lt;span class="py"&gt;ngl&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;35&lt;/span&gt;
&lt;span class="py"&gt;threads&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;8&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each section name becomes the &lt;strong&gt;model identifier&lt;/strong&gt; that clients use in the &lt;code&gt;"model"&lt;/code&gt; field of their API requests.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key config parameters
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Parameter&lt;/th&gt;
&lt;th&gt;What it controls&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;model&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Absolute path to the GGUF file&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;ctx-size&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Context window size in tokens. Larger values use more VRAM.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;ngl&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Number of GPU layers offloaded. Set to &lt;code&gt;0&lt;/code&gt; for CPU-only; increase until you hit VRAM limits.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;threads&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;CPU threads for the layers that remain on CPU.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Choosing the right &lt;code&gt;ngl&lt;/code&gt; value depends on your GPU's available VRAM — for GPU selection and hardware economics, the &lt;a href="https://www.glukhov.org/hardware/" rel="noopener noreferrer"&gt;compute hardware guide&lt;/a&gt; is a useful reference. To watch live VRAM consumption while dialing it in, see the &lt;a href="https://www.glukhov.org/observability/gpu-monitoring-apps-linux/" rel="noopener noreferrer"&gt;GPU monitoring tools for Linux&lt;/a&gt;.&lt;/p&gt;
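&lt;p&gt;As a rough starting point, you can estimate how many layers will fit from the model file size — a sketch with illustrative numbers and a simplifying assumption that all layers are roughly equal in size, not measured values:&lt;/p&gt;

```python
# Rough heuristic for picking an ngl value: assume layers are roughly
# equal in size and leave headroom for the KV cache and driver overhead.
# All numbers here are illustrative assumptions, not measured values.

def suggest_ngl(model_size_gb: float, total_layers: int,
                free_vram_gb: float, headroom_gb: float = 1.5) -> int:
    """Return the largest layer count that should fit in free VRAM."""
    per_layer_gb = model_size_gb / total_layers
    budget_gb = max(free_vram_gb - headroom_gb, 0.0)
    return min(total_layers, int(budget_gb / per_layer_gb))

# A ~5.6 GB Q5 8B model with 32 layers on a GPU with 5 GB free:
print(suggest_ngl(5.6, 32, 5.0))   # 20 of 32 layers fit
```

Start there, then nudge the value up while watching VRAM with the monitoring tools linked above.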

&lt;h3&gt;
  
  
  Starting the server with config
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./llama-server &lt;span class="nt"&gt;--models&lt;/span&gt; /opt/llama.cpp/models.ini &lt;span class="nt"&gt;--port&lt;/span&gt; 8080
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Confirm the server started correctly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl http://localhost:8080/v1/models | jq &lt;span class="s1"&gt;'.data[].id'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see each section name from your &lt;code&gt;models.ini&lt;/code&gt; listed as a model ID.&lt;/p&gt;
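&lt;p&gt;If you script this check, the response follows the OpenAI-style list shape — here is a small sketch that validates every &lt;code&gt;models.ini&lt;/code&gt; section is registered (the payload below is a hypothetical example of that shape):&lt;/p&gt;

```python
import json

# Parse a /v1/models response and check that every model defined in
# models.ini is registered. The payload shape assumed here is the
# OpenAI-style list format: {"data": [{"id": ...}, ...]}.
payload = json.loads(
    '{"data": [{"id": "llama3"}, {"id": "mistral"}, {"id": "qwen"}]}'
)

registered = {m["id"] for m in payload["data"]}
expected = {"llama3", "mistral", "qwen"}  # section names from models.ini

missing = expected - registered
if missing:
    raise SystemExit(f"models not registered: {sorted(missing)}")
print("all models registered:", sorted(registered))
```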

&lt;h3&gt;
  
  
  A note on stability
&lt;/h3&gt;

&lt;p&gt;The INI config interface is &lt;strong&gt;still evolving&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;flags may change between commits&lt;/li&gt;
&lt;li&gt;some parameters are only recognised by specific build configurations&lt;/li&gt;
&lt;li&gt;documentation lags behind implementation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Pin to a specific llama.cpp commit if you need reproducibility across restarts.&lt;/p&gt;




&lt;h2&gt;
  
  
  API usage: switching models on request
&lt;/h2&gt;

&lt;p&gt;Once the server is running, model switching happens through the standard OpenAI-compatible API. You simply set the &lt;code&gt;"model"&lt;/code&gt; field.&lt;/p&gt;

&lt;h3&gt;
  
  
  List registered models
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl http://localhost:8080/v1/models
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Completion request — first model
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl http://localhost:8080/v1/chat/completions &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{
    "model": "llama3",
    "messages": [
      {"role": "user", "content": "Explain router mode in one paragraph"}
    ]
  }'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Switch to a different model — same endpoint, same port
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl http://localhost:8080/v1/chat/completions &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{
    "model": "qwen",
    "messages": [
      {"role": "user", "content": "Write a Python function that reads a CSV file"}
    ]
  }'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The server handles the unload/load cycle transparently. Your client code does not change — only the &lt;code&gt;model&lt;/code&gt; field.&lt;/p&gt;

&lt;h3&gt;
  
  
  Python example
&lt;/h3&gt;

&lt;p&gt;If you are using the &lt;code&gt;openai&lt;/code&gt; Python client:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;openai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;OpenAI&lt;/span&gt;

&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;OpenAI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;base_url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http://localhost:8080/v1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;not-needed&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Use the coding model
&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;chat&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;completions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;qwen&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Write a Go HTTP handler&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}],&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;choices&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Switch to the chat model — same client, different model name
&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;chat&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;completions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;llama3&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;What is the capital of Australia?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}],&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;choices&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  What happens internally
&lt;/h3&gt;

&lt;p&gt;When a request arrives for &lt;code&gt;qwen&lt;/code&gt; and &lt;code&gt;llama3&lt;/code&gt; is currently loaded:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;llama3&lt;/code&gt; is unloaded from VRAM&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;qwen&lt;/code&gt; weights are read from disk and loaded into VRAM&lt;/li&gt;
&lt;li&gt;inference runs&lt;/li&gt;
&lt;li&gt;the next request determines whether to keep &lt;code&gt;qwen&lt;/code&gt; loaded or swap again&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This directly answers the common question:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;How can a local LLM server switch models without restarting?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;By dynamically loading models per request, not binding at startup.&lt;/p&gt;
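&lt;p&gt;The policy can be sketched in a few lines — a simulation of the behaviour described above, not llama-server's actual code:&lt;/p&gt;

```python
# Minimal sketch of the per-request swap behaviour described above.
# This simulates the policy; it is not llama-server's implementation.

class RouterSim:
    def __init__(self):
        self.loaded = None   # model currently resident in VRAM
        self.swaps = 0       # how many unload/load cycles occurred

    def handle(self, model: str) -> str:
        if model != self.loaded:
            self.loaded = model   # unload old weights, load new ones
            self.swaps += 1
        return f"ran inference on {model}"

router = RouterSim()
for m in ["llama3", "llama3", "qwen", "llama3"]:
    router.handle(m)
print(router.swaps)  # 3: the initial load, then two swaps
```

Note that consecutive requests for the same model cost nothing extra — only a change in the `model` field triggers a reload.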




&lt;h2&gt;
  
  
  Systemd service: production-ready setup
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Create a dedicated user and directories
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;useradd &lt;span class="nt"&gt;--system&lt;/span&gt; &lt;span class="nt"&gt;--shell&lt;/span&gt; /usr/sbin/nologin &lt;span class="nt"&gt;--home-dir&lt;/span&gt; /opt/llama.cpp llm
&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /opt/llama.cpp/models
&lt;span class="nb"&gt;sudo chown&lt;/span&gt; &lt;span class="nt"&gt;-R&lt;/span&gt; llm:llm /opt/llama.cpp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Copy your binary and model config into place:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo cp &lt;/span&gt;build/bin/llama-server /opt/llama.cpp/
&lt;span class="nb"&gt;sudo cp &lt;/span&gt;models.ini /opt/llama.cpp/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  /etc/systemd/system/llama-server.service
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ini"&gt;&lt;code&gt;&lt;span class="nn"&gt;[Unit]&lt;/span&gt;
&lt;span class="py"&gt;Description&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;Llama.cpp Router Server&lt;/span&gt;
&lt;span class="py"&gt;After&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;network.target&lt;/span&gt;

&lt;span class="nn"&gt;[Service]&lt;/span&gt;
&lt;span class="py"&gt;Type&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;simple&lt;/span&gt;
&lt;span class="py"&gt;User&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;llm&lt;/span&gt;
&lt;span class="py"&gt;WorkingDirectory&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;/opt/llama.cpp&lt;/span&gt;
&lt;span class="py"&gt;ExecStart&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;/opt/llama.cpp/llama-server --models /opt/llama.cpp/models.ini --port 8080&lt;/span&gt;
&lt;span class="py"&gt;Restart&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;always&lt;/span&gt;
&lt;span class="py"&gt;RestartSec&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;5&lt;/span&gt;

&lt;span class="py"&gt;Environment&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;LLAMA_LOG_LEVEL=info&lt;/span&gt;

&lt;span class="nn"&gt;[Install]&lt;/span&gt;
&lt;span class="py"&gt;WantedBy&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;multi-user.target&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Enable and start
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl daemon-reload
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl &lt;span class="nb"&gt;enable &lt;/span&gt;llama-server
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl start llama-server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Verify and inspect logs
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl status llama-server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;journalctl &lt;span class="nt"&gt;-u&lt;/span&gt; llama-server &lt;span class="nt"&gt;-f&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On a successful start you will see lines indicating the server is listening and the model registry has been loaded. A quick sanity check:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; http://localhost:8080/v1/models | jq &lt;span class="s1"&gt;'.data[].id'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you have a persistent service with auto-restart and centralised model switching — no manual process management required. If you want to apply the same pattern to other binaries, &lt;a href="https://www.glukhov.org/developer-tools/terminals-shell/executable-as-a-service-in-linux/" rel="noopener noreferrer"&gt;hosting any executable as a Linux service&lt;/a&gt; walks through the general approach.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;llama-server&lt;/code&gt; &lt;code&gt;--metrics&lt;/code&gt; flag exposes a Prometheus-compatible endpoint. For llama.cpp-specific dashboards, PromQL queries, and alerting rules, see the &lt;a href="https://www.glukhov.org/observability/monitoring-llm-inference-prometheus-grafana/" rel="noopener noreferrer"&gt;LLM inference monitoring guide&lt;/a&gt;. For the broader observability setup, the &lt;a href="https://www.glukhov.org/observability/" rel="noopener noreferrer"&gt;observability guide&lt;/a&gt; covers the full stack.&lt;/p&gt;




&lt;h2&gt;
  
  
  Limitations you need to understand
&lt;/h2&gt;

&lt;p&gt;Router mode is genuinely useful, but it comes with tradeoffs you should be clear about before relying on it in production.&lt;/p&gt;

&lt;h3&gt;
  
  
  Only one model in memory at a time
&lt;/h3&gt;

&lt;p&gt;Even though multiple models are defined in &lt;code&gt;models.ini&lt;/code&gt;, only one is resident in VRAM per worker at any given moment. Switching means a full unload-and-reload cycle.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;switching means reload&lt;/li&gt;
&lt;li&gt;latency spike is unavoidable&lt;/li&gt;
&lt;li&gt;on a typical 7B model at Q5, a reload can take 3–10 seconds depending on disk speed and VRAM bandwidth&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This answers another key question:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Does llama.cpp support serving multiple models at once?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Not really. It supports &lt;strong&gt;multiple definitions&lt;/strong&gt;, not simultaneous residency. If you need two models genuinely loaded in parallel, you need two processes on two separate GPUs.&lt;/p&gt;

&lt;p&gt;For measured VRAM consumption and tokens-per-second across model sizes, the &lt;a href="https://www.glukhov.org/llm-performance/" rel="noopener noreferrer"&gt;LLM performance benchmarks&lt;/a&gt; cover the full picture. For numbers specific to llama.cpp on a 16 GB GPU — dense and MoE models at multiple context sizes — see the &lt;a href="https://www.glukhov.org/llm-performance/benchmarks/best-llm-on-16gb-vram-gpu/" rel="noopener noreferrer"&gt;16 GB VRAM llama.cpp benchmarks&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  No smart caching
&lt;/h3&gt;

&lt;p&gt;Unlike Ollama, which maintains a warm pool and evicts models based on recency:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;there is no automatic model eviction strategy&lt;/li&gt;
&lt;li&gt;no background pre-warming&lt;/li&gt;
&lt;li&gt;no priority queue for frequently used models&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you send alternating requests for &lt;code&gt;llama3&lt;/code&gt; and &lt;code&gt;mistral&lt;/code&gt;, every single request triggers a reload. This is the fundamental cost of being closer to the metal.&lt;/p&gt;

&lt;h3&gt;
  
  
  Latency is unpredictable for mixed workloads
&lt;/h3&gt;

&lt;p&gt;A well-behaved workload that uses one model consistently will be fast. A workload that interleaves multiple models will be slow. Plan your client routing logic accordingly — group requests by model where possible.&lt;/p&gt;
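&lt;p&gt;Client-side, that grouping can be as simple as sorting a batch by model name before sending, so each model is loaded at most once per batch — a sketch, not part of llama.cpp:&lt;/p&gt;

```python
from itertools import groupby

# Group a mixed batch of requests by model before sending them, so the
# server loads each model at most once per batch. Sent in the original
# interleaved order, this batch would trigger a reload on every request.
requests = [("llama3", "summarise this"), ("qwen", "write a parser"),
            ("llama3", "translate this"), ("qwen", "fix this bug")]

for model, batch in groupby(sorted(requests, key=lambda r: r[0]),
                            key=lambda r: r[0]):
    prompts = [prompt for _, prompt in batch]
    # send all prompts for this model before moving to the next one
    print(model, len(prompts))
```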

&lt;h3&gt;
  
  
  Config is not stable
&lt;/h3&gt;

&lt;p&gt;The INI support exists and works in most recent builds, but it is not fully standardised. Flags and parameter names have changed across versions. If you upgrade &lt;code&gt;llama-server&lt;/code&gt;, test your &lt;code&gt;models.ini&lt;/code&gt; against the new build before deploying.&lt;/p&gt;




&lt;h2&gt;
  
  
  Llama.cpp vs Ollama: honest comparison
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;llama.cpp router&lt;/th&gt;
&lt;th&gt;Ollama&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Dynamic loading&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Model switching&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Built-in registry&lt;/td&gt;
&lt;td&gt;Partial (INI)&lt;/td&gt;
&lt;td&gt;Yes (pull-based)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Memory management&lt;/td&gt;
&lt;td&gt;Basic&lt;/td&gt;
&lt;td&gt;Advanced&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Model eviction&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;td&gt;TTL-based&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;UX polish&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;OpenAI API compat&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Control&lt;/td&gt;
&lt;td&gt;Maximum&lt;/td&gt;
&lt;td&gt;Opinionated&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Config stability&lt;/td&gt;
&lt;td&gt;Experimental&lt;/td&gt;
&lt;td&gt;Stable&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Opinionated take
&lt;/h3&gt;

&lt;p&gt;Choose llama.cpp router mode when you want:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;maximum control over runtime parameters per model&lt;/li&gt;
&lt;li&gt;minimal process overhead&lt;/li&gt;
&lt;li&gt;direct access to llama.cpp flags without abstraction layers&lt;/li&gt;
&lt;li&gt;a hackable base for custom tooling&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Choose Ollama when you want:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a stable, polished experience&lt;/li&gt;
&lt;li&gt;automatic model downloading and versioning&lt;/li&gt;
&lt;li&gt;smart keep-alive and eviction without configuration&lt;/li&gt;
&lt;li&gt;batteries included from day one&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Neither is wrong. The choice depends on how much you want to manage yourself.&lt;/p&gt;

&lt;p&gt;If you go with Ollama, the &lt;a href="https://www.glukhov.org/llm-hosting/ollama/ollama-cheatsheet/" rel="noopener noreferrer"&gt;Ollama CLI cheatsheet&lt;/a&gt; covers day-to-day commands. For a broader comparison that also includes vLLM, LM Studio, and LocalAI, see &lt;a href="https://www.glukhov.org/llm-hosting/comparisons/hosting-llms-ollama-localai-jan-lmstudio-vllm-comparison/" rel="noopener noreferrer"&gt;how different local runtimes compare in 2026&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Llama.cpp vs llama-swap
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;llama-swap&lt;/code&gt; is an &lt;strong&gt;external orchestrator&lt;/strong&gt; that sits in front of one or more &lt;code&gt;llama-server&lt;/code&gt; instances:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;it intercepts requests and inspects the &lt;code&gt;model&lt;/code&gt; field&lt;/li&gt;
&lt;li&gt;it starts the appropriate &lt;code&gt;llama-server&lt;/code&gt; process for that model&lt;/li&gt;
&lt;li&gt;it shuts down idle instances after a configurable timeout&lt;/li&gt;
&lt;li&gt;it proxies the request through once the model is ready&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For a hands-on setup, see the &lt;a href="https://www.glukhov.org/llm-hosting/llama-swap/" rel="noopener noreferrer"&gt;llama-swap quickstart&lt;/a&gt;.&lt;/p&gt;
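&lt;p&gt;The lifecycle above can be sketched as a small simulation — a policy illustration only, not llama-swap's implementation, and the timeout value is arbitrary:&lt;/p&gt;

```python
# Simulation of the llama-swap lifecycle described above: start an
# instance on first request, reuse it while warm, and stop it after an
# idle timeout. Policy sketch only, not llama-swap's implementation.

class SwapSim:
    def __init__(self, idle_timeout: float):
        self.idle_timeout = idle_timeout
        self.running = {}          # model name -> last-used timestamp
        self.starts = 0

    def handle(self, model: str, now: float) -> None:
        # stop instances that have been idle past the timeout
        for m, last in list(self.running.items()):
            if now - last > self.idle_timeout:
                del self.running[m]
        if model not in self.running:
            self.starts += 1       # spawn a llama-server for this model
        self.running[model] = now

sim = SwapSim(idle_timeout=60)
sim.handle("llama3", now=0)    # cold start
sim.handle("llama3", now=30)   # warm, reused
sim.handle("llama3", now=200)  # idle expired, started again
print(sim.starts)  # 2
```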

&lt;h3&gt;
  
  
  Key difference
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Aspect&lt;/th&gt;
&lt;th&gt;router mode&lt;/th&gt;
&lt;th&gt;llama-swap&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Built-in&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;No (separate binary)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Maturity&lt;/td&gt;
&lt;td&gt;Experimental&lt;/td&gt;
&lt;td&gt;More stable&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Flexibility&lt;/td&gt;
&lt;td&gt;Limited&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Control layer&lt;/td&gt;
&lt;td&gt;Internal&lt;/td&gt;
&lt;td&gt;External proxy&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Per-model config&lt;/td&gt;
&lt;td&gt;INI file&lt;/td&gt;
&lt;td&gt;YAML file&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Process model&lt;/td&gt;
&lt;td&gt;Single process&lt;/td&gt;
&lt;td&gt;One process per model&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  When to use llama-swap
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;llama-swap&lt;/code&gt; gives you process-level isolation per model, which means a crash in one model instance does not affect others. It also lets each model run with completely independent &lt;code&gt;llama-server&lt;/code&gt; flags.&lt;/p&gt;

&lt;p&gt;Use it if you need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;better lifecycle control and isolation&lt;/li&gt;
&lt;li&gt;smarter switching logic with configurable idle timeouts&lt;/li&gt;
&lt;li&gt;more predictable latency (each model has a warm process after first load)&lt;/li&gt;
&lt;li&gt;production stability today, not eventually&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  When native router mode is enough
&lt;/h3&gt;

&lt;p&gt;Use the built-in router if you want:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;zero external dependencies&lt;/li&gt;
&lt;li&gt;a single process to manage&lt;/li&gt;
&lt;li&gt;simpler deployment (one binary, one config file)&lt;/li&gt;
&lt;li&gt;minimal stack for dev or single-user setups&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Final thoughts
&lt;/h2&gt;

&lt;p&gt;Router mode is a meaningful step forward for &lt;code&gt;llama-server&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;It answers the long-standing demand:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;What is router mode in llama.cpp server?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It is the missing layer that turns a static binary into a &lt;strong&gt;dynamic inference service&lt;/strong&gt; — one process that can field requests for a whole catalogue of models.&lt;/p&gt;

&lt;p&gt;But it is not finished.&lt;/p&gt;

&lt;p&gt;Today it is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;powerful enough for real workloads&lt;/li&gt;
&lt;li&gt;promising as a foundation for more sophisticated routing&lt;/li&gt;
&lt;li&gt;slightly rough around the config and stability edges&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If your workload is predictable and you can group requests by model, router mode works well today. If you need production-grade reliability and per-model isolation, reach for &lt;code&gt;llama-swap&lt;/code&gt; while the native implementation matures.&lt;/p&gt;

&lt;p&gt;Either way, you get &lt;strong&gt;Ollama-like behaviour without hiding the machinery&lt;/strong&gt;.&lt;/p&gt;

</description>
      <category>cheatsheet</category>
      <category>gguf</category>
      <category>ai</category>
      <category>llm</category>
    </item>
    <item>
      <title>Claude Skills and SKILL.md for Developers: VS Code, JetBrains, Cursor</title>
      <dc:creator>Rost</dc:creator>
      <pubDate>Sat, 25 Apr 2026 08:10:03 +0000</pubDate>
      <link>https://forem.com/rosgluk/claude-skills-and-skillmd-for-developers-vs-code-jetbrains-cursor-17f6</link>
      <guid>https://forem.com/rosgluk/claude-skills-and-skillmd-for-developers-vs-code-jetbrains-cursor-17f6</guid>
      <description>&lt;p&gt;Most teams misuse Claude Skills in one of two ways. They either turn &lt;code&gt;SKILL.md&lt;/code&gt; into a dumping ground, or they never graduate from giant copy-pasted prompts.&lt;/p&gt;

&lt;p&gt;Both approaches are sloppy. If you want Skills to work in a real dev workflow, you need to treat them like code and operations logic, not like prompt poetry.&lt;/p&gt;

&lt;p&gt;Claude Skills are directories anchored by &lt;code&gt;SKILL.md&lt;/code&gt;, with optional scripts, references, and assets. They work because of progressive disclosure. The agent starts by loading only compact metadata such as the skill name and description, then reads the full instructions only when the task matches. That lets an agent keep many skills available without bloating every session from the start.&lt;/p&gt;
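&lt;p&gt;A minimal &lt;code&gt;SKILL.md&lt;/code&gt; shape looks like this — &lt;code&gt;name&lt;/code&gt; and &lt;code&gt;description&lt;/code&gt; are the frontmatter metadata loaded up front; the body content here is a hypothetical example:&lt;/p&gt;

```markdown
---
name: deploy-checklist
description: Run the team deployment checklist before any production release
---

# Deploy checklist

1. Run the test suite and confirm it is green.
2. Verify the changelog entry for this release exists.
3. Step-by-step instructions the agent reads only when the task matches.
```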

&lt;p&gt;Anthropic's own guidance makes the intended division of labour pretty clear. &lt;code&gt;CLAUDE.md&lt;/code&gt; is for durable, always-on project context. Skills are for reusable knowledge, playbooks, and invocable workflows that should load on demand. Claude Code even folded old custom commands into the same mechanism, so legacy &lt;code&gt;.claude/commands/*.md&lt;/code&gt; files still work, but Skills are now the better long-term shape — and the most reusable building block in any &lt;a href="https://www.glukhov.org/ai-devtools/" rel="noopener noreferrer"&gt;AI-powered development workflow&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  When to Use Claude Skills: CLAUDE.md vs Skills vs Hooks
&lt;/h2&gt;

&lt;p&gt;A Claude Skill is worth creating when you keep pasting the same checklist, the same deployment playbook, the same code review rubric, or the same internal API gotchas into chat. Anthropic explicitly recommends creating a skill when you keep reusing the same procedure, or when a section of &lt;code&gt;CLAUDE.md&lt;/code&gt; has grown into a process rather than a fact. That is the practical answer to the FAQ question "What is a Claude Skill and when should you use one?" Use a Skill for repeatable procedure, not for general taste or broad repo rules.&lt;/p&gt;

&lt;p&gt;The real win is control over context cost and behaviour. A good Skill is loaded only when relevant, while a bloated &lt;code&gt;CLAUDE.md&lt;/code&gt; is loaded every session. Anthropic recommends keeping &lt;code&gt;CLAUDE.md&lt;/code&gt; short and moving domain knowledge or procedures into Skills precisely because on-demand loading keeps the agent focused on the task in front of it.&lt;/p&gt;

&lt;p&gt;My opinionated rule is simple. If the instruction should apply every single session, it belongs in &lt;code&gt;CLAUDE.md&lt;/code&gt;. If the instruction is a reusable method, checklist, or workflow that matters only sometimes, it belongs in a Skill. If the action must happen automatically on every matching event, it probably belongs in a hook, not a Skill. Anthropic's feature overview frames those tools in almost exactly that layering model.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Layer&lt;/th&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;When to use&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;CLAUDE.md&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Always loaded&lt;/td&gt;
&lt;td&gt;Project facts, durable conventions, repo-wide rules&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Skill&lt;/td&gt;
&lt;td&gt;Loaded on demand&lt;/td&gt;
&lt;td&gt;Repeatable procedures, playbooks, domain checklists&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Hook&lt;/td&gt;
&lt;td&gt;Event-triggered&lt;/td&gt;
&lt;td&gt;Automatic side effects on file save, commit, or session start&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;A practical smell for each: if you find yourself pasting the same instructions into every chat, that is a Skill. If a &lt;code&gt;CLAUDE.md&lt;/code&gt; section has grown into a step-by-step process, extract it into a Skill. If you want something to fire silently every time a file is saved, write a hook instead.&lt;/p&gt;

&lt;h2&gt;
  
  
  Claude Skills IDE Support: VS Code, JetBrains, Cursor, and Codex
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.glukhov.org/ai-devtools/claude-code/" rel="noopener noreferrer"&gt;Claude Code&lt;/a&gt; runs across CLI, Desktop, VS Code, JetBrains, web, and mobile-related remote control flows. Anthropic describes the CLI as the most complete local surface, while the IDE integrations trade some CLI-only capabilities for editor-native review, file context, and tighter workflow ergonomics. Configuration, project memory, and MCP servers are shared across the local surfaces, so your &lt;code&gt;.claude&lt;/code&gt; setup follows you rather than being trapped in one editor.&lt;/p&gt;

&lt;p&gt;For VS Code, Anthropic says the extension is the recommended interface inside the editor. It provides plan review, inline diffs, file mention support, and integrated access to the CLI. The same install flow also exposes a direct path for Cursor. For JetBrains, the current supported list includes IntelliJ IDEA, PyCharm, Android Studio, WebStorm, PhpStorm, and GoLand, with diff viewing, selection sharing, file-reference shortcuts, and diagnostic sharing built into the plugin.&lt;/p&gt;

&lt;p&gt;JetBrains support is better than many developers realise. If you run &lt;code&gt;claude&lt;/code&gt; from the IDE's integrated terminal, the integration features are active automatically. If you start from an external terminal, Anthropic documents the &lt;code&gt;/ide&lt;/code&gt; command to connect Claude Code back to the JetBrains session, and it explicitly recommends launching from the same project root so Claude sees the same files your IDE sees. If you use auto-edit modes in JetBrains, Anthropic also warns that IDE configuration files can become part of the editable surface, so manual approvals are the safer default in that environment.&lt;/p&gt;

&lt;p&gt;Now the bigger point. Claude Skills are not only a Claude Code thing. Agent Skills is an open standard. The official Agent Skills quickstart says the same skill can work in VS Code with GitHub Copilot, Claude Code, and OpenAI Codex, and OpenAI's own Codex docs say Skills are available in the Codex CLI, IDE extension, and app. The Agent Skills implementation guide adds an important portability detail: &lt;code&gt;.agents/skills&lt;/code&gt; has emerged as the cross-client convention, while some clients also scan &lt;code&gt;.claude/skills&lt;/code&gt; for pragmatic compatibility.&lt;/p&gt;

&lt;p&gt;So here is the practical compatibility rule I recommend. If you are building for Claude Code first and only, author in &lt;code&gt;.claude/skills&lt;/code&gt;. If you genuinely want cross-client portability, target the open Agent Skills shape and use &lt;code&gt;.agents/skills&lt;/code&gt; as the canonical path. Do not pretend those two goals are the same. They are related, not interchangeable.&lt;/p&gt;
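&lt;p&gt;If you need both layouts during a migration, one pragmatic option is to author in one location and symlink the other. This is a plain filesystem trick, not a documented Skills feature, so verify that your clients resolve symlinks before relying on it:&lt;/p&gt;

```shell
# Author once under .agents/skills/ (the cross-client convention),
# then expose the same tree to Claude Code through a symlink.
# Plain filesystem trick, not a documented Skills feature.
mkdir -p .agents/skills/review-pr
printf -- '---\nname: review-pr\ndescription: Review pull requests.\n---\n' \
  > .agents/skills/review-pr/SKILL.md

mkdir -p .claude
ln -sfn ../.agents/skills .claude/skills

# Both paths now resolve to the same SKILL.md:
ls .claude/skills/review-pr/SKILL.md
```

&lt;p&gt;The symlink target is relative to &lt;code&gt;.claude/&lt;/code&gt;, so the repo stays relocatable.&lt;/p&gt;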

&lt;p&gt;Quick compatibility reference:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Client&lt;/th&gt;
&lt;th&gt;Skills path&lt;/th&gt;
&lt;th&gt;Notes&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Claude Code CLI&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;.claude/skills/&lt;/code&gt; or &lt;code&gt;~/.claude/skills/&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;Most complete surface; full &lt;code&gt;allowed-tools&lt;/code&gt; support&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;VS Code + Claude extension&lt;/td&gt;
&lt;td&gt;&lt;code&gt;.claude/skills/&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Inline diffs, plan review, file mention&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cursor&lt;/td&gt;
&lt;td&gt;&lt;code&gt;.claude/skills/&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Same install path as VS Code&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;JetBrains (IDEA, PyCharm, etc.)&lt;/td&gt;
&lt;td&gt;&lt;code&gt;.claude/skills/&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Run &lt;code&gt;claude&lt;/code&gt; from IDE terminal or use &lt;code&gt;/ide&lt;/code&gt; to reconnect&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GitHub Copilot, OpenAI Codex&lt;/td&gt;
&lt;td&gt;&lt;code&gt;.agents/skills/&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Open Agent Skills standard; cross-client portability&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Claude.ai web&lt;/td&gt;
&lt;td&gt;Upload via UI&lt;/td&gt;
&lt;td&gt;Dir name must match &lt;code&gt;name&lt;/code&gt; field; 200-char description cap&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  SKILL.md File Structure, Folder Layout, and Storage Locations
&lt;/h2&gt;

&lt;p&gt;A proper Skill is a folder, not a random markdown file sitting at repo root. The core specification requires a directory with a &lt;code&gt;SKILL.md&lt;/code&gt; file and allows optional &lt;code&gt;scripts/&lt;/code&gt;, &lt;code&gt;references/&lt;/code&gt;, and &lt;code&gt;assets/&lt;/code&gt; directories. &lt;code&gt;SKILL.md&lt;/code&gt; must contain YAML frontmatter followed by markdown instructions. In the spec, &lt;code&gt;name&lt;/code&gt; and &lt;code&gt;description&lt;/code&gt; are required, &lt;code&gt;name&lt;/code&gt; is limited to 64 characters using lowercase letters, numbers, and hyphens, &lt;code&gt;compatibility&lt;/code&gt; is only for real environment requirements, and &lt;code&gt;allowed-tools&lt;/code&gt; is explicitly experimental across implementations.&lt;/p&gt;

&lt;p&gt;Claude Code is a bit looser than the portable spec because it can derive a name from the directory and fall back to the first paragraph when &lt;code&gt;description&lt;/code&gt; is missing. You should not rely on that if you care about portability or predictability. Claude.ai requires the directory name to match the &lt;code&gt;name&lt;/code&gt; field, and its custom-skill upload path caps descriptions at 200 characters even though the broader spec allows much more. The portable choice is to set an explicit &lt;code&gt;name&lt;/code&gt;, keep the directory identical, and write a precise description that fits in tight limits. That answers the FAQ topic "What should a SKILL.md file contain" without hand-waving.&lt;/p&gt;
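&lt;p&gt;Put concretely, a frontmatter block that satisfies both the strict spec and the Claude.ai upload limits stays minimal. The skill name here is illustrative:&lt;/p&gt;

```yaml
---
# Directory must also be named exactly generate-changelog for Claude.ai.
name: generate-changelog   # required; max 64 chars, lowercase letters, numbers, hyphens
# Keep the description under 200 characters so the Claude.ai upload path accepts it.
description: Generate a changelog from git log output. Use when preparing a release or writing release notes.
---
```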

&lt;p&gt;Start from a structure this boring:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;repo/
  .claude/
    skills/
      review-pr/
        SKILL.md
        scripts/
          review.sh
        references/
          checklist.md
        assets/
          comment-template.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If portability across Skills-compatible clients matters more than Claude Code convenience, keep the same internal shape and swap &lt;code&gt;.claude/skills/&lt;/code&gt; for &lt;code&gt;.agents/skills/&lt;/code&gt;. The folder structure is the same idea either way.&lt;/p&gt;

&lt;p&gt;For Claude Code, the storage locations are straightforward. Project skills live at &lt;code&gt;.claude/skills/&amp;lt;skill-name&amp;gt;/SKILL.md&lt;/code&gt;. Personal skills live at &lt;code&gt;~/.claude/skills/&amp;lt;skill-name&amp;gt;/SKILL.md&lt;/code&gt;. Plugin-distributed skills live under &lt;code&gt;&amp;lt;plugin&amp;gt;/skills/&amp;lt;skill-name&amp;gt;/SKILL.md&lt;/code&gt;. Anthropic documents precedence across the built-in scopes as enterprise over personal over project, while plugin skills avoid collisions by using a namespaced form such as &lt;code&gt;plugin-name:skill-name&lt;/code&gt;. On Windows, &lt;code&gt;~/.claude&lt;/code&gt; resolves to &lt;code&gt;%USERPROFILE%\.claude&lt;/code&gt;, and &lt;code&gt;CLAUDE_CONFIG_DIR&lt;/code&gt; can relocate the whole base directory.&lt;/p&gt;
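&lt;p&gt;You can audit what those locations actually contain with nothing but the filesystem. A quick check, assuming a POSIX shell:&lt;/p&gt;

```shell
# list_skills: print every SKILL.md in the local Claude Code locations:
# project scope, personal scope, and a relocated CLAUDE_CONFIG_DIR if set.
list_skills() {
  for base in ".claude/skills" "$HOME/.claude/skills" \
              "${CLAUDE_CONFIG_DIR:+$CLAUDE_CONFIG_DIR/skills}"; do
    if [ -d "$base" ]; then
      find "$base" -mindepth 2 -maxdepth 2 -name 'SKILL.md'
    fi
  done
}
list_skills
```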

&lt;p&gt;The choice between project and personal scope is straightforward. Use &lt;code&gt;.claude/skills/&lt;/code&gt; inside the repo when the Skill is tightly coupled to that codebase — for example, a deploy playbook that knows your specific cluster names or a review rubric tuned to your team's conventions. Use &lt;code&gt;~/.claude/skills/&lt;/code&gt; for Skills that travel with you across projects: personal checklists, generic changelog generators, preferred debugging workflows. Anything you would put in a dotfiles repo belongs in personal scope.&lt;/p&gt;

&lt;p&gt;A few sharp edges are worth memorising. &lt;code&gt;SKILL.md&lt;/code&gt; must be named exactly with that casing; Anthropic's PDF guide stresses that the naming is case-sensitive. The same guide recommends kebab-case folder names and explicitly says not to place a &lt;code&gt;README.md&lt;/code&gt; inside the skill folder, because the operative documentation should live in &lt;code&gt;SKILL.md&lt;/code&gt; or &lt;code&gt;references/&lt;/code&gt;. These are boring constraints, but boring constraints are what make tooling reliable.&lt;/p&gt;

&lt;p&gt;Claude Code also does the right thing for monorepos. It automatically discovers nested &lt;code&gt;.claude/skills/&lt;/code&gt; directories when you work inside subdirectories, which is ideal for package-level or service-level skills. It also watches existing skill directories for live changes during the current session. The one restart trap is creating a top-level skills directory that did not exist when the session started. Anthropic documents that as the case where you do need to restart so the new directory can be watched.&lt;/p&gt;

&lt;h2&gt;
  
  
  Claude Skills Best Practices: Descriptions, Scripts, and Scope
&lt;/h2&gt;

&lt;p&gt;The fastest way to create a useless Skill is to ask an LLM to invent one from generic training knowledge. Anthropic's best-practices guide warns against exactly that. The valuable bits are the domain-specific corrections, edge cases, tool choices, and conventions the model would not reliably invent on its own. The right workflow is to solve the task once with the agent, correct it until it works, then extract the method into a Skill.&lt;/p&gt;

&lt;p&gt;Scope the Skill like a good function, not like a wiki. Anthropic says Skills should encapsulate a coherent unit of work. Too narrow, and you force multiple skills to stack for one task. Too broad, and the agent cannot activate them precisely. The best-practices guide is blunt that overly comprehensive skills can hurt more than they help because the model chases irrelevant instructions and loses the signal.&lt;/p&gt;

&lt;p&gt;Description quality is not a cosmetic concern. It is the routing layer. Both Anthropic and the Agent Skills docs say the &lt;code&gt;description&lt;/code&gt; field is the primary mechanism the model uses to decide whether to load a Skill at all. Good descriptions say what the Skill does, when to use it, and the trigger phrases or file types a user would actually mention. Bad descriptions are vague, overly technical, or broad enough to match nonsense. That is the real answer to the FAQ question "Why is a Claude Skill not triggering". Usually the router is bad, not the model.&lt;/p&gt;

&lt;p&gt;The contrast is clear side by side:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bad descriptions — too vague to route reliably:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Helps with code review&lt;/code&gt; — matches everything, disambiguates nothing&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Useful for development tasks&lt;/code&gt; — broader than a search query&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Assists with writing&lt;/code&gt; — not a router, just a category label&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Good descriptions — specific trigger language:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;Review pull requests for security issues, migration risk, and missing tests. Use when reviewing a PR, git diff, or release critical change.&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Generate a changelog from git log output. Use when preparing a release, writing release notes, or summarising commits since last tag.&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Scaffold a new Go HTTP handler with request validation and error middleware. Use when adding a new endpoint or route to a Go service.&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The pattern is the same each time: state what the Skill does, name the exact user phrases that should activate it, and optionally name file types or tools that are relevant. If your description would match a generic Google query, it is not specific enough.&lt;/p&gt;

&lt;p&gt;If a workflow has side effects, make it manual. Claude Code exposes that directly. &lt;code&gt;disable-model-invocation: true&lt;/code&gt; makes a Skill user-invoked only, which Anthropic recommends for actions like deploys, commits, or outbound messages. &lt;code&gt;user-invocable: false&lt;/code&gt; goes the other way and hides the Skill from the slash menu while still letting Claude use it as background knowledge. That answers the FAQ topic "When should a skill be manual instead of automatic" in one sentence: manual for risk, automatic for safe repeatable guidance.&lt;/p&gt;
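&lt;p&gt;Both switches are plain frontmatter. A deploy-style Skill locked to manual invocation might carry this (the name is illustrative):&lt;/p&gt;

```yaml
---
name: deploy-staging
description: Deploy the current branch to staging. Use only when explicitly asked to deploy.
# Side effects, so never let the model fire this on its own:
disable-model-invocation: true
---
```

&lt;p&gt;The inverse case, background knowledge hidden from the slash menu, is the same shape with &lt;code&gt;user-invocable: false&lt;/code&gt; instead.&lt;/p&gt;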

&lt;p&gt;Keep &lt;code&gt;SKILL.md&lt;/code&gt; small enough to stay intelligible. Anthropic recommends keeping it under 500 lines and around 5,000 tokens, then moving detailed material into &lt;code&gt;references/&lt;/code&gt; or similar files with explicit loading instructions. "Read &lt;code&gt;references/api-errors.md&lt;/code&gt; if the API returns a non-200" is a good pattern. "See references/" is lazy. Claude Code also injects the rendered Skill into the conversation as a message and does not keep re-reading the file on later turns. After context compaction, only recent Skill content is carried forward within token budgets. Huge Skills are therefore not merely ugly. They are brittle over long sessions.&lt;/p&gt;
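&lt;p&gt;Those budgets are easy to check mechanically. The script below uses the rough four-characters-per-token heuristic, which is an approximation rather than Anthropic's actual tokenizer:&lt;/p&gt;

```python
import sys
from pathlib import Path

MAX_LINES = 500      # Anthropic's recommended ceiling for SKILL.md
MAX_TOKENS = 5000    # approximate token budget from the guidance

def check_skill_size(path):
    """Return a list of budget violations for one SKILL.md file."""
    text = Path(path).read_text(encoding="utf-8")
    line_count = text.count("\n") + 1
    approx_tokens = len(text) // 4  # rough chars-per-token heuristic, not a real tokenizer
    problems = []
    if line_count > MAX_LINES:
        problems.append("%d lines (recommended max %d)" % (line_count, MAX_LINES))
    if approx_tokens > MAX_TOKENS:
        problems.append("~%d tokens (recommended max ~%d)" % (approx_tokens, MAX_TOKENS))
    return problems

if __name__ == "__main__":
    for skill_md in sys.argv[1:]:
        for problem in check_skill_size(skill_md):
            print(skill_md, problem)
```

&lt;p&gt;Wire it into CI so oversized Skills fail the build instead of silently degrading long sessions.&lt;/p&gt;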

&lt;p&gt;A good &lt;code&gt;SKILL.md&lt;/code&gt; can stay very plain:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;review-pr&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Review pull requests for security issues, migration risk, and missing tests. Use when reviewing a PR, git diff, or release critical change.&lt;/span&gt;
&lt;span class="na"&gt;compatibility&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Designed for Claude Code. Requires git and gh.&lt;/span&gt;
&lt;span class="na"&gt;disable-model-invocation&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="na"&gt;allowed-tools&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Bash(git diff *) Bash(gh pr diff *) Read Grep Glob&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="c1"&gt;# Review PR&lt;/span&gt;

&lt;span class="s"&gt;Read references/checklist.md before running any commands.&lt;/span&gt;

&lt;span class="s"&gt;1. Collect the diff and changed files.&lt;/span&gt;
&lt;span class="s"&gt;2. Flag correctness, security, and test coverage issues.&lt;/span&gt;
&lt;span class="s"&gt;3. Return findings grouped by severity with file references.&lt;/span&gt;
&lt;span class="s"&gt;4. Suggest the smallest safe fix first.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use scripts when determinism matters more than eloquence. The Skills scripts guide is excellent here. It says agent-facing scripts must avoid interactive prompts, document usage through &lt;code&gt;--help&lt;/code&gt;, emit helpful error messages, prefer structured output such as JSON or CSV on stdout, send diagnostics to stderr, and support retry-safe use. It also recommends pinning one-off tool versions and describing runtime requirements explicitly in &lt;code&gt;SKILL.md&lt;/code&gt; or the &lt;code&gt;compatibility&lt;/code&gt; field rather than assuming the environment has the right packages.&lt;/p&gt;

&lt;p&gt;A minimal but correct agent-facing script looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/usr/bin/env bash&lt;/span&gt;
&lt;span class="c"&gt;# scripts/collect-diff.sh — called by review-pr skill&lt;/span&gt;
&lt;span class="c"&gt;# Usage: collect-diff.sh &amp;lt;base-ref&amp;gt; [&amp;lt;head-ref&amp;gt;]&lt;/span&gt;
&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-euo&lt;/span&gt; pipefail

&lt;span class="nv"&gt;BASE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;1&lt;/span&gt;:?Usage:&lt;span class="p"&gt; collect-diff.sh &amp;lt;base-ref&amp;gt; [&amp;lt;head-ref&amp;gt;]&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nv"&gt;HEAD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;2&lt;/span&gt;&lt;span class="k"&gt;:-&lt;/span&gt;&lt;span class="nv"&gt;HEAD&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="c"&gt;# Structured output to stdout so the agent can parse it&lt;/span&gt;
git diff &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;BASE&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;...&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;HEAD&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;--stat&lt;/span&gt; &lt;span class="nt"&gt;--name-only&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  | jq &lt;span class="nt"&gt;-Rs&lt;/span&gt; &lt;span class="s1"&gt;'{
      "changed_files": split("\n") | map(select(length &amp;gt; 0))
    }'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="nb"&gt;printf&lt;/span&gt; &lt;span class="s1"&gt;'{"error":"git diff failed"}\n'&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;amp;2&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nb"&gt;exit &lt;/span&gt;1&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Three things make this agent-safe. &lt;code&gt;set -euo pipefail&lt;/code&gt; ensures the script exits loudly on any failure rather than silently proceeding. JSON on stdout gives the agent a format it can parse without guessing. Diagnostics go to stderr so the agent's stdout stream stays clean. None of this is clever. All of it is necessary.&lt;/p&gt;

&lt;p&gt;One subtle trap is &lt;code&gt;allowed-tools&lt;/code&gt;. In the spec it is experimental and support varies. In Claude Code it pre-approves specific tools while the Skill is active, but it does not restrict the universe of callable tools, and deny rules still belong in Claude Code permissions. In the Claude Agent SDK, Anthropic explicitly says the &lt;code&gt;allowed-tools&lt;/code&gt; frontmatter in &lt;code&gt;SKILL.md&lt;/code&gt; does not apply, so SDK apps must enforce tool access in the main &lt;code&gt;allowed_tools&lt;/code&gt; or &lt;code&gt;allowedTools&lt;/code&gt; configuration instead. If you ignore that difference, your Skill will behave differently in the CLI and in SDK-powered automation.&lt;/p&gt;

&lt;p&gt;One more advanced pattern is worth stealing. When a workflow would flood your main thread with logs, file searches, or long research output, Claude Code lets a Skill run in a forked subagent using &lt;code&gt;context: fork&lt;/code&gt; and an &lt;code&gt;agent&lt;/code&gt; such as &lt;code&gt;Explore&lt;/code&gt;. Anthropic shows this for research workflows, where the heavy lifting happens in isolated context and the main conversation gets the summary. For deep codebase exploration, that is a much better design than a giant inline Skill that pollutes the main session.&lt;/p&gt;

&lt;p&gt;A forked Skill looks like this in frontmatter:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;explore-codebase&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deep exploration of an unfamiliar codebase. Use when onboarding to a new repo, auditing architecture, or mapping module dependencies.&lt;/span&gt;
&lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fork&lt;/span&gt;
&lt;span class="na"&gt;agent&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Explore&lt;/span&gt;
&lt;span class="na"&gt;compatibility&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Requires Claude Code CLI.&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="c1"&gt;# Explore Codebase&lt;/span&gt;

&lt;span class="s"&gt;1. Walk the directory tree and summarise the top-level modules.&lt;/span&gt;
&lt;span class="s"&gt;2. Identify the main entry points and their responsibilities.&lt;/span&gt;
&lt;span class="s"&gt;3. Map the dependency graph between packages.&lt;/span&gt;
&lt;span class="s"&gt;4. Return a structured summary to the main session — not the raw file list.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The key line is &lt;code&gt;context: fork&lt;/code&gt;. Without it, the exploration output lands inline in your conversation. With it, the subagent runs in its own context window and hands back a summary. The difference matters on large repos where exploration alone can consume thousands of tokens.&lt;/p&gt;

&lt;h2&gt;
  
  
  Testing Claude Skills: Triggers, Correctness, and Baseline Comparisons
&lt;/h2&gt;

&lt;p&gt;A Skill is not tested because one happy-path demo worked once. Anthropic's guide breaks testing into three layers: manual testing in Claude.ai, scripted testing in Claude Code, and programmatic testing via the Skills API. The recommended evaluation areas are triggering, functional correctness, and performance against a baseline without the Skill. That is also the best answer to the FAQ question "How do you test whether a skill is reliable". You test route selection, output quality, and efficiency, not just whether the model sounded confident.&lt;/p&gt;

&lt;p&gt;The official eval guidance gives a clean structure for test cases. Each case should include a realistic user prompt, a human-readable description of the expected output, and optional input files. The docs store those in &lt;code&gt;evals/evals.json&lt;/code&gt; inside the Skill directory, which is a sensible convention even if you roll your own harness.&lt;/p&gt;

&lt;p&gt;Use a fixture file and a no-nonsense eval layout like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"skill_name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"review-pr"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"evals"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"prompt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Review this PR for security issues and missing tests"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"expected_output"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Findings grouped by severity with file references and at least one test recommendation."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"files"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"evals/files/pr-diff.patch"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"prompt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Summarise last week's commits"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"expected_output"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"The skill should not activate."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"files"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;My own testing rule is harsher than most teams use, but it lines up with the official guidance. Every serious Skill should have should-trigger queries, should-not-trigger queries, at least one edge-case test, and a baseline comparison without the Skill. Anthropic's examples compare tool calls, failed API calls, clarification loops, and token use with and without the Skill because "works" is not the same as "improves the workflow".&lt;/p&gt;

&lt;p&gt;If you test through the Claude Agent SDK, remember the plumbing. Skills are filesystem artefacts there, not programmatic registrations. Anthropic says you must enable the &lt;code&gt;"Skill"&lt;/code&gt; tool and load the relevant filesystem settings through &lt;code&gt;settingSources&lt;/code&gt; or &lt;code&gt;setting_sources&lt;/code&gt;. If you omit &lt;code&gt;user&lt;/code&gt; or &lt;code&gt;project&lt;/code&gt;, or point &lt;code&gt;cwd&lt;/code&gt; at the wrong place, the SDK will not discover the Skill. Anthropic even recommends asking "What Skills are available?" as a direct discovery check.&lt;/p&gt;
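&lt;p&gt;That discovery step is easy to mirror outside the SDK. The sketch below only walks the standard &lt;code&gt;.claude/skills/&lt;/code&gt; layout; it does not call the SDK itself:&lt;/p&gt;

```python
from pathlib import Path

def discoverable_skills(cwd):
    """List project-scope skill names visible from cwd, i.e. the
    .claude/skills entries with a SKILL.md that the SDK could load
    when project settings are enabled and cwd points at the repo."""
    skills_dir = Path(cwd) / ".claude" / "skills"
    if not skills_dir.is_dir():
        return []
    return sorted(
        entry.name
        for entry in skills_dir.iterdir()
        if entry.is_dir() and (entry / "SKILL.md").is_file()
    )
```

&lt;p&gt;If this returns an empty list for the &lt;code&gt;cwd&lt;/code&gt; you pass to the SDK, no amount of &lt;code&gt;settingSources&lt;/code&gt; tweaking will surface the Skill.&lt;/p&gt;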

&lt;p&gt;Also test on the model and client you actually intend to ship. The open Agent Skills quickstart explicitly warns that tool-use reliability varies across models, and some models may answer directly instead of executing the command the Skill intends. That is not always a Skill design problem. Sometimes it is a model-selection problem, and your test matrix should expose it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Claude Skills Troubleshooting: Common Failures and Fixes
&lt;/h2&gt;

&lt;p&gt;When a Skill misbehaves, assume packaging before intelligence. The most common failures are still the boring ones.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If the Skill is not found at all, verify the file is named exactly &lt;code&gt;SKILL.md&lt;/code&gt;, with the right case, inside the correct directory. Anthropic's troubleshooting guide calls out filename case explicitly, and its Claude Code and SDK docs point you straight at &lt;code&gt;.claude/skills/*/SKILL.md&lt;/code&gt; and &lt;code&gt;~/.claude/skills/*/SKILL.md&lt;/code&gt; as the first checks.&lt;/li&gt;
&lt;li&gt;If frontmatter is invalid, check the YAML delimiters and quotes first. Anthropic's examples show the classic mistakes: missing &lt;code&gt;---&lt;/code&gt;, unclosed quotes, or invalid names with spaces and capitals. Skill names should be lowercase and hyphenated.&lt;/li&gt;
&lt;li&gt;If the Skill exists but does not trigger, the description is usually too vague. Claude Code's own troubleshooting says to include keywords users would naturally say, verify the Skill appears when you ask "What skills are available?", and try rephrasing closer to the description. Anthropic's PDF guide adds a great diagnostic trick: ask Claude when it would use the Skill and listen to how it paraphrases the description back to you.&lt;/li&gt;
&lt;li&gt;If the Skill triggers too often, narrow the scope. Anthropic recommends making the description more specific, adding negative triggers, and using &lt;code&gt;disable-model-invocation: true&lt;/code&gt; for workflows you want only by explicit command. Over-triggering is usually just under-specified routing language.&lt;/li&gt;
&lt;li&gt;If the Skill seems to lose influence in long sessions, remember that descriptions can be shortened in the Claude Code catalogue when many skills are present, and invoked Skills are later carried within token budgets after compaction. Anthropic recommends front-loading keywords in the description, trimming excess text, and, for Claude Code specifically, adjusting &lt;code&gt;SLASH_COMMAND_TOOL_CHAR_BUDGET&lt;/code&gt; if description listings are being squeezed too aggressively.&lt;/li&gt;
&lt;li&gt;If a bundled script hangs or behaves erratically, check whether it expects interactive input. The scripts guide says agents run in non-interactive shells, so TTY prompts, password dialogs, and confirmation menus are design bugs. Accept input through flags, environment variables, or stdin and make failures explicit.&lt;/li&gt;
&lt;li&gt;If the SDK does not see your Skill, confirm that &lt;code&gt;allowed_tools&lt;/code&gt; includes &lt;code&gt;"Skill"&lt;/code&gt;, that &lt;code&gt;settingSources&lt;/code&gt; or &lt;code&gt;setting_sources&lt;/code&gt; contains &lt;code&gt;user&lt;/code&gt; and/or &lt;code&gt;project&lt;/code&gt;, and that &lt;code&gt;cwd&lt;/code&gt; points at the directory that actually contains &lt;code&gt;.claude/skills/&lt;/code&gt;. Without that setup, the Skill system is not enabled no matter how correct your markdown looks.&lt;/li&gt;
&lt;li&gt;If an MCP-backed Skill loads but the tool calls fail, Anthropic's troubleshooting checklist is sensible: verify the MCP server is connected, confirm authentication and scopes, test the MCP tool directly without the Skill, then check the exact tool names because they are case-sensitive.&lt;/li&gt;
&lt;/ul&gt;
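&lt;p&gt;Most of that checklist is mechanically checkable. A small linter for the packaging-level failures, covering the exact filename case, the frontmatter delimiters, and the lowercase-hyphen name rule, might look like this; it mirrors the constraints above and is not an official tool:&lt;/p&gt;

```python
import re
from pathlib import Path

NAME_RE = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")

def lint_skill_dir(skill_dir):
    """Return a list of packaging problems for one skill directory."""
    skill_dir = Path(skill_dir)
    problems = []
    names = [p.name for p in skill_dir.iterdir() if p.is_file()]
    if "SKILL.md" not in names:
        # Catch the classic case-sensitivity mistake explicitly.
        wrong = [n for n in names if n.lower() == "skill.md"]
        problems.append("missing SKILL.md (found: %s)" % (wrong or "nothing"))
        return problems
    lines = (skill_dir / "SKILL.md").read_text(encoding="utf-8").splitlines()
    stripped = [line.strip() for line in lines]
    if not stripped or stripped[0] != "---" or "---" not in stripped[1:]:
        problems.append("frontmatter not delimited by --- lines")
        return problems
    closing = stripped[1:].index("---") + 1
    fields = dict(line.split(":", 1) for line in lines[1:closing] if ":" in line)
    name = fields.get("name", "").strip()
    if not name:
        problems.append("missing name field")
    elif len(name) > 64 or not NAME_RE.match(name):
        problems.append("invalid name %r (lowercase, hyphens, max 64 chars)" % name)
    if not fields.get("description", "").strip():
        problems.append("missing description field")
    return problems
```

&lt;p&gt;Run it over &lt;code&gt;.claude/skills/*&lt;/code&gt; in CI and most "skill not found" tickets disappear before anyone files them.&lt;/p&gt;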

&lt;p&gt;The boring truth is that good Claude Skills look like good operational engineering. Clear names. Small files. Explicit triggers. Deterministic scripts where needed. Real tests. If your Skill reads like a crisp runbook, the agent has a fighting chance. If it reads like a brainstorm, you have simply hidden chaos in a folder.&lt;/p&gt;

</description>
      <category>llm</category>
      <category>ai</category>
      <category>aicoding</category>
      <category>dev</category>
    </item>
    <item>
      <title>Pause Scripts with 'Press Any Key' in Bash, CMD, PowerShell, and macOS</title>
      <dc:creator>Rost</dc:creator>
      <pubDate>Sat, 25 Apr 2026 08:09:59 +0000</pubDate>
      <link>https://forem.com/rosgluk/pause-scripts-with-press-any-key-in-bash-cmd-powershell-and-macos-31ii</link>
      <guid>https://forem.com/rosgluk/pause-scripts-with-press-any-key-in-bash-cmd-powershell-and-macos-31ii</guid>
      <description>&lt;p&gt;Batch files and shell scripts often need a short wait so a double-clicked window or installer log stays visible. Windows CMD has a dedicated &lt;strong&gt;&lt;code&gt;pause&lt;/code&gt;&lt;/strong&gt; command. Unix shells use &lt;strong&gt;&lt;code&gt;read&lt;/code&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;PowerShell sits in the middle and needs an explicit pattern.&lt;/p&gt;

&lt;p&gt;This page collects portable snippets and the usual pitfalls (pipes, SSH without TTY, CI).&lt;/p&gt;

&lt;p&gt;For more shell references, see the &lt;a href="https://www.glukhov.org/developer-tools/terminals-shell/bash-cheat-sheet/" rel="noopener noreferrer"&gt;Bash cheat sheet&lt;/a&gt; and &lt;a href="https://www.glukhov.org/developer-tools/terminals-shell/powershell-cheatsheet/" rel="noopener noreferrer"&gt;PowerShell cheatsheet&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For broader tooling workflows, visit &lt;a href="https://www.glukhov.org/developer-tools/" rel="noopener noreferrer"&gt;Developer Tools: The Complete Guide to Modern Development Workflows&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  When to pause (and when not to)
&lt;/h2&gt;

&lt;p&gt;Use a pause when a human is watching an interactive terminal and you want to avoid an instant exit—for example after a &lt;code&gt;.bat&lt;/code&gt; file is double-clicked, or after a local maintenance script prints a summary.&lt;/p&gt;

&lt;p&gt;Skip pauses in &lt;strong&gt;cron&lt;/strong&gt;, &lt;strong&gt;systemd&lt;/strong&gt; services, &lt;strong&gt;CI pipelines&lt;/strong&gt;, and most &lt;strong&gt;SSH&lt;/strong&gt; one-liners. There is often no keyboard attached and &lt;strong&gt;&lt;code&gt;stdin&lt;/code&gt; may not be a terminal&lt;/strong&gt;, so waiting for input can hang forever. In Bash and POSIX sh, test with &lt;strong&gt;&lt;code&gt;[ -t 0 ]&lt;/code&gt;&lt;/strong&gt; (or &lt;strong&gt;&lt;code&gt;test -t 0&lt;/code&gt;&lt;/strong&gt;) before prompting.&lt;/p&gt;
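&lt;p&gt;The guard is one test plus a fallback. A defensive pause that only prompts when a human can actually answer:&lt;/p&gt;

```shell
# pause_if_interactive: wait for a key only when stdin is a terminal.
# The -n 1 flag is Bash-specific; plain POSIX sh would use a bare read line.
pause_if_interactive() {
  if [ -t 0 ]; then
    read -n 1 -s -r -p 'Press any key to continue...'
    echo
  else
    echo 'stdin is not a TTY; skipping pause'
  fi
}
```

&lt;p&gt;Call &lt;code&gt;pause_if_interactive&lt;/code&gt; where a batch file would call &lt;code&gt;pause&lt;/code&gt;; under cron or CI it falls through instead of hanging forever.&lt;/p&gt;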

&lt;h2&gt;
  
  
  Windows CMD
&lt;/h2&gt;

&lt;p&gt;The &lt;strong&gt;&lt;code&gt;pause&lt;/code&gt;&lt;/strong&gt; command prints a localized line such as “Press any key to continue . . .” and waits for a keypress.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight batchfile"&gt;&lt;code&gt;&lt;span class="c"&gt;:: save as pause-demo.bat&lt;/span&gt;
@echo &lt;span class="na"&gt;off&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="kd"&gt;Hello&lt;/span&gt; &lt;span class="kd"&gt;from&lt;/span&gt; &lt;span class="kd"&gt;CMD&lt;/span&gt;
&lt;span class="nb"&gt;pause&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If stdin or stdout is redirected, &lt;strong&gt;&lt;code&gt;pause&lt;/code&gt;&lt;/strong&gt; can behave differently; for scripts that must run unattended with output logged to a file, consider &lt;strong&gt;&lt;code&gt;timeout /t N&lt;/code&gt;&lt;/strong&gt; for a timed delay instead of an interactive pause.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;choice&lt;/code&gt;&lt;/strong&gt; is useful when you want a timed wait or specific keys (menu scripts). It is a separate topic from &lt;strong&gt;&lt;code&gt;pause&lt;/code&gt;&lt;/strong&gt; but often appears in the same batch workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  PowerShell
&lt;/h2&gt;

&lt;p&gt;PowerShell has no single &lt;strong&gt;&lt;code&gt;pause&lt;/code&gt;&lt;/strong&gt; alias that matches CMD in every host. Two common patterns follow.&lt;/p&gt;

&lt;h3&gt;
  
  
  Wait for Enter only
&lt;/h3&gt;

&lt;p&gt;Simple, and it works across most hosts, including many IDE consoles.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="c"&gt;# pause-demo.ps1&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="n"&gt;Read-Host&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'Press Enter to continue'&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is &lt;strong&gt;not&lt;/strong&gt; “any key”—only Enter counts.&lt;/p&gt;

&lt;h3&gt;
  
  
  Wait for any key (Windows console)
&lt;/h3&gt;

&lt;p&gt;In &lt;strong&gt;Windows PowerShell&lt;/strong&gt; running in a normal console host, &lt;strong&gt;&lt;code&gt;ReadKey&lt;/code&gt;&lt;/strong&gt; waits for a physical key:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="c"&gt;# pause-any-key.ps1&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="n"&gt;Write-Host&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'Press any key to continue...'&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="bp"&gt;$null&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="bp"&gt;$Host&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;UI&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;RawUI&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ReadKey&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'NoEcho,IncludeKeyDown'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If &lt;strong&gt;&lt;code&gt;$Host.UI&lt;/code&gt;&lt;/strong&gt; or &lt;strong&gt;&lt;code&gt;RawUI&lt;/code&gt;&lt;/strong&gt; is not available (some non-interactive hosts, remoting, or constrained environments), this can fail. Wrap such calls in &lt;strong&gt;&lt;code&gt;try&lt;/code&gt; / &lt;code&gt;catch&lt;/code&gt;&lt;/strong&gt; or detect &lt;strong&gt;&lt;code&gt;[Console]::KeyAvailable&lt;/code&gt;&lt;/strong&gt; / host capabilities when you need robustness.&lt;/p&gt;

&lt;p&gt;PowerShell 7 on non-Windows platforms may not offer the same &lt;strong&gt;&lt;code&gt;ReadKey&lt;/code&gt;&lt;/strong&gt; experience; prefer &lt;strong&gt;&lt;code&gt;Read-Host&lt;/code&gt;&lt;/strong&gt; or shell-native &lt;strong&gt;&lt;code&gt;read&lt;/code&gt;&lt;/strong&gt; in those cases.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bash (Linux and macOS)
&lt;/h2&gt;

&lt;p&gt;Classic “any key” style with a visible prompt and no echo of the key:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/usr/bin/env bash&lt;/span&gt;
&lt;span class="nb"&gt;read&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; 1 &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s1"&gt;$'Press any key to continue...&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s1"&gt;'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;-n 1&lt;/code&gt;&lt;/strong&gt; — read one character
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;-s&lt;/code&gt;&lt;/strong&gt; — silent (no echo)
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;-r&lt;/code&gt;&lt;/strong&gt; — raw (backslash not special)
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;-p&lt;/code&gt;&lt;/strong&gt; — prompt string
&lt;/li&gt;
&lt;/ul&gt;
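&lt;p&gt;A timed variant is a small extension of the same call: &lt;strong&gt;&lt;code&gt;-t&lt;/code&gt;&lt;/strong&gt; adds a timeout so the script continues on its own when nobody is at the keyboard. The flags and the guard come from the patterns above; the 10-second value is just an example.&lt;/p&gt;

```shell
#!/usr/bin/env bash
# Timed "any key": wait for one keystroke, continue automatically after
# 10 seconds. Guarded with [ -t 0 ] so pipes and CI never block.
# read exits non-zero on timeout or EOF, so mask the status with || true
# for scripts that run under set -e.
if [ -t 0 ]; then
  read -r -n 1 -s -t 10 -p $'Press any key to continue (auto in 10s)...\n' || true
fi
```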

&lt;p&gt;With a friendly guard for automation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/usr/bin/env bash&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-t&lt;/span&gt; 0 &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;&lt;span class="nb"&gt;read&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; 1 &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s1"&gt;$'Press any key to continue...&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s1"&gt;'&lt;/span&gt;
&lt;span class="k"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  macOS notes
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Terminal.app&lt;/strong&gt; and &lt;strong&gt;iTerm2&lt;/strong&gt; behave like other Unix terminals for Bash. Apple’s default login shell has been &lt;strong&gt;Zsh&lt;/strong&gt; since macOS Catalina; in scripts explicitly run by Zsh you can use &lt;strong&gt;&lt;code&gt;read -k 1&lt;/code&gt;&lt;/strong&gt; to read a single key. For maximum portability across Linux and macOS, stick with Bash and declare &lt;strong&gt;&lt;code&gt;#!/usr/bin/env bash&lt;/code&gt;&lt;/strong&gt; at the top of the script.&lt;/p&gt;

&lt;h2&gt;
  
  
  POSIX &lt;code&gt;sh&lt;/code&gt; (portable “press Enter”)
&lt;/h2&gt;

&lt;p&gt;POSIX &lt;strong&gt;&lt;code&gt;read&lt;/code&gt;&lt;/strong&gt; has no &lt;strong&gt;&lt;code&gt;-n&lt;/code&gt;&lt;/strong&gt; option; single-character reads are a Bash extension, &lt;strong&gt;not&lt;/strong&gt; POSIX. The portable pattern is “press Enter to continue”:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/sh&lt;/span&gt;
&lt;span class="nb"&gt;printf&lt;/span&gt; &lt;span class="s1"&gt;'Press Enter to continue... '&lt;/span&gt;
&lt;span class="nb"&gt;read&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; _
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That is widely supported on &lt;strong&gt;&lt;code&gt;dash&lt;/code&gt;&lt;/strong&gt;, &lt;strong&gt;&lt;code&gt;ksh&lt;/code&gt;&lt;/strong&gt;, and &lt;strong&gt;&lt;code&gt;ash&lt;/code&gt;&lt;/strong&gt;-based systems. True single-character “any key” without Bash extensions is messier; if you need it on minimal &lt;strong&gt;&lt;code&gt;sh&lt;/code&gt;&lt;/strong&gt;, document Bash or use &lt;strong&gt;&lt;code&gt;stty&lt;/code&gt;&lt;/strong&gt;-based approaches with care (terminal state, portability).&lt;/p&gt;
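&lt;p&gt;For completeness, here is one careful sketch of the &lt;strong&gt;&lt;code&gt;stty&lt;/code&gt;&lt;/strong&gt; approach just mentioned: save the terminal state, read exactly one raw byte, then restore. It is guarded with &lt;code&gt;[ -t 0 ]&lt;/code&gt; so it is a no-op without a terminal; treat it as a sketch rather than a drop-in, since terminal-state handling is easy to get wrong.&lt;/p&gt;

```shell
#!/bin/sh
# "Any key" on plain POSIX sh via stty: no Bash extensions involved.
if [ -t 0 ]; then
  printf 'Press any key to continue... '
  saved=$(stty -g)                  # snapshot current terminal settings
  stty -echo -icanon min 1 time 0   # raw-ish mode: 1 byte, no echo
  dd bs=1 count=1 >/dev/null 2>&1   # consume exactly one byte
  stty "$saved"                     # always restore the snapshot
  printf '\n'
fi
```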

&lt;h2&gt;
  
  
  Cross-platform branching
&lt;/h2&gt;

&lt;p&gt;Installer scripts sometimes branch on OS:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Windows CMD&lt;/strong&gt; — &lt;strong&gt;&lt;code&gt;pause&lt;/code&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PowerShell&lt;/strong&gt; — &lt;strong&gt;&lt;code&gt;ReadKey&lt;/code&gt;&lt;/strong&gt; or &lt;strong&gt;&lt;code&gt;Read-Host&lt;/code&gt;&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Unix&lt;/strong&gt; — &lt;strong&gt;&lt;code&gt;read&lt;/code&gt;&lt;/strong&gt; with &lt;strong&gt;&lt;code&gt;[ -t 0 ]&lt;/code&gt;&lt;/strong&gt; guard&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In mixed environments, keep the “interactive only” guard so servers and CI never block.&lt;/p&gt;
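&lt;p&gt;That guard, plus the portable Unix prompt, can be folded into one small POSIX helper so the “interactive only” rule lives in a single place (the function name is illustrative):&lt;/p&gt;

```shell
#!/bin/sh
# Prompt a human when one is present; silently continue under cron,
# CI, or redirected stdin. Portable across dash, ksh, and bash.
pause_if_interactive() {
  [ -t 0 ] || return 0              # no terminal attached: skip the pause
  printf 'Press Enter to continue... '
  read -r _
}

pause_if_interactive
```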

&lt;h2&gt;
  
  
  Related reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.glukhov.org/developer-tools/terminals-shell/bash-cheat-sheet/" rel="noopener noreferrer"&gt;Bash cheat sheet&lt;/a&gt; — general command reference
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.glukhov.org/developer-tools/terminals-shell/powershell-cheatsheet/" rel="noopener noreferrer"&gt;PowerShell cheatsheet&lt;/a&gt; — cmdlets and everyday usage
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Useful links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;GNU Bash manual — &lt;a href="https://www.gnu.org/software/bash/manual/html_node/Bash-Builtins.html" rel="noopener noreferrer"&gt;Bash Builtins&lt;/a&gt; — &lt;code&gt;read&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Microsoft Learn — &lt;a href="https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/read-host" rel="noopener noreferrer"&gt;Read-Host&lt;/a&gt; and console APIs for advanced hosts&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>bash</category>
      <category>powershell</category>
      <category>windows</category>
      <category>linux</category>
    </item>
    <item>
      <title>OpenClaw Plugins — Ecosystem Guide and Practical Picks</title>
      <dc:creator>Rost</dc:creator>
      <pubDate>Mon, 20 Apr 2026 09:51:41 +0000</pubDate>
      <link>https://forem.com/rosgluk/openclaw-plugins-ecosystem-guide-and-practical-picks-4an1</link>
      <guid>https://forem.com/rosgluk/openclaw-plugins-ecosystem-guide-and-practical-picks-4an1</guid>
      <description>&lt;p&gt;This article is about &lt;strong&gt;OpenClaw plugins&lt;/strong&gt; — native gateway packages that add channels, model providers, tools, speech, memory, media, web search, and other runtime surfaces.&lt;/p&gt;

&lt;p&gt;The rest of the piece covers discovery, packaging, CLI lifecycle, maturity, security, and concrete plugin picks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;OpenClaw skills&lt;/strong&gt; matter for navigation and safety because ClawHub and announcement text often say "skills" when they mean installable agent packs and workflows. Those are related to the same registries you use for plugins, but they are not the same mechanism as a validated &lt;code&gt;openclaw.plugin.json&lt;/code&gt; package. The glossary below keeps the vocabulary straight; the &lt;a href="https://www.glukhov.org/ai-systems/openclaw/skills/" rel="noopener noreferrer"&gt;OpenClaw skills guide&lt;/a&gt; goes deeper on authoring, moderation, usage patterns, and per-role stacks.&lt;/p&gt;

&lt;p&gt;At the same time, the public plugin ecosystem is uneven.&lt;br&gt;
The strongest parts are still bundled first-party surfaces and a small set of community plugins with visible maintenance and usage. The weaker parts are the business-automation edge cases that look impressive in demos but still have thin public adoption signals, including skill-oriented repos that are not yet mature native plugins.&lt;/p&gt;

&lt;p&gt;If you want the short version up front, this is it.&lt;br&gt;
In OpenClaw today, the "actually useful" plugin layer is mostly about boring wins: browser access, web extraction, memory, provider routing, voice, channels, observability, and workflow triggers. The categories that sound most enterprise-friendly — CRM, lead generation, inbox automation, calendar orchestration — do exist publicly, but the verified native-plugin surface is still much thinner and less battle-tested than the rest of the stack. That is not a criticism so much as a maturity signal.&lt;/p&gt;
&lt;h3&gt;
  
  
  Glossary (plugins, extensions, skills)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;OpenClaw plugins&lt;/strong&gt; — Native gateway packages installed with &lt;code&gt;openclaw plugins …&lt;/code&gt;, validated through &lt;code&gt;openclaw.plugin.json&lt;/code&gt;, and able to register channels, providers, tools, memory backends, and other hooks inside the gateway process.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OpenClaw extensions&lt;/strong&gt; — Workspace and global directories OpenClaw scans as plugin roots before bundled defaults (extension paths under the workspace, then &lt;code&gt;~/.openclaw&lt;/code&gt;). This is a layout and discovery concept, not a different kind of artifact: extensions are simply the places plugin packages are loaded from.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OpenClaw skills&lt;/strong&gt; — Agent-facing packs and workflows often published for OpenClaw-style agents and listed on ClawHub alongside packages. Security and moderation messaging frequently refers to "skills" because that layer has its own adoption and abuse history. Treat skills as a related install surface, not as a synonym for "native plugin" unless the listing is actually a plugin package with a manifest.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When OpenClaw imports content from Codex, Claude, or Cursor ecosystems, upstream docs often call those &lt;strong&gt;bundles&lt;/strong&gt;, not native plugins. Bundles map to selective features and a narrower trust boundary than full plugins. If you mix bundles, skills marketing, and native OpenClaw plugins without that distinction, the ecosystem looks broader than it actually is.&lt;/p&gt;
&lt;h2&gt;
  
  
  Why this ecosystem matters
&lt;/h2&gt;

&lt;p&gt;Inside the codebase and CLI, the extension story is still expressed as plugins. Discovery walks explicit config paths, then extension directories, then bundled plugins — same capability type, different roots. Skills enter the picture when you browse ClawHub or read incident writeups, not when you reason about slot selection for &lt;code&gt;memory&lt;/code&gt; or &lt;code&gt;contextEngine&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The plugin system is also opinionated in a useful way. OpenClaw does not treat plugins as a cosmetic add-on layer. It uses them for concrete runtime ownership: channels, model providers, tools, memory backends, context engines, speech, realtime voice, media understanding, image generation, video generation, web fetch, and web search. Some of those ship bundled inside OpenClaw, while others are external packages published by the community on npm or ClawHub.&lt;/p&gt;

&lt;p&gt;That is why the plugin ecosystem matters more than it first appears. In practice, plugin choice determines not just integrations, but also how the assistant searches, remembers, calls, routes, fetches, traces, and survives long-running sessions. For a technical blog, that is the important frame. Not "which package looks cool", but "which package owns a meaningful runtime surface".&lt;/p&gt;
&lt;h2&gt;
  
  
  How the plugin system actually works
&lt;/h2&gt;

&lt;p&gt;Under the hood, OpenClaw discovers plugins in a fixed order, and first match wins. It looks at explicit config paths first, then workspace extension directories, then global extensions under &lt;code&gt;~/.openclaw&lt;/code&gt;, and finally bundled plugins that ship with OpenClaw. Workspace-origin plugins are disabled by default, restrictive allowlists can block even bundled plugins, and some capability classes are exclusive slots, notably &lt;code&gt;memory&lt;/code&gt; and &lt;code&gt;contextEngine&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;That slot model is one of the least flashy but most important parts of the system. It means plugins are not only additive. In some categories they are selectors. &lt;code&gt;memory-core&lt;/code&gt; can be the active memory plugin, &lt;code&gt;memory-lancedb&lt;/code&gt; can replace it, and a context engine such as &lt;code&gt;lossless-claw&lt;/code&gt; can replace the default legacy context engine. This is why memory plugins tend to matter operationally more than UI-facing plugins. They change how the assistant thinks across time, not just where it sends messages.&lt;/p&gt;
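&lt;p&gt;As a concrete illustration of the slot model, a gateway config fragment along these lines selects one memory backend and one context engine. The &lt;code&gt;plugins.slots&lt;/code&gt; key path is the one named in the docs discussed here; the surrounding file shape is illustrative, so check it against the current OpenClaw configuration reference before relying on it.&lt;/p&gt;

```json
{
  "plugins": {
    "slots": {
      "memory": "memory-lancedb",
      "contextEngine": "lossless-claw"
    }
  }
}
```

&lt;p&gt;Because these are exclusive slots, selecting &lt;code&gt;memory-lancedb&lt;/code&gt; here deselects &lt;code&gt;memory-core&lt;/code&gt; rather than stacking on top of it.&lt;/p&gt;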

&lt;p&gt;Native plugins also have a fairly strict packaging model. A package advertises its plugin entrypoints and setup metadata through &lt;code&gt;package.json&lt;/code&gt;, while &lt;code&gt;openclaw.plugin.json&lt;/code&gt; is the manifest OpenClaw uses to validate plugin identity and config before executing plugin code. That manifest is not decorative. Missing or invalid manifests are treated as plugin errors and block config validation. The platform is clearly trying to fail early rather than load first and hope later.&lt;/p&gt;

&lt;p&gt;The SDK surface is broader than many blog posts imply. Plugin hooks can intercept model resolution, agent lifecycle, message flow, tool execution, sub-agent coordination, and gateway lifecycle, and the docs state that the SDK exposes 28 hooks. That is enough power to build real runtime products, but it is also enough power to create runtime surprises if the plugin is immature.&lt;/p&gt;
&lt;h2&gt;
  
  
  Where to get plugins and how the lifecycle works
&lt;/h2&gt;

&lt;p&gt;Plugin installs always go through the &lt;code&gt;openclaw plugins&lt;/code&gt; commands below. ClawHub lists both native plugin packages and OpenClaw skills-style entries, so read each listing for manifests and supported install paths — this section is only about the plugin path.&lt;/p&gt;

&lt;p&gt;The public repository layer is straightforward. ClawHub is the canonical discovery surface for community plugins and many skills listings, and OpenClaw can install plugins from ClawHub, npm, local paths, local archives, and supported marketplaces. For bare package names, OpenClaw checks ClawHub first and falls back to npm automatically. That alone answers one of the common ecosystem questions: yes, there is a public repository story, but it is split between the official registry layer and npm.&lt;/p&gt;

&lt;p&gt;The install and removal lifecycle is also clearer than the ecosystem chatter makes it sound. The CLI supports listing, inspecting, enabling, disabling, uninstalling, doctoring, and updating plugins. Config changes require a gateway restart, although the default &lt;code&gt;openclaw gateway&lt;/code&gt; path may auto-restart after a config write lands. In practice, temporary removal is &lt;code&gt;disable&lt;/code&gt;, hard removal is &lt;code&gt;uninstall&lt;/code&gt;, and validation failures are designed to fail closed instead of leaving half-installed state behind.&lt;/p&gt;

&lt;p&gt;The commands you actually need are simple:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openclaw plugins list
openclaw plugins inspect &amp;lt;&lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
openclaw plugins &lt;span class="nb"&gt;install&lt;/span&gt; &amp;lt;package&amp;gt;
openclaw plugins &lt;span class="nb"&gt;enable&lt;/span&gt; &amp;lt;&lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
openclaw plugins disable &amp;lt;&lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
openclaw plugins uninstall &amp;lt;&lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
openclaw gateway restart
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Those commands are the stable part. The interesting parts are the safety rails around them. OpenClaw recommends pinned versions for plugin installs, uses &lt;code&gt;--ignore-scripts&lt;/code&gt; for npm dependency installs, validates compatibility metadata such as &lt;code&gt;pluginApi&lt;/code&gt; and &lt;code&gt;minGatewayVersion&lt;/code&gt; before archive installs, and ships a built-in dangerous-code scanner with a break-glass override named &lt;code&gt;--dangerously-force-unsafe-install&lt;/code&gt;. That is a more serious security posture than many agent ecosystems currently offer.&lt;/p&gt;

&lt;p&gt;One subtle detail is worth calling out. ClawHub install counts are useful, but they are not absolute ecosystem census numbers. The documentation says install counts are computed when logged-in users run &lt;code&gt;clawhub sync&lt;/code&gt;, and stale roots stop counting after 120 days. That makes ClawHub usage counters directionally useful, especially for ranking, but not a universal measure of actual adoption.&lt;/p&gt;

&lt;h2&gt;
  
  
  Maturity, support, and security reality
&lt;/h2&gt;

&lt;p&gt;The maturity story is split in two. First-party bundled plugins are the safest default because they live inside the main OpenClaw release train, share the same compatibility model, and benefit from a very large public core repository footprint. At crawl time, the main &lt;code&gt;openclaw/openclaw&lt;/code&gt; repository showed roughly 359k GitHub stars, which is the strongest public popularity signal anywhere in this ecosystem. Community plugins can absolutely be useful, but they are not all equal and they do not inherit that maturity automatically.&lt;/p&gt;

&lt;p&gt;OpenClaw's own community-plugin page is refreshingly blunt about the quality bar. The project asks for a public GitHub repository, working installation through &lt;code&gt;openclaw plugins install&lt;/code&gt;, setup and usage docs, and active maintenance. Low-effort wrappers, unclear ownership, or unmaintained packages may be declined. That tells you a lot about where the team has already seen ecosystem failure.&lt;/p&gt;

&lt;p&gt;Security is the part where opinion should replace hype. The docs themselves say to treat OpenClaw plugin installs like running code. ClawHub exposes moderation hooks, stars, comments, and usage signals, and the broader OpenClaw security response has moved toward stronger package scrutiny. The team announced VirusTotal scanning for all ClawHub skills, and independent security research documented malicious ClawHub campaigns and large-scale insecure credential handling in early 2026. Those incidents were centered on OpenClaw skills and skill-style listings, not every native plugin path, but they are still the correct backdrop for evaluating the whole installable ecosystem. The lesson is simple: the extension perimeter — config, OpenClaw extensions directories, and anything you install from a registry — is now part of the attack surface.&lt;/p&gt;

&lt;p&gt;A second, more nuanced security point is that safer ecosystems still produce false positives. OpenClaw's dangerous-code scanner is heuristic, and public plugin maintainers have already had to react to scanner warnings and installation friction. That is not a sign that the scanner is wrong to exist. It is a sign that "scanner clean" and "safe" are not identical concepts, and that human review still matters for nontrivial plugins.&lt;/p&gt;

&lt;h2&gt;
  
  
  Useful plugins worth tracking right now
&lt;/h2&gt;

&lt;p&gt;What follows is the pragmatic list, not the maximal list. For bundled first-party plugins that do not have standalone repos, the popularity metric below uses the OpenClaw core repo star count as the proxy. For community plugins, the popularity metric uses the canonical public GitHub repo star count visible at crawl time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tools and web access&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;browser&lt;/code&gt;&lt;br&gt;&lt;br&gt;
URL: &lt;code&gt;https://docs.openclaw.ai/tools/browser&lt;/code&gt;&lt;br&gt;&lt;br&gt;
This is the default serious-tool plugin because it gives the agent a managed isolated browser profile and an attach-to-user-browser mode when logged-in human sessions matter. That is more useful than another generic web search wrapper. Popularity: bundled first-party plugin, proxy metric 359k core repo stars.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;firecrawl&lt;/code&gt;&lt;br&gt;&lt;br&gt;
URL: &lt;code&gt;https://docs.openclaw.ai/tools/firecrawl&lt;/code&gt;&lt;br&gt;&lt;br&gt;
Firecrawl is useful because it can act as a &lt;code&gt;web_search&lt;/code&gt; provider, expose explicit &lt;code&gt;firecrawl_search&lt;/code&gt; and &lt;code&gt;firecrawl_scrape&lt;/code&gt; tools, and serve as a &lt;code&gt;web_fetch&lt;/code&gt; fallback for JS-heavy or anti-bot pages. Popularity: bundled first-party plugin, proxy metric 359k core repo stars.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;tavily&lt;/code&gt;&lt;br&gt;&lt;br&gt;
URL: &lt;code&gt;https://docs.openclaw.ai/tools/tavily&lt;/code&gt;&lt;br&gt;&lt;br&gt;
Tavily is still one of the cleaner structured-search options because it exposes both search and extraction and is explicitly optimized for LLM consumption. Popularity: bundled first-party plugin, proxy metric 359k core repo stars.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;exa&lt;/code&gt;&lt;br&gt;&lt;br&gt;
URL: &lt;code&gt;https://docs.openclaw.ai/tools/exa-search&lt;/code&gt;&lt;br&gt;&lt;br&gt;
Exa is the best fit when you want hybrid search modes plus extraction in one provider without immediately jumping to browser automation. Popularity: bundled first-party plugin, proxy metric 359k core repo stars.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Integrations and collaboration&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;matrix&lt;/code&gt;&lt;br&gt;&lt;br&gt;
URL: &lt;code&gt;https://docs.openclaw.ai/channels/matrix&lt;/code&gt;&lt;br&gt;&lt;br&gt;
Matrix is one of the more complete bundled collaboration plugins because it already supports DMs, rooms, threads, media, reactions, polls, location, and E2EE through &lt;code&gt;matrix-js-sdk&lt;/code&gt;. Popularity: bundled first-party plugin, proxy metric 359k core repo stars.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;msteams&lt;/code&gt;&lt;br&gt;&lt;br&gt;
URL: &lt;code&gt;https://docs.openclaw.ai/channels/msteams&lt;/code&gt;&lt;br&gt;&lt;br&gt;
Teams matters because it is one of the few enterprise channels with a real first-party path, including Azure Bot setup, tenant credentials, default webhook shape, and group-chat policy controls. Popularity: bundled first-party plugin, proxy metric 359k core repo stars.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;wecom&lt;/code&gt;&lt;br&gt;&lt;br&gt;
URL: &lt;code&gt;https://github.com/WecomTeam/wecom-openclaw-plugin&lt;/code&gt;&lt;br&gt;&lt;br&gt;
WeCom is one of the stronger community channel plugins because it is officially maintained by the Tencent WeCom team and supports direct messages, group chats, streaming replies, proactive messaging, and both Bot and Agent operating modes. Popularity: about 365 GitHub stars.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;openclaw-discourse&lt;/code&gt;&lt;br&gt;&lt;br&gt;
URL: &lt;code&gt;https://github.com/pranciskus/discourse-openclaw&lt;/code&gt;&lt;br&gt;&lt;br&gt;
Discourse is a good example of a plugin that is small but useful. It focuses on searching, reading, filtering, finding unanswered topics, and optionally writing back to the forum, which is exactly what support and community workflows need. Popularity: about 10 GitHub stars.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A side note here is that Slack is less interesting in a plugins article than many people expect, because Slack is already treated as a built-in channel surface in current OpenClaw documentation and marketing copy. Teams and WeCom are more revealing plugin picks because they show where external or bundled channel ownership still matters visibly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Memory and context&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;memory-lancedb&lt;/code&gt;&lt;br&gt;&lt;br&gt;
URL: &lt;code&gt;https://docs.openclaw.ai/tools/plugin&lt;/code&gt;&lt;br&gt;&lt;br&gt;
This is the practical long-session memory pick in the bundled set. OpenClaw describes it as an install-on-demand long-term memory plugin with auto-recall and capture, selected through &lt;code&gt;plugins.slots.memory&lt;/code&gt;. Popularity: bundled first-party plugin, proxy metric 359k core repo stars.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;memory-wiki&lt;/code&gt;&lt;br&gt;&lt;br&gt;
URL: &lt;code&gt;https://docs.openclaw.ai/plugins/memory-wiki&lt;/code&gt;&lt;br&gt;&lt;br&gt;
&lt;code&gt;memory-wiki&lt;/code&gt; is not a replacement memory backend. It is a companion plugin that compiles durable memory into a navigable wiki with provenance, contradictions, dashboards, and wiki-native search and apply tools. That makes it more useful for knowledge maintenance than raw recall alone. Popularity: bundled first-party plugin, proxy metric 359k core repo stars.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;lossless-claw&lt;/code&gt;&lt;br&gt;&lt;br&gt;
URL: &lt;code&gt;https://github.com/Martian-Engineering/lossless-claw&lt;/code&gt;&lt;br&gt;&lt;br&gt;
This is probably the most important community memory-context plugin right now. It replaces sliding-window compaction with DAG-based summarization that preserves full conversation history while keeping active context inside token limits. Popularity: about 4.3k GitHub stars.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;memos-cloud&lt;/code&gt;&lt;br&gt;&lt;br&gt;
URL: &lt;code&gt;https://github.com/MemTensor/MemOS-Cloud-OpenClaw-Plugin&lt;/code&gt;&lt;br&gt;&lt;br&gt;
MemOS Cloud is noteworthy because it treats memory as a lifecycle plugin, recalling context before execution and saving results after each run. That makes it closer to persistent memory infrastructure than a note store. Popularity: about 339 GitHub stars.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Model providers and harnesses&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;openai&lt;/code&gt;&lt;br&gt;&lt;br&gt;
URL: &lt;code&gt;https://docs.openclaw.ai/providers/openai&lt;/code&gt;&lt;br&gt;&lt;br&gt;
The OpenAI provider remains useful mostly because OpenClaw separates direct API access via &lt;code&gt;openai/*&lt;/code&gt; from ChatGPT or Codex OAuth via &lt;code&gt;openai-codex/*&lt;/code&gt;, which avoids a lot of confusion around billing and runtime path. Popularity: bundled first-party plugin, proxy metric 359k core repo stars.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;anthropic&lt;/code&gt;&lt;br&gt;&lt;br&gt;
URL: &lt;code&gt;https://docs.openclaw.ai/providers/anthropic&lt;/code&gt;&lt;br&gt;&lt;br&gt;
Anthropic is useful because OpenClaw supports both API keys and Claude CLI reuse, while still documenting API keys as the clearest long-lived gateway path. Popularity: bundled first-party plugin, proxy metric 359k core repo stars.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;openrouter&lt;/code&gt;&lt;br&gt;&lt;br&gt;
URL: &lt;code&gt;https://docs.openclaw.ai/providers/openrouter&lt;/code&gt;&lt;br&gt;&lt;br&gt;
OpenRouter is the pragmatic aggregation plugin. It gives a single endpoint and API key for many models and defaults onboarding to &lt;code&gt;openrouter/auto&lt;/code&gt;, which makes it operationally convenient even if it is not the most opinionated route. Popularity: bundled first-party plugin, proxy metric 359k core repo stars.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;google&lt;/code&gt;&lt;br&gt;&lt;br&gt;
URL: &lt;code&gt;https://docs.openclaw.ai/providers/google&lt;/code&gt;&lt;br&gt;&lt;br&gt;
Google is more than just another text provider in OpenClaw. The plugin also brings image generation, media understanding, and web search via Gemini Grounding. Popularity: bundled first-party plugin, proxy metric 359k core repo stars.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;codex&lt;/code&gt;&lt;br&gt;&lt;br&gt;
URL: &lt;code&gt;https://docs.openclaw.ai/plugins/codex-harness&lt;/code&gt;&lt;br&gt;&lt;br&gt;
The bundled Codex harness is useful when you want the Codex app-server to own the low-level session, thread resume, compaction, and execution path, while OpenClaw still owns channels and visible transcripts. Popularity: bundled first-party plugin, proxy metric 359k core repo stars.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Dev workflows and observability&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;openclaw-codex-app-server&lt;/code&gt;&lt;br&gt;&lt;br&gt;
URL: &lt;code&gt;https://github.com/pwrdrvr/openclaw-codex-app-server&lt;/code&gt;&lt;br&gt;&lt;br&gt;
This is one of the clearest community dev-workflow wins. It binds a chat to a Codex App Server thread and exposes chat-native controls for resume, planning, review, model selection, and compaction. Popularity: about 193 GitHub stars.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;@opik/opik-openclaw&lt;/code&gt;&lt;br&gt;&lt;br&gt;
URL: &lt;code&gt;https://github.com/comet-ml/opik-openclaw&lt;/code&gt;&lt;br&gt;&lt;br&gt;
Opik is the clean observability plugin pick. It exports LLM spans, tool spans, sub-agent spans, usage, and cost metadata to Opik, and it has visible release cadence and public docs. Popularity: about 453 to 459 GitHub stars.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;manifest&lt;/code&gt;&lt;br&gt;&lt;br&gt;
URL: &lt;code&gt;https://github.com/mnfst/manifest/tree/main/packages/openclaw-plugin&lt;/code&gt;&lt;br&gt;&lt;br&gt;
Manifest matters because it combines model routing and observability in one plugin, intercepting requests to score and route them while recording costs and timings. It is one of the bigger public projects in the ecosystem, though it has also had public friction around scanner warnings and onboarding noise. Popularity: about 4.3k GitHub stars.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Voice agents and multi-step workflows&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;voice-call&lt;/code&gt;&lt;br&gt;&lt;br&gt;
URL: &lt;code&gt;https://docs.openclaw.ai/plugins/voice-call&lt;/code&gt;&lt;br&gt;&lt;br&gt;
This is the useful voice plugin, not the flashy one. It supports outbound calls, multi-turn conversations, inbound call policies, and current providers including Twilio, Telnyx, Plivo, and a mock transport. Popularity: bundled first-party plugin, proxy metric 359k core repo stars.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;webhooks&lt;/code&gt;&lt;br&gt;&lt;br&gt;
URL: &lt;code&gt;https://docs.openclaw.ai/plugins/webhooks&lt;/code&gt;&lt;br&gt;&lt;br&gt;
The Webhooks plugin is the most underrated workflow plugin because it lets trusted systems such as Zapier, n8n, CI jobs, or internal services create and drive TaskFlows over authenticated HTTP routes. It is much less glamorous than AI orchestration marketing, but much closer to how teams actually automate work. Popularity: bundled first-party plugin, proxy metric 359k core repo stars.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Lead generation, CRM, and email-calendar automation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is the part of the ecosystem where restraint is healthy. Based on the public packages and repositories I could verify, OpenClaw has promising native plugin experiments for Google Workspace and Google Calendar, and there are early CRM-oriented packages in the wider ecosystem, but the public popularity signals are still very small.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;tensorfold/openclaw-google-workspace&lt;/code&gt; presented an all-in-one Gmail, Calendar, Drive, Contacts, Tasks, and Sheets plugin but showed 0 GitHub stars. &lt;code&gt;alefsolutions/openclaw-google-calendar&lt;/code&gt; also showed 0 GitHub stars. &lt;code&gt;crm-skills-openclaw&lt;/code&gt; existed publicly, aimed at HubSpot and Salesforce, but it is a skill-oriented repo rather than a mature native plugin, and it showed a single GitHub star. That does not make these projects useless. It makes them early.&lt;/p&gt;

&lt;p&gt;There is also an interesting social-and-growth plugin direction. SendIt exposes publishing, analytics, campaigns, inbox, CRM, and workflow tools through an OpenClaw plugin plus a bundled skill pack. Publicly, though, the repo still showed 0 GitHub stars at crawl time. The honest reading is that this category is promising, but not yet popular enough to call mature.&lt;/p&gt;

&lt;p&gt;So the practical conclusion for lead generation and business automation is mildly unromantic. OpenClaw's strongest plugin-native wins today are still web access, memory, routing, channels, voice, and observability. For CRM-heavy or inbox-heavy workflows, the real-world path is still often a mix of Webhooks, a provider or browser plugin, and skills or API bridges rather than one dominant plugin package. That pattern is visible in the public ecosystem itself, and it maps directly to the plugin and skill stacks described in the &lt;a href="https://www.glukhov.org/ai-systems/openclaw/production-setup/" rel="noopener noreferrer"&gt;OpenClaw production setup guide&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bottom line
&lt;/h2&gt;

&lt;p&gt;The useful OpenClaw plugin ecosystem today is less about novelty than about operational leverage. The boring picks are still the right picks: &lt;code&gt;browser&lt;/code&gt;, &lt;code&gt;firecrawl&lt;/code&gt;, &lt;code&gt;tavily&lt;/code&gt;, &lt;code&gt;memory-lancedb&lt;/code&gt;, &lt;code&gt;memory-wiki&lt;/code&gt;, &lt;code&gt;voice-call&lt;/code&gt;, &lt;code&gt;webhooks&lt;/code&gt;, and the bundled provider plugins for OpenAI, Anthropic, Google, OpenRouter, and Codex. On the community side, &lt;code&gt;lossless-claw&lt;/code&gt;, &lt;code&gt;@opik/opik-openclaw&lt;/code&gt;, &lt;code&gt;openclaw-codex-app-server&lt;/code&gt;, &lt;code&gt;manifest&lt;/code&gt;, and &lt;code&gt;wecom&lt;/code&gt; are the clearest public packages with visible utility and public traction.&lt;/p&gt;

&lt;p&gt;When you later evaluate OpenClaw skills on the same registries, use the same hygiene as for plugins (pin versions, read manifests, treat scanning as directional). See the &lt;a href="https://www.glukhov.org/ai-systems/openclaw/skills/" rel="noopener noreferrer"&gt;OpenClaw skills guide&lt;/a&gt; for per-role stacks and a security checklist. For extension directories, keep workspace plugin roots intentional and use allowlists when you cannot trust every path on disk.&lt;/p&gt;

&lt;p&gt;The opinionated read is this. OpenClaw already has a serious native plugin platform, and extension directories give you predictable places to stage that code. Skills widen what you can publish without always widening what runs with full plugin privileges. The part that deserves trust right now is still the runtime plumbing layer for native plugins, not the long tail of business-ops demos. If you want a useful baseline rather than an aspirational one, that is the line to hold.&lt;/p&gt;

&lt;p&gt;For how these plugin choices map to real user types and production workflows, see &lt;a href="https://www.glukhov.org/ai-systems/openclaw/production-setup/" rel="noopener noreferrer"&gt;OpenClaw production setup patterns&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>selfhosting</category>
      <category>llm</category>
      <category>ai</category>
      <category>openclaw</category>
    </item>
    <item>
      <title>OpenClaw Skills Ecosystem and Practical Production Picks</title>
      <dc:creator>Rost</dc:creator>
      <pubDate>Mon, 20 Apr 2026 09:51:37 +0000</pubDate>
      <link>https://forem.com/rosgluk/openclaw-skills-ecosystem-and-practical-production-picks-2imn</link>
      <guid>https://forem.com/rosgluk/openclaw-skills-ecosystem-and-practical-production-picks-2imn</guid>
      <description>&lt;p&gt;OpenClaw has two extension stories, and they are easy to mix up.&lt;/p&gt;

&lt;p&gt;Plugins extend the runtime. Skills extend the agent's behavior.&lt;/p&gt;

&lt;p&gt;That distinction matters. A plugin adds a new capability surface such as a channel, provider, or tool integration. A skill is usually lighter. It teaches the agent how and when to use existing tools, binaries, APIs, or workflows. In practice, that makes skills the faster-moving part of the OpenClaw ecosystem, and also the noisier one.&lt;/p&gt;

&lt;p&gt;This article stays on the ecosystem and selection side. For how skills and plugins combine in practice by user type, see &lt;a href="https://www.glukhov.org/ai-systems/openclaw/production-setup/" rel="noopener noreferrer"&gt;OpenClaw production setup patterns&lt;/a&gt;. The question here is simpler and more useful: which skills are actually worth installing, how do they fit into OpenClaw, and which ones look more like noise than durable tooling.&lt;/p&gt;

&lt;p&gt;Popularity notes below use ClawHub stars and downloads as a rough snapshot on 2026-04-18.&lt;/p&gt;

&lt;h2&gt;
  
  
  What OpenClaw skills really are
&lt;/h2&gt;

&lt;p&gt;The OpenClaw skill model is elegant because it is mostly plain files.&lt;/p&gt;

&lt;p&gt;A typical skill looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;my-skill/
  SKILL.md
  scripts/
  references/
  assets/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At minimum, the skill needs &lt;code&gt;SKILL.md&lt;/code&gt;. That file contains YAML frontmatter and markdown instructions that tell the agent what the skill does, when to use it, and what tools or commands are available.&lt;/p&gt;

&lt;p&gt;A minimal example looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hello_world&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;A&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;simple&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;skill&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;that&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;says&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;hello"&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;

&lt;span class="gh"&gt;# Hello World Skill&lt;/span&gt;

Use this skill when the user wants a quick greeting.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The useful part is not the markdown itself. The useful part is how OpenClaw loads and gates skills.&lt;/p&gt;

&lt;p&gt;A skill can be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;bundled with OpenClaw&lt;/li&gt;
&lt;li&gt;installed into a workspace&lt;/li&gt;
&lt;li&gt;shared at user level&lt;/li&gt;
&lt;li&gt;scoped to an agent&lt;/li&gt;
&lt;li&gt;injected by a plugin&lt;/li&gt;
&lt;li&gt;filtered by OS, binaries, environment variables, or config&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That last point is why OpenClaw skills feel closer to operational recipes than to prompt snippets. A good skill is not only descriptive. It declares enough metadata that OpenClaw can decide whether it should even be visible.&lt;/p&gt;

&lt;p&gt;In other words, the system is more disciplined than the average public "prompt pack" marketplace.&lt;/p&gt;

&lt;h2&gt;
  
  
  OpenClaw skill locations and structure
&lt;/h2&gt;

&lt;p&gt;OpenClaw uses a precedence model rather than a single global skills folder.&lt;/p&gt;

&lt;p&gt;In practice, the highest value locations are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;&amp;lt;workspace&amp;gt;/skills&lt;/code&gt; for project-specific overrides&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;&amp;lt;workspace&amp;gt;/.agents/skills&lt;/code&gt; for project agent skills&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;~/.agents/skills&lt;/code&gt; for personal agent skills&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;~/.openclaw/skills&lt;/code&gt; for shared local skills&lt;/li&gt;
&lt;li&gt;bundled skills shipped with the install&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That layout is one of the better design decisions in OpenClaw. It makes skills overrideable without editing the upstream install, and it keeps local customization from turning into a dirty fork.&lt;/p&gt;

&lt;p&gt;It also means skill visibility and skill location are separate concerns.&lt;/p&gt;

&lt;p&gt;A skill can exist locally and still be blocked from a given agent. That happens through skill allowlists in &lt;code&gt;agents.defaults.skills&lt;/code&gt; and &lt;code&gt;agents.list[].skills&lt;/code&gt;. For production, that separation is more important than the marketplace itself. It is what stops every agent from receiving every possible workflow.&lt;/p&gt;
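
&lt;p&gt;As an illustrative sketch only, an allowlist split between defaults and a single agent might look like this. The &lt;code&gt;agents.defaults.skills&lt;/code&gt; and &lt;code&gt;agents.list[].skills&lt;/code&gt; paths are the documented knobs; the surrounding JSON shape and the agent &lt;code&gt;id&lt;/code&gt; field are assumptions for illustration, so check the current config schema before copying it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "agents": {
    "defaults": {
      "skills": ["github", "tavily-search"]
    },
    "list": [
      {
        "id": "research",
        "skills": ["tavily-search", "academic-deep-research"]
      }
    ]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The shape, not the exact keys, is the point: defaults set a baseline allowlist, and per-agent entries keep any one agent from receiving every workflow on disk.&lt;/p&gt;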

&lt;p&gt;There are also a few frontmatter flags worth remembering:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;user-invocable&lt;/code&gt; exposes a slash command&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;disable-model-invocation&lt;/code&gt; keeps the skill out of the model prompt while still allowing explicit invocation&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;command-dispatch&lt;/code&gt; and &lt;code&gt;command-tool&lt;/code&gt; can bypass model reasoning and call a tool directly&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;metadata.openclaw.requires.*&lt;/code&gt; can gate a skill on binaries, env vars, OS, or config&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is enough structure to make skills powerful, but also enough rope to create fragile packages if the metadata is sloppy.&lt;/p&gt;
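
&lt;p&gt;To make those flags concrete, here is a hedged frontmatter sketch. The flag names come from the list above; the nested &lt;code&gt;requires&lt;/code&gt; field names such as &lt;code&gt;bins&lt;/code&gt; are assumptions, so verify them against the current docs before shipping a skill that depends on them.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
name: release_notes
description: "Drafts release notes from recent git history"
user-invocable: true
disable-model-invocation: true
metadata:
  openclaw:
    requires:
      bins: ["git"]   # assumed field shape: gate on a binary being present
---

# Release Notes Skill

Use this skill when the user explicitly asks for release notes.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Read together, the flags say: stay out of the model prompt, remain reachable as an explicit slash command, and do not load at all on a machine without &lt;code&gt;git&lt;/code&gt;.&lt;/p&gt;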

&lt;h2&gt;
  
  
  Where to get OpenClaw skills
&lt;/h2&gt;

&lt;p&gt;For practical use, there are three real sources.&lt;/p&gt;

&lt;h3&gt;
  
  
  ClawHub
&lt;/h3&gt;

&lt;p&gt;ClawHub is the official public registry for OpenClaw skills and plugins. It is the default place to search, install, update, inspect versions, and see lightweight community signals such as stars and downloads.&lt;/p&gt;

&lt;p&gt;If you only pick one source, use ClawHub.&lt;/p&gt;

&lt;h3&gt;
  
  
  Bundled skills
&lt;/h3&gt;

&lt;p&gt;OpenClaw ships with bundled skills inside the install. These are lower friction, but the list is naturally smaller than the public registry.&lt;/p&gt;

&lt;p&gt;Bundled skills are the closest thing the ecosystem has to a supported baseline.&lt;/p&gt;

&lt;h3&gt;
  
  
  Local and Git based skills
&lt;/h3&gt;

&lt;p&gt;You can also keep skills in your own workspace or user folders, or pull them from public repositories.&lt;/p&gt;

&lt;p&gt;This is useful for private skills, experiments, and local overrides.&lt;/p&gt;

&lt;p&gt;There is also a public archived repository of registry skills on GitHub. It is useful as an audit trail, not as the first place to install from. Treat it as a historical dump and inspection surface, not as a curated store.&lt;/p&gt;

&lt;p&gt;Community discovery layers such as awesome lists and filtered indexes are now part of the ecosystem as well. That is a signal in itself. Once a marketplace gets large enough, secondary curation becomes necessary.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to install, update, and remove skills
&lt;/h2&gt;

&lt;p&gt;The normal install flow is through the OpenClaw CLI.&lt;/p&gt;

&lt;h3&gt;
  
  
  Search
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openclaw skills search &lt;span class="s2"&gt;"calendar"&lt;/span&gt;
openclaw skills search &lt;span class="s2"&gt;"github"&lt;/span&gt;
openclaw skills search &lt;span class="nt"&gt;--limit&lt;/span&gt; 20 &lt;span class="nt"&gt;--json&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Install
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openclaw skills &lt;span class="nb"&gt;install&lt;/span&gt; &amp;lt;skill-slug&amp;gt;
openclaw skills &lt;span class="nb"&gt;install&lt;/span&gt; &amp;lt;skill-slug&amp;gt; &lt;span class="nt"&gt;--version&lt;/span&gt; &amp;lt;version&amp;gt;
openclaw skills &lt;span class="nb"&gt;install&lt;/span&gt; &amp;lt;skill-slug&amp;gt; &lt;span class="nt"&gt;--force&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By default, &lt;code&gt;openclaw skills install&lt;/code&gt; places the skill into the active workspace &lt;code&gt;skills/&lt;/code&gt; directory.&lt;/p&gt;

&lt;h3&gt;
  
  
  Update
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openclaw skills update &amp;lt;skill-slug&amp;gt;
openclaw skills update &lt;span class="nt"&gt;--all&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Inspect and validate
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openclaw skills list
openclaw skills list &lt;span class="nt"&gt;--eligible&lt;/span&gt;
openclaw skills info &amp;lt;name&amp;gt;
openclaw skills check
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Install with the dedicated ClawHub CLI
&lt;/h3&gt;

&lt;p&gt;If you publish skills, sync local folders, or want registry-specific workflows, use the separate &lt;code&gt;clawhub&lt;/code&gt; CLI.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm i &lt;span class="nt"&gt;-g&lt;/span&gt; clawhub

clawhub search &lt;span class="s2"&gt;"research"&lt;/span&gt;
clawhub &lt;span class="nb"&gt;install&lt;/span&gt; &amp;lt;skill-slug&amp;gt;
clawhub update &lt;span class="nt"&gt;--all&lt;/span&gt;
clawhub skill publish ./my-skill &lt;span class="nt"&gt;--slug&lt;/span&gt; my-skill &lt;span class="nt"&gt;--name&lt;/span&gt; &lt;span class="s2"&gt;"My Skill"&lt;/span&gt; &lt;span class="nt"&gt;--version&lt;/span&gt; 1.0.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The dedicated CLI writes a &lt;code&gt;.clawhub/lock.json&lt;/code&gt; file in the working directory, which is useful for tracking what came from the registry.&lt;/p&gt;

&lt;h3&gt;
  
  
  Removal
&lt;/h3&gt;

&lt;p&gt;This part is less polished than installation.&lt;/p&gt;

&lt;p&gt;OpenClaw documents install and update flows for skills, but not a dedicated &lt;code&gt;openclaw skills uninstall&lt;/code&gt; command. In practice, removal is filesystem based.&lt;/p&gt;

&lt;p&gt;If a skill was installed into the workspace, remove its folder from &lt;code&gt;&amp;lt;workspace&amp;gt;/skills&lt;/code&gt;, then start a new session.&lt;/p&gt;
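
&lt;p&gt;A minimal sketch of that removal flow, assuming a workspace-installed skill named &lt;code&gt;my-skill&lt;/code&gt; (the name is hypothetical; match the folder the install actually created):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# remove the skill folder from the active workspace
rm -rf &amp;lt;workspace&amp;gt;/skills/my-skill

# confirm it no longer appears, then start a new session
openclaw skills list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;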

&lt;p&gt;If you want the skill to stay present but not be usable by a given agent, use skill allowlists instead of deletion.&lt;/p&gt;

&lt;p&gt;That sounds a little manual because it is. The skill system is clean. The lifecycle UX is still catching up.&lt;/p&gt;

&lt;h2&gt;
  
  
  Maturity, reliability, community, and support
&lt;/h2&gt;

&lt;p&gt;The skill system is mature enough to be real, but not mature enough to be calm.&lt;/p&gt;

&lt;p&gt;That is the shortest honest summary.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is mature
&lt;/h3&gt;

&lt;p&gt;The underlying model is solid.&lt;/p&gt;

&lt;p&gt;Skills are plain files, easy to inspect, easy to override, easy to version, and flexible enough to express both tiny instruction packs and fairly serious task helpers. OpenClaw also separates visibility, precedence, and runtime gating in a way that feels intentionally designed rather than bolted on.&lt;/p&gt;

&lt;p&gt;The community signal is also real. OpenClaw itself is one of the most visible open source AI agent projects right now, and the skill ecosystem is large enough that third-party curation has already appeared.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is not mature
&lt;/h3&gt;

&lt;p&gt;Registry quality is uneven.&lt;/p&gt;

&lt;p&gt;The interesting issue is not whether a skill can work. Many do. The issue is whether the packaging, metadata, secret handling, and trust story are coherent.&lt;/p&gt;

&lt;p&gt;A good OpenClaw skill is narrow, boring, and inspectable.&lt;/p&gt;

&lt;p&gt;A weak OpenClaw skill usually has one or more of these problems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;metadata that does not match what the skill actually needs&lt;/li&gt;
&lt;li&gt;hidden or undocumented environment variables&lt;/li&gt;
&lt;li&gt;third-party taps or installers with thin provenance&lt;/li&gt;
&lt;li&gt;broad account access for a narrow task&lt;/li&gt;
&lt;li&gt;hooks that quietly become default behavior&lt;/li&gt;
&lt;li&gt;an impressive pitch with very little durable workflow value&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is why "most downloaded" is not the same thing as "production-ready".&lt;/p&gt;

&lt;h3&gt;
  
  
  Support reality
&lt;/h3&gt;

&lt;p&gt;Support comes from a mix of places:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;official docs&lt;/li&gt;
&lt;li&gt;ClawHub metadata and scan pages&lt;/li&gt;
&lt;li&gt;GitHub issues and repository history&lt;/li&gt;
&lt;li&gt;community comments and curation lists&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is enough for active operators. It is not the same as enterprise support.&lt;/p&gt;

&lt;p&gt;If you need predictable ownership and response times, the skill ecosystem still feels more open source registry than platform contract.&lt;/p&gt;

&lt;h2&gt;
  
  
  Security concerns are not optional
&lt;/h2&gt;

&lt;p&gt;OpenClaw is powerful because it can act.&lt;/p&gt;

&lt;p&gt;That also means skills should be treated as code, not decoration.&lt;/p&gt;

&lt;p&gt;The official security posture already hints at the correct mental model. Run the gateway on a dedicated machine, VM, or container. Use a dedicated OS user. Keep personal accounts and browser profiles away from that runtime. Restrict high-risk tools. Treat links, attachments, and pasted instructions as hostile by default.&lt;/p&gt;

&lt;p&gt;That guidance becomes more important, not less, once skills enter the picture.&lt;/p&gt;

&lt;p&gt;The ClawHub moderation story has improved, but it is still fundamentally a public registry. Skills can be reported, hidden, deleted, and scanned. Publishing now has some basic controls. But the high-level lesson from recent incidents is obvious: a public skill registry attracts malware quickly.&lt;/p&gt;

&lt;p&gt;The right filter is simple:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;instruction-only skills are usually lower risk&lt;/li&gt;
&lt;li&gt;small helper scripts can be fine if metadata and provenance are clean&lt;/li&gt;
&lt;li&gt;hooks deserve extra scrutiny&lt;/li&gt;
&lt;li&gt;skills that touch sensitive accounts need the highest bar&lt;/li&gt;
&lt;li&gt;any scan flag should matter more than social hype&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Popularity is not a trust signal. At best, it is a hint that a skill solved a real problem for many people.&lt;/p&gt;

&lt;h2&gt;
  
  
  The most useful OpenClaw skills right now
&lt;/h2&gt;

&lt;p&gt;The most useful skills are not the flashiest ones. They are the ones that make repeated workflows cheaper, clearer, or safer.&lt;/p&gt;

&lt;p&gt;My filter here is opinionated:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;narrow scope beats broad promise&lt;/li&gt;
&lt;li&gt;inspectable beats magical&lt;/li&gt;
&lt;li&gt;local or transparent beats opaque proxying&lt;/li&gt;
&lt;li&gt;workflow value beats novelty&lt;/li&gt;
&lt;li&gt;clean packaging beats vibes&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Safety and self correction
&lt;/h3&gt;

&lt;p&gt;These are the least glamorous skills in the ecosystem, which is exactly why they matter.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Skill&lt;/th&gt;
&lt;th&gt;URL&lt;/th&gt;
&lt;th&gt;What it does&lt;/th&gt;
&lt;th&gt;Why it is useful&lt;/th&gt;
&lt;th&gt;Popularity&lt;/th&gt;
&lt;th&gt;Scan note&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;self-improving-agent&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/pskoett/self-improving-agent" rel="noopener noreferrer"&gt;https://clawhub.ai/pskoett/self-improving-agent&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Captures learnings, errors, and corrections for future runs&lt;/td&gt;
&lt;td&gt;One of the few skills that improves repeat work instead of adding another endpoint&lt;/td&gt;
&lt;td&gt;3.2k stars, 396k downloads&lt;/td&gt;
&lt;td&gt;benign&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Skill Vetter 1.0.0&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/fedrov2025/skill-vetter-1-0-0" rel="noopener noreferrer"&gt;https://clawhub.ai/fedrov2025/skill-vetter-1-0-0&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Reviews other skills for red flags before install&lt;/td&gt;
&lt;td&gt;The ecosystem needed this kind of check very early, which is itself telling&lt;/td&gt;
&lt;td&gt;9 stars, 7.3k downloads&lt;/td&gt;
&lt;td&gt;benign&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The first one is popular for a reason. It is not a gimmick. It creates a feedback loop around failure, which is one of the few things that consistently pays off in agent systems.&lt;/p&gt;

&lt;p&gt;The second one is not popular in absolute terms, but it is one of the most sensible installs you can add if you plan to browse ClawHub regularly.&lt;/p&gt;

&lt;h3&gt;
  
  
  Search and research
&lt;/h3&gt;

&lt;p&gt;Search skills are where OpenClaw gets genuinely useful, but also where packaging quality varies a lot.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Skill&lt;/th&gt;
&lt;th&gt;URL&lt;/th&gt;
&lt;th&gt;What it does&lt;/th&gt;
&lt;th&gt;Why it is useful&lt;/th&gt;
&lt;th&gt;Popularity&lt;/th&gt;
&lt;th&gt;Scan note&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Multi Search Engine&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/gpyangyoujun/multi-search-engine" rel="noopener noreferrer"&gt;https://clawhub.ai/gpyangyoujun/multi-search-engine&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Aggregates 16 search engines with operators and time filters&lt;/td&gt;
&lt;td&gt;Better than single-engine skills when you need broad recall&lt;/td&gt;
&lt;td&gt;566 stars, 121k downloads&lt;/td&gt;
&lt;td&gt;benign&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Tavily Search&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/matthew77/liang-tavily-search" rel="noopener noreferrer"&gt;https://clawhub.ai/matthew77/liang-tavily-search&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Tavily backed web search with snippets and metadata&lt;/td&gt;
&lt;td&gt;Clean, narrow, and easy to reason about&lt;/td&gt;
&lt;td&gt;92 stars, 36.2k downloads&lt;/td&gt;
&lt;td&gt;benign&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Academic Deep Research&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/kesslerio/academic-deep-research" rel="noopener noreferrer"&gt;https://clawhub.ai/kesslerio/academic-deep-research&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Forces multi-cycle research with an explicit method&lt;/td&gt;
&lt;td&gt;Good when you want structure, not just a quick answer&lt;/td&gt;
&lt;td&gt;53 stars, 17.2k downloads&lt;/td&gt;
&lt;td&gt;benign&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The strongest pattern here is that method often beats breadth.&lt;/p&gt;

&lt;p&gt;Multi Search Engine is the broad utility pick. Tavily Search is the cleaner service-backed pick. Academic Deep Research is the process pick. None of them are flashy. All of them can be useful.&lt;/p&gt;

&lt;h3&gt;
  
  
  Developer workflows
&lt;/h3&gt;

&lt;p&gt;This is the most obviously valuable category for technical readers.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Skill&lt;/th&gt;
&lt;th&gt;URL&lt;/th&gt;
&lt;th&gt;What it does&lt;/th&gt;
&lt;th&gt;Why it is useful&lt;/th&gt;
&lt;th&gt;Popularity&lt;/th&gt;
&lt;th&gt;Scan note&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Github&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/steipete/github" rel="noopener noreferrer"&gt;https://clawhub.ai/steipete/github&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Uses the &lt;code&gt;gh&lt;/code&gt; CLI for issues, PRs, runs, and API calls&lt;/td&gt;
&lt;td&gt;One of the cleanest examples of a skill that maps directly to a real CLI&lt;/td&gt;
&lt;td&gt;514 stars, 159k downloads&lt;/td&gt;
&lt;td&gt;benign&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Agent Browser&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/matrixy/agent-browser-clawdbot" rel="noopener noreferrer"&gt;https://clawhub.ai/matrixy/agent-browser-clawdbot&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Headless browser automation with snapshots and refs&lt;/td&gt;
&lt;td&gt;Useful for tests, admin flows, and web tasks that are too awkward for plain fetch&lt;/td&gt;
&lt;td&gt;323 stars, 90.1k downloads&lt;/td&gt;
&lt;td&gt;benign&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Opencode-controller&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/karatla/opencode-controller" rel="noopener noreferrer"&gt;https://clawhub.ai/karatla/opencode-controller&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Controls Opencode sessions, agents, and models&lt;/td&gt;
&lt;td&gt;Practical if Opencode is already part of your workflow&lt;/td&gt;
&lt;td&gt;72 stars, 17.9k downloads&lt;/td&gt;
&lt;td&gt;benign&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The GitHub skill is the kind of skill the ecosystem should have more of. It is boring, direct, and tied to a tool developers already know.&lt;/p&gt;

&lt;p&gt;Agent Browser is more powerful, but also deserves more care. Browser state files, cookies, and page context are real data surfaces. That does not make the skill bad. It makes it operational.&lt;/p&gt;

&lt;h3&gt;
  
  
  Memory and knowledge
&lt;/h3&gt;

&lt;p&gt;This category is more valuable than it looks at first glance.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Skill&lt;/th&gt;
&lt;th&gt;URL&lt;/th&gt;
&lt;th&gt;What it does&lt;/th&gt;
&lt;th&gt;Why it is useful&lt;/th&gt;
&lt;th&gt;Popularity&lt;/th&gt;
&lt;th&gt;Scan note&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;ontology&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/oswalpalash/ontology" rel="noopener noreferrer"&gt;https://clawhub.ai/oswalpalash/ontology&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Typed knowledge graph for local structured memory&lt;/td&gt;
&lt;td&gt;One of the strongest memory-oriented skills I found&lt;/td&gt;
&lt;td&gt;539 stars, 166k downloads&lt;/td&gt;
&lt;td&gt;benign&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Academic Deep Research&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/kesslerio/academic-deep-research" rel="noopener noreferrer"&gt;https://clawhub.ai/kesslerio/academic-deep-research&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Research workflow with explicit evidence handling&lt;/td&gt;
&lt;td&gt;Useful as a temporary method layer when memory quality matters&lt;/td&gt;
&lt;td&gt;53 stars, 17.2k downloads&lt;/td&gt;
&lt;td&gt;benign&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The ontology skill stands out because it treats memory as structure rather than as note accumulation. That is a stronger long-term direction for agent systems than endlessly appending summaries.&lt;/p&gt;

&lt;h3&gt;
  
  
  Workspace and personal productivity
&lt;/h3&gt;

&lt;p&gt;This is the most uneven category. It contains genuinely useful skills, but also some of the most obvious metadata mismatches.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Skill&lt;/th&gt;
&lt;th&gt;URL&lt;/th&gt;
&lt;th&gt;What it does&lt;/th&gt;
&lt;th&gt;Why it is useful&lt;/th&gt;
&lt;th&gt;Popularity&lt;/th&gt;
&lt;th&gt;Scan note&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Gog&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/steipete/gog" rel="noopener noreferrer"&gt;https://clawhub.ai/steipete/gog&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Google Workspace CLI for Gmail, Calendar, Drive, Sheets, Docs&lt;/td&gt;
&lt;td&gt;Very practical if your work already lives in Google Workspace&lt;/td&gt;
&lt;td&gt;839 stars, 157k downloads&lt;/td&gt;
&lt;td&gt;suspicious&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Notion&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/steipete/notion" rel="noopener noreferrer"&gt;https://clawhub.ai/steipete/notion&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Notion API helper for pages, blocks, and databases&lt;/td&gt;
&lt;td&gt;Useful in theory and often useful in practice, but packaging details matter&lt;/td&gt;
&lt;td&gt;229 stars, 77.4k downloads&lt;/td&gt;
&lt;td&gt;suspicious&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Openai Whisper&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/steipete/openai-whisper" rel="noopener noreferrer"&gt;https://clawhub.ai/steipete/openai-whisper&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Local Whisper CLI transcription&lt;/td&gt;
&lt;td&gt;One of the best examples of a narrow, useful local skill&lt;/td&gt;
&lt;td&gt;274 stars, 70k downloads&lt;/td&gt;
&lt;td&gt;benign&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;This is where the ecosystem gets interesting.&lt;/p&gt;

&lt;p&gt;Gog is clearly useful. It is also a good example of why utility and trust are separate questions. The current scan notes point out metadata mismatches around binaries and credentials. That does not automatically make it malicious. It does make it a skill to inspect before granting account access.&lt;/p&gt;
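&lt;p&gt;That kind of inspection does not need special tooling. A minimal sketch of the pass I mean, assuming the skill has been unpacked to a local directory first — the &lt;code&gt;~/.openclaw/skills&lt;/code&gt; path is an assumption, not a documented location:&lt;/p&gt;

```shell
# Minimal pre-install review pass over an unpacked skill directory.
# The default path below is an assumption -- point it at wherever your
# setup actually stores downloaded skills before you enable them.
inspect_skill() {
  dir="$1"
  if [ ! -d "$dir" ]; then
    echo "skill dir not found: $dir"
    return 0
  fi
  echo "-- files with the executable bit (binaries have no place in an instruction-only skill):"
  find "$dir" -type f -perm -u+x
  echo "-- files containing credential-shaped strings:"
  grep -rliE 'api[_-]?key|secret|token' "$dir" || true
}

inspect_skill "$HOME/.openclaw/skills/gog"
```

&lt;p&gt;Anything that surfaces here is not proof of malice. It is exactly the metadata mismatch the scan notes describe, and worth reading before the skill gets account access.&lt;/p&gt;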

&lt;p&gt;Notion sits in the same category. Good workflow value. Messier packaging story.&lt;/p&gt;

&lt;p&gt;Openai Whisper is the opposite. It is limited, local, and refreshingly straightforward.&lt;/p&gt;

&lt;h2&gt;
  
  
  The skills I would not rush to install
&lt;/h2&gt;

&lt;p&gt;Some skills are popular for understandable reasons and still do not make my first-pass list.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Skill&lt;/th&gt;
&lt;th&gt;URL&lt;/th&gt;
&lt;th&gt;Why I would wait&lt;/th&gt;
&lt;th&gt;Popularity&lt;/th&gt;
&lt;th&gt;Scan note&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Desktop Control&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/matagul/desktop-control" rel="noopener noreferrer"&gt;https://clawhub.ai/matagul/desktop-control&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Powerful enough to matter, but current scan status is a red flag and the capability is sensitive by design&lt;/td&gt;
&lt;td&gt;299 stars, 47.7k downloads&lt;/td&gt;
&lt;td&gt;suspicious&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Baidu web search&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/ide-rea/baidu-search" rel="noopener noreferrer"&gt;https://clawhub.ai/ide-rea/baidu-search&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Good idea, but undocumented env vars and metadata gaps are exactly the kind of sloppiness that should slow you down&lt;/td&gt;
&lt;td&gt;203 stars, 79.2k downloads&lt;/td&gt;
&lt;td&gt;suspicious&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Obsidian&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/steipete/obsidian" rel="noopener noreferrer"&gt;https://clawhub.ai/steipete/obsidian&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;High utility, but current scan notes call out mismatched metadata and undeclared file access&lt;/td&gt;
&lt;td&gt;333 stars, 82.5k downloads&lt;/td&gt;
&lt;td&gt;suspicious&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;That is the larger pattern in one table.&lt;/p&gt;

&lt;p&gt;High download counts do not erase packaging problems.&lt;/p&gt;

&lt;h2&gt;
  
  
  The real shape of the OpenClaw skills ecosystem
&lt;/h2&gt;

&lt;p&gt;The OpenClaw skills ecosystem is already big enough to be useful and already noisy enough to need curation.&lt;/p&gt;

&lt;p&gt;That is usually the moment an ecosystem becomes real.&lt;/p&gt;

&lt;p&gt;The good news is that the underlying skill format is strong. Skills are inspectable. Overrides are clean. Precedence is sensible. Gating is practical. ClawHub provides versioning, discovery, stars, downloads, comments, and basic moderation.&lt;/p&gt;

&lt;p&gt;The bad news is that public registries move faster than trust models.&lt;/p&gt;

&lt;p&gt;If you want the short opinionated take, it is this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the skill system is better than the average AI marketplace&lt;/li&gt;
&lt;li&gt;the registry is more useful than safe by default&lt;/li&gt;
&lt;li&gt;the best skills are small, specific, and operationally boring&lt;/li&gt;
&lt;li&gt;suspicious metadata is not a cosmetic issue&lt;/li&gt;
&lt;li&gt;"popular" should never outrank "inspectable"&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Final take
&lt;/h2&gt;

&lt;p&gt;If I were trimming OpenClaw skills down to the set that looks most durable right now, I would start with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;self-improving-agent&lt;/li&gt;
&lt;li&gt;Skill Vetter&lt;/li&gt;
&lt;li&gt;Github&lt;/li&gt;
&lt;li&gt;Multi Search Engine&lt;/li&gt;
&lt;li&gt;Tavily Search&lt;/li&gt;
&lt;li&gt;Academic Deep Research&lt;/li&gt;
&lt;li&gt;ontology&lt;/li&gt;
&lt;li&gt;Openai Whisper&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then I would consider Gog and Notion only after a manual review of current metadata, source, and secret handling.&lt;/p&gt;

&lt;p&gt;That is probably the right framing for the entire OpenClaw skills ecosystem in 2026.&lt;/p&gt;

&lt;p&gt;The good part is already very good.&lt;/p&gt;

&lt;p&gt;The safe part still requires an adult in the room.&lt;/p&gt;




&lt;p&gt;For how skills combine with plugins in real deployments by user type, see &lt;a href="https://www.glukhov.org/ai-systems/openclaw/production-setup/" rel="noopener noreferrer"&gt;OpenClaw production setup patterns&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For the plugin layer those skills depend on, see &lt;a href="https://www.glukhov.org/ai-systems/openclaw/plugins/" rel="noopener noreferrer"&gt;OpenClaw plugins guide&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>selfhosting</category>
      <category>openclaw</category>
      <category>llm</category>
      <category>ai</category>
    </item>
    <item>
      <title>OpenClaw Production Setup Patterns with Plugins and Skills</title>
      <dc:creator>Rost</dc:creator>
      <pubDate>Mon, 20 Apr 2026 09:51:33 +0000</pubDate>
      <link>https://forem.com/rosgluk/openclaw-production-setup-patterns-with-plugins-and-skills-1jfj</link>
      <guid>https://forem.com/rosgluk/openclaw-production-setup-patterns-with-plugins-and-skills-1jfj</guid>
      <description>&lt;p&gt;OpenClaw looks simple in demos.&lt;br&gt;
In production, it becomes a system.&lt;/p&gt;

&lt;p&gt;The real complexity is not in prompts or models. It is in how plugins and skills interact to manage state, integrate systems, and execute workflows over time.&lt;/p&gt;

&lt;p&gt;A useful mental model:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Plugins = capabilities&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
APIs, memory, tools, integrations&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Skills = behavior&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
How the agent uses those capabilities in structured ways&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Production systems fail when these two are mixed without boundaries.&lt;/p&gt;

&lt;p&gt;They become reliable when both are mapped to real user needs.&lt;/p&gt;




&lt;h2&gt;
  
  
  How to think about production setup
&lt;/h2&gt;

&lt;p&gt;Most teams ask what plugins or skills they should install.&lt;/p&gt;

&lt;p&gt;That is the wrong starting point.&lt;/p&gt;

&lt;p&gt;A better question is:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Who is this system for, and what work are they trying to complete?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Each user type creates a different architecture:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;developers need control and traceability&lt;/li&gt;
&lt;li&gt;automation users need triggers and determinism&lt;/li&gt;
&lt;li&gt;researchers need memory and retrieval&lt;/li&gt;
&lt;li&gt;support teams need continuity and communication&lt;/li&gt;
&lt;li&gt;growth teams need pipelines and data flow&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Plugins enable these systems.&lt;br&gt;&lt;br&gt;
Skills make them usable.&lt;/p&gt;

&lt;p&gt;The combination of both, tailored to a real user profile, is what separates a production system from a demo.&lt;/p&gt;




&lt;h2&gt;
  
  
  Installation and lifecycle note
&lt;/h2&gt;

&lt;p&gt;This article focuses on architecture patterns and user-specific configurations.&lt;/p&gt;

&lt;p&gt;For full installation and lifecycle details see:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.glukhov.org/ai-systems/openclaw/plugins/" rel="noopener noreferrer"&gt;OpenClaw plugins guide&lt;/a&gt; — plugin installation, extension directories, CLI lifecycle, and mature picks&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.glukhov.org/ai-systems/openclaw/skills/" rel="noopener noreferrer"&gt;OpenClaw skills guide&lt;/a&gt; — ClawHub discovery, install and removal flows, security tradeoffs, and per-role stacks&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.glukhov.org/ai-systems/openclaw/quickstart/" rel="noopener noreferrer"&gt;OpenClaw quickstart&lt;/a&gt; — Docker-based installation with Ollama GPU or Claude&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In production, both plugins and skills should be treated as dependencies with version control, review, and rollback strategies.&lt;/p&gt;
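&lt;p&gt;One low-ceremony way to get review and rollback without extra infrastructure is to keep the extension directory under plain git. A sketch, assuming your plugins and skills live under a single directory — the default path here is a guess, not a documented location:&lt;/p&gt;

```shell
# Snapshot the extension directory with git so plugin and skill changes can be
# reviewed as diffs and rolled back. The default path is an assumption.
snapshot_extensions() {
  dir="${1:-$HOME/.openclaw/extensions}"
  mkdir -p "$dir"
  git -C "$dir" init -q
  git -C "$dir" add -A
  git -C "$dir" -c user.email=ops@local -c user.name=ops \
    commit -qm "snapshot before plugin changes" --allow-empty
}

# Typical flow: snapshot, install or update, review, keep or revert.
#   snapshot_extensions
#   openclaw plugins install memory-lancedb
#   git -C "$HOME/.openclaw/extensions" status        # review what changed
#   git -C "$HOME/.openclaw/extensions" reset --hard  # drop modified files
#   git -C "$HOME/.openclaw/extensions" clean -fd     # drop added files
```

&lt;p&gt;It is not a package manager, but it makes every plugin change a diff you can read and a commit you can revert.&lt;/p&gt;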




&lt;h2&gt;
  
  
  1. The Developer Workflow User
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Profile
&lt;/h3&gt;

&lt;p&gt;This user treats OpenClaw as an execution layer for development workflows.&lt;/p&gt;

&lt;p&gt;Not just code generation, but:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;debugging&lt;/li&gt;
&lt;li&gt;iteration&lt;/li&gt;
&lt;li&gt;multi-step reasoning&lt;/li&gt;
&lt;li&gt;repository interaction&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The system is expected to remember decisions, track changes, and make its reasoning visible.&lt;/p&gt;

&lt;h3&gt;
  
  
  Core needs
&lt;/h3&gt;

&lt;p&gt;The key requirement is continuity and visibility.&lt;/p&gt;

&lt;p&gt;Developers need to understand:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;what the system did&lt;/li&gt;
&lt;li&gt;why it did it&lt;/li&gt;
&lt;li&gt;how to reproduce or fix it&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without structured memory, every session starts from scratch. Without observability, failures are invisible and expensive to diagnose.&lt;/p&gt;




&lt;h3&gt;
  
  
  Typical OpenClaw Plugin Set
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;model providers&lt;br&gt;&lt;br&gt;
openai, anthropic, openrouter for fallback routing&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;memory and context&lt;br&gt;&lt;br&gt;
memory lancedb, lossless claw&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;dev workflow&lt;br&gt;&lt;br&gt;
codex app server, codex harness&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;observability&lt;br&gt;&lt;br&gt;
opik openclaw, manifest&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Why this helps
&lt;/h4&gt;

&lt;p&gt;Plugins transform OpenClaw into a controlled execution environment.&lt;/p&gt;

&lt;p&gt;Memory lancedb and lossless claw preserve context across iterations, so the system does not reset its understanding every few turns. Lossless context plugins are especially valuable here because they preserve intent rather than raw tokens.&lt;/p&gt;

&lt;p&gt;Codex plugins move the agent from passive assistant to active participant. They enable real execution, validation, and iteration on code rather than static responses.&lt;/p&gt;

&lt;p&gt;Observability completes the picture. It answers what happened, which is often more important than the output itself. Without this layer, the system feels intelligent but remains unreliable in practice.&lt;/p&gt;




&lt;h3&gt;
  
  
  Typical OpenClaw Skill Set
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Skill&lt;/th&gt;
&lt;th&gt;Link&lt;/th&gt;
&lt;th&gt;Why it helps&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;github&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/steipete/github" rel="noopener noreferrer"&gt;clawhub.ai/steipete/github&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Best day-to-day control plane for issues, PRs, CI status, and &lt;code&gt;gh&lt;/code&gt; API queries. Instruction-only and low risk. 517 stars, 159k downloads.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;tmux&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/steipete/tmux" rel="noopener noreferrer"&gt;clawhub.ai/steipete/tmux&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Keeps long-running builds, test servers, and agent-driven shells from collapsing into one fragile terminal. 38 stars, 22.5k downloads.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;session-logs&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/guogang1024/session-logs" rel="noopener noreferrer"&gt;clawhub.ai/guogang1024/session-logs&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Turns prior agent sessions into searchable operational memory. Answers "what did the agent actually do yesterday". 22 stars, 30.9k downloads.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;model-usage&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/steipete/model-usage" rel="noopener noreferrer"&gt;clawhub.ai/steipete/model-usage&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Local model cost breakdowns by model rather than a vague monthly bill. 101 stars, 32k downloads.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;nano-pdf&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/steipete/nano-pdf" rel="noopener noreferrer"&gt;clawhub.ai/steipete/nano-pdf&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Handles release notes, partner decks, and PDF patching without context switching. 220 stars, 91.5k downloads.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;openclaw-token-optimizer&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/asif2bd/openclaw-token-optimizer" rel="noopener noreferrer"&gt;clawhub.ai/asif2bd/openclaw-token-optimizer&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Workspace-level token and cost hygiene when usage creeps up from overpowered defaults. 28 stars, 9.4k downloads.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;openclaw-skill-vetter&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/donovanpankratz-del/openclaw-skill-vetter" rel="noopener noreferrer"&gt;clawhub.ai/donovanpankratz-del/openclaw-skill-vetter&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Pre-install review checklist for suspicious community skills and risky bundles. 24 stars, 17.4k downloads.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h4&gt;
  
  
  Why this helps
&lt;/h4&gt;

&lt;p&gt;Skills define how developers actually work with the system.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;github skill enables real repository workflows instead of manual copy-paste&lt;/li&gt;
&lt;li&gt;tmux allows long-running or parallel agent tasks without session loss&lt;/li&gt;
&lt;li&gt;session-logs provide operational memory beyond the chat window&lt;/li&gt;
&lt;li&gt;model-usage and token-optimizer expose cost and performance patterns&lt;/li&gt;
&lt;li&gt;skill-vetter adds package-review discipline before any community install&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Plugins give capability. Skills turn that into repeatable engineering workflows.&lt;/p&gt;




&lt;h3&gt;
  
  
  How plugins and skills together serve the developer
&lt;/h3&gt;

&lt;p&gt;The plugin layer provides the raw infrastructure: persistent memory, code execution, and observability.&lt;/p&gt;

&lt;p&gt;The skill layer structures how a developer actually interacts with that infrastructure day to day.&lt;/p&gt;

&lt;p&gt;A developer with codex plugins but no github skill has execution power without workflow integration. A developer with session-logs but no memory plugin has audit trails without cross-session context.&lt;/p&gt;

&lt;p&gt;The combination is what makes the system feel like a reliable collaborator rather than an unpredictable assistant.&lt;/p&gt;

&lt;p&gt;For more on skill selection and security review for this profile see the &lt;a href="https://www.glukhov.org/ai-systems/openclaw/skills/" rel="noopener noreferrer"&gt;OpenClaw skills guide&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  OpenClaw Skill and Plugin Install for Developer Workflow
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Plugins — capabilities layer&lt;/span&gt;
openclaw plugins &lt;span class="nb"&gt;install &lt;/span&gt;memory-lancedb             &lt;span class="c"&gt;# persistent long-term memory with vector recall&lt;/span&gt;
openclaw plugins &lt;span class="nb"&gt;install &lt;/span&gt;lossless-claw              &lt;span class="c"&gt;# lossless context compression, preserves intent not tokens&lt;/span&gt;
openclaw plugins &lt;span class="nb"&gt;install &lt;/span&gt;openclaw-codex-app-server  &lt;span class="c"&gt;# code execution harness, resume, planning, and model selection&lt;/span&gt;
openclaw plugins &lt;span class="nb"&gt;install&lt;/span&gt; @opik/opik-openclaw        &lt;span class="c"&gt;# LLM observability: spans, tool calls, usage, and cost&lt;/span&gt;

&lt;span class="c"&gt;# Skills — behavior layer&lt;/span&gt;
openclaw skills &lt;span class="nb"&gt;install &lt;/span&gt;github                      &lt;span class="c"&gt;# PR, issue, CI status, and gh API workflows&lt;/span&gt;
openclaw skills &lt;span class="nb"&gt;install &lt;/span&gt;tmux                        &lt;span class="c"&gt;# persistent terminal sessions for long-running tasks&lt;/span&gt;
openclaw skills &lt;span class="nb"&gt;install &lt;/span&gt;session-logs                &lt;span class="c"&gt;# searchable agent session history across days&lt;/span&gt;
openclaw skills &lt;span class="nb"&gt;install &lt;/span&gt;model-usage                 &lt;span class="c"&gt;# per-model cost breakdown from session logs&lt;/span&gt;
openclaw skills &lt;span class="nb"&gt;install &lt;/span&gt;nano-pdf                    &lt;span class="c"&gt;# PDF editing, patching, and release note handling&lt;/span&gt;
openclaw skills &lt;span class="nb"&gt;install &lt;/span&gt;openclaw-token-optimizer    &lt;span class="c"&gt;# workspace-level token and cost hygiene&lt;/span&gt;
openclaw skills &lt;span class="nb"&gt;install &lt;/span&gt;openclaw-skill-vetter       &lt;span class="c"&gt;# pre-install review checklist before adding community skills&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  2. The Automation and Ops User
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Profile
&lt;/h3&gt;

&lt;p&gt;This user is not chatting. They are orchestrating.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;workflows&lt;/li&gt;
&lt;li&gt;triggers&lt;/li&gt;
&lt;li&gt;pipelines&lt;/li&gt;
&lt;li&gt;system integrations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For this profile, OpenClaw becomes part of infrastructure, not a UI. The system is expected to react to events and coordinate workflows across systems without human intervention at each step.&lt;/p&gt;

&lt;h3&gt;
  
  
  Core needs
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;deterministic execution&lt;/li&gt;
&lt;li&gt;external triggers&lt;/li&gt;
&lt;li&gt;reliability under failure&lt;/li&gt;
&lt;li&gt;integration with existing systems&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The focus shifts from intelligence to predictability. Automation workflows must be repeatable, externally triggered, and easy to integrate into existing infrastructure.&lt;/p&gt;




&lt;h3&gt;
  
  
  Typical OpenClaw Plugin Set
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;workflow and triggers&lt;br&gt;&lt;br&gt;
webhooks&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;tools&lt;br&gt;&lt;br&gt;
browser, firecrawl, exa&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;providers&lt;br&gt;&lt;br&gt;
openrouter or google for resilience&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;integrations&lt;br&gt;&lt;br&gt;
lightweight API wrappers, not monolithic plugins&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Why this helps
&lt;/h4&gt;

&lt;p&gt;Webhooks act as controlled entry points into the system, turning external events into structured execution.&lt;/p&gt;
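&lt;p&gt;The exact authentication scheme depends on how the webhooks plugin is configured, but the common shape is a shared-secret HMAC: the sender signs the payload, the receiver recomputes the signature and compares. A sketch with &lt;code&gt;openssl&lt;/code&gt; — the function names are illustrative, not part of any OpenClaw API:&lt;/p&gt;

```shell
# Shared-secret HMAC signing and verification for webhook payloads.
# Illustrative only -- the actual webhooks plugin may use a different scheme.
sign_payload() {
  payload="$1"; secret="$2"
  # hex HMAC-SHA256 digest of the raw payload
  printf '%s' "$payload" | openssl dgst -sha256 -hmac "$secret" | awk '{print $NF}'
}

verify_payload() {
  payload="$1"; secret="$2"; signature="$3"
  # succeeds only when the recomputed digest matches the claimed one
  [ "$(sign_payload "$payload" "$secret")" = "$signature" ]
}
```

&lt;p&gt;A sender attaches the hex digest as a header, for example &lt;code&gt;X-Signature&lt;/code&gt;, and the receiving route rejects any event whose recomputed digest does not match.&lt;/p&gt;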

&lt;p&gt;Search and scraping tools provide flexibility when APIs are unavailable or inconsistent. Exa and Firecrawl handle different retrieval patterns and are worth using together.&lt;/p&gt;

&lt;p&gt;Provider routing reduces dependency on a single model, improving resilience under failure conditions. Integrations are best handled through lightweight API wrappers rather than single all-in-one packages, which keeps failure surfaces small and debugging straightforward.&lt;/p&gt;

&lt;p&gt;The system stops being reactive chat and becomes a component in a larger automation pipeline.&lt;/p&gt;




&lt;h3&gt;
  
  
  Typical OpenClaw Skill Set
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Skill&lt;/th&gt;
&lt;th&gt;Link&lt;/th&gt;
&lt;th&gt;Why it helps&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;taskflow&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/openclaw/openclaw/blob/main/skills/taskflow/SKILL.md" rel="noopener noreferrer"&gt;bundled official skill&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Durable multi-step execution with one owner context across detached tasks. The right abstraction when work spans sessions.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;taskflow-inbox-triage&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/openclaw/openclaw/blob/main/skills/taskflow-inbox-triage/SKILL.md" rel="noopener noreferrer"&gt;bundled official skill&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Concrete pattern for routing inbound work by intent and urgency. Good fit for event-driven pipelines.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;tmux&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/steipete/tmux" rel="noopener noreferrer"&gt;clawhub.ai/steipete/tmux&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Necessary when detached tasks become long-running or require interactive shell sessions.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;session-logs&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/guogang1024/session-logs" rel="noopener noreferrer"&gt;clawhub.ai/guogang1024/session-logs&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Postmortems are easier when logs are first-class rather than an afterthought.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;blogwatcher&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/steipete/blogwatcher" rel="noopener noreferrer"&gt;clawhub.ai/steipete/blogwatcher&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Practical for monitoring release feeds, vendor blogs, and changelogs without loading a full scraping stack.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;github&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/steipete/github" rel="noopener noreferrer"&gt;clawhub.ai/steipete/github&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Incident and release work is often GitHub work. Keeps CI and issue workflows close to the operator.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h4&gt;
  
  
  Why this helps
&lt;/h4&gt;

&lt;p&gt;Automation without structure breaks quickly.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;taskflow introduces multi-step execution ownership across detached sessions&lt;/li&gt;
&lt;li&gt;inbox triage provides a repeatable pattern for routing work by intent and urgency&lt;/li&gt;
&lt;li&gt;tmux enables persistent execution contexts for long-running tasks&lt;/li&gt;
&lt;li&gt;session-logs support debugging, auditability, and postmortems&lt;/li&gt;
&lt;li&gt;blogwatcher handles passive monitoring without a full scraping stack&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Skills provide structure where plugins only provide access.&lt;/p&gt;




&lt;h3&gt;
  
  
  How plugins and skills together serve the automation user
&lt;/h3&gt;

&lt;p&gt;The plugin layer connects OpenClaw to the external world: webhooks bring in events, tools provide flexible data access, provider routing adds resilience.&lt;/p&gt;

&lt;p&gt;The skill layer gives that access structure: taskflow ensures multi-step work maintains ownership and context, triage patterns route incoming work predictably, and logs make failures diagnosable after the fact.&lt;/p&gt;

&lt;p&gt;An ops setup with webhooks but no taskflow skill has triggers but no consistent execution model. A taskflow-based system without provider routing has structure but a single point of failure.&lt;/p&gt;

&lt;p&gt;Together, they make OpenClaw a reliable component in a larger automation pipeline rather than a reactive chat interface.&lt;/p&gt;

&lt;h3&gt;
  
  
  OpenClaw Skill and Plugin Install for Automation and Ops
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Plugins — capabilities layer&lt;/span&gt;
openclaw plugins &lt;span class="nb"&gt;install &lt;/span&gt;webhooks    &lt;span class="c"&gt;# external event triggers over authenticated HTTP routes&lt;/span&gt;
openclaw plugins &lt;span class="nb"&gt;install &lt;/span&gt;browser     &lt;span class="c"&gt;# managed browser profile for dynamic page interaction&lt;/span&gt;
openclaw plugins &lt;span class="nb"&gt;install &lt;/span&gt;firecrawl   &lt;span class="c"&gt;# structured extraction from static and JS-heavy content&lt;/span&gt;
openclaw plugins &lt;span class="nb"&gt;install &lt;/span&gt;exa         &lt;span class="c"&gt;# hybrid search and extraction in one provider&lt;/span&gt;

&lt;span class="c"&gt;# Skills — behavior layer&lt;/span&gt;
&lt;span class="c"&gt;# taskflow and taskflow-inbox-triage are bundled — enable via agent config:&lt;/span&gt;
&lt;span class="c"&gt;# agents.defaults.skills: ["taskflow", "taskflow-inbox-triage"]&lt;/span&gt;

openclaw skills &lt;span class="nb"&gt;install &lt;/span&gt;tmux         &lt;span class="c"&gt;# persistent shell sessions for long-running detached tasks&lt;/span&gt;
openclaw skills &lt;span class="nb"&gt;install &lt;/span&gt;session-logs &lt;span class="c"&gt;# postmortem and audit trail for agent actions&lt;/span&gt;
openclaw skills &lt;span class="nb"&gt;install &lt;/span&gt;blogwatcher  &lt;span class="c"&gt;# monitor release feeds and vendor changelogs without a full scraper&lt;/span&gt;
openclaw skills &lt;span class="nb"&gt;install &lt;/span&gt;github       &lt;span class="c"&gt;# CI, incident, and release workflows from the agent surface&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
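&lt;p&gt;The bundled taskflow skills are switched on in agent configuration rather than installed. A minimal fragment of what that looks like, assuming a YAML config file — the exact file name and schema depend on your OpenClaw version:&lt;/p&gt;

```yaml
# Hypothetical config fragment -- adjust to your installation's actual schema.
agents:
  defaults:
    skills:
      - taskflow
      - taskflow-inbox-triage
```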






&lt;h2&gt;
  
  
  3. The Knowledge and Research User
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Profile
&lt;/h3&gt;

&lt;p&gt;This user builds knowledge over time.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;research&lt;/li&gt;
&lt;li&gt;synthesis&lt;/li&gt;
&lt;li&gt;documentation&lt;/li&gt;
&lt;li&gt;analysis&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The goal is not to execute tasks but to collect, organise, and reuse information across sessions and projects. The system must remember what it has learned and retrieve it accurately.&lt;/p&gt;

&lt;h3&gt;
  
  
  Core needs
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;persistent memory&lt;/li&gt;
&lt;li&gt;high quality retrieval&lt;/li&gt;
&lt;li&gt;traceability&lt;/li&gt;
&lt;li&gt;consistency&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Reliability in this context is less about speed and more about correctness and repeatability. The system should build on prior work rather than repeat the same research each session.&lt;/p&gt;




&lt;h3&gt;
  
  
  Typical OpenClaw Plugin Set
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;memory&lt;br&gt;&lt;br&gt;
memory lancedb, memory wiki&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;search&lt;br&gt;&lt;br&gt;
tavily, exa, firecrawl&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;providers&lt;br&gt;&lt;br&gt;
anthropic or google for large context windows&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Why this helps
&lt;/h4&gt;

&lt;p&gt;Memory plugins turn transient interactions into persistent knowledge. Lancedb provides vector-based retrieval, while wiki-style memory adds structure and traceability so users can verify where information came from.&lt;/p&gt;

&lt;p&gt;Search tools improve input quality, which directly impacts output quality. Tavily and Exa provide different retrieval characteristics and are worth using together for research coverage.&lt;/p&gt;

&lt;p&gt;Larger context providers like Anthropic or Google are relevant here because synthesis often requires holding more source material at once than a standard context window allows.&lt;/p&gt;

&lt;p&gt;Without strong memory plugins, research becomes repetitive regardless of how well the skills are configured.&lt;/p&gt;




&lt;h3&gt;
  
  
  Typical OpenClaw Skill Set
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Skill&lt;/th&gt;
&lt;th&gt;Link&lt;/th&gt;
&lt;th&gt;Why it helps&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;multi-search-engine&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/gpyangyoujun/multi-search-engine" rel="noopener noreferrer"&gt;clawhub.ai/gpyangyoujun/multi-search-engine&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Cross-engine query aggregation with useful operators and time filters. 566 stars, 121k downloads.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;agent-browser&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/matrixy/agent-browser-clawdbot" rel="noopener noreferrer"&gt;clawhub.ai/matrixy/agent-browser-clawdbot&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Controlled interaction with dynamic pages. Better fit for research than random scraping wrappers. 323 stars, 90.2k downloads.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;blogwatcher&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/steipete/blogwatcher" rel="noopener noreferrer"&gt;clawhub.ai/steipete/blogwatcher&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Keeps a research corpus fresh through RSS and blog feeds instead of repeated manual browsing. 57 stars, 34.9k downloads.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;nano-pdf&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/steipete/nano-pdf" rel="noopener noreferrer"&gt;clawhub.ai/steipete/nano-pdf&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;PDF edits, redlines, or document cleanup without switching to a separate tool. 220 stars, 91.5k downloads.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;openai-whisper&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/steipete/openai-whisper" rel="noopener noreferrer"&gt;clawhub.ai/steipete/openai-whisper&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Local speech-to-text for interview recordings, meeting audio, and field notes. 274 stars, 70.1k downloads.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;notion&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/steipete/notion" rel="noopener noreferrer"&gt;clawhub.ai/steipete/notion&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Structured team knowledge base for pages and databases. Review secret handling before install. 230 stars, 77.4k downloads.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;obsidian&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/steipete/obsidian" rel="noopener noreferrer"&gt;clawhub.ai/steipete/obsidian&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Local markdown vault automation for personal knowledge management. High value, review install source. 333 stars, 82.5k downloads.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h4&gt;
  
  
  Why this helps
&lt;/h4&gt;

&lt;p&gt;Skills define how research actually happens.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;multi-search-engine improves discovery quality across sources simultaneously&lt;/li&gt;
&lt;li&gt;agent-browser enables controlled interaction with real web content&lt;/li&gt;
&lt;li&gt;blogwatcher maintains fresh information streams automatically&lt;/li&gt;
&lt;li&gt;pdf and whisper handle real-world data formats that arrive outside clean APIs&lt;/li&gt;
&lt;li&gt;notion and obsidian structure outputs into persistent, queryable knowledge systems&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The system evolves from a query engine into a knowledge engine.&lt;/p&gt;




&lt;h3&gt;
  
  
  How plugins and skills together serve the research user
&lt;/h3&gt;

&lt;p&gt;The plugin layer ensures the system remembers and retrieves reliably: lancedb builds a persistent vector store, wiki memory adds provenance, and search plugins expand the input surface.&lt;/p&gt;

&lt;p&gt;The skill layer determines how research actually flows: multi-search drives discovery, agent-browser handles dynamic sources, blogwatcher maintains ongoing monitoring, and note-taking skills capture outputs into usable formats.&lt;/p&gt;

&lt;p&gt;Without the memory plugin layer, even excellent skills produce knowledge that evaporates after each session. Without the skill layer, even a well-configured memory system sits idle because there is no structured process to feed it.&lt;/p&gt;

&lt;p&gt;See the &lt;a href="https://www.glukhov.org/ai-systems/openclaw/plugins/" rel="noopener noreferrer"&gt;OpenClaw plugins guide&lt;/a&gt; for memory plugin selection and configuration details.&lt;/p&gt;

&lt;h3&gt;
  
  
  OpenClaw Skill and Plugin Install for Knowledge and Research
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Plugins — capabilities layer&lt;/span&gt;
openclaw plugins &lt;span class="nb"&gt;install &lt;/span&gt;memory-lancedb   &lt;span class="c"&gt;# persistent vector memory with auto-recall and capture&lt;/span&gt;
openclaw plugins &lt;span class="nb"&gt;install &lt;/span&gt;memory-wiki      &lt;span class="c"&gt;# structured wiki layer with provenance and contradiction tracking&lt;/span&gt;
openclaw plugins &lt;span class="nb"&gt;install &lt;/span&gt;tavily           &lt;span class="c"&gt;# LLM-optimized structured search and extraction&lt;/span&gt;
openclaw plugins &lt;span class="nb"&gt;install &lt;/span&gt;exa              &lt;span class="c"&gt;# hybrid search modes plus extraction in one provider&lt;/span&gt;
openclaw plugins &lt;span class="nb"&gt;install &lt;/span&gt;firecrawl        &lt;span class="c"&gt;# web_search provider and fallback fetch for JS-heavy pages&lt;/span&gt;

&lt;span class="c"&gt;# Skills — behavior layer&lt;/span&gt;
openclaw skills &lt;span class="nb"&gt;install &lt;/span&gt;multi-search-engine    &lt;span class="c"&gt;# 16-engine aggregation with operators and time filters&lt;/span&gt;
openclaw skills &lt;span class="nb"&gt;install &lt;/span&gt;agent-browser-clawdbot &lt;span class="c"&gt;# controlled browser interaction for dynamic pages&lt;/span&gt;
openclaw skills &lt;span class="nb"&gt;install &lt;/span&gt;blogwatcher            &lt;span class="c"&gt;# RSS and blog feed monitoring to keep corpus fresh&lt;/span&gt;
openclaw skills &lt;span class="nb"&gt;install &lt;/span&gt;nano-pdf               &lt;span class="c"&gt;# PDF editing, redlines, and document cleanup&lt;/span&gt;
openclaw skills &lt;span class="nb"&gt;install &lt;/span&gt;openai-whisper         &lt;span class="c"&gt;# local speech-to-text for recordings and meeting audio&lt;/span&gt;
openclaw skills &lt;span class="nb"&gt;install &lt;/span&gt;notion                 &lt;span class="c"&gt;# structured team knowledge base (review secret handling first)&lt;/span&gt;
&lt;span class="c"&gt;# openclaw skills install obsidian             # local markdown vault — review install source before enabling&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  4. The Customer Support and Communication User
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Profile
&lt;/h3&gt;

&lt;p&gt;This user operates across communication channels.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;customer support&lt;/li&gt;
&lt;li&gt;internal communication&lt;/li&gt;
&lt;li&gt;ticket handling&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The challenge is not generating answers but maintaining context across conversations and platforms.&lt;/p&gt;

&lt;h3&gt;
  
  
  Core needs
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;context continuity across conversations&lt;/li&gt;
&lt;li&gt;multi-channel integration&lt;/li&gt;
&lt;li&gt;fast response generation&lt;/li&gt;
&lt;li&gt;auditability&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Typical OpenClaw Plugin Set
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;communication channels&lt;br&gt;&lt;br&gt;
msteams, matrix, wecom, discourse&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;memory&lt;br&gt;&lt;br&gt;
memory-lancedb&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;tools&lt;br&gt;&lt;br&gt;
browser&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Why this helps
&lt;/h4&gt;

&lt;p&gt;Channel plugins embed OpenClaw into existing workflows instead of requiring users to switch environments. Where communication happens determines which plugins matter most.&lt;/p&gt;

&lt;p&gt;Memory ensures conversations do not reset between sessions, which is essential for support scenarios where context accumulates over time. A support system without persistent memory forces operators to re-establish context on every interaction.&lt;/p&gt;

&lt;p&gt;Browser access allows the system to retrieve up-to-date information without relying on static integrations — useful when product documentation or policies change frequently.&lt;/p&gt;




&lt;h3&gt;
  
  
  Typical OpenClaw Skill Set
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Skill&lt;/th&gt;
&lt;th&gt;Link&lt;/th&gt;
&lt;th&gt;Why it helps&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;himalaya&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/lamelas/himalaya" rel="noopener noreferrer"&gt;clawhub.ai/lamelas/himalaya&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Terminal email with triage, reply, forward, search, and organization. One of the cleaner communication skills in the ecosystem. 62 stars, 38.3k downloads.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;slack&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/steipete/slack" rel="noopener noreferrer"&gt;clawhub.ai/steipete/slack&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Useful when support work lives in Slack. Review undeclared token assumptions before install. 117 stars, 39.1k downloads.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;session-logs&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/guogang1024/session-logs" rel="noopener noreferrer"&gt;clawhub.ai/guogang1024/session-logs&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Critical for reconstructing prior support interactions and agent decisions. 22 stars, 30.9k downloads.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;nano-pdf&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/steipete/nano-pdf" rel="noopener noreferrer"&gt;clawhub.ai/steipete/nano-pdf&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Essential when customers send forms, guides, or documents needing quick cleanup or annotation.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;openai-whisper&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/steipete/openai-whisper" rel="noopener noreferrer"&gt;clawhub.ai/steipete/openai-whisper&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Local speech-to-text for voicemail, support calls, or short media handoffs.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;taskflow-inbox-triage&lt;/td&gt;
&lt;td&gt;&lt;a href="https://github.com/openclaw/openclaw/blob/main/skills/taskflow-inbox-triage/SKILL.md" rel="noopener noreferrer"&gt;bundled official skill&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Workflow pattern for immediate reply, delayed follow-up, and later summary queues.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;notion&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/steipete/notion" rel="noopener noreferrer"&gt;clawhub.ai/steipete/notion&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Triage notes, FAQ capture, and evolving support playbooks. Fix secret handling before use.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h4&gt;
  
  
  Why this helps
&lt;/h4&gt;

&lt;p&gt;Support workflows are repetitive, structured, and high-stakes.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;himalaya and slack enable direct interaction across the channels where support happens&lt;/li&gt;
&lt;li&gt;session-logs provide the audit trail for prior interactions and agent decisions&lt;/li&gt;
&lt;li&gt;inbox triage structures incoming requests into actionable queues&lt;/li&gt;
&lt;li&gt;whisper and pdf handle real customer inputs that arrive in non-text formats&lt;/li&gt;
&lt;li&gt;notion captures evolving support knowledge into reusable playbooks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Skills reduce cognitive load and standardize response patterns.&lt;/p&gt;




&lt;h3&gt;
  
  
  How plugins and skills together serve the support user
&lt;/h3&gt;

&lt;p&gt;The plugin layer connects OpenClaw to the channels where support actually happens: msteams, matrix, or discourse for channel presence, lancedb for context persistence, browser for live information retrieval.&lt;/p&gt;

&lt;p&gt;The skill layer structures how each interaction is handled: himalaya and slack bring communication directly to the agent surface, inbox triage routes work by urgency, session-logs maintain the audit trail, and notion captures institutional knowledge.&lt;/p&gt;

&lt;p&gt;Support operators touch more customer data than most other roles. That makes the combination of narrow skill sets, per-agent allowlists, and strong auditability especially important. The stack should be smaller than a research stack by design.&lt;/p&gt;

&lt;p&gt;See the &lt;a href="https://www.glukhov.org/ai-systems/openclaw/skills/" rel="noopener noreferrer"&gt;OpenClaw skills guide&lt;/a&gt; for security guidance on communication skills and per-agent allowlist configuration.&lt;/p&gt;

&lt;h3&gt;
  
  
  OpenClaw Skill and Plugin Install for Customer Support and Communication
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Plugins — capabilities layer&lt;/span&gt;
&lt;span class="c"&gt;# Choose the channel plugin that matches your platform:&lt;/span&gt;
openclaw plugins &lt;span class="nb"&gt;install &lt;/span&gt;msteams   &lt;span class="c"&gt;# Microsoft Teams: Azure Bot, tenant credentials, group chat policies&lt;/span&gt;
&lt;span class="c"&gt;# openclaw plugins install matrix  # Matrix: DMs, rooms, threads, media, E2EE&lt;/span&gt;
&lt;span class="c"&gt;# openclaw plugins install wecom   # WeCom: direct messages, group chats, Bot and Agent modes&lt;/span&gt;

openclaw plugins &lt;span class="nb"&gt;install &lt;/span&gt;memory-lancedb   &lt;span class="c"&gt;# persistent conversation context across sessions&lt;/span&gt;
openclaw plugins &lt;span class="nb"&gt;install &lt;/span&gt;browser          &lt;span class="c"&gt;# live information retrieval when docs or policies change&lt;/span&gt;

&lt;span class="c"&gt;# Skills — behavior layer&lt;/span&gt;
&lt;span class="c"&gt;# taskflow-inbox-triage is bundled — enable per agent via config:&lt;/span&gt;
&lt;span class="c"&gt;# agents.list[].skills: ["taskflow-inbox-triage", "himalaya", "session-logs"]&lt;/span&gt;

openclaw skills &lt;span class="nb"&gt;install &lt;/span&gt;himalaya       &lt;span class="c"&gt;# terminal email with triage, reply, forward, and search&lt;/span&gt;
openclaw skills &lt;span class="nb"&gt;install &lt;/span&gt;session-logs   &lt;span class="c"&gt;# audit trail for prior interactions and agent decisions&lt;/span&gt;
openclaw skills &lt;span class="nb"&gt;install &lt;/span&gt;nano-pdf       &lt;span class="c"&gt;# handle forms, guides, and documents from customers&lt;/span&gt;
openclaw skills &lt;span class="nb"&gt;install &lt;/span&gt;openai-whisper &lt;span class="c"&gt;# local speech-to-text for voicemail and support calls&lt;/span&gt;
&lt;span class="c"&gt;# openclaw skills install notion       # triage notes and support playbooks (review secret handling first)&lt;/span&gt;
&lt;span class="c"&gt;# openclaw skills install slack        # Slack channel integration (review token assumptions before enabling)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  5. The Growth and Lead Generation User
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Profile
&lt;/h3&gt;

&lt;p&gt;This user builds pipelines.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;lead discovery&lt;/li&gt;
&lt;li&gt;enrichment&lt;/li&gt;
&lt;li&gt;outreach preparation&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Core needs
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;data collection from public sources&lt;/li&gt;
&lt;li&gt;enrichment and signal extraction&lt;/li&gt;
&lt;li&gt;integration with CRM systems&lt;/li&gt;
&lt;li&gt;repeatability across campaigns&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Typical OpenClaw Plugin Set
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;tools&lt;br&gt;&lt;br&gt;
browser, firecrawl&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;workflow&lt;br&gt;&lt;br&gt;
webhooks&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;integrations&lt;br&gt;&lt;br&gt;
CRM APIs or early-stage connector plugins&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;providers&lt;br&gt;&lt;br&gt;
openrouter for cost-efficient routing&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Why this helps
&lt;/h4&gt;

&lt;p&gt;Browser and firecrawl handle different source types and are worth using together — browser for dynamic interactive pages, firecrawl for structured extraction from static content.&lt;/p&gt;

&lt;p&gt;Webhooks push enriched results into downstream systems such as CRMs or analytics pipelines. Provider routing through openrouter keeps costs predictable when running repeated enrichment passes over large datasets.&lt;/p&gt;

&lt;p&gt;Many growth-focused plugins still show maturity gaps in the ecosystem. Treat them as processing layers rather than systems of record, and verify stability before relying on them in production pipelines.&lt;/p&gt;




&lt;h3&gt;
  
  
  Typical OpenClaw Skill Set
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Skill&lt;/th&gt;
&lt;th&gt;Link&lt;/th&gt;
&lt;th&gt;Why it helps&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;xurl&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/gaurangzalariya/xurl" rel="noopener noreferrer"&gt;clawhub.ai/gaurangzalariya/xurl&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Converts public X content into pain points, messaging angles, and lead themes without a heavy API-driven setup. 7 stars, 10.2k downloads.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;multi-search-engine&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/gpyangyoujun/multi-search-engine" rel="noopener noreferrer"&gt;clawhub.ai/gpyangyoujun/multi-search-engine&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Broad prospect and market discovery when one engine never tells the full story. 566 stars, 121k downloads.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;agent-browser&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/matrixy/agent-browser-clawdbot" rel="noopener noreferrer"&gt;clawhub.ai/matrixy/agent-browser-clawdbot&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Controlled interaction with dynamic prospect pages, forms, or dashboards. 323 stars, 90.2k downloads.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;blogwatcher&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/steipete/blogwatcher" rel="noopener noreferrer"&gt;clawhub.ai/steipete/blogwatcher&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Monitors competitor posts, launch feeds, and niche sites for ongoing market signals. 57 stars, 34.9k downloads.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;notion&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/steipete/notion" rel="noopener noreferrer"&gt;clawhub.ai/steipete/notion&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Turns captured signals into structured campaign or pipeline notes. Review secret handling before use.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;openai-whisper&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/steipete/openai-whisper" rel="noopener noreferrer"&gt;clawhub.ai/steipete/openai-whisper&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Handy for call snippets, voice notes, and quick post-meeting debrief capture.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;slack&lt;/td&gt;
&lt;td&gt;&lt;a href="https://clawhub.ai/steipete/slack" rel="noopener noreferrer"&gt;clawhub.ai/steipete/slack&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Useful for sharing SDR notes and campaign updates. Review token scope before enabling.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h4&gt;
  
  
  Why this helps
&lt;/h4&gt;

&lt;p&gt;Growth workflows rely on signal extraction from public sources.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;xurl extracts themes and pain points from social content without a heavy API setup&lt;/li&gt;
&lt;li&gt;multi-search and agent-browser provide broad and deep discovery across sources&lt;/li&gt;
&lt;li&gt;blogwatcher tracks ongoing market signals and competitor activity&lt;/li&gt;
&lt;li&gt;notion structures raw signal into actionable pipeline assets&lt;/li&gt;
&lt;li&gt;whisper captures voice-based research inputs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Skills transform scattered public data into repeatable outreach inputs.&lt;/p&gt;




&lt;h3&gt;
  
  
  How plugins and skills together serve the growth user
&lt;/h3&gt;

&lt;p&gt;The plugin layer provides data infrastructure: browser and firecrawl gather raw web data, webhooks push enriched results downstream, and openrouter manages cost across repeated enrichment runs.&lt;/p&gt;

&lt;p&gt;The skill layer extracts signal and structures it: xurl surfaces social themes, multi-search broadens discovery coverage, blogwatcher maintains continuous monitoring, and notion converts raw captures into organized pipeline assets.&lt;/p&gt;

&lt;p&gt;Growth setups have a natural tendency toward over-engineering. The most stable configurations stick to public-facing data sources and avoid installing every scraping wrapper that promises infinite automation. A focused stack with clear data flow is more durable than an ambitious one that requires constant maintenance.&lt;/p&gt;

&lt;h3&gt;
  
  
  OpenClaw Skill and Plugin Install for Growth and Lead Generation
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Plugins — capabilities layer&lt;/span&gt;
openclaw plugins &lt;span class="nb"&gt;install &lt;/span&gt;browser     &lt;span class="c"&gt;# dynamic page interaction for prospect research and forms&lt;/span&gt;
openclaw plugins &lt;span class="nb"&gt;install &lt;/span&gt;firecrawl   &lt;span class="c"&gt;# structured content extraction from static sources&lt;/span&gt;
openclaw plugins &lt;span class="nb"&gt;install &lt;/span&gt;webhooks    &lt;span class="c"&gt;# push enriched results to CRM and analytics downstream&lt;/span&gt;

&lt;span class="c"&gt;# Skills — behavior layer&lt;/span&gt;
openclaw skills &lt;span class="nb"&gt;install &lt;/span&gt;xurl                   &lt;span class="c"&gt;# extract pain points and messaging angles from public X content&lt;/span&gt;
openclaw skills &lt;span class="nb"&gt;install &lt;/span&gt;multi-search-engine    &lt;span class="c"&gt;# multi-engine prospect and market discovery&lt;/span&gt;
openclaw skills &lt;span class="nb"&gt;install &lt;/span&gt;agent-browser-clawdbot &lt;span class="c"&gt;# controlled interaction with dynamic pages and dashboards&lt;/span&gt;
openclaw skills &lt;span class="nb"&gt;install &lt;/span&gt;blogwatcher            &lt;span class="c"&gt;# monitor competitor posts, launch feeds, and niche sites&lt;/span&gt;
openclaw skills &lt;span class="nb"&gt;install &lt;/span&gt;notion                 &lt;span class="c"&gt;# structure captured signals into campaign pipeline notes (review secret handling first)&lt;/span&gt;
openclaw skills &lt;span class="nb"&gt;install &lt;/span&gt;openai-whisper         &lt;span class="c"&gt;# capture call snippets and voice debrief notes locally&lt;/span&gt;
&lt;span class="c"&gt;# openclaw skills install slack                # share SDR notes and updates (review token scope before enabling)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Cross-cutting production patterns
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Separation of responsibilities
&lt;/h3&gt;

&lt;p&gt;Plugins and skills should not overlap.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;plugins provide capabilities
&lt;/li&gt;
&lt;li&gt;skills define behavior
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Mixing them leads to unpredictable systems where failures are difficult to attribute. When something breaks, you should be able to say immediately whether it is a capability problem or a behavior problem.&lt;/p&gt;




&lt;h3&gt;
  
  
  Start from user intent, not feature lists
&lt;/h3&gt;

&lt;p&gt;Configuration should emerge from what a user actually does, not from what looks impressive.&lt;/p&gt;

&lt;p&gt;Two systems with identical plugins can behave completely differently depending on which skills are loaded and for which agent roles. The skill layer is the real interface.&lt;/p&gt;




&lt;h3&gt;
  
  
  Minimalism wins
&lt;/h3&gt;

&lt;p&gt;More plugins do not mean better systems.&lt;/p&gt;

&lt;p&gt;Production setups converge toward:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;fewer components&lt;/li&gt;
&lt;li&gt;clearer ownership&lt;/li&gt;
&lt;li&gt;predictable behavior&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Adding a component should require justifying what breaks if it is removed. The most effective setups are not the most complex ones.&lt;/p&gt;




&lt;h3&gt;
  
  
  Observability is not optional
&lt;/h3&gt;

&lt;p&gt;Without logs and visibility:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;failures are silent&lt;/li&gt;
&lt;li&gt;debugging is slow&lt;/li&gt;
&lt;li&gt;trust erodes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The session-logs skill and observability plugins like opik-openclaw are cheap insurance against invisible failures. They belong in every production setup regardless of user type.&lt;/p&gt;
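
&lt;p&gt;As a minimal observability baseline, a sketch of the relevant commands: the session-logs slug appears in the stacks above, while the observability plugin slug is an assumption to verify against the plugin registry before installing.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Audit trail for prior interactions and agent decisions
openclaw skills install session-logs

# Observability plugin (slug is an assumption; verify before installing):
# openclaw plugins install opik-openclaw
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;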




&lt;h3&gt;
  
  
  Per-agent allowlists matter
&lt;/h3&gt;

&lt;p&gt;OpenClaw's &lt;code&gt;agents.list[].skills&lt;/code&gt; configuration replaces inherited defaults entirely for a given agent role.&lt;/p&gt;

&lt;p&gt;That is the right tool for high-consequence roles like support or finance operators where a narrow, explicit skill set is safer than a broad inherited one.&lt;/p&gt;
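
&lt;p&gt;A minimal sketch of what that configuration might look like, assuming the &lt;code&gt;agents.list[].skills&lt;/code&gt; key sits in the main OpenClaw config file; the surrounding key names and agent ids are illustrative, not confirmed:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;agents:
  list:
    - id: support            # illustrative agent id
      # Explicit allowlist: replaces inherited defaults entirely for this agent
      skills:
        - taskflow-inbox-triage
        - himalaya
        - session-logs
    - id: research           # illustrative agent id
      skills:
        - multi-search-engine
        - blogwatcher
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;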




&lt;h3&gt;
  
  
  Third-party components need review
&lt;/h3&gt;

&lt;p&gt;Skills from ClawHub should be inspected before install.&lt;/p&gt;

&lt;p&gt;Run &lt;code&gt;clawhub inspect &amp;lt;slug&amp;gt;&lt;/code&gt; to check scan results, declared binaries, and credential use before enabling any community skill in production. Instruction-only skills are safer than code-bearing ones. Bundled official skills are the safest starting point.&lt;/p&gt;
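
&lt;p&gt;A review pass before enabling a community skill might look like this; the inspected slug is one of the skills listed above, and the overall shape is a sketch rather than the tool's exact output:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Inspect first: scan results, declared binaries, credential use
clawhub inspect steipete/notion

# Install only after review, then scope it per agent via agents.list[].skills
openclaw skills install notion
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;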

&lt;p&gt;The &lt;a href="https://www.glukhov.org/ai-systems/openclaw/skills/" rel="noopener noreferrer"&gt;OpenClaw skills guide&lt;/a&gt; covers the full review workflow and security checklist.&lt;/p&gt;




&lt;h2&gt;
  
  
  Final thoughts
&lt;/h2&gt;

&lt;p&gt;OpenClaw production systems are not built by installing everything available.&lt;/p&gt;

&lt;p&gt;They are shaped by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;user intent&lt;/li&gt;
&lt;li&gt;workflow structure&lt;/li&gt;
&lt;li&gt;clear separation between capability and behavior&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Plugins make the system powerful.&lt;br&gt;&lt;br&gt;
Skills make it usable.&lt;/p&gt;

&lt;p&gt;The most effective setups are the ones where every component has a clear reason to exist, and every user type has both the capabilities and the structured behaviors needed to do their actual work.&lt;/p&gt;

&lt;p&gt;For next steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.glukhov.org/ai-systems/openclaw/plugins/" rel="noopener noreferrer"&gt;OpenClaw plugins guide&lt;/a&gt; — plugin lifecycle, ecosystem picks, and safety rails&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.glukhov.org/ai-systems/openclaw/skills/" rel="noopener noreferrer"&gt;OpenClaw skills guide&lt;/a&gt; — ClawHub discovery, per-role stacks, and security tradeoffs&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.glukhov.org/ai-systems/openclaw/quickstart/" rel="noopener noreferrer"&gt;OpenClaw quickstart&lt;/a&gt; — installation with Docker&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>openclaw</category>
      <category>architecture</category>
      <category>selfhosting</category>
      <category>llm</category>
    </item>
    <item>
      <title>Hermes AI Assistant Skills for Real Production Setups</title>
      <dc:creator>Rost</dc:creator>
      <pubDate>Mon, 20 Apr 2026 09:50:09 +0000</pubDate>
      <link>https://forem.com/rosgluk/hermes-ai-assistant-skills-for-real-production-setups-f5f</link>
      <guid>https://forem.com/rosgluk/hermes-ai-assistant-skills-for-real-production-setups-f5f</guid>
      <description>&lt;p&gt;Hermes AI assistant, officially documented as Hermes Agent, is not positioned as a simple chat wrapper.&lt;/p&gt;

&lt;p&gt;For installation, provider setup, tool sandboxing, and gateway configuration, see the &lt;a href="https://www.glukhov.org/ai-systems/hermes/" rel="noopener noreferrer"&gt;Hermes AI Assistant guide&lt;/a&gt;. This article focuses on the skills and profile architecture that determines how Hermes behaves once it is running.&lt;/p&gt;

&lt;p&gt;The official docs and repository describe a self-improving agent with a built-in learning loop that creates skills from experience, improves them during use, persists knowledge across sessions, and runs on anything from a low-cost VPS to cloud sandboxes.&lt;/p&gt;

&lt;p&gt;As of April 2026, the public GitHub repository shows about 94.6k stars, 13.2k forks, and a latest release, v0.10.0, tagged on April 16, 2026. That is enough activity to call the project fast-moving, well-adopted, and still operationally young at the same time.&lt;/p&gt;

&lt;p&gt;That dual nature matters for production design. Hermes is mature enough to support real work, but dynamic enough that a messy setup will age badly. The article below treats configuration and skills as an operational architecture question, not as a feature checklist.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Hermes needs a profile-first architecture
&lt;/h2&gt;

&lt;p&gt;Hermes skills are on-demand knowledge documents. They use progressive disclosure so the agent can see a compact skill index first and only load full skill content when needed, which keeps token use under control even when many skills are installed. Every installed skill becomes a slash command in the CLI and in messaging surfaces, and the docs explicitly position skills as the preferred extension mechanism when a capability can be expressed with instructions, shell commands, and existing tools rather than custom agent code.&lt;/p&gt;

&lt;p&gt;The production complication is that Hermes treats skills as living state, not frozen packages. Bundled skills, hub-installed skills, and agent-created skills all live under &lt;code&gt;~/.hermes/skills/&lt;/code&gt;, and the docs state that the agent can modify or delete skills. The same system exposes create, patch, edit, delete, and supporting-file actions for skill management. That is powerful, but it also means one oversized "do everything" agent tends to become a procedural junk drawer.&lt;/p&gt;

&lt;p&gt;Profiles are the answer. Hermes profiles are fully isolated environments, each with its own &lt;code&gt;config.yaml&lt;/code&gt;, &lt;code&gt;.env&lt;/code&gt;, &lt;code&gt;SOUL.md&lt;/code&gt;, memories, sessions, skills, cron jobs, and state database. The CLI also turns a profile into its own command alias, so a profile called &lt;code&gt;coder&lt;/code&gt; becomes &lt;code&gt;coder chat&lt;/code&gt;, &lt;code&gt;coder setup&lt;/code&gt;, &lt;code&gt;coder gateway start&lt;/code&gt;, and so on. In practice, that makes profiles the real unit of production ownership, not the individual skill.&lt;/p&gt;
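
&lt;p&gt;In shell terms, the alias behavior described above looks roughly like this; the profile-creation subcommand is an assumption, so check the CLI help for the exact form:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Create an isolated profile (subcommand name is an assumption)
hermes profile create coder

# Per the docs, the profile name becomes its own command alias:
coder chat            # chat inside the coder profile's isolated state
coder setup           # configure only this profile
coder gateway start   # run the messaging gateway for this profile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;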

&lt;h2&gt;
  
  
  The production baseline
&lt;/h2&gt;

&lt;p&gt;The baseline shape is surprisingly clean. Hermes stores non-secret behavior in &lt;code&gt;~/.hermes/config.yaml&lt;/code&gt;, secrets in &lt;code&gt;~/.hermes/.env&lt;/code&gt;, identity in &lt;code&gt;SOUL.md&lt;/code&gt;, persistent facts in &lt;code&gt;memories/&lt;/code&gt;, procedural knowledge in &lt;code&gt;skills/&lt;/code&gt;, scheduled jobs in &lt;code&gt;cron/&lt;/code&gt;, sessions in &lt;code&gt;sessions/&lt;/code&gt;, and logs in &lt;code&gt;logs/&lt;/code&gt;. The &lt;code&gt;hermes config set&lt;/code&gt; command routes API keys into &lt;code&gt;.env&lt;/code&gt; and everything else into &lt;code&gt;config.yaml&lt;/code&gt;, and the documented precedence order is CLI flags first, then &lt;code&gt;config.yaml&lt;/code&gt;, then &lt;code&gt;.env&lt;/code&gt;, then built-in defaults. That is also the cleanest answer to the production FAQ about how secrets and config should be split.&lt;/p&gt;
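
&lt;p&gt;A hedged sketch of that routing and precedence; the key and value names below are illustrative placeholders, not documented settings:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# API keys are routed into ~/.hermes/.env
hermes config set OPENROUTER_API_KEY sk-or-placeholder

# Non-secret behavior lands in ~/.hermes/config.yaml
hermes config set model.default some-model-id

# Precedence when a setting appears in several places:
#   CLI flags, then config.yaml, then .env, then built-in defaults
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;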

&lt;p&gt;A practical multi-profile layout usually ends up looking something like this, with one profile per responsibility rather than one profile per human:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/.hermes/profiles/
  eng/
  research/
  ops/
  execops/
  ml/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That pattern matches how Hermes profiles are documented: each profile is its own isolated environment, and profiles can be cloned from a base configuration when common defaults are useful. The docs also note that profiles do not share memory or sessions, and that updated skills can be synced across profiles when the main installation is updated.&lt;/p&gt;

&lt;p&gt;The next production boundary is execution. Hermes supports six terminal backends (local, Docker, SSH, Modal, Daytona, and Singularity), and the security docs describe a defense-in-depth model that includes dangerous command approval, container isolation, MCP credential filtering, context file scanning, cross-session isolation, and input sanitization. In other words, the "profile first" decision answers who owns state, and the backend decision answers where risky work is allowed to happen.&lt;/p&gt;

&lt;p&gt;Automation sits on top of that baseline. Hermes cron jobs can attach zero, one, or multiple skills, and they run in fresh agent sessions rather than inheriting the current chat. The messaging gateway is also the background process that manages sessions, runs cron, and routes results back to platforms like Telegram, Discord, Slack, WhatsApp, Email, Matrix, and others. The official MCP guide adds one more production rule that is easy to overlook: the best pattern is not to connect everything, but to expose the smallest useful surface.&lt;/p&gt;

&lt;h2&gt;
  
  
  The software engineering profile
&lt;/h2&gt;

&lt;p&gt;The most obvious Hermes persona is the software engineer who wants the agent to behave less like a chat window and more like a repeatable repo operator. This profile usually cares about repository auth, issue triage, PR creation, code review, debugging, and plan-backed execution. In the Hermes catalogs, the core built-in skill pack is unusually coherent for that job: &lt;code&gt;github-auth&lt;/code&gt;, &lt;code&gt;github-issues&lt;/code&gt;, &lt;code&gt;github-pr-workflow&lt;/code&gt;, &lt;code&gt;github-code-review&lt;/code&gt;, &lt;code&gt;code-review&lt;/code&gt;, &lt;code&gt;plan&lt;/code&gt;, &lt;code&gt;writing-plans&lt;/code&gt;, &lt;code&gt;systematic-debugging&lt;/code&gt;, and &lt;code&gt;test-driven-development&lt;/code&gt;. If delegation matters, Hermes also ships built-in autonomous agent skills such as &lt;code&gt;codex&lt;/code&gt;, &lt;code&gt;claude-code&lt;/code&gt;, &lt;code&gt;opencode&lt;/code&gt;, and &lt;code&gt;hermes-agent-spawning&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;What makes that pack useful is not any single skill. It is the way the skills encode development procedure. &lt;code&gt;github-pr-workflow&lt;/code&gt; covers the full PR lifecycle, &lt;code&gt;github-issues&lt;/code&gt; formalizes issue operations, &lt;code&gt;github-code-review&lt;/code&gt; and &lt;code&gt;code-review&lt;/code&gt; make review a distinct step instead of an afterthought, and &lt;code&gt;systematic-debugging&lt;/code&gt; keeps the agent from jumping straight to premature fixes. That also answers the practical question of which AI assistant skills matter most for coding workflows. The highest-value skills are usually the ones that lock in repo hygiene and review discipline, not the ones that promise more raw code generation.&lt;/p&gt;

&lt;p&gt;Hermes delegation strengthens this profile further. The platform can spawn isolated child agents with their own conversation, terminal session, and toolset, and only the final summary is returned to the parent. For codebases, that is a cleaner fit than stuffing every intermediate diff, stack trace, and review note into one conversation. In production terms, the engineering profile benefits from narrow skill sets, a sandboxed backend such as Docker or SSH, and generous use of delegation when context noise starts to dominate.&lt;/p&gt;

&lt;h2&gt;
  
  
  The research and knowledge profile
&lt;/h2&gt;

&lt;p&gt;The research profile is where Hermes starts to feel distinct from ordinary assistants. The built-in catalogs already include &lt;code&gt;arxiv&lt;/code&gt;, &lt;code&gt;duckduckgo-search&lt;/code&gt;, &lt;code&gt;blogwatcher&lt;/code&gt;, &lt;code&gt;llm-wiki&lt;/code&gt;, &lt;code&gt;ocr-and-documents&lt;/code&gt;, &lt;code&gt;obsidian&lt;/code&gt;, &lt;code&gt;domain-intel&lt;/code&gt;, and &lt;code&gt;ml-paper-writing&lt;/code&gt;, while the official optional catalog adds &lt;code&gt;qmd&lt;/code&gt;, &lt;code&gt;parallel-cli&lt;/code&gt;, &lt;code&gt;scrapling&lt;/code&gt;, and a broader research tier for specialized domains. That stack covers paper search, source monitoring, OCR, local note systems, domain reconnaissance, writing, and hybrid retrieval without forcing everything into a single RAG pattern.&lt;/p&gt;

&lt;p&gt;This profile is also the clearest place to answer the memory-versus-skills question. Hermes documentation defines memory as facts about users, projects, and preferences, while skills store procedures for how to do things. Research work needs both. Memory holds what the assistant has already learned about the domain and the reader's preferences; skills encode repeatable procedures such as "scan arXiv, summarize new papers, and write notes into Obsidian." That distinction matters because production research systems fail when everything is treated as memory or everything is treated as workflow. Hermes gives those concerns separate homes.&lt;/p&gt;

&lt;p&gt;The research profile also benefits disproportionately from cron. Hermes cron jobs can explicitly load skills before execution, and the automation guides stress that scheduled prompts must be fully self-contained because they run in fresh sessions. A recurring pipeline that combines &lt;code&gt;blogwatcher&lt;/code&gt;, &lt;code&gt;arxiv&lt;/code&gt;, &lt;code&gt;obsidian&lt;/code&gt;, or &lt;code&gt;llm-wiki&lt;/code&gt; is therefore more reliable than a vague "check what changed today" job. In other words, research profiles work best when source discovery, note writing, and long-term storage are each represented by a named skill rather than hidden inside one long natural-language prompt.&lt;/p&gt;
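
&lt;p&gt;To make that concrete, here is an illustrative contrast between a vague scheduled prompt and a self-contained one. The wording below is invented for this article, not quoted from the Hermes docs:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Vague - depends on chat context that a fresh cron session will not have:
"check what changed today"

# Self-contained - names the skills, sources, time window, and destination:
"Using the blogwatcher and arxiv skills, collect items published in the
last 24 hours on retrieval-augmented generation, summarize each one in
two or three sentences, and append the summaries to the Daily Papers
note with the obsidian skill."
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;The second prompt survives a fresh session because every dependency is named inside the prompt itself.&lt;/p&gt;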

&lt;h2&gt;
  
  
  The automation and operations profile
&lt;/h2&gt;

&lt;p&gt;The ops profile is less glamorous and often more valuable. This is the user who wants Hermes to react to events, inspect systems, run scripted checks, route output to a channel, and do all of that without turning the host into a liability. Hermes has the right building blocks for that style of work: built-in &lt;code&gt;webhook-subscriptions&lt;/code&gt; for event-driven activation, built-in &lt;code&gt;native-mcp&lt;/code&gt; and &lt;code&gt;mcporter&lt;/code&gt; for MCP-based tools, and official optional skills such as &lt;code&gt;docker-management&lt;/code&gt;, &lt;code&gt;fastmcp&lt;/code&gt;, &lt;code&gt;cli&lt;/code&gt;, and &lt;code&gt;1password&lt;/code&gt; when the workflow expands into containers, custom MCP servers, or secret injection.&lt;/p&gt;

&lt;p&gt;The reason this pack works is that each skill owns one boundary. &lt;code&gt;webhook-subscriptions&lt;/code&gt; handles ingress from external systems. &lt;code&gt;docker-management&lt;/code&gt; turns container chores into a named procedure instead of a free-form shell game. &lt;code&gt;fastmcp&lt;/code&gt; is useful when Hermes needs to become the orchestrator around new MCP tools, and &lt;code&gt;1password&lt;/code&gt; keeps secret handling explicit rather than smuggled into shell history or markdown files. The official MCP guidance reinforces the same production instinct: connect the right thing with the smallest useful surface.&lt;/p&gt;

&lt;p&gt;This profile is also the cleanest place to answer how scheduled AI workflows stay reliable. Hermes cron documentation says jobs run in fresh sessions, can attach one or more skills, and should use self-contained prompts. The cron troubleshooting guide adds that automatic firing depends on the gateway ticker rather than an ordinary CLI chat session. So the reliable pattern is straightforward even if the implementation is not: explicit skills, explicit delivery target, self-contained prompt, isolated backend, and a gateway that is actually running.&lt;/p&gt;

&lt;h2&gt;
  
  
  The executive operations profile
&lt;/h2&gt;

&lt;p&gt;There is a quieter but very real Hermes persona that looks like a chief of staff, operations lead, or heavily overloaded founder. The relevant skills are less flashy and more office-shaped: &lt;code&gt;google-workspace&lt;/code&gt;, &lt;code&gt;notion&lt;/code&gt;, &lt;code&gt;linear&lt;/code&gt;, &lt;code&gt;nano-pdf&lt;/code&gt;, &lt;code&gt;powerpoint&lt;/code&gt;, and the built-in &lt;code&gt;himalaya&lt;/code&gt; email skill, plus official optional skills such as &lt;code&gt;agentmail&lt;/code&gt;, &lt;code&gt;telephony&lt;/code&gt;, and &lt;code&gt;one-three-one-rule&lt;/code&gt;. That mix gives Hermes access to inbox, calendar, docs, tasks, decks, PDF cleanup, a structured communication framework, and even phone and SMS workflows where that actually matters.&lt;/p&gt;

&lt;p&gt;The flow here is more important than the catalog. &lt;code&gt;google-workspace&lt;/code&gt; anchors day-to-day execution. &lt;code&gt;notion&lt;/code&gt; and &lt;code&gt;linear&lt;/code&gt; prevent the assistant from becoming the task system of record. &lt;code&gt;one-three-one-rule&lt;/code&gt; is surprisingly useful because decision support is often the hardest thing to standardize, and that skill gives Hermes a named procedure for proposals rather than generic "summarize this" behavior. &lt;code&gt;nano-pdf&lt;/code&gt; and &lt;code&gt;powerpoint&lt;/code&gt; are the kind of operational multipliers that look small until a team starts touching decks and PDFs every day.&lt;/p&gt;

&lt;p&gt;Hermes messaging and voice features make this profile more practical than it first appears. The gateway can expose the agent through Slack, Telegram, Discord, WhatsApp, Email, Matrix, and several other channels, and the voice stack supports microphone input, spoken replies in messaging, and live Discord voice conversations. The docs also note that one Hermes instance can serve multiple users through allowlists and DM pairing, while bot tokens remain exclusive to a single profile. That is why a communication-heavy deployment usually benefits from at least one dedicated profile instead of sharing the same bot identity with engineering or ops.&lt;/p&gt;

&lt;h2&gt;
  
  
  The ML and data platform profile
&lt;/h2&gt;

&lt;p&gt;Hermes is built by a research lab, and that lineage shows. The catalogs include &lt;code&gt;jupyter-live-kernel&lt;/code&gt; for stateful notebook-style work, &lt;code&gt;huggingface-hub&lt;/code&gt; for model and dataset operations, &lt;code&gt;evaluating-llms-harness&lt;/code&gt; and &lt;code&gt;weights-and-biases&lt;/code&gt; for evaluation and experiment tracking, &lt;code&gt;qdrant-vector-search&lt;/code&gt; for production RAG storage, and a large built-in and optional MLOps tier with skills such as &lt;code&gt;axolotl&lt;/code&gt;, &lt;code&gt;fine-tuning-with-trl&lt;/code&gt;, &lt;code&gt;modal-serverless-gpu&lt;/code&gt;, &lt;code&gt;lambda-labs-gpu-cloud&lt;/code&gt;, &lt;code&gt;flash-attention&lt;/code&gt;, &lt;code&gt;tensorrt-llm&lt;/code&gt;, &lt;code&gt;pinecone&lt;/code&gt;, &lt;code&gt;qdrant&lt;/code&gt;, and &lt;code&gt;nemo-curator&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;What is notable here is not just breadth. It is that the skills span the whole stack from notebook iteration to data curation, evaluation, vector search, fine-tuning, and inference optimization. For an ML platform user, Hermes stops feeling like an assistant and starts feeling like a control plane that can carry procedures across the lifecycle. &lt;code&gt;jupyter-live-kernel&lt;/code&gt; handles iterative exploration, &lt;code&gt;evaluating-llms-harness&lt;/code&gt; and &lt;code&gt;weights-and-biases&lt;/code&gt; formalize measurement, and the optional compute and optimization skills let Hermes talk coherently about both experimentation and deployment.&lt;/p&gt;

&lt;p&gt;This is also the profile where restraint matters most. Because the optional MLOps catalog is so large, a production Hermes setup for ML work usually benefits from being opinionated about scope. A platform engineering profile that owns evaluation and deployment does not need every training framework installed. A research profile that owns papers and note systems does not need every vector database skill. Hermes can carry huge skill inventories, but production usefulness still comes from narrowing the active surface.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where skills become liabilities
&lt;/h2&gt;

&lt;p&gt;The strongest part of the Hermes skills system is also the place where production setups go wrong. Hermes can browse and install skills from its built-in catalog, the official optional catalog, Vercel's &lt;code&gt;skills.sh&lt;/code&gt;, well-known skill endpoints, direct GitHub repositories, and marketplace-style community sources. The security model distinguishes between &lt;code&gt;builtin&lt;/code&gt;, &lt;code&gt;official&lt;/code&gt;, &lt;code&gt;trusted&lt;/code&gt;, and &lt;code&gt;community&lt;/code&gt; sources, runs security scans for hub-installed skills, and allows &lt;code&gt;--force&lt;/code&gt; only for non-dangerous policy blocks. A dangerous scan verdict stays blocked. Hermes also surfaces upstream metadata such as repository URL, weekly installs, and audit signals during inspection. That is a solid trust model, but it is not a substitute for taste.&lt;/p&gt;

&lt;p&gt;There is also a limit to what a skill should be asked to do. Hermes documentation is explicit that skills are the preferred choice when the job can be expressed as instructions plus shell commands plus existing tools, while plugins are the more honest abstraction for custom tools, hooks, and lifecycle behavior. The plugin guide even shows how a plugin can bundle its own skill. In production, that means skills are best treated as reusable procedures, not as a forced substitute for proper tool or plugin design.&lt;/p&gt;
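
&lt;p&gt;As a rough illustration of that division of labor, a skill stays close to instructions plus helper scripts. The layout below is a generic skills-folder sketch, not copied from the Hermes docs:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;repo-triage/            # one skill = one named procedure
├── SKILL.md            # when to use it, plus the step-by-step instructions
└── scripts/
    └── triage.sh       # optional helper the instructions can call
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Anything that needs custom tools, hooks, or lifecycle behavior outgrows this shape and belongs in a plugin.&lt;/p&gt;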

&lt;p&gt;Community and support look healthy, but they do not erase change velocity. Hermes documentation points users to Discord, GitHub Discussions, Issues, and the Skills Hub, and the public repository shows frequent releases and a large contribution footprint. The operational takeaway is simple enough: updates are part of the system, not an event outside it. A real production setup assumes profiles, skills, and workflow assumptions will evolve, then uses isolation and narrow skill packs so that change stays local when it inevitably arrives.&lt;/p&gt;

&lt;p&gt;Hermes works best when skills are treated as procedural contracts around clearly separated profiles. The moment one profile becomes the engineering agent, the research assistant, the ops worker, the inbox bot, and the ML platform all at once, the system stops compounding and starts leaking responsibilities. The clean production pattern is less about having more skills and more about giving each profile a job description it can actually keep.&lt;/p&gt;

&lt;p&gt;This article is part of the &lt;a href="https://www.glukhov.org/ai-systems/" rel="noopener noreferrer"&gt;AI Systems&lt;/a&gt; cluster, which covers self-hosted assistants, retrieval architecture, local LLM infrastructure, and observability.&lt;/p&gt;

</description>
      <category>selfhosting</category>
      <category>hermes</category>
      <category>aiagents</category>
      <category>llm</category>
    </item>
    <item>
      <title>Backup and Restore Gitea server</title>
      <dc:creator>Rost</dc:creator>
      <pubDate>Mon, 20 Apr 2026 09:47:17 +0000</pubDate>
      <link>https://forem.com/rosgluk/backup-and-restore-gitea-server-3l8e</link>
      <guid>https://forem.com/rosgluk/backup-and-restore-gitea-server-3l8e</guid>
      <description>&lt;p&gt;Need to backup the 1) db, 2) filestorage, 3) some other gitea files. Here we go.&lt;/p&gt;

&lt;p&gt;In the &lt;a href="https://www.glukhov.org/developer-tools/git-and-forges/gitea-test1/" rel="noopener noreferrer"&gt;Testing Gitea&lt;/a&gt; post we installed the Gitea server.&lt;/p&gt;

&lt;p&gt;For the complete developer tools collection including Git workflows and Docker management, see &lt;a href="https://www.glukhov.org/developer-tools/" rel="noopener noreferrer"&gt;Developer Tools: The Complete Guide to Modern Development Workflows&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you're setting up Gitea for the first time, check out &lt;a href="https://www.glukhov.org/developer-tools/git-and-forges/gitea-test1/" rel="noopener noreferrer"&gt;Choosing free on-prem git server - Gitea is the winner!&lt;/a&gt; for installation details, and &lt;a href="https://www.glukhov.org/developer-tools/git-and-forges/gitea-ssl/" rel="noopener noreferrer"&gt;Gitea SSL with Apache as reverse proxy&lt;/a&gt; for secure deployment.&lt;/p&gt;

&lt;h2&gt;
  
  
  When
&lt;/h2&gt;

&lt;p&gt;Now, as a precaution against terrible things happening, we need to rehearse the backup and restore procedure.&lt;/p&gt;

&lt;p&gt;Better safe than sorry.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where
&lt;/h2&gt;

&lt;p&gt;Gitea server data consists of 3 components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;code&lt;/li&gt;
&lt;li&gt;filestore&lt;/li&gt;
&lt;li&gt;db&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In our test environment, all together they take a bit more than 700 MB.&lt;/p&gt;

&lt;p&gt;As the docs recommend, we need to stop all services and back everything up together, effectively in a single transaction.&lt;/p&gt;

&lt;p&gt;And restore the same way - all components together.&lt;/p&gt;

&lt;h2&gt;
  
  
  How - Backup
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; ~/gitea-srv-local

&lt;span class="c"&gt;# backup the db&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-t&lt;/span&gt; gitea-srv-local_db_1 bash &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s1"&gt;'pg_dump gitea -U gitea  --file=/var/lib/postgresql/backups/gitea-db-$(date +%Y-%m-%d).sql'&lt;/span&gt;

&lt;span class="c"&gt;# take gitea down&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;docker-compose down

&lt;span class="c"&gt;# check the backups folder&lt;/span&gt;
&lt;span class="nb"&gt;sudo ls &lt;/span&gt;postgres-backups

&lt;span class="c"&gt;# create backup dir&lt;/span&gt;
&lt;span class="nb"&gt;mkdir &lt;/span&gt;gitea-backups

&lt;span class="c"&gt;# backup gitea folder&lt;/span&gt;
&lt;span class="nb"&gt;sudo tar&lt;/span&gt; &lt;span class="nt"&gt;-zcvf&lt;/span&gt; gitea-backups/gitea-gitea-&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt; +%Y-%m-%d&lt;span class="si"&gt;)&lt;/span&gt;.tgz gitea/gitea

&lt;span class="c"&gt;# backup repos folder&lt;/span&gt;
&lt;span class="nb"&gt;sudo tar&lt;/span&gt; &lt;span class="nt"&gt;-zcvf&lt;/span&gt; gitea-backups/gitea-git-&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt; +%Y-%m-%d&lt;span class="si"&gt;)&lt;/span&gt;.tgz gitea/git

&lt;span class="c"&gt;# bring it up&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;docker-compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The last bit: log in to some other server and pull the backup folder there, or do some other, more elaborate file manipulation&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;scp &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="nb"&gt;uname&lt;/span&gt;@gitea-srv-ip-addr:/home/uname/gitea-srv-local/gitea-backups ~/gitea-backups
scp &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="nb"&gt;uname&lt;/span&gt;@gitea-srv-ip-addr:/home/uname/gitea-srv-local/postgres-backups ~/postgres-backups

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
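
&lt;p&gt;Before trusting the copies, it is worth verifying that the archives are actually readable and recording checksums. A small sketch, assuming the same paths as above:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# make sure every archive can actually be listed&lt;/span&gt;
for f in ~/gitea-backups/*.tgz; do
  tar -tzf "$f" &amp;gt; /dev/null &amp;amp;&amp;amp; echo "OK: $f" || echo "CORRUPT: $f"
done

&lt;span class="c"&gt;# store checksums next to the backups for later verification&lt;/span&gt;
sha256sum ~/gitea-backups/*.tgz &amp;gt; ~/gitea-backups/SHA256SUMS
sha256sum -c ~/gitea-backups/SHA256SUMS
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;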



&lt;h2&gt;
  
  
  How - Restore
&lt;/h2&gt;

&lt;p&gt;Actually, there is a bit more to it than that, especially around permissions and hooks, but the idea is the same.&lt;/p&gt;

&lt;p&gt;But! Check the original docs first: &lt;a href="https://docs.gitea.com/administration/backup-and-restore" rel="noopener noreferrer"&gt;https://docs.gitea.com/administration/backup-and-restore&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# install it first&lt;/span&gt;
&lt;span class="c"&gt;# then take it down&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;docker-compose down

&lt;span class="c"&gt;# restore files&lt;/span&gt;
&lt;span class="nb"&gt;tar&lt;/span&gt; &lt;span class="nt"&gt;-zxvf&lt;/span&gt; gitea-git-___.tgz gitea/git
&lt;span class="nb"&gt;tar&lt;/span&gt; &lt;span class="nt"&gt;-zxvf&lt;/span&gt; gitea-gitea-___.tgz gitea/gitea

&lt;span class="c"&gt;# bring it up&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;docker-compose up &lt;span class="nt"&gt;-d&lt;/span&gt;

&lt;span class="c"&gt;# here some activity with psql or pg_restore&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-t&lt;/span&gt; gitea-srv-local_db_1 bash &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s1"&gt;'psql gitea -U gitea  --file=/var/lib/postgresql/backups/gitea-db-$(date +%Y-%m-%d).sql'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
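
&lt;p&gt;If the restore target already contains a partially initialized schema, replaying the dump on top of it can fail. One common approach, assuming the same container, user, and database names as above, is to recreate the database first:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# drop and recreate the gitea database before replaying the dump&lt;/span&gt;
sudo docker exec -t gitea-srv-local_db_1 bash -c 'dropdb -U gitea gitea &amp;amp;&amp;amp; createdb -U gitea -O gitea gitea'
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;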



&lt;p&gt;Then go to the UI and check that everything is in place.&lt;/p&gt;

&lt;p&gt;For quick reference on Git commands, see &lt;a href="https://www.glukhov.org/developer-tools/git-and-forges/git-cheatsheet/" rel="noopener noreferrer"&gt;GIT Cheatsheet: Most useful GIT commands&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Useful links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.gitea.com/administration/backup-and-restore" rel="noopener noreferrer"&gt;https://docs.gitea.com/administration/backup-and-restore&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.glukhov.org/developer-tools/containers/docker-cheatsheet/" rel="noopener noreferrer"&gt;Docker Cheatsheet&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.glukhov.org/developer-tools/git-and-forges/gitflow-steps-and-alternatives/" rel="noopener noreferrer"&gt;Gitflow Explained: Steps, Alternatives, Pros, and Cons&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>linux</category>
      <category>selfhosting</category>
      <category>git</category>
      <category>gitea</category>
    </item>
    <item>
      <title>DBeaver vs Beekeeper - SQL Database Management Tools</title>
      <dc:creator>Rost</dc:creator>
      <pubDate>Mon, 20 Apr 2026 09:41:57 +0000</pubDate>
      <link>https://forem.com/rosgluk/dbeaver-vs-beekeeper-sql-database-management-tools-407l</link>
      <guid>https://forem.com/rosgluk/dbeaver-vs-beekeeper-sql-database-management-tools-407l</guid>
      <description>&lt;p&gt;New Linux Ubuntu 24.04 desktop edition has offered me to install Beekeeper Studio as SQL Editor and DB Manager tool.&lt;br&gt;
I was previously using DBeaver.&lt;br&gt;
OK.&lt;br&gt;
Let's &lt;a href="https://www.glukhov.org/developer-tools/database-tools/dbeaver-vs-beekeeper/" rel="noopener noreferrer"&gt;Compare DBeaver with Beekeeper Studio&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This nice image was generated by the &lt;a href="https://www.glukhov.org/post/2024/09/flux-text-to-image/" rel="noopener noreferrer"&gt;AI model Flux 1 dev&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;p&gt;TL;DR means &lt;code&gt;too long, didn't read&lt;/code&gt; for those who don't know...&lt;/p&gt;

&lt;p&gt;Beekeeper Studio looks nice, but still:&lt;/p&gt;

&lt;p&gt;My choice of best DB management tool is still the same - &lt;a href="https://www.glukhov.org/developer-tools/database-tools/install-dbeaver-on-linux/" rel="noopener noreferrer"&gt;DBeaver&lt;/a&gt;.&lt;br&gt;
The main advantages of DBeaver in my eyes are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;DBeaver can back up and restore SQL DBs&lt;/li&gt;
&lt;li&gt;DBeaver has a more permissive license (Apache) compared to Beekeeper Studio (GPLv3)&lt;/li&gt;
&lt;li&gt;In DBeaver you can select the output format - grid or text. Text is better for copy-pasting. Don't call it an &lt;code&gt;advanced feature&lt;/code&gt;, Beekeeper, please...&lt;/li&gt;
&lt;li&gt;The free Beekeeper Studio feels like an intentionally cut-down version meant to push everyone to the Pro one.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Detailed comparison of &lt;strong&gt;DBeaver&lt;/strong&gt; and &lt;strong&gt;Beekeeper Studio&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;OK, here’s a detailed comparison of &lt;strong&gt;DBeaver&lt;/strong&gt; and &lt;strong&gt;Beekeeper Studio&lt;/strong&gt;, two popular database management tools:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Differences&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Beekeeper Studio&lt;/th&gt;
&lt;th&gt;DBeaver&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;User Interface&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Modern, user-friendly, fast, and intuitive&lt;/td&gt;
&lt;td&gt;Traditional, robust, may feel complex&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Database Support&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;MySQL, PostgreSQL, SQLite, SQL Server, more&lt;/td&gt;
&lt;td&gt;Relational &amp;amp; NoSQL (MongoDB, Cassandra, etc)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Query Editor&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Intuitive, syntax highlighting, autocomplete&lt;/td&gt;
&lt;td&gt;Comprehensive, execution plan visualization&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Migration Tools&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Streamlined, easy-to-use migration wizards&lt;/td&gt;
&lt;td&gt;Supports migrations, less streamlined&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Data Visualization&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Basic charting, table previews&lt;/td&gt;
&lt;td&gt;Advanced charts, dashboards, reports&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Collaboration&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Built-in collaboration for simultaneous work&lt;/td&gt;
&lt;td&gt;No native collaboration; supports Git&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Learning Curve&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Minimal, easy to start&lt;/td&gt;
&lt;td&gt;Moderate, more features to learn&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Performance&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Lightweight, fast&lt;/td&gt;
&lt;td&gt;Can be slower due to feature density&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;License&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Open source (GPLv3), free &amp;amp; paid tiers&lt;/td&gt;
&lt;td&gt;Open source, free &amp;amp; paid versions&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Strengths
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Beekeeper Studio&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Ease of Use:&lt;/strong&gt; Designed for simplicity and speed, with a modern UI that feels like a code editor (similar to VSCode).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Quick Start:&lt;/strong&gt; Minimal learning curve, suitable for users who want to get work done without complex setup.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Collaboration:&lt;/strong&gt; Built-in tools for team-based database work.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Privacy:&lt;/strong&gt; No telemetry or tracking in the community edition.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;DBeaver&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Feature Density:&lt;/strong&gt; Extensive features for advanced users, including support for a wide range of database types (relational and NoSQL).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Visualization:&lt;/strong&gt; Advanced charting and reporting tools.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Version Control:&lt;/strong&gt; Integration with Git for team collaboration via code repositories.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Universal Support:&lt;/strong&gt; Broad compatibility with obscure or legacy databases via JDBC.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Use Cases
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Choose Beekeeper Studio&lt;/strong&gt; if you prioritize a fast, modern, and easy-to-use tool for SQL work, especially if you work with mainstream databases and value collaboration and privacy.
For SQL command references, see &lt;a href="https://www.glukhov.org/developer-tools/database-tools/sql-cheatsheet/" rel="noopener noreferrer"&gt;SQL Cheatsheet&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Choose DBeaver&lt;/strong&gt; if you need support for a wide variety of databases (including NoSQL), advanced data visualization, or integration with version control systems.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;DBeaver&lt;/strong&gt; offers superior support for NoSQL databases—including both Redis and MongoDB—compared to Beekeeper Studio.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;DBeaver:&lt;/strong&gt; Supports a wide range of NoSQL databases such as MongoDB, Cassandra, Redis (via JDBC or plugins), and more. Its advanced database management features, including schema browsing, query building, and data visualization, make it a strong choice for users who need to work with various NoSQL solutions. DBeaver’s extensions and plugins further enhance its compatibility with these databases.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Beekeeper Studio:&lt;/strong&gt; Primarily focused on relational databases (e.g., MySQL, PostgreSQL, SQLite, SQL Server). While it is user-friendly and modern, current versions do not provide native or robust support for NoSQL databases like MongoDB or Redis.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Beekeeper Studio offers a more user-friendly and streamlined experience, while DBeaver provides broader database support and advanced features at the cost of a steeper learning curve. The choice depends on your workflow, database needs, and preference for simplicity versus feature richness.&lt;br&gt;
If your primary need is working with NoSQL databases such as Redis and MongoDB, DBeaver is the better choice.&lt;br&gt;
Beekeeper Studio is more suitable for relational database management.&lt;/p&gt;

&lt;p&gt;For PostgreSQL-specific commands, check out the &lt;a href="https://www.glukhov.org/developer-tools/database-tools/postgresql-cheatsheet/" rel="noopener noreferrer"&gt;PostgreSQL Cheatsheet&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;And I like DBeaver more.&lt;/p&gt;

&lt;h2&gt;
  
  
  Useful links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.glukhov.org/developer-tools/database-tools/install-dbeaver-on-linux/" rel="noopener noreferrer"&gt;Install DBeaver on linux&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.glukhov.org/developer-tools/" rel="noopener noreferrer"&gt;Developer Tools: The Complete Guide to Modern Development Workflows&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dbeaver.io" rel="noopener noreferrer"&gt;https://dbeaver.io&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.beekeeperstudio.io" rel="noopener noreferrer"&gt;https://www.beekeeperstudio.io&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>dev</category>
      <category>sql</category>
    </item>
    <item>
      <title>Kubuntu vs KDE Neon: A Technical Deep Dive</title>
      <dc:creator>Rost</dc:creator>
      <pubDate>Sun, 19 Apr 2026 04:48:01 +0000</pubDate>
      <link>https://forem.com/rosgluk/kubuntu-vs-kde-neon-a-technical-deep-dive-48dm</link>
      <guid>https://forem.com/rosgluk/kubuntu-vs-kde-neon-a-technical-deep-dive-48dm</guid>
      <description>&lt;p&gt;For KDE Plasma fans, two Linux distributions frequently come up in discussion:&lt;br&gt;
&lt;a href="https://www.glukhov.org/developer-tools/comparisons/kubuntu-vs-kde-neon/" rel="noopener noreferrer"&gt;&lt;strong&gt;Kubuntu&lt;/strong&gt; and &lt;strong&gt;KDE Neon&lt;/strong&gt;&lt;/a&gt;.&lt;br&gt;
They may appear similar - both ship with KDE Plasma as the default desktop, both are based on Ubuntu, and both are friendly to newcomers. &lt;/p&gt;

&lt;p&gt;But under the hood, they diverge in philosophy, update cadence, and package management. Let's break them down in technical detail.&lt;/p&gt;

&lt;p&gt;For more developer tools comparisons, see &lt;a href="https://www.glukhov.org/developer-tools/" rel="noopener noreferrer"&gt;Developer Tools: The Complete Guide to Modern Development Workflows&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Base System and Repositories
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://kubuntu.org/" rel="noopener noreferrer"&gt;&lt;strong&gt;Kubuntu&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Built as an official &lt;strong&gt;Ubuntu flavor&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Uses &lt;strong&gt;Ubuntu repositories&lt;/strong&gt; (main, universe, multiverse, restricted) plus the &lt;strong&gt;Kubuntu PPAs&lt;/strong&gt; maintained by the Kubuntu team.&lt;/li&gt;
&lt;li&gt;Plasma and KDE applications are &lt;strong&gt;snapshotted&lt;/strong&gt; per &lt;a href="https://www.glukhov.org/developer-tools/terminals-shell/check-linux-ubuntu-version/" rel="noopener noreferrer"&gt;Ubuntu release cycle&lt;/a&gt;, meaning you only get newer KDE versions when upgrading to the next Kubuntu release (unless you manually add backports).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;a href="https://neon.kde.org/" rel="noopener noreferrer"&gt;&lt;strong&gt;KDE Neon&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Built on top of &lt;strong&gt;Ubuntu LTS releases only&lt;/strong&gt; (e.g., 22.04 LTS).&lt;/li&gt;
&lt;li&gt;Core system packages (kernel, drivers, base libraries) come from Ubuntu LTS repositories.&lt;/li&gt;
&lt;li&gt;KDE packages (Plasma desktop, Frameworks, and Applications) come directly from the &lt;strong&gt;KDE Neon repositories&lt;/strong&gt;, which are maintained by KDE developers.&lt;/li&gt;
&lt;li&gt;Uses a &lt;strong&gt;hybrid model&lt;/strong&gt;: stable Ubuntu LTS base + rolling-release KDE stack.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Update and Release Cycle
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Kubuntu&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Release cycle mirrors Ubuntu: &lt;strong&gt;every six months&lt;/strong&gt; (April and October).&lt;/li&gt;
&lt;li&gt;LTS releases every two years with &lt;strong&gt;5 years of support&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;KDE updates are delivered at the &lt;strong&gt;point release&lt;/strong&gt; stage. Between upgrades, &lt;a href="https://kde.org/plasma-desktop/" rel="noopener noreferrer"&gt;KDE Plasma&lt;/a&gt; versions stay frozen (unless you use the &lt;strong&gt;Kubuntu Backports PPA&lt;/strong&gt;).&lt;/li&gt;
&lt;li&gt;Example: Kubuntu 22.04 shipped with Plasma 5.24 LTS and won’t get Plasma 5.27 unless the user opts into backports.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;KDE Neon&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Ubuntu base remains fixed (e.g., still on 22.04).&lt;/li&gt;
&lt;li&gt;KDE software is updated &lt;strong&gt;within days of upstream release&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Users receive Plasma point releases, Frameworks, and Application updates through standard APT upgrades.&lt;/li&gt;
&lt;li&gt;Example: Plasma 5.27 becomes available to Neon users almost immediately after KDE publishes it.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
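&lt;p&gt;If you want a newer Plasma on Kubuntu without waiting for the next release, the Kubuntu Backports PPA mentioned above can be enabled roughly like this (assumes a Kubuntu system with sudo access; review what the PPA will upgrade before committing):&lt;/p&gt;

```shell
# Enable the Kubuntu Backports PPA and pull in the newer KDE packages
sudo add-apt-repository ppa:kubuntu-ppa/backports
sudo apt update
sudo apt full-upgrade
```

Note that once enabled, removing the PPA cleanly later requires &lt;code&gt;ppa-purge&lt;/code&gt;, so treat this as a semi-permanent choice.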

&lt;h2&gt;
  
  
  Package Management
&lt;/h2&gt;

&lt;p&gt;Both use &lt;strong&gt;APT/dpkg&lt;/strong&gt; as their package management system, but their package sources differ.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Kubuntu&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;apt&lt;/code&gt; pulls from Ubuntu archives and Kubuntu PPAs.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://en.wikipedia.org/wiki/Snap_(software)" rel="noopener noreferrer"&gt;Snap&lt;/a&gt; integration comes by default, as per Ubuntu policy.&lt;/li&gt;
&lt;li&gt;Flatpak available but not preconfigured.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;KDE Neon&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;apt&lt;/code&gt; pulls core from &lt;a href="https://www.glukhov.org/developer-tools/local-dev-platforms/install-linux-ubuntu-24-04/" rel="noopener noreferrer"&gt;Ubuntu LTS&lt;/a&gt; + KDE Neon’s own repos.&lt;/li&gt;
&lt;li&gt;KDE Neon avoids Snap by default, focusing on DEB packages.&lt;/li&gt;
&lt;li&gt;Flatpak is often recommended for newer non-KDE apps.&lt;/li&gt;
&lt;li&gt;Because KDE software is packaged directly by KDE devs, you often see newer versions compared to Ubuntu/Kubuntu.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
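&lt;p&gt;On either distro you can verify where a given package actually comes from with &lt;code&gt;apt policy&lt;/code&gt;. For example (package name is illustrative): on KDE Neon the candidate version should list a Neon repository, on Kubuntu an Ubuntu archive or PPA:&lt;/p&gt;

```shell
# Show installed/candidate versions and the repositories offering them
apt policy plasma-desktop
```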

&lt;h2&gt;
  
  
  Kernel and Driver Updates
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Kubuntu&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Follows Ubuntu kernel and driver updates.&lt;/li&gt;
&lt;li&gt;Hardware Enablement (HWE) kernels available on LTS.&lt;/li&gt;
&lt;li&gt;Kernel updates tied to Ubuntu release cycle.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;KDE Neon&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Since the base is Ubuntu LTS, kernel updates come from &lt;strong&gt;Ubuntu LTS + HWE stack&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Neon doesn’t modify kernel or drivers — focus is purely on KDE software.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
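&lt;p&gt;On either system you can opt into the newer HWE kernel series on a 22.04 base like this (the package name applies to 22.04 specifically; needs sudo):&lt;/p&gt;

```shell
# Check which kernel is currently running
uname -r
# Opt into the Hardware Enablement (rolling) kernel stack on 22.04
sudo apt install linux-generic-hwe-22.04
```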

&lt;h2&gt;
  
  
  Stability and Regression Risks
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Kubuntu&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Stable because Plasma and &lt;a href="https://apps.kde.org/" rel="noopener noreferrer"&gt;KDE apps&lt;/a&gt; are frozen until the next release.&lt;/li&gt;
&lt;li&gt;Fewer regressions because software versions are heavily tested.&lt;/li&gt;
&lt;li&gt;Risks come mainly when upgrading between Ubuntu versions (e.g., 22.04 → 22.10).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;KDE Neon&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;More prone to &lt;strong&gt;regressions&lt;/strong&gt; since you’re on bleeding-edge KDE builds.&lt;/li&gt;
&lt;li&gt;Users sometimes face issues after major Plasma updates (e.g., panel crashes, &lt;a href="https://invent.kde.org/plasma/kwin" rel="noopener noreferrer"&gt;KWin&lt;/a&gt; bugs).&lt;/li&gt;
&lt;li&gt;However, KDE Neon acts as a &lt;strong&gt;testing ground&lt;/strong&gt;, so bugs are quickly reported and patched by KDE devs.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Target Use Cases
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Kubuntu&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enterprises, developers, and users who want a &lt;strong&gt;“set it and forget it”&lt;/strong&gt; system.&lt;/li&gt;
&lt;li&gt;Ideal for those who rely on long-term stability (e.g., LTS versions).&lt;/li&gt;
&lt;li&gt;Works well in production and business setups.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;KDE Neon&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enthusiasts, testers, and developers who want &lt;strong&gt;the latest KDE software&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Great for people contributing to KDE or reporting bugs upstream.&lt;/li&gt;
&lt;li&gt;Not always ideal for mission-critical environments due to its rolling KDE nature.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Resource Usage and Performance
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Plasma itself is efficient, and both distros perform similarly on the same hardware.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kubuntu&lt;/strong&gt;: Slightly more conservative with background services, since it adheres to Ubuntu defaults.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Neon&lt;/strong&gt;: Sometimes lighter initially, but Plasma updates may introduce new services or defaults faster than Kubuntu.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Community and Support
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Kubuntu&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Official Ubuntu flavor → benefits from Ubuntu forums, AskUbuntu, Launchpad bug tracking.&lt;/li&gt;
&lt;li&gt;Kubuntu team maintains additional documentation and a strong IRC/Telegram community.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;KDE Neon&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Supported directly by KDE devs and community.&lt;/li&gt;
&lt;li&gt;Bugs in KDE software can be reported &lt;strong&gt;directly upstream to KDE&lt;/strong&gt;, rather than Ubuntu.&lt;/li&gt;
&lt;li&gt;Smaller support base outside of KDE-specific issues, but relies on Ubuntu docs for general system problems.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  TL;DR — Key Differences in Table Form
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Kubuntu&lt;/th&gt;
&lt;th&gt;KDE Neon&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Base&lt;/td&gt;
&lt;td&gt;Ubuntu (regular releases + LTS)&lt;/td&gt;
&lt;td&gt;Ubuntu LTS only&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Update cycle&lt;/td&gt;
&lt;td&gt;Fixed, tied to Ubuntu&lt;/td&gt;
&lt;td&gt;Rolling KDE on fixed Ubuntu LTS&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;KDE updates&lt;/td&gt;
&lt;td&gt;Frozen per release (backports optional)&lt;/td&gt;
&lt;td&gt;Immediate, within days of upstream&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Package sources&lt;/td&gt;
&lt;td&gt;Ubuntu repos + Kubuntu PPAs&lt;/td&gt;
&lt;td&gt;Ubuntu LTS repos + Neon KDE repos&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Snap support&lt;/td&gt;
&lt;td&gt;Included by default&lt;/td&gt;
&lt;td&gt;Not included by default&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Stability&lt;/td&gt;
&lt;td&gt;Very stable&lt;/td&gt;
&lt;td&gt;Stable base, but KDE is bleeding-edge&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Target users&lt;/td&gt;
&lt;td&gt;General desktop &amp;amp; enterprise&lt;/td&gt;
&lt;td&gt;KDE enthusiasts, testers, devs&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;While &lt;strong&gt;Kubuntu&lt;/strong&gt; is a rock-solid Ubuntu flavor offering a predictable, stable KDE Plasma experience, &lt;strong&gt;KDE Neon&lt;/strong&gt; acts as a rolling showcase of the KDE ecosystem, with Plasma updates delivered almost instantly.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Choose &lt;strong&gt;Kubuntu&lt;/strong&gt; if you want &lt;strong&gt;stability, long-term support, and predictability&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Choose &lt;strong&gt;KDE Neon&lt;/strong&gt; if you want &lt;strong&gt;the latest KDE tech, rapid updates, and direct integration with KDE development&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both are excellent — the decision comes down to whether you prioritize &lt;strong&gt;stability&lt;/strong&gt; or &lt;strong&gt;innovation&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Useful links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.glukhov.org/developer-tools/terminals-shell/bash-cheat-sheet/" rel="noopener noreferrer"&gt;Bash Cheat Sheet&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.glukhov.org/developer-tools/local-dev-platforms/install-linux-ubuntu-24-04/" rel="noopener noreferrer"&gt;How to Install Ubuntu 24.04 &amp;amp; useful tools&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.glukhov.org/developer-tools/terminals-shell/reinstall-linux/" rel="noopener noreferrer"&gt;Reinstall linux Mint&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.glukhov.org/developer-tools/terminals-shell/file-managers-for-linux-ubuntu/" rel="noopener noreferrer"&gt;Context menu in File managers for Ubuntu 24.04 - Nautilus vs Nemo vs Dolphin vs Caja&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.glukhov.org/observability/gpu-monitoring-apps-linux/" rel="noopener noreferrer"&gt;GPU monitoring applications in Linux / Ubuntu&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.glukhov.org/developer-tools/comparisons/programming-languages-frameworks-popularity/" rel="noopener noreferrer"&gt;Programming languages and frameworks popularity&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Little Ubuntu Linux Howtos
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.glukhov.org/developer-tools/terminals-shell/check-linux-ubuntu-version/" rel="noopener noreferrer"&gt;Check Linux Ubuntu Version&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.glukhov.org/developer-tools/terminals-shell/howto-start-terminal-windows-tiled-linux-mint-ubuntu/" rel="noopener noreferrer"&gt;How to start terminal windows tiled linux mint ubuntu&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.glukhov.org/developer-tools/terminals-shell/how-to-change-static-ip-address-in-ubuntu/" rel="noopener noreferrer"&gt;How to Change a Static IP Address in Ubuntu Server&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>linux</category>
      <category>selfhosting</category>
      <category>devops</category>
    </item>
    <item>
      <title>Gitflow Explained: Steps, Alternatives, Pros, and Cons</title>
      <dc:creator>Rost</dc:creator>
      <pubDate>Sun, 19 Apr 2026 04:35:33 +0000</pubDate>
      <link>https://forem.com/rosgluk/gitflow-explained-steps-alternatives-pros-and-cons-10ae</link>
      <guid>https://forem.com/rosgluk/gitflow-explained-steps-alternatives-pros-and-cons-10ae</guid>
      <description>&lt;p&gt;&lt;a href="https://www.glukhov.org/developer-tools/git-and-forges/gitflow-steps-and-alternatives/" rel="noopener noreferrer"&gt;Gitflow&lt;/a&gt; is widely used in projects requiring &lt;strong&gt;versioned releases&lt;/strong&gt;, &lt;strong&gt;parallel development&lt;/strong&gt;, and &lt;strong&gt;hotfix management&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;This guide is part of &lt;a href="https://www.glukhov.org/developer-tools/" rel="noopener noreferrer"&gt;Developer Tools: The Complete Guide to Modern Development Workflows&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;By separating development, testing, and production environments into distinct branches, Gitflow ensures &lt;strong&gt;predictable deployments&lt;/strong&gt; and &lt;strong&gt;clear traceability&lt;/strong&gt; of changes. Its importance lies in its ability to &lt;strong&gt;scale for large teams&lt;/strong&gt; and &lt;strong&gt;maintain stability&lt;/strong&gt; in complex projects.&lt;/p&gt;

&lt;p&gt;Gitflow is a branching model introduced by Vincent Driessen in 2010, designed to manage complex software development workflows with structured release cycles. &lt;/p&gt;

&lt;h2&gt;
  
  
  2. Definition and Core Concept of Gitflow
&lt;/h2&gt;

&lt;p&gt;Gitflow is a &lt;strong&gt;branching strategy&lt;/strong&gt; that organizes workflows around five branch types:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;main&lt;/code&gt;/&lt;code&gt;master&lt;/code&gt;&lt;/strong&gt;: Stores production-ready code (stable releases).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;develop&lt;/code&gt;&lt;/strong&gt;: Acts as the integration branch for ongoing development.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;feature/xxx&lt;/code&gt;&lt;/strong&gt;: Short-lived branches for developing new features.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;release/xxx&lt;/code&gt;&lt;/strong&gt;: Created from &lt;code&gt;develop&lt;/code&gt; to prepare for production releases.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;hotfix/xxx&lt;/code&gt;&lt;/strong&gt;: Branches from &lt;code&gt;main&lt;/code&gt; to address critical production bugs.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The core concept is to &lt;strong&gt;isolate work&lt;/strong&gt; (features, releases, hotfixes) into dedicated branches, ensuring that production code remains stable while allowing parallel development and testing.&lt;/p&gt;




&lt;h2&gt;
  
  
  3. Step-by-Step Sequence of Actions in Gitflow
&lt;/h2&gt;

&lt;p&gt;The Gitflow workflow follows a structured process:  &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Initialize Gitflow&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Use &lt;code&gt;git flow init&lt;/code&gt; or standard Git commands to set up &lt;code&gt;main&lt;/code&gt; and &lt;code&gt;develop&lt;/code&gt; branches.
&lt;/li&gt;
&lt;li&gt;Before starting, make sure to &lt;a href="https://www.glukhov.org/developer-tools/git-and-forges/configure-git-username/" rel="noopener noreferrer"&gt;Configure Git User Name and Email Address&lt;/a&gt;.
&lt;/li&gt;
&lt;li&gt;For a comprehensive list of Git commands, see &lt;a href="https://www.glukhov.org/developer-tools/git-and-forges/git-cheatsheet/" rel="noopener noreferrer"&gt;GIT Cheatsheet: Most useful GIT commands&lt;/a&gt;.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Start a Feature&lt;/strong&gt;:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a feature branch from &lt;code&gt;develop&lt;/code&gt;:
&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt; git checkout develop  
 git checkout &lt;span class="nt"&gt;-b&lt;/span&gt; feature/new-feature  
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;(Alternative): &lt;code&gt;git flow feature start new-feature&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Develop the Feature&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Commit changes to the feature branch.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Finish the Feature&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;Merge into &lt;code&gt;develop&lt;/code&gt; and delete the branch:&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt; git checkout develop  
 git merge feature/new-feature  
 git branch &lt;span class="nt"&gt;-d&lt;/span&gt; feature/new-feature  
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;(Alternative): &lt;code&gt;git flow feature finish new-feature&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Prepare a Release&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;Create a release branch from &lt;code&gt;develop&lt;/code&gt;:&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt; git checkout develop  
 git checkout &lt;span class="nt"&gt;-b&lt;/span&gt; release/1.2.0  
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;(Alternative): &lt;code&gt;git flow release start 1.2.0&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Finalize the Release&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;Merge into &lt;code&gt;main&lt;/code&gt; and &lt;code&gt;develop&lt;/code&gt;, tag the release:&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt; git checkout main  
 git merge release/1.2.0  
 git tag &lt;span class="nt"&gt;-a&lt;/span&gt; 1.2.0 &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"Release version 1.2.0"&lt;/span&gt;  
 git checkout develop  
 git merge release/1.2.0  
 git branch &lt;span class="nt"&gt;-d&lt;/span&gt; release/1.2.0  
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;(Alternative): &lt;code&gt;git flow release finish 1.2.0&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Handle Hotfixes&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;Create a hotfix branch from &lt;code&gt;main&lt;/code&gt;:&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt; git checkout main  
 git checkout &lt;span class="nt"&gt;-b&lt;/span&gt; hotfix/critical-bug  
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;(Alternative): &lt;code&gt;git flow hotfix start critical-bug&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Merge into &lt;code&gt;main&lt;/code&gt; and &lt;code&gt;develop&lt;/code&gt;, tag the hotfix:&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt; git checkout main  
 git merge hotfix/critical-bug  
 git tag &lt;span class="nt"&gt;-a&lt;/span&gt; 1.2.1 &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"Hotfix version 1.2.1"&lt;/span&gt;  
 git checkout develop  
 git merge hotfix/critical-bug  
 git branch &lt;span class="nt"&gt;-d&lt;/span&gt; hotfix/critical-bug  
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;(Alternative): &lt;code&gt;git flow hotfix finish critical-bug&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  4. Typical Workflow Stages and Branching Strategy
&lt;/h2&gt;

&lt;p&gt;Gitflow’s branching strategy ensures &lt;strong&gt;separation of concerns&lt;/strong&gt;:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Feature branches&lt;/strong&gt; allow parallel development without affecting &lt;code&gt;develop&lt;/code&gt;.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Release branches&lt;/strong&gt; provide a &lt;strong&gt;testing environment&lt;/strong&gt; for finalizing releases.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hotfix branches&lt;/strong&gt; enable &lt;strong&gt;urgent bug fixes&lt;/strong&gt; without disrupting ongoing development.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Key stages include:  &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Feature Development&lt;/strong&gt; → 2. &lt;strong&gt;Integration into &lt;code&gt;develop&lt;/code&gt;&lt;/strong&gt; → 3. &lt;strong&gt;Release Preparation&lt;/strong&gt; → 4. &lt;strong&gt;Stabilization and Deployment&lt;/strong&gt; → 5. &lt;strong&gt;Hotfix Handling&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  5. Common Use Cases and Scenarios for Gitflow
&lt;/h2&gt;

&lt;p&gt;Gitflow is ideal for:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Large teams&lt;/strong&gt; requiring &lt;strong&gt;structured collaboration&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Projects with scheduled releases&lt;/strong&gt; (e.g., enterprise software, regulated industries).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Complex systems&lt;/strong&gt; requiring &lt;strong&gt;versioned deployments&lt;/strong&gt; (e.g., multi-tenant applications).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Teams needing isolation&lt;/strong&gt; between development, testing, and production environments.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  6. Overview of Gitflow Alternatives
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;GitHub Flow&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Workflow&lt;/strong&gt;: Single &lt;code&gt;main&lt;/code&gt; branch with short-lived feature branches.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Steps&lt;/strong&gt;:

&lt;ol&gt;
&lt;li&gt;Create a feature branch from &lt;code&gt;main&lt;/code&gt;.
&lt;/li&gt;
&lt;li&gt;Merge via pull request after testing.
&lt;/li&gt;
&lt;li&gt;Deploy directly to production.
&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Advantages&lt;/strong&gt;: Simplicity, CI/CD compatibility, rapid deployment.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Disadvantages&lt;/strong&gt;: No structured release management; unsuitable for versioned projects.
&lt;/li&gt;
&lt;/ul&gt;
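&lt;p&gt;The three steps above can be sketched with plain Git in a throwaway repository (the pull-request step is approximated by a local merge; all names here are illustrative):&lt;/p&gt;

```shell
# GitHub Flow: one long-lived main branch, short-lived feature branches
REPO=$(mktemp -d)
git -C "$REPO" init -q -b main
git -C "$REPO" -c user.email=demo@example.com -c user.name=Demo \
    commit -q --allow-empty -m "initial commit"

# 1. Create a feature branch from main
git -C "$REPO" checkout -q -b feature/login
echo "login page" > "$REPO/login.txt"
git -C "$REPO" add login.txt
git -C "$REPO" -c user.email=demo@example.com -c user.name=Demo \
    commit -q -m "Add login page"

# 2. Merge after review (a pull request would create this merge commit)
git -C "$REPO" checkout -q main
git -C "$REPO" merge -q --no-ff -m "Merge feature/login" feature/login

# 3. main is now what you deploy to production
git -C "$REPO" log --oneline
```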

&lt;h3&gt;
  
  
  &lt;strong&gt;GitLab Flow&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Workflow&lt;/strong&gt;: Combines GitHub Flow with environment-specific branches (e.g., &lt;code&gt;staging&lt;/code&gt;, &lt;code&gt;production&lt;/code&gt;).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Advantages&lt;/strong&gt;: Balances simplicity and structure for hybrid workflows.
&lt;/li&gt;
&lt;/ul&gt;
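&lt;p&gt;A minimal sketch of the environment-branch idea, assuming branches named &lt;code&gt;staging&lt;/code&gt; and &lt;code&gt;production&lt;/code&gt; (illustrative; real setups promote changes via merge requests and CI pipelines):&lt;/p&gt;

```shell
# GitLab Flow: work lands on main, then is promoted through environments
REPO=$(mktemp -d)
git -C "$REPO" init -q -b main
git -C "$REPO" -c user.email=demo@example.com -c user.name=Demo \
    commit -q --allow-empty -m "feature work lands on main"
git -C "$REPO" branch staging
git -C "$REPO" branch production

# A new commit on main...
git -C "$REPO" -c user.email=demo@example.com -c user.name=Demo \
    commit -q --allow-empty -m "fix typo on landing page"

# ...is promoted first to staging, then to production
git -C "$REPO" checkout -q staging
git -C "$REPO" merge -q main
git -C "$REPO" checkout -q production
git -C "$REPO" merge -q staging
```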

&lt;h3&gt;
  
  
  &lt;strong&gt;Trunk-Based Development&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Workflow&lt;/strong&gt;: All changes are merged directly into &lt;code&gt;main&lt;/code&gt; using &lt;strong&gt;feature flags&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Advantages&lt;/strong&gt;: Reduces branching overhead, supports CI/CD.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Disadvantages&lt;/strong&gt;: Requires mature testing pipelines and disciplined teams.
&lt;/li&gt;
&lt;/ul&gt;
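&lt;p&gt;The feature-flag idea can be sketched like this: unfinished work is merged straight into &lt;code&gt;main&lt;/code&gt; but kept dark behind a flag (an environment variable here; the script and flag name are made up for illustration):&lt;/p&gt;

```shell
# Trunk-based development: small commits go straight to main,
# with incomplete features hidden behind a runtime flag
REPO=$(mktemp -d)
git -C "$REPO" init -q -b main
printf '%s\n' \
  '#!/bin/sh' \
  'if [ "${ENABLE_NEW_CHECKOUT:-0}" = "1" ]; then' \
  '  echo "new checkout flow"' \
  'else' \
  '  echo "classic checkout flow"' \
  'fi' > "$REPO/app.sh"
git -C "$REPO" add app.sh
git -C "$REPO" -c user.email=demo@example.com -c user.name=Demo \
    commit -q -m "Add new checkout behind a flag"

sh "$REPO/app.sh"                        # users still get the old path
ENABLE_NEW_CHECKOUT=1 sh "$REPO/app.sh"  # the flag turns the new path on
```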

&lt;h3&gt;
  
  
  &lt;strong&gt;Branch Per Feature&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Workflow&lt;/strong&gt;: Each feature is developed in its own branch, merged into &lt;code&gt;main&lt;/code&gt; after testing.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Advantages&lt;/strong&gt;: Isolates features, reduces conflicts.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Adoption&lt;/strong&gt;: Used by companies like Spotify and Netflix.
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  7. Weaknesses and Limitations of Gitflow
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Complexity&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Managing multiple branches increases &lt;strong&gt;merge conflicts&lt;/strong&gt; and &lt;strong&gt;overhead&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;Requires strict &lt;strong&gt;branch hygiene&lt;/strong&gt; and discipline.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Not Ideal for CI/CD&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;The branching model is &lt;strong&gt;rigid&lt;/strong&gt; for continuous delivery environments.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Risk of Merge Conflicts&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Long-lived branches (e.g., &lt;code&gt;develop&lt;/code&gt;, &lt;code&gt;release&lt;/code&gt;) can diverge, leading to integration issues.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Learning Curve&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;New developers may struggle with &lt;strong&gt;branching rules&lt;/strong&gt; and &lt;strong&gt;merge strategies&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Slower Releases&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Multi-step processes (e.g., release → &lt;code&gt;develop&lt;/code&gt; → &lt;code&gt;main&lt;/code&gt;) can delay deployments.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  8. Advantages and Benefits of Using Gitflow
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Structured Release Management&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Clear separation of features, releases, and hotfixes.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stability&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Ensures &lt;code&gt;main&lt;/code&gt; remains &lt;strong&gt;production-ready&lt;/strong&gt; at all times.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Version Control&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Semantic versioning and tagging improve &lt;strong&gt;traceability&lt;/strong&gt; and &lt;strong&gt;reproducibility&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Collaboration&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Enables &lt;strong&gt;parallel development&lt;/strong&gt; and &lt;strong&gt;isolated testing&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hotfix Efficiency&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Critical fixes can be applied to &lt;code&gt;main&lt;/code&gt; without disrupting ongoing development.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  9. Comparison: Gitflow vs. Alternative Workflows
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Aspect&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Gitflow&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;GitHub Flow&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Trunk-Based Development&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Branching Model&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Multi-branch (feature, develop, release, hotfix, main)&lt;/td&gt;
&lt;td&gt;Minimal (main + feature branches)&lt;/td&gt;
&lt;td&gt;Single &lt;code&gt;main&lt;/code&gt; branch with feature flags&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Release Process&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Structured with release branches&lt;/td&gt;
&lt;td&gt;Direct deployment from main&lt;/td&gt;
&lt;td&gt;Continuous deployment from &lt;code&gt;main&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Complexity&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;High (suitable for large projects)&lt;/td&gt;
&lt;td&gt;Low (ideal for agile, small teams)&lt;/td&gt;
&lt;td&gt;Low (requires mature CI/CD)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Merge Frequency&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Frequent (across multiple branches)&lt;/td&gt;
&lt;td&gt;Minimal (fewer merges)&lt;/td&gt;
&lt;td&gt;Frequent (direct to &lt;code&gt;main&lt;/code&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Testing Requirements&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Rigorous (for release/hotfix branches)&lt;/td&gt;
&lt;td&gt;Automated tests critical for main&lt;/td&gt;
&lt;td&gt;Automated tests for feature flags&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  10. Best Practices for Implementing Gitflow
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Automate Workflows&lt;/strong&gt;: Use CI/CD tools (e.g., Jenkins, GitHub Actions) to reduce manual effort.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enforce Branch Naming Conventions&lt;/strong&gt;: Standardize branch names (e.g., &lt;code&gt;feature/{name}&lt;/code&gt;) for clarity.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Regular Sync Meetings&lt;/strong&gt;: Ensure alignment between teams to address bottlenecks.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automated Dependency Management&lt;/strong&gt;: Use tools like Dependabot to manage outdated dependencies.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Merge Strategy&lt;/strong&gt;: Use &lt;code&gt;--no-ff&lt;/code&gt; merges to preserve feature history.
&lt;/li&gt;
&lt;/ol&gt;
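&lt;p&gt;Practice 5 is easy to see in a throwaway repository: a plain merge would fast-forward and the feature branch would vanish from history, while &lt;code&gt;--no-ff&lt;/code&gt; forces a merge commit marking where the feature landed (branch names are illustrative):&lt;/p&gt;

```shell
# Demonstrate why --no-ff preserves feature history
REPO=$(mktemp -d)
git -C "$REPO" init -q -b develop
git -C "$REPO" -c user.email=demo@example.com -c user.name=Demo \
    commit -q --allow-empty -m "initial"
git -C "$REPO" checkout -q -b feature/report
git -C "$REPO" -c user.email=demo@example.com -c user.name=Demo \
    commit -q --allow-empty -m "Build report"
git -C "$REPO" checkout -q develop

# A plain merge here would fast-forward; --no-ff forces a merge commit
git -C "$REPO" merge -q --no-ff -m "Merge feature/report" feature/report
git -C "$REPO" log --oneline --graph
```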




&lt;h2&gt;
  
  
  11. Case Studies or Real-World Examples
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Large Enterprises&lt;/strong&gt;: Companies like &lt;strong&gt;Microsoft&lt;/strong&gt; and &lt;strong&gt;IBM&lt;/strong&gt; have used Gitflow-style workflows to manage complex releases in legacy systems.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Open-Source Projects&lt;/strong&gt;: Gitflow is less common in open-source due to its complexity, but release-branch models resembling it appear in projects requiring &lt;strong&gt;long-term maintenance&lt;/strong&gt; of multiple versions (e.g., &lt;strong&gt;Kubernetes&lt;/strong&gt; maintains versioned release branches).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hybrid Workflows&lt;/strong&gt;: Teams like &lt;strong&gt;GitLab&lt;/strong&gt; use &lt;strong&gt;GitLab Flow&lt;/strong&gt; to combine Gitflow’s structure with GitHub Flow’s simplicity.
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  12. Conclusion and Final Thoughts on Gitflow’s Relevance
&lt;/h2&gt;

&lt;p&gt;Gitflow remains a &lt;strong&gt;robust solution&lt;/strong&gt; for &lt;strong&gt;structured release management&lt;/strong&gt; in large, complex projects. Its strengths in &lt;strong&gt;version control&lt;/strong&gt;, &lt;strong&gt;stability&lt;/strong&gt;, and &lt;strong&gt;collaboration&lt;/strong&gt; make it ideal for teams with &lt;strong&gt;scheduled release cycles&lt;/strong&gt; and &lt;strong&gt;regulatory compliance&lt;/strong&gt; requirements. However, its &lt;strong&gt;complexity&lt;/strong&gt; and &lt;strong&gt;overhead&lt;/strong&gt; make it less suitable for &lt;strong&gt;small teams&lt;/strong&gt;, &lt;strong&gt;agile environments&lt;/strong&gt;, or &lt;strong&gt;CI/CD pipelines&lt;/strong&gt;.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Alternatives&lt;/strong&gt; like GitHub Flow (for simplicity) and Trunk-Based Development (for CI/CD) offer &lt;strong&gt;trade-offs&lt;/strong&gt; in flexibility and scalability. The choice of workflow depends on &lt;strong&gt;team size&lt;/strong&gt;, &lt;strong&gt;project complexity&lt;/strong&gt;, and &lt;strong&gt;release frequency&lt;/strong&gt;. As DevOps practices evolve, Gitflow’s role may shift toward &lt;strong&gt;hybrid models&lt;/strong&gt; that combine its structure with modern automation tools.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Final Recommendation&lt;/strong&gt;:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Use Gitflow&lt;/strong&gt; for large-scale, versioned projects.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Adopt GitHub Flow or Trunk-Based Development&lt;/strong&gt; for smaller teams or CI/CD environments.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Customize workflows&lt;/strong&gt; based on team needs and project scope.
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Useful links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.glukhov.org/developer-tools/git-and-forges/gitea-test1/" rel="noopener noreferrer"&gt;Gitea&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.glukhov.org/ai-devtools/vibe-coding/" rel="noopener noreferrer"&gt;What is Vibe Coding?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.glukhov.org/developer-tools/comparisons/programming-languages-frameworks-popularity/" rel="noopener noreferrer"&gt;Programming languages and frameworks popularity&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.glukhov.org/post/2025/05/python-venv-cheatsheet/" rel="noopener noreferrer"&gt;venv Cheatsheet&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.glukhov.org/documentation-tools/pdf/generating-pdf-in-python/" rel="noopener noreferrer"&gt;Generating PDF in Python - Libraries and examples&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.glukhov.org/data-infrastructure/object-storage/minio-vs-aws-s3/" rel="noopener noreferrer"&gt;Minio as Aws S3 alternative. Minio overview and install&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.glukhov.org/developer-tools/editors-ides/vscode-cheatsheet/" rel="noopener noreferrer"&gt;VSCode Cheatsheet&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>dev</category>
      <category>devops</category>
      <category>git</category>
      <category>gitea</category>
    </item>
  </channel>
</rss>
