<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Serenities AI</title>
    <description>The latest articles on Forem by Serenities AI (@serenitiesai).</description>
    <link>https://forem.com/serenitiesai</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3744858%2Fe4179974-f876-4ccd-be6b-5856b9843b7e.png</url>
      <title>Forem: Serenities AI</title>
      <link>https://forem.com/serenitiesai</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/serenitiesai"/>
    <language>en</language>
    <item>
      <title>Developer's Guide to Qwen 3.6 Plus: How to Get Started</title>
      <dc:creator>Serenities AI</dc:creator>
      <pubDate>Sat, 04 Apr 2026 10:22:38 +0000</pubDate>
      <link>https://forem.com/serenitiesai/developers-guide-to-qwen-36-plus-how-to-get-started-3ni0</link>
      <guid>https://forem.com/serenitiesai/developers-guide-to-qwen-36-plus-how-to-get-started-3ni0</guid>
      <description>&lt;h2&gt;
  
  
  Best Use Cases
&lt;/h2&gt;

&lt;p&gt;Based on benchmark performance and developer reports, Qwen 3.6 Plus excels in:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Repository-Level Code Analysis
&lt;/h3&gt;

&lt;p&gt;With 1M tokens of context, most mid-sized codebases fit within a single prompt. Load your entire repository and ask the model to analyze architecture, find bugs, or suggest improvements.&lt;/p&gt;
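
&lt;p&gt;As a rough illustration of the packing step (the file extensions, character budget, and chars-per-token ratio below are assumptions for this sketch, not part of any Qwen API), a repository can be flattened into a single prompt string like this:&lt;/p&gt;

```python
import os

def pack_repository(root, extensions=(".py", ".js", ".ts"), budget_chars=3_000_000):
    """Concatenate source files into one prompt-ready string.

    budget_chars approximates a 1M-token window at roughly 3-4 chars/token;
    packing stops once the budget would be exceeded.
    """
    parts = []
    used = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in sorted(files):
            if not name.endswith(extensions):
                continue
            path = os.path.join(dirpath, name)
            text = open(path, encoding="utf-8", errors="ignore").read()
            block = f"### FILE: {path}\n{text}\n"
            if used + len(block) > budget_chars:
                return "".join(parts)
            parts.append(block)
            used += len(block)
    return "".join(parts)
```

&lt;p&gt;The resulting string is then sent as the user message to whichever endpoint you use.&lt;/p&gt;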

&lt;h3&gt;
  
  
  2. Document Parsing and Analysis
&lt;/h3&gt;

&lt;p&gt;Qwen 3.6 Plus leads all models on OmniDocBench v1.5 (91.2), making it the strongest choice for processing complex PDFs with tables and mixed layouts.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Visual Coding (UI to Code)
&lt;/h3&gt;

&lt;p&gt;The model can interpret UI screenshots, wireframes, or prototypes and generate functional frontend code.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Multi-Step Agent Pipelines
&lt;/h3&gt;

&lt;p&gt;Strong tool-calling (MCPMark 48.2%) and terminal task completion (Terminal-Bench 61.6%) make it well-suited for autonomous agent workflows.&lt;/p&gt;
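
&lt;p&gt;The core of such a pipeline is a dispatch loop that maps model-emitted tool calls onto real functions. A minimal sketch (the tool names, registry, and JSON shape here are invented for illustration; MCP defines its own wire format):&lt;/p&gt;

```python
import json

# Hypothetical tool registry; a real agent would wire these to MCP servers or APIs.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
    "run_shell": lambda cmd: f"ran: {cmd}",
}

def execute_tool_call(call_json):
    """Dispatch one model-emitted call of the form
    {"tool": name, "arguments": {...}} and return its result string."""
    call = json.loads(call_json)
    fn = TOOLS[call["tool"]]
    return fn(**call["arguments"])
```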

&lt;h3&gt;
  
  
  5. High-Throughput Applications
&lt;/h3&gt;

&lt;p&gt;At ~158 tokens/second — roughly 1.7x faster than Claude Opus 4.6 and 2x faster than GPT-5.4 — the speed advantage is meaningful for developer tools and batch processing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Practical Limitations You Must Know
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Data Collection on Free Tier
&lt;/h3&gt;

&lt;p&gt;The free tier collects your prompts and completions for model training. Do not send confidential data through the free endpoint.&lt;/p&gt;
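
&lt;p&gt;A cheap mitigation is to scrub obvious secrets client-side before any prompt leaves your machine. A minimal sketch (the patterns below are rough examples, not a complete secret scanner):&lt;/p&gt;

```python
import re

# Rough patterns for common secret shapes; extend with your own key formats.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),      # OpenAI-style API keys
    re.compile(r"AKIA[0-9A-Z]{16}"),         # AWS access key IDs
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),  # email addresses
]

def redact(prompt):
    """Replace likely secrets with a placeholder before sending to a free endpoint."""
    for pattern in SECRET_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt
```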

&lt;h3&gt;
  
  
  2. Fabrication Rate
&lt;/h3&gt;

&lt;p&gt;Independent testing identified a 26.5% fabrication rate. Always verify the model's claims about APIs, library behavior, or language features.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Security Coding
&lt;/h3&gt;

&lt;p&gt;A 43.3% success rate on security coding tests falls below comparable Claude and GPT benchmarks. Apply extra review to security-sensitive code.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Time-to-First-Token (TTFT)
&lt;/h3&gt;

&lt;p&gt;The free tier averages 11.5 seconds for the first token, which impacts interactive use.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. No Production SLA
&lt;/h3&gt;

&lt;p&gt;This is a preview model. Do not build production systems that depend on the free endpoint's availability.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Bottom Line
&lt;/h2&gt;

&lt;p&gt;Qwen 3.6 Plus is the most capable free AI model available right now. For developers who want to test a frontier-class model with 1M context, strong tool-calling, and competitive coding benchmarks — without spending a dollar — there's no reason not to try it.&lt;/p&gt;

&lt;p&gt;The caveats are real: fabrication rate, security gaps, lack of production SLA, and data collection terms. But for evaluation, prototyping, and cost-sensitive production workloads that can tolerate these tradeoffs, Qwen 3.6 Plus deserves serious consideration.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://serenitiesai.com/articles/qwen-3-6-plus-developer-guide" rel="noopener noreferrer"&gt;serenitiesai.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>programming</category>
    </item>
    <item>
      <title>Bring Your Own AI: How to Cut App Building Costs by 80% in 2026</title>
      <dc:creator>Serenities AI</dc:creator>
      <pubDate>Thu, 26 Mar 2026 13:03:38 +0000</pubDate>
      <link>https://forem.com/serenitiesai/bring-your-own-ai-how-to-cut-app-building-costs-by-80-in-2026-7d4</link>
      <guid>https://forem.com/serenitiesai/bring-your-own-ai-how-to-cut-app-building-costs-by-80-in-2026-7d4</guid>
      <description>&lt;p&gt;Here's a pricing math problem every startup founder should solve:&lt;/p&gt;

&lt;p&gt;If you're paying $20/month for an AI tool that charges you again through markup on every API call, and that tool uses OpenAI's API which costs $0.01/1K tokens... how much of your money is actually going to AI?&lt;/p&gt;

&lt;p&gt;The answer? Sometimes as little as 20%. The rest is margin for the middleman.&lt;/p&gt;

&lt;p&gt;This is why the smartest builders in 2026 are switching to BYOK AI tools—Bring Your Own Key platforms that let you plug in your own API keys, eliminate markups, and cut AI costs by up to 80%.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Are BYOK AI Tools?
&lt;/h2&gt;

&lt;p&gt;BYOK (Bring Your Own Key) AI tools are applications and platforms that let you use your own API keys from AI providers like OpenAI, Anthropic, Google, or Mistral. Instead of the tool provider billing you at a markup for AI usage, you connect directly to the source.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Traditional Model (Expensive)
&lt;/h3&gt;

&lt;p&gt;You → AI SaaS Tool (adds 300-500% markup) → OpenAI&lt;/p&gt;

&lt;h3&gt;
  
  
  The BYOK Model (Smart)
&lt;/h3&gt;

&lt;p&gt;You → BYOK Tool (minimal flat fee) → OpenAI (pay actual API cost)&lt;/p&gt;

&lt;h3&gt;
  
  
  Real Cost Comparison
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;AI Usage&lt;/th&gt;
&lt;th&gt;Traditional Tool&lt;/th&gt;
&lt;th&gt;BYOK Tool&lt;/th&gt;
&lt;th&gt;Savings&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;100K tokens/month&lt;/td&gt;
&lt;td&gt;$25-40/mo&lt;/td&gt;
&lt;td&gt;~$3-5/mo&lt;/td&gt;
&lt;td&gt;85%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1M tokens/month&lt;/td&gt;
&lt;td&gt;$200-400/mo&lt;/td&gt;
&lt;td&gt;~$30-50/mo&lt;/td&gt;
&lt;td&gt;83%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;10M tokens/month&lt;/td&gt;
&lt;td&gt;$1,500-3,000/mo&lt;/td&gt;
&lt;td&gt;~$200-300/mo&lt;/td&gt;
&lt;td&gt;87%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;em&gt;Based on GPT-4o pricing of ~$2.50/1M input tokens + typical SaaS markup.&lt;/em&gt;&lt;/p&gt;
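
&lt;p&gt;You can reproduce these savings figures yourself. A small calculator, assuming the ~$2.50/1M input price above and ignoring output tokens, caching, and tiered pricing for simplicity:&lt;/p&gt;

```python
def byok_savings(tokens_per_month, saas_fee, price_per_million=2.50, platform_fee=0.0):
    """Compare a flat SaaS bill against direct API cost plus any BYOK platform fee."""
    api_cost = tokens_per_month / 1_000_000 * price_per_million + platform_fee
    saved = saas_fee - api_cost
    return {
        "api_cost": round(api_cost, 2),
        "saved": round(saved, 2),
        "saved_pct": round(100 * saved / saas_fee, 1),
    }
```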

&lt;h2&gt;
  
  
  Why BYOK AI Tools Are Gaining Massive Traction
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Cost Control
&lt;/h3&gt;

&lt;p&gt;The primary driver is obvious: you pay API rates, not SaaS rates. When you use a traditional AI tool, you're paying:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The actual AI cost&lt;/li&gt;
&lt;li&gt;Platform margin (often 300-500%)&lt;/li&gt;
&lt;li&gt;Convenience premium&lt;/li&gt;
&lt;li&gt;Infrastructure markup&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With BYOK, you pay:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Actual AI cost&lt;/li&gt;
&lt;li&gt;Small platform fee (if any)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. Model Flexibility
&lt;/h3&gt;

&lt;p&gt;BYOK tools don't lock you into a single AI provider. Today you might use GPT-4. Tomorrow, Anthropic's Claude Opus might be better for your use case. Next week, a new open-source model could outperform both.&lt;/p&gt;

&lt;p&gt;With your own keys, you switch models without switching platforms.&lt;/p&gt;
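
&lt;p&gt;In practice, switching providers becomes a one-line config change. A sketch (the env-var names and base URLs are illustrative; confirm them against each provider's docs before relying on them):&lt;/p&gt;

```python
import os

# Illustrative provider table, not an exhaustive or authoritative list.
PROVIDERS = {
    "openai":    {"base_url": "https://api.openai.com/v1", "key_env": "OPENAI_API_KEY"},
    "anthropic": {"base_url": "https://api.anthropic.com", "key_env": "ANTHROPIC_API_KEY"},
}

def client_config(provider):
    """Resolve a provider name to the endpoint and the key your own account supplies."""
    entry = PROVIDERS[provider]
    return {
        "base_url": entry["base_url"],
        "api_key": os.environ.get(entry["key_env"], ""),
    }
```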

&lt;h3&gt;
  
  
  3. Data Privacy &amp;amp; Security
&lt;/h3&gt;

&lt;p&gt;When you use a traditional AI SaaS tool, your data flows through their servers before reaching the AI provider. That's two parties seeing your sensitive information.&lt;/p&gt;

&lt;p&gt;With BYOK:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your data goes directly to the AI provider&lt;/li&gt;
&lt;li&gt;The tool sees structure, not content&lt;/li&gt;
&lt;li&gt;You control data retention policies&lt;/li&gt;
&lt;li&gt;Enterprise compliance becomes simpler&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Best BYOK AI Tools by Category
&lt;/h2&gt;

&lt;h3&gt;
  
  
  All-Purpose AI Assistants
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;Platforms&lt;/th&gt;
&lt;th&gt;Supported Providers&lt;/th&gt;
&lt;th&gt;Key Feature&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;TypingMind&lt;/td&gt;
&lt;td&gt;Web&lt;/td&gt;
&lt;td&gt;OpenAI, Anthropic, Google, Mistral +&lt;/td&gt;
&lt;td&gt;Best chat management &amp;amp; prompt libraries&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;MindMac&lt;/td&gt;
&lt;td&gt;macOS&lt;/td&gt;
&lt;td&gt;OpenAI, Azure, Google +&lt;/td&gt;
&lt;td&gt;Native Mac performance&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Msty&lt;/td&gt;
&lt;td&gt;All&lt;/td&gt;
&lt;td&gt;HuggingFace, Ollama, OpenRouter +&lt;/td&gt;
&lt;td&gt;Works offline with local models&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Witsy&lt;/td&gt;
&lt;td&gt;All&lt;/td&gt;
&lt;td&gt;15+ providers including local&lt;/td&gt;
&lt;td&gt;Open-source MCP client&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Developer &amp;amp; Coding
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;Best For&lt;/th&gt;
&lt;th&gt;Supported Providers&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;CodeGPT&lt;/td&gt;
&lt;td&gt;IDE integration&lt;/td&gt;
&lt;td&gt;OpenAI, Anthropic, Gemini +&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Refact.ai&lt;/td&gt;
&lt;td&gt;Self-hosted code assist&lt;/td&gt;
&lt;td&gt;OpenAI, HuggingFace&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;16x Prompt&lt;/td&gt;
&lt;td&gt;Prompt engineering&lt;/td&gt;
&lt;td&gt;Claude, DeepSeek&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  How to Calculate Your BYOK Savings
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1: Audit Current AI Spending
&lt;/h3&gt;

&lt;p&gt;Track all AI-related costs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Direct subscriptions (ChatGPT Plus, Claude Pro, etc.)&lt;/li&gt;
&lt;li&gt;SaaS tools with AI features&lt;/li&gt;
&lt;li&gt;API usage (if already using BYOK partially)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 2: Estimate Token Usage
&lt;/h3&gt;

&lt;p&gt;Most tools don't show token counts. Use these estimates:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Task&lt;/th&gt;
&lt;th&gt;Approximate Tokens&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Short response (~100 words)&lt;/td&gt;
&lt;td&gt;400-600 tokens&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Blog post (~1,000 words)&lt;/td&gt;
&lt;td&gt;4,000-6,000 tokens&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Code review (~500 lines)&lt;/td&gt;
&lt;td&gt;8,000-15,000 tokens&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Document analysis (10 pages)&lt;/td&gt;
&lt;td&gt;15,000-25,000 tokens&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
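
&lt;p&gt;If you'd rather compute than look up, a crude estimator (the ~1.3 tokens-per-word ratio and the round-trip multiplier are assumptions chosen to roughly reproduce the table above):&lt;/p&gt;

```python
def estimate_tokens(words, tokens_per_word=1.33, round_trips=3):
    """Rough token budget: ~1.3 tokens per English word, times the number of
    prompt/response round trips a typical task takes."""
    return int(words * tokens_per_word * round_trips)
```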

&lt;h3&gt;
  
  
  Step 3: Compare API Costs
&lt;/h3&gt;

&lt;p&gt;Current API pricing (as of 2026):&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;Input (per 1M tokens)&lt;/th&gt;
&lt;th&gt;Output (per 1M tokens)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;GPT-4o&lt;/td&gt;
&lt;td&gt;$2.50&lt;/td&gt;
&lt;td&gt;$10.00&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GPT-4o-mini&lt;/td&gt;
&lt;td&gt;$0.15&lt;/td&gt;
&lt;td&gt;$0.60&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Claude 3.5 Sonnet&lt;/td&gt;
&lt;td&gt;$3.00&lt;/td&gt;
&lt;td&gt;$15.00&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Claude 3.5 Haiku&lt;/td&gt;
&lt;td&gt;$0.25&lt;/td&gt;
&lt;td&gt;$1.25&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Gemini 1.5 Flash&lt;/td&gt;
&lt;td&gt;$0.075&lt;/td&gt;
&lt;td&gt;$0.30&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
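
&lt;p&gt;Blending input and output prices into a single monthly figure is simple arithmetic. A sketch using the table's list prices (verify current pricing before budgeting):&lt;/p&gt;

```python
# Prices per 1M tokens (input, output) from the table above.
PRICING = {
    "gpt-4o":            (2.50, 10.00),
    "gpt-4o-mini":       (0.15, 0.60),
    "claude-3.5-sonnet": (3.00, 15.00),
    "gemini-1.5-flash":  (0.075, 0.30),
}

def monthly_cost(model, input_tokens, output_tokens):
    """Blend input and output pricing into one monthly dollar figure."""
    inp, outp = PRICING[model]
    return round((input_tokens * inp + output_tokens * outp) / 1_000_000, 2)
```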

&lt;h2&gt;
  
  
  The 10-25x Advantage: AI Subscriptions vs. API Pricing
&lt;/h2&gt;

&lt;p&gt;Here's a secret most AI tools don't want you to know: the gap between consumer AI subscriptions and API pricing is enormous.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Service&lt;/th&gt;
&lt;th&gt;Subscription&lt;/th&gt;
&lt;th&gt;Equivalent API Cost&lt;/th&gt;
&lt;th&gt;Multiplier&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;ChatGPT Plus&lt;/td&gt;
&lt;td&gt;$20/mo&lt;/td&gt;
&lt;td&gt;~$0.80-2.00 typical usage&lt;/td&gt;
&lt;td&gt;10-25x&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Claude Pro&lt;/td&gt;
&lt;td&gt;$20/mo&lt;/td&gt;
&lt;td&gt;~$1.50-3.00 typical usage&lt;/td&gt;
&lt;td&gt;7-13x&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Gemini Advanced&lt;/td&gt;
&lt;td&gt;$20/mo&lt;/td&gt;
&lt;td&gt;~$0.50-1.00 typical usage&lt;/td&gt;
&lt;td&gt;20-40x&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Subscriptions make sense if:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You need specific features (like Advanced Data Analysis)&lt;/li&gt;
&lt;li&gt;You value simplicity over savings&lt;/li&gt;
&lt;li&gt;You're a light user who doesn't maximize the subscription&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;API/BYOK makes sense if:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You're cost-conscious&lt;/li&gt;
&lt;li&gt;You use AI heavily&lt;/li&gt;
&lt;li&gt;You need model flexibility&lt;/li&gt;
&lt;li&gt;You're building products&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Getting Started with BYOK: Action Plan
&lt;/h2&gt;

&lt;h3&gt;
  
  
  This Week
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Audit your current AI tool spending&lt;/li&gt;
&lt;li&gt;Create accounts with major AI providers (OpenAI, Anthropic)&lt;/li&gt;
&lt;li&gt;Set spending limits on each account (most providers support this)&lt;/li&gt;
&lt;li&gt;Choose one BYOK tool to test&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  This Month
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Migrate one workflow to BYOK&lt;/li&gt;
&lt;li&gt;Track actual costs vs. previous spending&lt;/li&gt;
&lt;li&gt;Expand successful migrations&lt;/li&gt;
&lt;li&gt;Document prompts and workflows (they're portable now)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  This Quarter
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Evaluate total savings&lt;/li&gt;
&lt;li&gt;Standardize on optimal tool stack&lt;/li&gt;
&lt;li&gt;Train team on BYOK best practices&lt;/li&gt;
&lt;li&gt;Build custom workflows using API access&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion: The BYOK Future
&lt;/h2&gt;

&lt;p&gt;The AI tool landscape in 2026 is bifurcating:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Premium SaaS&lt;/strong&gt;: Convenience, support, and hand-holding at 5-10x markup&lt;br&gt;
&lt;strong&gt;BYOK Tools&lt;/strong&gt;: Control, flexibility, and cost efficiency for savvy users&lt;/p&gt;

&lt;p&gt;Neither is universally better. But if you're building AI-powered products, automating at scale, or simply want control over your AI costs, BYOK isn't optional—it's essential.&lt;/p&gt;

&lt;p&gt;The question isn't whether you should explore BYOK AI tools. It's how quickly you can implement them before your competitors do.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Ready to cut your AI costs by 80%? Start with &lt;a href="https://serenitiesai.com" rel="noopener noreferrer"&gt;Serenities AI's&lt;/a&gt; BYOK-native platform and keep more money in your runway.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>startup</category>
    </item>
    <item>
      <title>The Complete Guide to Vibe Coding in 2026: Everything You Need to Know</title>
      <dc:creator>Serenities AI</dc:creator>
      <pubDate>Tue, 24 Mar 2026 12:42:23 +0000</pubDate>
      <link>https://forem.com/serenitiesai/the-complete-guide-to-vibe-coding-in-2026-everything-you-need-to-know-3k6k</link>
      <guid>https://forem.com/serenitiesai/the-complete-guide-to-vibe-coding-in-2026-everything-you-need-to-know-3k6k</guid>
      <description>&lt;p&gt;Vibe coding went from a throwaway tweet to a movement that's reshaping how software gets built. In 2026, everyone from solo founders to Fortune 500 engineering teams is using AI to write code — and the results are no longer just weekend projects. They're production applications generating real revenue.&lt;/p&gt;

&lt;p&gt;This is the definitive guide to vibe coding in 2026. Whether you're a non-technical founder who wants to build your first app, a developer looking to 10x your output, or a team lead evaluating AI coding tools for your organization — this guide covers everything: what vibe coding actually is, the best tools available today, how to do it safely, what it really costs, and where it's heading next.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Vibe Coding? The Origin Story
&lt;/h2&gt;

&lt;p&gt;The term "vibe coding" was coined by Andrej Karpathy — co-founder of OpenAI and former AI leader at Tesla — in February 2025. In a now-famous post, he described a new way of programming where you "fully give in to the vibes, embrace exponentials, and forget that the code even exists."&lt;/p&gt;

&lt;p&gt;The idea was simple but radical: instead of writing code line by line, you describe what you want in plain English, and an AI generates the code for you. You test it, give feedback, and iterate — never needing to read or understand the actual code that's produced.&lt;/p&gt;

&lt;p&gt;[Continue reading the complete guide...]&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This is a cross-post. Read the full article with all examples, comparisons, and updates at: &lt;a href="https://serenitiesai.com/articles/complete-guide-vibe-coding-2026" rel="noopener noreferrer"&gt;https://serenitiesai.com/articles/complete-guide-vibe-coding-2026&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>programming</category>
    </item>
    <item>
      <title>Top 6 Coding Plans for Vibe Coding in 2026: Every Price Verified</title>
      <dc:creator>Serenities AI</dc:creator>
      <pubDate>Mon, 16 Mar 2026 13:05:40 +0000</pubDate>
      <link>https://forem.com/serenitiesai/top-6-coding-plans-for-vibe-coding-in-2026-every-price-verified-57p9</link>
      <guid>https://forem.com/serenitiesai/top-6-coding-plans-for-vibe-coding-in-2026-every-price-verified-57p9</guid>
      <description>&lt;h2&gt;
  
  
  The Real Cost of Vibe Coding in 2026
&lt;/h2&gt;

&lt;p&gt;Vibe coding burns through tokens fast. One debugging session with extended thinking can eat half your daily quota before lunch. And if you're still paying per API call, a single afternoon of iterating on a complex feature can cost more than a monthly subscription.&lt;/p&gt;

&lt;p&gt;That's why subscription-based coding plans have exploded in 2026. Instead of watching your API bill climb with every prompt, you pay a flat monthly fee and code until you hit a rolling limit — then wait a few hours and start again.&lt;/p&gt;
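
&lt;p&gt;The rolling-limit mechanics are easy to model. A sketch of a 5-hour rolling window (the limit and window are examples; each provider sets its own):&lt;/p&gt;

```python
import time
from collections import deque

class RollingQuota:
    """Track prompts against a rolling window, e.g. 300 prompts per 5 hours."""

    def __init__(self, limit, window_seconds=5 * 3600):
        self.limit = limit
        self.window = window_seconds
        self.stamps = deque()

    def try_consume(self, now=None):
        """Return True and record the prompt if the window has capacity."""
        now = time.time() if now is None else now
        # Drop prompts that have aged out of the window.
        while self.stamps and now - self.stamps[0] >= self.window:
            self.stamps.popleft()
        if len(self.stamps) >= self.limit:
            return False
        self.stamps.append(now)
        return True
```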

&lt;p&gt;We tested and compared every major coding plan available in March 2026. Here's what you actually get for your money.&lt;/p&gt;

&lt;h2&gt;
  
  
  Quick Comparison: All 6 Coding Plans
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Provider&lt;/th&gt;
&lt;th&gt;Monthly Price&lt;/th&gt;
&lt;th&gt;Usage Limit&lt;/th&gt;
&lt;th&gt;Reset Style&lt;/th&gt;
&lt;th&gt;Best For&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Claude Code&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;$17–$200&lt;/td&gt;
&lt;td&gt;~10–800 prompts/5hr&lt;/td&gt;
&lt;td&gt;Rolling 5hr + weekly caps&lt;/td&gt;
&lt;td&gt;Long iterative sessions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;ChatGPT Codex&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;$8–$229&lt;/td&gt;
&lt;td&gt;Varies by plan&lt;/td&gt;
&lt;td&gt;Rolling windows&lt;/td&gt;
&lt;td&gt;General coding + GPT-5 ecosystem&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Google AI&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;$7.99–$249.99&lt;/td&gt;
&lt;td&gt;Daily quotas&lt;/td&gt;
&lt;td&gt;Daily reset&lt;/td&gt;
&lt;td&gt;Steady daily coding&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;GLM (Z.ai)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;$9–$72&lt;/td&gt;
&lt;td&gt;~80–1,600 prompts/5hr&lt;/td&gt;
&lt;td&gt;Rolling 5hr + weekly caps&lt;/td&gt;
&lt;td&gt;Budget vibe coding&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;MiniMax&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;$10–$150&lt;/td&gt;
&lt;td&gt;100–2,000 prompts/5hr&lt;/td&gt;
&lt;td&gt;Rolling 5hr&lt;/td&gt;
&lt;td&gt;Sprint-based coding&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Cerebras Code&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;$50–$200&lt;/td&gt;
&lt;td&gt;24M–120M tokens/day&lt;/td&gt;
&lt;td&gt;Daily reset&lt;/td&gt;
&lt;td&gt;High-speed continuous coding&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;p&gt;&lt;em&gt;This article was originally published at &lt;a href="https://serenitiesai.com/articles/top-coding-plans-vibe-coding-2026" rel="noopener noreferrer"&gt;serenitiesai.com&lt;/a&gt; with full detailed analysis of all 6 plans, pricing tables, and selection framework.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>claude</category>
    </item>
    <item>
      <title>Is Vibe Coding Replacing Developers in 2026? What the Community Actually Says</title>
      <dc:creator>Serenities AI</dc:creator>
      <pubDate>Mon, 16 Mar 2026 00:09:13 +0000</pubDate>
      <link>https://forem.com/serenitiesai/is-vibe-coding-replacing-developers-in-2026-what-the-community-actually-says-274m</link>
      <guid>https://forem.com/serenitiesai/is-vibe-coding-replacing-developers-in-2026-what-the-community-actually-says-274m</guid>
      <description>&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://serenitiesai.com/articles/is-vibe-coding-replacing-developers-2026" rel="noopener noreferrer"&gt;serenitiesai.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;Vibe coding is everywhere in 2026. From Samsung's unexpected pivot to Replit's explosive growth, AI-powered app building has moved from Silicon Valley experiments to mainstream enterprise adoption.&lt;/p&gt;

&lt;p&gt;But the question everyone's asking isn't &lt;em&gt;if&lt;/em&gt; vibe coding will change software development — it's &lt;em&gt;how much&lt;/em&gt; and &lt;em&gt;how fast&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;After analyzing developer surveys, enterprise adoption rates, and community discussions across Reddit, Twitter, and Hacker News, here's what the data actually shows about vibe coding's impact on the developer job market in 2026.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Numbers That Matter
&lt;/h2&gt;

&lt;p&gt;The stats on vibe coding adoption tell a clear story:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Enterprise adoption up 340%&lt;/strong&gt; in Q4 2025 (McKinsey Digital)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Samsung pivoted its entire app strategy&lt;/strong&gt; to vibe coding in late 2025&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Replit's revenue grew 10x&lt;/strong&gt; after launching Agent mode&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;43% of Fortune 500 companies&lt;/strong&gt; now use vibe coding for internal tools&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But what does this mean for developers?&lt;/p&gt;

&lt;h2&gt;
  
  
  What Developers Are Actually Saying
&lt;/h2&gt;

&lt;p&gt;The developer community is split, but not in the way you'd expect. Here are the real perspectives from those working with these tools daily:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Optimists (45%)&lt;/strong&gt;&lt;br&gt;
"Vibe coding handles the boring stuff so I can focus on architecture and complex problems." - Senior Dev at fintech startup&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Pragmatists (40%)&lt;/strong&gt;&lt;br&gt;
"It's just another tool. Good for rapid prototyping, but you still need real developers for production systems." - Lead Engineer at SaaS company&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Skeptics (15%)&lt;/strong&gt;&lt;br&gt;
"AI can't understand business requirements or handle edge cases. This is overhyped." - Full-stack developer&lt;/p&gt;

&lt;p&gt;&lt;a href="https://serenitiesai.com/articles/is-vibe-coding-replacing-developers-2026" rel="noopener noreferrer"&gt;&lt;strong&gt;Continue reading the full analysis at serenitiesai.com →&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This article dives deep into:&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enterprise adoption patterns and real ROI data&lt;/li&gt;
&lt;li&gt;Which types of developer roles are most affected&lt;/li&gt;
&lt;li&gt;Community reactions from Reddit, HN, and Twitter&lt;/li&gt;
&lt;li&gt;Predictions from industry leaders&lt;/li&gt;
&lt;li&gt;How platforms like Serenities AI are democratizing app development&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Read the complete analysis: &lt;a href="https://serenitiesai.com/articles/is-vibe-coding-replacing-developers-2026" rel="noopener noreferrer"&gt;Is Vibe Coding Replacing Developers in 2026? What the Community Actually Says&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>coding</category>
    </item>
    <item>
      <title>AI Agent Memory: Why 2026 is the Year of Persistent Context</title>
      <dc:creator>Serenities AI</dc:creator>
      <pubDate>Sat, 14 Mar 2026 13:04:04 +0000</pubDate>
      <link>https://forem.com/serenitiesai/ai-agent-memory-why-2026-is-the-year-of-persistent-context-e73</link>
      <guid>https://forem.com/serenitiesai/ai-agent-memory-why-2026-is-the-year-of-persistent-context-e73</guid>
      <description>&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://serenitiesai.com/articles/ai-agent-memory-why-2026-is-the-year-of-persistent-context" rel="noopener noreferrer"&gt;serenitiesai.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Every AI agent has the same problem: amnesia.&lt;/p&gt;

&lt;p&gt;You've experienced it. You spent an hour explaining your project requirements to an AI assistant, crafted the perfect workflow, then returned the next day to find... nothing. A blank slate. All that context, gone.&lt;/p&gt;

&lt;p&gt;This isn't a minor inconvenience. It's the fundamental bottleneck holding AI agents back from becoming truly useful.&lt;/p&gt;

&lt;p&gt;But 2026 is changing everything.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why AI Agent Memory Matters More Than Ever
&lt;/h2&gt;

&lt;p&gt;When large language models first entered the enterprise, the promise seemed simple: just fill the context window with everything the agent might need. More tokens, better results, right?&lt;/p&gt;

&lt;p&gt;That illusion collapsed under real workloads.&lt;/p&gt;

&lt;p&gt;Performance degraded. Retrieval became expensive. Costs compounded. Researchers started calling it "context rot"—where simply enlarging context windows actually made responses less accurate, not more.&lt;/p&gt;

&lt;p&gt;The problem runs deeper than token limits. Traditional LLMs are fundamentally stateless. Every interaction starts fresh. There's no memory of past decisions, no understanding of evolving preferences, no accumulated wisdom from previous sessions.&lt;/p&gt;

&lt;p&gt;For short conversations, this works fine. For workflows that span days, weeks, or entire projects? It's crippling.&lt;/p&gt;

&lt;p&gt;Consider what you're missing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A sales copilot that remembers previous customer conversations could cut research time in half&lt;/li&gt;
&lt;li&gt;A customer service agent with durable recall could dramatically reduce churn
&lt;/li&gt;
&lt;li&gt;A coding assistant that tracks your architectural decisions could eliminate repetitive explanations&lt;/li&gt;
&lt;li&gt;An enterprise knowledge system that learns from every interaction could preserve institutional wisdom&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The stakes are enormous. And in 2026, the technology finally exists to solve this problem.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Memory Revolution: Three Approaches Battling for Dominance
&lt;/h2&gt;

&lt;p&gt;Human memory evolved as a layered system precisely because holding everything in working memory is impossible. We compress, abstract, and forget to function. AI systems need the same architectural sophistication.&lt;/p&gt;

&lt;p&gt;Today, three distinct philosophies dominate the AI agent memory landscape:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Vector Store Approach: Memory as Retrieval
&lt;/h3&gt;

&lt;p&gt;Systems like Pinecone and Weaviate store past interactions as embeddings in a vector database. When queried, the agent retrieves the most relevant fragments by similarity matching.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Strengths:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fast and conceptually simple&lt;/li&gt;
&lt;li&gt;Scales to massive datasets&lt;/li&gt;
&lt;li&gt;Well-established infrastructure&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Weaknesses:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prone to surface-level recall&lt;/li&gt;
&lt;li&gt;Loses relationships between facts&lt;/li&gt;
&lt;li&gt;Can't track how information changes over time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This approach finds similar text but treats each memory independently. Your agent might know you like coffee, but it won't understand that you prefer coffee from a specific shop, ordered last Tuesday, while discussing your morning routine.&lt;/p&gt;
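
&lt;p&gt;The retrieval mechanics can be sketched in a few lines. Here a toy bag-of-words "embedding" stands in for the learned embedding model a real system would use:&lt;/p&gt;

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words vector; real systems use learned embedding models."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, memories, top_k=1):
    """Return the top_k stored memories most similar to the query."""
    q = embed(query)
    ranked = sorted(memories, key=lambda m: cosine(q, embed(m)), reverse=True)
    return ranked[:top_k]
```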

&lt;h3&gt;
  
  
  2. Summarization Approach: Memory as Compression
&lt;/h3&gt;

&lt;p&gt;Rather than storing everything, these systems periodically condense transcripts into rolling summaries. Think of it as creating CliffsNotes of your conversation history.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Strengths:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dramatically reduces token usage&lt;/li&gt;
&lt;li&gt;Preserves key insights&lt;/li&gt;
&lt;li&gt;Works well for linear narratives&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Weaknesses:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Loses granular details&lt;/li&gt;
&lt;li&gt;Summarization quality varies&lt;/li&gt;
&lt;li&gt;Can introduce compression artifacts&lt;/li&gt;
&lt;/ul&gt;
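
&lt;p&gt;A minimal version of the compression loop, with simple truncation standing in for the LLM summarizer a production system would call:&lt;/p&gt;

```python
class RollingSummaryMemory:
    """Keep the last few turns verbatim and fold older ones into a summary."""

    def __init__(self, keep_last=3):
        self.keep_last = keep_last
        self.summary = ""
        self.recent = []

    def add_turn(self, turn):
        self.recent.append(turn)
        while len(self.recent) > self.keep_last:
            evicted = self.recent.pop(0)
            # Truncation is a stand-in for an LLM-generated summary.
            self.summary = (self.summary + " " + evicted[:40]).strip()

    def context(self):
        """Context handed to the model: compressed past plus verbatim recent turns."""
        return {"summary": self.summary, "recent": list(self.recent)}
```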

&lt;h3&gt;
  
  
  3. Graph Approach: Memory as Knowledge
&lt;/h3&gt;

&lt;p&gt;The most ambitious systems organize memories as interconnected nodes and relationships—people, places, events, and time. The graph stores "who said what about whom and when."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Strengths:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Preserves rich relationships&lt;/li&gt;
&lt;li&gt;Enables multi-hop reasoning&lt;/li&gt;
&lt;li&gt;Tracks temporal evolution&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Weaknesses:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;More complex to implement&lt;/li&gt;
&lt;li&gt;Requires careful schema design&lt;/li&gt;
&lt;li&gt;Can become computationally expensive at scale&lt;/li&gt;
&lt;/ul&gt;
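
&lt;p&gt;A toy triple store shows the multi-hop reasoning the graph approach enables (the schema and example facts below are invented for illustration):&lt;/p&gt;

```python
from collections import defaultdict

class MemoryGraph:
    """Store (subject, relation, object) triples and answer multi-hop queries."""

    def __init__(self):
        self.edges = defaultdict(list)  # subject maps to [(relation, object), ...]

    def add(self, subject, relation, obj):
        self.edges[subject].append((relation, obj))

    def hop(self, subject, relation):
        """Objects reachable from subject via one relation."""
        return [obj for rel, obj in self.edges[subject] if rel == relation]

    def two_hop(self, subject, rel1, rel2):
        """e.g. 'what does the shop Alice likes sell?'"""
        return [o2 for mid in self.hop(subject, rel1) for o2 in self.hop(mid, rel2)]
```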

&lt;h2&gt;
  
  
  The Leading Memory Solutions of 2026
&lt;/h2&gt;

&lt;p&gt;The startup ecosystem has exploded with solutions tackling AI agent memory from different angles. Here are the leading platforms:&lt;/p&gt;

&lt;h3&gt;
  
  
  Mem0: Hybrid Memory with Enterprise Focus
&lt;/h3&gt;

&lt;p&gt;Mem0 combines vector-based semantic search with optional graph memory for entity relationships. The system maintains cross-session context through hierarchical memory at user, session, and agent levels.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key results:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;26% accuracy gain on standard memory benchmarks&lt;/li&gt;
&lt;li&gt;Significant token cost reduction&lt;/li&gt;
&lt;li&gt;Automatic memory extraction without manual orchestration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The platform supports both open-source self-hosting and managed cloud service with SOC 2 compliance, making it enterprise-ready.&lt;/p&gt;
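
&lt;p&gt;The user/session/agent hierarchy can be sketched in plain Python. The class and method names below are hypothetical, not Mem0's actual API:&lt;/p&gt;

```python
# Hypothetical sketch of hierarchical memory scopes: facts live at the
# agent, user, or session level and narrower scopes win at read time.

class ScopedMemory:
    LEVELS = ("agent", "user", "session")  # broadest to narrowest

    def __init__(self):
        self.store = {level: {} for level in self.LEVELS}

    def write(self, level, key, value):
        self.store[level][key] = value

    def read(self, key):
        # Later (narrower) scopes override earlier (broader) ones.
        value = None
        for level in self.LEVELS:
            if key in self.store[level]:
                value = self.store[level][key]
        return value

m = ScopedMemory()
m.write("user", "tone", "formal")      # persists across sessions
m.write("session", "tone", "casual")   # this session only
print(m.read("tone"))  # casual
```

&lt;p&gt;The point of the hierarchy is durability: session facts expire, user facts persist, and agent-level facts apply to everyone.&lt;/p&gt;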

&lt;h3&gt;
  
  
  Zep: Temporal Knowledge Graphs
&lt;/h3&gt;

&lt;p&gt;Zep's approach focuses on tracking how facts change over time. Instead of treating memories as static, it integrates structured business data with conversational history.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Performance highlights:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;18.5% improvement in long-horizon accuracy over baseline retrieval&lt;/li&gt;
&lt;li&gt;Nearly 90% latency reduction&lt;/li&gt;
&lt;li&gt;Multi-hop and temporal query support&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This makes Zep particularly powerful for enterprise scenarios requiring relationship modeling and temporal reasoning.&lt;/p&gt;

&lt;h3&gt;
  
  
  Claude-Mem: Persistent Memory for Coding Agents
&lt;/h3&gt;

&lt;p&gt;For developers using Claude Code, Claude-mem solves the session amnesia problem by automatically capturing tool usage observations, generating semantic summaries, and making relevant context available to future sessions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The approach:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Capture:&lt;/strong&gt; Records user prompts, tool usage, and observations during sessions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compress:&lt;/strong&gt; Creates compact, indexed memory units using AI&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Retrieve:&lt;/strong&gt; Intelligently injects relevant context when new sessions start&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This reduces token usage by up to 95% while maintaining project continuity across coding sessions.&lt;/p&gt;
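
&lt;p&gt;The capture, compress, retrieve loop can be sketched as follows; &lt;code&gt;compress&lt;/code&gt; here is a stub standing in for the AI summarization step:&lt;/p&gt;

```python
# Minimal sketch of the capture/compress/retrieve loop for session
# memory; compress() stands in for an AI-generated semantic summary.

class SessionMemory:
    def __init__(self):
        self.archive = []  # compact memory units from past sessions

    def compress(self, events):
        # Stub: a real system would summarize with a model.
        return {"summary": "%d events" % len(events),
                "keywords": sorted(set(e.split()[0] for e in events))}

    def end_session(self, events):
        # Capture the session's prompts/tool observations, then compress.
        self.archive.append(self.compress(list(events)))

    def retrieve(self, query):
        # Inject units whose keywords overlap the new session's query.
        return [unit for unit in self.archive
                if any(word in unit["keywords"] for word in query.split())]

mem = SessionMemory()
mem.end_session(["edit main.py", "test suite passed"])
print(mem.retrieve("edit the config"))  # one unit from the earlier session
print(mem.retrieve("deploy"))           # []
```

&lt;p&gt;The token savings come from the retrieve step: a new session only pays for the compact matching units, not the full prior transcripts.&lt;/p&gt;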

&lt;h2&gt;
  
  
  The Architecture of Intelligent Memory
&lt;/h2&gt;

&lt;p&gt;Effective AI agent memory isn't just about storing information. It requires three distinct capabilities working together.&lt;/p&gt;

&lt;h3&gt;
  
  
  Salience: What Do We Keep?
&lt;/h3&gt;

&lt;p&gt;Agents generate enormous amounts of text, much of it redundant. Good memory requires salience detection—identifying which facts matter.&lt;/p&gt;

&lt;p&gt;Different systems approach this differently:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Mem0 uses a "memory candidate selector" to isolate atomic statements&lt;/li&gt;
&lt;li&gt;Zep encodes entities and relationships explicitly&lt;/li&gt;
&lt;li&gt;Memvid relies on frame-based indexing with timestamps&lt;/li&gt;
&lt;/ul&gt;
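
&lt;p&gt;A crude version of salience filtering can be written as a heuristic; production systems use learned classifiers or LLM judges instead:&lt;/p&gt;

```python
# Salience detection sketch: keep only sentences that look like stable,
# atomic facts (a heuristic stand-in for an LLM-based candidate selector).

FACT_MARKERS = ("is", "are", "was", "prefers", "lives", "works")

def memory_candidates(transcript):
    sentences = [s.strip() for s in transcript.split(".") if s.strip()]
    return [s for s in sentences
            if any(" %s " % m in " %s " % s for m in FACT_MARKERS)]

text = "Hi there. Alice works at Acme. Could you retry that."
print(memory_candidates(text))  # ['Alice works at Acme']
```

&lt;p&gt;Everything this filter drops never enters memory at all, which is why candidate selection quality matters so much downstream.&lt;/p&gt;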

&lt;h3&gt;
  
  
  Consolidation: How Do Memories Evolve?
&lt;/h3&gt;

&lt;p&gt;Human recall is recursive—we re-encode memories each time we retrieve them, strengthening some and discarding others. AI systems can mimic this by summarizing or rewriting old entries when new evidence appears.&lt;/p&gt;

&lt;p&gt;This prevents "context drift" where outdated facts persist and contaminate current reasoning.&lt;/p&gt;
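
&lt;p&gt;In code, consolidation can be as simple as rewriting an entry when newer evidence arrives, rather than appending a duplicate:&lt;/p&gt;

```python
# Consolidation sketch: newer evidence overwrites a stored fact in
# place, so stale entries cannot contaminate later retrieval.

def consolidate(memory, key, new_value, ts):
    entry = memory.get(key)
    if entry is None or ts > entry["ts"]:
        memory[key] = {"value": new_value, "ts": ts}
    return memory

mem = {}
consolidate(mem, "user_employer", "acme", ts=1)
consolidate(mem, "user_employer", "globex", ts=5)  # newer evidence wins
print(mem["user_employer"]["value"])  # globex
```

&lt;p&gt;Keyed on the fact rather than the utterance, the store always holds one current answer per question, which is the whole defense against context drift.&lt;/p&gt;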

&lt;h3&gt;
  
  
  Retrieval: How Do We Find What We Need?
&lt;/h3&gt;

&lt;p&gt;The best systems weight relevance by both recency and importance. They understand that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Recent information often supersedes older data&lt;/li&gt;
&lt;li&gt;Some facts are always relevant regardless of age&lt;/li&gt;
&lt;li&gt;Context determines which memories matter&lt;/li&gt;
&lt;/ul&gt;
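
&lt;p&gt;One common way to combine these signals is a weighted score over recency, importance, and query relevance. The decay rate and values below are illustrative:&lt;/p&gt;

```python
import math

# Score each memory by recency (exponential decay), stored importance,
# and relevance to the current query, then retrieve the top scorer.

def score(memory, query_terms, now):
    recency = math.exp(-0.1 * (now - memory["ts"]))  # fades with age
    overlap = len(query_terms.intersection(memory["terms"]))
    relevance = overlap / max(len(query_terms), 1)
    return recency + memory["importance"] + relevance

memories = [
    {"ts": 1,  "importance": 0.9, "terms": {"deadline", "project"}},
    {"ts": 90, "importance": 0.2, "terms": {"lunch"}},
]
query = {"project", "deadline"}
best = max(memories, key=lambda m: score(m, query, now=100))
print(sorted(best["terms"]))  # ['deadline', 'project'] (old but important)
```

&lt;p&gt;Here the older memory wins because importance and relevance outweigh its faded recency, exactly the "some facts are always relevant" behavior described above.&lt;/p&gt;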

&lt;p&gt;Done right, these layers produce agents that evolve alongside users. Done poorly, they create brittle systems that hallucinate old facts, repeat mistakes, or lose trust altogether.&lt;/p&gt;

&lt;h2&gt;
  
  
  What 2026 Holds: Three Trajectories
&lt;/h2&gt;

&lt;p&gt;Based on current developments, expect these trends to accelerate:&lt;/p&gt;

&lt;h3&gt;
  
  
  Memory as Infrastructure
&lt;/h3&gt;

&lt;p&gt;Developers will call &lt;code&gt;memory.write()&lt;/code&gt; as easily as they now call &lt;code&gt;db.save()&lt;/code&gt;. Specialized providers will evolve into middleware for every agent platform. Memory APIs will become as standardized as database APIs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Memory as Governance
&lt;/h3&gt;

&lt;p&gt;Enterprises will demand visibility into what agents know and why. Dashboards will show "memory graphs" of learned facts with controls to edit or erase. Transparency will become table stakes; memories will be written in natural language that humans can audit.&lt;/p&gt;

&lt;h3&gt;
  
  
  Memory as Identity
&lt;/h3&gt;

&lt;p&gt;Over time, agents will develop personal histories—records of collaboration, preferences, even patterns. That history will anchor trust but raise new philosophical questions. When a model fine-tuned on your interactions generates insight, whose memory is it?&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started: Your Next Steps
&lt;/h2&gt;

&lt;p&gt;The AI agent memory revolution isn't coming—it's here. If you're building agents today, here's how to move forward:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For simple use cases:&lt;/strong&gt; Start with summarization-based approaches. They're easy to implement and work well for straightforward assistants.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For enterprise applications:&lt;/strong&gt; Evaluate Mem0 or Zep for their production-ready features and compliance capabilities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For coding agents:&lt;/strong&gt; Claude-mem or similar session-persistence tools can dramatically improve developer experience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For maximum control:&lt;/strong&gt; LangMem or Letta's tool-based approaches let you define exactly how memory works.&lt;/p&gt;

&lt;p&gt;The winners in AI will be those who solve the memory problem—not with bigger context windows, but with intelligent systems that remember what matters, forget what doesn't, and learn from every interaction.&lt;/p&gt;

&lt;p&gt;2026 is the year persistent context goes from experimental to essential. The only question is: will your agents remember, or will they forget?&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Building AI agents that need to remember? Read the full article at &lt;a href="https://serenitiesai.com/articles/ai-agent-memory-why-2026-is-the-year-of-persistent-context" rel="noopener noreferrer"&gt;serenitiesai.com&lt;/a&gt; for more details on implementation and architecture.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>programming</category>
    </item>
    <item>
      <title>GPT-5.4 Is Here: Everything Developers and Builders Need to Know</title>
      <dc:creator>Serenities AI</dc:creator>
      <pubDate>Fri, 13 Mar 2026 13:03:07 +0000</pubDate>
      <link>https://forem.com/serenitiesai/gpt-54-is-here-everything-developers-and-builders-need-to-know-clj</link>
      <guid>https://forem.com/serenitiesai/gpt-54-is-here-everything-developers-and-builders-need-to-know-clj</guid>
      <description>&lt;p&gt;OpenAI just dropped GPT-5.4 — and it's not just another incremental update. Released on March 5, 2026, this model combines frontier coding capabilities, native computer-use, and a 1M token context window into a single package designed for professional work. If you build apps, automate workflows, or run AI-powered businesses, here's exactly what changed and why it matters.&lt;/p&gt;

&lt;h2&gt;
  
  
  TL;DR — What's New in GPT-5.4
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;GPT-5.4&lt;/th&gt;
&lt;th&gt;GPT-5.2 (Previous)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Computer Use&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Native — operate desktops, browsers, apps&lt;/td&gt;
&lt;td&gt;Not available&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Context Window&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Up to 1M tokens&lt;/td&gt;
&lt;td&gt;128K–256K&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Tool Search&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;47% fewer tokens for tool-heavy workflows&lt;/td&gt;
&lt;td&gt;All tools loaded upfront&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Knowledge Work (GDPval)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;83.0% (matches/exceeds professionals)&lt;/td&gt;
&lt;td&gt;70.9%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;OSWorld (Desktop Use)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;75.0% — surpasses human performance (72.4%)&lt;/td&gt;
&lt;td&gt;47.3%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Coding (SWE-Bench Pro)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;57.7%&lt;/td&gt;
&lt;td&gt;55.6%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;API Pricing (Input)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;$2.50/M tokens&lt;/td&gt;
&lt;td&gt;$1.75/M tokens&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;API Pricing (Output)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;$15/M tokens&lt;/td&gt;
&lt;td&gt;$14/M tokens&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  1. Native Computer Use — The Headline Feature
&lt;/h2&gt;

&lt;p&gt;GPT-5.4 is OpenAI's first general-purpose model with &lt;strong&gt;native computer-use capabilities&lt;/strong&gt;. This isn't a bolted-on feature — it's built into the model itself.&lt;/p&gt;

&lt;p&gt;What does that mean practically? GPT-5.4 can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate desktop environments through screenshots and keyboard/mouse actions&lt;/li&gt;
&lt;li&gt;Write Playwright code to automate browser workflows&lt;/li&gt;
&lt;li&gt;Issue mouse and keyboard commands in response to what it sees on screen&lt;/li&gt;
&lt;li&gt;Complete multi-step workflows across different applications&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The benchmark results tell the story. On &lt;strong&gt;OSWorld-Verified&lt;/strong&gt;, which measures a model's ability to navigate desktop environments, GPT-5.4 hits &lt;strong&gt;75.0%&lt;/strong&gt; — exceeding human performance at 72.4% and obliterating GPT-5.2's 47.3%. That's a 59% relative improvement in one generation.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Tool Search — Finally, Efficient Tool Ecosystems
&lt;/h2&gt;

&lt;p&gt;GPT-5.4 introduces &lt;strong&gt;tool search&lt;/strong&gt;. Instead of loading all tool definitions into context, the model receives a lightweight list and looks up specific tool definitions only when needed.&lt;/p&gt;

&lt;p&gt;In testing on 250 tasks from Scale's MCP Atlas benchmark, with all 36 MCP servers enabled, tool search &lt;strong&gt;reduced total token usage by 47%&lt;/strong&gt; while achieving the same accuracy.&lt;/p&gt;
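
&lt;p&gt;Conceptually, tool search works like this sketch (illustrative Python, not OpenAI's actual API): the model sees a cheap name-plus-hint index up front and pulls full schemas only on demand.&lt;/p&gt;

```python
# Tool search sketched: expose only tool names and one-line hints in
# the prompt, and fetch the full (token-heavy) definition when needed.

FULL_DEFINITIONS = {
    "create_ticket": {"description": "Create an issue tracker ticket",
                      "parameters": {"title": "string", "body": "string"}},
    "query_db": {"description": "Run a read-only SQL query",
                 "parameters": {"sql": "string"}},
}

def lightweight_index():
    # What the model sees up front: a few tokens per tool.
    return {name: d["description"] for name, d in FULL_DEFINITIONS.items()}

def lookup(name):
    # Called only when the model decides it needs this tool.
    return FULL_DEFINITIONS[name]

print(lightweight_index())
print(lookup("query_db")["parameters"])  # {'sql': 'string'}
```

&lt;p&gt;With dozens of MCP servers, the unfetched schemas are where the 47% token savings come from.&lt;/p&gt;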

&lt;h2&gt;
  
  
  3. 1M Token Context Window
&lt;/h2&gt;

&lt;p&gt;GPT-5.4 supports up to &lt;strong&gt;1M tokens of context&lt;/strong&gt; — 4x Claude's current 256K limit. There's a catch: requests exceeding 272K context count against usage limits at &lt;strong&gt;2x the normal rate&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Long-context performance:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;4K–128K: 86–97% accuracy (strong)&lt;/li&gt;
&lt;li&gt;128K–256K: 79.3% (good)&lt;/li&gt;
&lt;li&gt;256K–512K: 57.5% (moderate drop-off)&lt;/li&gt;
&lt;li&gt;512K–1M: 36.6% (significant degradation)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  4. Professional Knowledge Work — 83% Match With Experts
&lt;/h2&gt;

&lt;p&gt;On &lt;strong&gt;GDPval&lt;/strong&gt;, GPT-5.4 matches or exceeds professionals in &lt;strong&gt;83.0% of comparisons&lt;/strong&gt; across 44 occupations — up from 70.9% for GPT-5.2.&lt;/p&gt;

&lt;p&gt;Specific improvements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Spreadsheet modeling:&lt;/strong&gt; 87.3% mean score on investment banking analyst tasks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Factual accuracy:&lt;/strong&gt; 33% fewer false individual claims&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  API Pricing Breakdown
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;Input Price&lt;/th&gt;
&lt;th&gt;Output Price&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;gpt-5.2&lt;/td&gt;
&lt;td&gt;$1.75/M tokens&lt;/td&gt;
&lt;td&gt;$14/M tokens&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;gpt-5.4&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$2.50/M tokens&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$15/M tokens&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;gpt-5.4-pro&lt;/td&gt;
&lt;td&gt;$30/M tokens&lt;/td&gt;
&lt;td&gt;$180/M tokens&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;GPT-5.4 costs ~43% more per input token than GPT-5.2, but OpenAI claims greater token efficiency reduces total tokens required for many tasks.&lt;/p&gt;

&lt;p&gt;Competitor comparison:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GPT-5.4: $2.50/M input, $15/M output&lt;/li&gt;
&lt;li&gt;Claude Opus 4.6: $5/M input, $25/M output&lt;/li&gt;
&lt;li&gt;Claude Sonnet 4.6: $3/M input, $15/M output&lt;/li&gt;
&lt;/ul&gt;
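
&lt;p&gt;With those per-million-token prices, comparing per-task cost is simple arithmetic (token counts below are hypothetical):&lt;/p&gt;

```python
# Compare per-task API cost at the listed per-million-token prices.

PRICES = {  # (input $/M tokens, output $/M tokens)
    "gpt-5.4": (2.50, 15.00),
    "gpt-5.2": (1.75, 14.00),
    "claude-sonnet-4.6": (3.00, 15.00),
}

def task_cost(model, input_tokens, output_tokens):
    pin, pout = PRICES[model]
    return (input_tokens * pin + output_tokens * pout) / 1_000_000

# Hypothetical task: 50K tokens in, 5K tokens out.
for model in PRICES:
    print(model, round(task_cost(model, 50_000, 5_000), 4))
```

&lt;p&gt;At identical token counts GPT-5.4 comes out more expensive per task than GPT-5.2; OpenAI's efficiency claim is precisely that it needs fewer tokens for the same work.&lt;/p&gt;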

&lt;p&gt;&lt;strong&gt;Cost savings tip:&lt;/strong&gt; With platforms like &lt;a href="https://serenitiesai.com" rel="noopener noreferrer"&gt;Serenities AI&lt;/a&gt;, you can connect your existing ChatGPT subscription instead of paying per-token API costs — potentially saving 10-25x on AI costs at scale.&lt;/p&gt;

&lt;h2&gt;
  
  
  What This Means for Builders
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Computer use changes the agent game&lt;/strong&gt; — Models can directly operate software, unlocking automation scenarios previously impossible&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tool search makes MCP practical at scale&lt;/strong&gt; — No more token bloat from dozens of MCP servers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Professional work capabilities are real&lt;/strong&gt; — 83% match rate with professionals isn't a toy demo&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost efficiency matters more than raw price&lt;/strong&gt; — Fewer tokens per task may offset higher per-token costs&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  FAQ
&lt;/h2&gt;

&lt;h3&gt;
  
  
  When was GPT-5.4 released?
&lt;/h3&gt;

&lt;p&gt;March 5, 2026, with gradual rollout across ChatGPT, Codex, and API.&lt;/p&gt;

&lt;h3&gt;
  
  
  How much does GPT-5.4 cost?
&lt;/h3&gt;

&lt;p&gt;API: $2.50/M input, $15/M output. In ChatGPT, access depends on your subscription plan.&lt;/p&gt;

&lt;h3&gt;
  
  
  Can GPT-5.4 use my computer?
&lt;/h3&gt;

&lt;p&gt;Yes — native computer-use capabilities via the API's &lt;code&gt;computer&lt;/code&gt; tool.&lt;/p&gt;

&lt;h3&gt;
  
  
  How does it compare to Claude Opus 4.6?
&lt;/h3&gt;

&lt;p&gt;GPT-5.4 is significantly cheaper and leads on computer use. Both are frontier-class for coding.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bottom Line
&lt;/h2&gt;

&lt;p&gt;GPT-5.4 isn't just better — it's a different kind of model. Native computer use, tool search, and 1M context transform what's possible for agents and professional automation.&lt;/p&gt;

&lt;p&gt;The age of AI agents that actually do professional work isn't coming. It just shipped.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://serenitiesai.com/articles/gpt-5-4-everything-you-need-to-know" rel="noopener noreferrer"&gt;serenitiesai.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>tutorial</category>
      <category>api</category>
      <category>python</category>
    </item>
    <item>
      <title>Claude Pro vs Max Plans 2026: Limits, Pricing &amp; Which to Choose</title>
      <dc:creator>Serenities AI</dc:creator>
      <pubDate>Mon, 09 Mar 2026 16:34:21 +0000</pubDate>
      <link>https://forem.com/serenitiesai/claude-pro-vs-max-plans-2026-limits-pricing-which-to-choose-3m28</link>
      <guid>https://forem.com/serenitiesai/claude-pro-vs-max-plans-2026-limits-pricing-which-to-choose-3m28</guid>
      <description>&lt;p&gt;Here's an uncomfortable truth: most people paying for Claude are on the wrong plan. Some are overpaying $80/month for Max features they barely touch. Others are grinding through Pro usage caps when a Max subscription would save them hours of waiting every week. Anthropic’s pricing page looks simple — $20/month, $100/month, or $200/month — but the real differences between Claude Pro and Max are buried in usage multipliers, feature access, and workflow-specific perks that most comparison articles completely ignore. This guide breaks down exactly what you get at each tier so you can stop guessing and start choosing.&lt;/p&gt;

&lt;h2&gt;
  
  
  TL;DR: Claude Pro vs Max at a Glance
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Claude Pro ($20/mo)&lt;/th&gt;
&lt;th&gt;Claude Max (from $100/mo)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Monthly Price&lt;/td&gt;
&lt;td&gt;$20/mo or $17/mo annual&lt;/td&gt;
&lt;td&gt;From $100/mo&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Usage vs Pro&lt;/td&gt;
&lt;td&gt;1x (baseline)&lt;/td&gt;
&lt;td&gt;5x or 20x more&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Claude Code&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅ (higher limits)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cowork&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Research&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Projects&lt;/td&gt;
&lt;td&gt;✅ Unlimited&lt;/td&gt;
&lt;td&gt;✅ Unlimited&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Claude in Excel&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Claude in PowerPoint&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅ (preview)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Higher Output Limits&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Priority Access&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Context Window&lt;/td&gt;
&lt;td&gt;200k tokens&lt;/td&gt;
&lt;td&gt;200k tokens&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Best For&lt;/td&gt;
&lt;td&gt;Daily professionals&lt;/td&gt;
&lt;td&gt;Power users, agencies&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Claude Pro: What You Get for $20/Month
&lt;/h2&gt;

&lt;p&gt;Claude Pro is Anthropic’s core paid tier. At $20/month (or $17/month annual at $200/yr), you unlock everything that makes Claude genuinely useful for professional work.&lt;/p&gt;

&lt;h3&gt;
  
  
  Usage: More Than Free, But Not Unlimited
&lt;/h3&gt;

&lt;p&gt;Pro users get all models: Opus, Sonnet, and Haiku with 200k context windows. The jump from Free to Pro is substantial. But usage limits still apply — during peak hours, heavy Pro users may hit rate limits. This is the primary reason people consider Max.&lt;/p&gt;

&lt;h3&gt;
  
  
  Claude Code and Cowork
&lt;/h3&gt;

&lt;p&gt;Claude Code is Anthropic’s terminal-based coding agent, included with every Pro subscription. It handles file edits, terminal commands, and multi-step coding tasks directly in your development environment.&lt;/p&gt;

&lt;p&gt;Cowork is Claude’s collaborative mode — Claude works alongside you in real-time on longer projects, maintaining context and contributing proactively.&lt;/p&gt;

&lt;h3&gt;
  
  
  Research and Projects
&lt;/h3&gt;

&lt;p&gt;Research conducts multi-step deep dives, synthesizes findings, and presents comprehensive results. Unlimited Projects let you organize work into persistent workspaces with custom instructions and knowledge bases.&lt;/p&gt;

&lt;h3&gt;
  
  
  Claude in Excel
&lt;/h3&gt;

&lt;p&gt;Integrates Claude directly into Microsoft Excel spreadsheets — eliminates copy-paste workflows between Claude and your data.&lt;/p&gt;

&lt;h3&gt;
  
  
  Who Is Pro For?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Software developers using Claude Code daily&lt;/li&gt;
&lt;li&gt;Writers and content creators&lt;/li&gt;
&lt;li&gt;Analysts who need Research mode&lt;/li&gt;
&lt;li&gt;Business professionals organizing work into Projects&lt;/li&gt;
&lt;li&gt;Anyone hitting Free tier limits weekly&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Claude Max: 5x and 20x Usage Explained
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Max 5x ($100/Month)
&lt;/h3&gt;

&lt;p&gt;5x more usage than Pro, which in practice eliminates the rate-limit interruptions that plague Pro users during heavy work sessions. Higher output limits mean Claude can generate longer responses in a single turn.&lt;/p&gt;

&lt;h3&gt;
  
  
  Max 20x ($200/Month)
&lt;/h3&gt;

&lt;p&gt;20x Pro usage for agencies, dev teams using Claude Code as a core pipeline tool, and researchers conducting marathon sessions daily.&lt;/p&gt;

&lt;h3&gt;
  
  
  Max-Exclusive Features
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Claude in PowerPoint (Research Preview):&lt;/strong&gt; Max-only, lets you use Claude inside PowerPoint presentations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Priority Access:&lt;/strong&gt; Max users get served first during peak hours.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Early Access:&lt;/strong&gt; New capabilities roll out to Max users first.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Max Does NOT Give You
&lt;/h3&gt;

&lt;p&gt;Same 200k context window, same models, same response quality. Max is about quantity and access, not quality.&lt;/p&gt;

&lt;h2&gt;
  
  
  Feature Comparison: Free vs Pro vs Max
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Free&lt;/th&gt;
&lt;th&gt;Pro ($20/mo)&lt;/th&gt;
&lt;th&gt;Max (from $100/mo)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;All platforms&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Code Generation&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Web Search&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Extended Thinking&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;MCP Connectors&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Claude Code&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cowork&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Research&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Projects&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Excel&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PowerPoint&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅ (preview)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5x/20x Usage&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Priority Access&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Pricing Breakdown
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Plan&lt;/th&gt;
&lt;th&gt;Monthly&lt;/th&gt;
&lt;th&gt;Annual&lt;/th&gt;
&lt;th&gt;Per Usage Unit&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Pro&lt;/td&gt;
&lt;td&gt;$20/mo&lt;/td&gt;
&lt;td&gt;$200/yr ($17/mo)&lt;/td&gt;
&lt;td&gt;$20&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Max 5x&lt;/td&gt;
&lt;td&gt;~$100/mo&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;~$20&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Max 20x&lt;/td&gt;
&lt;td&gt;$200/mo&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;$10&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Max 5x is not a volume discount: you pay five times Pro's price for five times the usage, and the value lies in removing the ceiling. Only Max 20x lowers the effective per-unit price.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Hidden Cost: Lost Productivity
&lt;/h3&gt;

&lt;p&gt;A developer billing $100+/hour who loses 30 minutes a day to rate limits loses roughly $1,000/month in productivity (0.5 hours × $100 × ~20 working days). Against that, Max at $100/month is a $900/month net saving.&lt;/p&gt;

&lt;h2&gt;
  
  
  Real-World Scenarios
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Casual user (few times/week):&lt;/strong&gt; Free tier is plenty.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Daily professional (3-4 hrs/day):&lt;/strong&gt; Pro handles it. Go annual at $200/yr.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Heavy coder (8+ hrs/day):&lt;/strong&gt; Max 5x eliminates daily rate limit frustration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agency lead (50+ conversations/day):&lt;/strong&gt; Max 20x pays for itself many times over.&lt;/p&gt;

&lt;h2&gt;
  
  
  Claude Code: The Upgrade Driver
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Your Usage&lt;/th&gt;
&lt;th&gt;Plan&lt;/th&gt;
&lt;th&gt;Why&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Don’t use it&lt;/td&gt;
&lt;td&gt;Free/Pro&lt;/td&gt;
&lt;td&gt;No need for Max&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&amp;lt; 1 hr/day&lt;/td&gt;
&lt;td&gt;Pro&lt;/td&gt;
&lt;td&gt;Limits sufficient&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1-3 hrs/day&lt;/td&gt;
&lt;td&gt;Pro (watch limits)&lt;/td&gt;
&lt;td&gt;May hit limits on heavy days&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4+ hrs/day&lt;/td&gt;
&lt;td&gt;Max 5x&lt;/td&gt;
&lt;td&gt;Pro limits frustrate daily&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;All-day pairing&lt;/td&gt;
&lt;td&gt;Max 20x&lt;/td&gt;
&lt;td&gt;Even 5x may not suffice&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Is Max Worth It?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Yes if:&lt;/strong&gt; You hit Pro limits regularly, use Claude 4+ hrs/day, bill $75+/hour, or need priority access.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No if:&lt;/strong&gt; You rarely hit limits, use Claude occasionally, want better quality (it’s identical), or mainly want PowerPoint.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Verdict:&lt;/strong&gt; Start with Pro. Upgrade to Max only after 2-3 months of consistently hitting limits.&lt;/p&gt;

&lt;h2&gt;
  
  
  FAQ
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What are Claude Pro’s exact limits?&lt;/strong&gt; Anthropic doesn’t publish specific counts. Limits are dynamic based on model, conversation length, and server load.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Does Pro have a fallback model?&lt;/strong&gt; When approaching limits, Anthropic may route you to lighter models (Sonnet/Haiku instead of Opus).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Can I switch mid-month?&lt;/strong&gt; Yes, upgrades are prorated. Downgrades take effect at billing cycle end.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Is there a team plan?&lt;/strong&gt; Yes — Claude Team and Enterprise plans exist with admin controls and shared billing.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://serenitiesai.com/articles/claude-pro-vs-max-2026" rel="noopener noreferrer"&gt;Serenities AI&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>productivity</category>
      <category>coding</category>
    </item>
    <item>
      <title>How to Set Up MCP Servers in Cursor IDE: Complete Guide 2026</title>
      <dc:creator>Serenities AI</dc:creator>
      <pubDate>Mon, 09 Mar 2026 13:07:34 +0000</pubDate>
      <link>https://forem.com/serenitiesai/how-to-set-up-mcp-servers-in-cursor-ide-complete-guide-2026-5gdl</link>
      <guid>https://forem.com/serenitiesai/how-to-set-up-mcp-servers-in-cursor-ide-complete-guide-2026-5gdl</guid>
      <description>&lt;p&gt;You've heard the buzz about MCP servers in Cursor, but every time you try to set one up, you hit a wall. Maybe the config file isn't being read. Maybe the server starts but Cursor can't see the tools. Or maybe you're just not sure where to begin — stdio, SSE, HTTP… what does any of it mean?&lt;/p&gt;

&lt;p&gt;You're not alone. The Model Context Protocol (MCP) is one of the most powerful features in Cursor IDE, letting your AI assistant connect to external tools, databases, APIs, and data sources. But the documentation can feel scattered, and getting your first MCP server working in Cursor requires understanding a few key concepts.&lt;/p&gt;

&lt;p&gt;This guide fixes that. By the end, you'll have MCP servers running in Cursor — whether you're a Node.js developer, a Python programmer, or someone deploying remote servers for a team. No guesswork, no frustration, just working configurations.&lt;/p&gt;

&lt;h2&gt;
  
  
  What You'll Learn
&lt;/h2&gt;

&lt;p&gt;This comprehensive guide covers everything you need to set up and use MCP servers in Cursor IDE:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What MCP is and why it matters for AI-assisted development&lt;/li&gt;
&lt;li&gt;Three transport methods — stdio, SSE, and Streamable HTTP — and when to use each&lt;/li&gt;
&lt;li&gt;Step-by-step setup for local stdio servers (Node.js and Python examples)&lt;/li&gt;
&lt;li&gt;Remote server configuration for HTTP and SSE endpoints&lt;/li&gt;
&lt;li&gt;Configuration locations — project-level vs. global config files&lt;/li&gt;
&lt;li&gt;Config interpolation — using environment variables and workspace paths&lt;/li&gt;
&lt;li&gt;OAuth and authentication for secure server connections&lt;/li&gt;
&lt;li&gt;Using MCP tools in Cursor's chat and agent modes&lt;/li&gt;
&lt;li&gt;Security best practices to keep your setup safe&lt;/li&gt;
&lt;li&gt;Troubleshooting tips for common issues&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's dive in.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before you start configuring MCP servers in Cursor, make sure you have these basics in place:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Requirement&lt;/th&gt;
&lt;th&gt;Details&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Cursor IDE&lt;/td&gt;
&lt;td&gt;Installed and running. MCP support is available on all Cursor plans (Free, Pro, Ultra, and Enterprise).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Node.js (for JS servers)&lt;/td&gt;
&lt;td&gt;Version 18+ recommended. Needed for npx-based MCP servers.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Python (for Python servers)&lt;/td&gt;
&lt;td&gt;Version 3.10+ recommended. Needed for Python-based MCP servers.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;A terminal&lt;/td&gt;
&lt;td&gt;You'll need basic command-line comfort for testing servers.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;A text editor&lt;/td&gt;
&lt;td&gt;For editing JSON config files (Cursor itself works great for this).&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;If you're only setting up remote MCP servers (HTTP/SSE), you don't need Node.js or Python installed locally — just the server URL.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is MCP? (Model Context Protocol Explained)
&lt;/h2&gt;

&lt;p&gt;The Model Context Protocol (MCP) is an open standard that enables AI coding assistants like Cursor to connect to external tools and data sources. Think of it as a universal adapter between your AI and the outside world.&lt;/p&gt;

&lt;p&gt;Without MCP, your AI assistant is limited to what it can see in your codebase and its training data. With MCP, it can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Query databases directly to understand your schema and data&lt;/li&gt;
&lt;li&gt;Call APIs — search the web, check deployment status, create tickets&lt;/li&gt;
&lt;li&gt;Access documentation from external sources in real time&lt;/li&gt;
&lt;li&gt;Interact with services like GitHub, Jira, Slack, or your own custom tools&lt;/li&gt;
&lt;li&gt;Read and write files beyond your workspace boundaries&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;MCP works by defining a standard JSON-RPC protocol between the AI client (Cursor) and external servers. You can write MCP servers in any language that can print to stdout or serve an HTTP endpoint — Python, JavaScript, Go, Rust, and more.&lt;/p&gt;
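&lt;p&gt;A tool invocation, for example, is just a JSON-RPC request (the tool name and arguments below are illustrative, not from a real server):&lt;/p&gt;

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "search_docs",
    "arguments": { "query": "rate limits" }
  }
}
```

&lt;p&gt;The server replies with a JSON-RPC response whose &lt;code&gt;result&lt;/code&gt; field carries the tool output.&lt;/p&gt;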

&lt;h3&gt;
  
  
  The Five Protocol Capabilities
&lt;/h3&gt;

&lt;p&gt;Cursor's MCP implementation supports all five protocol capabilities defined by the standard:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Capability&lt;/th&gt;
&lt;th&gt;What It Does&lt;/th&gt;
&lt;th&gt;Cursor Support&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Tools&lt;/td&gt;
&lt;td&gt;Functions for the AI model to execute (e.g., search a database, call an API)&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Prompts&lt;/td&gt;
&lt;td&gt;Templated messages and workflows the server can expose&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Resources&lt;/td&gt;
&lt;td&gt;Structured data sources the AI can query&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Roots&lt;/td&gt;
&lt;td&gt;Server-initiated requests for the client's filesystem or URI boundaries&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Elicitation&lt;/td&gt;
&lt;td&gt;Server-initiated requests for additional information from the user&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;This full protocol support means Cursor's MCP implementation is among the most complete available.&lt;/p&gt;

&lt;h2&gt;
  
  
  How MCP Works in Cursor: Transport Methods
&lt;/h2&gt;

&lt;p&gt;When setting up an MCP server in Cursor, the first decision you need to make is which transport method to use. Cursor supports three transport types:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Transport&lt;/th&gt;
&lt;th&gt;Execution&lt;/th&gt;
&lt;th&gt;Deployment&lt;/th&gt;
&lt;th&gt;Users&lt;/th&gt;
&lt;th&gt;Input&lt;/th&gt;
&lt;th&gt;Auth&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;stdio&lt;/td&gt;
&lt;td&gt;Local&lt;/td&gt;
&lt;td&gt;Cursor manages&lt;/td&gt;
&lt;td&gt;Single user&lt;/td&gt;
&lt;td&gt;Shell command&lt;/td&gt;
&lt;td&gt;Manual&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SSE&lt;/td&gt;
&lt;td&gt;Local/Remote&lt;/td&gt;
&lt;td&gt;Deploy as server&lt;/td&gt;
&lt;td&gt;Multiple users&lt;/td&gt;
&lt;td&gt;URL to SSE endpoint&lt;/td&gt;
&lt;td&gt;OAuth&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Streamable HTTP&lt;/td&gt;
&lt;td&gt;Local/Remote&lt;/td&gt;
&lt;td&gt;Deploy as server&lt;/td&gt;
&lt;td&gt;Multiple users&lt;/td&gt;
&lt;td&gt;URL to HTTP endpoint&lt;/td&gt;
&lt;td&gt;OAuth&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  When to Use Each Transport
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;stdio (Standard I/O)&lt;/strong&gt; is the most common choice for individual developers. Cursor launches the server process itself and communicates through stdin/stdout. Use stdio when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You're a solo developer working on your own machine&lt;/li&gt;
&lt;li&gt;The MCP server needs access to local files or tools&lt;/li&gt;
&lt;li&gt;You want zero deployment overhead&lt;/li&gt;
&lt;li&gt;You're using community MCP packages via npx or pip&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;SSE (Server-Sent Events)&lt;/strong&gt; is ideal for remote or shared servers. Use SSE when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Multiple team members need the same MCP server&lt;/li&gt;
&lt;li&gt;The server needs to run persistently&lt;/li&gt;
&lt;li&gt;You're connecting to a hosted MCP service&lt;/li&gt;
&lt;li&gt;You need OAuth-based authentication&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Streamable HTTP&lt;/strong&gt; is the newest transport method. Use it when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You're building a new remote MCP server from scratch&lt;/li&gt;
&lt;li&gt;You want better compatibility with standard HTTP infrastructure&lt;/li&gt;
&lt;li&gt;You need OAuth authentication&lt;/li&gt;
&lt;li&gt;Your infrastructure doesn't support SSE well&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step-by-Step: Setting Up stdio MCP Servers
&lt;/h2&gt;

&lt;p&gt;The stdio transport is the fastest way to get an MCP server running in Cursor.&lt;/p&gt;

&lt;h3&gt;
  
  
  Understanding the Config File
&lt;/h3&gt;

&lt;p&gt;For stdio servers, you'll need these fields:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Field&lt;/th&gt;
&lt;th&gt;Required&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;type&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Server connection type — set to "stdio"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;command&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Command to start the server (e.g., npx, node, python, docker)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;args&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Array of arguments passed to the command&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;env&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Environment variables for the server process&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;envFile&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Path to a .env file (stdio only)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Example 1: Node.js MCP Server via npx
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Create the config file. In your project root:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; .cursor
&lt;span class="nb"&gt;touch&lt;/span&gt; .cursor/mcp.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; Add your server configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"server-name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"npx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"-y"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"mcp-server"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"env"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"API_KEY"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"your-api-key-here"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; Restart Cursor (or reload the window).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4:&lt;/strong&gt; Verify it's working. Open the Cursor chat (Agent mode) and look for the server's tools listed under "Available Tools."&lt;/p&gt;

&lt;h3&gt;
  
  
  Example 2: Python MCP Server
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"my-python-server"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"${workspaceFolder}/.venv/bin/python"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"${workspaceFolder}/mcp-server.py"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"env"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"API_KEY"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"your-api-key-here"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
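&lt;p&gt;As a rough illustration of what a minimal &lt;code&gt;mcp-server.py&lt;/code&gt; does on the wire, here is a stdlib-only sketch of the stdio transport. The &lt;code&gt;echo&lt;/code&gt; tool is hypothetical, and a real server should use the official MCP SDK, which handles the full handshake and error cases:&lt;/p&gt;

```python
# Stdlib-only sketch of a stdio MCP server: newline-delimited JSON-RPC
# over stdin/stdout. Illustrative only; use the official MCP SDK in practice.
import json
import sys

TOOLS = [{
    "name": "echo",  # hypothetical example tool
    "description": "Echo the input text back.",
    "inputSchema": {"type": "object",
                    "properties": {"text": {"type": "string"}}},
}]

def handle(request: dict) -> dict:
    """Map one JSON-RPC request to its JSON-RPC response."""
    method = request.get("method")
    if method == "initialize":
        result = {"protocolVersion": "2024-11-05",
                  "capabilities": {"tools": {}},
                  "serverInfo": {"name": "demo-server", "version": "0.1.0"}}
    elif method == "tools/list":
        result = {"tools": TOOLS}
    elif method == "tools/call":
        text = request["params"]["arguments"].get("text", "")
        result = {"content": [{"type": "text", "text": text}]}
    else:
        return {"jsonrpc": "2.0", "id": request.get("id"),
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": request.get("id"), "result": result}

def serve() -> None:
    """Read requests from stdin, write responses to stdout."""
    for line in sys.stdin:
        if line.strip():
            print(json.dumps(handle(json.loads(line))), flush=True)
```

&lt;p&gt;Cursor launches this process itself and exchanges one JSON-RPC message per line over stdin/stdout.&lt;/p&gt;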



&lt;h3&gt;
  
  
  Running Multiple stdio Servers
&lt;/h3&gt;

&lt;p&gt;You can configure as many MCP servers as you need:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"github"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"npx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"-y"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"@modelcontextprotocol/server-github"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"env"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"GITHUB_TOKEN"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ghp_your_token_here"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"database"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"python"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"db-mcp-server.py"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"env"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"DATABASE_URL"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"postgresql://localhost:5432/mydb"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"search"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"npx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"-y"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"mcp-server-brave-search"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"env"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"BRAVE_API_KEY"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"your-brave-key"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step-by-Step: Setting Up Remote MCP Servers (HTTP/SSE)
&lt;/h2&gt;

&lt;p&gt;Remote MCP servers are perfect for team environments or hosted services.&lt;/p&gt;

&lt;h3&gt;
  
  
  Remote Server Configuration
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"server-name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"url"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"http://localhost:3000/mcp"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"headers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"API_KEY"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"your-api-key-here"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Connecting to a Cloud-Hosted MCP Server
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"team-tools"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"url"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://mcp.yourcompany.com/tools"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"headers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Authorization"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Bearer your-team-token"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  SSE vs. Streamable HTTP
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;SSE&lt;/strong&gt; — Well-established protocol, works great for most deployments&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Streamable HTTP&lt;/strong&gt; — Newer standard, better for serverless and modern HTTP infrastructure&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you're building a new server from scratch, Streamable HTTP is the recommended transport; recent MCP spec revisions favor it over the older HTTP+SSE approach.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuration Locations: Project vs. Global
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Config Location&lt;/th&gt;
&lt;th&gt;Path&lt;/th&gt;
&lt;th&gt;Scope&lt;/th&gt;
&lt;th&gt;Best For&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Project config&lt;/td&gt;
&lt;td&gt;.cursor/mcp.json&lt;/td&gt;
&lt;td&gt;Current project only&lt;/td&gt;
&lt;td&gt;Project-specific tools, shared team config&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Global config&lt;/td&gt;
&lt;td&gt;~/.cursor/mcp.json&lt;/td&gt;
&lt;td&gt;All projects&lt;/td&gt;
&lt;td&gt;Personal tools you want everywhere&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;When both configs define a server with the same name, the project config takes precedence.&lt;/p&gt;
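&lt;p&gt;Conceptually, this is a merge of the two &lt;code&gt;mcpServers&lt;/code&gt; maps in which project entries shadow global ones. A sketch of the rule (illustrative, not Cursor's actual code):&lt;/p&gt;

```python
# Illustrative precedence rule: servers merge by name, and a project
# entry shadows a global entry with the same name.
def merge_configs(global_cfg: dict, project_cfg: dict) -> dict:
    servers = dict(global_cfg.get("mcpServers", {}))
    servers.update(project_cfg.get("mcpServers", {}))  # project wins
    return {"mcpServers": servers}
```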

&lt;h2&gt;
  
  
  Config Interpolation: Environment Variables and Workspace Paths
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Available Interpolation Variables
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Variable&lt;/th&gt;
&lt;th&gt;Resolves To&lt;/th&gt;
&lt;th&gt;Example&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;${env:NAME}&lt;/td&gt;
&lt;td&gt;Environment variable value&lt;/td&gt;
&lt;td&gt;${env:GITHUB_TOKEN}&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;${userHome}&lt;/td&gt;
&lt;td&gt;Path to home folder&lt;/td&gt;
&lt;td&gt;${userHome}/.config/keys.json&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;${workspaceFolder}&lt;/td&gt;
&lt;td&gt;Project root directory&lt;/td&gt;
&lt;td&gt;${workspaceFolder}/scripts/server.py&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;${workspaceFolderBasename}&lt;/td&gt;
&lt;td&gt;Project root folder name&lt;/td&gt;
&lt;td&gt;Used for dynamic naming&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;${pathSeparator} or ${/}&lt;/td&gt;
&lt;td&gt;OS path separator&lt;/td&gt;
&lt;td&gt;/ on Mac/Linux, \ on Windows&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
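&lt;p&gt;To make the behavior concrete, here is a stdlib-only sketch of how these placeholders expand (illustrative; Cursor's actual resolver may differ):&lt;/p&gt;

```python
# Sketch of Cursor-style config placeholder expansion -- illustrative
# only; Cursor's real implementation may handle more cases.
import os
import re

def interpolate(value: str, workspace: str) -> str:
    """Expand ${env:NAME}, ${userHome}, and workspace placeholders."""
    value = re.sub(r"\$\{env:([A-Za-z_][A-Za-z0-9_]*)\}",
                   lambda m: os.environ.get(m.group(1), ""), value)
    value = value.replace("${userHome}", os.path.expanduser("~"))
    value = value.replace("${workspaceFolder}", workspace)
    value = value.replace("${workspaceFolderBasename}",
                          os.path.basename(workspace))
    value = value.replace("${pathSeparator}", os.sep).replace("${/}", os.sep)
    return value
```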

&lt;h3&gt;
  
  
  Secure API Key Management
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"github"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"npx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"-y"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"@modelcontextprotocol/server-github"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"env"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"GITHUB_TOKEN"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"${env:GITHUB_TOKEN}"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then set in your shell profile:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;GITHUB_TOKEN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"ghp_your_actual_token_here"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Using envFile for Local Development
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"my-server"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"node"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"server.js"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"envFile"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"${workspaceFolder}/.env"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
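&lt;p&gt;The &lt;code&gt;envFile&lt;/code&gt; option loads &lt;code&gt;KEY=value&lt;/code&gt; pairs from the file into the server's environment. A minimal sketch of that parsing (illustrative; Cursor's parser may handle more syntax):&lt;/p&gt;

```python
# Minimal sketch of .env parsing: KEY=value lines, '#' comments,
# optional surrounding quotes. Illustrative, not Cursor's exact parser.
def parse_env(text: str) -> dict:
    env = {}
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"').strip("'")
    return env
```

&lt;p&gt;Add &lt;code&gt;.env&lt;/code&gt; to &lt;code&gt;.gitignore&lt;/code&gt; so these secrets stay out of version control.&lt;/p&gt;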



&lt;h2&gt;
  
  
  OAuth and Authentication
&lt;/h2&gt;

&lt;p&gt;For remote MCP servers requiring secure authentication, Cursor supports OAuth flows.&lt;/p&gt;

&lt;p&gt;The OAuth callback URL for Cursor is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cursor://anysphere.cursor-mcp/oauth/callback
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Many MCP servers support one-click installation from Cursor's MCP directory, which configures OAuth automatically.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using MCP Tools in Cursor Chat
&lt;/h2&gt;

&lt;p&gt;Once configured, MCP tools appear under "Available Tools" in the chat interface. The agent automatically uses them when relevant.&lt;/p&gt;

&lt;p&gt;Key features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Plan Mode support&lt;/strong&gt; — MCP tools work in Plan Mode too&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Toggle tools on/off&lt;/strong&gt; — Reduce noise by disabling unused tools&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tool approval&lt;/strong&gt; — Default safety feature requiring confirmation before tool execution&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auto-run option&lt;/strong&gt; — Skip approval for trusted servers (use cautiously)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Image responses&lt;/strong&gt; — MCP servers can return base64-encoded images rendered in chat&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Security Best Practices
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Verify the source&lt;/strong&gt; — Only install MCP servers from trusted developers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Review permissions&lt;/strong&gt; — Understand what data and APIs each server can access&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Limit API keys&lt;/strong&gt; — Create dedicated keys with minimal permissions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Audit code&lt;/strong&gt; — Review source code for critical integrations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use environment variables&lt;/strong&gt; — Never hardcode secrets in mcp.json&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Keep tool approval enabled&lt;/strong&gt; — The extra confirmation prevents costly mistakes&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Common Troubleshooting Tips
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Server Doesn't Appear in Available Tools
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Verify config file is at &lt;code&gt;.cursor/mcp.json&lt;/code&gt; (not &lt;code&gt;.vscode/mcp.json&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Check for invalid JSON syntax&lt;/li&gt;
&lt;li&gt;Restart Cursor or reload the window&lt;/li&gt;
&lt;/ul&gt;
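&lt;p&gt;Invalid JSON is the most common culprit. Python's built-in &lt;code&gt;json.tool&lt;/code&gt; (or any JSON validator) reports the exact error location:&lt;/p&gt;

```shell
# json.tool exits non-zero and names the line/column on a syntax error.
# Point it at your own config with: python3 -m json.tool .cursor/mcp.json
echo '{"mcpServers": {}}' | python3 -m json.tool
```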

&lt;h3&gt;
  
  
  Server Starts But Tools Don't Work
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Check dependencies are installed (Node.js, Python packages)&lt;/li&gt;
&lt;li&gt;Verify command path (use full path for virtual environments)&lt;/li&gt;
&lt;li&gt;Run the command manually in terminal to see errors&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Environment Variables Not Resolving
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Ensure variables are set in the right shell context&lt;/li&gt;
&lt;li&gt;Use &lt;code&gt;${env:VARIABLE_NAME}&lt;/code&gt; syntax (not &lt;code&gt;$VARIABLE_NAME&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Remote Server Connection Fails
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Verify server is online&lt;/li&gt;
&lt;li&gt;Check firewall/network connectivity&lt;/li&gt;
&lt;li&gt;Double-check authentication headers&lt;/li&gt;
&lt;li&gt;Check for CORS issues&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  OAuth Flow Fails
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Verify redirect URI is &lt;code&gt;cursor://anysphere.cursor-mcp/oauth/callback&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Try disconnecting and reconnecting&lt;/li&gt;
&lt;li&gt;Check scope configuration&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How Serenities AI Integrates with MCP
&lt;/h2&gt;

&lt;p&gt;If you're using &lt;a href="https://serenitiesai.com" rel="noopener noreferrer"&gt;Serenities AI&lt;/a&gt; as your AI development platform, it supports MCP natively. The same MCP servers you configure in Cursor work with Serenities AI's workflows.&lt;/p&gt;

&lt;p&gt;Serenities AI also offers built-in MCP server support — you can expose your Serenities AI workspace as an MCP server that Cursor connects to, creating a powerful two-way bridge between your AI tools.&lt;/p&gt;

&lt;h2&gt;
  
  
  Frequently Asked Questions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Is MCP available on Cursor's free plan?
&lt;/h3&gt;

&lt;p&gt;Yes. MCP support is available across all Cursor plans, including the free tier.&lt;/p&gt;

&lt;h3&gt;
  
  
  Can I use MCP servers written in languages other than Node.js and Python?
&lt;/h3&gt;

&lt;p&gt;Absolutely. Any language that can handle JSON-RPC communication works — Go, Rust, Java, Ruby, C#, and more.&lt;/p&gt;

&lt;h3&gt;
  
  
  What's the difference between project-level and global MCP configuration?
&lt;/h3&gt;

&lt;p&gt;Project-level config (&lt;code&gt;.cursor/mcp.json&lt;/code&gt;) only applies to the current project. Global config (&lt;code&gt;~/.cursor/mcp.json&lt;/code&gt;) applies to every project. Project config takes precedence if both define the same server name.&lt;/p&gt;

&lt;h3&gt;
  
  
  Is it safe to commit .cursor/mcp.json to version control?
&lt;/h3&gt;

&lt;p&gt;Yes, as long as you use config interpolation (&lt;code&gt;${env:API_KEY}&lt;/code&gt;) instead of hardcoding secrets.&lt;/p&gt;

&lt;h3&gt;
  
  
  Can MCP servers return images or other non-text data?
&lt;/h3&gt;

&lt;p&gt;Yes. MCP servers can return base64-encoded images rendered directly in Cursor's chat interface.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Setting up MCP servers in Cursor transforms your AI coding assistant from a smart autocomplete tool into a connected, context-aware development partner. With MCP, Cursor can query your databases, check your CI/CD pipelines, search documentation, manage tickets, and interact with virtually any external service.&lt;/p&gt;

&lt;p&gt;The MCP ecosystem is growing fast — learn the protocol once, and use it everywhere.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://serenitiesai.com/articles/mcp-server-cursor-setup-2026" rel="noopener noreferrer"&gt;Serenities AI&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>mcp</category>
      <category>ai</category>
      <category>tutorial</category>
      <category>programming</category>
    </item>
    <item>
      <title>Windsurf vs GitHub Copilot: Which AI Coding Tool Wins? (2026)</title>
      <dc:creator>Serenities AI</dc:creator>
      <pubDate>Sun, 08 Mar 2026 13:05:22 +0000</pubDate>
      <link>https://forem.com/serenitiesai/windsurf-vs-github-copilot-which-ai-coding-tool-wins-2026-3cj7</link>
      <guid>https://forem.com/serenitiesai/windsurf-vs-github-copilot-which-ai-coding-tool-wins-2026-3cj7</guid>
      <description>&lt;p&gt;Here's the bottom line: Windsurf and GitHub Copilot represent two fundamentally different visions for AI-assisted development. One wants to be your editor. The other wants to live inside your editor. After comparing pricing, features, and real-world workflows, the right choice depends entirely on whether you want a purpose-built AI IDE or an AI layer on top of the tools you already use.&lt;/p&gt;

&lt;h2&gt;
  
  
  Quick Verdict: Windsurf vs Copilot at a Glance (TL;DR)
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Category&lt;/th&gt;
&lt;th&gt;Windsurf&lt;/th&gt;
&lt;th&gt;GitHub Copilot&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Type&lt;/td&gt;
&lt;td&gt;Standalone AI IDE&lt;/td&gt;
&lt;td&gt;AI extension for multiple IDEs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Starting Price&lt;/td&gt;
&lt;td&gt;Free / $15/mo Pro&lt;/td&gt;
&lt;td&gt;Free / $10/mo Pro&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Key Differentiator&lt;/td&gt;
&lt;td&gt;Cascade agentic flow + SWE-1.5 model&lt;/td&gt;
&lt;td&gt;Agent mode + GitHub PR/issue integration&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;IDE Support&lt;/td&gt;
&lt;td&gt;Windsurf IDE only (VS Code-based)&lt;/td&gt;
&lt;td&gt;VS Code, Visual Studio, JetBrains, Eclipse, Xcode, CLI&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Best For&lt;/td&gt;
&lt;td&gt;Developers wanting a deeply integrated AI IDE experience&lt;/td&gt;
&lt;td&gt;Teams using GitHub who want AI across any IDE&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;MCP Support&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes (all plans)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Quick take:&lt;/strong&gt; Choose Windsurf if you want an all-in-one AI coding environment with deep agentic capabilities built into the editor itself. Choose GitHub Copilot if you're already embedded in the GitHub ecosystem, use JetBrains or other non-VS Code editors, or want the cheapest paid entry point at $10/month.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Windsurf?
&lt;/h2&gt;

&lt;p&gt;Windsurf positions itself as "tomorrow's editor, today" — a standalone AI-native IDE built on a VS Code foundation. Unlike extensions that bolt AI onto an existing editor, Windsurf was designed from the ground up to make AI a first-class citizen of the development experience.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cascade: The Agentic Chatbot
&lt;/h3&gt;

&lt;p&gt;Windsurf's flagship feature is Cascade, an agentic chatbot that goes well beyond simple code completion. Cascade operates as a collaborative coding partner that can understand your project context, suggest multi-file changes, and execute complex coding tasks through a conversational interface.&lt;/p&gt;

&lt;p&gt;The "agentic" part is key: Cascade doesn't just respond to prompts — it can proactively identify issues, suggest refactors, and chain together multi-step operations. Think of it less like a chatbot and more like a junior developer who can read your entire codebase instantly.&lt;/p&gt;

&lt;h3&gt;
  
  
  SWE-1.5: Windsurf's Proprietary Model
&lt;/h3&gt;

&lt;p&gt;One of Windsurf's unique advantages is SWE-1.5, their proprietary AI model specifically optimized for software engineering tasks. While Windsurf also provides access to all premium models, SWE-1.5 is purpose-built for coding workflows.&lt;/p&gt;

&lt;h3&gt;
  
  
  Getting Started with Windsurf
&lt;/h3&gt;

&lt;p&gt;Because it's built on VS Code, you can import your existing configuration from VS Code or Cursor — extensions, keybindings, settings, and themes all carry over. Project-level configuration is handled through &lt;code&gt;.windsurfrules&lt;/code&gt; files, which let you define AI behavior and coding standards.&lt;/p&gt;
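&lt;p&gt;A &lt;code&gt;.windsurfrules&lt;/code&gt; file is plain text; a hypothetical example (the rules themselves are illustrative, not a recommended set):&lt;/p&gt;

```text
# .windsurfrules (illustrative)
- Use TypeScript strict mode for all new files.
- Prefer functional React components with hooks.
- Run `npm test` before proposing a commit.
```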

&lt;p&gt;The free tier includes unlimited Cascade usage (with limited prompt credits), meaning you can get a genuine feel for the agentic coding experience before committing to a paid plan.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is GitHub Copilot?
&lt;/h2&gt;

&lt;p&gt;GitHub Copilot takes the opposite approach: rather than replacing your editor, it enhances it. Copilot is an AI coding assistant that works as an extension across VS Code, Visual Studio, JetBrains, Eclipse, Xcode, and even the CLI, GitHub.com, and GitHub Mobile.&lt;/p&gt;

&lt;h3&gt;
  
  
  Agent Mode and the Coding Agent
&lt;/h3&gt;

&lt;p&gt;Copilot's agent mode brings agentic capabilities directly into your IDE of choice, powered by GPT-5 mini. Agent mode is available in VS Code, Visual Studio, JetBrains, Eclipse, and Xcode. On the Free plan, you get 50 agent mode requests per month, while Pro and Pro+ users get unlimited access.&lt;/p&gt;

&lt;p&gt;The standout feature is Copilot's coding agent, which can be assigned GitHub issues and autonomously create pull requests to resolve them. Assign an issue, and Copilot will analyze the codebase, write the code, and open a PR for review. This is a game-changer for teams that live in GitHub.&lt;/p&gt;

&lt;h3&gt;
  
  
  Deep GitHub Ecosystem Integration
&lt;/h3&gt;

&lt;p&gt;Where Copilot truly shines is integration with the broader GitHub platform:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pull request reviews (Pro+) with AI-assisted code review feedback&lt;/li&gt;
&lt;li&gt;File diff reviews directly in your workflow&lt;/li&gt;
&lt;li&gt;Custom instructions via &lt;code&gt;instructions.md&lt;/code&gt; files for team standardization&lt;/li&gt;
&lt;li&gt;MCP server integration on all plans&lt;/li&gt;
&lt;li&gt;Third-party agent delegation (Claude, OpenAI Codex) on Pro+&lt;/li&gt;
&lt;/ul&gt;
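&lt;p&gt;Custom instructions are plain markdown checked into the repository (commonly at &lt;code&gt;.github/copilot-instructions.md&lt;/code&gt;); a minimal hypothetical example:&lt;/p&gt;

```markdown
<!-- .github/copilot-instructions.md (illustrative) -->
We use Java 21 and Gradle. Follow Google Java Style.
Every new endpoint needs an integration test under src/test.
Never hardcode secrets; configuration comes from environment variables.
```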

&lt;h3&gt;
  
  
  Free for Students and Open Source
&lt;/h3&gt;

&lt;p&gt;GitHub Copilot is free for verified students, teachers, and open source maintainers — a significant perk that Windsurf doesn't match.&lt;/p&gt;

&lt;h2&gt;
  
  
  Feature-by-Feature Comparison
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Windsurf&lt;/th&gt;
&lt;th&gt;GitHub Copilot&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Agentic AI&lt;/td&gt;
&lt;td&gt;Cascade — deeply integrated&lt;/td&gt;
&lt;td&gt;Agent mode (GPT-5 mini) across major IDEs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Code Completion&lt;/td&gt;
&lt;td&gt;Yes, with Fast Context&lt;/td&gt;
&lt;td&gt;Yes, inline suggestions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Chat Interface&lt;/td&gt;
&lt;td&gt;Cascade conversational UI&lt;/td&gt;
&lt;td&gt;Chat + agent mode&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Proprietary Model&lt;/td&gt;
&lt;td&gt;SWE-1.5 (software engineering optimized)&lt;/td&gt;
&lt;td&gt;GPT-5 mini (agent mode)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Premium Models Access&lt;/td&gt;
&lt;td&gt;All premium models included&lt;/td&gt;
&lt;td&gt;Via premium requests (plan-dependent)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;MCP Server Support&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes (all plans)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PR/Issue Integration&lt;/td&gt;
&lt;td&gt;No native integration&lt;/td&gt;
&lt;td&gt;Coding agent creates PRs from issues&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Code Review&lt;/td&gt;
&lt;td&gt;Not a standalone feature&lt;/td&gt;
&lt;td&gt;Pull request reviews + file diff reviews (Pro+)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Third-Party Agent Delegation&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Claude, OpenAI Codex (Pro+)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Custom Instructions&lt;/td&gt;
&lt;td&gt;.windsurfrules config files&lt;/td&gt;
&lt;td&gt;instructions.md + custom agents&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Deployment&lt;/td&gt;
&lt;td&gt;Unlimited deploys (Pro and above)&lt;/td&gt;
&lt;td&gt;Not a built-in feature&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Knowledge Base&lt;/td&gt;
&lt;td&gt;Enterprise only&lt;/td&gt;
&lt;td&gt;Via Copilot knowledge bases (Business/Enterprise)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Where Each Tool Wins
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Windsurf wins on depth of AI integration.&lt;/strong&gt; Because the entire IDE is built around AI, Cascade has deeper hooks into the editing experience than any extension can achieve. The SWE-1.5 model adds coding-specific optimization, and features like Fast Context ensure the AI understands your project with minimal setup. Windsurf also wins on deployment: unlimited deploys from the Pro plan upward create a streamlined code-to-production workflow.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitHub Copilot wins on ecosystem breadth.&lt;/strong&gt; The coding agent that creates PRs from issues is a fundamentally different workflow. Pull request reviews add another layer of value beyond the editor. Multi-IDE support is a decisive advantage — if your team uses JetBrains, Xcode, and VS Code, Copilot works for everyone.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pricing Comparison (2026)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Windsurf Pricing
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Plan&lt;/th&gt;
&lt;th&gt;Price&lt;/th&gt;
&lt;th&gt;Credits&lt;/th&gt;
&lt;th&gt;Key Features&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Free&lt;/td&gt;
&lt;td&gt;$0/mo&lt;/td&gt;
&lt;td&gt;Limited prompt credits&lt;/td&gt;
&lt;td&gt;Unlimited Cascade, all premium models&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Pro&lt;/td&gt;
&lt;td&gt;$15/mo&lt;/td&gt;
&lt;td&gt;500 credits/mo (+$10 for 250 add-on)&lt;/td&gt;
&lt;td&gt;Unlimited deploys, Fast Context, SWE-1.5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Teams&lt;/td&gt;
&lt;td&gt;$30/user/mo&lt;/td&gt;
&lt;td&gt;500 credits/user/mo&lt;/td&gt;
&lt;td&gt;Centralized billing, admin dashboard&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Enterprise&lt;/td&gt;
&lt;td&gt;Custom&lt;/td&gt;
&lt;td&gt;1,000 credits/user/mo&lt;/td&gt;
&lt;td&gt;Knowledge base, SSO, RBAC&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  GitHub Copilot Pricing
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Plan&lt;/th&gt;
&lt;th&gt;Price&lt;/th&gt;
&lt;th&gt;Premium Requests&lt;/th&gt;
&lt;th&gt;Key Features&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Free&lt;/td&gt;
&lt;td&gt;$0/mo&lt;/td&gt;
&lt;td&gt;50/mo&lt;/td&gt;
&lt;td&gt;Chat, agent mode (50/mo), MCP, custom instructions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Pro&lt;/td&gt;
&lt;td&gt;$10/mo ($100/yr)&lt;/td&gt;
&lt;td&gt;300/mo&lt;/td&gt;
&lt;td&gt;Unlimited agent mode, all Free features&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Pro+&lt;/td&gt;
&lt;td&gt;$39/mo ($390/yr)&lt;/td&gt;
&lt;td&gt;1,500/mo (+$0.04/extra)&lt;/td&gt;
&lt;td&gt;PR reviews, third-party agents, app modernization&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Pricing Analysis
&lt;/h3&gt;

&lt;p&gt;At the paid entry point, &lt;strong&gt;Copilot wins on price&lt;/strong&gt;: $10/month vs Windsurf's $15/month. Copilot Pro also offers annual billing at $100/year (effectively $8.33/month).&lt;/p&gt;

&lt;p&gt;For power users, the comparison gets more nuanced. Windsurf Pro at $15/month gives 500 credits with $10 add-ons. Copilot Pro+ at $39/month provides 1,500 premium requests with overflow at $0.04 per request.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Copilot is free for verified students, teachers, and open source maintainers&lt;/strong&gt; — if you qualify, the pricing comparison is moot.&lt;/p&gt;

&lt;h2&gt;
  
  
  IDE Support and Ecosystem
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Windsurf: One IDE to Rule Them All
&lt;/h3&gt;

&lt;p&gt;Windsurf is its own IDE built on VS Code. You can import existing VS Code or Cursor configuration. The trade-off: you &lt;strong&gt;must&lt;/strong&gt; use the Windsurf editor. If your team uses JetBrains, Xcode, or Visual Studio, Windsurf isn't an option unless everyone switches.&lt;/p&gt;

&lt;p&gt;The upside is a more tightly integrated AI experience — Cascade has deeper hooks into the editing environment than any extension could achieve.&lt;/p&gt;

&lt;h3&gt;
  
  
  GitHub Copilot: AI Everywhere You Code
&lt;/h3&gt;

&lt;p&gt;Copilot supports:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;VS Code&lt;/strong&gt; — Full support including agent mode&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Visual Studio&lt;/strong&gt; — Full support including agent mode&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;JetBrains IDEs&lt;/strong&gt; — Full support including agent mode&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Eclipse&lt;/strong&gt; — Full support including agent mode&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Xcode&lt;/strong&gt; — Full support including agent mode&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub CLI&lt;/strong&gt; — AI-assisted command line&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub.com&lt;/strong&gt; — AI in the browser&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub Mobile&lt;/strong&gt; — AI on the go&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A single Copilot subscription covers your entire development workflow — from writing code in JetBrains to reviewing PRs on GitHub.com to quick fixes on your phone.&lt;/p&gt;

&lt;h2&gt;
  
  
  Real-World Use Cases: When to Pick Each Tool
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Pick Windsurf When...
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;You're a solo developer or small team&lt;/strong&gt; building web applications — Cascade + SWE-1.5 + unlimited deploys creates a seamless code-to-production pipeline&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;You want the deepest AI integration&lt;/strong&gt; — the AI experience is more cohesive than any extension&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;You're coming from VS Code or Cursor&lt;/strong&gt; — import your existing config with zero setup cost&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;You value a coding-optimized model&lt;/strong&gt; — SWE-1.5 is purpose-built for software engineering tasks&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Pick GitHub Copilot When...
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Your team lives in the GitHub ecosystem&lt;/strong&gt; — coding agent creates PRs from issues, AI-assisted PR reviews&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;You use JetBrains, Xcode, Eclipse, or Visual Studio&lt;/strong&gt; — Windsurf doesn't support these&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;You're a student, teacher, or open source maintainer&lt;/strong&gt; — free Copilot access&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;You want multi-model flexibility&lt;/strong&gt; — Pro+ delegates to Claude, OpenAI Codex, and GPT-5 mini&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Budget is your primary concern&lt;/strong&gt; — $10/month (effectively $8.33/month with annual billing) is the cheapest paid option&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Frequently Asked Questions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Is Windsurf better than GitHub Copilot for beginners?
&lt;/h3&gt;

&lt;p&gt;It depends on your setup. Windsurf's free tier with unlimited Cascade is generous for learning. But if you're already in a specific IDE like JetBrains or Xcode, Copilot works where you already are. Copilot is also free for students and teachers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Can I use Windsurf and GitHub Copilot together?
&lt;/h3&gt;

&lt;p&gt;Not in the same editor: Copilot can't run inside Windsurf, since Windsurf is a standalone IDE. But you could use Windsurf for focused AI-driven sessions and keep Copilot in your JetBrains or Visual Studio environments for other projects.&lt;/p&gt;

&lt;h3&gt;
  
  
  Which tool has better code completion?
&lt;/h3&gt;

&lt;p&gt;Both offer strong completion through different approaches. In practice, the quality difference for standard code completion is marginal — the real differentiation is in agentic capabilities and ecosystem integration.&lt;/p&gt;

&lt;h3&gt;
  
  
  Is Windsurf vs Copilot pricing worth it compared to Cursor?
&lt;/h3&gt;

&lt;p&gt;For reference: Cursor Pro starts at $20/month, Windsurf Pro at $15/month, and Copilot Pro at $10/month. All three offer free tiers, so try each and see which workflow clicks.&lt;/p&gt;

&lt;h3&gt;
  
  
  Do both tools support MCP (Model Context Protocol)?
&lt;/h3&gt;

&lt;p&gt;Yes. Both support MCP server integration for connecting external tools, databases, and APIs. Copilot includes MCP support on all plans, including the free tier.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Verdict
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;There is no universal winner.&lt;/strong&gt; These tools serve different philosophies and different developers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Choose Windsurf&lt;/strong&gt; if you want a purpose-built AI IDE with the deepest agentic integration, are comfortable with a VS Code-based editor, and value built-in deployment and SWE-1.5.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Choose GitHub Copilot&lt;/strong&gt; if you need multi-IDE support, live in the GitHub ecosystem, want AI-assisted PR reviews and a coding agent for issues, or want the cheapest paid option at $10/month.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://serenitiesai.com/articles/windsurf-vs-github-copilot-2026" rel="noopener noreferrer"&gt;serenitiesai.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>productivity</category>
      <category>vscode</category>
    </item>
    <item>
      <title>Claude Code Hooks Guide 2026: Automate Your AI Coding Workflow</title>
      <dc:creator>Serenities AI</dc:creator>
      <pubDate>Sat, 07 Mar 2026 14:31:26 +0000</pubDate>
      <link>https://forem.com/serenitiesai/claude-code-hooks-guide-2026-automate-your-ai-coding-workflow-dde</link>
      <guid>https://forem.com/serenitiesai/claude-code-hooks-guide-2026-automate-your-ai-coding-workflow-dde</guid>
      <description>&lt;p&gt;What if Claude Code could automatically format your files after every edit, block dangerous shell commands before they execute, and run your test suite whenever code changes — all without you lifting a finger?&lt;/p&gt;

&lt;p&gt;That's exactly what Claude Code hooks do. Hooks are lifecycle event listeners that let you attach custom logic to specific moments in Claude Code's execution pipeline. They intercept actions at precisely the right time — before a tool runs, after it succeeds, when a session starts, or when Claude finishes responding.&lt;/p&gt;

&lt;p&gt;Think of hooks as middleware for your AI coding assistant. They give you programmatic control over Claude's behavior without modifying Claude itself.&lt;/p&gt;

&lt;p&gt;This guide is the definitive resource on Claude Code hooks in 2026. We'll cover all 18 hook events, four hook types, configuration locations, and five production-ready recipes.&lt;/p&gt;

&lt;h2&gt;
  
  
  What You'll Learn
&lt;/h2&gt;

&lt;p&gt;By the end of this guide, you'll understand the complete hook system. You'll know every lifecycle event, how to configure each hook type, and how data flows through the hook pipeline — from stdin JSON input to stdout decisions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Claude Code installed and working&lt;/strong&gt; — Run &lt;code&gt;claude&lt;/code&gt; from your terminal&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;jq installed&lt;/strong&gt; — &lt;code&gt;sudo apt install jq&lt;/code&gt; or &lt;code&gt;brew install jq&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Basic terminal/shell knowledge&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A project directory&lt;/strong&gt; — &lt;code&gt;mkdir ~/hooks-playground &amp;amp;&amp;amp; cd ~/hooks-playground&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How Hooks Work: The stdin JSON Pattern
&lt;/h2&gt;

&lt;p&gt;Command hooks receive event data as JSON on stdin; despite what you may see in older examples, no &lt;code&gt;$CLAUDE_TOOL_INPUT&lt;/code&gt; environment variable exists.&lt;/p&gt;

&lt;p&gt;Here's what the JSON input looks like for a PreToolUse event:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"hook_event_name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"PreToolUse"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"tool_name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Bash"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"tool_input"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ls -la /home/user/project"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"session_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"abc123..."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"transcript_path"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/path/to/transcript"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"cwd"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/home/user/project"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"permission_mode"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"default"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Reading it in bash:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;
&lt;span class="nv"&gt;INPUT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;cat&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;TOOL_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$INPUT&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.tool_name'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;COMMAND&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$INPUT&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.tool_input.command'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
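&lt;p&gt;Since a hook is just a script reading stdin, you can exercise the parsing logic outside Claude Code by piping in a hand-written event payload. A minimal sketch (the payload mirrors the PreToolUse example above; requires &lt;code&gt;jq&lt;/code&gt;):&lt;/p&gt;

```shell
#!/bin/sh
# Simulate the JSON Claude Code would send on stdin, then parse it
# exactly the way a command hook would.
INPUT='{"hook_event_name":"PreToolUse","tool_name":"Bash","tool_input":{"command":"ls -la"}}'
TOOL_NAME=$(echo "$INPUT" | jq -r '.tool_name')
COMMAND=$(echo "$INPUT" | jq -r '.tool_input.command')
echo "tool=$TOOL_NAME command=$COMMAND"
```

&lt;p&gt;Running this prints the extracted fields, confirming your &lt;code&gt;jq&lt;/code&gt; paths are right before you wire the script into a real hook.&lt;/p&gt;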



&lt;h3&gt;
  
  
  Output: Exit Codes and stdout JSON
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Exit Code&lt;/th&gt;
&lt;th&gt;Meaning&lt;/th&gt;
&lt;th&gt;Behavior&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;Allow&lt;/td&gt;
&lt;td&gt;Proceeds normally&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;Block&lt;/td&gt;
&lt;td&gt;Blocks the action (for PreToolUse, the tool call); stderr is fed back to Claude&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Other&lt;/td&gt;
&lt;td&gt;Error&lt;/td&gt;
&lt;td&gt;Non-blocking error; stderr is shown to the user and execution continues&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;For granular control, output JSON to stdout:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;jq &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s1"&gt;'{
  hookSpecificOutput: {
    hookEventName: "PreToolUse",
    permissionDecision: "deny",
    permissionDecisionReason: "Destructive command blocked"
  }
}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  All 18 Hook Events
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Event&lt;/th&gt;
&lt;th&gt;When It Fires&lt;/th&gt;
&lt;th&gt;Matchers?&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;SessionStart&lt;/td&gt;
&lt;td&gt;Session begins/resumes/clears&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;InstructionsLoaded&lt;/td&gt;
&lt;td&gt;CLAUDE.md or rules loaded&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;UserPromptSubmit&lt;/td&gt;
&lt;td&gt;Before Claude processes prompt&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PreToolUse&lt;/td&gt;
&lt;td&gt;Before tool executes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PermissionRequest&lt;/td&gt;
&lt;td&gt;Permission dialog appears&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PostToolUse&lt;/td&gt;
&lt;td&gt;After tool succeeds&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PostToolUseFailure&lt;/td&gt;
&lt;td&gt;After tool fails&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Notification&lt;/td&gt;
&lt;td&gt;Claude sends notification&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SubagentStart&lt;/td&gt;
&lt;td&gt;Subagent spawned&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SubagentStop&lt;/td&gt;
&lt;td&gt;Subagent terminates&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Stop&lt;/td&gt;
&lt;td&gt;Claude finishes responding&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;TeammateIdle&lt;/td&gt;
&lt;td&gt;Teammate going idle&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;TaskCompleted&lt;/td&gt;
&lt;td&gt;Task marked complete&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ConfigChange&lt;/td&gt;
&lt;td&gt;Config file changes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;WorktreeCreate&lt;/td&gt;
&lt;td&gt;Git worktree created&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;WorktreeRemove&lt;/td&gt;
&lt;td&gt;Git worktree removed&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PreCompact&lt;/td&gt;
&lt;td&gt;Before compaction&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SessionEnd&lt;/td&gt;
&lt;td&gt;Session terminates&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Hook Configuration Structure
&lt;/h2&gt;

&lt;p&gt;Three-level JSON: event → matcher group → hook handler.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"hooks"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"EVENT_NAME"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"matcher"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"REGEX_PATTERN"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"hooks"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"your-script.sh"&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Four Hook Types
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Type&lt;/th&gt;
&lt;th&gt;Required&lt;/th&gt;
&lt;th&gt;Optional&lt;/th&gt;
&lt;th&gt;Timeout&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;command&lt;/td&gt;
&lt;td&gt;type, command&lt;/td&gt;
&lt;td&gt;timeout, async, statusMessage, once&lt;/td&gt;
&lt;td&gt;600s&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;http&lt;/td&gt;
&lt;td&gt;type, url&lt;/td&gt;
&lt;td&gt;headers, allowedEnvVars, timeout&lt;/td&gt;
&lt;td&gt;30s&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;prompt&lt;/td&gt;
&lt;td&gt;type, prompt&lt;/td&gt;
&lt;td&gt;model, timeout&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;agent&lt;/td&gt;
&lt;td&gt;type, prompt&lt;/td&gt;
&lt;td&gt;model, timeout&lt;/td&gt;
&lt;td&gt;60s&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
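&lt;p&gt;As an example of a non-command type, an &lt;code&gt;http&lt;/code&gt; hook that posts each Stop event to an internal endpoint might be configured like this (the URL and token are placeholders; Stop takes no matcher, so the matcher key is omitted):&lt;/p&gt;

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "http",
            "url": "https://hooks.internal.example/claude-stop",
            "headers": { "Authorization": "Bearer YOUR_TOKEN" },
            "timeout": 10
          }
        ]
      }
    ]
  }
}
```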

&lt;h2&gt;
  
  
  Your First Hook: Blocking rm -rf /
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"hooks"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"PreToolUse"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"matcher"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Bash"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"hooks"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"COMMAND=$(cat | jq -r '.tool_input.command'); if echo &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;$COMMAND&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt; | grep -qE 'rm&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;s+-rf&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;s+/'; then jq -n '{hookSpecificOutput: {hookEventName: &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;PreToolUse&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;, permissionDecision: &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;deny&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;, permissionDecisionReason: &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Blocked: rm -rf / detected&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;}}'; else exit 0; fi"&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  5 Production-Ready Recipes
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Recipe 1: Auto-Format on File Save
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;
&lt;span class="c"&gt;# .claude/hooks/auto-format.sh&lt;/span&gt;
&lt;span class="nv"&gt;INPUT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;cat&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;FILE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$INPUT&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.tool_input.file_path // .tool_input.path // empty'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-z&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then &lt;/span&gt;&lt;span class="nb"&gt;exit &lt;/span&gt;0&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;fi

&lt;/span&gt;&lt;span class="nv"&gt;EXT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;FILE&lt;/span&gt;&lt;span class="p"&gt;##*.&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$EXT&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="k"&gt;in
  &lt;/span&gt;js|jsx|ts|tsx|json|css|md&lt;span class="p"&gt;)&lt;/span&gt; npx prettier &lt;span class="nt"&gt;--write&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; 2&amp;gt;/dev/null &lt;span class="p"&gt;;;&lt;/span&gt;
  py&lt;span class="p"&gt;)&lt;/span&gt; black &lt;span class="nt"&gt;--quiet&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; 2&amp;gt;/dev/null &lt;span class="p"&gt;;;&lt;/span&gt;
  go&lt;span class="p"&gt;)&lt;/span&gt; gofmt &lt;span class="nt"&gt;-w&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; 2&amp;gt;/dev/null &lt;span class="p"&gt;;;&lt;/span&gt;
  rs&lt;span class="p"&gt;)&lt;/span&gt; rustfmt &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; 2&amp;gt;/dev/null &lt;span class="p"&gt;;;&lt;/span&gt;
&lt;span class="k"&gt;esac&lt;/span&gt;
&lt;span class="nb"&gt;exit &lt;/span&gt;0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Config:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"hooks"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"PostToolUse"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"matcher"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Edit|Write|MultiEdit"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"hooks"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;".claude/hooks/auto-format.sh"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"statusMessage"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Formatting..."&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}]&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Recipe 2: Block Dangerous Commands
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;
&lt;span class="nv"&gt;COMMAND&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.tool_input.command // empty'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-z&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$COMMAND&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then &lt;/span&gt;&lt;span class="nb"&gt;exit &lt;/span&gt;0&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;fi

&lt;/span&gt;&lt;span class="nv"&gt;PATTERNS&lt;/span&gt;&lt;span class="o"&gt;=(&lt;/span&gt;&lt;span class="s1"&gt;'rm\s+-rf\s+/'&lt;/span&gt; &lt;span class="s1"&gt;'mkfs\.'&lt;/span&gt; &lt;span class="s1"&gt;'dd\s+if='&lt;/span&gt; &lt;span class="s1"&gt;'chmod\s+-R\s+777\s+/'&lt;/span&gt; &lt;span class="s1"&gt;'curl.*\|\s*bash'&lt;/span&gt; &lt;span class="s1"&gt;'&amp;gt;\s*/dev/sd[a-z]'&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;for &lt;/span&gt;p &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;PATTERNS&lt;/span&gt;&lt;span class="p"&gt;[@]&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
  if &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$COMMAND&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-qE&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$p&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;jq &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="nt"&gt;--arg&lt;/span&gt; r &lt;span class="s2"&gt;"Blocked: &lt;/span&gt;&lt;span class="nv"&gt;$p&lt;/span&gt;&lt;span class="s2"&gt; detected"&lt;/span&gt; &lt;span class="s1"&gt;'{hookSpecificOutput:{hookEventName:"PreToolUse",permissionDecision:"deny",permissionDecisionReason:$r}}'&lt;/span&gt;
    &lt;span class="nb"&gt;exit &lt;/span&gt;0
  &lt;span class="k"&gt;fi
done
&lt;/span&gt;&lt;span class="nb"&gt;exit &lt;/span&gt;0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
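To wire this script up, register it as a PreToolUse hook on the Bash tool (the script path below is an assumption; point it at wherever you saved the script):

```json
{
  "hooks": {
    "PreToolUse": [{
      "matcher": "Bash",
      "hooks": [{
        "type": "command",
        "command": ".claude/hooks/block-dangerous.sh"
      }]
    }]
  }
}
```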



&lt;h3&gt;
  
  
  Recipe 3: Auto-Run Tests (Async)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;
&lt;span class="nv"&gt;INPUT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;cat&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;FILE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$INPUT&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.tool_input.file_path // .tool_input.path // empty'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-z&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then &lt;/span&gt;&lt;span class="nb"&gt;exit &lt;/span&gt;0&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;fi
if&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-qE&lt;/span&gt; &lt;span class="s1"&gt;'\.(js|ts|py|go|rs)$'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then &lt;/span&gt;&lt;span class="nb"&gt;exit &lt;/span&gt;0&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;fi

if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"package.json"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;npx jest &lt;span class="nt"&gt;--findRelatedTests&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;--no-coverage&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; /tmp/claude-tests.txt 2&amp;gt;&amp;amp;1
&lt;span class="k"&gt;elif&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"pyproject.toml"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;python &lt;span class="nt"&gt;-m&lt;/span&gt; pytest &lt;span class="nt"&gt;--tb&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;short &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; /tmp/claude-tests.txt 2&amp;gt;&amp;amp;1
&lt;span class="k"&gt;fi
&lt;/span&gt;&lt;span class="nb"&gt;exit &lt;/span&gt;0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Set &lt;code&gt;"async": true&lt;/code&gt; in this hook's config entry so Claude keeps working while the tests run in the background.&lt;/p&gt;
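A config for this recipe might look like the following (the script path is an assumption):

```json
{
  "hooks": {
    "PostToolUse": [{
      "matcher": "Edit|Write|MultiEdit",
      "hooks": [{
        "type": "command",
        "command": ".claude/hooks/run-tests.sh",
        "async": true,
        "statusMessage": "Running tests..."
      }]
    }]
  }
}
```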

&lt;h3&gt;
  
  
  Recipe 4: Log All Tool Usage
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;
&lt;span class="nv"&gt;INPUT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;cat&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$HOME&lt;/span&gt;&lt;span class="s2"&gt;/.claude/logs"&lt;/span&gt;
&lt;span class="nv"&gt;TOOL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$INPUT&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.tool_name // "unknown"'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;SESSION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$INPUT&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.session_id // "unknown"'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;TINPUT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$INPUT&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | jq &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s1"&gt;'.tool_input // {}'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

jq &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="nt"&gt;--arg&lt;/span&gt; ts &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt; &lt;span class="nt"&gt;-Iseconds&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;--arg&lt;/span&gt; t &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$TOOL&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;--arg&lt;/span&gt; s &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$SESSION&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;--argjson&lt;/span&gt; i &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$TINPUT&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="s1"&gt;'{timestamp:$ts,tool:$t,session:$s,input:$i}'&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$HOME&lt;/span&gt;&lt;span class="s2"&gt;/.claude/logs/tool-usage.jsonl"&lt;/span&gt;
&lt;span class="nb"&gt;exit &lt;/span&gt;0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
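Once the log accumulates entries, a small jq pipeline makes it useful. This sketch counts invocations per tool, most-used first:

```shell
# Count how often each tool appears in the JSONL log written by the
# recipe above, sorted by frequency.
summarize_tool_usage() {
  jq -r '.tool' "$1" | sort | uniq -c | sort -rn
}

# Example: summarize_tool_usage "$HOME/.claude/logs/tool-usage.jsonl"
```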



&lt;h3&gt;
  
  
  Recipe 5: Custom Notifications
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;
&lt;span class="nv"&gt;INPUT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;cat&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;EVENT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$INPUT&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.hook_event_name'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$EVENT&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="k"&gt;in
  &lt;/span&gt;Notification&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nv"&gt;MSG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$INPUT&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.message // "Notification"'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="p"&gt;;;&lt;/span&gt;
  Stop&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nv"&gt;MSG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"Claude finished responding."&lt;/span&gt; &lt;span class="p"&gt;;;&lt;/span&gt;
&lt;span class="k"&gt;esac&lt;/span&gt;

&lt;span class="c"&gt;# Desktop&lt;/span&gt;
&lt;span class="nb"&gt;command&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; notify-send &amp;amp;&amp;gt;/dev/null &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; notify-send &lt;span class="s2"&gt;"Claude Code"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$MSG&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="c"&gt;# Slack&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$SLACK_WEBHOOK_URL&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; curl &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="nt"&gt;-X&lt;/span&gt; POST &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$SLACK_WEBHOOK_URL&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s1"&gt;'Content-Type: application/json'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;jq &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="nt"&gt;--arg&lt;/span&gt; t &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$MSG&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s1"&gt;'{text:$t}'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;true
exit &lt;/span&gt;0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
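Since the script branches on the event name, register it for both events (the filename is an assumption):

```json
{
  "hooks": {
    "Notification": [{
      "hooks": [{ "type": "command", "command": ".claude/hooks/notify.sh" }]
    }],
    "Stop": [{
      "hooks": [{ "type": "command", "command": ".claude/hooks/notify.sh" }]
    }]
  }
}
```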



&lt;h2&gt;
  
  
  Hook Locations
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Location&lt;/th&gt;
&lt;th&gt;Path&lt;/th&gt;
&lt;th&gt;Scope&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Global&lt;/td&gt;
&lt;td&gt;&lt;code&gt;~/.claude/settings.json&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;All projects&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Project shared&lt;/td&gt;
&lt;td&gt;&lt;code&gt;.claude/settings.json&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Team-wide&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Project local&lt;/td&gt;
&lt;td&gt;&lt;code&gt;.claude/settings.local.json&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Your machine&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Managed policy&lt;/td&gt;
&lt;td&gt;System-level &lt;code&gt;managed-settings.json&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Enterprise&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Advanced Patterns
&lt;/h2&gt;

&lt;h3&gt;
  
  
  MCP Tool Matching
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"matcher"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"mcp__database__execute_query"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"hooks"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;".claude/hooks/check-sql-safety.sh"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  The "once" Flag
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"npm ls --depth=0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"once"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"statusMessage"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Verifying dependencies..."&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Security Best Practices
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Never store secrets in configs — use &lt;code&gt;allowedEnvVars&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Use script files over inline commands&lt;/li&gt;
&lt;li&gt;Set appropriate timeouts&lt;/li&gt;
&lt;li&gt;Log hook actions for audit trails&lt;/li&gt;
&lt;li&gt;Test hooks locally first&lt;/li&gt;
&lt;li&gt;Use &lt;code&gt;/hooks&lt;/code&gt; to verify your hooks are loaded&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  FAQ
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Do hooks work with MCP?&lt;/strong&gt; Yes. MCP tools follow &lt;code&gt;mcp__&amp;lt;server&amp;gt;__&amp;lt;tool&amp;gt;&lt;/code&gt; naming.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Can hooks modify Claude's behavior?&lt;/strong&gt; Yes. PreToolUse hooks can block or allow actions before they run; PostToolUse hooks can trigger follow-up automation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Exit code 0 vs 2?&lt;/strong&gt; 0 = allow, 2 = block (PreToolUse).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to debug?&lt;/strong&gt; Use &lt;code&gt;/hooks&lt;/code&gt; command. Validate JSON with &lt;code&gt;python3 -c "import json; json.load(open('.claude/settings.json'))"&lt;/code&gt;.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://serenitiesai.com/articles/claude-code-hooks-guide-2026" rel="noopener noreferrer"&gt;serenitiesai.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>tutorial</category>
      <category>programming</category>
      <category>cli</category>
    </item>
    <item>
      <title>AI Agent Skills Guide 2026: Build Skills for 16+ AI Tools</title>
      <dc:creator>Serenities AI</dc:creator>
      <pubDate>Fri, 06 Mar 2026 20:11:52 +0000</pubDate>
      <link>https://forem.com/serenitiesai/ai-agent-skills-guide-2026-build-skills-for-16-ai-tools-1jea</link>
      <guid>https://forem.com/serenitiesai/ai-agent-skills-guide-2026-build-skills-for-16-ai-tools-1jea</guid>
      <description>&lt;p&gt;Imagine writing a set of instructions once and having it work across Claude Code, Cursor, OpenAI Codex, Gemini CLI, VS Code, and a dozen more AI tools. That's exactly what Agent Skills deliver — and they're rapidly becoming the "npm packages" of AI-assisted development.&lt;/p&gt;

&lt;p&gt;Announced by Anthropic on December 18, 2025, Agent Skills is an open standard that lets you package expertise, workflows, and automation into portable directories that any compatible AI tool can load and execute. As of March 2026, 16+ major AI tools have adopted the standard.&lt;/p&gt;

&lt;p&gt;This isn't a proprietary feature locked to one vendor. It's an industry-wide shift in how we customize and extend AI tools. Think of it like this: if AI coding assistants are the new IDEs, Agent Skills are the extensions marketplace — except one extension works everywhere.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Are Agent Skills?
&lt;/h2&gt;

&lt;p&gt;Agent Skills is "a simple, open format for giving agents new capabilities and expertise." At the most basic level, a skill is just a directory containing a SKILL.md file. That markdown file tells an AI agent what the skill does, how to use it, and what rules to follow.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Core Idea: Write Once, Works Everywhere
&lt;/h3&gt;

&lt;p&gt;Before Agent Skills, customizing AI tools meant writing tool-specific configuration. Your Cursor rules didn't work in Claude Code. Your Codex setup didn't transfer to Gemini CLI.&lt;/p&gt;

&lt;p&gt;Agent Skills changes that. One skill directory, one SKILL.md file, and it works across every tool that supports the standard — 16+ tools and growing.&lt;/p&gt;

&lt;h3&gt;
  
  
  Progressive Disclosure
&lt;/h3&gt;

&lt;p&gt;When an AI agent starts up, it doesn't read every skill's full content. It only loads the name and description from the YAML frontmatter. The agent scans those short descriptions, and only when a skill is actually needed does it load the full content.&lt;/p&gt;

&lt;p&gt;This means you can have dozens or even hundreds of skills installed with minimal context overhead, since only the short descriptions are loaded up front.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Directory Structure
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;my-skill/
├── SKILL.md          # Required — the skill definition
├── scripts/          # Optional — executable scripts
├── references/       # Optional — documentation, examples
└── assets/           # Optional — images, templates, configs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Which Tools Support Agent Skills?
&lt;/h2&gt;

&lt;p&gt;The adoption has been remarkable. Within three months, 16+ major AI tools added support:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;Category&lt;/th&gt;
&lt;th&gt;Developer&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Claude Code&lt;/td&gt;
&lt;td&gt;CLI Coding Agent&lt;/td&gt;
&lt;td&gt;Anthropic&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Claude (web/mobile)&lt;/td&gt;
&lt;td&gt;AI Assistant&lt;/td&gt;
&lt;td&gt;Anthropic&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cursor&lt;/td&gt;
&lt;td&gt;AI Code Editor&lt;/td&gt;
&lt;td&gt;Anysphere&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;OpenAI Codex&lt;/td&gt;
&lt;td&gt;CLI Coding Agent&lt;/td&gt;
&lt;td&gt;OpenAI&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Gemini CLI&lt;/td&gt;
&lt;td&gt;CLI Coding Agent&lt;/td&gt;
&lt;td&gt;Google&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Junie&lt;/td&gt;
&lt;td&gt;IDE Agent&lt;/td&gt;
&lt;td&gt;JetBrains&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GitHub Copilot&lt;/td&gt;
&lt;td&gt;AI Pair Programmer&lt;/td&gt;
&lt;td&gt;GitHub&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;VS Code&lt;/td&gt;
&lt;td&gt;Code Editor&lt;/td&gt;
&lt;td&gt;Microsoft&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;OpenHands&lt;/td&gt;
&lt;td&gt;Autonomous Agent&lt;/td&gt;
&lt;td&gt;Open Source&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Amp&lt;/td&gt;
&lt;td&gt;AI Coding Agent&lt;/td&gt;
&lt;td&gt;Sourcegraph&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Goose&lt;/td&gt;
&lt;td&gt;AI Agent&lt;/td&gt;
&lt;td&gt;Block&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Firebender&lt;/td&gt;
&lt;td&gt;AI Coding Tool&lt;/td&gt;
&lt;td&gt;Firebender&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Letta&lt;/td&gt;
&lt;td&gt;Agent Framework&lt;/td&gt;
&lt;td&gt;Letta&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;When Google, OpenAI, JetBrains, and GitHub all adopt the same standard, you know it's become the real deal.&lt;/p&gt;

&lt;h2&gt;
  
  
  The SKILL.md Format
&lt;/h2&gt;

&lt;p&gt;Every skill is defined by a SKILL.md file with two parts: YAML frontmatter and markdown content.&lt;/p&gt;

&lt;h3&gt;
  
  
  YAML Frontmatter Fields
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Field&lt;/th&gt;
&lt;th&gt;Required&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;name&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Max 64 chars, lowercase + hyphens&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;description&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Max 1024 chars — what agents read at startup&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;license&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;SPDX identifier (e.g., MIT)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;compatibility&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Which tools this skill targets&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;metadata&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Custom key-value pairs (author, version, tags)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;allowed-tools&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Restrict which tools the skill can access&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Example SKILL.md
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;code-review&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Performs&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;thorough&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;code&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;review&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;with&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;focus&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;on&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;security&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;vulnerabilities,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;performance&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;issues,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;and&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;adherence&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;to&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;team&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;coding&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;standards."&lt;/span&gt;
&lt;span class="na"&gt;license&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;MIT&lt;/span&gt;
&lt;span class="na"&gt;compatibility&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;claude-code&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;cursor&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;codex&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;author&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;your-team&lt;/span&gt;
  &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1.0.0&lt;/span&gt;
&lt;span class="na"&gt;allowed-tools&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;read&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;write&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;exec&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Creating Your First Skill: Step by Step
&lt;/h2&gt;

&lt;p&gt;Let's build a "commit-message" skill that generates conventional commit messages.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Create the Directory
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; my-project/.claude/skills/commit-message
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2: Write the SKILL.md
&lt;/h3&gt;

&lt;p&gt;Create &lt;code&gt;.claude/skills/commit-message/SKILL.md&lt;/code&gt; with name, description, and detailed instructions for analyzing git diffs, categorizing changes (feat/fix/refactor/docs/test/chore), and writing conventional commit messages.&lt;/p&gt;
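&lt;p&gt;A minimal sketch of what that file could contain — the frontmatter fields follow the format shown earlier, but the instruction text here is illustrative, not canonical:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;---
name: commit-message
description: "Generates Conventional Commits messages from staged git diffs."
allowed-tools:
  - read
  - exec
---

# Commit Message Skill

1. Run `git diff --staged` to see the pending changes.
2. Classify the change: feat, fix, refactor, docs, test, or chore.
3. Write a one-line summary in the form `type(scope): subject`, under 72 characters.
4. Add a body paragraph only when the "why" is not obvious from the diff.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;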

&lt;h3&gt;
  
  
  Step 3: Add Optional Scripts
&lt;/h3&gt;

&lt;p&gt;Skills can include executable scripts for validation, testing, or automation.&lt;/p&gt;
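&lt;p&gt;For example, a commit-message skill might bundle a small validation helper. This is a sketch — the file name &lt;code&gt;scripts/check-commit.sh&lt;/code&gt; and the regex are assumptions, not part of the standard:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;#!/usr/bin/env sh
# scripts/check-commit.sh (hypothetical): validate a Conventional Commits message.
check_commit() {
  # type, optional (scope), a colon and space, then a non-empty subject
  printf '%s' "$1" | grep -Eq '^(feat|fix|refactor|docs|test|chore)(\([a-z0-9-]+\))?: .+'
}

check_commit "feat(api): add pagination" &amp;&amp; echo "valid"
check_commit "fixed some stuff" || echo "invalid"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;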

&lt;h3&gt;
  
  
  Step 4: Test It
&lt;/h3&gt;

&lt;p&gt;Open your project in any supported tool and ask: "Write a commit message for my changes." The agent should detect and activate the skill automatically.&lt;/p&gt;

&lt;h2&gt;
  
  
  Claude Code Skills Deep Dive
&lt;/h2&gt;

&lt;p&gt;Claude Code has the most mature implementation with advanced features:&lt;/p&gt;

&lt;h3&gt;
  
  
  Skill Locations and Priority
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Priority&lt;/th&gt;
&lt;th&gt;Location&lt;/th&gt;
&lt;th&gt;Path&lt;/th&gt;
&lt;th&gt;Use Case&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1 (Highest)&lt;/td&gt;
&lt;td&gt;Enterprise&lt;/td&gt;
&lt;td&gt;Managed by org admin&lt;/td&gt;
&lt;td&gt;Company-wide standards&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;Personal&lt;/td&gt;
&lt;td&gt;~/.claude/skills/&lt;/td&gt;
&lt;td&gt;Your workflow preferences&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;Project&lt;/td&gt;
&lt;td&gt;.claude/skills/&lt;/td&gt;
&lt;td&gt;Project-specific standards&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4 (Lowest)&lt;/td&gt;
&lt;td&gt;Plugin&lt;/td&gt;
&lt;td&gt;Installed via skills.sh&lt;/td&gt;
&lt;td&gt;Community skills&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
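&lt;p&gt;Assuming the priority column means higher levels override lower ones when skill names collide, a collision would resolve like this (paths illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;~/.claude/skills/code-review/SKILL.md           # personal (priority 2) -- used
my-project/.claude/skills/code-review/SKILL.md  # project  (priority 3) -- shadowed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;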

&lt;h3&gt;
  
  
  Bundled Skills
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;/simplify&lt;/strong&gt; — Spawns three parallel review agents that analyze readability, performance, and correctness&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;/batch&lt;/strong&gt; — Spawns 5-30 worktree agents for large-scale codebase changes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;/debug&lt;/strong&gt; — Interactive debugging workflow that traces through code and proposes fixes&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Skills Across Other Tools
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Cursor
&lt;/h3&gt;

&lt;p&gt;Cursor reads skills from &lt;code&gt;.claude/skills/&lt;/code&gt; in your project. Migrate your &lt;code&gt;.cursorrules&lt;/code&gt; rules into the Agent Skills format and they will work in Cursor and in every other supporting tool.&lt;/p&gt;

&lt;h3&gt;
  
  
  OpenAI Codex
&lt;/h3&gt;

&lt;p&gt;Codex adopted Agent Skills as its customization format — a significant endorsement from Anthropic's biggest competitor.&lt;/p&gt;

&lt;h3&gt;
  
  
  Gemini CLI
&lt;/h3&gt;

&lt;p&gt;Google optimized progressive disclosure for fast skill loading. Dozens of skills barely affect startup time.&lt;/p&gt;

&lt;h3&gt;
  
  
  VS Code
&lt;/h3&gt;

&lt;p&gt;Skills placed in &lt;code&gt;.claude/skills/&lt;/code&gt; are automatically picked up by VS Code's AI assistant. Since VS Code is the most popular editor, this reaches the largest audience.&lt;/p&gt;

&lt;h2&gt;
  
  
  5 Practical Skill Recipes
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Code Review&lt;/strong&gt; — Security, performance, and style checks with severity-rated output&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Test Generator&lt;/strong&gt; — Generates comprehensive test suites for Jest, Vitest, Pytest, or Go&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Documentation Generator&lt;/strong&gt; — Creates/updates README, API docs, and changelogs from code&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security Audit&lt;/strong&gt; — OWASP Top 10 analysis with dependency scanning and secrets detection&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Codebase Onboarding&lt;/strong&gt; — Helps new developers understand unfamiliar codebases quickly&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Skills Management with skills.sh
&lt;/h2&gt;

&lt;p&gt;The community has built &lt;a href="https://skills.sh" rel="noopener noreferrer"&gt;skills.sh&lt;/a&gt; — think of it as "npm for agent skills." You can discover, install, and manage skills from a central registry.&lt;/p&gt;

&lt;h2&gt;
  
  
  Security: 341 Malicious Skills Discovered
&lt;/h2&gt;

&lt;p&gt;Any open ecosystem carries security risks: as of early 2026, 341 malicious skills have been discovered. Protect yourself:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Always set &lt;strong&gt;allowed-tools&lt;/strong&gt; for community skills&lt;/li&gt;
&lt;li&gt;Review SKILL.md content before installing&lt;/li&gt;
&lt;li&gt;Use enterprise-level skills for sensitive projects&lt;/li&gt;
&lt;li&gt;Keep skills version-controlled&lt;/li&gt;
&lt;/ul&gt;
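&lt;p&gt;The first point in that checklist can be enforced directly in the skill's frontmatter. A sketch of a locked-down configuration for an untrusted community skill (the skill name is made up):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;---
name: community-linter
description: "Community skill pinned to read-only access."
allowed-tools:
  - read
# no write or exec: the skill can inspect files but cannot modify them or run commands
---
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;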

&lt;h2&gt;
  
  
  How Serenities AI Uses Agent Skills
&lt;/h2&gt;

&lt;p&gt;At &lt;a href="https://serenitiesai.com" rel="noopener noreferrer"&gt;Serenities AI&lt;/a&gt;, we use Agent Skills to standardize workflows across our integrated platform. Since Serenities AI lets you connect your existing AI subscription (ChatGPT Plus, Claude Pro) instead of paying per-token API costs, skills become even more powerful — you can run complex multi-step skill workflows without worrying about token costs eating into your budget.&lt;/p&gt;

&lt;h2&gt;
  
  
  FAQ
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Do Agent Skills work with all AI tools?
&lt;/h3&gt;

&lt;p&gt;As of March 2026, 16+ tools support the standard, including Claude Code, Cursor, OpenAI Codex, Gemini CLI, VS Code, JetBrains Junie, and GitHub Copilot.&lt;/p&gt;

&lt;h3&gt;
  
  
  Where should I put my skills?
&lt;/h3&gt;

&lt;p&gt;For project skills: &lt;code&gt;.claude/skills/&lt;/code&gt;. For personal skills: &lt;code&gt;~/.claude/skills/&lt;/code&gt;. The directory name is part of the standard, not brand-specific.&lt;/p&gt;
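&lt;p&gt;Put together, a typical setup looks like this (project and skill names illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;~/.claude/skills/            # personal: follows you across projects
  commit-message/SKILL.md
my-project/.claude/skills/   # project: checked into the repo, shared with the team
  code-review/SKILL.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;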

&lt;h3&gt;
  
  
  Can skills run code?
&lt;/h3&gt;

&lt;p&gt;Yes. Skills can include scripts in the &lt;code&gt;scripts/&lt;/code&gt; directory. Use the &lt;code&gt;allowed-tools&lt;/code&gt; field to restrict what untrusted skills can access.&lt;/p&gt;
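&lt;p&gt;A skill that ships a script simply keeps it alongside its SKILL.md (layout illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;.claude/skills/test-generator/
  SKILL.md           # frontmatter (including allowed-tools) + instructions
  scripts/
    run-tests.sh     # optional executable the agent may invoke
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;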

&lt;h3&gt;
  
  
  Are Agent Skills free?
&lt;/h3&gt;

&lt;p&gt;The standard itself is open and free. Individual skills may have their own licenses.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bottom Line
&lt;/h2&gt;

&lt;p&gt;Agent Skills represent a fundamental shift in how we customize AI tools: write once, use everywhere, with progressive disclosure keeping things fast. Whether you're a solo developer or managing a team of hundreds, skills give you portable, version-controlled AI customization that moves with you across tools.&lt;/p&gt;

&lt;p&gt;The ecosystem is still young, but with 16+ tools already on board, this is the standard that's going to define how we extend AI assistants for years to come.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://serenitiesai.com/articles/agent-skills-guide-2026" rel="noopener noreferrer"&gt;serenitiesai.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>tutorial</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
