<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Shadab Khan</title>
    <description>The latest articles on Forem by Shadab Khan (@shad_tech).</description>
    <link>https://forem.com/shad_tech</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F9291%2F0WLvbAhG.jpeg</url>
      <title>Forem: Shadab Khan</title>
      <link>https://forem.com/shad_tech</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/shad_tech"/>
    <language>en</language>
    <item>
      <title>Turn Any OpenAPI Spec into Safe MCP Tools for Claude in Seconds</title>
      <dc:creator>Shadab Khan</dc:creator>
      <pubDate>Wed, 06 May 2026 02:38:49 +0000</pubDate>
      <link>https://forem.com/shad_tech/turn-any-openapi-spec-into-safe-mcp-tools-for-claude-in-seconds-bf4</link>
      <guid>https://forem.com/shad_tech/turn-any-openapi-spec-into-safe-mcp-tools-for-claude-in-seconds-bf4</guid>
      <description>&lt;p&gt;If you've tried connecting REST APIs to AI agents, you've probably hit this moment:&lt;/p&gt;

&lt;p&gt;The MCP server starts. Claude sees the tools. You ask it to fetch some data.&lt;/p&gt;

&lt;p&gt;And it calls the wrong endpoint. Or fires a &lt;code&gt;DELETE&lt;/code&gt; it shouldn't. Or gets confused between two tools that look similar and just... picks one.&lt;/p&gt;

&lt;p&gt;Your spec passed every linter. The conversion worked fine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The agent still broke.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is the gap nobody talks about. OpenAPI specs were designed for human developers who can read vague documentation, infer intent, and apply common sense. AI agents can't do any of that. They need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Every endpoint to have a clear, unambiguous name (&lt;code&gt;operationId&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Descriptions that say &lt;em&gt;when&lt;/em&gt; to call a tool, not just &lt;em&gt;what&lt;/em&gt; it does&lt;/li&gt;
&lt;li&gt;Explicit safety signals on destructive operations&lt;/li&gt;
&lt;li&gt;No two tools that look similar enough to cause confusion&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Standard linters don't check for any of this. So the spec looks clean, the linter passes green, and the agent quietly makes bad decisions.&lt;/p&gt;

&lt;p&gt;That's the problem I kept hitting. So I built &lt;a href="https://github.com/yourname/mcp-openapi-doctor" rel="noopener noreferrer"&gt;mcp-openapi-doctor&lt;/a&gt; to fix it.&lt;/p&gt;




&lt;h2&gt;
  
  
  What mcp-openapi-doctor Does
&lt;/h2&gt;

&lt;p&gt;It's a CLI tool with four commands that take a raw OpenAPI or Swagger spec and make it genuinely agent-ready.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;OpenAPI spec → mcp-openapi-doctor → cleaned spec + MCP overlay → your MCP server → Claude
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It's a &lt;strong&gt;preprocessor&lt;/strong&gt;, not a replacement. It fits in front of whatever MCP tooling you're already using — AWS OpenAPI MCP, FastMCP, Tyk, or your own server.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 1: Inspect — See Your Agent-Readiness Score
&lt;/h2&gt;

&lt;p&gt;The first thing you want to know is how broken your spec actually is for agent use.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx mcp-openapi-doctor inspect https://petstore3.swagger.io/api/v3/openapi.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You get a score from 0 to 100, broken down into three dimensions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;MCP OpenAPI Doctor 🩺
─────────────────────────────────────
API:        Petstore API v1.0.0
Operations: 19 across 9 paths
Version:    OpenAPI 3.0.3

Agent-readiness score: 61/100

Safety       ██████░░░░  24/40
Clarity      ██████░░░░  21/35
Efficiency   ██████░░░░  16/25

Issues found:
  ✗ error   6x missing operationId
  ✗ error   3x destructive endpoint without warning
  ⚠ warn    8x vague or missing description
  ⚠ warn    1x response schema exceeds 30 fields
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These aren't style issues. Each one is a real agent failure mode:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Issue&lt;/th&gt;
&lt;th&gt;What actually happens&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Missing &lt;code&gt;operationId&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;Agent has no stable tool name to reference&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Destructive endpoint without warning&lt;/td&gt;
&lt;td&gt;Agent fires DELETE with no hesitation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Tool name collision&lt;/td&gt;
&lt;td&gt;LLM picks between two similar tools arbitrarily&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Vague description&lt;/td&gt;
&lt;td&gt;Agent calls the wrong tool because intent isn't clear&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Response schema bloat (80+ fields)&lt;/td&gt;
&lt;td&gt;Burns context window, degrades reasoning quality&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  Step 2: Generate — Preview Exactly What Claude Will See
&lt;/h2&gt;

&lt;p&gt;Before running a server, &lt;code&gt;generate&lt;/code&gt; shows you the exact tools Claude will have access to — names, descriptions, inputs, and safety classification.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx mcp-openapi-doctor generate ./openapi.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Generated tools: 4

✓ list_users
  Operation: GET /users
  Safety:    SAFE_READ
  Inputs:    none

✓ get_user
  Operation: GET /users/{id}
  Safety:    SAFE_READ
  Inputs:    id*

⚠ update_user
  Operation: PUT /users/{id}
  Safety:    WRITE
  Inputs:    id*, body

✗ delete_user
  Operation: DELETE /users/{id}
  Safety:    DESTRUCTIVE
  Inputs:    id*
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use &lt;code&gt;--read-only&lt;/code&gt; to see only the tools that would be exposed in safe mode:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx mcp-openapi-doctor generate ./openapi.yaml &lt;span class="nt"&gt;--read-only&lt;/span&gt;
&lt;span class="c"&gt;# Only SAFE_READ tools appear — WRITE and DESTRUCTIVE are hidden from the agent&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is useful when you're scoping what you want Claude to access before you open up write operations.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 3: Fix — Auto-Fix What's Safe to Fix
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;fix&lt;/code&gt; generates a cleaned spec in an output folder. It &lt;strong&gt;never modifies your original file.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx mcp-openapi-doctor fix ./openapi.yaml &lt;span class="nt"&gt;--out&lt;/span&gt; ./output/ &lt;span class="nt"&gt;--diff&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What gets generated:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output/
├── cleaned-openapi.yaml    ← patched spec with fixes applied
├── mcp-overlay.yaml        ← x-mcp-* metadata for downstream MCP servers
├── doctor-report.json      ← structured issues + score
├── fixes.md                ← human-readable log of every change
├── summary.md              ← before/after score + remaining actions
├── diff.md                 ← readable diff of every change
└── diff.json               ← structured diff for tooling
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What the fixer will safely change:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Generate missing &lt;code&gt;operationId&lt;/code&gt;&lt;/strong&gt; from method + path, slugified and deduplicated&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Normalize all &lt;code&gt;operationId&lt;/code&gt; values&lt;/strong&gt; to &lt;code&gt;snake_case&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Add fallback summaries&lt;/strong&gt; where missing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Inject &lt;code&gt;x-mcp-destructive: true&lt;/code&gt;&lt;/strong&gt; on DELETE endpoints&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Add placeholder schemas&lt;/strong&gt; for missing request bodies and responses&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What it will never do:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Remove any endpoint&lt;/li&gt;
&lt;li&gt;Change runtime behavior
&lt;/li&gt;
&lt;li&gt;Touch your source file&lt;/li&gt;
&lt;li&gt;Use AI to rewrite anything (all fixes are deterministic)&lt;/li&gt;
&lt;/ul&gt;
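
&lt;p&gt;To make the safe fixes concrete, here's a before/after sketch of a single operation. The exact &lt;code&gt;operationId&lt;/code&gt; and summary wording are illustrative; the real output comes from the tool's own slugging rules:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# before
/users/{id}:
  delete:
    responses:
      '204': { description: Deleted }

# after (cleaned-openapi.yaml)
/users/{id}:
  delete:
    operationId: delete_user_by_id
    summary: Delete the user with the given id
    x-mcp-destructive: true
    responses:
      '204': { description: Deleted }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;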

&lt;p&gt;The &lt;code&gt;summary.md&lt;/code&gt; shows you the before/after score and what still needs a human:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Before: 61/100
After:  88/100

Fixed automatically:
  ✓ 6 missing operationIds generated
  ✓ 3 destructive endpoints flagged with x-mcp-destructive
  ✓ 4 missing summaries added

Still needs your attention:
  ⚠ 8 vague descriptions — these need human context to fix well
  ⚠ 1 response schema with 87 fields — consider a filtered endpoint
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Step 4: Serve — Connect Directly to Claude Desktop
&lt;/h2&gt;

&lt;p&gt;Once your spec is clean, &lt;code&gt;serve&lt;/code&gt; starts an MCP stdio server that Claude Desktop connects to directly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx mcp-openapi-doctor serve ./openapi.yaml &lt;span class="nt"&gt;--read-only&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The terminal will look idle — that's expected. It's waiting for an MCP client over stdio.&lt;/p&gt;

&lt;p&gt;Add it to your Claude Desktop config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"my-api"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"npx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"mcp-openapi-doctor"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"serve"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"/absolute/path/to/openapi.yaml"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"--read-only"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"env"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"API_TOKEN"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"your_token_here"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restart Claude Desktop and ask: &lt;code&gt;What tools do you have available?&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The tools from your spec appear. Claude can call your actual REST API through them.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Safety Model
&lt;/h2&gt;

&lt;p&gt;Every endpoint is classified before it's exposed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GET                          →  SAFE_READ    ✓ exposed in --read-only
POST / PUT / PATCH           →  WRITE        ✗ hidden in --read-only  
DELETE                       →  DESTRUCTIVE  ✗ hidden in --read-only
Paths with admin/billing/
payment/secret/token/
credential                   →  SENSITIVE    ✗ hidden in --read-only
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;--read-only&lt;/code&gt; is the recommended mode when you're first exposing an API to an agent. Let Claude explore and read before you decide which write operations to open up.&lt;/p&gt;




&lt;h2&gt;
  
  
  Works With Your Existing MCP Stack
&lt;/h2&gt;

&lt;p&gt;Because the output is a standard, cleaned OpenAPI spec, it drops straight into whatever you're already using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Clean first&lt;/span&gt;
npx mcp-openapi-doctor fix ./stripe.yaml &lt;span class="nt"&gt;--out&lt;/span&gt; ./output/

&lt;span class="c"&gt;# Then feed cleaned spec to your preferred MCP server&lt;/span&gt;
npx @aws/openapi-mcp-server &lt;span class="nt"&gt;--spec&lt;/span&gt; ./output/cleaned-openapi.yaml
&lt;span class="c"&gt;# or fastmcp, tyk, your own server&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;mcp-overlay.yaml&lt;/code&gt; contains &lt;code&gt;x-mcp-*&lt;/code&gt; extensions that compatible servers can read for additional agent hints.&lt;/p&gt;
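
&lt;p&gt;As a rough sketch, an overlay entry might look like this (the key names are illustrative, so check the repo for the exact schema):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# mcp-overlay.yaml (illustrative)
tools:
  delete_user:
    x-mcp-destructive: true
    x-mcp-hint: "Only call after the user explicitly confirms deletion."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;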




&lt;h2&gt;
  
  
  Try It Now — Five Real Examples
&lt;/h2&gt;

&lt;p&gt;The repo ships with five fixture specs covering a range of quality levels:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/yourname/mcp-openapi-doctor
&lt;span class="nb"&gt;cd &lt;/span&gt;mcp-openapi-doctor
pnpm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; pnpm build

&lt;span class="c"&gt;# A clean, well-described read-only API — should score 90+&lt;/span&gt;
pnpm start &lt;span class="nt"&gt;--&lt;/span&gt; inspect ./examples/clean-read-api.yaml

&lt;span class="c"&gt;# An intentionally broken CRM API — good for seeing all the checks fire&lt;/span&gt;
pnpm start &lt;span class="nt"&gt;--&lt;/span&gt; inspect ./examples/risky-crm-api.yaml
pnpm start &lt;span class="nt"&gt;--&lt;/span&gt; fix ./examples/risky-crm-api.yaml &lt;span class="nt"&gt;--out&lt;/span&gt; .output/ &lt;span class="nt"&gt;--diff&lt;/span&gt;

&lt;span class="c"&gt;# Swagger 2.0 compatibility&lt;/span&gt;
pnpm start &lt;span class="nt"&gt;--&lt;/span&gt; inspect ./examples/swagger-2-api.json

&lt;span class="c"&gt;# The classic Petstore&lt;/span&gt;
pnpm start &lt;span class="nt"&gt;--&lt;/span&gt; serve ./examples/simple-openapi.yaml &lt;span class="nt"&gt;--read-only&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or just run it against any public API right now:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# GitHub API&lt;/span&gt;
npx mcp-openapi-doctor inspect https://raw.githubusercontent.com/github/rest-api-description/main/descriptions/api.github.com/api.github.com.json

&lt;span class="c"&gt;# Petstore&lt;/span&gt;
npx mcp-openapi-doctor inspect https://petstore3.swagger.io/api/v3/openapi.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  What's Coming Next
&lt;/h2&gt;

&lt;p&gt;Phase 1 is shipped. On the roadmap for Phase 2:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Web UI&lt;/strong&gt; — paste a spec URL, see issues highlighted inline, download the fixed spec&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI-powered description hints&lt;/strong&gt; — &lt;code&gt;--ai-hints&lt;/code&gt; flag that uses an LLM to suggest better agent-oriented descriptions for vague endpoints&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub Action&lt;/strong&gt; — gate your CI on a minimum agent-readiness score&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;VS Code extension&lt;/strong&gt; — inline diagnostics while editing a spec file&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Custom rule plugins&lt;/strong&gt; — bring your own rules as JS modules for internal API conventions&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The One-Line Summary
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Your API spec passes Spectral. But will Claude use it safely?&lt;br&gt;&lt;br&gt;
&lt;code&gt;mcp-openapi-doctor&lt;/code&gt; finds what linters miss.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you're building anything in the MCP or AI agent space, give it a try and let me know what you find — especially if you hit a spec it handles badly or a check it's missing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitHub:&lt;/strong&gt; &lt;a href="https://github.com/yourname/mcp-openapi-doctor" rel="noopener noreferrer"&gt;github.com/yourname/mcp-openapi-doctor&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If this saves you from a broken agent call, a ⭐ on the repo goes a long way. It helps other developers find it when they hit the same wall.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Built in TypeScript. Zero install via npx. MIT licensed.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>mcp</category>
      <category>openapi</category>
      <category>agents</category>
    </item>
    <item>
      <title>LifeOps: An AI System That Aligns Your Daily Actions with Your Future Self</title>
      <dc:creator>Shadab Khan</dc:creator>
      <pubDate>Mon, 27 Apr 2026 07:40:59 +0000</pubDate>
      <link>https://forem.com/shad_tech/lifeops-an-ai-system-that-aligns-your-daily-actions-with-your-future-self-36l7</link>
      <guid>https://forem.com/shad_tech/lifeops-an-ai-system-that-aligns-your-daily-actions-with-your-future-self-36l7</guid>
      <description>&lt;h2&gt;
  
  
  Why I Built LifeOps — A Personal OS for Becoming Your Future Self
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;The app most productivity tools never thought to build&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;There's a moment most people into self-improvement know well.&lt;/p&gt;

&lt;p&gt;You download a new app. You set up your habits. You add your goals. You feel organised, motivated, maybe even a little excited. And then — slowly — it becomes just another list you feel guilty about not opening.&lt;/p&gt;

&lt;p&gt;I've been through that cycle more times than I'd like to admit.&lt;/p&gt;

&lt;p&gt;Not because I lacked discipline. Not because the apps were bad. But because every single one of them asked the wrong question.&lt;/p&gt;

&lt;p&gt;They asked: &lt;strong&gt;"Did you finish the task?"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Nobody asked: &lt;strong&gt;"Is today helping you become the person you want to be?"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That one question changed how I thought about productivity. And it's the reason I spent months building LifeOps.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Problem With Productivity Apps
&lt;/h2&gt;

&lt;p&gt;I'm a developer. I've tried everything — Notion, Todoist, Habitica, Obsidian, Roam, Reflect, Monday, TickTick, even my own custom Airtable setups. And I have nothing against any of them. They're well-built tools.&lt;/p&gt;

&lt;p&gt;But they all share the same fundamental flaw: &lt;strong&gt;they treat your life as a collection of isolated systems.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Your habit tracker doesn't know about your goals.&lt;br&gt;
Your task list doesn't know about your identity.&lt;br&gt;
Your notes app is a graveyard of half-formed thoughts.&lt;br&gt;
Your goal planner collects dust after the first week of January.&lt;br&gt;
Your AI assistant gives generic advice because it doesn't actually know you.&lt;/p&gt;

&lt;p&gt;I was running five different apps for five different slices of my life — and none of them talked to each other. There was no thread connecting &lt;em&gt;what I did today&lt;/em&gt; to &lt;em&gt;who I'm trying to become&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;The result? A lot of activity. Very little alignment.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Insight That Started Everything
&lt;/h2&gt;

&lt;p&gt;One evening I sat down and wrote out what I actually wanted my life to look like in five years. Not in a vague "I want to be successful" way — but specifically. The kind of person I wanted to be. The habits I'd have built. The work I'd be doing. The relationships I'd have deepened.&lt;/p&gt;

&lt;p&gt;I called this my &lt;strong&gt;Future Self&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;And then I looked at my task list for that week.&lt;/p&gt;

&lt;p&gt;Almost none of it connected to that person.&lt;/p&gt;

&lt;p&gt;I was busy. I was productive by any conventional measure. But I wasn't &lt;em&gt;aligned&lt;/em&gt;. The gap between my daily actions and my future identity was enormous — and invisible, because no tool was showing it to me.&lt;/p&gt;

&lt;p&gt;That's the insight that started LifeOps.&lt;/p&gt;

&lt;p&gt;What if there was a system that started with your identity — the person you want to become — and worked &lt;em&gt;backwards&lt;/em&gt; to your daily actions? What if every habit, every task, every plan was explicitly connected to that future version of you?&lt;/p&gt;




&lt;h2&gt;
  
  
  Building the Loop
&lt;/h2&gt;

&lt;p&gt;I started sketching the core loop on paper.&lt;/p&gt;

&lt;p&gt;It looked like this:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Future Self → Life Areas → Goals → Habits + Tasks → Daily Execution → Weekly Review → back to Future Self&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Each layer feeds the next. Your Future Self defines your life areas — health, relationships, career, creativity, whatever matters to you. Your life areas inform your goals. Your goals generate habits and tasks. Your habits and tasks shape your daily plan. Your daily execution feeds into a weekly review. And your weekly review updates your understanding of your future self.&lt;/p&gt;

&lt;p&gt;It's a closed loop. Everything is connected. Nothing is isolated.&lt;/p&gt;

&lt;p&gt;This is what LifeOps is built around.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;LifeOps is an AI-powered personal operating system. Here's what it includes at MVP:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Future Self&lt;/strong&gt; — You define (or AI-generate) a profile of who you're building toward. Not a vision board. A structured identity: the values you're embodying, the person you're becoming, the life you're designing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Goals&lt;/strong&gt; — Measurable outcomes connected to your life areas. Not floating abstractions — goals anchored to the future self you've defined.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Habits&lt;/strong&gt; — Repeatable actions linked directly to goals. The AI can suggest habits based on a goal you've set. You review them before saving.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tasks&lt;/strong&gt; — Concrete work, with due dates, priorities, and a link to the goal it serves. Not just a flat list.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Notes&lt;/strong&gt; — Quick capture for reflections, ideas, and context. The AI can summarise a note or extract next actions from it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dashboard&lt;/strong&gt; — Everything in one calm view. Today's habits, active goals, tasks due, recent notes, and your progress.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI Daily Planner&lt;/strong&gt; — Each morning, LifeOps generates a structured day plan using your actual goals, tasks, and habits as context. You review it. You edit it. You approve it. Then you execute.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Weekly Review&lt;/strong&gt; — Every week, LifeOps pulls your completed tasks, habit logs, and notes — and generates a summary: what you won at, where the gaps were, what patterns it sees, and what to focus on next week.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why AI — But Not in the Way You Think
&lt;/h2&gt;

&lt;p&gt;When people hear "AI-powered productivity app" they probably imagine a chatbot. A generic assistant that says things like "Great job completing your tasks this week! Here are some tips for better time management 🌟"&lt;/p&gt;

&lt;p&gt;That's not what I built.&lt;/p&gt;

&lt;p&gt;The AI in LifeOps is deeply contextual. It knows your Future Self profile. It knows your active goals. It knows your current habits and open tasks. When it generates a daily plan or a weekly review, it's using &lt;em&gt;your specific life context&lt;/em&gt; — not generic templates.&lt;/p&gt;

&lt;p&gt;But more importantly: &lt;strong&gt;every AI output is review-before-save.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I made this a hard design decision early on. I don't want an AI making decisions for me. I want it to do the heavy lifting of synthesis and pattern recognition — and then show me the result so I can think about it, edit it, and own it.&lt;/p&gt;

&lt;p&gt;The AI surfaces the options. The human decides.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Design Philosophy
&lt;/h2&gt;

&lt;p&gt;I had a few principles I kept coming back to while building:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Calm, not noisy.&lt;/strong&gt; Productivity apps are often anxiety machines. Overdue tasks in red. Notification badges. Streak counters. LifeOps is designed to feel like a quiet workspace, not a dashboard screaming at you.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Guided, not overwhelming.&lt;/strong&gt; Most productivity systems collapse under their own weight. LifeOps is opinionated about what matters — the Future Self loop — so you're never staring at a blank canvas wondering what to do.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Focused on becoming, not completing.&lt;/strong&gt; The question isn't whether you ticked the box. It's whether the day moved you closer to the person you said you want to be.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Honest about progress.&lt;/strong&gt; The weekly review doesn't celebrate fake streaks. It shows you the real picture — wins and gaps — so you can adjust.&lt;/p&gt;




&lt;h2&gt;
  
  
  Who This Is For
&lt;/h2&gt;

&lt;p&gt;LifeOps is for people who feel productive but not aligned.&lt;/p&gt;

&lt;p&gt;You know the type — maybe you are the type. Smart, driven, organised. Always working on something. But occasionally you stop and ask: &lt;em&gt;is all this activity actually taking me somewhere?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;It's for people who've tried every productivity app and found them useful but incomplete. Who want one system instead of five. Who want their daily actions to mean something beyond a completed checkbox.&lt;/p&gt;

&lt;p&gt;It's especially for people drawn to ideas like identity-based habits, future self journaling, life design, and personal operating systems — but who want something more structured and AI-assisted than a paper journal.&lt;/p&gt;




&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;LifeOps is at MVP stage right now. The core loop is complete. The AI features work. The codebase is clean, well-structured, and fully open source on GitHub (MIT licence).&lt;/p&gt;

&lt;p&gt;Future phases on the roadmap include mobile support, deeper integrations, automation workflows, and community features. But I'm intentionally not rushing. The MVP does one thing: helps you stay aligned with your future self, every single day. That's enough to start.&lt;/p&gt;

&lt;p&gt;If you want to try it, self-host it, or contribute — the repo is open.&lt;/p&gt;

&lt;p&gt;And if you've ever felt that nagging disconnect between your daily hustle and the person you're actually trying to become — LifeOps was built for that exact feeling.&lt;/p&gt;




&lt;h2&gt;
  
  
  One Last Thing
&lt;/h2&gt;

&lt;p&gt;After I built the first working version and used it for a week, I had a moment I didn't expect.&lt;/p&gt;

&lt;p&gt;I opened the dashboard on a Tuesday morning and looked at my tasks, my habits, and my goals — all connected, all pointing toward the same Future Self I'd defined. And for the first time in years of using productivity tools, I didn't feel organised. I felt &lt;em&gt;aligned&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;That's the difference I was chasing.&lt;/p&gt;

&lt;p&gt;That's what I hope LifeOps gives you too.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;LifeOps is open source. You can find the repo, self-host instructions, and full documentation at &lt;a href="https://github.com/shadkhan/LifeOps" rel="noopener noreferrer"&gt;github.com/shadkhan/LifeOps&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If this resonated, share it with someone who's been chasing the right productivity tool for years. They might have just found it.&lt;/em&gt;&lt;/p&gt;




</description>
      <category>productivity</category>
      <category>futureself</category>
      <category>ai</category>
      <category>lifeops</category>
    </item>
    <item>
      <title>I built a shell script that sets up your entire AI coding agent workspace in 2 minutes</title>
      <dc:creator>Shadab Khan</dc:creator>
      <pubDate>Sat, 25 Apr 2026 02:52:00 +0000</pubDate>
      <link>https://forem.com/shad_tech/i-built-a-shell-script-that-sets-up-your-entire-ai-coding-agent-workspace-in-2-minutes-13ep</link>
      <guid>https://forem.com/shad_tech/i-built-a-shell-script-that-sets-up-your-entire-ai-coding-agent-workspace-in-2-minutes-13ep</guid>
      <description>&lt;p&gt;Every time I started a new project with an AI coding agent, I was doing the same thing.&lt;/p&gt;

&lt;p&gt;Opening a blank repo. Writing &lt;code&gt;CLAUDE.md&lt;/code&gt; from scratch. Explaining my stack again. Explaining my conventions again. Explaining what NOT to do — again. By the time I had the agent actually doing useful work, I'd already spent two hours just setting up context.&lt;/p&gt;

&lt;p&gt;Then I'd switch projects, and repeat everything from scratch.&lt;/p&gt;

&lt;p&gt;There had to be a better way.&lt;/p&gt;




&lt;h2&gt;
  
  
  The problem with AI coding agents and new projects
&lt;/h2&gt;

&lt;p&gt;If you've used Claude Code or Codex CLI, you already know how much these tools depend on good project context. The agent doesn't know your stack. It doesn't know you prefer &lt;code&gt;pnpm&lt;/code&gt; over &lt;code&gt;npm&lt;/code&gt;. It doesn't know that you never want raw SQL, or that every Prisma query needs a &lt;code&gt;userId&lt;/code&gt; filter to prevent IDOR vulnerabilities, or that your commit messages follow Conventional Commits.&lt;/p&gt;

&lt;p&gt;Without that context, you spend the first hour of every session correcting the agent instead of building.&lt;/p&gt;

&lt;p&gt;The solution everyone discovers eventually is &lt;code&gt;CLAUDE.md&lt;/code&gt; for Claude Code and &lt;code&gt;AGENTS.md&lt;/code&gt; for Codex — instruction files the agent reads at the start of every session. But writing these well takes time, and the best ones include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your exact tech stack with version numbers&lt;/li&gt;
&lt;li&gt;What NOT to do (as important as what to do)&lt;/li&gt;
&lt;li&gt;Security rules baked in from day one&lt;/li&gt;
&lt;li&gt;A testing strategy covering unit, integration, security, adversarial, and performance tests&lt;/li&gt;
&lt;li&gt;Module specs the agent reads before implementing anything&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Writing all of that properly for every new project is genuinely tedious. So I automated it.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I built: AI Agents Template Builder
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/shadkhan/AI-agents-template-builder" rel="noopener noreferrer"&gt;github.com/shadkhan/AI-agents-template-builder&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It's a GitHub Template Repository with a shell script that turns the template into a fully configured agent workspace for your specific project in about 2 minutes.&lt;/p&gt;

&lt;p&gt;Here's what you do:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Use the template on GitHub, then clone your new repo&lt;/span&gt;
git clone https://github.com/shadkhan/AI-agents-template-builder
&lt;span class="nb"&gt;cd &lt;/span&gt;AI-agents-template-builder

&lt;span class="nb"&gt;chmod&lt;/span&gt; +x scripts/init-project.sh
./scripts/init-project.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The script asks you six questions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Project name and description&lt;/li&gt;
&lt;li&gt;Language (TypeScript, Python, Go, JavaScript, or custom)&lt;/li&gt;
&lt;li&gt;Framework and database&lt;/li&gt;
&lt;li&gt;Package manager&lt;/li&gt;
&lt;li&gt;Your modules (Notes, Tasks, Auth — whatever your app has)&lt;/li&gt;
&lt;li&gt;Security profile (user-facing web app, API, static site, CLI tool)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Then it generates everything.&lt;/p&gt;
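&lt;p&gt;Under the hood, "generates" mostly means injecting your answers into templates. Here's a minimal, hypothetical sketch of that substitution step; the real script's internals may differ, and the file paths and placeholder names below are stand-ins:&lt;/p&gt;

```shell
# Hypothetical sketch of the injection step: replace {{PLACEHOLDER}}
# tokens in a template with the answers you gave the script.
PROJECT_NAME="acme-notes"
PACKAGE_MANAGER="pnpm"

# A tiny stand-in template so the sketch is self-contained
printf '# {{PROJECT_NAME}}\nInstall: {{PACKAGE_MANAGER}} install\n' > /tmp/CLAUDE.md.tpl

# One sed pass per placeholder, writing the filled-in file
sed -e "s/{{PROJECT_NAME}}/$PROJECT_NAME/g" \
    -e "s/{{PACKAGE_MANAGER}}/$PACKAGE_MANAGER/g" \
    /tmp/CLAUDE.md.tpl > /tmp/CLAUDE.md

cat /tmp/CLAUDE.md
```

&lt;p&gt;The same idea scales to every generated file: one template, one set of answers, one pass of substitutions.&lt;/p&gt;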




&lt;h2&gt;
  
  
  What gets generated
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;code&gt;CLAUDE.md&lt;/code&gt; and &lt;code&gt;AGENTS.md&lt;/code&gt; — filled in for your stack
&lt;/h3&gt;

&lt;p&gt;Both files get your actual project name, tech stack, repo structure, and commands injected. No more &lt;code&gt;{{placeholders}}&lt;/code&gt; — just ready-to-use instructions.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;CLAUDE.md&lt;/code&gt; is read by Claude Code automatically. &lt;code&gt;AGENTS.md&lt;/code&gt; is read by Codex CLI, Cursor, Aider, and Amp — it's now an open standard stewarded by the Linux Foundation with 60,000+ open-source projects using it.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;code&gt;SECURITY.md&lt;/code&gt; — security rules the agent enforces
&lt;/h3&gt;

&lt;p&gt;This is the file I'm most proud of. It covers eight layers of security baked into every project from day one:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Input validation with Zod on every route — patterns and examples included&lt;/li&gt;
&lt;li&gt;IDOR prevention — every Prisma query must include &lt;code&gt;userId&lt;/code&gt; from the JWT, not from the request body&lt;/li&gt;
&lt;li&gt;JWT verification patterns and refresh token rotation&lt;/li&gt;
&lt;li&gt;HTTP security headers via &lt;code&gt;@fastify/helmet&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Rate limiting rules per endpoint type&lt;/li&gt;
&lt;li&gt;File upload security — MIME type allowlists, path traversal prevention, UUID-based storage&lt;/li&gt;
&lt;li&gt;Database security — parameterized queries, field exclusion patterns&lt;/li&gt;
&lt;li&gt;Logging rules — what to log, what never to log&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The agent reads this alongside &lt;code&gt;CLAUDE.md&lt;/code&gt; and applies these rules on every route it writes. I stopped getting auth-less routes and missing &lt;code&gt;userId&lt;/code&gt; filters in PR reviews.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;code&gt;docs/testing/&lt;/code&gt; — five-layer testing strategy
&lt;/h3&gt;

&lt;p&gt;Four files covering every testing layer the agent needs to know about:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TESTING.md&lt;/strong&gt; — the master strategy. Unit tests, integration tests, security tests, adversarial tests, and performance evaluation tests. Includes the complete CI/CD pipeline config for GitHub Actions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;VALIDATION.md&lt;/strong&gt; — "validation loops." Every Zod schema gets tested for both valid and invalid inputs, for every field, for every constraint. The pattern that ensures schema drift never silently lets bad data through.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ADVERSARIAL.md&lt;/strong&gt; — deliberately acting like a malicious user. IDOR attacks, mass assignment, SQL injection payloads, file upload attacks, JWT forgery, property-based fuzzing with &lt;code&gt;fast-check&lt;/code&gt;. The agent writes these tests for every new module.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PERFORMANCE.md&lt;/strong&gt; — k6 baseline tests that run before every release. Catches N+1 Prisma queries and missing PostgreSQL indexes before they hit production.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;code&gt;docs/specs/&lt;/code&gt; — one spec stub per module
&lt;/h3&gt;

&lt;p&gt;For every module you listed during setup, you get a spec file with the structure already in place: data model, API endpoints, request/response schemas, business rules, and acceptance criteria. Fill in the content, hand it to the agent, and it implements the whole module end-to-end without asking clarifying questions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Everything else
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;docs/PRD.md&lt;/code&gt; — product requirements doc with your modules listed&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;docs/ARCHITECTURE.md&lt;/code&gt; — architecture stub&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;docs/adr/ADR-001.md&lt;/code&gt; — first architecture decision record&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;CONTRIBUTING.md&lt;/code&gt; — with your actual commands&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;.env.example&lt;/code&gt; — skeleton based on your stack (Postgres URL, JWT secrets, Redis, etc.)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;.gitignore&lt;/code&gt; — standard, generated if not present&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;.github/workflows/validate-agent-files.yml&lt;/code&gt; — CI that fails if you commit &lt;code&gt;.env&lt;/code&gt;, leave unfilled placeholders, or reference a spec that doesn't exist&lt;/li&gt;
&lt;/ul&gt;
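
&lt;p&gt;The workflow's exact commands aren't shown here, but the checks it describes boil down to something like this sketch (paths, placeholder pattern, and error messages are all assumptions, not the workflow's real contents):&lt;/p&gt;

```shell
# Sketch of the CI checks: fail on a committed .env or on unfilled
# {{placeholders}} in generated docs. Set up a throwaway dir so the
# sketch is self-contained.
workdir=$(mktemp -d)
cd "$workdir"
mkdir -p docs/specs
printf '# my-app\nStack: TypeScript + Fastify\n' > CLAUDE.md  # fully filled in

fail=0
# Check 1: no real .env in the tree
if [ -e .env ]; then
  echo "ERROR: .env must not be committed"; fail=1
fi
# Check 2: no leftover {{PLACEHOLDER}} tokens
if grep -rq '{{[A-Za-z_]*}}' CLAUDE.md docs/; then
  echo "ERROR: unfilled placeholders remain"; fail=1
fi
if [ "$fail" -eq 0 ]; then echo "agent files OK"; fi
```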




&lt;h2&gt;
  
  
  Two more scripts for ongoing use
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;new-module.sh&lt;/code&gt;&lt;/strong&gt; — add a new module to an existing project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./scripts/new-module.sh &lt;span class="s2"&gt;"Notes"&lt;/span&gt;
&lt;span class="c"&gt;# → generates docs/specs/notes.spec.md with full template&lt;/span&gt;
&lt;span class="c"&gt;# → adds Notes to CLAUDE.md module table automatically&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Fill in the spec, then tell the agent:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;"Read docs/specs/notes.spec.md and implement the Notes module end-to-end"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;&lt;code&gt;update-module-status.sh&lt;/code&gt;&lt;/strong&gt; — keep CLAUDE.md current as you ship:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./scripts/update-module-status.sh &lt;span class="s2"&gt;"Notes"&lt;/span&gt; &lt;span class="k"&gt;done&lt;/span&gt;
./scripts/update-module-status.sh &lt;span class="s2"&gt;"Tasks"&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt;&lt;span class="nt"&gt;-progress&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The agent reads the module status table at the start of every session. Keeping it accurate prevents it from re-implementing something that already exists.&lt;/p&gt;
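
&lt;p&gt;In effect, the status update flips one cell in that table. A hypothetical sketch (the real script and table format may differ):&lt;/p&gt;

```shell
# Stand-in module table so the sketch is self-contained
printf '| Module | Status |\n| Notes  | in-progress |\n' > /tmp/CLAUDE.md

# Flip the Notes row from in-progress to done (portable sed: write to a
# temp file, then move it into place)
sed 's/| Notes  | in-progress |/| Notes  | done |/' /tmp/CLAUDE.md > /tmp/CLAUDE.md.new
mv /tmp/CLAUDE.md.new /tmp/CLAUDE.md

cat /tmp/CLAUDE.md
```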




&lt;h2&gt;
  
  
  The global file — the part most people miss
&lt;/h2&gt;

&lt;p&gt;Both Claude Code and Codex support a global instruction file that applies to every project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# For Codex — applies to every repo you work in&lt;/span&gt;
~/.codex/AGENTS.md

&lt;span class="c"&gt;# For Claude Code&lt;/span&gt;
~/.claude/CLAUDE.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Put your universal personal preferences there once — &lt;code&gt;pnpm&lt;/code&gt; over &lt;code&gt;npm&lt;/code&gt;, no &lt;code&gt;any&lt;/code&gt; in TypeScript, Conventional Commits — and never repeat them in any project file again. Project-level files inherit and can override.&lt;/p&gt;

&lt;p&gt;This is the layer that truly makes the workflow feel automatic. New project, run the script, and the agent already knows your personal defaults before it even reads the project files.&lt;/p&gt;
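
&lt;p&gt;Setting that up is a one-time step. A sketch, assuming the default paths above; the preference lines are only examples:&lt;/p&gt;

```shell
# Create the global instruction files once; every future project inherits them.
mkdir -p "$HOME/.codex" "$HOME/.claude"

# Example personal defaults (swap in your own)
printf '%s\n' \
  '- Use pnpm, never npm or yarn' \
  '- No `any` in TypeScript' \
  '- Commit messages follow Conventional Commits' > "$HOME/.codex/AGENTS.md"

# Same defaults for Claude Code
cp "$HOME/.codex/AGENTS.md" "$HOME/.claude/CLAUDE.md"
```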




&lt;h2&gt;
  
  
  Works with every major coding agent
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Agent&lt;/th&gt;
&lt;th&gt;File it reads&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Claude Code&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;CLAUDE.md&lt;/code&gt; (falls back to &lt;code&gt;AGENTS.md&lt;/code&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Codex CLI&lt;/td&gt;
&lt;td&gt;&lt;code&gt;AGENTS.md&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cursor&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;AGENTS.md&lt;/code&gt; + &lt;code&gt;.cursor/rules&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Aider&lt;/td&gt;
&lt;td&gt;&lt;code&gt;AGENTS.md&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Amp&lt;/td&gt;
&lt;td&gt;&lt;code&gt;AGENTS.md&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GitHub Copilot&lt;/td&gt;
&lt;td&gt;&lt;code&gt;.github/copilot-instructions.md&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The template generates &lt;code&gt;CLAUDE.md&lt;/code&gt; and &lt;code&gt;AGENTS.md&lt;/code&gt;. For Copilot, symlink or copy &lt;code&gt;AGENTS.md&lt;/code&gt; content into &lt;code&gt;.github/copilot-instructions.md&lt;/code&gt;.&lt;/p&gt;
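
&lt;p&gt;A symlink keeps the two files in sync automatically. Sketched here in a throwaway directory so it runs standalone; in practice you'd run the &lt;code&gt;ln&lt;/code&gt; line from your repo root:&lt;/p&gt;

```shell
# Throwaway repo root for the sketch
repo=$(mktemp -d)
cd "$repo"
echo "# agent instructions" > AGENTS.md

# Point Copilot at the same instructions; -f makes re-running safe.
# The target path is relative to .github/, hence the leading ../
mkdir -p .github
ln -sf ../AGENTS.md .github/copilot-instructions.md

cat .github/copilot-instructions.md
```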




&lt;h2&gt;
  
  
  The workflow in practice
&lt;/h2&gt;

&lt;p&gt;Once this is set up, my per-project flow looks like this:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Day 1:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;clone template → run init-project.sh → fill TODO sections → commit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;About 20 minutes total.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Per module:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./scripts/new-module.sh &lt;span class="s2"&gt;"ModuleName"&lt;/span&gt;
Fill docs/specs/modulename.spec.md &lt;span class="o"&gt;(&lt;/span&gt;15 min&lt;span class="o"&gt;)&lt;/span&gt;
Tell agent: &lt;span class="s2"&gt;"Read docs/specs/modulename.spec.md and implement end-to-end"&lt;/span&gt;
Review diffs → merge
./scripts/update-module-status.sh &lt;span class="s2"&gt;"ModuleName"&lt;/span&gt; &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Before release:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;pnpm test:security
pnpm test:adversarial  
pnpm test:perf
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The agent handles implementation. I handle architecture decisions and code review. The instruction files make sure we're always speaking the same language.&lt;/p&gt;




&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;

&lt;p&gt;The repo is at &lt;strong&gt;&lt;a href="https://github.com/shadkhan/AI-agents-template-builder" rel="noopener noreferrer"&gt;github.com/shadkhan/AI-agents-template-builder&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Click "Use this template" to create your own copy, run the init script, and you're set up in under two minutes.&lt;/p&gt;

&lt;p&gt;If you find it useful, a GitHub star helps other developers find it. And if you add support for new stacks or languages in the init script, PRs are very welcome — the more stacks covered, the more useful it gets for everyone.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Built this while setting up LifeOps, my personal organization platform that I'm also open-sourcing. More on that soon.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>devtools</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
