<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Jun Suzuki</title>
    <description>The latest articles on Forem by Jun Suzuki (@szkjn).</description>
    <link>https://forem.com/szkjn</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2934994%2Fa0551a53-ff3f-4b64-9813-be7a63800ebd.jpeg</url>
      <title>Forem: Jun Suzuki</title>
      <link>https://forem.com/szkjn</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/szkjn"/>
    <language>en</language>
    <item>
      <title>Anthropic's skills playbook vs our custom knowledge layer</title>
      <dc:creator>Jun Suzuki</dc:creator>
      <pubDate>Sat, 04 Apr 2026 21:54:06 +0000</pubDate>
      <link>https://forem.com/szkjn/anthropics-skills-playbook-vs-our-custom-knowledge-layer-29g3</link>
      <guid>https://forem.com/szkjn/anthropics-skills-playbook-vs-our-custom-knowledge-layer-29g3</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4dr7gsfxd68ksfg9kszg.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4dr7gsfxd68ksfg9kszg.jpg" alt="The Location of Lines by Sol LeWitt, 1975" width="800" height="802"&gt;&lt;/a&gt;&lt;br&gt;
        Sol LeWitt, The Location of Lines, 1975&lt;/p&gt;

&lt;p&gt;Thariq Shihipar from Anthropic's Claude Code team recently published a &lt;a href="https://x.com/trq212/status/2033949937936085378" rel="noopener noreferrer"&gt;thread&lt;/a&gt; on how they use skills internally. Hundreds of them in active use, clustering into nine categories once cataloged, from library references to runbooks to CI/CD automation. I read the thread right after publishing a &lt;a href="https://blog.junsuzuki.xyz/blog/beyond-claude-md-repo-scoped-knowledge-layer" rel="noopener noreferrer"&gt;post about building a knowledge layer&lt;/a&gt; on top of &lt;code&gt;CLAUDE.md&lt;/code&gt; to capture repo-scoped domain context. The thread answered a question I'd been sitting with: where does deep domain knowledge go when Anthropic themselves recommend keeping &lt;code&gt;CLAUDE.md&lt;/code&gt; under 200 lines?&lt;/p&gt;

&lt;p&gt;Turns out their team uses skills for that.&lt;/p&gt;

&lt;h2&gt;Same architecture, different packaging&lt;/h2&gt;

&lt;p&gt;If we look at their nine categories, at least three are primarily knowledge containers. The action wrapper (a slash command, a trigger description) makes them discoverable. But the core content is &lt;strong&gt;context, not automation&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Their "Build a Gotchas Section" tip makes it explicit: the highest-signal content in any skill is the gotchas section. Add a line each time Claude trips on something. Day 1, the billing-lib skill says "How to use the internal billing library." Month 3, it has four gotchas covering proration rounding, test-mode webhook gaps, idempotency key expiration, refund ID semantics. Knowledge accumulating over time. Exactly what our &lt;code&gt;.claude/knowledge/&lt;/code&gt; files do.&lt;/p&gt;

&lt;p&gt;Their queue-debugging skill uses a &lt;a href="https://en.wikipedia.org/wiki/Spoke%E2%80%93hub_distribution_paradigm" rel="noopener noreferrer"&gt;hub and spoke&lt;/a&gt; structure: a 30-line &lt;code&gt;SKILL.md&lt;/code&gt; with a symptom-to-file routing table, and spoke files (&lt;code&gt;stuck-jobs.md&lt;/code&gt;, &lt;code&gt;dead-letters.md&lt;/code&gt;, &lt;code&gt;retry-storms.md&lt;/code&gt;) for the detail. This is structurally identical to our &lt;code&gt;CLAUDE.md&lt;/code&gt; knowledge table pointing to &lt;code&gt;.claude/knowledge/*.md&lt;/code&gt; files. Same progressive disclosure. Same "keep the hub lean, push detail to the spokes" principle.&lt;/p&gt;
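&lt;p&gt;A minimal version of that hub might look like this (only the spoke file names come from the thread; the frontmatter and table contents are my guess at the shape):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
---
name: debug-queue
description: Diagnose queue issues (stuck jobs, dead letters, retry storms)
---

Match the symptom below, read the spoke file, then investigate.

| Symptom                       | Read            |
|-------------------------------|-----------------|
| Jobs enqueued but never run   | stuck-jobs.md   |
| Messages piling up in the DLQ | dead-letters.md |
| Same job retrying in a loop   | retry-storms.md |
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;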

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3m84j6u4ifuipr459vof.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3m84j6u4ifuipr459vof.png" alt="Side-by-side comparison: their SKILL.md routing to spoke files vs our CLAUDE.md routing to knowledge files" width="800" height="213"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;One primitive or two&lt;/h2&gt;

&lt;p&gt;Honestly, if this thread had existed two months ago, we might not have built a separate knowledge layer at all. Their approach handles the same problem. One extension point, one concept to learn. Simpler.&lt;/p&gt;

&lt;p&gt;What we ended up building adds one seam: &lt;strong&gt;knowledge files hold domain context (what Claude should know), skills hold executable actions (what Claude should do)&lt;/strong&gt;. We do use skills with scripts and hooks. The separation just means the context lives in its own files.&lt;/p&gt;

&lt;p&gt;For example, we have a knowledge file documenting a data pipeline: seven jobs in sequence, ontology structure, S3 path conventions, Elasticsearch gotchas. That file is read by four different generic skills for running jobs, querying the database, querying ES, and accessing S3. The knowledge changes depending on which pipeline you're working on. The skills don't. In a skill-only setup, you'd either duplicate that context across skills or wrap it in a dedicated pipeline-knowledge skill, which gets you to roughly the same place.&lt;/p&gt;

&lt;p&gt;Another example: a skill that runs our evaluation suite. It's 44 lines: dataset names, CLI commands, timeouts. Clean and procedural. But interpreting results requires a separate knowledge file explaining what each dataset tests, known weak spots, and the outcomes of recent eval sessions. When someone reads that knowledge file months later, they skip repeating the same investigation. They don't need to invoke the eval skill to get there.&lt;/p&gt;
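&lt;p&gt;The seam, side by side (file names, dataset names, and commands are illustrative, not our actual files):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# .claude/skills/run-evals/SKILL.md -- action: how to run
Datasets: qa-core, qa-edge-cases
Run: make evals DATASET=qa-core TIMEOUT=900

# .claude/knowledge/evaluations.md -- context: how to interpret
qa-edge-cases probes citation formatting; low scores there usually
trace back to retrieval, not generation.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;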

&lt;h2&gt;Where we land&lt;/h2&gt;

&lt;p&gt;Going back to their nine categories. Library &amp;amp; API Reference collects edge cases and code snippets for internal libraries. Incident Runbooks map symptoms to investigation steps. Both are knowledge containers wearing a skill wrapper. On the other end, CI/CD &amp;amp; Deployment and Scaffolding &amp;amp; Templates are pure automation. Data &amp;amp; Analysis and Code Quality &amp;amp; Review sit somewhere in between: reference data plus scripts, style rules plus enforcement. No one designed that split. It just showed up in the taxonomy. The knowledge/action boundary seems to surface whether you formalize it or not.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffy9a3qs8d0cbozrtu9f0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffy9a3qs8d0cbozrtu9f0.png" alt="Thariq's nine skill categories annotated: Library &amp;amp; API Reference and Incident Runbooks are knowledge, Data &amp;amp; Analysis and Code Quality &amp;amp; Review are both, the remaining five are action" width="800" height="487"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One last thought. Skills didn't exist a year ago. Claude Code launched with commands, then skills arrived, the two overlapped for a couple of months, and eventually commands folded into skills. The core abstraction for extending Claude Code changed twice in twelve months.&lt;/p&gt;

&lt;p&gt;That's the environment we're all building in. The extra seam we added (knowledge separate from skills) is a bet on adaptability. When the next abstraction shift comes, the knowledge stays put. The wiring around it can change.&lt;/p&gt;

</description>
      <category>claudecode</category>
      <category>claudecodeskills</category>
      <category>contextengineering</category>
      <category>knowledgemanagement</category>
    </item>
    <item>
      <title>We outgrew CLAUDE.md: building a knowledge layer that compounds</title>
      <dc:creator>Jun Suzuki</dc:creator>
      <pubDate>Thu, 19 Mar 2026 18:12:43 +0000</pubDate>
      <link>https://forem.com/szkjn/we-outgrew-claudemd-building-a-knowledge-layer-that-compounds-33f</link>
      <guid>https://forem.com/szkjn/we-outgrew-claudemd-building-a-knowledge-layer-that-compounds-33f</guid>
      <description>&lt;p&gt;Earlier this year, Boris Cherny, the creator of Claude Code, published a &lt;a href="https://nitter.net/bcherny/status/2007179832300581177" rel="noopener noreferrer"&gt;thread&lt;/a&gt; on how he and his team use the CLI they built. A dozen tips covering everything from running parallel sessions to slash commands to subagents. The one I kept circling back to: the shared &lt;code&gt;CLAUDE.md&lt;/code&gt; that their entire team feeds into (what he calls &lt;a href="https://every.to/chain-of-thought/compound-engineering-how-every-codes-with-agents" rel="noopener noreferrer"&gt;compound engineering&lt;/a&gt;, borrowing from Dan Shipper).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Our team shares a single CLAUDE.md for the Claude Code repo. We check it into git, and the whole team contributes multiple times a week. Anytime we see Claude do something incorrectly we add it to the CLAUDE.md, so Claude knows not to do it next time.&lt;/p&gt;

&lt;p&gt;— &lt;a href="https://x.com/bcherny/status/2007179832300581177" rel="noopener noreferrer"&gt;Boris Cherny&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I've been running with that idea at &lt;a href="https://wisetax.fr/" rel="noopener noreferrer"&gt;Wisetax&lt;/a&gt;, and ended up extending it into a broader &lt;strong&gt;knowledge layer&lt;/strong&gt;. Here's why, what we built, and how it compounds.&lt;/p&gt;

&lt;h2&gt;README.md, CLAUDE.md, then what?&lt;/h2&gt;

&lt;p&gt;Every repo already has a &lt;code&gt;README.md&lt;/code&gt;. That's for us humans to read: onboarding, setup, contributing.&lt;/p&gt;

&lt;p&gt;Then there's &lt;code&gt;CLAUDE.md&lt;/code&gt;. Essentially agent literature. It gets loaded into every Claude Code session: repo-wide conventions, common commands, guardrails, environment assumptions. Following Cherny's idea, the whole team updates it constantly. In practice, that means it grows. Fast.&lt;/p&gt;

&lt;p&gt;In some of our repos, it hit 700+ lines. The catch: the official docs recommend keeping &lt;code&gt;CLAUDE.md&lt;/code&gt; under ~200 lines. Anthropic's &lt;a href="https://www.anthropic.com/engineering/effective-context-engineering-for-ai-agents" rel="noopener noreferrer"&gt;context engineering guide&lt;/a&gt; frames it this way: context is a "finite resource with diminishing marginal returns". The more low-signal tokens you load, the less reliable the agent becomes. Stuffing everything into one auto-loaded file works against that.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Like humans, who have limited working memory capacity, LLMs have an "attention budget" that they draw on when parsing large volumes of context. Every new token introduced depletes this budget by some amount.&lt;/p&gt;

&lt;p&gt;— &lt;a href="https://www.anthropic.com/engineering/effective-context-engineering-for-ai-agents" rel="noopener noreferrer"&gt;Effective context engineering for AI agents&lt;/a&gt;, Anthropic&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In our experience, domain-specific context and design rationale for particular subsystems are too detailed and too volatile for a single file. So we created &lt;code&gt;.claude/knowledge/&lt;/code&gt;. Inside: one markdown file per topic, versioned in the repo. A place to capture &lt;a href="https://en.wikipedia.org/wiki/Tribal_knowledge" rel="noopener noreferrer"&gt;tribal knowledge&lt;/a&gt;, the stuff that lives in developers' heads and gets lost between sessions.&lt;/p&gt;

&lt;p&gt;Each file covers a slice of the system: a specific piece of the data pipeline, the intricacies of a retrieval layer, the motivation behind the design of the evaluation framework, etc.&lt;/p&gt;
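&lt;p&gt;Concretely, it's a flat directory of topic files alongside &lt;code&gt;CLAUDE.md&lt;/code&gt; at the repo root:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
.claude/
└── knowledge/
    ├── architecture.md
    ├── retrieval.md
    ├── elasticsearch.md
    └── ...
CLAUDE.md
README.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;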

&lt;p&gt;In &lt;code&gt;CLAUDE.md&lt;/code&gt;, we point to them so Claude knows when to read what:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;
...

&lt;span class="gu"&gt;## Knowledge&lt;/span&gt;

| File                                  | When to read                                               |
|---------------------------------------|------------------------------------------------------------|
| &lt;span class="sb"&gt;`.claude/knowledge/architecture.md`&lt;/span&gt;   | Overall system design, endpoints, routing, streaming       |
| &lt;span class="sb"&gt;`.claude/knowledge/agent-framework.md`&lt;/span&gt;| Building/modifying agents with WisebrainAgent              |
| &lt;span class="sb"&gt;`.claude/knowledge/autonomous-agent.md`&lt;/span&gt;| Working on the main chat agent, its tools, or prompts     |
| &lt;span class="sb"&gt;`.claude/knowledge/retrieval.md`&lt;/span&gt;      | Search system, semantic/keyword retrieval, corpus taxonomy |
| &lt;span class="sb"&gt;`.claude/knowledge/elasticsearch.md`&lt;/span&gt;  | ES indices, query builders, document structure             |
| &lt;span class="sb"&gt;`.claude/knowledge/plan-navigation.md`&lt;/span&gt;| BOFIP/LEGI plan traversal, plan controllers                |
| &lt;span class="sb"&gt;`.claude/knowledge/testing.md`&lt;/span&gt;        | Writing or running tests, fixtures, evaluation scripts     |
| &lt;span class="sb"&gt;`.claude/knowledge/evaluations.md`&lt;/span&gt;    | Agent evaluation datasets, evaluators, LangSmith setup     |
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This way, Claude only reads the files it needs to. If the current PR is about evaluations, it pulls in the &lt;code&gt;evaluations.md&lt;/code&gt; knowledge file and ignores the rest. This is what makes the approach scale: each file can go into specifics that genuinely help the agent, because only relevant files get loaded into context.&lt;/p&gt;

&lt;h2&gt;What about Claude's auto memory?&lt;/h2&gt;

&lt;p&gt;Claude Code has an &lt;a href="https://code.claude.com/docs/en/memory#auto-memory" rel="noopener noreferrer"&gt;auto memory&lt;/a&gt; feature: notes Claude writes for itself based on corrections and preferences. They are stored locally under &lt;code&gt;~/.claude/projects/&amp;lt;project&amp;gt;/memory/&lt;/code&gt;. It's per-machine and auto-managed. Not versioned, not shared. If a collaborator starts a session, they don't get my memory.&lt;/p&gt;

&lt;p&gt;Repo-scoped knowledge is the opposite. It's checked in. It goes through PRs. Every engineer on the team gets the same context. Every session starts from the same baseline regardless of who's running it. The name is deliberate: "knowledge", not "memory", to keep the two separate in practice.&lt;/p&gt;

&lt;h2&gt;What to store in knowledge files&lt;/h2&gt;

&lt;p&gt;Knowledge files hold two kinds of things: &lt;strong&gt;know-why&lt;/strong&gt; and &lt;strong&gt;know-how&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Know-why&lt;/strong&gt; is arguably the more obvious one. I learn something about the codebase I didn't know, or had forgotten. The moment I document it, that's the last time Claude figures it out from scratch. I might forget next time. Claude won't.&lt;/p&gt;

&lt;p&gt;At Wisetax, we curate an enriched Elasticsearch index built from a large corpus of French legal texts. At retrieval time, our search system splits queries across corpus groups to mitigate language-level differences in the embedding model. Without that context, Claude doesn't know why the retrieval code fans out into three separate queries instead of one. It would figure it out eventually, after burning time and tokens reading through the code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
...

## Partitioned retrieval

Splits corpus into 3 groups to handle language style differences in embeddings:
- `law`: CGI, LPF, CIBS, CCOM, CSS, CTRAV, CMF, CCIV, CJA
- `guidelines`: BOI, BOSS, COMPTA
- `other`: EUR, INT, DOUANE, CADF, JADE, INCA, CASS, CAPP, CGIANX*, PLF, PLFSS, EXTERNAL, NOTICE

Runs each group as a parallel batch with retry (3 attempts).
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Know-how&lt;/strong&gt; is procedure: the sequence of jobs, the commands to run, the filenames that matter. At Wisetax, we ingest legal texts from multiple sources, each with its own pipeline. Take "BOSS" (for &lt;em&gt;Bulletin officiel de la sécurité sociale&lt;/em&gt;), a French government publication that goes through seven distinct jobs before it's indexed and searchable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
...

## Pipeline order

1. **Crawler** (manual, docker compose in `scripts/boss_crawler/`) -&amp;gt; `boss-raw` ES staging index
2. **SOURCE BOSS** (pace5/3d) -&amp;gt; compares `boss-raw` latest timestamp vs last batch, creates new batch
3. **REGISTER BOSS** (pace4/24h) -&amp;gt; fetches HTML, parses versions, uploads to S3, inserts docs in PG
4. **INDEX** (pace0/1min, shared) -&amp;gt; indexes docs to ES index `name-of-index`
5. **PLAN BOSS** (pace5/3d) -&amp;gt; builds static plan from ontology config, indexes to ES
6. **VERSION BOSS** (pace4/24h) -&amp;gt; sets VIGUEUR/MODIFIE on versioned docs
7. **SELECT CHUNKABLE / CHUNK / EMBED / INDEX CHUNK** (shared, frequent)
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The full file covers about a hundred lines: section codes, S3 path conventions, Elasticsearch gotchas, differences from our other main pipeline. This wasn't born from a single session. It's the accumulated map of a dense subsystem, built up over time.&lt;/p&gt;

&lt;p&gt;Before it existed, every BOSS PR started the same way: find the relevant Notion tickets, skim through past PRs for context, paste a summary into Claude to catch it up. Now the context is sitting in a knowledge file, ready for Claude to pull in the moment BOSS comes up.&lt;/p&gt;

&lt;h2&gt;What to leave out&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;What files were changed (git knows this)&lt;/li&gt;
&lt;li&gt;Config values, function signatures (read the code)&lt;/li&gt;
&lt;li&gt;Session logs or step-by-step accounts&lt;/li&gt;
&lt;li&gt;Deprecated features (delete from knowledge files)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The heuristic we use: "Would this help Claude understand the system 3 months from now?" If not, cut it.&lt;/p&gt;

&lt;h2&gt;The knowledge update loop&lt;/h2&gt;

&lt;p&gt;None of this works if the knowledge goes stale. What makes it compound over time: at the end of each session, we call &lt;code&gt;/update-knowledge&lt;/code&gt;, a &lt;a href="https://docs.anthropic.com/en/docs/claude-code/skills" rel="noopener noreferrer"&gt;custom skill&lt;/a&gt; that reviews the conversation, checks existing knowledge files, and decides whether the repo's instruction surface needs updating.&lt;/p&gt;

&lt;p&gt;No friction, we just run it. Half of the time, nothing changes. When it does, it's almost always a targeted edit to a knowledge file. &lt;code&gt;CLAUDE.md&lt;/code&gt; moves rarely, maybe once every few weeks when a repo-wide rule shifts. Either way, it goes through a PR like any other change.&lt;/p&gt;

&lt;p&gt;Here's what the skill looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;update-knowledge&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Update project knowledge base after a session&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;

&lt;span class="s"&gt;Review the current conversation and update project knowledge.&lt;/span&gt;

&lt;span class="c1"&gt;## Steps&lt;/span&gt;

&lt;span class="s"&gt;1. Read `CLAUDE.md` to see the knowledge table and existing file list&lt;/span&gt;
&lt;span class="s"&gt;2. Read all files in `.claude/knowledge/`&lt;/span&gt;
&lt;span class="s"&gt;3. Review the conversation and recent git diff&lt;/span&gt;
&lt;span class="na"&gt;4. Update knowledge&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;**Existing file needs update**&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;edit the relevant `.md`&lt;/span&gt;
   &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;**New feature/topic**&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;create a new `.md` and add a row&lt;/span&gt;
     &lt;span class="s"&gt;to the Knowledge table in `CLAUDE.md`&lt;/span&gt;
   &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;**General pattern discovered**&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;add to `CLAUDE.md` directly&lt;/span&gt;
&lt;span class="na"&gt;5. Cleanup pass&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;check all existing knowledge files for stale&lt;/span&gt;
   &lt;span class="s"&gt;content. Trim or delete as needed.&lt;/span&gt;
&lt;span class="nn"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Claude decides what's worth keeping, and what to cut: stale content gets trimmed, not archived.&lt;/p&gt;

&lt;p&gt;After a few weeks of running this, the effect is hard to miss. Claude kickstarts sessions with context that used to require manual pasting. No need to tag a file or invoke a command. Claude reads the knowledge table, sees what's relevant, and loads it. The knowledge files fill in gradually, effortlessly.&lt;/p&gt;

&lt;p&gt;Plus, the workflow changes how we think about sessions. We used to keep a single session alive as long as possible to preserve context. Now we work on a piece of the problem, run &lt;code&gt;/update-knowledge&lt;/code&gt;, and start fresh. Shorter sessions mean a leaner context window. A leaner context means a more reliable Claude.&lt;/p&gt;

&lt;p&gt;Every session leaves the next one a little better equipped.&lt;/p&gt;

</description>
      <category>claudecode</category>
      <category>claudemd</category>
      <category>contextengineering</category>
      <category>aiagents</category>
    </item>
  </channel>
</rss>
