<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: DanielleWashington</title>
    <description>The latest articles on Forem by DanielleWashington (@daniellewashington).</description>
    <link>https://forem.com/daniellewashington</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1750421%2F11b2b8a2-97e8-4cc6-aaff-3a1ad1cbff26.png</url>
      <title>Forem: DanielleWashington</title>
      <link>https://forem.com/daniellewashington</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/daniellewashington"/>
    <language>en</language>
    <item>
      <title>Audit Your Docs Against The Decision-System Framework</title>
      <dc:creator>DanielleWashington</dc:creator>
      <pubDate>Mon, 30 Mar 2026 01:56:59 +0000</pubDate>
      <link>https://forem.com/daniellewashington/audit-your-docs-against-the-decision-system-framework-1l50</link>
      <guid>https://forem.com/daniellewashington/audit-your-docs-against-the-decision-system-framework-1l50</guid>
      <description>&lt;h1&gt;
  
  
  Audit Your Docs Against the Decision-System Framework
&lt;/h1&gt;

&lt;p&gt;Most documentation fails not because something is missing, but because it answers the wrong question.&lt;/p&gt;

&lt;p&gt;We've built encyclopedias when people need guides. We've cataloged every possible path when what readers actually need is someone to say: start here, this way works. The result is what I call the library paradox: all the information exists, but users can't find their way to the answer that matters for their specific situation at this specific moment. The problem isn't missing information. The problem is missing navigation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://doc-audit.clouddani.com"&gt;&lt;code&gt;doc-audit&lt;/code&gt;&lt;/a&gt; is a CLI tool that operationalizes this. It scans your markdown files, classifies each one by decision phase, scores how well it guides a reader toward action, and gives you a prioritized list of what to fix. This post walks through how to use it and how to think about what it's telling you.&lt;/p&gt;




&lt;h2&gt;
  
  
  The framework
&lt;/h2&gt;

&lt;p&gt;Every person who arrives at your documentation is standing at a crossroads. They're not asking "what does this do?" They're asking "what should I do, given my constraints, right now?" That's a decision, not a knowledge transfer. Decisions require navigation, not encyclopedias.&lt;/p&gt;

&lt;p&gt;The Day 0/1/2 framework maps the decision landscape across the full adoption lifecycle.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Day 0, Pre-Commitment.&lt;/strong&gt; The reader hasn't adopted your tool yet. They're evaluating: is this right for my use case? What am I signing up for? What are the tradeoffs I should understand before I commit? Think of this as the moment before buying hiking boots. You need to know if you're going on day hikes or through-hiking the Appalachian Trail. The boots you need are different. If your docs skip this phase, you're losing people before they ever install anything.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Day 1, Getting Started.&lt;/strong&gt; The reader has committed. They want the fastest path from zero to working state. They need a clear sequence, a default configuration that fits most cases, and a success signal at the end. This is base camp. Get the tent up, get oriented, don't try to summit on your first day.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Day 2, Production.&lt;/strong&gt; The reader is past the happy path. Something broke, or they're scaling, or conditions changed. They need troubleshooting guides, operational runbooks, and explicit "if X is happening, do Y" callouts. Day 2 readers are not reading for pleasure on a Sunday morning. They're in the moment of crisis and they need the answer fast.&lt;/p&gt;

&lt;p&gt;Each phase has its own decision architecture. Mapping these explicitly is what turns a documentation suite from an information dump into a navigation system. &lt;/p&gt;

&lt;p&gt;The framework maps the human reader's journey, but your docs now have a second audience that moves through the same content very differently.&lt;/p&gt;

&lt;p&gt;That second audience is the other dimension the tool measures. Your human reader and an agent acting on their behalf have fundamentally different needs. Humans can ask follow-up questions, fill in gaps with context, and probe for answers they didn't know they needed. Agents cannot; they fill gaps not with judgment but with hallucination. A document with clear decision points, explicit next steps, and no assumed context serves both.&lt;/p&gt;




&lt;h2&gt;
  
  
  Run this against your own docs
&lt;/h2&gt;

&lt;p&gt;You'll need Node.js 18 or higher. Clone the repo and install dependencies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/DanielleWashington/doc-audit
&lt;span class="nb"&gt;cd &lt;/span&gt;doc-audit
npm &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To use the &lt;code&gt;doc-audit&lt;/code&gt; command anywhere:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;link&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;doc-audit &lt;span class="nt"&gt;--help&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Step 1: Run the audit
&lt;/h2&gt;

&lt;p&gt;Point &lt;code&gt;doc-audit&lt;/code&gt; at any directory containing &lt;code&gt;.md&lt;/code&gt; files. The bundled sample docs are a good place to start:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;node index.js ./test-docs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This launches interactive mode. Before any prompts appear, &lt;code&gt;doc-audit&lt;/code&gt; has already analyzed every file. The interview that follows lets you review and correct that analysis.&lt;/p&gt;

&lt;h3&gt;
  
  
  What the static analysis does
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;doc-audit&lt;/code&gt; reads each file and runs signal matching against three dictionaries.&lt;/p&gt;

&lt;p&gt;Day 0 signals look for evaluation language: &lt;code&gt;overview&lt;/code&gt;, &lt;code&gt;why use&lt;/code&gt;, &lt;code&gt;compare&lt;/code&gt;, &lt;code&gt;alternatives&lt;/code&gt;, &lt;code&gt;when to use&lt;/code&gt;, &lt;code&gt;who is this for&lt;/code&gt;. Day 1 signals look for onboarding language: &lt;code&gt;install&lt;/code&gt;, &lt;code&gt;quickstart&lt;/code&gt;, &lt;code&gt;tutorial&lt;/code&gt;, &lt;code&gt;getting started&lt;/code&gt;, &lt;code&gt;how to&lt;/code&gt;, &lt;code&gt;prerequisites&lt;/code&gt;. Day 2 signals look for operational language: &lt;code&gt;troubleshoot&lt;/code&gt;, &lt;code&gt;debug&lt;/code&gt;, &lt;code&gt;error&lt;/code&gt;, &lt;code&gt;production&lt;/code&gt;, &lt;code&gt;failing&lt;/code&gt;, &lt;code&gt;incident&lt;/code&gt;, &lt;code&gt;rollback&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The phase with the most matches wins. This isn't perfect. A file titled &lt;code&gt;quickstart.md&lt;/code&gt; could have Day 2 content, which is exactly why the interview step exists.&lt;/p&gt;
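&lt;p&gt;The matching logic is simple enough to sketch. This is a hypothetical approximation, not &lt;code&gt;doc-audit&lt;/code&gt;'s actual source, and the signal lists here are abbreviated:&lt;/p&gt;

```javascript
// Hypothetical sketch of phase detection: count signal-word hits per
// phase, and the phase with the most matches wins. Abbreviated lists.
const SIGNALS = {
  "Day 0": ["overview", "why use", "compare", "alternatives", "when to use"],
  "Day 1": ["install", "quickstart", "tutorial", "getting started", "how to", "prerequisites"],
  "Day 2": ["troubleshoot", "debug", "error", "production", "rollback"],
};

function detectPhase(text) {
  const lower = text.toLowerCase();
  let best = "Day 1"; // default when nothing matches
  let bestCount = 0;
  for (const [phase, words] of Object.entries(SIGNALS)) {
    // Count how many of the phase's signal words appear at least once.
    const count = words.filter((w) => lower.includes(w)).length;
    if (count > bestCount) {
      best = phase;
      bestCount = count;
    }
  }
  return best;
}
```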

&lt;p&gt;Alongside phase detection, each file gets a quality score out of 8. Six signals are checked:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Signal&lt;/th&gt;
&lt;th&gt;Points&lt;/th&gt;
&lt;th&gt;What it measures&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;If/then guidance&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;if ... then/→/do/try/run&lt;/code&gt; patterns, the clearest marker of decision-oriented writing&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Tradeoff language&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;"when not to", "limitation", "⚠", "not recommended", does it help users see around corners?&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ordered steps&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;Numbered lists signal a sequence to follow&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Learning outcome&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;"by the end", "you will learn", sets expectations and helps readers self-select&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Next steps&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;"next steps", "proceed to", does it hand the reader off somewhere?&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Focused length&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;Under 1,000 words. Docs that try to serve all three phases usually serve none well&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The if/then and tradeoff signals carry double weight because they're the hardest to fake. A doc can have ordered steps and a next steps section and still be a knowledge dump. A doc with explicit "if your use case is X, do Y, if it's Z, consider this instead" language has crossed into decision-oriented territory. That's the difference between a map and turn-by-turn directions.&lt;/p&gt;
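&lt;p&gt;A scoring pass in this spirit can be sketched as follows. The regexes are illustrative stand-ins, not the tool's real patterns:&lt;/p&gt;

```javascript
// Hypothetical sketch of the 8-point quality score from the table above.
// The patterns are stand-ins; doc-audit's real heuristics may differ.
function qualityScore(text) {
  const words = text.split(/\s+/).filter(Boolean).length;
  const lower = text.toLowerCase();
  let score = 0;
  if (/if\b.*\b(then|do|try|run)\b/.test(lower)) score += 2;        // if/then guidance
  if (/(when not to|limitation|not recommended)/.test(lower)) score += 2; // tradeoff language
  if (/^\s*\d+\.\s/m.test(text)) score += 1;                        // ordered steps
  if (/(by the end|you will learn)/.test(lower)) score += 1;        // learning outcome
  if (/(next steps|proceed to)/.test(lower)) score += 1;            // next steps
  if (1000 > words) score += 1;                                     // focused length
  return score; // out of 8
}
```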

&lt;h3&gt;
  
  
  The interactive interview
&lt;/h3&gt;

&lt;p&gt;For each file, you'll see its detected phase, quality score, and a short prompt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;──────────────────────────────────────────────────
  api-reference.md
  892 words  ·  Auto-detected: Day 1 — Getting Started
  Decision quality: 2/8
──────────────────────────────────────────────────

? What do you want to do with this file?
❯ Audit it
  Skip it
  Flag for deletion
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Action.&lt;/strong&gt; Skip excludes the file from the report, which is useful for auto-generated files, changelogs, or anything that shouldn't be part of the audit. Flag for deletion marks the file as a candidate for removal and lists it separately in the report. Choose Audit it to continue.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Phase confirmation.&lt;/strong&gt; Correct the auto-detected classification here. An &lt;code&gt;api-reference.md&lt;/code&gt; might have been tagged as Day 1 because it mentions &lt;code&gt;how to use&lt;/code&gt;, but it's actually a reference doc that spans all three phases. Fix it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Decision quality.&lt;/strong&gt; You're asked whether the doc recommends a clear path, presents options but leaves the decision to the reader, or mostly describes without guiding action. Be honest. This is the distinction the whole framework turns on.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If/then and next step.&lt;/strong&gt; Two quick yes/no confirmations that override the static analysis when your judgment differs from the regex.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 2: Read the report
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;╔═══════════════════════════════════════════════════════╗
║  doc-audit Report                                     ║
║  /projects/my-tool/docs                               ║
╚═══════════════════════════════════════════════════════╝

PHASE COVERAGE
  Day 0  ██░░░░░░░░   1 of 4    ⚠ Undercovered
  Day 1  ██████░░░░   2 of 4    ✓
  Day 2  ░░░░░░░░░░   0 of 4    ✗ Missing

DECISION QUALITY (avg: 3.8/8)
  ✓ quickstart.md                  6/8  — Strong decision doc
  ⚠ overview.md                    4/8  — Partially decision-oriented
  ✗ api-reference.md               2/8  — Knowledge dump
  ✗ architecture.md                3/8  — Knowledge dump

RECOMMENDATIONS
  1. Missing Day 2 content — add a troubleshooting or production guide
  2. api-reference.md — low decision quality. Add "If X → do Y" callouts...
  3. architecture.md — no If/then guidance detected...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Phase Coverage&lt;/strong&gt; is the structural view. A missing phase means readers at that stage of the adoption lifecycle have nothing to reach for. This is where the pattern shows up every time: Day 1 coverage is solid, Day 0 is weak, Day 2 is almost entirely absent. Teams onboard users and then abandon them in production.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Decision Quality&lt;/strong&gt; is the per-file view. Knowledge dump means the file explains things but doesn't help the reader decide or act. Strong decision doc means it has explicit guidance, surfaces tradeoffs, and ends by pointing somewhere.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Recommendations&lt;/strong&gt; are prioritized. Phase gaps come first because they're the largest structural problem. Per-file findings follow.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 3: Auto mode and exports for CI
&lt;/h2&gt;

&lt;p&gt;For CI or fast trend-tracking, use auto mode:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;doc-audit ./docs &lt;span class="nt"&gt;--auto&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No prompts, static analysis only. Useful for catching regressions: a phase that was covered last sprint but isn't now, or an average quality score that's drifting down as docs accumulate.&lt;/p&gt;

&lt;p&gt;To capture results for downstream processing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;doc-audit ./docs &lt;span class="nt"&gt;--json&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; audit.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Pipe it into a script that fails CI if a phase is missing or the average quality score drops below your threshold.&lt;/p&gt;
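&lt;p&gt;A minimal gate might look like this. The JSON field names (&lt;code&gt;phaseCounts&lt;/code&gt;, &lt;code&gt;files&lt;/code&gt;, &lt;code&gt;score&lt;/code&gt;) are assumptions about the export shape; check them against your own &lt;code&gt;audit.json&lt;/code&gt; before wiring this into a pipeline:&lt;/p&gt;

```javascript
// Hypothetical CI gate over doc-audit's JSON export. The field names
// (phaseCounts, files, score) are assumptions; adjust to your audit.json.
function checkAudit(report, minAvg) {
  const problems = [];
  // Fail if any phase has zero coverage.
  for (const [phase, count] of Object.entries(report.phaseCounts)) {
    if (count === 0) problems.push(`No ${phase} docs`);
  }
  // Fail if the average quality score drops below the threshold.
  const scores = report.files.map((f) => f.score);
  const avg = scores.reduce((a, b) => a + b, 0) / scores.length;
  if (minAvg > avg) {
    problems.push(`Average quality ${avg.toFixed(1)} below ${minAvg}`);
  }
  return problems; // an empty array means the gate passes
}
```

&lt;p&gt;Run it from a small wrapper script in CI and exit nonzero whenever the returned array is non-empty.&lt;/p&gt;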

&lt;p&gt;For a shareable written record:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;doc-audit ./docs &lt;span class="nt"&gt;--output&lt;/span&gt; docs-audit-march.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Step 4: Act on what you find
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Missing Day 0 content.&lt;/strong&gt; Write a document that explicitly answers "should I use this?", covering what problem it solves, what it's not good for, and how it compares to the obvious alternatives. End it with a decision branch. If your use case is X, proceed to the quickstart. If it's Y, you might want Z instead. Give the reader somewhere to go either way.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Missing Day 2 content.&lt;/strong&gt; Mine your issue tracker, Slack history, or support queue for the five most common things that go wrong. For each one, write an &lt;code&gt;if X is happening → do Y&lt;/code&gt; entry. Day 2 readers are in crisis mode. A doc without explicit structure forces both the human reader and any agent acting on their behalf to fill the gap themselves, and agents fill gaps with hallucination.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Low quality score on an existing doc.&lt;/strong&gt; The fastest fix is adding if/then guidance. Find every place the doc describes a choice and make it explicit: if you're using PostgreSQL, use the connection pool strategy. If you're on SQLite, skip this section entirely. One well-placed callout can move the score significantly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Partially decision-oriented docs.&lt;/strong&gt; These present options but hedge on recommending one. Sometimes that's appropriate; context genuinely varies. But often it's just that no one has committed to a recommended path. If that's the case, commit. Move the edge cases into a collapsible section. Surface the tradeoffs explicitly. Where you can't be prescriptive, providing a decision framework is still better than leaving the reader at a crossroads with no guidance.&lt;/p&gt;




&lt;h2&gt;
  
  
  Where to start
&lt;/h2&gt;

&lt;p&gt;If running this against a large docs suite feels overwhelming, start with your highest-traffic pages. Ask one question for each: what decision does this doc help a reader make? If you can't answer that in one sentence, that's your signal. You don't need to rewrite everything. Tightening the scope of a page to serve one phase, one decision, and one next step is usually enough to move it from Knowledge dump to Partially decision-oriented.&lt;/p&gt;

&lt;p&gt;The goal isn't comprehensive coverage measured by word count. It's whether users can move from uncertainty to confident action with minimal friction. Documentation that does that is a growth lever. Documentation that doesn't generates support tickets asking "should I use X or Y?" for questions the docs technically answered but never resolved.&lt;/p&gt;

&lt;p&gt;Run the audit. Fix the phase gaps. Add the if/then guidance. Run it again.&lt;/p&gt;

&lt;p&gt;The docs that actually move people forward were never the most comprehensive ones. They were the ones that knew which question the reader was standing in front of, and answered it.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Built on the documentation framework by Danielle Washington: &lt;a href="https://dev.to/daniellewashington/documentation-is-a-decision-system-not-a-knowledge-base-4139"&gt;Documentation is a Decision System, Not a Knowledge Base&lt;/a&gt; and &lt;a href="https://dev.to/daniellewashington/humans-need-narrative-agents-need-decisions-your-docs-need-both-2ni8"&gt;The Documentation Framework We Need Doesn't Exist Yet&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Try the web app &lt;a href="https://doc-audit.clouddani.com"&gt;here&lt;/a&gt;.&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>documentation</category>
      <category>devex</category>
      <category>devtools</category>
      <category>devrel</category>
    </item>
    <item>
      <title>Audit Your Docs Against The Decision-System Framework</title>
      <dc:creator>DanielleWashington</dc:creator>
      <pubDate>Mon, 30 Mar 2026 01:56:59 +0000</pubDate>
      <link>https://forem.com/daniellewashington/audit-your-docs-against-the-decision-system-framework-4kan</link>
      <guid>https://forem.com/daniellewashington/audit-your-docs-against-the-decision-system-framework-4kan</guid>
      <description>&lt;h1&gt;
  
  
  Audit Your Docs Against the Decision-System Framework
&lt;/h1&gt;

&lt;p&gt;Most documentation fails not because something is missing, but because it answers the wrong question.&lt;/p&gt;

&lt;p&gt;We've built encyclopedias when people need guides. We've cataloged every possible path when what readers actually need is someone to say: start here, this way works. The result is what I call the library paradox: all the information exists, but users can't find their way to the answer that matters for their specific situation at this specific moment. The problem isn't missing information. The problem is missing navigation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://doc-audit.clouddani.com"&gt;&lt;code&gt;doc-audit&lt;/code&gt;&lt;/a&gt; is a CLI tool that operationalizes this. It scans your markdown files, classifies each one by decision phase, scores how well it guides a reader toward action, and gives you a prioritized list of what to fix. This post walks through how to use it and how to think about what it's telling you.&lt;/p&gt;




&lt;h2&gt;
  
  
  The framework
&lt;/h2&gt;

&lt;p&gt;Every person who arrives at your documentation is standing at a crossroads. They're not asking "what does this do?" They're asking "what should I do, given my constraints, right now?" That's a decision, not a knowledge transfer. Decisions require navigation, not encyclopedias.&lt;/p&gt;

&lt;p&gt;The Day 0/1/2 framework maps the decision landscape across the full adoption lifecycle.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Day 0, Pre-Commitment.&lt;/strong&gt; The reader hasn't adopted your tool yet. They're evaluating: is this right for my use case? What am I signing up for? What are the tradeoffs I should understand before I commit? Think of this as the moment before buying hiking boots. You need to know if you're going on day hikes or through-hiking the Appalachian Trail. The boots you need are different. If your docs skip this phase, you're losing people before they ever install anything.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Day 1, Getting Started.&lt;/strong&gt; The reader has committed. They want the fastest path from zero to working state. They need a clear sequence, a default configuration that fits most cases, and a success signal at the end. This is base camp. Get the tent up, get oriented, don't try to summit on your first day.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Day 2, Production.&lt;/strong&gt; The reader is past the happy path. Something broke, or they're scaling, or conditions changed. They need troubleshooting guides, operational runbooks, and explicit "if X is happening, do Y" callouts. Day 2 readers are not reading for pleasure on a Sunday morning. They're in the moment of crisis and they need the answer fast.&lt;/p&gt;

&lt;p&gt;Each phase has its own decision architecture. Mapping these explicitly is what turns a documentation suite from an information dump into a navigation system. &lt;/p&gt;

&lt;p&gt;The framework maps the human reader's journey, but your docs now have a second audience that moves through the same content very differently.&lt;/p&gt;

&lt;p&gt;That second audience is the other dimension the tool measures. Your human reader and an agent acting on their behalf have fundamentally different needs. Humans can ask follow-up questions, fill in gaps with context, and probe for answers they didn't know they needed. Agents cannot; they fill gaps not with judgment but with hallucination. A document with clear decision points, explicit next steps, and no assumed context serves both.&lt;/p&gt;




&lt;h2&gt;
  
  
  Run this against your own docs
&lt;/h2&gt;

&lt;p&gt;You'll need Node.js 18 or higher. Clone the repo and install dependencies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/DanielleWashington/doc-audit
&lt;span class="nb"&gt;cd &lt;/span&gt;doc-audit
npm &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To use the &lt;code&gt;doc-audit&lt;/code&gt; command anywhere:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;link&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;doc-audit &lt;span class="nt"&gt;--help&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Step 1: Run the audit
&lt;/h2&gt;

&lt;p&gt;Point &lt;code&gt;doc-audit&lt;/code&gt; at any directory containing &lt;code&gt;.md&lt;/code&gt; files. The bundled sample docs are a good place to start:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;node index.js ./test-docs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This launches interactive mode. Before any prompts appear, &lt;code&gt;doc-audit&lt;/code&gt; has already analyzed every file. The interview that follows lets you review and correct that analysis.&lt;/p&gt;

&lt;h3&gt;
  
  
  What the static analysis does
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;doc-audit&lt;/code&gt; reads each file and runs signal matching against three dictionaries.&lt;/p&gt;

&lt;p&gt;Day 0 signals look for evaluation language: &lt;code&gt;overview&lt;/code&gt;, &lt;code&gt;why use&lt;/code&gt;, &lt;code&gt;compare&lt;/code&gt;, &lt;code&gt;alternatives&lt;/code&gt;, &lt;code&gt;when to use&lt;/code&gt;, &lt;code&gt;who is this for&lt;/code&gt;. Day 1 signals look for onboarding language: &lt;code&gt;install&lt;/code&gt;, &lt;code&gt;quickstart&lt;/code&gt;, &lt;code&gt;tutorial&lt;/code&gt;, &lt;code&gt;getting started&lt;/code&gt;, &lt;code&gt;how to&lt;/code&gt;, &lt;code&gt;prerequisites&lt;/code&gt;. Day 2 signals look for operational language: &lt;code&gt;troubleshoot&lt;/code&gt;, &lt;code&gt;debug&lt;/code&gt;, &lt;code&gt;error&lt;/code&gt;, &lt;code&gt;production&lt;/code&gt;, &lt;code&gt;failing&lt;/code&gt;, &lt;code&gt;incident&lt;/code&gt;, &lt;code&gt;rollback&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The phase with the most matches wins. This isn't perfect. A file titled &lt;code&gt;quickstart.md&lt;/code&gt; could have Day 2 content, which is exactly why the interview step exists.&lt;/p&gt;
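&lt;p&gt;The matching logic is simple enough to sketch. This is a hypothetical approximation, not &lt;code&gt;doc-audit&lt;/code&gt;'s actual source, and the signal lists here are abbreviated:&lt;/p&gt;

```javascript
// Hypothetical sketch of phase detection: count signal-word hits per
// phase, and the phase with the most matches wins. Abbreviated lists.
const SIGNALS = {
  "Day 0": ["overview", "why use", "compare", "alternatives", "when to use"],
  "Day 1": ["install", "quickstart", "tutorial", "getting started", "how to", "prerequisites"],
  "Day 2": ["troubleshoot", "debug", "error", "production", "rollback"],
};

function detectPhase(text) {
  const lower = text.toLowerCase();
  let best = "Day 1"; // default when nothing matches
  let bestCount = 0;
  for (const [phase, words] of Object.entries(SIGNALS)) {
    // Count how many of the phase's signal words appear at least once.
    const count = words.filter((w) => lower.includes(w)).length;
    if (count > bestCount) {
      best = phase;
      bestCount = count;
    }
  }
  return best;
}
```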

&lt;p&gt;Alongside phase detection, each file gets a quality score out of 8. Six signals are checked:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Signal&lt;/th&gt;
&lt;th&gt;Points&lt;/th&gt;
&lt;th&gt;What it measures&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;If/then guidance&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;if ... then/→/do/try/run&lt;/code&gt; patterns, the clearest marker of decision-oriented writing&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Tradeoff language&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;"when not to", "limitation", "⚠", "not recommended", does it help users see around corners?&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ordered steps&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;Numbered lists signal a sequence to follow&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Learning outcome&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;"by the end", "you will learn", sets expectations and helps readers self-select&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Next steps&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;"next steps", "proceed to", does it hand the reader off somewhere?&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Focused length&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;Under 1,000 words. Docs that try to serve all three phases usually serve none well&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The if/then and tradeoff signals carry double weight because they're the hardest to fake. A doc can have ordered steps and a next steps section and still be a knowledge dump. A doc with explicit "if your use case is X, do Y, if it's Z, consider this instead" language has crossed into decision-oriented territory. That's the difference between a map and turn-by-turn directions.&lt;/p&gt;
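&lt;p&gt;A scoring pass in this spirit can be sketched as follows. The regexes are illustrative stand-ins, not the tool's real patterns:&lt;/p&gt;

```javascript
// Hypothetical sketch of the 8-point quality score from the table above.
// The patterns are stand-ins; doc-audit's real heuristics may differ.
function qualityScore(text) {
  const words = text.split(/\s+/).filter(Boolean).length;
  const lower = text.toLowerCase();
  let score = 0;
  if (/if\b.*\b(then|do|try|run)\b/.test(lower)) score += 2;        // if/then guidance
  if (/(when not to|limitation|not recommended)/.test(lower)) score += 2; // tradeoff language
  if (/^\s*\d+\.\s/m.test(text)) score += 1;                        // ordered steps
  if (/(by the end|you will learn)/.test(lower)) score += 1;        // learning outcome
  if (/(next steps|proceed to)/.test(lower)) score += 1;            // next steps
  if (1000 > words) score += 1;                                     // focused length
  return score; // out of 8
}
```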

&lt;h3&gt;
  
  
  The interactive interview
&lt;/h3&gt;

&lt;p&gt;For each file, you'll see its detected phase, quality score, and a short prompt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;──────────────────────────────────────────────────
  api-reference.md
  892 words  ·  Auto-detected: Day 1 — Getting Started
  Decision quality: 2/8
──────────────────────────────────────────────────

? What do you want to do with this file?
❯ Audit it
  Skip it
  Flag for deletion
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Action.&lt;/strong&gt; Skip excludes the file from the report, which is useful for auto-generated files, changelogs, or anything that shouldn't be part of the audit. Flag for deletion marks the file as a candidate for removal and lists it separately in the report. Choose Audit it to continue.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Phase confirmation.&lt;/strong&gt; Correct the auto-detected classification here. An &lt;code&gt;api-reference.md&lt;/code&gt; might have been tagged as Day 1 because it mentions &lt;code&gt;how to use&lt;/code&gt;, but it's actually a reference doc that spans all three phases. Fix it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Decision quality.&lt;/strong&gt; You're asked whether the doc recommends a clear path, presents options but leaves the decision to the reader, or mostly describes without guiding action. Be honest. This is the distinction the whole framework turns on.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If/then and next step.&lt;/strong&gt; Two quick yes/no confirmations that override the static analysis when your judgment differs from the regex.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 2: Read the report
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;╔═══════════════════════════════════════════════════════╗
║  doc-audit Report                                     ║
║  /projects/my-tool/docs                               ║
╚═══════════════════════════════════════════════════════╝

PHASE COVERAGE
  Day 0  ██░░░░░░░░   1 of 4    ⚠ Undercovered
  Day 1  ██████░░░░   2 of 4    ✓
  Day 2  ░░░░░░░░░░   0 of 4    ✗ Missing

DECISION QUALITY (avg: 3.8/8)
  ✓ quickstart.md                  6/8  — Strong decision doc
  ⚠ overview.md                    4/8  — Partially decision-oriented
  ✗ api-reference.md               2/8  — Knowledge dump
  ✗ architecture.md                3/8  — Knowledge dump

RECOMMENDATIONS
  1. Missing Day 2 content — add a troubleshooting or production guide
  2. api-reference.md — low decision quality. Add "If X → do Y" callouts...
  3. architecture.md — no If/then guidance detected...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Phase Coverage&lt;/strong&gt; is the structural view. A missing phase means readers at that stage of the adoption lifecycle have nothing to reach for. This is where the pattern shows up every time: Day 1 coverage is solid, Day 0 is weak, Day 2 is almost entirely absent. Teams onboard users and then abandon them in production.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Decision Quality&lt;/strong&gt; is the per-file view. Knowledge dump means the file explains things but doesn't help the reader decide or act. Strong decision doc means it has explicit guidance, surfaces tradeoffs, and ends by pointing somewhere.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Recommendations&lt;/strong&gt; are prioritized. Phase gaps come first because they're the largest structural problem. Per-file findings follow.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 3: Auto mode and exports for CI
&lt;/h2&gt;

&lt;p&gt;For CI or fast trend-tracking, use auto mode:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;doc-audit ./docs &lt;span class="nt"&gt;--auto&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No prompts, static analysis only. Useful for catching regressions: a phase that was covered last sprint but isn't now, or an average quality score that's drifting down as docs accumulate.&lt;/p&gt;

&lt;p&gt;To capture results for downstream processing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;doc-audit ./docs &lt;span class="nt"&gt;--json&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; audit.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Pipe it into a script that fails CI if a phase is missing or the average quality score drops below your threshold.&lt;/p&gt;
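&lt;p&gt;A minimal gate might look like this. The JSON field names (&lt;code&gt;phaseCounts&lt;/code&gt;, &lt;code&gt;files&lt;/code&gt;, &lt;code&gt;score&lt;/code&gt;) are assumptions about the export shape; check them against your own &lt;code&gt;audit.json&lt;/code&gt; before wiring this into a pipeline:&lt;/p&gt;

```javascript
// Hypothetical CI gate over doc-audit's JSON export. The field names
// (phaseCounts, files, score) are assumptions; adjust to your audit.json.
function checkAudit(report, minAvg) {
  const problems = [];
  // Fail if any phase has zero coverage.
  for (const [phase, count] of Object.entries(report.phaseCounts)) {
    if (count === 0) problems.push(`No ${phase} docs`);
  }
  // Fail if the average quality score drops below the threshold.
  const scores = report.files.map((f) => f.score);
  const avg = scores.reduce((a, b) => a + b, 0) / scores.length;
  if (minAvg > avg) {
    problems.push(`Average quality ${avg.toFixed(1)} below ${minAvg}`);
  }
  return problems; // an empty array means the gate passes
}
```

&lt;p&gt;Run it from a small wrapper script in CI and exit nonzero whenever the returned array is non-empty.&lt;/p&gt;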

&lt;p&gt;For a shareable written record:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;doc-audit ./docs &lt;span class="nt"&gt;--output&lt;/span&gt; docs-audit-march.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Step 4: Act on what you find
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Missing Day 0 content.&lt;/strong&gt; Write a document that explicitly answers "should I use this?", covering what problem it solves, what it's not good for, and how it compares to the obvious alternatives. End it with a decision branch. If your use case is X, proceed to the quickstart. If it's Y, you might want Z instead. Give the reader somewhere to go either way.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Missing Day 2 content.&lt;/strong&gt; Mine your issue tracker, Slack history, or support queue for the five most common things that go wrong. For each one, write an &lt;code&gt;if X is happening → do Y&lt;/code&gt; entry. Day 2 readers are in crisis mode. A doc without explicit structure forces both the human reader and any agent acting on their behalf to fill the gap themselves, and agents fill gaps with hallucination.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Low quality score on an existing doc.&lt;/strong&gt; The fastest fix is adding if/then guidance. Find every place the doc describes a choice and make it explicit: if you're using PostgreSQL, use the connection pool strategy. If you're on SQLite, skip this section entirely. One well-placed callout can move the score significantly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Partially decision-oriented docs.&lt;/strong&gt; These present options but hedge on recommending one. Sometimes that's appropriate; context genuinely varies. But often it's just that no one has committed to a recommended path. If that's the case, commit. Move the edge cases into a collapsible section. Surface the tradeoffs explicitly. Providing a framework where you can't be prescriptive is still better than leaving the reader at a crossroads with no guidance.&lt;/p&gt;




&lt;h2&gt;
  
  
  Where to start
&lt;/h2&gt;

&lt;p&gt;If running this against a large docs suite feels overwhelming, start with your highest-traffic pages. Ask one question for each: what decision does this doc help a reader make? If you can't answer that in one sentence, that's your signal. You don't need to rewrite everything. Tightening the scope of a page to serve one phase, one decision, and one next step is usually enough to move it from Knowledge dump to Partially decision-oriented.&lt;/p&gt;

&lt;p&gt;The goal isn't comprehensive coverage measured by word count. It's whether users can move from uncertainty to confident action with minimal friction. Documentation that does that is a growth lever. Documentation that doesn't generates support tickets asking "should I use X or Y?" for questions the docs technically answered but never resolved.&lt;/p&gt;

&lt;p&gt;Run the audit. Fix the phase gaps. Add the if/then guidance. Run it again.&lt;/p&gt;

&lt;p&gt;The docs that actually move people forward were never the most comprehensive ones. They were the ones that knew which question the reader was standing in front of, and answered it.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Built on the documentation framework by Danielle Washington: &lt;a href="https://dev.to/daniellewashington/documentation-is-a-decision-system-not-a-knowledge-base-4139"&gt;Documentation is a Decision System, Not a Knowledge Base&lt;/a&gt; and &lt;a href="https://dev.to/daniellewashington/humans-need-narrative-agents-need-decisions-your-docs-need-both-2ni8"&gt;The Documentation Framework We Need Doesn't Exist Yet&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Try the web app &lt;a href="//doc-audit.clouddani.com"&gt;here&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>documentation</category>
      <category>devex</category>
      <category>devtools</category>
      <category>devrel</category>
    </item>
    <item>
      <title>The Documentation Framework We Need Doesn't Exist Yet</title>
      <dc:creator>DanielleWashington</dc:creator>
      <pubDate>Tue, 24 Mar 2026 01:41:15 +0000</pubDate>
      <link>https://forem.com/daniellewashington/humans-need-narrative-agents-need-decisions-your-docs-need-both-2ni8</link>
      <guid>https://forem.com/daniellewashington/humans-need-narrative-agents-need-decisions-your-docs-need-both-2ni8</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;We can no longer write exclusively for a human audience, someone who reads carefully, asks follow-up questions, and fills in gaps with context and common sense.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I bet you've seen it everywhere: "your docs need to be readable for agents." Agents, agents, agents, cue the Brady Bunch theme. Everyone is talking about agents taking technical writing jobs, AI this, AI that. But where are the actual solutions?&lt;/p&gt;

&lt;p&gt;The technical writing industry has changed. We can no longer write exclusively for a human audience, someone who reads carefully, asks follow-up questions, and fills in gaps with context and common sense. Our docs now have a second audience. One that doesn't probe for answers, doesn't ask for clarification, and fills gaps not with judgment but with hallucination.&lt;/p&gt;

&lt;p&gt;That audience is agents. And we need to start writing for them.&lt;/p&gt;

&lt;p&gt;But honestly, who actually wants to read pure reference documentation? If this → then that. Condition met → execute action. Technically precise and completely soulless.&lt;/p&gt;

&lt;p&gt;The Diataxis framework has served us well: tutorials, how-to guides, reference, explanation. Clean separations, clear purposes. But it was built for human readers with patience and context. It wasn't built for agents. And it wasn't built for the developer at 11pm with a production issue who doesn't have time to wade through four pages of deployment documentation praying the answer is somewhere in there.&lt;/p&gt;

&lt;p&gt;Most developers aren't reading docs on a Sunday morning for pleasure. They're reading because something is broken. So are we writing for the moment of curiosity or the moment of crisis? And now that agents are in the mix, consuming our docs to make decisions on behalf of that same frustrated developer, the question gets sharper.&lt;/p&gt;

&lt;p&gt;What’s the bridge between barebones and overly informative?&lt;/p&gt;

&lt;p&gt;What does great documentation actually do? To me, it answers the questions you hadn't thought to ask yet. The ones you didn't know you needed until you were already stuck.&lt;/p&gt;

&lt;p&gt;An agent cannot ask follow-up questions. It cannot pause mid-task and realize it needs more context. It cannot intuit that the edge case you didn't document is the one it's about to hit. It will simply proceed and fill the gap with its best guess, which is another word for hallucination.&lt;/p&gt;

&lt;p&gt;Documentation that already anticipates the unasked question serves both readers. The human who didn't know what they didn't know. And the agent that cannot know what it wasn't told.&lt;/p&gt;

&lt;p&gt;The goal has never been "write for humans" or "write for agents." The goal was always to write for the moment before the question forms.&lt;br&gt;
We just didn't know we needed to say that out loud until now.&lt;/p&gt;

&lt;p&gt;So what does this actually look like in practice?&lt;/p&gt;

&lt;p&gt;Start here: before you write a single word, ask yourself two questions. What decisions are people reading this doc trying to make? And what will their likely next step be?&lt;br&gt;
Not "what information do I need to convey?" That's the old question. That's how you end up with four pages of similarly named documentation that answers everything except the thing the reader actually needed.&lt;/p&gt;

&lt;p&gt;The new question is about decisions and momentum. Where is this person trying to go? What do they need to know to get there? And what comes after that?&lt;/p&gt;

&lt;p&gt;Something shifts when you write from that vantage. The document stops being a repository and starts being a guide. A guide with clear decision points, explicit next steps, and no assumed context is something both a human and an agent can actually use.&lt;/p&gt;

&lt;p&gt;The decision your reader is trying to make isn't static. It changes depending on where they are in their journey.&lt;br&gt;
Day 0 is testing. Day 1 is production. Day 2 is maintenance. Three stages, three completely different goals, three completely different things a developer, or an agent acting on their behalf, actually needs from your documentation.&lt;/p&gt;

&lt;p&gt;A Day 0 developer needs to know if your product does what they think. A Day 1 developer needs to know exactly what to do and what happens if they get it wrong. And a Day 2 developer already knows your product but they're reading because something changed or something broke and they need the answer fast.&lt;/p&gt;

&lt;p&gt;Try to write the same doc for all three and you've wasted everyone's time.&lt;/p&gt;

&lt;p&gt;So where do you start?&lt;/p&gt;

&lt;p&gt;Here are three things you can do tomorrow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Audit your existing docs and ask one question for each one: Why would a reader click on this page? What decision does this doc help a reader make? If you can't answer that in one sentence, that’s a signal that you need to tighten the scope of the page. &lt;/li&gt;
&lt;li&gt;Identify which stage your reader is in. Are they Day 0, Day 1, or Day 2? Write to that stage's specific goal, not all three at once. A developer testing your product and a developer maintaining it in production are not the same reader. Stop writing like they are.&lt;/li&gt;
&lt;li&gt;If possible, end every section with an explicit next step. Not a "see also" or a list of related articles. If X → do Y. Give both your human reader and any agent acting on their behalf a clear path forward.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The conversation about agent-readable documentation isn't going away. But talking about it without changing how we actually write is just noise.&lt;/p&gt;

&lt;p&gt;We now have somewhere to start.&lt;/p&gt;

</description>
      <category>devex</category>
      <category>documentation</category>
      <category>agents</category>
      <category>ai</category>
    </item>
    <item>
      <title>When Docs Meet Tools: The EKS Configurator Story</title>
      <dc:creator>DanielleWashington</dc:creator>
      <pubDate>Mon, 02 Feb 2026 15:56:40 +0000</pubDate>
      <link>https://forem.com/daniellewashington/when-docs-meet-tools-the-eks-configurator-story-1c7k</link>
      <guid>https://forem.com/daniellewashington/when-docs-meet-tools-the-eks-configurator-story-1c7k</guid>
      <description>&lt;p&gt;&lt;em&gt;Note: This tool was never officially released. The day I was scheduled to demo it to our wider engineering team, restructuring happened—affecting me. This post is my way of giving the work the light of day it deserves. The tool is live at the link below, built through countless iterations and hard-won lessons about what makes developer tooling actually useful.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Documentation Paradox
&lt;/h2&gt;

&lt;p&gt;I was drowning in my own documentation. As a technical documentation writer on the Weaviate developer education and experience team, I'd spent months writing comprehensive guides about deploying our vector database on EKS. Configuration guides, best practices, security recommendations, networking tutorials—all meticulously crafted, reviewed, and published. Yet the support tickets kept climbing.&lt;/p&gt;

&lt;p&gt;"How do I configure my node groups for Weaviate?"&lt;br&gt;&lt;br&gt;
"What instance types should I use for vector indexing workloads?"&lt;br&gt;&lt;br&gt;
"My Weaviate pods keep getting evicted—what am I doing wrong?"&lt;br&gt;&lt;br&gt;
"Can someone just tell me what YAML I need?"&lt;/p&gt;

&lt;p&gt;The pattern was clear: developers weren't struggling because the information didn't exist. They were struggling because &lt;strong&gt;assembling that information into a working Weaviate deployment on EKS was overwhelming&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;I needed to find a better way.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem: When Complexity Meets Choice
&lt;/h2&gt;

&lt;p&gt;Amazon Elastic Kubernetes Service (EKS) is powerful, but that power comes with complexity—especially when you're deploying a stateful, memory-intensive application like Weaviate. A production-ready EKS cluster for Weaviate requires decisions across multiple dimensions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Infrastructure&lt;/strong&gt;: Node groups, instance types (memory-optimized vs compute-optimized), scaling policies for vector workloads&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Storage&lt;/strong&gt;: EBS volume types (gp3 vs io2), provisioning for HNSW indices, persistent volume configurations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Networking&lt;/strong&gt;: VPC configuration, subnets, security groups, CNI plugins, service mesh considerations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security&lt;/strong&gt;: IAM roles for S3 backups, RBAC policies, pod security standards, encryption at rest&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Observability&lt;/strong&gt;: Prometheus metrics for query performance, CloudWatch integration, distributed tracing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Add-ons&lt;/strong&gt;: CoreDNS, kube-proxy, VPC CNI, EBS CSI driver, cluster autoscaler&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And here's where it gets tricky for Weaviate deployments: each decision influences others in ways specific to vector database workloads.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Choose &lt;code&gt;r6i&lt;/code&gt; memory-optimized instances? You'll need specific EBS volume configurations to match IOPS requirements for HNSW indexing.&lt;/li&gt;
&lt;li&gt;Enable multi-tenancy? Your RBAC and network policies need to isolate tenant data.&lt;/li&gt;
&lt;li&gt;Planning for horizontal scaling? You need StatefulSets with headless services and proper pod disruption budgets.&lt;/li&gt;
&lt;li&gt;Using S3 for backups? You'll need IAM roles with specific permissions attached to service accounts via IRSA (IAM Roles for Service Accounts).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I'd written comprehensive documentation covering each topic. We had guides on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Selecting instance types based on vector dimensionality and dataset size&lt;/li&gt;
&lt;li&gt;Configuring persistent storage for optimal HNSW performance&lt;/li&gt;
&lt;li&gt;Setting up monitoring for query latency and indexing throughput&lt;/li&gt;
&lt;li&gt;Implementing backup strategies with S3 integration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But developers still needed to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Read multiple scattered pages across Weaviate docs and AWS documentation&lt;/li&gt;
&lt;li&gt;Understand how EKS, Kubernetes, and Weaviate-specific requirements interact&lt;/li&gt;
&lt;li&gt;Translate conceptual knowledge into correct YAML manifests&lt;/li&gt;
&lt;li&gt;Size their cluster appropriately for their specific vector workload&lt;/li&gt;
&lt;li&gt;Validate their choices against best practices&lt;/li&gt;
&lt;li&gt;Troubleshoot when combinations didn't work (and they often didn't)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I was asking people to become experts in EKS, Kubernetes storage, vector database performance characteristics, AND AWS networking just to deploy Weaviate. That's unreasonable.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Inspiration: Learning from Competitors and Our Own Tools
&lt;/h2&gt;

&lt;p&gt;Then I saw something that shifted my thinking. One of our competitors had built a configuration and sizing tool for their vector database. It asked questions about dataset size, query patterns, and performance requirements, then recommended hardware specifications and configuration settings. It wasn't perfect, but users loved it.&lt;/p&gt;

&lt;p&gt;Around the same time, I was looking at Weaviate's existing Docker Configurator—a tool that helped users generate &lt;code&gt;docker-compose.yml&lt;/code&gt; files for local development. It worked beautifully for getting developers started quickly without drowning them in documentation.&lt;/p&gt;

&lt;p&gt;The connection clicked: &lt;strong&gt;What if I built something similar for EKS deployments?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This wouldn't just be nice to have—it could be a competitive differentiator. If our competitor could help users size their infrastructure, why couldn't we help them configure an entire production-ready EKS cluster specifically optimized for Weaviate?&lt;/p&gt;

&lt;h2&gt;
  
  
  The Realization: Documentation Has Limits
&lt;/h2&gt;

&lt;p&gt;After analyzing support patterns and seeing what worked elsewhere, I had an uncomfortable realization: &lt;strong&gt;some concepts resist text-based explanation&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;When configuration involves:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Multiple interdependent choices&lt;/strong&gt; (changing instance type affects storage IOPS, which affects indexing performance, which affects scaling decisions)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Context-dependent recommendations&lt;/strong&gt; (a semantic search application has different requirements than a recommendation engine)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Complex syntax&lt;/strong&gt; (Kubernetes YAML that must be precisely correct, plus Helm values, plus AWS-specific annotations)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Validation requirements&lt;/strong&gt; (certain combinations are invalid or suboptimal for vector workloads)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Domain-specific expertise&lt;/strong&gt; (understanding both EKS operations AND vector database performance characteristics)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;...traditional documentation struggles. I could write it, but developers couldn't easily &lt;em&gt;apply&lt;/em&gt; it.&lt;/p&gt;

&lt;p&gt;I needed a different approach.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Hypothesis: Show, Don't Just Tell
&lt;/h2&gt;

&lt;p&gt;What if instead of explaining all the configuration possibilities, I &lt;strong&gt;guided developers through making the right choices for their specific Weaviate deployment&lt;/strong&gt;?&lt;/p&gt;

&lt;p&gt;What if the tool could:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ask contextual questions about their vector search use case&lt;/li&gt;
&lt;li&gt;Recommend instance types based on vector dimensionality and dataset size&lt;/li&gt;
&lt;li&gt;Explain options inline, right when decisions are made&lt;/li&gt;
&lt;li&gt;Validate combinations specific to Weaviate workloads in real-time&lt;/li&gt;
&lt;li&gt;Generate correct, tested EKS and Weaviate configuration automatically&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This wouldn't replace my documentation—it would &lt;strong&gt;integrate with it&lt;/strong&gt;, creating a complementary experience. Just like our Docker Configurator worked alongside our Docker deployment docs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building the EKS Configurator
&lt;/h2&gt;

&lt;p&gt;I built the &lt;a href="https://k8s-config-nzfndvmnwxppa6zegwocxm.streamlit.app/" rel="noopener noreferrer"&gt;EKS Configurator&lt;/a&gt; as an interactive tool that bridges the gap between documentation and implementation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Design Principles
&lt;/h3&gt;

&lt;p&gt;I needed this tool to feel natural, not like another form to fill out. Here's what guided my design:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Context-Aware Guidance for Weaviate Workloads&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
The tool knows Weaviate-specific context that generic EKS docs can't anticipate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Actionable Output&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
I generate complete, ready-to-use configurations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;eksctl&lt;/code&gt; cluster configuration YAML&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;storage-classes&lt;/code&gt; configuration templates for EBS volumes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No translation layer—just copy, customize if needed, and deploy.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Technical Stack
&lt;/h3&gt;

&lt;p&gt;I chose &lt;strong&gt;Streamlit&lt;/strong&gt; for rapid development. As a self-taught developer and technical writer, I needed something I could iterate on quickly without wrestling with frontend frameworks. Streamlit's Python-based approach let me focus on the logic and user experience.&lt;/p&gt;

&lt;p&gt;The architecture:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Input Layer:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Streamlit forms and widgets capture user requirements&lt;/li&gt;
&lt;li&gt;Session state management keeps track of choices across pages&lt;/li&gt;
&lt;li&gt;Conditional rendering shows/hides options based on context&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Decision Engine:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Python logic validates combinations and applies AWS best practices&lt;/li&gt;
&lt;li&gt;Weaviate-specific sizing calculations (memory = vectors × dimensions × 4 bytes × safety factor)&lt;/li&gt;
&lt;li&gt;Decision trees map use cases to optimal configurations&lt;/li&gt;
&lt;/ul&gt;
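&lt;p&gt;The sizing rule above can be sketched as a small helper. The 2x safety factor and the example numbers are assumptions for illustration, not official Weaviate sizing guidance:&lt;/p&gt;

```python
# A minimal sketch of the sizing rule: memory = vectors x dimensions x 4 bytes
# x safety factor. The 2x factor is an assumed headroom for the HNSW graph and
# query-time overhead; tune it for your workload.
def estimate_memory_gib(num_vectors: int, dimensions: int,
                        safety_factor: float = 2.0) -> float:
    # float32 vectors cost num_vectors * dimensions * 4 bytes before headroom
    bytes_needed = num_vectors * dimensions * 4 * safety_factor
    return bytes_needed / (1024 ** 3)

# e.g. 10M 768-dimensional vectors at a 2x factor -> roughly 57 GiB
print(round(estimate_memory_gib(10_000_000, 768), 1))
```

An estimate like this is what maps a user's "10 million vectors, 768 dimensions" answer to an instance-type recommendation.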

&lt;p&gt;&lt;strong&gt;Template Generation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;eks-cluster-config&lt;/code&gt; &lt;/li&gt;
&lt;li&gt;
&lt;code&gt;storage-classes&lt;/code&gt; for EBS volumes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The key technical challenge was &lt;strong&gt;balancing flexibility with simplicity&lt;/strong&gt;. I didn't want to expose every possible EKS configuration parameter, but I needed enough options for real-world use cases.&lt;/p&gt;

&lt;p&gt;Another challenge: &lt;strong&gt;keeping generated YAML valid&lt;/strong&gt;. I wrote validators that check:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Required fields are present&lt;/li&gt;
&lt;li&gt;Values match expected types and formats&lt;/li&gt;
&lt;li&gt;Kubernetes resource names follow naming conventions&lt;/li&gt;
&lt;li&gt;Storage sizes are appropriate for instance types&lt;/li&gt;
&lt;li&gt;Memory requests don't exceed node capacity&lt;/li&gt;
&lt;/ul&gt;
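&lt;p&gt;In the spirit of those checks, a validator can be as simple as a function that returns a list of errors. The field names and the instance-type capacity table below are illustrative, not the tool's real API:&lt;/p&gt;

```python
import re

# Illustrative validator sketch. Field names and the capacity table are
# hypothetical stand-ins for the real configurator's internals.
DNS1123 = re.compile(r"^[a-z0-9]([-a-z0-9]*[a-z0-9])?$")   # k8s name convention
NODE_MEMORY_GIB = {"r6i.2xlarge": 64, "r6i.4xlarge": 128}  # assumed lookup table

def validate(config: dict) -> list:
    errors = []
    # 1. Required fields are present
    for field in ("cluster_name", "instance_type", "memory_request_gib"):
        if field not in config:
            errors.append(f"missing required field: {field}")
    if errors:
        return errors
    # 2. Kubernetes resource names follow naming conventions
    if not DNS1123.match(config["cluster_name"]):
        errors.append("cluster_name is not a valid Kubernetes resource name")
    # 3. Memory requests don't exceed node capacity
    capacity = NODE_MEMORY_GIB.get(config["instance_type"])
    if capacity is None:
        errors.append(f"unknown instance type: {config['instance_type']}")
    elif config["memory_request_gib"] > capacity:
        errors.append("memory request exceeds node capacity")
    return errors
```

Running every generated config through a function like this before showing it to the user is what turns "best practices" from prose into guarantees.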

&lt;p&gt;The tool makes it feel less like filling out a form and more like consulting with a Weaviate deployment expert.&lt;/p&gt;

&lt;h2&gt;
  
  
  Iteration Through Feedback
&lt;/h2&gt;

&lt;p&gt;The first version was... imperfect. I learned quickly through user feedback and support ticket patterns:&lt;/p&gt;

&lt;h3&gt;
  
  
  Too Many Options
&lt;/h3&gt;

&lt;p&gt;My first iteration tried to expose every possible EKS configuration option because I wanted to be comprehensive. Users were paralyzed by choice. &lt;/p&gt;

&lt;p&gt;I was recreating the documentation problem in tool form.&lt;/p&gt;

&lt;p&gt;I learned to provide &lt;strong&gt;sensible defaults&lt;/strong&gt; based on environment type, with the ability to customize when needed. Most users just needed to specify their workload characteristics—the tool could infer the rest.&lt;/p&gt;

&lt;h3&gt;
  
  
  One-Size-Fits-All
&lt;/h3&gt;

&lt;p&gt;My first templates assumed everyone wanted production-ready configurations with high availability, monitoring, and security hardening. But some users just needed a quick dev environment to test Weaviate features. Others needed proof-of-concepts with minimal costs.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Impact
&lt;/h2&gt;

&lt;p&gt;Honestly, I didn't expect the impact to be significant, but internally the signals were clear: this tool would be helpful for our community users.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Personal Win:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The tool became a differentiator, not just a helper—exactly what I'd hoped when I first saw our competitor's sizing tool.&lt;/p&gt;

&lt;p&gt;The tool and documentation became complementary, each strengthening the other. The tool didn't replace my docs; it made them more accessible and actionable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lessons Learned: Tools vs. Docs
&lt;/h2&gt;

&lt;p&gt;Building the EKS Configurator taught me when to build tools instead of (or alongside) documentation. Here's my framework:&lt;/p&gt;

&lt;h3&gt;
  
  
  Build a Tool When:
&lt;/h3&gt;

&lt;p&gt;✅ &lt;strong&gt;Configuration has many interdependencies&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Manual validation is error-prone. For Weaviate on EKS, instance type affects memory, which affects storage sizing, which affects IOPS requirements, which affects volume type selection. A tool can validate these combinations automatically.&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Context dramatically changes recommendations&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
A semantic search application has different requirements than a recommendation engine. Static docs can't be sufficiently specific for every use case, but a tool can ask questions and tailor output.&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Syntax precision is critical&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Kubernetes YAML is unforgiving. One wrong indent, one missing field, one incompatible value—and your deployment fails. Generating correct YAML prevents hours of troubleshooting.&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Onboarding friction is measurable&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
High support ticket volumes and long time-to-first-deployment indicate pain. If you're repeatedly answering the same configuration questions, that's a signal.&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Validation can be automated&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
If you can programmatically determine "this combination won't work" or "this configuration is suboptimal," build that into a tool. Don't make every user learn the hard way.&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;You have working examples to build from&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
I had the Docker Configurator as a template and competitor tools as inspiration. Starting from scratch is harder.&lt;/p&gt;

&lt;h3&gt;
  
  
  Stick with Documentation When:
&lt;/h3&gt;

&lt;p&gt;📝 &lt;strong&gt;The concept is fundamentally simple&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Don't build a configurator for "How to install kubectl." That's over-engineering.&lt;/p&gt;

&lt;p&gt;📝 &lt;strong&gt;Flexibility is paramount&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Some users need complete control without guardrails. Advanced users might want configurations the tool doesn't support. Always provide an escape hatch to manual configuration.&lt;/p&gt;

&lt;p&gt;📝 &lt;strong&gt;Maintenance burden would be high&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
If your configuration options change weekly, a tool might become a maintenance nightmare. I was lucky—EKS best practices are relatively stable.&lt;/p&gt;

&lt;p&gt;📝 &lt;strong&gt;Understanding is more important than speed&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Some topics require deep reading and comprehension. You can't shortcut learning Kubernetes fundamentals with a tool.&lt;/p&gt;

&lt;p&gt;📝 &lt;strong&gt;The audience is already expert-level&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
If you're writing for platform engineers who live in Kubernetes daily, they might prefer full control over guided configuration.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Sweet Spot: Integration
&lt;/h3&gt;

&lt;p&gt;The most powerful approach isn't "tool OR documentation"—it's &lt;strong&gt;documentation-integrated tooling&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tools provide fast, guided paths to success for beginners&lt;/li&gt;
&lt;li&gt;Documentation provides depth, context, and customization knowledge for those who need it&lt;/li&gt;
&lt;li&gt;Each links to the other contextually&lt;/li&gt;
&lt;li&gt;Users choose their own learning path based on their experience level and timeline&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I'm a technical writer who built a tool. That perspective helped me keep documentation at the center—the tool exists to make the docs more actionable, not to replace them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Looking Forward
&lt;/h2&gt;

&lt;p&gt;The EKS Configurator changed how I think about developer experience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Short-term:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GKE and AKS configurators&lt;/strong&gt;: Users deploy Weaviate on other Kubernetes platforms too&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Terraform output&lt;/strong&gt;: The most-requested feature from users managing infrastructure as code&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Advanced customization options&lt;/strong&gt;: More granular control for power users while keeping defaults simple&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Long-term:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Interactive documentation&lt;/strong&gt;: Embedding configurator-style tools directly in our docs site&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Feedback loops&lt;/strong&gt;: Using tool usage patterns to identify documentation gaps and unclear concepts&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-tool experiences&lt;/strong&gt;: Connecting the EKS Configurator with monitoring setup tools, backup configuration tools, etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The bigger vision:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I want to create a full deployment journey: sizing tool → infrastructure configurator → application deployment → monitoring setup → backup configuration. Each tool focused on one problem, but all working together seamlessly.&lt;/p&gt;

&lt;p&gt;Some think a technical writer's job is to explain things clearly. But I was a developer first, and I know it's to &lt;strong&gt;remove friction from understanding and implementation&lt;/strong&gt;. Sometimes that's documentation. Sometimes that's tooling. Often, it's both working together.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Some concepts are better shown through tools than explained in text&lt;/strong&gt; - Complex, interdependent configurations benefit from interactive guidance&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Tools and documentation should be tightly integrated&lt;/strong&gt; - They're not competitors; they're complementary experiences&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Interactive experiences reduce cognitive load for complex topics&lt;/strong&gt; - Guided decision-making beats exhaustive reference material for onboarding&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;User feedback should drive tool and documentation evolution&lt;/strong&gt; - Monitor both support tickets and usage patterns to identify friction&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Know when to build vs. when to write&lt;/strong&gt; - Not every topic needs a tool, but some topics desperately do&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Try It Yourself
&lt;/h2&gt;

&lt;p&gt;Explore the &lt;a href="https://k8s-config-nzfndvmnwxppa6zegwocxm.streamlit.app/" rel="noopener noreferrer"&gt;EKS Configurator&lt;/a&gt; and see how interactive tooling complements documentation. We'd love to hear how you're bridging docs and tools in your organization.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Questions? Feedback?&lt;/strong&gt; Share your thoughts on building better developer experiences at the intersection of documentation and tooling.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Have you built tools to complement your documentation? What worked, what didn't, and what would you do differently? Let's continue the conversation.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>tooling</category>
      <category>devops</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Documentation Is a Decision System, Not a Knowledge Base</title>
      <dc:creator>DanielleWashington</dc:creator>
      <pubDate>Sun, 04 Jan 2026 21:33:57 +0000</pubDate>
      <link>https://forem.com/daniellewashington/documentation-is-a-decision-system-not-a-knowledge-base-4139</link>
      <guid>https://forem.com/daniellewashington/documentation-is-a-decision-system-not-a-knowledge-base-4139</guid>
      <description>&lt;p&gt;Think about the last time you stood in front of a restaurant menu with forty items. &lt;/p&gt;

&lt;p&gt;You knew what each dish was. The descriptions were thorough. Everything was explained. But you still couldn't decide what to order. Too many options, not enough guidance. You wanted someone to just tell you what's good.&lt;/p&gt;

&lt;p&gt;That's what your documentation does to users.&lt;/p&gt;

&lt;p&gt;We've built encyclopedias when people need guides. We've cataloged every possible path when what they really need is someone to say: "Start here. This way works."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A note on context:&lt;/strong&gt; This exploration comes from years of documenting cloud and AI-native infrastructure and developer tools, where technical decisions have real consequences. The patterns I'm sharing translate across domains, but how you apply them will depend on your specific context (regulatory requirements, user diversity, organizational politics). Take what resonates. Adapt what doesn't.&lt;/p&gt;

&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;p&gt;Documentation should help users make decisions, not just transfer knowledge.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Core ideas:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Users arrive seeking confidence to act, not information to absorb&lt;/li&gt;
&lt;li&gt;Map the decision points in your users' journey (Day 0, Day 1, Day 2)&lt;/li&gt;
&lt;li&gt;Provide recommendations where you can, frameworks where you can't&lt;/li&gt;
&lt;li&gt;Decision-oriented guides complement comprehensive reference material&lt;/li&gt;
&lt;li&gt;This approach scales beyond team constraints&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Read on for the framework&lt;/strong&gt; ↓&lt;/p&gt;




&lt;h2&gt;
  
  
  The Library Paradox
&lt;/h2&gt;

&lt;p&gt;Imagine a library where every book exists but there's no card catalog. No librarian. No categorization system. Just millions of books and a sign that says "Everything you need is here."&lt;/p&gt;

&lt;p&gt;Technically true. Practically useless.&lt;/p&gt;

&lt;p&gt;This is what comprehensive documentation without decision architecture looks like. All the information exists. Users just can't find their way to the answer that matters for their specific situation at this specific moment.&lt;/p&gt;

&lt;p&gt;The problem isn't missing information. The problem is missing navigation.&lt;/p&gt;

&lt;p&gt;A developer opens your docs to integrate an API. The authentication section lists three methods: OAuth2, API keys, JWT tokens. Each is thoroughly explained. Security considerations are documented. Implementation complexity is clear.&lt;/p&gt;

&lt;p&gt;The developer closes the tab.&lt;/p&gt;

&lt;p&gt;Not because they didn't understand the options. Because they couldn't decide which option was right for them. The cognitive load of evaluation exceeded their available decision-making energy.&lt;/p&gt;

&lt;p&gt;They went looking for a simpler product. One with opinions.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Users Actually Come Looking For
&lt;/h2&gt;

&lt;p&gt;When someone arrives at your documentation, they're usually standing at a crossroads.&lt;/p&gt;

&lt;p&gt;"Should I use deployment pattern A or B?"&lt;br&gt;
"Is this even the right tool for what I'm trying to do?"&lt;br&gt;
"What breaks if I choose wrong?"&lt;/p&gt;

&lt;p&gt;These aren't requests for information. They're requests for navigation.&lt;/p&gt;

&lt;p&gt;Users don't want to know every possible path through the forest. They want to know: which path gets me where I'm going? What do I need to watch out for? When should I take the detour?&lt;/p&gt;

&lt;p&gt;The difference between knowledge and wisdom. Knowledge is understanding the map. Wisdom is knowing which route to take given the weather, your skill level, and how much daylight you have left.&lt;/p&gt;

&lt;p&gt;Most documentation stops at knowledge. The wisdom layer is missing.&lt;/p&gt;

&lt;p&gt;💭 &lt;strong&gt;What crossroads do your users stand at? Where do they get stuck choosing between paths?&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The Architecture of Decision
&lt;/h2&gt;

&lt;p&gt;Here's how I think about mapping the decision landscape:&lt;/p&gt;

&lt;h3&gt;
  
  
  Day 0: Before the Journey Begins
&lt;/h3&gt;

&lt;p&gt;Before users commit to your tool, they need to know if this path makes sense for them.&lt;/p&gt;

&lt;p&gt;"Is this right for my use case?"&lt;br&gt;
"What am I signing up for?"&lt;br&gt;
"What does success look like on the other side?"&lt;/p&gt;

&lt;p&gt;Think of this as the moment before you buy hiking boots. You need to know if you're going on day hikes or through-hiking the Appalachian Trail. The boots you need are different.&lt;/p&gt;

&lt;h3&gt;
  
  
  Day 1: The First Steps
&lt;/h3&gt;

&lt;p&gt;Users have committed. Now they need to get from zero to working state without getting lost.&lt;/p&gt;

&lt;p&gt;"Which starting configuration matches my constraints?"&lt;br&gt;
"What's the fastest path to something working?"&lt;br&gt;
"What decisions can I defer until later?"&lt;/p&gt;

&lt;p&gt;This is base camp. Get the tent up. Get oriented. Don't try to summit on your first day.&lt;/p&gt;

&lt;h3&gt;
  
  
  Day 2: The Long Game
&lt;/h3&gt;

&lt;p&gt;Now they're running in production. Different decisions matter.&lt;/p&gt;

&lt;p&gt;"How do I troubleshoot when things go wrong?"&lt;br&gt;
"When should I scale up? What's the trigger?"&lt;br&gt;
"What actually matters for observability?"&lt;/p&gt;

&lt;p&gt;This is expedition mode. You need to read the terrain, adjust to conditions, make judgment calls based on what you're seeing.&lt;/p&gt;

&lt;p&gt;Each phase has its own decision architecture. Map these explicitly, and your documentation transforms from an information dump into a navigation system.&lt;/p&gt;
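&lt;p&gt;As a rough illustration, phase mapping can even be partially automated with a keyword heuristic. The sketch below is a hypothetical classifier, not any real tool's logic; the signal lists are assumptions for demonstration only:&lt;/p&gt;

```python
# Hypothetical sketch: classify a doc page by decision phase
# (Day 0 / Day 1 / Day 2) using keyword signals. The signal lists
# below are illustrative assumptions, not a validated taxonomy.

PHASE_SIGNALS = {
    "day0": ["is this right for", "use case", "why choose", "comparison"],
    "day1": ["getting started", "install", "quickstart", "first deployment"],
    "day2": ["troubleshoot", "scaling", "observability", "monitoring"],
}

def classify_phase(text):
    """Return the phase whose signals appear most often, if any."""
    lowered = text.lower()
    counts = {
        phase: sum(lowered.count(kw) for kw in keywords)
        for phase, keywords in PHASE_SIGNALS.items()
    }
    best = max(counts, key=counts.get)
    return best if counts[best] else "unclassified"

print(classify_phase("Quickstart: install the CLI for your first deployment."))
```

&lt;p&gt;In practice you'd calibrate the signals against pages you've already labeled by hand; a heuristic like this only surfaces candidates for human review.&lt;/p&gt;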




&lt;h2&gt;
  
  
  What Decision-Oriented Documentation Actually Looks Like
&lt;/h2&gt;

&lt;p&gt;Let me show you the difference in practice.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Traditional approach (the encyclopedia):&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Our platform supports three deployment models: single-node, clustered, and distributed. Single-node deployment uses a monolithic architecture with all components running in a single process. Clustered deployment distributes components across multiple nodes with leader election. Distributed deployment uses a service mesh architecture with independent scaling of each component."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Accurate. Complete. Useless for making a decision.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Decision-oriented approach (the guide):&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Start with single-node deployment unless you need high availability. Single-node is simpler to operate and sufficient for most production workloads under 10,000 requests/minute. Move to clustered deployment when you need zero-downtime upgrades or your workload exceeds single-node capacity. Distributed deployment is for organizations running at significant scale (100,000+ requests/minute) or with complex compliance requirements. Each step up adds operational complexity. Don't optimize prematurely."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Same information at the core. Different purpose. One explains what exists. The other tells you where to start and when to change course.&lt;/p&gt;

&lt;p&gt;Think of it like the difference between a map and turn-by-turn directions. Both are useful. But when you're driving in unfamiliar territory at night in the rain, you want directions, not a map.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The important nuance:&lt;/strong&gt; This works when there's a legitimate "good default path." When your product serves radically different contexts (enterprise vs. startup, compliance-heavy vs. speed-focused, geographic constraints), you need decision trees, not single recommendations. The goal is reducing friction, not eliminating necessary complexity.&lt;/p&gt;
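&lt;p&gt;To make the contrast concrete, here's the decision-oriented guidance above encoded as a tiny decision tree. This is an illustrative sketch: the function name and inputs are hypothetical, and the thresholds simply mirror the example recommendation (10,000 and 100,000 requests/minute):&lt;/p&gt;

```python
# Illustrative sketch: the example deployment recommendation as a
# decision tree. Thresholds mirror the guidance above; the inputs
# and naming are hypothetical.

def recommend_deployment(req_per_min, needs_zero_downtime=False,
                         complex_compliance=False):
    # Significant scale or complex compliance: distributed.
    if complex_compliance or req_per_min >= 100_000:
        return "distributed"
    # Zero-downtime upgrades or beyond single-node capacity: clustered.
    if needs_zero_downtime or req_per_min >= 10_000:
        return "clustered"
    # Default: the simplest thing that works.
    return "single-node"

print(recommend_deployment(2_500))  # -> single-node
```

&lt;p&gt;When contexts diverge too much for a single recommendation, the branches themselves become the documentation: each condition is a decision criterion you can show the reader.&lt;/p&gt;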




&lt;h2&gt;
  
  
  The Confidence Gap
&lt;/h2&gt;

&lt;p&gt;Here's what I've observed watching developers interact with documentation:&lt;/p&gt;

&lt;p&gt;They don't lack intelligence. They don't lack technical skill. What they lack is confidence that they're making the right choice given their constraints.&lt;/p&gt;

&lt;p&gt;Confidence comes from:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Clear defaults&lt;/strong&gt; that work for most situations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Visible tradeoffs&lt;/strong&gt; so you understand what you're giving up&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consequence clarity&lt;/strong&gt; so you know what happens next&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Decision criteria&lt;/strong&gt; when the answer genuinely depends on context&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When users close your docs and immediately ask in Slack, "Should I use X or Y?" you haven't failed to explain. You've failed to provide navigational confidence.&lt;/p&gt;

&lt;p&gt;They found the information. They couldn't convert it to action.&lt;/p&gt;

&lt;p&gt;This is the gap that kills adoption. Fix it, and you've eliminated a major friction point in your user's journey.&lt;/p&gt;

&lt;p&gt;💭 &lt;strong&gt;Have you watched users struggle with this in your community? What decisions trip them up?&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Building Documentation That Scales Like a System
&lt;/h2&gt;

&lt;p&gt;Shifting from knowledge transfer to decision architecture requires rethinking how documentation works.&lt;/p&gt;

&lt;h3&gt;
  
  
  Start with the Crossroads
&lt;/h3&gt;

&lt;p&gt;Before writing anything, map every significant choice point in the user journey. Where do they need to make decisions? What are the implications of those decisions?&lt;/p&gt;

&lt;p&gt;Don't guess. Don't assume. Map it based on support tickets, community questions, user research, telemetry. Find the actual friction points.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reality check for legacy systems:&lt;/strong&gt; You can't retrofit everything at once. Start with high-traffic pages or new features. Build the decision layer incrementally. Document the pattern, then expand it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Know When to Guide, When to Provide Tools
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Be prescriptive when:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Best practices are established in your domain&lt;/li&gt;
&lt;li&gt;You have data on what most users choose&lt;/li&gt;
&lt;li&gt;The "wrong" choice has manageable consequences&lt;/li&gt;
&lt;li&gt;Your product embeds sensible defaults&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Provide frameworks when:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Multiple approaches are legitimately valid&lt;/li&gt;
&lt;li&gt;Context varies dramatically (regulatory, budget, scale)&lt;/li&gt;
&lt;li&gt;Recommendations could create liability&lt;/li&gt;
&lt;li&gt;The complexity is inherent, not accidental&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Think of it like teaching someone to cook. For basic techniques, you give instructions: "Sear the meat at high heat for 2 minutes per side." For flavor development, you give frameworks: "Taste as you go. Add acid if it's flat, fat if it's harsh, salt if it's bland."&lt;/p&gt;

&lt;p&gt;The goal isn't prescription for its own sake. It's reducing cognitive load where you can, and providing decision tools where you can't.&lt;/p&gt;

&lt;h3&gt;
  
  
  Surface What Happens Next
&lt;/h3&gt;

&lt;p&gt;Users need to understand consequences before they commit to a path.&lt;/p&gt;

&lt;p&gt;"If I choose single-node deployment, what's my migration path when I need to scale?"&lt;br&gt;
"If I pick OAuth2, what's the ongoing maintenance burden?"&lt;br&gt;
"If I defer this decision, what's the cost of changing later?"&lt;/p&gt;

&lt;p&gt;Make the future visible. Let users see around corners.&lt;/p&gt;

&lt;h3&gt;
  
  
  Design for Contributors at Scale
&lt;/h3&gt;

&lt;p&gt;When hundreds of engineers contribute to your docs, decision-oriented patterns give them scaffolding for writing better first drafts. They understand what guidance to provide, not just what features to explain.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The real talk:&lt;/strong&gt; This still requires investment in templates, contributor guides, review processes. At a 200:1 developer-to-writer ratio, you're always triaging. Decision architecture helps, but it doesn't solve chronic understaffing. Advocate for resources where you can.&lt;/p&gt;

&lt;h3&gt;
  
  
  Sometimes the Fix Is the Product, Not the Docs
&lt;/h3&gt;

&lt;p&gt;If users constantly struggle to choose between three authentication methods, maybe the problem isn't documentation. Maybe your SDK should pick the secure default and let advanced users override it.&lt;/p&gt;

&lt;p&gt;Documentation can guide decisions. It can't fix bad product design. Know the difference.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Two-Layer System
&lt;/h2&gt;

&lt;p&gt;Here's what this framework does NOT mean: throw away comprehensive reference documentation.&lt;/p&gt;

&lt;p&gt;Think of it like a building with two floors:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ground floor: Decision-oriented guides&lt;/strong&gt; (getting started, best practices, troubleshooting)&lt;br&gt;
This is where people enter. It's optimized for navigation. Clear signage. Recommended paths. Guidance for common journeys.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Second floor: Comprehensive reference&lt;/strong&gt; (API docs, configuration parameters, schemas)&lt;br&gt;
This is where people go for precision. It's optimized for completeness. Exhaustive coverage. Neutral, factual tone. Required for complex work.&lt;/p&gt;

&lt;p&gt;The mistake is treating the ground floor like the second floor. Or forcing the second floor to have ground floor characteristics when precision matters more than guidance.&lt;/p&gt;

&lt;p&gt;Build the navigation layer on top of your reference foundation. Serve both purposes well.&lt;/p&gt;




&lt;h2&gt;
  
  
  Measuring What Actually Matters
&lt;/h2&gt;

&lt;p&gt;Documentation succeeds when users can move from uncertainty to confident action with minimal friction.&lt;/p&gt;

&lt;p&gt;The metrics that capture this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Support tickets asking "should I use X or Y?" after reading docs&lt;/li&gt;
&lt;li&gt;Community questions about choosing between documented options&lt;/li&gt;
&lt;li&gt;User testing: can people confidently select a path?&lt;/li&gt;
&lt;li&gt;Time-to-first-deployment or time-to-production&lt;/li&gt;
&lt;li&gt;Post-implementation surveys: "Did you feel confident in your choices?"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Not whether every feature is documented. Not whether every edge case is explained. Whether users can act confidently based on what you've provided.&lt;/p&gt;

&lt;p&gt;This requires different skills than traditional technical writing. Decision architecture. Strategic guidance. Understanding how humans actually make choices under uncertainty.&lt;/p&gt;

&lt;p&gt;The best documentation doesn't just inform. It navigates.&lt;/p&gt;
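&lt;p&gt;The first two metrics are measurable today if you have ticket text. As a rough sketch (the patterns below are assumptions for demonstration, not a validated taxonomy), you can flag decision-type questions and track the rate over time:&lt;/p&gt;

```python
import re

# Hypothetical sketch: flag support questions of the form
# "should I use X or Y?". The patterns are illustrative only.

DECISION_PATTERNS = [
    re.compile(r"\bshould (i|we) (use|pick|choose)\b", re.IGNORECASE),
    re.compile(r"\bwhich (one|option|method|approach)\b", re.IGNORECASE),
]

def is_decision_question(ticket_text):
    """Return True if the ticket reads like a choose-between-options question."""
    return any(p.search(ticket_text) for p in DECISION_PATTERNS)

tickets = [
    "Should I use OAuth2 or API keys for a server-side integration?",
    "Getting a 500 error from the deploy endpoint.",
    "Which option is right for a single-region cluster?",
]
rate = sum(is_decision_question(t) for t in tickets) / len(tickets)
print(f"{rate:.0%} of sampled tickets are decision questions")  # -> 67%
```

&lt;p&gt;If that rate falls after you ship a decision-oriented guide, the navigation layer is doing its job.&lt;/p&gt;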




&lt;h2&gt;
  
  
  Where This Works (And Where It Needs Translation)
&lt;/h2&gt;

&lt;p&gt;This framework emerged from documenting developer tools and cloud infrastructure. It works particularly well for:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Product contexts:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tools where users make architectural decisions&lt;/li&gt;
&lt;li&gt;Platforms with multiple valid configuration paths&lt;/li&gt;
&lt;li&gt;Services where decisions have operational consequences&lt;/li&gt;
&lt;li&gt;Products targeting technical audiences&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Organizational contexts:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Small teams relative to product surface area&lt;/li&gt;
&lt;li&gt;Docs-as-code workflows with engineer contributions&lt;/li&gt;
&lt;li&gt;Organizations that can iterate based on user feedback&lt;/li&gt;
&lt;li&gt;Products with telemetry showing usage patterns&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Places where you'll adapt, not adopt wholesale:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you're documenting regulated systems (medical, financial, aerospace), compliance may require comprehensive coverage. Layer decision guides on top, but don't sacrifice required completeness.&lt;/p&gt;

&lt;p&gt;If your product genuinely serves wildly different contexts, focus on decision criteria and multiple paths rather than single recommendations.&lt;/p&gt;

&lt;p&gt;If you're in a highly political environment where every recommendation needs approval, start small. Prove value in one area before attempting systemic change.&lt;/p&gt;

&lt;p&gt;The principles translate. The implementation varies.&lt;/p&gt;




&lt;h2&gt;
  
  
  What This Unlocks
&lt;/h2&gt;

&lt;p&gt;When documentation successfully reduces decision friction, something changes.&lt;/p&gt;

&lt;p&gt;Users adopt faster. They implement with more confidence. They require less support. Your documentation scales beyond your team's capacity to personally guide each user.&lt;/p&gt;

&lt;p&gt;Whether you're working with a 10:1 ratio or 200:1, decision-oriented architecture lets you punch above your weight.&lt;/p&gt;

&lt;p&gt;It's not about writing more. It's about writing strategically for the decisions that actually matter in your users' journey.&lt;/p&gt;

&lt;p&gt;This is how documentation becomes a growth lever instead of a cost center.&lt;/p&gt;




&lt;h2&gt;
  
  
  Key Patterns to Remember
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Map the crossroads&lt;/strong&gt;: Identify Day 0, Day 1, Day 2 decision points&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Guide where you can, provide tools where you can't&lt;/strong&gt;: Know when to be prescriptive vs. when to offer frameworks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Make consequences visible&lt;/strong&gt;: Help users see around corners&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Layer navigation over reference&lt;/strong&gt;: Serve both purposes, optimized differently&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Measure decision confidence&lt;/strong&gt;: Track whether users can act, not just whether content exists&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Adapt to your context&lt;/strong&gt;: These principles translate, but implementation varies by domain&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  An Invitation to Explore
&lt;/h2&gt;

&lt;p&gt;Try this experiment this week:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Pick one feature in your docs&lt;/li&gt;
&lt;li&gt;Map every decision a user must make to use it successfully&lt;/li&gt;
&lt;li&gt;Ask: does your documentation explicitly guide each decision?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Share what you discover 👇&lt;/strong&gt; I'm particularly interested in hearing from folks in different domains (enterprise software, regulated industries, legacy systems). How do these patterns need to adapt for your context?&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Users don't come to documentation seeking knowledge. They come seeking confidence to act. The question is: are you providing it?&lt;/em&gt;&lt;/p&gt;

</description>
      <category>documentation</category>
      <category>devex</category>
      <category>productivity</category>
      <category>discuss</category>
    </item>
    <item>
      <title>From Code to Clarity: Embedding Technical Writers in Engineering Teams</title>
      <dc:creator>DanielleWashington</dc:creator>
      <pubDate>Tue, 09 Jul 2024 01:24:24 +0000</pubDate>
      <link>https://forem.com/daniellewashington/from-code-to-clarity-embedding-technical-writers-in-engineering-teams-47gc</link>
      <guid>https://forem.com/daniellewashington/from-code-to-clarity-embedding-technical-writers-in-engineering-teams-47gc</guid>
      <description>&lt;p&gt;Engineering teams are the face of technological innovation, they focus on developing new products, improving existing systems, and solving complex technical challenges. But their success often hinges on clear and effective communication, both within the team and with external stakeholders. This is where technical writers play a crucial role. &lt;br&gt;
When the expertise of technical writers, and technical content teams, is leveraged, documentation quality is enhanced, communication is streamlined, and ultimately project outcomes are improved. You can think of technical writers as the engineering team’s secret weapon. Let’s explore how engineering teams can effectively utilize their technical communication teams to their advantage.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Role of Technical Writers in Engineering
&lt;/h3&gt;

&lt;p&gt;Technical writers specialize in crafting clear, concise, and accurate documentation that translates complex technical information into accessible content. Our work encompasses a wide range of documents, including, but not limited to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Onboarding guides&lt;/li&gt;
&lt;li&gt;User manuals&lt;/li&gt;
&lt;li&gt;Release notes&lt;/li&gt;
&lt;li&gt;Runbooks&lt;/li&gt;
&lt;li&gt;Technical guides&lt;/li&gt;
&lt;li&gt;API documentation&lt;/li&gt;
&lt;li&gt;Training materials&lt;/li&gt;
&lt;li&gt;System documentation&lt;/li&gt;
&lt;li&gt;Process and procedure documentation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By producing high-quality documentation, technical writers help ensure that all stakeholders—developers, end-users, and business partners—understand the technical aspects of a project or software. This clarity is vital for the successful implementation and adoption of engineering solutions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Benefits of Integrating Technical Writers into Engineering Teams
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Enhanced Documentation Quality
&lt;/h4&gt;

&lt;p&gt;Engineering projects often involve intricate systems and complex procedures, and we bring a unique skill set: the ability to understand technical details and translate them into user-friendly content. We specialize in demystifying technical concepts without resorting to jargon, all while adhering to specific style guides. Have you ever had an engineer attempt to explain a complex, technical feature?&lt;/p&gt;

&lt;h4&gt;
  
  
  Improved Communication
&lt;/h4&gt;

&lt;p&gt;Clear documentation facilitates better communication within the engineering team and with external parties. As technical writers, we act as intermediaries who convey technical information in a manner that is understandable to non-technical stakeholders. This prevents misunderstandings and ensures that everyone has a clear picture of the work completed. For example, release notes can confuse the uninitiated with unfamiliar terms and phrases. A technical writer can craft release notes that accurately describe a product’s new changes and features while ensuring they are “executive-ready.” &lt;/p&gt;

&lt;h4&gt;
  
  
  Increased Efficiency
&lt;/h4&gt;

&lt;p&gt;With technical writers handling the documentation, engineers can focus more on their core tasks—designing, developing, and testing. This division of labor ensures that engineers are not bogged down by writing tasks and can contribute more effectively to the project’s technical aspects.&lt;/p&gt;

&lt;h4&gt;
  
  
  Consistency Across Documentation
&lt;/h4&gt;

&lt;p&gt;Technical writers ensure that all project documentation adheres to a consistent style and format. This consistency is crucial for maintaining a professional standard and ensuring that documents are easy to navigate and use. It also helps create a cohesive brand image and enhances the user experience. Imagine Engineer A writing a document in Confluence while Engineer B creates the same document in GitHub using Markdown. A dedicated technical documentation team ensures that these occurrences are few and far between. &lt;/p&gt;

&lt;h3&gt;
  
  
  Fostering Collaboration with the Technical Writers on Your Team
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Early Involvement
&lt;/h4&gt;

&lt;p&gt;Technical writers need to be involved from the outset of a project. It may seem trivial to have writers attend kick-off and planning meetings, but it is imperative that the documentation team be in the room. This allows the team to gain a comprehensive understanding of project goals, technical requirements, key milestones, and any existing pain points that a simple runbook or an addition to existing documentation could resolve. Early involvement ensures that documentation is developed concurrently with the project, rather than as an afterthought. &lt;/p&gt;

&lt;h4&gt;
  
  
  Regular Communication
&lt;/h4&gt;

&lt;p&gt;Maintaining regular communication between technical writers and engineers is essential. This can be facilitated through regular meetings, collaborative tools, and shared documentation platforms. Regular updates help technical writers stay informed about project developments and give them the chance to ask questions and clarify technical details as needed. Again, including writers in weekly status meetings means pain points are quickly surfaced and addressed. &lt;/p&gt;

&lt;h4&gt;
  
  
  Access to Subject Matter Experts
&lt;/h4&gt;

&lt;p&gt;Technical writers need direct access to subject matter experts (SMEs) within the engineering team; in fact, when a user story is created for a piece of documentation, an SME should be named on it. SMEs can provide detailed explanations and answer specific questions that help technical writers create accurate and detailed documentation. This collaboration ensures that the documentation is technically sound and comprehensive.&lt;/p&gt;

&lt;p&gt;In conclusion, technical writers are invaluable assets to engineering teams, providing expertise in creating clear, concise, and accurate documentation. By integrating technical writers into their teams, engineers can enhance documentation quality, improve communication, and increase overall project efficiency. Implementing best practices for collaboration ensures that technical writers can effectively contribute to the success of engineering projects, ultimately leading to better outcomes and more satisfied stakeholders.&lt;/p&gt;

</description>
      <category>devrel</category>
      <category>devops</category>
      <category>career</category>
    </item>
  </channel>
</rss>
