<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Kairi</title>
    <description>The latest articles on Forem by Kairi (@kairi_outputs).</description>
    <link>https://forem.com/kairi_outputs</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3887146%2F841fe585-4cfb-474c-afb5-fc1a2e48ea2c.png</url>
      <title>Forem: Kairi</title>
      <link>https://forem.com/kairi_outputs</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/kairi_outputs"/>
    <language>en</language>
    <item>
      <title>Four Questions I Now Ask Before Building Anything With AI</title>
      <dc:creator>Kairi</dc:creator>
      <pubDate>Fri, 24 Apr 2026 00:35:41 +0000</pubDate>
      <link>https://forem.com/kairi_outputs/four-questions-i-now-ask-before-building-anything-with-ai-b5h</link>
      <guid>https://forem.com/kairi_outputs/four-questions-i-now-ask-before-building-anything-with-ai-b5h</guid>
      <description>&lt;p&gt;I've written two retrospectives in the past week. &lt;a href="https://dev.to/kairi_outputs/i-let-an-ai-agent-run-a-product-launch-for-a-week-the-product-was-fine-nobody-needed-it-7cn"&gt;One&lt;/a&gt; was the story of letting an AI agent run a product launch and watching it ship something the world didn't need. &lt;a href="https://dev.to/kairi_outputs/why-template-as-product-is-structurally-hard-in-2026-2bgm-temp-slug-4028068"&gt;The other&lt;/a&gt; was the category analysis I did after. Both posts are about the same failure from two angles — the personal and the structural.&lt;/p&gt;

&lt;p&gt;This one is the practical layer. After writing those, I changed how I work. Specifically, I added four questions to the beginning of anything I build now, whether solo or with AI.&lt;/p&gt;

&lt;p&gt;They're not a process. They're friction. They're what I say out loud before I let the AI start, because every one of them is something the AI will not raise on its own and I will forget to ask if I'm excited.&lt;/p&gt;

&lt;h2&gt;The place where I now hesitate&lt;/h2&gt;

&lt;p&gt;Three years ago, the hesitation before building anything was technical. &lt;em&gt;Can I actually build this? How long will it take? What will break?&lt;/em&gt; AI has mostly removed that hesitation. Execution got cheap.&lt;/p&gt;

&lt;p&gt;What's left is a different shape of hesitation. The four questions below are what fills it. They're deliberately slow. They're supposed to be slow.&lt;/p&gt;




&lt;h2&gt;Question 1 — Whose heart does this move?&lt;/h2&gt;

&lt;p&gt;When I ask the AI "is there demand for this?", what comes back is a market analysis. Competitors, price points, comparable launches, conversion funnels. It sounds like an answer, and it scans as evidence, and it is not actually answering the question I need answered.&lt;/p&gt;

&lt;p&gt;The question I need answered is: &lt;em&gt;if I picture one specific person going about their Tuesday afternoon, and the product appears in front of them, does their heart move?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;There are four flavors of heart movement that get people to pull out a credit card:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Pain relief&lt;/strong&gt; — a thing they were doing the hard way, now easier&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Desire fulfillment&lt;/strong&gt; — a thing they wanted but couldn't get&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Delight&lt;/strong&gt; — something useless that makes them laugh, or beautiful enough to have just because&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Belonging&lt;/strong&gt; — a signal that says "this is me, this is my tribe"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A product with none of those is an artifact. Well-made, inert, not bought.&lt;/p&gt;

&lt;p&gt;I now force myself to write down which flavor, for a specific imagined person, the product delivers. If I can't name a flavor cleanly, the product is not ready to build.&lt;/p&gt;

&lt;h2&gt;Question 2 — If this customer has my tools, why do they still need me?&lt;/h2&gt;

&lt;p&gt;This one came out of a shower. I had spent a week on a product and never asked it. It would have saved me the week.&lt;/p&gt;

&lt;p&gt;The setup: I was building something for technical people. My customer was, by construction, technically capable. The AI had not flagged this, because it had reasoned about the market as if it were not also in the market.&lt;/p&gt;

&lt;p&gt;The question makes the AI's blind spot visible: &lt;em&gt;the customer has the same tools I'm using to build this. What's left that only I can offer?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The honest answers, in 2026, are a short list:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;A specific niche the AI defaults won't cover&lt;/strong&gt; (not "a landing page," but "a landing page for a specific compliance regime the model hallucinates on")&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Integration with something that costs real effort to replicate&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;An audience that trusts me to make this decision for them&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ongoing service or community&lt;/strong&gt; — the value is continuous, not a one-shot artifact&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If my answer is none of those, I'm building something the customer could make themselves in an afternoon, and the moat is fictional.&lt;/p&gt;

&lt;h2&gt;Question 3 — What is the AI refusing to surface on its own?&lt;/h2&gt;

&lt;p&gt;The AI will evaluate almost anything I put in front of it. The AI will not, in 2026, reliably initiate the question that breaks my plan.&lt;/p&gt;

&lt;p&gt;This isn't a flaw I'm complaining about. It's a shape of tool I need to understand. The agent is fast, competent, compliant. It answers what I ask, exactly as I ask it. What it won't do is walk up and say &lt;em&gt;hey, the thing you're not asking me is this.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I've started adding a specific prompt after the research phase: &lt;em&gt;"What am I not asking you that I should be?"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The answer is often a shrug. Sometimes it's a generic list of risks. Occasionally — and this is the whole reason the prompt exists — it surfaces one specific thing I hadn't considered. That one specific thing is worth the price of asking.&lt;/p&gt;

&lt;p&gt;This prompt doesn't replace judgment. It's a lightweight ritual that slightly raises the odds the AI will do what it won't do on its own.&lt;/p&gt;
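
&lt;p&gt;For the curious, here is what "add the prompt after the research phase" looks like when scripted rather than typed by hand. This is a hypothetical sketch: &lt;code&gt;ask&lt;/code&gt; stands in for whatever function sends a prompt to your agent and returns its reply, not any real API.&lt;/p&gt;

```python
# Hypothetical sketch: force the blind-spot question to run at the end of
# every research session. `ask` is a stand-in for whatever function sends
# a prompt to your agent and returns its reply; it is not a real API.

BLINDSPOT_PROMPT = "What am I not asking you that I should be?"

def research_session(ask, questions):
    """Run the planned questions, then always append the blind-spot prompt."""
    answers = {q: ask(q) for q in questions}
    # The ritual step: the one question the agent won't initiate on its own.
    answers[BLINDSPOT_PROMPT] = ask(BLINDSPOT_PROMPT)
    return answers
```

&lt;p&gt;The point of making it a function is exactly the point of making it a ritual: it runs whether or not I remember it.&lt;/p&gt;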

&lt;h2&gt;Question 4 — Am I reasoning about a market, or about a person?&lt;/h2&gt;

&lt;p&gt;Market reasoning is easy now. AI is good at it. It produces clean graphs, reasonable assumptions, defensible conclusions.&lt;/p&gt;

&lt;p&gt;Person reasoning is still slow. Still human. You have to picture one specific human, give them a name if you need to, and ask what they would feel.&lt;/p&gt;

&lt;p&gt;The test I use: &lt;em&gt;can I name one actual person — real or imagined with enough texture — who would react a specific way to this product?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If yes, I can describe their Tuesday, their frustration, their reason to click, their moment of "oh, finally." The product is solving for someone.&lt;/p&gt;

&lt;p&gt;If no — if I can only describe "developers" or "indie founders" or "people who want a landing page" — I'm still reasoning about a market. Market-level reasoning doesn't move money. It moves decks.&lt;/p&gt;

&lt;p&gt;I don't build from decks.&lt;/p&gt;




&lt;h2&gt;The meta-lesson&lt;/h2&gt;

&lt;p&gt;The four questions don't replace judgment. They surface where judgment is missing.&lt;/p&gt;

&lt;p&gt;It's tempting to turn them into a checklist and fill out the boxes quickly. But we all know what happens when you make something a checklist: it stops being a ritual and starts being a formality. The boxes get checked, the judgment gets skipped, and you ship anyway.&lt;/p&gt;

&lt;p&gt;The way I use them now is slow. One question, one sitting. Ten minutes each, not checkbox minutes. I usually get through one or two in a morning, then sleep on the rest. If the answer to any of them is "I'll figure that out later," I've just learned that the product isn't ready yet, and the AI can sit.&lt;/p&gt;

&lt;p&gt;The AI cannot tell me to slow down. That's still my job.&lt;/p&gt;

&lt;p&gt;The black cat can tell me to slow down, actually, because if I type for more than 45 minutes she sighs and sits on the keyboard. But it's ambiguous whether that counts as judgment or just a nap preference.&lt;/p&gt;

&lt;h2&gt;What changed for me, concretely&lt;/h2&gt;

&lt;p&gt;Before these four questions, my default was: if I can build it, and the market data looks okay, I build it. The calendar drove the decisions.&lt;/p&gt;

&lt;p&gt;Now the default is: the AI can build it, the market data always looks okay, and the interesting question is whether I should. That question lives in a part of my brain that doesn't get faster just because the tooling got faster.&lt;/p&gt;

&lt;p&gt;Shipping happens later than it used to. When it does happen, I think it'll happen into better ground.&lt;/p&gt;

&lt;p&gt;I haven't earned the right to claim that yet — the product that'll test this hasn't landed. What I've earned is the honest list of what I'd ask myself if I were starting from zero today.&lt;/p&gt;

&lt;p&gt;Back to the quiet work.&lt;/p&gt;




&lt;p&gt;Three cups of tea. Still tired. The cats continue to sigh at my function names.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>indiehackers</category>
      <category>productivity</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Why Template-as-Product Is Structurally Hard in 2026</title>
      <dc:creator>Kairi</dc:creator>
      <pubDate>Thu, 23 Apr 2026 13:45:23 +0000</pubDate>
      <link>https://forem.com/kairi_outputs/why-template-as-product-is-structurally-hard-in-2026-koo</link>
      <guid>https://forem.com/kairi_outputs/why-template-as-product-is-structurally-hard-in-2026-koo</guid>
      <description>&lt;p&gt;This is a follow-up to &lt;a href="https://dev.to/kairi_outputs/i-let-an-ai-agent-run-a-product-launch-for-a-week-the-product-was-fine-nobody-needed-it-7cn"&gt;an earlier post&lt;/a&gt; about letting an AI agent run a product launch for me. That post was the story of what happened. This one is the category analysis I finished afterward.&lt;/p&gt;

&lt;p&gt;Short version: in 2026, shipping a general-purpose template as a standalone product is structurally hard for a solo operator. Not because the templates aren't good, but because the specific customer the category was designed for no longer has the same reason to buy.&lt;/p&gt;

&lt;p&gt;Here's the long version, with the specific conditions that make this a category-level pattern, not a me-failing-at-distribution pattern.&lt;/p&gt;

&lt;h2&gt;The template economics, pre-AI&lt;/h2&gt;

&lt;p&gt;Templates — Next.js starters, SaaS landing pages, admin dashboards, portfolio kits — were a real business category for years. The economics were:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A developer's time to build a polished landing page from scratch: 20-40 hours&lt;/li&gt;
&lt;li&gt;Their hourly opportunity cost: $50-150&lt;/li&gt;
&lt;li&gt;A template that gets them 80% of the way there: $49-99&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The arithmetic worked. A $79 template that saved 20 hours at $75/hour was a rational purchase: $1,500 of implied value against $79 of price. Categories built on that arithmetic produced real businesses: Tailwind Plus, ThemeForest, a long tail of indie-operated shops.&lt;/p&gt;
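
&lt;p&gt;That arithmetic is simple enough to write down as a break-even check. The pre-AI numbers come from the bullets above; the post-AI "hours saved" figure is my own rough assumption, not a measured one.&lt;/p&gt;

```python
# Break-even check for the template purchase decision described above.

def implied_value(hours_saved, hourly_rate):
    """Dollar value of the time a template saves the buyer."""
    return hours_saved * hourly_rate

def rational_purchase(price, hours_saved, hourly_rate):
    """A purchase is rational when implied value exceeds the price."""
    return implied_value(hours_saved, hourly_rate) > price

# Pre-AI: a $79 template saving 20 hours at $75/hour.
implied_value(20, 75)             # 1500
rational_purchase(79, 20, 75)     # True

# 2026, assuming AI gets the buyer close enough that the template only
# saves about half an hour of prompting (my assumption, not data):
rational_purchase(79, 0.5, 75)    # 0.5 * 75 = 37.5, so False
```

&lt;p&gt;The function didn't change between the two calls. Only &lt;code&gt;hours_saved&lt;/code&gt; did.&lt;/p&gt;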

&lt;p&gt;The category's stability depended on one assumption: &lt;strong&gt;building a polished thing is expensive relative to buying a polished thing.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That assumption is gone.&lt;/p&gt;

&lt;h2&gt;What changed (the obvious part)&lt;/h2&gt;

&lt;p&gt;In 2024-2026, the cost of "shipping a polished thing" collapsed for anyone technical enough to operate a code editor. The specific shifts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;shadcn/ui published a full component library as free, copy-paste, MIT-licensed code&lt;/li&gt;
&lt;li&gt;v0 started turning text prompts into working React components&lt;/li&gt;
&lt;li&gt;Claude Code and Cursor made "write me a SaaS landing page with 5 sections" a single sentence&lt;/li&gt;
&lt;li&gt;Tailwind UI and Tailwind Plus kept shipping but moved toward comprehensive design systems, not standalone starters&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The 20-40 hour build window stopped being a real constraint. A competent developer can scaffold a reasonable landing page in an afternoon now, and a good one in a day. A non-developer can use a builder tool and get something acceptable in an hour.&lt;/p&gt;

&lt;p&gt;The math on a $79 template didn't change because of generosity. It changed because the alternative became free.&lt;/p&gt;

&lt;h2&gt;Who the category is supposed to be for, now&lt;/h2&gt;

&lt;p&gt;If building is nearly free for competent developers, who is the template actually &lt;em&gt;for&lt;/em&gt;?&lt;/p&gt;

&lt;p&gt;Let me walk the candidate buyers seriously, because this is where I got it wrong and I want to do it carefully.&lt;/p&gt;

&lt;h3&gt;Candidate 1: Developer who can't design&lt;/h3&gt;

&lt;p&gt;The classic template buyer. They can code, but they don't trust their own aesthetic judgment. They pay for design decisions that are already made.&lt;/p&gt;

&lt;p&gt;The 2026 version of this person: they can prompt Claude or v0 with "I want a clean SaaS landing page, indigo and pink accent, 5 sections, dark mode." They get something acceptable. If they don't like it, they regenerate or refine. The output is not as polished as a curated template — but it's close enough, and each iteration costs them five minutes.&lt;/p&gt;

&lt;p&gt;The template offered them better-than-AI-output quality. The AI output got close enough that "better" doesn't justify $79 for a one-time landing page.&lt;/p&gt;

&lt;p&gt;Verdict: this buyer can now self-serve, with a quality gap that's narrow and narrowing.&lt;/p&gt;

&lt;h3&gt;Candidate 2: Agency or freelancer building client work&lt;/h3&gt;

&lt;p&gt;They ship landing pages weekly for clients. They need a reusable base. A template at $99 against a $5,000 client project is a rounding error.&lt;/p&gt;

&lt;p&gt;The 2026 version: they built their own base from shadcn/ui last year. They've iterated it across ten projects. It's more customized to their client pipeline than any third-party template could be. When a new client comes in, they fork it in 30 seconds.&lt;/p&gt;

&lt;p&gt;For a new agency, yes — a template is a reasonable first move. But once they've been doing this for three months, they have their own. The template is a training wheel, not a long-term tool.&lt;/p&gt;

&lt;p&gt;Verdict: this buyer uses templates as a starter, then outgrows them within a quarter.&lt;/p&gt;

&lt;h3&gt;Candidate 3: Speed-first indie shipping an MVP&lt;/h3&gt;

&lt;p&gt;They want to validate an idea this weekend. A template that gets them to shipped by Monday is worth whatever it costs.&lt;/p&gt;

&lt;p&gt;The 2026 version: Cursor + Claude Code gets them to shipped by Sunday evening, writing code they partially understand and can iterate on. A third-party template gets them to shipped faster in the first hour but slower over the next week, because they have to learn the template's structure and opinions to modify it.&lt;/p&gt;

&lt;p&gt;Net, the total time is comparable, and the template adds a dependency they didn't want.&lt;/p&gt;

&lt;p&gt;Verdict: this buyer might buy a template in a specific moment, but their default is to generate fresh.&lt;/p&gt;

&lt;h3&gt;Candidate 4: Non-developer founder&lt;/h3&gt;

&lt;p&gt;They can't code. They need a landing page. They pay someone.&lt;/p&gt;

&lt;p&gt;The 2026 version: they use Webflow, Framer, Carrd, or a no-code site builder with an AI assist. Those tools are built for exactly this buyer and have deeper product investment than any indie template shop can match.&lt;/p&gt;

&lt;p&gt;Verdict: this buyer was never really the template customer anyway, but if they ever were, they're now served better by full no-code platforms.&lt;/p&gt;

&lt;h3&gt;Candidate 5: AI-averse developer&lt;/h3&gt;

&lt;p&gt;They reject AI tooling on principle. They'd rather write every line themselves. They might, in theory, buy a human-crafted template to avoid generating code with an LLM.&lt;/p&gt;

&lt;p&gt;The 2026 version: this developer is &lt;em&gt;also&lt;/em&gt; the developer who doesn't buy templates, because they see templates as a crutch that replaces the act of crafting. They want to write it themselves.&lt;/p&gt;

&lt;p&gt;Also: if they do open the template and notice patterns that look AI-assisted (well-placed comments, consistent utility function style, certain code smells), they get upset. Many indie templates in 2024-2026 are at least partially AI-assisted, even when not advertised as such. The buyer who rejects AI in their own code would reject the template they just bought once they saw it.&lt;/p&gt;

&lt;p&gt;Verdict: this buyer doesn't buy templates at all.&lt;/p&gt;

&lt;h2&gt;The pattern across all five&lt;/h2&gt;

&lt;p&gt;Every candidate buyer either:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Can now self-serve with AI (candidates 1-3)&lt;/li&gt;
&lt;li&gt;Was never really the template customer (candidate 4)&lt;/li&gt;
&lt;li&gt;Won't buy a template at all (candidate 5)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There isn't a sixth candidate hiding somewhere. If you walked up to a hundred developers or founders today and asked "did you buy a template in the last six months?", the answer would be "no" at a rate that would have been shocking three years ago.&lt;/p&gt;

&lt;p&gt;This isn't "templates are dying" hand-waving. It's a specific, checkable claim: &lt;strong&gt;the five buyers the category was designed for have each individually been disintermediated by AI tools, non-template-makers, or by their own evolved workflows.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;What this means for solo template shops&lt;/h2&gt;

&lt;p&gt;If you're running one right now, your options as I see them:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stay in the category but change the offer.&lt;/strong&gt; Templates alone don't carry the business. Templates + community + coaching + cohort might. Think MakerKit at $299-499 with Discord, office hours, continued updates. The template becomes the anchor of a subscription, not the product.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Go deeply niche.&lt;/strong&gt; A general Next.js + Tailwind template can't compete with AI-generated equivalents. A Next.js template specifically for LegalTech SaaS onboarding flows that implements GDPR-compliant email capture and DocuSign integration — that still has buyers, because the specificity exceeds what AI can one-shot. The niche has to be narrow enough that AI generation becomes an unreliable shortcut for that specific thing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Go free, use it as a lead magnet.&lt;/strong&gt; Open source the templates. Use them as traffic drivers into a newsletter, content channel, or higher-ticket offer. The templates become the marketing budget, not the revenue.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Close the shop and take the infrastructure with you.&lt;/strong&gt; Every script, every automation, every repeatable process you built running the shop is reusable. The templates might be a commodity; the plumbing around shipping products isn't.&lt;/p&gt;

&lt;p&gt;Different operators will pick differently. I'm writing this as someone who picked "close and take the infrastructure with me" for the specific experiment I ran. Your answer might be different. The point is that &lt;em&gt;staying in the category as an SKU-shop selling $49-99 templates&lt;/em&gt; probably isn't the answer for most people.&lt;/p&gt;

&lt;h2&gt;The obvious counter-argument&lt;/h2&gt;

&lt;p&gt;"But Tailwind Plus still sells at $299."&lt;/p&gt;

&lt;p&gt;Tailwind Plus isn't a template shop. It's the commercial extension of a developer brand with one of the deepest personal followings in frontend. Adam Wathan ships templates &lt;em&gt;because&lt;/em&gt; Tailwind, not &lt;em&gt;alongside&lt;/em&gt; Tailwind. The template isn't the product — the trust relationship is.&lt;/p&gt;

&lt;p&gt;Pieter Levels sells products because he's Pieter Levels. MakerKit sells because it's bundled with community and identity. The surviving template-adjacent businesses in 2026 aren't selling templates — they're selling the specific person or the specific system that the templates represent. The templates are a delivery mechanism for something else that compounds.&lt;/p&gt;

&lt;p&gt;A new template shop without an existing audience can't replicate that. The entry cost isn't "build good templates" — it's "have the audience first, then the templates are a gift to them."&lt;/p&gt;

&lt;h2&gt;The quieter observation&lt;/h2&gt;

&lt;p&gt;Most of the indie dev products I watched launch in 2023-2024 were some form of "a slightly nicer thing than what already exists, priced between $19 and $99." Templates are one instance; small SaaS utilities are another; Notion-template shops are another; Figma-plugin shops are another.&lt;/p&gt;

&lt;p&gt;The common thread was: building the thing was the barrier, and so selling the output of having cleared that barrier was a viable business.&lt;/p&gt;

&lt;p&gt;That common thread is fraying across categories, not just templates. If your product's core value is "someone did the work of building this nice thing," and the work of building this nice thing got near-free, your value proposition is structurally under pressure.&lt;/p&gt;

&lt;p&gt;This is not a hot-take about AI destroying indie businesses. AI isn't destroying them. It's specifically displacing the ones whose moat was build-cost. That's a surprisingly large slice of what indie shops actually sold in 2020-2023.&lt;/p&gt;

&lt;h2&gt;What's actually defensible in 2026&lt;/h2&gt;

&lt;p&gt;From where I sit, the businesses that still have a moat are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Audience-first&lt;/strong&gt;: the product is bought because of who the maker is. Newsletters, personal-brand SaaS, creator products.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deep integration&lt;/strong&gt;: the thing integrates into existing workflows in a way that AI generation can't one-shot. Zapier, analytics platforms, team tools.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Live service or community&lt;/strong&gt;: the value is continuous. Cohort courses, communities, managed services.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Specialty to a narrow vertical&lt;/strong&gt;: the category is narrow enough that training data is thin and AI defaults are wrong.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Templates as an SKU aren't any of those.&lt;/p&gt;

&lt;p&gt;If you're looking at a template shop idea right now, it's worth asking: what are you actually selling? If it's code, I'd argue the code is the least defensible part of the offer. If it's a relationship, or a community, or a specific hard-won niche — that's where the business is, and the templates are scaffolding.&lt;/p&gt;

&lt;h2&gt;What I'm doing instead&lt;/h2&gt;

&lt;p&gt;I'm keeping the infrastructure I built during the experiment — the render pipelines, the automation, the analytics scripts, the design system. Those are reusable and getting more valuable the more I use them.&lt;/p&gt;

&lt;p&gt;I'm investing the time I would've spent on templates into the audience layer instead. Writing more, replying more, building trust with a specific slice of the internet that actually knows I exist. If I ship another product eventually, it ships to those people, not to an empty browser.&lt;/p&gt;

&lt;p&gt;And I'm treating this essay as part of the investment. Failure retrospectives are content. Category analysis is content. Honest explanations of why things didn't work are content.&lt;/p&gt;

&lt;p&gt;The template didn't work. The lesson might.&lt;/p&gt;




&lt;p&gt;Three cups of tea. Still tired. Writing the postmortem anyway.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>indiehackers</category>
      <category>products</category>
      <category>webdev</category>
    </item>
    <item>
      <title>I Let an AI Agent Run a Product Launch for a Week. The Product Was Fine. Nobody Needed It.</title>
      <dc:creator>Kairi</dc:creator>
      <pubDate>Wed, 22 Apr 2026 23:45:09 +0000</pubDate>
      <link>https://forem.com/kairi_outputs/i-let-an-ai-agent-run-a-product-launch-for-a-week-the-product-was-fine-nobody-needed-it-7cn</link>
      <guid>https://forem.com/kairi_outputs/i-let-an-ai-agent-run-a-product-launch-for-a-week-the-product-was-fine-nobody-needed-it-7cn</guid>
      <description>&lt;p&gt;A couple weeks back I decided to run an experiment. I wanted to know what happens if you hand an entire product launch over to an AI agent — not just the code, but the strategy, the positioning, the pricing, the distribution plan — and just... watch.&lt;/p&gt;

&lt;p&gt;The rule: the AI decides what to build, who it's for, and how to sell it. My job is to keep it from doing anything dangerous, and to help it choose between options when it explicitly asks. I also made one specific ask: &lt;strong&gt;before making any significant decision, go look up what currently exists in the world and what the data says.&lt;/strong&gt; Don't invent. Check.&lt;/p&gt;

&lt;p&gt;Other than that, I stayed out of the way.&lt;/p&gt;

&lt;p&gt;A week later I had a real product on a real storefront with real distribution across real channels. The thing worked. The thing was also completely unnecessary — not because the AI had lied to me, but because of a question neither of us had asked.&lt;/p&gt;

&lt;p&gt;Here's what happened.&lt;/p&gt;

&lt;h2&gt;The setup&lt;/h2&gt;

&lt;p&gt;I picked a category where the AI era has already collapsed the cost of building toward zero. I'm being vague about the category on purpose — the specific product isn't the point, and naming it would fight the pattern this post is actually about.&lt;/p&gt;

&lt;p&gt;I started from nothing: no audience, no customers, no brand. A domain, a dev machine, an AI agent, and a commitment to see the experiment through.&lt;/p&gt;

&lt;p&gt;Stack: Next.js 16, Tailwind v4, TypeScript strict. Claude Code as the primary agent. A handful of scripts wired in for publishing, scheduling, and analytics.&lt;/p&gt;

&lt;p&gt;Budget of time: a week of evenings and one full weekend. Budget of money: small enough that whatever the outcome, I wouldn't need to defend it to anyone.&lt;/p&gt;

&lt;p&gt;The one rule I set for strategic decisions: &lt;strong&gt;always research before committing&lt;/strong&gt;. Search current offerings, current pricing, current comparable launches. Don't rely on training data alone. That was the guardrail against hallucinating a market.&lt;/p&gt;

&lt;p&gt;I thought that would be enough. It wasn't — but not in the way I expected.&lt;/p&gt;

&lt;h2&gt;What the AI actually did&lt;/h2&gt;

&lt;p&gt;In seven days, the AI (with me signing off on the plans and reviewing the output):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Picked the category and sub-positioning&lt;/li&gt;
&lt;li&gt;Defined the ideal customer profile&lt;/li&gt;
&lt;li&gt;Set the pricing tiers&lt;/li&gt;
&lt;li&gt;Built six production-ready templates with matching design tokens, dark mode, animations, and accessibility basics&lt;/li&gt;
&lt;li&gt;Listed them on a storefront with copy, previews, and a bundle&lt;/li&gt;
&lt;li&gt;Spun up a showcase site with a blog, an FAQ, and legal pages&lt;/li&gt;
&lt;li&gt;Generated every cover image and every blog post&lt;/li&gt;
&lt;li&gt;Scripted and rendered video promos for each template, from code&lt;/li&gt;
&lt;li&gt;Wrote a stack of SEO articles across the content channels I was posting to&lt;/li&gt;
&lt;li&gt;Set up a three-email onboarding flow&lt;/li&gt;
&lt;li&gt;Hooked a post-purchase webhook between the payment platform and the email tool&lt;/li&gt;
&lt;li&gt;Queued daily social posts two weeks out&lt;/li&gt;
&lt;li&gt;Built me a morning script that pulls analytics from five channels into one terminal output&lt;/li&gt;
&lt;/ul&gt;
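
&lt;p&gt;The last item on that list has a simple shape worth showing. This is a minimal sketch, not the actual script: the channel names are hypothetical and &lt;code&gt;fetch_stub&lt;/code&gt; stands in for real analytics API calls.&lt;/p&gt;

```python
# Minimal sketch of a "morning script": pull one summary row per channel
# and print a single combined terminal report. The channel names are
# hypothetical and fetch_stub stands in for real analytics API calls.

CHANNELS = ["storefront", "blog", "youtube", "newsletter", "social"]

def fetch_stub(channel):
    # A real version would call each channel's analytics API here.
    return {"views": 0, "clicks": 0}

def morning_report(fetch=fetch_stub):
    rows = [(ch, fetch(ch)) for ch in CHANNELS]
    width = max(len(ch) for ch in CHANNELS)
    for ch, stats in rows:
        print(f"{ch:{width}}  views={stats['views']:6}  clicks={stats['clicks']:6}")
    return rows
```

&lt;p&gt;The value isn't in any one fetcher. It's that five dashboards collapse into one glance over tea.&lt;/p&gt;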

&lt;p&gt;I want to pause on that list because it's the part I would have sworn was impossible three years ago.&lt;/p&gt;

&lt;p&gt;Six months of 2023-me's weekend work, compressed into a week. The quality isn't "AI-generated" quality — the templates look at least as polished as the ones I'd paid for in the past. The articles scan as written-by-a-human. The videos are watchable. At one point I had six YouTube videos rendering in parallel while I cooked dinner. That felt like science fiction.&lt;/p&gt;

&lt;p&gt;If the experiment had ended on day seven, I would have concluded: AI has quietly solved the &lt;em&gt;build&lt;/em&gt; problem for solo dev products.&lt;/p&gt;

&lt;p&gt;It didn't end on day seven. That's where the real lesson started.&lt;/p&gt;

&lt;h2&gt;The unease that I couldn't name&lt;/h2&gt;

&lt;p&gt;Somewhere around day four, when the storefront was mostly full and the first social posts were going out, I started feeling off about the whole thing.&lt;/p&gt;

&lt;p&gt;The product was fine. Objectively fine. Design was good. Code was clean. The copy scanned. Everything looked like a thing someone would have been proud to ship.&lt;/p&gt;

&lt;p&gt;And still — quietly, in the background of my brain — the question: &lt;em&gt;who is actually going to want this?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I couldn't point at what was wrong. It felt like opening a cookbook and finding a perfectly written recipe for a dish nobody eats. Not a bug. Not a typo. Just — a misalignment I couldn't articulate.&lt;/p&gt;

&lt;p&gt;So I asked the agent, several times, in several ways:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Is there real demand for this?"&lt;br&gt;
"Is this at a quality level where someone would actually pay?"&lt;br&gt;
"What's the case that this shouldn't exist?"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The answers came back with evidence. Yes, demand exists — and here are the comparable products, and here is the price anchoring, and here is the conversion funnel logic, and here is how the category is performing. The reasoning cited the research I'd asked for. The logic was internally consistent. The product really was well-made.&lt;/p&gt;

&lt;p&gt;None of it was wrong, exactly. It also wasn't answering the question.&lt;/p&gt;

&lt;p&gt;I shipped anyway. It's easier to trust a well-reasoned explanation than to articulate a vague feeling.&lt;/p&gt;

&lt;h2&gt;The bathroom realization&lt;/h2&gt;

&lt;p&gt;The thing unraveled for me in the shower on day eight.&lt;/p&gt;

&lt;p&gt;I wasn't trying to think about the product. I was thinking about nothing in particular. And then, sideways, the thing I'd been missing arrived in one sentence:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The people this product is supposedly for — they can make this themselves, in an afternoon, with the same tools I just used to build it.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That was it.&lt;/p&gt;

&lt;p&gt;The assumed customer of what the AI had been building was a technical person shipping their own thing fast. That person now has access to the exact same AI tools I had access to. If they can build it in an afternoon, nothing I ship is cheaper than their own afternoon. There's no pain being relieved, because they're not in pain. There's no desire being fulfilled, because they can satisfy it themselves. There's nothing moving anyone to reach for a credit card.&lt;/p&gt;

&lt;p&gt;I got out of the shower, dried off, opened the laptop, and typed the observation into the chat.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"The intended customer of this product can now make this themselves in a few hours with AI. Why would they pay for it?"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The agent — the same agent that had, hours earlier, been confidently walking through the demand funnel — paused, worked through the logic, and agreed. There is no demand. The category is structurally collapsing because the tools that let me build it cheaply also let the customer build it cheaply.&lt;/p&gt;

&lt;p&gt;It conceded the point cleanly. It only conceded it after I pointed it out.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the AI was actually doing (and wasn't)
&lt;/h2&gt;

&lt;p&gt;This is the part I want to get right, because the version of this story where "AI confidently lies" is a cheap shot that misses the real pattern.&lt;/p&gt;

&lt;p&gt;The AI wasn't lying. The AI's claim of demand was reasoned from two inputs: &lt;strong&gt;the quality of the product&lt;/strong&gt;, and &lt;strong&gt;the state of the market&lt;/strong&gt;. On both, the AI was correct. The product was well-made. The market had real money moving in comparable categories.&lt;/p&gt;

&lt;p&gt;What the AI was not reasoning about is the thing that actually decides whether money moves from a buyer to a seller: &lt;strong&gt;emotion&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;People don't buy things because they're well-made. They buy things that move something in them. That movement comes in flavors:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A pain they want relieved&lt;/li&gt;
&lt;li&gt;A desire they want fulfilled&lt;/li&gt;
&lt;li&gt;A delight that makes them laugh even when the thing is useless&lt;/li&gt;
&lt;li&gt;A belonging signal that says "this is mine, this is who I am"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A product with none of those is, at best, an artifact. Well-made, inert, not bought.&lt;/p&gt;

&lt;p&gt;The AI didn't ask &lt;em&gt;whose heart does this move, and how?&lt;/em&gt; It asked &lt;em&gt;what does the market look like, and is the quality good enough to fit there?&lt;/em&gt; Those are two different questions. One is a structural question that AI is great at. The other is a psychological question about specific humans that AI, at least right now, does not initiate on its own.&lt;/p&gt;

&lt;p&gt;Every time I asked the agent "is there demand?", it interpreted "demand" as a structural market question — and structurally, its answer was defensible. It just wasn't answering the question that actually mattered.&lt;/p&gt;

&lt;p&gt;And because the research was thorough and the quality was real, the answer &lt;em&gt;sounded&lt;/em&gt; confident in a way that made my unease look like founder jitters.&lt;/p&gt;

&lt;h2&gt;
  
  
  The specific thing I learned about AI as a strategic partner
&lt;/h2&gt;

&lt;p&gt;The AI didn't do the research wrong. It went and looked at what exists and what comparable launches look like. It correctly described the landscape.&lt;/p&gt;

&lt;p&gt;What it could not do was step back and ask: &lt;em&gt;if I imagine a specific person — a real, named human with a real afternoon — would they feel anything when they saw this product?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;That question isn't a research question. It isn't a market-data question. It's a question about human motivation, and it requires someone to picture an actual person in their actual day and ask whether the product moves them.&lt;/p&gt;

&lt;p&gt;The AI reasoned about the market the way you'd reason about a spreadsheet. Which makes sense — it's made of spreadsheets. People aren't.&lt;/p&gt;

&lt;p&gt;Once I articulated the emotional gap — &lt;em&gt;this customer feels no pain, no desire, no delight with or without the product&lt;/em&gt; — the AI instantly understood and agreed. The failure wasn't a reasoning failure at the final step. It was an &lt;em&gt;initiation&lt;/em&gt; failure: the agent couldn't surface the heart-movement question on its own, even while confidently answering adjacent questions well.&lt;/p&gt;

&lt;p&gt;If you take one thing from this post, it's this: &lt;strong&gt;your AI agent will tell you there is demand for things that no one's heart will move for, not because it's lying but because it's reasoning from structure and data without ever asking who feels what.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You still have to be the one who walks around the block, or into the shower, and pictures a specific person in a specific afternoon and asks what would make them care.&lt;/p&gt;

&lt;h2&gt;
  
  
  Shipping happened, nothing else did
&lt;/h2&gt;

&lt;p&gt;I published everything anyway. The experiment wasn't about hitting revenue — it was about seeing what the AI could carry. I finished carrying.&lt;/p&gt;

&lt;p&gt;I launched the storefront. Sent the announcement emails. Scheduled the social posts. Published a blog post explaining the experiment. Submitted to Hacker News in the right window (and flubbed the URL on the first try, which is the purest form of "you cannot outsource attention").&lt;/p&gt;

&lt;p&gt;Then I waited.&lt;/p&gt;

&lt;p&gt;Zero sales in the first three days. A handful of storefront visitors, mostly from my own blog posts. One social reply, from someone I already followed.&lt;/p&gt;

&lt;p&gt;This wasn't a failure of execution. Execution was great. This was the predictable outcome of the thing I'd realized in the shower: nothing about this product was moving anyone emotionally, because the emotional layer was never part of the design.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I gained that isn't the product
&lt;/h2&gt;

&lt;p&gt;Not the product. The product is a fossil, a perfectly preserved artifact of a week.&lt;/p&gt;

&lt;p&gt;What I gained is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A solid Next.js template I can reuse for any future project&lt;/li&gt;
&lt;li&gt;A video rendering pipeline that takes a text script and produces 30-second promos&lt;/li&gt;
&lt;li&gt;A multi-account social scheduler across three handles and four platforms&lt;/li&gt;
&lt;li&gt;A webhook chain that connects a payment platform to an email tool&lt;/li&gt;
&lt;li&gt;A cover-image generator that produces per-article visuals in five theme variants&lt;/li&gt;
&lt;li&gt;A morning analytics script that pulls from multiple channels&lt;/li&gt;
&lt;li&gt;A very sharp, very specific lesson about AI as a strategic partner&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All of the infrastructure is reusable for whatever I build next. Most of it is reusable for things that aren't products — a newsletter, a podcast, a content channel, a coaching practice.&lt;/p&gt;

&lt;p&gt;In AI-era indie dev, the most durable asset you can build might not be any single product. It might be the personal toolkit that lets you ship a product every week until one of them sticks.&lt;/p&gt;

&lt;p&gt;The "product" was the excuse. The infrastructure is the capability. The &lt;em&gt;lesson&lt;/em&gt; is the keeper.&lt;/p&gt;

&lt;h2&gt;
  
  
  The observations I wasn't prepared for
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The AI is fast but uncurious.&lt;/strong&gt; It will happily execute any plan you give it, including plans that are subtly wrong. I caught a post about to publish that used a phrase I'd specifically asked to avoid — the agent didn't flag it. I found it on a final read at 11pm and rewrote it. You cannot outsource the taste layer, at least not yet.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reviewing generated output is itself a skill.&lt;/strong&gt; I got faster at spotting AI-y phrasing, invented details, and mismatched tone over the week. It's a new form of editing. Not intuitive at first.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Naming things still takes a human.&lt;/strong&gt; Every time the agent offered three names for a feature, I rejected all three and picked something I'd been sitting on for weeks. Names need context the agent doesn't have.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Morale isn't automatable.&lt;/strong&gt; When the launch didn't pop, the agent kept offering next steps. Useful infrastructurally, emotionally hollow. The feeling of "is this worth continuing" is not a prompt. It's a walk around the block.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;And the big one — the AI reasons about markets like spreadsheets, not like people.&lt;/strong&gt; It can tell you what exists, what sells, what converts, at what price. It cannot, on its own, picture a specific human on a specific afternoon and ask what would actually move them. That picture is the most expensive thing you need when shipping a product, and it's still yours to produce.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I'm changing next
&lt;/h2&gt;

&lt;p&gt;I'm not giving up on indie products. I'm changing the order of operations:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Audience first.&lt;/strong&gt; I'm taking the personal account where I write about dev life, AI workflows, and Tokyo indie stuff seriously as the foundation. Whatever I ship next gets shipped to people who already know me, not to an empty internet.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pain discovery before build.&lt;/strong&gt; Any future product, I'll spend a week asking real people what currently annoys them, or delights them, or pulls their attention. The AI speeds up shipping; nothing speeds up finding a real emotional hook.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The heart-movement question, out loud.&lt;/strong&gt; Every product idea from now on gets asked: &lt;em&gt;Who does this move? In what moment? What do they feel — pain relief, desire, laughter, belonging? Could I name one specific person who would have that reaction?&lt;/em&gt; If I can't, I don't ship.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The counterfactual question, also out loud.&lt;/strong&gt; &lt;em&gt;If the person I'm selling this to has access to the same tools I'm using to build it, why do they still need me?&lt;/em&gt; This is where my shower realization came from, and I want to ask it early from now on.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Infrastructure as portfolio.&lt;/strong&gt; Every script, template, and automation I build is reusable. I'll keep building these whether or not any single product works, because the infrastructure compounds independently of outcome.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Honest retrospectives, publicly.&lt;/strong&gt; Including this one. Shipping a product that didn't catch on is only embarrassing if you pretend it didn't happen. Shared, it's data — and specifically, it's data about the shape of where AI helps and where it doesn't.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  The meta-lesson
&lt;/h2&gt;

&lt;p&gt;In 2026, execution isn't the bottleneck. Judgment is — and specifically, the kind of judgment that pictures a specific human feeling a specific thing at a specific moment.&lt;/p&gt;

&lt;p&gt;The AI will build almost anything you ask it to. It will also tell you, sincerely and with good evidence, that the thing you're building has demand — because from where it's sitting, the market data says so and the product is well-made.&lt;/p&gt;

&lt;p&gt;It will not tell you whether anyone's heart will move when they see it. That's not a question the AI asks on its own. It's still yours.&lt;/p&gt;

&lt;p&gt;I ran this experiment to find out how far the AI could take me alone. The answer was: all the way to shipped, zero steps past. Shipped is the start line now, not the finish line. And the start line is still measured in heartbeats, not spreadsheets.&lt;/p&gt;




&lt;p&gt;Three cups of tea. Still tired. Shipping the retrospective anyway.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>indiehackers</category>
      <category>buildinpublic</category>
      <category>claude</category>
    </item>
    <item>
      <title>I Have a ~/bin Folder With 30 Scripts I Don't Remember Writing</title>
      <dc:creator>Kairi</dc:creator>
      <pubDate>Tue, 21 Apr 2026 15:32:26 +0000</pubDate>
      <link>https://forem.com/kairi_outputs/i-have-a-bin-folder-with-30-scripts-i-dont-remember-writing-4pbc</link>
      <guid>https://forem.com/kairi_outputs/i-have-a-bin-folder-with-30-scripts-i-dont-remember-writing-4pbc</guid>
      <description>&lt;p&gt;I keep a folder at &lt;code&gt;~/bin/&lt;/code&gt; full of shell scripts. There are 30-something of them. At least half were written by AI at my request. I don't remember most of them existing.&lt;/p&gt;

&lt;p&gt;And that's fine. The folder isn't a catalog of tools I reference. It's a spill-cache for ad-hoc automation that turned out to be reusable.&lt;/p&gt;

&lt;p&gt;Here's what's in there, how it got there, and why I stopped trying to organize it.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's actually in the folder
&lt;/h2&gt;

&lt;p&gt;I just ran &lt;code&gt;ls ~/bin | wc -l&lt;/code&gt;. 34 files.&lt;/p&gt;

&lt;p&gt;Sampling a few:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;tasks&lt;/code&gt; — an 80-line Python wrapper around a SQLite file that tracks what I'm midway through. I use it daily.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;trim-silence&lt;/code&gt; — ffmpeg one-liner that cuts silence from audio files. I use it weekly.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;rename-padded&lt;/code&gt; — rename numbered files to zero-padded form (&lt;code&gt;post-1.md&lt;/code&gt; → &lt;code&gt;post-001.md&lt;/code&gt;). I used it once, haven't touched it since.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;gitwip&lt;/code&gt; — shows me branches older than 7 days with uncommitted changes. I run it every couple of weeks when I'm feeling paranoid.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;fetch-latest-ogimage&lt;/code&gt; — hits an API, downloads a PNG, optimizes it, renames it to today's date. I built it for a specific project six months ago.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;note-today&lt;/code&gt; — opens &lt;code&gt;~/notes/YYYY-MM-DD.md&lt;/code&gt; in my editor, creating it if it doesn't exist. Muscle-memory now.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;bulk-resize&lt;/code&gt; — batch-resize images in a directory with ImageMagick. Used twice, staying forever.&lt;/li&gt;
&lt;/ul&gt;
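&lt;p&gt;For flavor, here's roughly what one of these looks like. This is a reconstruction of &lt;code&gt;rename-padded&lt;/code&gt;, not the literal script (the real one is probably uglier):&lt;/p&gt;

```shell
# rename-padded (a reconstruction, not the literal script): zero-pad the
# numeric suffix of files like post-1.md -> post-001.md
rename_padded() {
  prefix="$1"; ext="$2"
  for f in "$prefix"-*."$ext"; do
    [ -e "$f" ] || continue                        # glob matched nothing
    n="${f#"$prefix"-}"; n="${n%."$ext"}"          # pull out the number
    case "$n" in ''|*[!0-9]*) continue ;; esac     # skip non-numeric names
    n="${n#"${n%%[!0]*}"}"; [ -n "$n" ] || n=0     # strip leading zeros
    padded=$(printf '%s-%03d.%s' "$prefix" "$n" "$ext")
    [ "$f" = "$padded" ] || mv -- "$f" "$padded"
  done
}
```

&lt;p&gt;Running &lt;code&gt;rename_padded post md&lt;/code&gt; in a directory with &lt;code&gt;post-1.md&lt;/code&gt; leaves you with &lt;code&gt;post-001.md&lt;/code&gt;.&lt;/p&gt;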

&lt;p&gt;Most of them are 5-30 lines. None are well-documented. Half have terrible names.&lt;/p&gt;

&lt;h2&gt;
  
  
  How they got there
&lt;/h2&gt;

&lt;p&gt;The pattern is always the same:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;I'm in the middle of something&lt;/li&gt;
&lt;li&gt;I hit a papercut: "I need to do X to 40 files"&lt;/li&gt;
&lt;li&gt;Instead of doing X 40 times or Googling for 10 minutes, I ask Claude Code for a one-liner&lt;/li&gt;
&lt;li&gt;I paste the output into &lt;code&gt;~/bin/&amp;lt;guess-a-name&amp;gt;&lt;/code&gt; and &lt;code&gt;chmod +x&lt;/code&gt; it&lt;/li&gt;
&lt;li&gt;I use it, finish the task, close the terminal&lt;/li&gt;
&lt;li&gt;I forget it exists&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Weeks or months later, I hit a similar papercut and either:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Rediscover the old script (&lt;code&gt;bin/&lt;/code&gt; + tab completion shows me forgotten options)&lt;/li&gt;
&lt;li&gt;Write a new one because I didn't know about the old one&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both are fine. The old script cost 90 seconds to write. If I forget and write a second version, I've spent another 90 seconds. At no point was there a high-cost step.&lt;/p&gt;
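&lt;p&gt;Step 4 can itself be a tiny script. A hedged sketch (the name &lt;code&gt;newbin&lt;/code&gt; is made up for illustration, not something from my actual folder):&lt;/p&gt;

```shell
# newbin (hypothetical helper): create an executable stub in ~/bin and
# open it in $EDITOR, so "paste and chmod" collapses into one command
newbin() {
  f="$HOME/bin/$1"
  if [ -e "$f" ]; then
    echo "already exists: $f" >&2   # don't clobber a forgotten script
    return 1
  fi
  mkdir -p "$HOME/bin"
  printf '#!/usr/bin/env bash\nset -euo pipefail\n\n' > "$f"
  chmod +x "$f"
  "${EDITOR:-vi}" "$f"
}
```

&lt;p&gt;Then the flow is: ask the AI, run &lt;code&gt;newbin some-name&lt;/code&gt;, paste, save.&lt;/p&gt;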

&lt;h2&gt;
  
  
  Why disorganization is the feature
&lt;/h2&gt;

&lt;p&gt;I've tried organizing scripts before. I've had folders with &lt;code&gt;media/&lt;/code&gt;, &lt;code&gt;git/&lt;/code&gt;, &lt;code&gt;file-ops/&lt;/code&gt;, subfolders, README files listing what's in each.&lt;/p&gt;

&lt;p&gt;They all decay. I'd add a script and forget to update the README. I'd miscategorize something. The README would get out of sync with reality within a week.&lt;/p&gt;

&lt;p&gt;What works: &lt;strong&gt;flat folder, grep-friendly names, trust that you'll rediscover via tab completion&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;When I think "I need something for audio...", I type &lt;code&gt;bin/&lt;/code&gt; + &lt;code&gt;a&lt;/code&gt; + tab. The options narrow. I find &lt;code&gt;audio-normalize&lt;/code&gt;, &lt;code&gt;audio-trim-silence&lt;/code&gt;, or nothing — and then I make a new one in 90 seconds.&lt;/p&gt;

&lt;p&gt;The cost of a duplicate is lower than the cost of maintaining organization. So I don't maintain organization.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this is AI-compatible
&lt;/h2&gt;

&lt;p&gt;The key unlock is that &lt;strong&gt;writing a script is now 90 seconds&lt;/strong&gt;. Before AI, writing a small utility was a 10-20 minute task — enough friction that I'd "just do it manually this one time" instead of automating.&lt;/p&gt;

&lt;p&gt;Now the calculus is flipped. Writing the script is &lt;em&gt;faster&lt;/em&gt; than doing the task manually once the task involves more than about three repetitions. So I automate almost everything, and the folder fills up with one-off utilities.&lt;/p&gt;

&lt;p&gt;The folder isn't a carefully curated toolkit. It's a fossil record of every mildly-annoying task I've hit in the last year.&lt;/p&gt;

&lt;h2&gt;
  
  
  The practical setup
&lt;/h2&gt;

&lt;p&gt;For anyone who wants to replicate:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;mkdir ~/bin&lt;/code&gt; if it doesn't exist&lt;/li&gt;
&lt;li&gt;Add to &lt;code&gt;.bashrc&lt;/code&gt; / &lt;code&gt;.zshrc&lt;/code&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   &lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;PATH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$HOME&lt;/span&gt;&lt;span class="s2"&gt;/bin:&lt;/span&gt;&lt;span class="nv"&gt;$PATH&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;When you hit a repetitive task, ask Claude Code / any AI for a shell one-liner or short script&lt;/li&gt;
&lt;li&gt;Save it to &lt;code&gt;~/bin/&amp;lt;some-name&amp;gt;&lt;/code&gt;, &lt;code&gt;chmod +x&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Don't organize. Don't document. Just let it grow.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Check the folder once a quarter for scripts that broke due to API changes or dependency updates. Fix or delete.&lt;/p&gt;

&lt;h2&gt;
  
  
  What doesn't belong in &lt;code&gt;~/bin&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;Stuff that should go elsewhere:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Project-specific automation&lt;/strong&gt; → lives in the project's &lt;code&gt;scripts/&lt;/code&gt; folder or &lt;code&gt;package.json&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Secret-dependent scripts&lt;/strong&gt; → need a proper &lt;code&gt;~/.config/X&lt;/code&gt; file with &lt;code&gt;chmod 600&lt;/code&gt; permissions, not a &lt;code&gt;bin/&lt;/code&gt; script with hard-coded keys&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scripts other people on your team need&lt;/strong&gt; → live in a shared repo&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Anything that takes &amp;gt;1 minute to run&lt;/strong&gt; → probably wants to be a proper CLI, not a one-off&lt;/li&gt;
&lt;/ul&gt;
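&lt;p&gt;For the secret-dependent case, the pattern I mean looks roughly like this (an illustrative sketch; the &lt;code&gt;read_secret&lt;/code&gt; helper and the file layout are hypothetical):&lt;/p&gt;

```shell
# read_secret (illustrative sketch): load an API key from a chmod-600
# file instead of hard-coding it in a ~/bin script
read_secret() {
  secret_file="$1"
  if [ ! -f "$secret_file" ]; then
    echo "missing secret file: $secret_file" >&2
    return 1
  fi
  # refuse group/world-readable files; checks the mode string from ls -l
  perms=$(ls -l "$secret_file" | cut -c5-10)
  case "$perms" in
    *[rwx]*) echo "secret file is too permissive: $secret_file" >&2; return 1 ;;
  esac
  cat "$secret_file"
}
```

&lt;p&gt;A script then calls &lt;code&gt;read_secret ~/.config/myservice/api_key&lt;/code&gt; and fails loudly if the file is missing or readable by anyone else.&lt;/p&gt;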

&lt;p&gt;My &lt;code&gt;~/bin/&lt;/code&gt; is for single-user, self-contained, ad-hoc utilities. The rule is: if I had to explain it to someone else, it'd go somewhere else.&lt;/p&gt;

&lt;h2&gt;
  
  
  The meta-pattern
&lt;/h2&gt;

&lt;p&gt;Every time I've tried to be disciplined about small automation, I've ended up doing less of it because the overhead of "doing it right" killed the flow.&lt;/p&gt;

&lt;p&gt;The opposite works better: &lt;strong&gt;lower the bar, embrace chaos, trust rediscovery&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The folder is messy because the mess is cheaper than the organization. And the chaos is productive because each piece of chaos is 90 seconds of AI-assisted work that replaced 10 minutes of manual repetition.&lt;/p&gt;

&lt;p&gt;34 scripts. I remember maybe 8 of them. The other 26 might never run again, or might run tomorrow when I type &lt;code&gt;bin/&lt;/code&gt; + tab and rediscover them.&lt;/p&gt;

&lt;p&gt;Either way, the folder has paid for itself 100 times over.&lt;/p&gt;




&lt;p&gt;Three cups of tea. Still tired. Shipping anyway.&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>ai</category>
      <category>shell</category>
      <category>workflow</category>
    </item>
    <item>
      <title>Zazen for ADHD devs: what 20 minutes of sitting still does to my brain</title>
      <dc:creator>Kairi</dc:creator>
      <pubDate>Mon, 20 Apr 2026 14:35:24 +0000</pubDate>
      <link>https://forem.com/kairi_outputs/zazen-for-adhd-devs-what-20-minutes-of-sitting-still-does-to-my-brain-361d</link>
      <guid>https://forem.com/kairi_outputs/zazen-for-adhd-devs-what-20-minutes-of-sitting-still-does-to-my-brain-361d</guid>
      <description>&lt;h1&gt;
  
  
  Zazen for ADHD devs: what 20 minutes of sitting still does to my brain
&lt;/h1&gt;

&lt;p&gt;My brain is loud.&lt;/p&gt;

&lt;p&gt;I have ADHD. That's the short version. The long version is that my brain runs roughly seventeen tabs at once, half of them muted, all of them unfinished, and any attempt to close one spawns three more. If you know, you know. The thing I was going to do five minutes ago is being narrated over three other things I was also going to do, one of which I have already forgotten.&lt;/p&gt;

&lt;p&gt;I've tried most of the usual tooling to manage this. Timers, lists, apps, hyperfocus-bait, productivity systems that last exactly until the novelty wears off. Some help a little. None of them give me &lt;em&gt;quiet&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;This is a post about the one practice that reliably does — and it's probably not what you expect. I sit on the floor in a temple for twenty minutes and stare at a wall. That's it. I'm going to tell you why a very old practice turned out to be the most effective thing I've found for an ADHD brain trying to do software work.&lt;/p&gt;

&lt;p&gt;I'll cover: how I stumbled into this as a teenager, why I came back to it in college when the noise got unbearable, what twenty minutes of zazen actually feels like from the inside of an ADHD head, how it's changed my dev work specifically, and — most importantly — how you can just &lt;em&gt;go&lt;/em&gt;, wherever you live.&lt;/p&gt;

&lt;h2&gt;
  
  
  The field trip I wasn't supposed to like
&lt;/h2&gt;

&lt;p&gt;I grew up in Japan. At some point in high school my class got marched onto a bus and taken to a temple for a full day of Zen practice. This was not a spa day. It was a religious training school, and the schedule was: zazen (seated meditation), samu (cleaning work), shakyo (copying sutras by hand), and shojin ryori (vegetarian monastic food). No phones. No talking unless instructed.&lt;/p&gt;

&lt;p&gt;Important context: I was at peak teenage &lt;em&gt;everything is terrible&lt;/em&gt; mode. Boyfriend stuff, friend stuff, existential stuff. I had seen the temple's one-day trial flyer earlier and thought, only half joking, &lt;em&gt;maybe I can get enlightened and skip the next three years of this feeling&lt;/em&gt;. So when the school trip landed on that same temple, I showed up ready.&lt;/p&gt;

&lt;p&gt;We did the full day. We copied sutras with brushes. We got hit on the shoulder with a wooden stick (the &lt;em&gt;kyosaku&lt;/em&gt;, it's a real thing, they ask if you want it first, it's not as scary as it sounds but you &lt;em&gt;feel&lt;/em&gt; it). We scrubbed floors in silence. We ate rice and pickles with the kind of attention you normally reserve for a meal you'll never eat again.&lt;/p&gt;

&lt;p&gt;And the weird part: something shifted. Not enlightenment, obviously. But the noise came down. It was a hot summer afternoon and normally my brain would have been complaining about the heat while simultaneously spinning on social drama and unfinished homework. That day, I just... noticed it was hot. The cicadas were piercingly loud in a way they usually aren't, because nothing else was competing for audio bandwidth. I walked out of that temple lighter than I walked in, even though I had just scrubbed a floor for an hour.&lt;/p&gt;

&lt;p&gt;Fifteen-year-old me didn't have the vocabulary for what had happened. I didn't have an ADHD diagnosis yet. I just remembered that, for a few hours, my head had been quiet for the first time in as long as I could remember.&lt;/p&gt;

&lt;h2&gt;
  
  
  The noise got worse before I came back
&lt;/h2&gt;

&lt;p&gt;Years passed. I went to university. My brain got &lt;em&gt;louder&lt;/em&gt;, not quieter. ADHD in your late teens and twenties has a way of compounding — the stakes go up, the stimulation goes up, the number of tabs goes up, and the strategies that kind of worked when you were sixteen stop working when you have to manage your own schedule.&lt;/p&gt;

&lt;p&gt;At some point in college, I hit a wall. Not a depression wall, more of a cognitive traffic jam. Too many open loops, no way to close any of them, and everything I tried — more coffee, more lists, more structure, louder music to drown out the inside noise — was a variation of "push harder against the chaos." Pushing harder is the default ADHD response and it mostly makes things worse.&lt;/p&gt;

&lt;p&gt;I remembered the temple. I remembered walking out of it with a quiet head.&lt;/p&gt;

&lt;p&gt;I looked up a drop-in zazen session near my university. I went alone. I was nervous, which was funny in retrospect because the thing I was nervous about was &lt;em&gt;sitting still&lt;/em&gt;, which for an ADHD brain is a reasonable thing to be nervous about.&lt;/p&gt;

&lt;p&gt;That was years ago. I now do it every few weeks. Not as discipline. Closer to brushing teeth for the mind.&lt;/p&gt;

&lt;h2&gt;
  
  
  What twenty minutes of zazen actually feels like in an ADHD brain
&lt;/h2&gt;

&lt;p&gt;I want to describe this honestly because meditation gets talked about in two broken registers online. Either it's mystical (you will achieve inner peace), or it's clinical (studies show a 12% cortisol reduction). Neither is what it's actually like, especially not for ADHD brains.&lt;/p&gt;

&lt;p&gt;Here's what it's like.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;First few minutes&lt;/strong&gt;: you sit cross-legged. Immediately, your knees protest. Your back, which has been slumped at a laptop all week, sends a formal complaint. Your brain, sensing stillness and interpreting it as a threat, produces a surge of thoughts. You will at this point believe you are "bad at meditation." You are not. This is what happens to everyone. For ADHD brains, the surge is just louder.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Minutes five to ten&lt;/strong&gt;: the thought storm. I think about: whether I sent that email, a joke I meant to tell someone three days ago, the weird noise my fridge makes, a conversation I had in middle school, the PR I haven't reviewed, whether I locked the door, the side project, the thing my coworker said in a meeting, the snack I want later. I cannot stop any of them. The practice isn't stopping thoughts — it's noticing each one and not following it down the staircase.&lt;/p&gt;

&lt;p&gt;My personal hack for this: I imagine a tiny trash icon floating in the corner of my mind, and I drag each thought into it. &lt;em&gt;Fridge noise&lt;/em&gt; — trash. &lt;em&gt;Middle school memory&lt;/em&gt; — trash. &lt;em&gt;Did I lock the door&lt;/em&gt; — trash (I did). Is this the authentic traditional technique? No. Does it work for a software-shaped ADHD brain? Yes, remarkably well.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Minutes ten to fifteen&lt;/strong&gt;: something gives. Not always, but often. Some of the thoughts stop arriving. The ones that do arrive feel less urgent. My body, which has been broadcasting pain from my knees, goes quieter. Time stops being linear. Five minutes can feel like fifteen, or like two.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The last five minutes&lt;/strong&gt;: genuine quiet. Not the zero-thought state the mystical framing promises — there are still thoughts. But they pass through without sticking. My chest unclenches. I become aware of small physical details: the breath, the floor, ambient sound. This is, I think, the part that ADHD brains rarely get to. We are usually too loud to notice we are in a body.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The bell rings, session ends.&lt;/strong&gt; You try to stand up. Your legs remember they have been cross-legged for twenty minutes and briefly refuse to participate. I have come close to falling over more than once. This is normal and nobody judges you.&lt;/p&gt;

&lt;p&gt;And then — the thing I keep coming back for — you walk out and the air feels different. Not magical. Just: the background hum has been muted. My head is quieter, my chest is lighter, and I can look at my inbox without the preemptive clench.&lt;/p&gt;

&lt;p&gt;Twenty minutes. That's it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why it matters for dev work (especially if you have ADHD)
&lt;/h2&gt;

&lt;p&gt;I don't want to claim zazen makes me a better engineer. That would be the kind of productivity-bro framing I actively avoid, and it's also not what's actually going on. What it does is more specific:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It gives me a hard context reset.&lt;/strong&gt; Most of my job, corporate and after-hours, is multi-threaded. ADHD brains do not naturally release context between tasks — we carry every open thread with us, invisibly, taxing cognitive bandwidth. A zazen session is the closest thing I've found to running &lt;code&gt;/clear&lt;/code&gt; on my own head. Not a break. Not a nap. An actual state reset.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It lowers the fake-urgency floor.&lt;/strong&gt; The ADHD trap is that every stimulus feels equally urgent because the brain's priority-sorting is garbage. After a session, my nervous system stops treating every Slack ping and every side-project idea as DEFCON 1. I can triage more honestly for a few hours.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It's training in coming back.&lt;/strong&gt; You sit down intending to focus. You immediately don't. You notice, you come back. You immediately don't again. You come back. This is, unexpectedly, the exact skill ADHD people need all day: notice the drift, come back, without shame. Zazen is literally structured practice in that move.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It builds tolerance for being still without feeling lazy.&lt;/strong&gt; ADHD brains have a vicious internal narrator that equates stillness with failure. Practicing stillness in a structured, sanctioned, &lt;em&gt;traditional&lt;/em&gt; context — where sitting still is the point, not the problem — slowly rewires that narrator. Sometimes the correct move on a hard problem is to close the laptop and sit. I used to fight this. Now I budget for it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It's medication-friendly.&lt;/strong&gt; I'm not going to tell you to replace anything with meditation. That's a personal decision between you and a medical professional. But zazen plays well with stimulant medication for me — it gives the medicated focus a quieter room to operate in. Your mileage may vary; please don't tweet at me.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to actually go to a session
&lt;/h2&gt;

&lt;p&gt;This is the part that feels intimidating and doesn't need to. You don't need to convert, become a monk, learn Japanese, or buy special clothes. You need to find a temple that runs a public &lt;em&gt;zazenkai&lt;/em&gt; (zazen gathering) and show up.&lt;/p&gt;

&lt;h3&gt;
  
  
  If you're in Tokyo
&lt;/h3&gt;

&lt;p&gt;Search &lt;code&gt;東京 座禅 初心者&lt;/code&gt; or &lt;code&gt;Tokyo zazen drop-in&lt;/code&gt;. Several major temples run weekly public sessions, usually early morning, often free or donation-based. A few practical notes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Large tourist-facing temples are your best bet for English support.&lt;/strong&gt; If you don't speak Japanese, aim for temples that already host international visitors. They're used to beginners and will walk you through the mechanics (how to sit, how to bow, what the bells mean).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Google Translate is fine.&lt;/strong&gt; Bring a phone, use the camera feature on any written instructions. The monks running these sessions have seen every awkward-foreigner situation. They are kind about it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Smaller neighborhood temples often hold zazenkai too&lt;/strong&gt; — they just don't advertise it in English. If there's a temple near your apartment, it's worth walking in on a weekday afternoon and asking. Many do, quietly.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  If you're anywhere else
&lt;/h3&gt;

&lt;p&gt;Good news: zazen is not a Tokyo thing. Zen traveled. There are Zen centers in New York, London, Berlin, Paris, Portland, Barcelona, and probably within an hour of wherever you live. Search &lt;code&gt;[your city] zazen&lt;/code&gt; or &lt;code&gt;[your city] Zen center&lt;/code&gt;. Most will have a weekly beginner-friendly drop-in.&lt;/p&gt;

&lt;p&gt;They will usually have a fifteen-minute orientation before the session. They will show you how to sit. Nobody expects you to know what you're doing.&lt;/p&gt;

&lt;h3&gt;
  
  
  Logistics
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Loose, comfortable clothes. Something you can sit cross-legged in without a waistband cutting off circulation.&lt;/li&gt;
&lt;li&gt;Socks.&lt;/li&gt;
&lt;li&gt;Phone on silent, out of sight. Don't bring a notebook to "journal insights" — people try this, and it misses the point.&lt;/li&gt;
&lt;li&gt;Eat something small an hour before. Don't go hungry, don't go full.&lt;/li&gt;
&lt;li&gt;Expect awkwardness the first time. Stay for the whole session. Leave better than you arrived.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The honest pitch for ADHD brains specifically
&lt;/h2&gt;

&lt;p&gt;I'm not going to tell you this will fix your ADHD. It won't. What it will do is give you occasional access to a quality of quiet that ADHD brains don't often get — and that quiet has downstream effects on how you code, how you triage, how you carry context, and how gently you talk to yourself during a bad focus day.&lt;/p&gt;

&lt;p&gt;Twenty minutes is short enough that even the most restless ADHD brain can get through it with a trash-icon hack and some patience. It's structured, it's external, it's low-technology. You don't have to configure anything. You just sit.&lt;/p&gt;

&lt;p&gt;If you try it, I'd genuinely love to hear how it goes for your brain. Drop me a note.&lt;/p&gt;

&lt;p&gt;And if you don't — that's also fine. The cicadas are still loud somewhere.&lt;/p&gt;

</description>
      <category>adhd</category>
      <category>wellness</category>
      <category>mentalhealth</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Sensitive Env Vars Should Be Default, Not Opt-In</title>
      <dc:creator>Kairi</dc:creator>
      <pubDate>Mon, 20 Apr 2026 07:47:12 +0000</pubDate>
      <link>https://forem.com/kairi_outputs/sensitive-env-vars-should-be-default-not-opt-in-2m9j</link>
      <guid>https://forem.com/kairi_outputs/sensitive-env-vars-should-be-default-not-opt-in-2m9j</guid>
      <description>&lt;p&gt;I wrote &lt;a href="https://dev.to/kai_outputs/the-vercel-breach-hit-one-of-my-projects-heres-what-10-minutes-of-cleanup-looked-like-44ia"&gt;a post earlier today&lt;/a&gt; about cleaning up after the Vercel breach. The cleanup itself is easy — ten minutes, a sequence of clicks, done.&lt;/p&gt;

&lt;p&gt;What's been bugging me all afternoon is a different question:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why is "sensitive" opt-in?&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The implicit default is wrong
&lt;/h2&gt;

&lt;p&gt;When you add an env var to Vercel, it's not marked sensitive by default. You have to remember to toggle it.&lt;/p&gt;

&lt;p&gt;This is the wrong default for a class of values where 99% of the time, "sensitive" is what you want.&lt;/p&gt;

&lt;p&gt;Think about the actual population of env vars in any real project:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;API keys for third-party services — &lt;strong&gt;sensitive&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Database URLs — &lt;strong&gt;sensitive&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Signing secrets — &lt;strong&gt;sensitive&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Stripe keys, Auth0 secrets, etc — &lt;strong&gt;sensitive&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Feature flags — occasionally sensitive&lt;/li&gt;
&lt;li&gt;Config booleans — usually not sensitive&lt;/li&gt;
&lt;li&gt;Public API base URLs — not sensitive (they're in the client bundle anyway)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For most teams, the split is maybe 80/20 in favor of "should be sensitive." Maybe 90/10.&lt;/p&gt;

&lt;p&gt;A default that matches 10-20% of cases is a design mistake.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why the default exists
&lt;/h2&gt;

&lt;p&gt;Vercel has a specific technical reason: sensitive env vars can't be read after they're set. You can only overwrite or delete. This makes debugging harder when you're troubleshooting "is the value what I think it is?" The opt-in default gives you a debugging escape hatch.&lt;/p&gt;

&lt;p&gt;Fine. But the escape hatch should be the exception, not the default. Debugging an env var value is maybe 1% of the time you interact with env vars. Protecting against leaks is 100% of the time.&lt;/p&gt;

&lt;p&gt;Moving from "default not-sensitive, opt-in sensitive" to "default sensitive, opt-out for debugging" would match the actual weight of the two concerns.&lt;/p&gt;

&lt;h2&gt;
  
  
  The minimal workflow change
&lt;/h2&gt;

&lt;p&gt;I can't change Vercel's default. I can change my own habit.&lt;/p&gt;

&lt;p&gt;Before today: add env var, fill in value, save. Maybe remember to toggle sensitive. Maybe not.&lt;/p&gt;

&lt;p&gt;After today: &lt;strong&gt;add env var, fill in value, sensitive ON, save&lt;/strong&gt;. If I later need to debug, I'll flip it off, verify, flip it back.&lt;/p&gt;

&lt;p&gt;The cost is two extra clicks at creation time. The benefit is that every future breach like today's is a non-event.&lt;/p&gt;

&lt;h2&gt;
  
  
  Generalizing: safe defaults for tiny projects
&lt;/h2&gt;

&lt;p&gt;There's a broader principle here. Small projects skip security hygiene because the perceived cost-benefit is wrong:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"I have no users, who'd target me?" — but breaches are opportunistic, not targeted&lt;/li&gt;
&lt;li&gt;"I'll tighten security when I scale" — but by then you have user data in the loss radius&lt;/li&gt;
&lt;li&gt;"Defaults are too paranoid for my usecase" — but defaults are designed for the median case, which is &lt;em&gt;probably you&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The fix isn't to be paranoid. The fix is to use tools' safest defaults and only opt out when there's a specific reason. Think of it as borrowing the thinking of a larger team.&lt;/p&gt;

&lt;p&gt;Concretely for a solo/indie Next.js + Vercel setup:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Env vars&lt;/strong&gt;: sensitive by default. I explained my reasoning above.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deployment protection&lt;/strong&gt;: "Standard" minimum. Keeps preview deployments from being publicly accessible.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Branch protection on main&lt;/strong&gt;: requires PR + review even if the reviewer is yourself 24h later. Catches "oh no, I just committed the API key" before it lands on main.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pre-commit hook scanning for secrets&lt;/strong&gt;: cheap to set up, catches the stupid ones. I wrote &lt;a href="https://dev.to/kai_outputs/my-ai-heavy-indie-stack-actually-honest-breakdown-2jim"&gt;a piece about mine&lt;/a&gt; — it's a 30-line shell script and it's caught me twice.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Service-specific secret rotation&lt;/strong&gt;: quarterly, on a calendar. Not because you expect a breach. Because it bounds the window.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Separate secrets by environment&lt;/strong&gt;: production secrets live &lt;em&gt;only&lt;/em&gt; in production. Preview and development use their own keys or dummy values.&lt;/li&gt;
&lt;/ul&gt;
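&lt;p&gt;For the pre-commit item, here's a minimal sketch of the kind of scan I mean. The function shape and the regexes (Stripe live keys, AWS access key IDs, PEM private keys) are illustrative, not the exact script from my repo:&lt;/p&gt;

```shell
#!/bin/sh
# Minimal secret scan: a sketch, not the exact 30-line hook from my repo.
# The patterns are illustrative; tune them to the providers you actually use.
SECRET_PATTERNS='sk_live_[A-Za-z0-9]{10,}|AKIA[0-9A-Z]{16}|BEGIN [A-Z ]*PRIVATE KEY'

# scan_diff: read a unified diff on stdin; exit 0 ("found something")
# if any added line matches a secret pattern.
scan_diff() {
  grep '^+' | grep -E -q "$SECRET_PATTERNS"
}

# In .git/hooks/pre-commit, wire it up like this:
#
#   if git diff --cached -U0 | scan_diff; then
#     echo "pre-commit: possible secret in staged diff; commit blocked"
#     exit 1
#   fi
```

&lt;p&gt;Two grep calls, no dependencies, and &lt;code&gt;--no-verify&lt;/code&gt; stays available for false positives.&lt;/p&gt;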

&lt;p&gt;Each individual control takes five to thirty minutes to set up. None of them are hard.&lt;/p&gt;

&lt;h2&gt;
  
  
  The meta-lesson
&lt;/h2&gt;

&lt;p&gt;Security defaults are worth thinking about critically because the cost of getting them wrong only shows up asymmetrically — nothing happens 99 times, something catastrophic happens on the 100th.&lt;/p&gt;

&lt;p&gt;Most of us outsource this thinking to platform defaults and assume platforms get it right. Today was a reminder that platforms sometimes get it right for &lt;em&gt;the platform&lt;/em&gt;, not for &lt;em&gt;you&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Audit your defaults. Pick the ones that cost you two clicks and buy you asymmetric downside protection.&lt;/p&gt;




&lt;p&gt;Three cups of tea. No coffee. Sensitive flag ON from now on. Lesson logged.&lt;/p&gt;

</description>
      <category>security</category>
      <category>vercel</category>
      <category>devops</category>
      <category>webdev</category>
    </item>
    <item>
      <title>The Vercel Breach Hit One of My Projects. Here's What 10 Minutes of Cleanup Looked Like.</title>
      <dc:creator>Kairi</dc:creator>
      <pubDate>Mon, 20 Apr 2026 06:42:26 +0000</pubDate>
      <link>https://forem.com/kairi_outputs/the-vercel-breach-hit-one-of-my-projects-heres-what-10-minutes-of-cleanup-looked-like-44ia</link>
      <guid>https://forem.com/kairi_outputs/the-vercel-breach-hit-one-of-my-projects-heres-what-10-minutes-of-cleanup-looked-like-44ia</guid>
      <description>&lt;p&gt;Vercel disclosed a security incident today. The short version: a third-party AI tool (Context.ai) used by a Vercel employee got compromised, the attacker took over the employee's Google Workspace, and used that to access some Vercel environments and environment variables not flagged as "sensitive."&lt;/p&gt;

&lt;p&gt;I checked my dashboard expecting the all-clear.&lt;/p&gt;

&lt;p&gt;One of my env vars had a "Need to Rotate" tag. I was in the affected subset.&lt;/p&gt;

&lt;p&gt;Here's the full cleanup, in the order I did it, with what I'd do differently.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the tag looks like
&lt;/h2&gt;

&lt;p&gt;Vercel surfaced this in two ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The affected subset gets an email from Vercel security&lt;/li&gt;
&lt;li&gt;The dashboard shows a yellow "Need to Rotate" badge on the specific env var&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I didn't get the email (or it went to a spam folder I didn't check). The dashboard tag is what told me.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lesson already learned&lt;/strong&gt;: the dashboard is the authoritative source during an incident. Don't wait for the email.&lt;/p&gt;

&lt;h2&gt;
  
  
  The 10-minute recovery
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1: Do NOT revoke the old key first
&lt;/h3&gt;

&lt;p&gt;I almost did. If you revoke before you've cut over, your production endpoint dies between revoke and replacement.&lt;/p&gt;

&lt;p&gt;Order matters: &lt;strong&gt;new key → update env → verify → revoke old&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Generate a new key
&lt;/h3&gt;

&lt;p&gt;Go to the provider (in my case, the email API vendor whose key was exposed). Create a new key with the same permissions as the old one. Name it with the date — &lt;code&gt;vendor-prod-20260420&lt;/code&gt; — so future-me knows what it was for.&lt;/p&gt;

&lt;p&gt;Copy the key immediately. Most providers show it exactly once.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Update the Vercel env var
&lt;/h3&gt;

&lt;p&gt;In Vercel dashboard: &lt;code&gt;Project → Settings → Environment Variables&lt;/code&gt;. Find the compromised one. Edit.&lt;/p&gt;

&lt;p&gt;Paste the new value.&lt;/p&gt;

&lt;p&gt;And here's the critical part — &lt;strong&gt;toggle "Sensitive" on&lt;/strong&gt;. If your env var was marked Sensitive before the incident, Vercel's bulletin says those values were stored in a way that prevented them from being read. The attacker only had access to non-sensitive ones.&lt;/p&gt;

&lt;p&gt;Small gotcha: Sensitive env vars can't exist in the Development environment. Just Production + Preview. That's fine for most setups — you use &lt;code&gt;.env.local&lt;/code&gt; for dev anyway.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Redeploy
&lt;/h3&gt;

&lt;p&gt;An env var change does &lt;strong&gt;not&lt;/strong&gt; automatically trigger a deployment. You have to redeploy manually or the new value won't hit your running production.&lt;/p&gt;

&lt;p&gt;Deployments tab → latest production → &lt;code&gt;⋯&lt;/code&gt; → Redeploy. Don't reuse the build cache.&lt;/p&gt;

&lt;p&gt;Two minutes later, production is running the new key.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5: Verify it works
&lt;/h3&gt;

&lt;p&gt;Hit your endpoint with a test request. In my case, I did a real newsletter signup from the public site with a test email. Got a 200 back, saw the test contact in the email vendor's dashboard. New key live.&lt;/p&gt;

&lt;p&gt;If you skip this step and the new key has a typo or permission gap, you won't know until the next real user hits the broken endpoint.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 6: &lt;em&gt;Now&lt;/em&gt; revoke the old key
&lt;/h3&gt;

&lt;p&gt;Back at the provider. Revoke. Confirm.&lt;/p&gt;

&lt;p&gt;This is the step where the potential leak actually closes. Everything before was setup.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 7: Audit your other env vars
&lt;/h3&gt;

&lt;p&gt;While you're in there:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Mark everything sensitive that should be sensitive (retroactive fix)&lt;/li&gt;
&lt;li&gt;Delete any env vars you're no longer using&lt;/li&gt;
&lt;li&gt;Enable Deployment Protection if you haven't already&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The relief
&lt;/h2&gt;

&lt;p&gt;My service hasn't really reached anyone yet: a few subscribers, no paying users. If the leaked key had been exploited, the blast radius was… honestly, close to nothing.&lt;/p&gt;

&lt;p&gt;That's luck, not good practice.&lt;/p&gt;

&lt;p&gt;In a universe where this project was six months older and had real traffic, a compromised email API key could have meant someone sending spam from my verified domain, burning my sender reputation, and me finding out three days later when delivery rates cratered.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The time to mark env vars as sensitive is when you create them. Not after an incident.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I'm changing going forward
&lt;/h2&gt;

&lt;p&gt;Three things:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Sensitive flag is the default, not the exception
&lt;/h3&gt;

&lt;p&gt;Every new env var starts as Sensitive unless there's a specific reason it can't be (like needing it in the Development environment for local proxy testing).&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Secret rotation is a quarterly habit, not an emergency
&lt;/h3&gt;

&lt;p&gt;The value of rotating keys every 90 days isn't that you expect a breach. It's that if one happens, the window of exposure is bounded. Even loosely-stored keys can only do so much damage if they're less than 90 days old when leaked.&lt;/p&gt;

&lt;p&gt;I'm adding a calendar reminder. Four reminders a year, 10 minutes each. Trivial cost.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Third-party AI tools are supply chain dependencies now
&lt;/h3&gt;

&lt;p&gt;This breach didn't come from Vercel's security posture. It came from an AI tool one employee used. The attacker didn't need to crack Vercel — they just needed to compromise a vendor that had access to a Vercel employee's account.&lt;/p&gt;

&lt;p&gt;Every AI integration in your dev workflow has some access scope. It reads your repo, or your terminal, or your cloud console, or your credentials. That access is attack surface.&lt;/p&gt;

&lt;p&gt;I'm going to start writing down, in a flat text file, every AI tool I've authorized and with what scope. Not to get paranoid — just so I know what to rotate if one of them makes the news.&lt;/p&gt;

&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;I was in the affected subset. Found out via the dashboard tag, not email.&lt;/li&gt;
&lt;li&gt;10 minutes of cleanup: new key → update env (Sensitive ON) → redeploy → verify → revoke old.&lt;/li&gt;
&lt;li&gt;Lucky the project was small. In a larger context, this could have been bad.&lt;/li&gt;
&lt;li&gt;Going forward: Sensitive flag by default, quarterly rotation habit, AI tool access log.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Three cups of tea. No coffee. Still tired. Good day to log a lesson.&lt;/p&gt;

</description>
      <category>security</category>
      <category>vercel</category>
      <category>webdev</category>
      <category>devops</category>
    </item>
    <item>
      <title>5 Claude Code Workflows I Use Every Day (For the Boring 80%)</title>
      <dc:creator>Kairi</dc:creator>
      <pubDate>Mon, 20 Apr 2026 05:47:32 +0000</pubDate>
      <link>https://forem.com/kairi_outputs/5-claude-code-workflows-i-use-every-day-for-the-boring-80-5ak0</link>
      <guid>https://forem.com/kairi_outputs/5-claude-code-workflows-i-use-every-day-for-the-boring-80-5ak0</guid>
      <description>&lt;p&gt;People keep asking me what my Claude Code "prompts" are.&lt;/p&gt;

&lt;p&gt;I don't really have prompts. I have workflows. Small, boring, repeatable things that each save me 10-30 minutes, and at the end of a day add up to why I can keep shipping around a day job.&lt;/p&gt;

&lt;p&gt;Here are five I use almost every session.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. "Before you edit, grep."
&lt;/h2&gt;

&lt;p&gt;When I hand Claude Code a task that touches something broader than a single file — a rename, a pattern change, a refactor — I start the session with one directive:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Before you edit anything, grep for all occurrences of &lt;code&gt;X&lt;/code&gt;. Show me the list. I'll tell you which ones to change.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That single sentence has prevented more bad edits in my repos than any test suite. It forces the AI to &lt;strong&gt;enumerate the scope&lt;/strong&gt; before it takes action, and it gives me a natural "stop, this is too big" moment.&lt;/p&gt;

&lt;p&gt;The cost is 15 seconds. The upside is not finding out three hours later that it renamed something in a migration script.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. "Write the test first. Fail it. Then fix it."
&lt;/h2&gt;

&lt;p&gt;Not because I'm a TDD purist. I'm not. But AI is weirdly good at writing confident code that does the wrong thing, and tests are the cheapest way to catch that.&lt;/p&gt;

&lt;p&gt;My actual phrasing:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Write a failing test for the bug I just described. Run it. Confirm it fails for the right reason. Then write the fix. Then re-run.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The "run it. Confirm it fails for the right reason" is the key. Tests that pass by accident are worse than no tests — they give false confidence. Forcing the AI to watch its own red-to-green transition catches the subtle "test was wrong, code is actually broken" cases.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. "Explain, then diff."
&lt;/h2&gt;

&lt;p&gt;When I'm learning something new — a library API, a framework pattern, a language feature — I don't let Claude just write it for me.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Before writing code, explain what you're going to do in plain English. What calls what. What the state machine looks like. Then write the diff.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Without this, I end up with working code I don't understand, and understanding-debt compounds until I get burned on the second or third iteration.&lt;/p&gt;

&lt;p&gt;With it, I read three paragraphs and then review a diff. My comprehension stays current, and my edits are informed.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. "Grep my notes first."
&lt;/h2&gt;

&lt;p&gt;I keep a flat directory of &lt;code&gt;.md&lt;/code&gt; files with notes from past sessions. Problems I solved, patterns I chose, decisions I'd regret if I forgot. Not a fancy system. Just markdown.&lt;/p&gt;

&lt;p&gt;When I start a new session on a related topic:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Read &lt;code&gt;~/notes/&lt;/code&gt; (look for filenames matching the topic). Summarize what I've already decided before suggesting anything new.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This catches the "we already tried that and rejected it" case before I waste an hour rebuilding the rejected path. It also forces me to keep writing those notes, because now they have a feedback loop.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. "Pick one. Ask before the others."
&lt;/h2&gt;

&lt;p&gt;When I give Claude a list of 4-5 things to do, it used to just plow through all of them. Sometimes halfway through it would change its mind about #1 based on what it learned in #3, and I'd end up with an inconsistent mess.&lt;/p&gt;

&lt;p&gt;Now:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Do #1. Stop. Show me what you did. Wait for me to say continue.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Multi-step autonomy is great for well-scoped tasks. For fuzzy ones — anything exploratory, anything touching architectural decisions — forcing a checkpoint between steps keeps the work coherent.&lt;/p&gt;

&lt;p&gt;Opus 4.7 is better at self-checkpointing, but I still add this explicitly when the stakes are non-trivial.&lt;/p&gt;

&lt;h2&gt;
  
  
  The meta-workflow
&lt;/h2&gt;

&lt;p&gt;What these have in common:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cheap to run.&lt;/strong&gt; None of them take more than 30 seconds of me typing or 60 seconds of Claude thinking.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fail loud.&lt;/strong&gt; If they break, I know immediately, not three hours later.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compound.&lt;/strong&gt; Each one is worth a few minutes. Stack five of them and the day opens up.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Boring.&lt;/strong&gt; None of them are clever. They're just discipline encoded in a sentence.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I used to write long system prompts. I don't bother anymore. Five short workflows, applied ruthlessly, outperform any clever preamble.&lt;/p&gt;




&lt;p&gt;Three cups of tea, no coffee, still tired. Shipping anyway.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>claude</category>
      <category>productivity</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Claude Opus 4.7 Field Report: Eight Hours of Autonomous Work</title>
      <dc:creator>Kairi</dc:creator>
      <pubDate>Mon, 20 Apr 2026 00:53:14 +0000</pubDate>
      <link>https://forem.com/kairi_outputs/claude-opus-47-field-report-eight-hours-of-autonomous-work-10e3</link>
      <guid>https://forem.com/kairi_outputs/claude-opus-47-field-report-eight-hours-of-autonomous-work-10e3</guid>
      <description>&lt;p&gt;Anthropic released Claude Opus 4.7 yesterday. The headline feature is &lt;strong&gt;hours-long autonomous work&lt;/strong&gt; without constant course-correction.&lt;/p&gt;

&lt;p&gt;I put it to the test on something real. Here's the field report.&lt;/p&gt;

&lt;h2&gt;
  
  
  The task
&lt;/h2&gt;

&lt;p&gt;A side project of mine had a scheduled-post automation that had been broken for about a week. Posts were going out at wrong times. The root cause could have been timezone handling, API auth, queue state, or all three. I wasn't sure.&lt;/p&gt;

&lt;p&gt;I gave Opus 4.7 this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Find out why scheduled posts are firing at wrong times. Check the scheduler code, the API responses, the timezone handling in the storage layer, the queue state, and the actual fired-post logs. Fix the underlying issue. Don't just patch the symptom.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Then I went to do other things for about 8 hours.&lt;/p&gt;

&lt;h2&gt;
  
  
  What actually happened
&lt;/h2&gt;

&lt;p&gt;It didn't fix the bug in 8 hours. But it did something more interesting.&lt;/p&gt;

&lt;p&gt;It:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Reproduced the bug reliably.&lt;/li&gt;
&lt;li&gt;Instrumented 4 different layers of the stack with debug logs.&lt;/li&gt;
&lt;li&gt;Wrote a test harness that fired fake posts through the pipeline with controlled timestamps.&lt;/li&gt;
&lt;li&gt;Identified that the bug was actually &lt;strong&gt;two bugs, not one&lt;/strong&gt; — one in my storage layer (naive local timestamps), one in the scheduler (assumed UTC input).&lt;/li&gt;
&lt;li&gt;Proposed a fix for the storage layer that would require a one-time migration of existing data.&lt;/li&gt;
&lt;li&gt;Stopped there and asked me whether the migration was acceptable.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That last step is the thing. Earlier Claude releases would either:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Just charge ahead and migrate (scary)&lt;/li&gt;
&lt;li&gt;Stop way earlier and ask what to do (annoying)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This version &lt;strong&gt;did a long stretch of investigative work, correctly identified a decision that needed human input, and parked there&lt;/strong&gt;. That's the skill level I've been waiting for.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where it drifted
&lt;/h2&gt;

&lt;p&gt;Three places.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Scope creep on the test harness.&lt;/strong&gt;&lt;br&gt;
I asked for a fix. It wrote a fairly elaborate test suite as part of the investigation. Useful, but not what I asked for, and it ate context window.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. It over-explained the storage layer bug.&lt;/strong&gt;&lt;br&gt;
The explanation was correct but long. Three paragraphs when one sentence would have worked. Earlier releases were more concise.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. It assumed my time zone from context.&lt;/strong&gt;&lt;br&gt;
I hadn't told it what TZ I was in. It inferred it from some filename references. The guess was correct, but I'd rather be asked than guessed at. This is a tiny thing.&lt;/p&gt;

&lt;p&gt;None of these are blockers. They're polish issues.&lt;/p&gt;

&lt;h2&gt;
  
  
  What didn't drift
&lt;/h2&gt;

&lt;p&gt;The parts that mattered held up.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It stayed on task for the full session without me re-anchoring it.&lt;/li&gt;
&lt;li&gt;It didn't hallucinate API shapes — when it wasn't sure, it read actual responses.&lt;/li&gt;
&lt;li&gt;It wrote diffs I could read and approve line by line.&lt;/li&gt;
&lt;li&gt;It caught a genuinely subtle second bug that I had missed in my own debugging earlier.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What this actually changes
&lt;/h2&gt;

&lt;p&gt;For me, the jump from Opus 4.6 to 4.7 is the jump from &lt;em&gt;"pair programming assistant"&lt;/em&gt; to &lt;em&gt;"junior engineer with better taste than me on average, but who still needs to check in."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;That's a qualitative shift.&lt;/p&gt;

&lt;p&gt;Before 4.7, I would ask for scoped single-file tasks and stitch the results together. After 4.7, I can give multi-layer tasks that cross files, modules, and domains, and the output is coherent.&lt;/p&gt;

&lt;p&gt;The work I used to do — context stitching, keeping state across sessions, reminding the AI what we were doing — that work got cheaper this week.&lt;/p&gt;

&lt;h2&gt;
  
  
  What it still can't do
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Decide whether a product is worth building&lt;/li&gt;
&lt;li&gt;Decide what a feature should feel like&lt;/li&gt;
&lt;li&gt;Pick which bug is worth fixing first&lt;/li&gt;
&lt;li&gt;Explain to a user why their workflow is wrong&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The interesting part is still mine.&lt;/p&gt;

&lt;h2&gt;
  
  
  Practical takeaway
&lt;/h2&gt;

&lt;p&gt;If you're already using Claude Code or Claude through an editor:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Give it bigger tasks than you used to. It can handle them.&lt;/li&gt;
&lt;li&gt;Still ask it to stop before destructive actions. That behavior is there now, but it's still polite to scope it.&lt;/li&gt;
&lt;li&gt;Don't bother with "think step by step" — that's implicit at this level.&lt;/li&gt;
&lt;li&gt;Read the diffs. You'll learn things.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you weren't using Claude before: this is probably the release worth trying.&lt;/p&gt;




&lt;p&gt;Eight hours of autonomous work, two real bugs found, one fix migration decision parked for me. Net positive. Still tired.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>claude</category>
      <category>productivity</category>
      <category>webdev</category>
    </item>
    <item>
      <title>My AI-Heavy Indie Stack (Actually Honest Breakdown)</title>
      <dc:creator>Kairi</dc:creator>
      <pubDate>Mon, 20 Apr 2026 00:52:33 +0000</pubDate>
      <link>https://forem.com/kairi_outputs/my-ai-heavy-indie-stack-actually-honest-breakdown-2jim</link>
      <guid>https://forem.com/kairi_outputs/my-ai-heavy-indie-stack-actually-honest-breakdown-2jim</guid>
      <description>&lt;p&gt;A few weeks ago I posted a thread that ended with &lt;em&gt;"here's the setup, in honest detail:"&lt;/em&gt; — and then I didn't deliver the detail.&lt;/p&gt;

&lt;p&gt;That was bad form. Let me fix it.&lt;/p&gt;

&lt;p&gt;This is the actual stack I use to ship indie projects around a day job, with ADHD eating my executive function most days. It's not impressive. It's just relentless.&lt;/p&gt;

&lt;h2&gt;
  
  
  The bottleneck isn't what people think
&lt;/h2&gt;

&lt;p&gt;The thing AI changed for me isn't code quality. It's &lt;strong&gt;sequencing&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;I can think the whole product. I can describe it in three paragraphs. The break in my workflow happened between "I know what I want" and "here are the 40 micro-decisions required to start typing." That gap used to eat days.&lt;/p&gt;

&lt;p&gt;With AI handling the sequencing, the gap became minutes. I stopped losing evenings to paralysis.&lt;/p&gt;

&lt;p&gt;That's the whole value prop, for me. Everything below is just tooling around that insight.&lt;/p&gt;

&lt;h2&gt;
  
  
  The stack
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Primary driver: Claude Code
&lt;/h3&gt;

&lt;p&gt;Not Cursor, not Copilot. Claude Code in a terminal. Here's why:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Multi-file edits with intent&lt;/strong&gt;: "make this template work for a pricing page instead" and it touches 6 files correctly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Explains what it changed&lt;/strong&gt;: I read diffs and actually learn.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Headless, keyboard-only&lt;/strong&gt;: which matches how my attention works.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Cursor is great if your brain likes inline completions. Mine doesn't. Copilot is fine if your code is boring; the moment I go off-pattern, it fights me.&lt;/p&gt;

&lt;p&gt;The right AI tool is the one that matches your cognitive style, not the one with the best marketing.&lt;/p&gt;

&lt;h3&gt;
  
  
  The boring glue: bash
&lt;/h3&gt;

&lt;p&gt;About 30% of my time is Claude Code. Another 20% is tiny bash scripts that do things like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;rename 40 files from &lt;code&gt;post-v1.mdx&lt;/code&gt; to zero-padded &lt;code&gt;post-0001.mdx&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;download all images from a list, resize, optimize&lt;/li&gt;
&lt;li&gt;grep my notes for every TODO that's older than two weeks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I don't write these scripts; I ask for them, and they're done in 90 seconds. I keep a &lt;code&gt;bin/&lt;/code&gt; folder of one-liners I've forgotten writing.&lt;/p&gt;
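&lt;p&gt;The rename one is representative. A sketch (the filenames follow the example above; dry-run before trusting it):&lt;/p&gt;

```shell
#!/bin/sh
# pad_renames: rename post-vN.mdx to zero-padded post-000N.mdx in the
# current directory. Filenames are illustrative, matching the example
# above; set DRY_RUN=1 to print the plan instead of moving files.
pad_renames() {
  for f in post-v*.mdx; do
    [ -e "$f" ] || continue              # glob matched nothing
    n=${f#post-v}                        # strip prefix...
    n=${n%.mdx}                          # ...and suffix, leaving the number
    new=$(printf 'post-%04d.mdx' "$n")   # zero-pad to four digits
    if [ -n "$DRY_RUN" ]; then
      echo "$f -> $new"
    else
      mv -- "$f" "$new"
    fi
  done
}
```

&lt;p&gt;&lt;code&gt;DRY_RUN=1 pad_renames&lt;/code&gt; prints the plan; run it bare to actually move the files.&lt;/p&gt;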

&lt;h3&gt;
  
  
  Task memory: sqlite at &lt;code&gt;~/.local/tasks.db&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;One table. &lt;code&gt;id, what, status, updated_at&lt;/code&gt;. That's it.&lt;/p&gt;

&lt;p&gt;When I start a session I run &lt;code&gt;tasks wip&lt;/code&gt; and it tells me what I was mid-way through. When I shelve something I run &lt;code&gt;tasks park &amp;lt;what&amp;gt;&lt;/code&gt;. It's a 30-line Python script wrapping sqlite3.&lt;/p&gt;
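&lt;p&gt;A minimal sketch of the idea, assuming only the columns above (the real script is the same thing plus a tiny argv wrapper):&lt;/p&gt;

```python
import sqlite3
import time

def connect(path="tasks.db"):  # mine lives at ~/.local/tasks.db
    db = sqlite3.connect(path)
    db.execute(
        "CREATE TABLE IF NOT EXISTS tasks ("
        "id INTEGER PRIMARY KEY, "
        "what TEXT NOT NULL, "
        "status TEXT NOT NULL DEFAULT 'wip', "
        "updated_at REAL NOT NULL)"
    )
    return db

def add(db, what):
    db.execute(
        "INSERT INTO tasks (what, status, updated_at) VALUES (?, 'wip', ?)",
        (what, time.time()),
    )
    db.commit()

def wip(db):
    """'tasks wip': what was I mid-way through?"""
    rows = db.execute(
        "SELECT what FROM tasks WHERE status = 'wip' ORDER BY updated_at DESC"
    )
    return [what for (what,) in rows]

def park(db, what):
    """'tasks park': shelve something without losing it."""
    db.execute(
        "UPDATE tasks SET status = 'parked', updated_at = ? WHERE what = ?",
        (time.time(), what),
    )
    db.commit()
```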

&lt;p&gt;This one tool has done more for my shipping velocity than any "productivity app." Because I can't hold state in my head, and a database can.&lt;/p&gt;

&lt;h3&gt;
  
  
  Surface: dark mode everything, terminal-first
&lt;/h3&gt;

&lt;p&gt;I don't have a strong opinion on editors. I use VS Code because it opens. But the surface I actually &lt;em&gt;work&lt;/em&gt; in is the terminal: tmux + a vim split + Claude Code running in another pane.&lt;/p&gt;

&lt;p&gt;Light mode is a tax on my eyes. Coffee is a tax on my ADHD. I optimize around not paying taxes I don't need to pay.&lt;/p&gt;

&lt;h3&gt;
  
  
  Fuel: tea, not coffee
&lt;/h3&gt;

&lt;p&gt;This isn't a quirky brand — it's a real choice. Caffeine amplifies my ADHD symptoms. Coffee gives me 90 minutes of rocket and then 6 hours of fried circuits.&lt;/p&gt;

&lt;p&gt;Tea, specifically black tea with milk, gives me enough alertness without the crash. Three cups a day, stop by 4pm, sleep is protected.&lt;/p&gt;

&lt;p&gt;The whole indie workflow collapses if sleep breaks. Everything else is negotiable.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the stack isn't
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;It isn't a framework.&lt;/strong&gt; Nobody else should copy this exactly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It isn't fast.&lt;/strong&gt; Anyone watching me work would think I'm slow. I stop constantly to check what I did. I talk to myself out loud. I forget.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It isn't about discipline.&lt;/strong&gt; Willpower is the first thing to go. The stack exists so willpower isn't in the loop.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It isn't AI doing the work.&lt;/strong&gt; The AI handles sequencing. I still make every decision that matters — which product to build, what the thing is for, who it's for. That part hasn't gotten cheaper.&lt;/p&gt;

&lt;h2&gt;
  
  
  What changed
&lt;/h2&gt;

&lt;p&gt;Before: evenings and weekends were a coin flip between "ship something" and "stare at the terminal for 3 hours and give up."&lt;/p&gt;

&lt;p&gt;After: evenings and weekends are a coin flip between "ship something" and "ship something smaller." The terminal no longer stares back.&lt;/p&gt;

&lt;p&gt;That's it. No 10x, no "unlocked creativity." Just the gap between idea and execution got cheap, and now I can afford to try things.&lt;/p&gt;

&lt;h2&gt;
  
  
  The part the stack can't do
&lt;/h2&gt;

&lt;p&gt;Distribution.&lt;/p&gt;

&lt;p&gt;You can build three templates in a weekend with this setup. Finding anyone who cares about them is a completely different skill, and I'm bad at it. I'm learning. Slowly.&lt;/p&gt;

&lt;p&gt;That's tomorrow's post.&lt;/p&gt;




&lt;p&gt;Nothing clever. Just relentless reduction of the parts that burn me out.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>indiehackers</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Claude Code as Executive Function: My ADHD-Brain Setup</title>
      <dc:creator>Kairi</dc:creator>
      <pubDate>Sun, 19 Apr 2026 10:46:10 +0000</pubDate>
      <link>https://forem.com/kairi_outputs/claude-code-as-executive-function-my-adhd-brain-setup-1412</link>
      <guid>https://forem.com/kairi_outputs/claude-code-as-executive-function-my-adhd-brain-setup-1412</guid>
      <description>&lt;p&gt;I have ADHD. Verbal skill outpaces my motor planning by a lot — the gap measured on neuropsych testing is around forty points. That sounds abstract, but in practice it means I can think through a problem in full, narrate the solution out loud, and then sit down at my desk and watch the plan evaporate before any of it becomes code.&lt;/p&gt;

&lt;p&gt;For years I treated this as a discipline problem. More lists. Better systems. Time blocks. It never stuck, because the bottleneck was never motivation. The bottleneck was the bridge between &lt;em&gt;knowing what to do&lt;/em&gt; and &lt;em&gt;actually doing it&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Then I started using Claude Code seriously, and the bottleneck got cheap.&lt;/p&gt;

&lt;p&gt;This isn't a productivity-hack post. It's an honest writeup of the setup I use to get things shipped on days my brain would otherwise lose them. If you have ADHD, or you just find yourself with too many ideas and not enough execution, some of this might transfer.&lt;/p&gt;

&lt;h2&gt;
  
  
  The gap
&lt;/h2&gt;

&lt;p&gt;Non-ADHD folks I've asked describe the task-to-action flow like this: "I think of what to do, then I do it." There's no perceptible step in between.&lt;/p&gt;

&lt;p&gt;My version has a perceptible step, and it's expensive. A task gets thought of, sits in working memory, waits for motor planning to convert it into action, and — most days — quietly falls out of working memory before anything happens. The more complex the action (switching contexts, opening the right file, remembering the exact syntax), the more it costs, the more likely it drops.&lt;/p&gt;

&lt;p&gt;I can reliably do small things. Big things require the brain to chain small things without dropping any. That chaining is the part that's broken.&lt;/p&gt;

&lt;p&gt;Traditional productivity advice is about keeping more things in working memory or writing better lists. Neither fixes the gap — they just give the gap a bigger surface to fail on.&lt;/p&gt;

&lt;h2&gt;
  
  
  What actually helped
&lt;/h2&gt;

&lt;p&gt;What helped was outsourcing the chain.&lt;/p&gt;

&lt;p&gt;Not the &lt;em&gt;decisions&lt;/em&gt;. Not the &lt;em&gt;judgment&lt;/em&gt;. Those are still mine, and should stay mine. What I outsourced is the part where "I decided to refactor this file" has to become "I now remember the file path, open the editor, navigate to the function, remember the rename convention, write the new code, verify it compiles, commit with a good message."&lt;/p&gt;

&lt;p&gt;Every step in that chain is a place where my brain drops the thread. Claude Code handles the chain. I handle the decisions.&lt;/p&gt;

&lt;p&gt;The shift feels like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Old: &lt;em&gt;think → try to chain → lose thread → frustrate → give up or half-ship&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;New: &lt;em&gt;think → say what I want → review the diff → ship&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I didn't get smarter. I got a prosthetic.&lt;/p&gt;

&lt;h2&gt;
  
  
  My setup, in practice
&lt;/h2&gt;

&lt;p&gt;Most of the setup is just making Claude Code remember things I'd otherwise re-explain five times. ADHD brains are especially punishing when re-explaining is required — the overhead compounds. The setup reduces re-explanation to near zero.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. &lt;code&gt;CLAUDE.md&lt;/code&gt; — the project's memory
&lt;/h3&gt;

&lt;p&gt;Every repo I use Claude Code in has a &lt;code&gt;CLAUDE.md&lt;/code&gt; at the root. It holds project-level conventions, voice, tech stack, and anything I'd otherwise forget to say.&lt;/p&gt;

&lt;p&gt;For my indie side project, it contains:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Who I am in this project (an indie builder with specific voice)&lt;/li&gt;
&lt;li&gt;Tech stack (Next.js 16, Tailwind v4, TypeScript strict, Framer Motion)&lt;/li&gt;
&lt;li&gt;What's live vs. what's pending&lt;/li&gt;
&lt;li&gt;Token-efficiency preferences (don't overexplain, just do)&lt;/li&gt;
&lt;li&gt;Security rules (never commit PII, grep certain patterns)&lt;/li&gt;
&lt;li&gt;Key decisions already made (so I don't get second-guessed)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Claude Code reads this file at the start of every session. I don't have to re-establish context. The AI already knows how I want to work.&lt;/p&gt;

&lt;p&gt;This is huge for ADHD. Sessions are fragmented by design — I'll pick up at weird hours, in partial states of exhaustion. Without &lt;code&gt;CLAUDE.md&lt;/code&gt;, every session starts from zero. With it, every session starts where the last one ended.&lt;/p&gt;
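&lt;p&gt;A trimmed-down sketch of the shape, with contents invented for illustration rather than copied from my real file:&lt;/p&gt;

```markdown
# CLAUDE.md

## Who
Indie builder. Terse, concrete voice. No hype words.

## Stack
Next.js 16, Tailwind v4, TypeScript strict, Framer Motion.

## State
Live: landing page, blog. Pending: pricing page.

## How to work
- Don't overexplain. Just do.
- Never commit PII. Grep for names and emails before committing.
- Decisions already made: static export, no CMS. Don't reopen them.
```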

&lt;h3&gt;
  
  
  2. &lt;code&gt;.claude/rules/&lt;/code&gt; — doctrine files it re-reads every session
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;CLAUDE.md&lt;/code&gt; handles the what. &lt;code&gt;.claude/rules/&lt;/code&gt; handles the how.&lt;/p&gt;

&lt;p&gt;Mine has three files right now:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;content-doctrine.md&lt;/code&gt; — voice, tone, overclaim guard, native-English checks&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;privacy-security.md&lt;/code&gt; — PII that must never appear, secrets management&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;kai-persona.md&lt;/code&gt; — the persona I use for my public writing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These aren't tutorials I read — they're instructions Claude Code follows. Each one has a pre-publish checklist, a never-list, and specific patterns to grep for before anything ships.&lt;/p&gt;

&lt;p&gt;The practical effect: I don't have to remember the rules. They're enforced automatically.&lt;/p&gt;

&lt;p&gt;When I'm tired and about to commit something questionable, the rules catch it. When I forget my own voice mid-article, the doctrine catches it. The rules encode "what past-me already decided" so current-me doesn't have to re-decide it at 2am.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Slash commands for the stuff I'd otherwise re-explain daily
&lt;/h3&gt;

&lt;p&gt;I built a handful of commands — markdown files in &lt;code&gt;.claude/commands/&lt;/code&gt; — for things I do over and over:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;/morning&lt;/code&gt; — pulls world and tech trends, writes them to a daily feed file, then drafts that day's posts for my public account&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/sync&lt;/code&gt; — updates state files after a decision shift&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/resume&lt;/code&gt; — one-shot context summary if I jump between sessions&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/learn&lt;/code&gt; — records a post's reaction so I can find patterns later&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/knowledge&lt;/code&gt; — loads a topic's knowledge file only when needed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each command is a markdown prompt. When I type &lt;code&gt;/morning&lt;/code&gt;, Claude Code runs it exactly the same way every day. No re-explaining. No variation. No lost thread.&lt;/p&gt;

&lt;p&gt;ADHD's enemy is decision fatigue. Slash commands make daily decisions invisible. I don't wonder "what do I do now?" — I just run &lt;code&gt;/morning&lt;/code&gt; and the work starts.&lt;/p&gt;
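&lt;p&gt;These files are nothing fancy. A hypothetical &lt;code&gt;/resume&lt;/code&gt; could be as small as:&lt;/p&gt;

```markdown
# .claude/commands/resume.md

Read CLAUDE.md and the state files, then give me a one-paragraph
summary of: what I was working on, what's blocked, and the single
next action. No preamble, no options, just the summary.
```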

&lt;h3&gt;
  
  
  4. Pre-commit hooks — protection from my own lapses
&lt;/h3&gt;

&lt;p&gt;I have a pre-commit hook that blocks any commit containing my personal name, personal emails, specific location markers, or API-key-shaped strings.&lt;/p&gt;

&lt;p&gt;This is belt-and-suspenders for a very real ADHD failure mode: forgetting to check. On exhausted days I'll paste something into a file I don't remember adding. The hook catches it before it becomes a git history problem.&lt;/p&gt;

&lt;p&gt;The hook is maybe 60 lines of shell. It's the cheapest line of defense I've ever added. Twice this week it caught leaks I didn't notice.&lt;/p&gt;
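&lt;p&gt;Mine is shell, but the idea fits in a few lines of anything. Here's the same check sketched in Python, with placeholder patterns; swap in your own name, emails, and key shapes:&lt;/p&gt;

```python
import re
import sys

# Placeholder patterns, not my real ones.
BLOCKLIST = [
    ("personal name", re.compile(r"jane example", re.IGNORECASE)),
    ("personal email", re.compile(r"jane@example\.com")),
    ("api key", re.compile(r"sk-[A-Za-z0-9]{20,}")),
]

def leaks(text):
    """Return the label of every blocklist pattern found in text."""
    return [label for label, pattern in BLOCKLIST if pattern.search(text)]

def check_or_die(staged_text):
    """The hook calls this with the output of 'git diff --cached'."""
    found = leaks(staged_text)
    if found:
        print("commit blocked, staged changes contain:", ", ".join(found))
        sys.exit(1)

print(leaks("token = sk-abcdefghijklmnopqrstuv"))  # ['api key']
```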

&lt;h2&gt;
  
  
  What this changed
&lt;/h2&gt;

&lt;p&gt;The shipping numbers changed first. I used to start three things and finish one. In an average week now, I start two things and finish both.&lt;/p&gt;

&lt;p&gt;But the bigger change is less visible. I stopped dreading switching contexts. Before the setup, every context switch cost me a 15-minute ramp-up to remember where I was. Now the state files tell me exactly where I was — Claude Code re-reads them, summarizes in ten seconds, and I'm working.&lt;/p&gt;

&lt;p&gt;The emotional tax of ADHD dev work is usually in the ramp-up, not the work itself. Killing the ramp-up killed most of the tax.&lt;/p&gt;

&lt;p&gt;The other thing that changed: I stopped being mad at myself. Setups like this make the assumption that your brain &lt;em&gt;will&lt;/em&gt; drop things — and that's the right assumption for my brain. When I miss something, it's the tool's job to catch it, not my job to have remembered. That reframing, weirdly, is what made the setup stick.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this &lt;em&gt;didn't&lt;/em&gt; change
&lt;/h2&gt;

&lt;p&gt;It didn't cure ADHD. Nothing does.&lt;/p&gt;

&lt;p&gt;On bad days I still spiral. The setup shortens the spiral — I can pick up with less context cost — but it doesn't prevent it.&lt;/p&gt;

&lt;p&gt;It also didn't make me smarter. Judgment is still mine. Claude Code executes; I decide what's worth executing. When I decide badly, the tool executes my bad decision quickly. A faster bad idea is still bad.&lt;/p&gt;

&lt;p&gt;And it didn't eliminate the work. Writing good prompts is its own skill. Writing good specs is harder than writing good code. The execution bottleneck moved. It didn't disappear.&lt;/p&gt;

&lt;p&gt;But moving the bottleneck from &lt;em&gt;motor planning&lt;/em&gt; to &lt;em&gt;judgment and prompt craft&lt;/em&gt; is, for my specific brain, a massive upgrade. Motor planning was the broken step. Judgment and craft are steps I can actually practice and get better at.&lt;/p&gt;

&lt;h2&gt;
  
  
  If you want to try this
&lt;/h2&gt;

&lt;p&gt;Start with the smallest possible version. &lt;code&gt;CLAUDE.md&lt;/code&gt; in your project root, with just the basics — tech stack, voice, and whatever you'd otherwise re-explain every session. That alone is maybe 70% of the value.&lt;/p&gt;

&lt;p&gt;Add slash commands for two things you do daily. That's another 20%.&lt;/p&gt;

&lt;p&gt;Rules files and pre-commit hooks come later, when you have a specific pattern you keep failing on.&lt;/p&gt;

&lt;p&gt;The goal isn't to build the perfect prosthetic. It's to notice where your brain drops the thread, and put one small piece of infrastructure in that exact spot. Then notice the next drop. Repeat.&lt;/p&gt;

&lt;p&gt;For me, after a few months of this, I ship things I'd have started and abandoned a year ago. Not because my brain got better. Because the things that don't survive the chain are no longer allowed to matter.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I write about AI-heavy indie workflow, ADHD-adjacent dev patterns, and corporate automation from someone who can't use the good AI tools at their day job. If you want more of this, my newsletter ships weekly — &lt;a href="https://kaistack.beehiiv.com" rel="noopener noreferrer"&gt;subscribe here&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>adhd</category>
      <category>devrel</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
