<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Mei Park</title>
    <description>The latest articles on Forem by Mei Park (@meimakes).</description>
    <link>https://forem.com/meimakes</link>
    
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/meimakes"/>
    <language>en</language>
    <item>
      <title>The Naptime Startup: Real Math for Parent Founders</title>
      <dc:creator>Mei Park</dc:creator>
      <pubDate>Sat, 11 Apr 2026 13:26:14 +0000</pubDate>
      <link>https://forem.com/meimakes/the-naptime-startup-real-math-for-parent-founders-1mna</link>
      <guid>https://forem.com/meimakes/the-naptime-startup-real-math-for-parent-founders-1mna</guid>
      <description>&lt;p&gt;There’s a genre of founder content that doesn’t apply to us. The one where someone quits their job, gets a MacBook, and ships a SaaS from a coffee shop in Lisbon. The 4-hour workweek, remixed for the AI era. Build fast, ship faster, iterate fastest.&lt;/p&gt;

&lt;p&gt;We have a different constraint set. My co-founder is three, doesn’t nap anymore, and just learned that the letter combination S-T-O-P spells a word he can yell at maximum volume. My office is the kitchen counter (while my toddler snacks). My sprint window is the gap between bedtime and when I physically cannot stay awake.&lt;/p&gt;

&lt;p&gt;Here’s the thing nobody tells you: those constraints aren’t a disadvantage. They’re a filter. They force you to build the right way.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The time budget (it’s enough)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A stay-at-home parent with one child, no naps, and no regular childcare has approximately this much daily availability for focused work:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Before kid wakes up:&lt;/strong&gt; 0–90 minutes (depends on your alarm discipline and their sleep schedule)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;During independent play:&lt;/strong&gt; 15–45 minutes (fragmented, interruptible)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;After bedtime:&lt;/strong&gt; 90–180 minutes (your only reliable block)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Total: 2–5 hours per day. But “hours” is misleading. Context-switching between parenting and deep work has a cognitive cost that research consistently pegs at &lt;a href="https://ics.uci.edu/~gmark/chi08-mark.pdf" rel="noopener noreferrer"&gt;23 minutes to regain focus&lt;/a&gt; (Gloria Mark, UCI). So your 25-minute play break isn’t 25 productive minutes. It’s ramp-up time plus the shallow work you can likely get done in the remaining 2 minutes.&lt;/p&gt;

&lt;p&gt;Your real number: &lt;strong&gt;90–180 minutes of quality focus time per day.&lt;/strong&gt; Some days less. Some days zero. Sick days, bad sleep nights, developmental leaps — these all eat into a budget that was already lean.&lt;/p&gt;

&lt;p&gt;Here’s why that’s still enough: 90 minutes a day, compounded over a year, is &lt;strong&gt;540 hours.&lt;/strong&gt; A typical solo founder without children has 6–10 focused hours daily — but they also spend a lot of those hours on the wrong things. You can’t afford that luxury, which means every hour you spend is deliberate. Constraints create clarity.&lt;/p&gt;

&lt;p&gt;540 hours is enough to write a book. Build a product line. Launch a newsletter. Establish a real revenue stream. That’s not just theory — I’ve done all of those in the last six months alone.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The childcare question&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The obvious lever: hire childcare, buy more time. The honest challenge: childcare costs money you might not have yet.&lt;/p&gt;

&lt;p&gt;Average US childcare in 2026: &lt;a href="https://worldpopulationreview.com/state-rankings/child-care-costs-by-state" rel="noopener noreferrer"&gt;$24,243/year in DC&lt;/a&gt;, $12,000–$18,000 in most metros. Part-time (3 mornings a week) runs $500–$800/month. The &lt;a href="https://www.care.com/c/how-much-does-child-care-cost/" rel="noopener noreferrer"&gt;Care.com 2026 Cost of Care Report&lt;/a&gt; found parents spend 20% of household income on childcare — nearly triple what HHS considers affordable.&lt;/p&gt;

&lt;p&gt;The math: to justify $600/month from business revenue, you’d need roughly &lt;strong&gt;$9,000 in gross sales&lt;/strong&gt; per year on Gumroad (after fees and processing). On a $29 product, that’s about &lt;strong&gt;26 units per month&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Most digital products don’t hit that number in year one. So what do you do?&lt;/p&gt;

&lt;p&gt;You bootstrap through the gap. You build the first product in those 90-minute windows. You ship it before the childcare math makes sense. And then, if you want to, you use early revenue to buy back time incrementally — a mother’s helper two mornings a week, a swap with another parent, a few hours of drop-in care. The gap is real, but it’s temporary. The product you build during the gap is what closes it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo9vkpa3qmw8d7sh85pv2.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo9vkpa3qmw8d7sh85pv2.jpeg" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Why “hustle harder” is the wrong advice&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The “build while your kids sleep” advice works for a sprint — ship something in two weeks of late nights. Late nights don’t work as a lifestyle. Chronic sleep deprivation degrades decision-making to a measurable degree — &lt;a href="https://pubmed.ncbi.nlm.nih.gov/10984335/" rel="noopener noreferrer"&gt;17–19 hours without sleep equals a BAC of 0.05%&lt;/a&gt; (Williamson &amp;amp; Feyer, 2000). Impaired founders ship products with bugs, copy with typos, and pricing mistakes they don’t catch.&lt;/p&gt;

&lt;p&gt;The smarter move: protect your 90-minute window like it’s sacred. Don’t expand your hours. Expand what you accomplish in them with clear goals and strict scope. The constraint isn’t your enemy — the temptation to fight it is.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The business model that fits&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;If your daily focus window is 2 hours, you need a model that matches. Some models fight your schedule:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Client services&lt;/strong&gt; (freelance, consulting): Requires synchronous availability and responsive communication. Incompatible with all-day childrearing and unpredictable days.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;SaaS with support obligations&lt;/strong&gt; : Uptime, bug reports, feature requests — all on someone else’s timeline.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Content that requires daily posting&lt;/strong&gt; : Unless you’re disciplined with batching in advance, the algorithm rewards consistency your schedule can’t guarantee.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Some models work with it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Digital products with zero marginal cost&lt;/strong&gt; : Ebooks, templates, courses, prompt packs. Build once, sell forever. No inventory, no fulfillment, no schedule.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Async content&lt;/strong&gt; : Newsletters on a weekly cadence you control. Not daily — weekly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Tools with minimal support&lt;/strong&gt; : Open-source with community maintenance. Paid add-ons on platforms that handle distribution.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The operating principle: &lt;strong&gt;your work and your revenue need to be decoupled in time.&lt;/strong&gt; You do the work at 10pm. Someone buys at 3pm the next day while you’re at the playground. If the business requires you to be present when the customer is, pick a different model.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The identity evolution&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This is the part that has nothing to do with math and everything to do with whether you keep going.&lt;/p&gt;

&lt;p&gt;You used to be an engineer. Or a designer. Or a PM. You had a title, a team, a salary. People knew what you did.&lt;/p&gt;

&lt;p&gt;Now you’re someone who makes peanut butter sandwiches with the crusts cut off and occasionally opens a laptop after 8pm. The temptation is to prove you still have it — over-engineer a SaaS, build something complex, show the market you haven’t gone soft.&lt;/p&gt;

&lt;p&gt;The ego project is the most expensive mistake a parent founder can make, because it consumes your scarcest resource — focus time — on something the market didn’t ask for.&lt;/p&gt;

&lt;p&gt;The businesses that work for parents are usually satisfyingly simple. An ebook, not a platform. A template, not a framework. A curated resource, not a custom tool. Simple ships faster. Simple needs less maintenance. Simple survives the weeks when your kid has a stomach bug and you haven’t opened your laptop in five days.&lt;/p&gt;

&lt;p&gt;And here’s what the identity crisis misses: you didn’t lose your skills. You gained a constraint that makes you a sharper builder. The person who can ship a product in 90-minute increments between bedtime and exhaustion is a more disciplined operator than someone with unlimited runway and no urgency.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6dl8044trwnqzz8qpnr2.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6dl8044trwnqzz8qpnr2.jpeg" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The unfair advantages&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The constraints are real. They’re also an edge.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You have no exit pressure.&lt;/strong&gt; No investors, no runway, no board meetings. If your product makes $500/month and that covers groceries, it’s working. You can iterate for years at a pace that would get a VC-backed founder fired. Time horizon is your moat.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You’re battle-tested.&lt;/strong&gt; Project management with a toddler is project management under uncertainty — no sprint planning, no tickets, and a stakeholder who changes requirements every 30 seconds. If you can ship under those conditions, you can ship under any conditions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Your story resonates.&lt;/strong&gt; The market is full of polished founders with perfect launches. A parent who built something real in the margins of a chaotic life? That’s a story people root for, share, and buy from.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The bottom line&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Building a business as a stay-at-home parent is slower than the inspiration posts suggest. The math is real. The time is limited.&lt;/p&gt;

&lt;p&gt;But 540 hours a year, spent deliberately, compounds into something significant. Not a startup. Not a unicorn. A business that works on your terms, at your pace, that doesn’t require you to choose between building something and being present for the person you’re building it for.&lt;/p&gt;

&lt;p&gt;Originally published on &lt;a href="https://www.raisingpixels.dev/subscribe?" rel="noopener noreferrer"&gt;Raising Pixels&lt;/a&gt; for parents who build.&lt;/p&gt;

</description>
      <category>parenting</category>
      <category>webdev</category>
      <category>productivity</category>
      <category>programming</category>
    </item>
    <item>
      <title>2 Free Tools That Solve the Biggest Problem for Parent Developers</title>
      <dc:creator>Mei Park</dc:creator>
      <pubDate>Sat, 11 Apr 2026 12:38:03 +0000</pubDate>
      <link>https://forem.com/meimakes/2-free-tools-that-solve-the-biggest-problem-for-parent-developers-mon</link>
      <guid>https://forem.com/meimakes/2-free-tools-that-solve-the-biggest-problem-for-parent-developers-mon</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;10-15 Minutes Lost Before You Write a Single Line&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Parent developers lose an average of 10-15 minutes per coding session just remembering what they were working on. Over a week of fragmented sessions, that’s 1-2 hours of productive coding time lost to context recovery alone. Two free tools—&lt;strong&gt;tmux&lt;/strong&gt; (a terminal multiplexer) and &lt;strong&gt;AI-powered context scripts&lt;/strong&gt;—reduce that ramp-up time to under 2 minutes, turning even a 20-minute window into real progress.&lt;/p&gt;

&lt;p&gt;The problem isn’t speed. Aliases save keystrokes. Shortcuts save clicks. But neither can tell you why there’s a half-finished function called &lt;code&gt;addMissingContext()&lt;/code&gt; or what that TODO comment about “ask someone smarter than me tomorrow” was supposed to refer to. Parent developers don’t need faster typing—they need &lt;strong&gt;context recovery systems&lt;/strong&gt; that bridge the gap between sessions separated by days of sick kids, work deadlines, and birthday parties that somehow require three trips to Target.&lt;/p&gt;

&lt;p&gt;After testing dozens of productivity tools and workflows, these are the only two that consistently move the needle for coding in fragmented time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!ZoSp!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86299d6b-466c-4e6b-8747-bcda7372d3d3_1456x816.jpeg" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftyvwq12yoj6369pqpje6.jpeg" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;tmux: Your Session Survives Everything (Including Your Kid on the Trackpad)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;tmux keeps your terminal sessions alive even when you close your laptop, restart, or accidentally kill Terminal because your kid was “helping” with the trackpad.&lt;/strong&gt; It’s a terminal multiplexer that creates persistent “workspaces” that survive disconnections, crashes, and the chaos of parent life.&lt;/p&gt;

&lt;p&gt;Without tmux, every coding session starts with: open terminal, navigate to project, start the dev server, open the right files, remember which browser tab had &lt;code&gt;localhost:3000&lt;/code&gt;. That’s 5-7 minutes gone. With tmux, you type one alias and you’re back exactly where you left off—even if it’s been a week.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;The Setup&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Here’s a tmux config optimized for parent developers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight conf"&gt;&lt;code&gt;&lt;span class="c"&gt;# ~/.tmux.conf
&lt;/span&gt;
&lt;span class="n"&gt;set&lt;/span&gt; -&lt;span class="n"&gt;g&lt;/span&gt; &lt;span class="n"&gt;prefix&lt;/span&gt; &lt;span class="n"&gt;C&lt;/span&gt;-&lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="c"&gt;# Easier to reach than default C-b
&lt;/span&gt;
&lt;span class="n"&gt;unbind&lt;/span&gt; &lt;span class="n"&gt;C&lt;/span&gt;-&lt;span class="n"&gt;b&lt;/span&gt;

&lt;span class="n"&gt;bind&lt;/span&gt; &lt;span class="n"&gt;C&lt;/span&gt;-&lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="n"&gt;send&lt;/span&gt;-&lt;span class="n"&gt;prefix&lt;/span&gt;

&lt;span class="c"&gt;# Mouse support (essential when you’re tired)
&lt;/span&gt;
&lt;span class="n"&gt;set&lt;/span&gt; -&lt;span class="n"&gt;g&lt;/span&gt; &lt;span class="n"&gt;mouse&lt;/span&gt; &lt;span class="n"&gt;on&lt;/span&gt;

&lt;span class="c"&gt;# Show which project you’re in
&lt;/span&gt;
&lt;span class="n"&gt;set&lt;/span&gt; -&lt;span class="n"&gt;g&lt;/span&gt; &lt;span class="n"&gt;status&lt;/span&gt;-&lt;span class="n"&gt;left&lt;/span&gt; “&lt;span class="c"&gt;#[fg=green][#S] “
&lt;/span&gt;
&lt;span class="n"&gt;And&lt;/span&gt; &lt;span class="n"&gt;these&lt;/span&gt; &lt;span class="n"&gt;three&lt;/span&gt; &lt;span class="n"&gt;aliases&lt;/span&gt; &lt;span class="n"&gt;that&lt;/span&gt; &lt;span class="n"&gt;make&lt;/span&gt; &lt;span class="n"&gt;tmux&lt;/span&gt; &lt;span class="n"&gt;feel&lt;/span&gt; &lt;span class="n"&gt;natural&lt;/span&gt;:

&lt;span class="n"&gt;alias&lt;/span&gt; &lt;span class="n"&gt;twork&lt;/span&gt;=”&lt;span class="n"&gt;tmux&lt;/span&gt; &lt;span class="n"&gt;attach&lt;/span&gt; -&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="n"&gt;work&lt;/span&gt;”

&lt;span class="n"&gt;alias&lt;/span&gt; &lt;span class="n"&gt;tblog&lt;/span&gt;=”&lt;span class="n"&gt;tmux&lt;/span&gt; &lt;span class="n"&gt;attach&lt;/span&gt; -&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="n"&gt;blog&lt;/span&gt;”

&lt;span class="n"&gt;alias&lt;/span&gt; &lt;span class="n"&gt;tfam&lt;/span&gt;=”&lt;span class="n"&gt;tmux&lt;/span&gt; &lt;span class="n"&gt;attach&lt;/span&gt; -&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="n"&gt;family&lt;/span&gt;-&lt;span class="n"&gt;projects&lt;/span&gt;”
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;One Command, Right Back Where You Left Off&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Every project gets its own tmux session. When I start working on my blog, I type tblog and I’m back in my development environment exactly where I left off, even if it’s been a week.&lt;/p&gt;

&lt;p&gt;The first time you create a session for a project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Navigate to your project and create a named session&lt;/span&gt;

&lt;span class="nb"&gt;cd&lt;/span&gt; ~/blog

tmux new-session &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; blog

hugo server &lt;span class="nt"&gt;-D&lt;/span&gt; &lt;span class="c"&gt;# Start your dev server&lt;/span&gt;

&lt;span class="c"&gt;# Open another terminal window/tab and attach&lt;/span&gt;

tmux attach &lt;span class="nt"&gt;-t&lt;/span&gt; blog
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Real-world example:&lt;/strong&gt; Last Tuesday, my laptop died mid-deploy (I ignored the battery warning because I was “almost done”). After restarting, I typed tblog and my terminal environment was exactly as I’d left it. The deploy had even completed successfully in the background. That’s not just time saved—it’s &lt;strong&gt;confidence&lt;/strong&gt; that you can pick up any project instantly.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Setting Up Your Sessions&lt;/strong&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create sessions for your main projects&lt;/span&gt;

tmux new-session &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; work

tmux new-session &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; blog

tmux new-session &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; family-projects

&lt;span class="c"&gt;# Then use the aliases to jump between them&lt;/span&gt;

tblog &lt;span class="c"&gt;# Attach to blog session&lt;/span&gt;

twork &lt;span class="c"&gt;# Attach to work session&lt;/span&gt;

tfam &lt;span class="c"&gt;# Attach to family projects session&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each session maintains its own windows, working directories, and running processes.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;AI Context Recovery: A 30-Second Briefing Instead of 15 Minutes of Staring&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;AI-powered context recovery uses your git history, file changes, and TODOs to reconstruct what you were working onâ€”replacing the 10-15 minutes of “staring at your own code trying to remember” with a 30-second briefing.&lt;/strong&gt; This is the second tool that transforms fragmented coding from frustrating to productive.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;The Context Recovery Script&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;One script gathers all the breadcrumbs from your recent work:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;

~/scripts/context.sh

&lt;span class="nb"&gt;echo&lt;/span&gt; “🔍 What was I working on?”
&lt;span class="nb"&gt;echo&lt;/span&gt; “”

&lt;span class="nb"&gt;echo&lt;/span&gt; “📝 Recent commits:”
git &lt;span class="nt"&gt;--no-pager&lt;/span&gt; log &lt;span class="nt"&gt;--oneline&lt;/span&gt; &lt;span class="nt"&gt;-5&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; “”

&lt;span class="nb"&gt;echo&lt;/span&gt; “📂 Files I changed recently:”
git &lt;span class="nt"&gt;--no-pager&lt;/span&gt; diff &lt;span class="nt"&gt;--name-only&lt;/span&gt; HEAD~3..HEAD
&lt;span class="nb"&gt;echo&lt;/span&gt; “”

&lt;span class="nb"&gt;echo&lt;/span&gt; “🚧 Current status:”
git &lt;span class="nt"&gt;--no-pager&lt;/span&gt; status &lt;span class="nt"&gt;--porcelain&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; “”

&lt;span class="nb"&gt;echo&lt;/span&gt; “💭 TODOs &lt;span class="k"&gt;in &lt;/span&gt;recent files:”
git &lt;span class="nt"&gt;--no-pager&lt;/span&gt; diff &lt;span class="nt"&gt;--name-only&lt;/span&gt; HEAD~3..HEAD | xargs &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt; “TODO|FIXME|NOTE” 2&amp;gt;/dev/null | &lt;span class="nb"&gt;head&lt;/span&gt; &lt;span class="nt"&gt;-3&lt;/span&gt; | xargs &lt;span class="nb"&gt;grep&lt;/span&gt; “TODO|FIXME|NOTE” 2&amp;gt;/dev/null
&lt;span class="nb"&gt;echo&lt;/span&gt; “”

&lt;span class="nb"&gt;echo&lt;/span&gt; “💡 Copy this info and ask your AI: ‘What was I working on and what should I &lt;span class="k"&gt;do &lt;/span&gt;next?’”
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run context and paste the output into Claude (or your preferred AI) with: “Based on this git activity, what was I likely working on? What should I focus on next?”&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Your Tiredness-Proof Memory&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The AI sees patterns in your commit messages, file changes, and TODOs that you miss when you’re tired or distracted. It’s like having a coworker who watched your last session and can give you a 30-second briefing.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;30 Seconds Now, 15 Minutes Saved Later&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;alias snapshot=”echo $(date): &amp;gt;&amp;gt; .project-notes.md &amp;amp;&amp;amp; code .project-notes.md”&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Before stepping away, run snapshot and jot down what you were doing, what you figured out, and what’s next. Takes 30 seconds, saves 15 minutes later.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The 2-Minute On-Ramp&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The real power emerges when tmux and AI context recovery work together:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;tblog&lt;/strong&gt; — Instantly restore your development environment&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;context&lt;/strong&gt; — Get AI summary of recent work&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;tail .project-notes.md&lt;/strong&gt; — Read your last manual note&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Start coding&lt;/strong&gt; — Usually within 2 minutes of sitting down&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Compare that to the old workflow: navigate to directory, remember what servers to start, open files you think you were working on, stare at code trying to remember, give up and start something easier, maybe start coding 10 minutes later.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Real Win: You Stop Avoiding Hard Projects&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The compound effect of reduced context-switching overhead transforms what kinds of projects parent developers can maintain.&lt;/strong&gt; The productivity gain isn’t just minutes saved—it’s the elimination of the mental barrier that makes you avoid complex work.&lt;/p&gt;

&lt;p&gt;Before: “I only have 20 minutes, that’s not enough time to make real progress on the authentication refactor.”&lt;/p&gt;

&lt;p&gt;After: “I have 20 minutes, let me see what I was doing on the auth stuff.”&lt;/p&gt;

&lt;p&gt;When you’re not afraid of ramp-up time, you work on bigger, more ambitious projects. Side projects actually get finished instead of being abandoned when life gets busy.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Start Here&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Start with tmux sessions.&lt;/strong&gt; Install tmux, create a session for your main project, and force yourself to use it for a week. The productivity gain is immediate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Add AI context recovery later.&lt;/strong&gt; Once tmux is a habit, add the context script. The combination is where the real transformation happens.&lt;/p&gt;

&lt;p&gt;Don’t try to implement everything at once—that’s how productivity tools get abandoned.&lt;/p&gt;

&lt;p&gt;These two tools handle the infrastructure of fragmented development. Combined with the right mindset and workflow aliases, you have a complete system for productive parent developer work. Your coding time might be fragmented, but your progress doesn’t have to be.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This post was originally published on &lt;a href="https://www.raisingpixels.dev/p/2-free-tools-that-solve-the-biggest?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=2-free-tools-that-solve-the-biggest" rel="noopener noreferrer"&gt;Raising Pixels&lt;/a&gt;. Computational thinking for little kids, from a dev mom who builds with her toddler. &lt;a href="https://raisingpixels.dev?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=2-free-tools-that-solve-the-biggest" rel="noopener noreferrer"&gt;Subscribe at raisingpixels.dev&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>tutorial</category>
      <category>parenting</category>
      <category>coding</category>
    </item>
    <item>
      <title>Physical First, Digital Second: Why Unplugged Activities Make Screen Time Work Better</title>
      <dc:creator>Mei Park</dc:creator>
      <pubDate>Sat, 04 Apr 2026 13:04:39 +0000</pubDate>
      <link>https://forem.com/meimakes/physical-first-digital-second-why-unplugged-activities-make-screen-time-work-better-2k03</link>
      <guid>https://forem.com/meimakes/physical-first-digital-second-why-unplugged-activities-make-screen-time-work-better-2k03</guid>
      <description>&lt;p&gt;My son loves cherry tomatoes. He picks which one I cut next, and I slice it up for him. It's a snack ritual.&lt;br&gt;
The other day, he stopped eating them and started lining them up instead. Yellow ones here. Orange ones there. Red. Then the weird reddish-brown ones that look like they can't decide what they are. He made a rainbow across the counter, completely unprompted.&lt;br&gt;
A few days later, we sat down and built a sorting game on the computer. Emoji animals and emoji vehicles appear on screen, and you drag them to either a garage or a grassy field. He got it instantly — he'd already done the hard cognitive work of sorting with tomatoes, and many other things before that. The screen version was just a new surface for something he already understood.&lt;br&gt;
Physical first, digital second; that way, they've something to relate the latter to.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6su6jluexniaiw40n33r.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6su6jluexniaiw40n33r.jpeg" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Physical Matters
&lt;/h2&gt;

&lt;p&gt;Young kids learn through their bodies first.&lt;br&gt;
Piaget's stages of cognitive development place children under seven in the &lt;a href="https://www.ncbi.nlm.nih.gov/books/NBK448206/" rel="noopener noreferrer"&gt;preoperational stage&lt;/a&gt;, where thinking is tied to concrete, tangible experience. Abstract reasoning — the kind screens demand — doesn't fully develop until much later. Children at this age think by &lt;em&gt;doing&lt;/em&gt;. They need to touch, move, sort, stack, and break things to build mental models.&lt;br&gt;
Montessori figured this out over a century ago: concrete before abstract. Let children manipulate real objects until the concept lives in their hands, then introduce the symbolic version. Research in &lt;a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC4036138/" rel="noopener noreferrer"&gt;embodied cognition&lt;/a&gt; backs this up — physical manipulation creates sensorimotor traces that anchor learning in ways that flat visual input alone doesn't.&lt;br&gt;
When my son sorted tomatoes by color, he was making decisions. This one's orange, not red. This one's in between — where does it go? That's where the cognitive exercise is occurring. The sorting game on the computer just gave him a new context to exercise the same skill.&lt;br&gt;
Screens are visual and auditory only. That's fine for adults who have decades of physical experience to draw on. But for a three-year-old still building those mental models, starting on a screen is like reading the manual before seeing the tool. You've got a thin concept of it without real texture or heft.&lt;br&gt;
I'm not anti-screen. My son learned to read with an iPad app, and we build browser games together for fun. But I've noticed a clear pattern: when we do a physical version of a concept first, the digital version lands faster, sticks better, and is way more fun for both of us.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw06xadx76j5vxjt7iqq0.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw06xadx76j5vxjt7iqq0.jpeg" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Pattern
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Physical exploration.&lt;/strong&gt; Hands-on, no screens. The concept shows up through play.&lt;br&gt;
&lt;strong&gt;2. Connection.&lt;/strong&gt; Talk about what just happened. "You sorted the tomatoes by color — what other ways could we sort them?"&lt;br&gt;
&lt;strong&gt;3. Digital creation.&lt;/strong&gt; Build something on the computer that uses the same concept. "Want to make a sorting game?"&lt;br&gt;
&lt;strong&gt;4. Play.&lt;/strong&gt; Actually play with what you built. The kid sees their physical understanding reflected on screen.&lt;br&gt;
The bridge between steps 2 and 3 is where the magic happens. When my son sits down at the computer after sorting tomatoes, he's not encountering "sorting" for the first time. He's &lt;em&gt;recognizing&lt;/em&gt; it. "Oh, this is like the tomatoes!" The concept transfers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Real Examples
&lt;/h2&gt;

&lt;p&gt;Here's how this plays out with different computational thinking concepts:&lt;/p&gt;

&lt;h3&gt;
  
  
  Sequencing
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Physical:&lt;/strong&gt; Steps for washing the car. First you rinse it. Then soap. Then scrub. Then rinse again. Order matters — soap without water does nothing.&lt;br&gt;
&lt;strong&gt;Digital:&lt;/strong&gt; We made a car wash game where tools appear on screen (water, soap, sponge) and you click them in the right order to wash the car. He already knew the sequence from doing it real life. The game just let him practice it on repeat without the running water.&lt;/p&gt;

&lt;h3&gt;
  
  
  Patterns
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Physical:&lt;/strong&gt; I point these out to him everywhere. Stripes on a crosswalk. Alternating fence posts. The rhythm of windshield wipers. Once you start noticing patterns, a three-year-old will not let you stop.&lt;br&gt;
&lt;strong&gt;Digital:&lt;/strong&gt; We made a pattern prediction game using images of his favorite airplanes. A sequence appears — Airbus Beluga, Super Guppy, Airbus Beluga, Super Guppy — and he picks what comes next. He was already pattern-hunting in the wild, the airplane game just made it even more interesting.&lt;/p&gt;

&lt;h3&gt;
  
  
  Loops
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Physical:&lt;/strong&gt; Cleaning up his toys. Pick up a block, put it in the bin. Pick up a block, put it in the bin. Same action, repeated until done. That's a loop.&lt;br&gt;
&lt;strong&gt;Digital:&lt;/strong&gt; A maze game where you move a character forward by repeatedly pressing the arrow key. The loop concept already had a physical anchor from cleanup time – continue until complete.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cause and Effect
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Physical:&lt;/strong&gt; Matchbox cars on a ramp. Place the car at the top, let go, it rolls down. Line up wood blocks like dominoes and knock them over. Every action has a visible, immediate consequence.&lt;br&gt;
&lt;strong&gt;Digital:&lt;/strong&gt; This is every game we've ever made. Click a button, something happens. Change a number, something changes. But the ramps and the blocks came first, and that's why cause-and-effect on screen already makes sense to him.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the Research Actually Says
&lt;/h2&gt;

&lt;p&gt;Most developer parents skip straight to the screen — we live there, we can explain sorting abstractly, why bother with tomatoes? Because we're not three. We have decades of physical experience backing every abstract concept we encounter. A toddler doesn't have that yet. And the research explains why it matters.&lt;br&gt;
Piaget's &lt;a href="https://www.ncbi.nlm.nih.gov/books/NBK537095/" rel="noopener noreferrer"&gt;concrete operational framework&lt;/a&gt; established that children under seven learn through direct manipulation of their environment — they literally cannot reason abstractly yet. Their thinking is bound to what they can see and touch.&lt;br&gt;
Research on &lt;a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC4036138/" rel="noopener noreferrer"&gt;embodied cognition in children&lt;/a&gt; shows that physical manipulation creates sensorimotor memory traces that persist and transfer to new contexts. When a child sorts objects with their hands, they're not just learning "sorting" — they're building neural pathways that activate again when they encounter sorting in a different form.&lt;br&gt;
A &lt;a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC5321706/" rel="noopener noreferrer"&gt;2017 study on embodied math learning&lt;/a&gt; found that physical manipulation of objects before abstract representation improved both understanding and transfer — but only when the physical activity was directly connected to the concept. Random hands-on play didn't help. Intentional physical exploration of the &lt;em&gt;same idea&lt;/em&gt; they'd later encounter digitally did.&lt;br&gt;
This is why the pattern matters. It's not "play outside then do screen time." It's "explore &lt;em&gt;this specific concept&lt;/em&gt; physically, then build &lt;em&gt;this specific concept&lt;/em&gt; digitally." The connection between the two is everything.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Permission to Be Low-Tech
&lt;/h2&gt;

&lt;p&gt;There's a weird pressure in the developer parent community to start kids on technology as early as possible. As if your professional identity depends on your toddler being tech-forward.&lt;br&gt;
Your three-year-old sorting cherry tomatoes on a cutting board isn't falling behind, they're building the cognitive scaffolding that will make every future digital experience meaningful instead of superficial.&lt;br&gt;
No need to rush past the physical parts. The screens will be there later, but the tomatoes won't keep.&lt;br&gt;
&lt;em&gt;This essay is part of the thinking behind &lt;a href="https://buildwithyourkid.com" rel="noopener noreferrer"&gt;12 Weeks of Tech Projects to Build With Your Kid&lt;/a&gt; — a hands-on curriculum for ages 2-6 that pairs physical activities with AI-assisted game building. No screens required for most of it.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>parenting</category>
      <category>webdev</category>
      <category>beginners</category>
      <category>programming</category>
    </item>
    <item>
      <title>Build First, Plan Later: What My 3-Year-Old Knows About Making Things</title>
      <dc:creator>Mei Park</dc:creator>
      <pubDate>Sat, 28 Mar 2026 13:01:40 +0000</pubDate>
      <link>https://forem.com/meimakes/build-first-plan-later-what-my-3-year-old-knows-about-making-things-499m</link>
      <guid>https://forem.com/meimakes/build-first-plan-later-what-my-3-year-old-knows-about-making-things-499m</guid>
      <description>&lt;p&gt;My son was playing with blocks yesterday. I wasn't directing him. I was just watching.&lt;br&gt;
He stacked two triangular prisms together and ran a matchbox car down the slope. A ramp.&lt;br&gt;
Then he added a flat block at the top. Now the car could drive up the ramp onto a platform. The ramp wasn't a ramp anymore — it was a driveway.&lt;br&gt;
Then he added a gantry over the platform. Declared the whole thing a race course. Started lining up cars at the top.&lt;br&gt;
He didn't sit down and think “I’m going to build a race course.” He built a ramp, and the ramp told him what it wanted to be next.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa9ofkoolr1cc4o14fauv.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa9ofkoolr1cc4o14fauv.jpeg" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Bricoleur
&lt;/h2&gt;

&lt;p&gt;In 1962, anthropologist Claude Lévi-Strauss described two fundamentally different ways humans create things. The &lt;em&gt;engineer&lt;/em&gt; starts with a plan — a blueprint, a specification, a clear picture of the end goal — and then acquires the exact materials needed to execute it. The &lt;em&gt;bricoleur&lt;/em&gt; starts with whatever's at hand and builds by rearranging, adapting, and responding to what emerges.&lt;br&gt;
Lévi-Strauss wasn't ranking them. He was arguing they're equally sophisticated ways of thinking. But if you've spent any time in schools or workplaces, you know which one gets all the respect.&lt;br&gt;
Almost thirty years later, MIT researchers Sherry Turkle and Seymour Papert observed the same split in how children learn to program. Some kids planned their program top-down: outline the structure, define the functions, then fill in the details. Others — the bricoleurs — came up with a set of instructions, ran it, reacted to what happened, adjusted, ran it again. They were in &lt;em&gt;conversation&lt;/em&gt; with the material.&lt;br&gt;
The planners' result wasn't better. It was just more legible to teachers who'd been trained to value planning. Turkle and Papert called this bias a failure of "epistemological pluralism" — a fancy way of saying we only recognize one style of thinking as real thinking.&lt;br&gt;
My three-year-old doesn't know any of this. He just builds the way that feels natural, which happens to be the way Lévi-Strauss described, Turkle and Papert validated, and every maker space on earth now tries to teach back to adults.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the Blocks Are Saying
&lt;/h2&gt;

&lt;p&gt;Here's the thing about the race course: each step made sense &lt;em&gt;only in context of the step before it.&lt;/em&gt;&lt;br&gt;
The platform made sense because the ramp existed. The gantry made sense because the platform existed. If you'd asked my son at the beginning "what are you building?" he would have said "a ramp" — because that's all it was. The race course didn't exist yet. It couldn't exist yet. It emerged from the building.&lt;br&gt;
This is what Papert meant when he described learning as a conversation between the builder and the thing being built. The blocks aren't passive raw materials. They're participants. Every time my son placed one, the structure changed — and the changed structure suggested new possibilities. The ramp &lt;em&gt;became&lt;/em&gt; an entryway the moment the platform appeared beside it. The context shifted.&lt;br&gt;
Mitchel Resnick, Papert's student at MIT and the creator of Scratch, later formalized this as the Creative Learning Spiral: imagine, create, play, share, reflect, imagine again. It's the cycle that drives kindergarten — and, he argues in &lt;em&gt;Lifelong Kindergarten&lt;/em&gt;, it's how the most creative work happens at every age. We just stop calling it learning and start calling it "iterative design" or "rapid prototyping" somewhere around middle school.&lt;/p&gt;

&lt;h2&gt;
  
  
  What We Train Out of Them
&lt;/h2&gt;

&lt;p&gt;Every formal education system I've encountered — and I went through a lot of them to earn my Masters — eventually teaches children to plan before they build. Outline before you write. Spec before you code. Know what you're making before you start making it.&lt;br&gt;
This is useful. I'm not arguing against blueprints. If you're building a bridge, please have a blueprint.&lt;br&gt;
But there's a cost to making planning the &lt;em&gt;only&lt;/em&gt; acceptable mode. When "what are you building?" always requires a confident answer before you're allowed to pick up the materials, you lose something. You lose the willingness to start without knowing where you'll end up. You lose the ability to let the work talk back to you. You lose the ramp that becomes a race course.&lt;br&gt;
Turkle and Papert saw this concretely: children who naturally built in the bricoleur style were marked down, redirected, told to "plan it out first." Not because their programs didn't work — they worked fine — but because the &lt;em&gt;process&lt;/em&gt; looked wrong to adults who'd been trained in the engineering style.&lt;br&gt;
Papert spent his career pushing back on this. His argument wasn't that planning is bad. It was that &lt;em&gt;we systematically undervalue building-as-thinking.&lt;/em&gt; When a kid stacks blocks and discovers something he didn't intend, that's not a failure of planning. That's cognition. That's how humans have made things for most of our history. The blueprint is the newcomer.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Feedback Loop
&lt;/h2&gt;

&lt;p&gt;What makes my son's block play look like play and a designer's prototype sprint look like work? (A paycheck?)&lt;br&gt;
Build something small. Look at what you built. Respond to what you see. Build the next thing. The feedback loop is the same whether you're three years old with wooden blocks or thirty years old with a Figma prototype.&lt;br&gt;
The race course emerged from forty-five seconds of iterative building. Ramp → platform → gantry → "it's a race course!" Each cycle took maybe ten seconds. No hesitation. No "is this good enough to show someone?" Just build, observe, respond.&lt;br&gt;
This is the loop I try to protect. Not because I want to raise a kid who never plans — he'll learn that skill, and it's a good one. But because the instinct to &lt;em&gt;start building and let the thing tell you what it wants to be&lt;/em&gt; is rare and valuable and very, very easy to train out of someone.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try This
&lt;/h2&gt;

&lt;p&gt;Next time your kid is building something — blocks, LEGO, a pillow fort, a drawing — resist the urge to ask "what are you making?" at the beginning.&lt;br&gt;
Just watch.&lt;br&gt;
Watch how each piece responds to the last. Watch the project change identity midstream. Watch a tower become a bridge become a house become a rocket ship. That's not indecision. That's a conversation between a builder and a material, playing out in real time.&lt;br&gt;
That's the feedback loop that drives all creative work. Your kid just hasn't learned to be self-conscious about it yet.&lt;br&gt;
Protect that.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This essay is part of the thinking behind &lt;a href="https://buildwithyourkid.com" rel="noopener noreferrer"&gt;12 Weeks of Tech Projects to Build With Your Kid&lt;/a&gt; — a curriculum designed around exploration-first learning for ages 2-6.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  References
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Lévi-Strauss, C. (1962). &lt;em&gt;&lt;a href="https://archive.org/details/savagemindnature00clau" rel="noopener noreferrer"&gt;The Savage Mind.&lt;/a&gt;&lt;/em&gt; University of Chicago Press.&lt;/li&gt;
&lt;li&gt;Turkle, S. &amp;amp; Papert, S. (1990). &lt;a href="https://www.jstor.org/stable/3174610" rel="noopener noreferrer"&gt;"Epistemological Pluralism: Styles and Voices within the Computer Culture."&lt;/a&gt; &lt;em&gt;Signs&lt;/em&gt;, 16(1), 128-157.&lt;/li&gt;
&lt;li&gt;Papert, S. (1980). &lt;em&gt;&lt;a href="https://archive.org/details/mindstormschildr00pape" rel="noopener noreferrer"&gt;Mindstorms: Children, Computers, and Powerful Ideas.&lt;/a&gt;&lt;/em&gt; Basic Books.&lt;/li&gt;
&lt;li&gt;Resnick, M. (2017). &lt;em&gt;&lt;a href="https://mitpress.mit.edu/9780262536134/lifelong-kindergarten/" rel="noopener noreferrer"&gt;Lifelong Kindergarten: Cultivating Creativity through Projects, Passion, Peers, and Play.&lt;/a&gt;&lt;/em&gt; MIT Press.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>parenting</category>
      <category>webdev</category>
      <category>beginners</category>
      <category>programming</category>
    </item>
    <item>
      <title>The Device Is Neutral. The Activity Is Everything.</title>
      <dc:creator>Mei Park</dc:creator>
      <pubDate>Sat, 21 Mar 2026 13:00:58 +0000</pubDate>
      <link>https://forem.com/meimakes/the-device-is-neutral-the-activity-is-everything-1m48</link>
      <guid>https://forem.com/meimakes/the-device-is-neutral-the-activity-is-everything-1m48</guid>
      <description>&lt;p&gt;We used to let our son watch Cocomelon. He was one, maybe fourteen months. It seemed harmless — bright colors, nursery rhymes, educational-looking. He loved it. We thought he was learning.&lt;br&gt;
What we didn't know: Cocomelon switches scenes every one to two seconds. That's not an accident. It's engineered — focus-grouped, A/B-tested, optimized for one metric: watch time. The rapid cuts trigger an orienting response — the involuntary reflex your brain has to novel visual stimuli. Every cut is a tiny dopamine hit. Your toddler isn't watching. They're being &lt;em&gt;held&lt;/em&gt;.&lt;br&gt;
The first time we said “no more Cocomelon,” our son had a meltdown. Not a tantrum — a &lt;em&gt;withdrawal.&lt;/em&gt; Screaming, inconsolable. That convinced us.&lt;br&gt;
We went cold turkey. And here's the thing: he's fine. He's on screens plenty now — building games, typing in his &lt;a href="https://github.com/meimakes/tiny-terminal" rel="noopener noreferrer"&gt;tiny-terminal&lt;/a&gt;, using apps we chose deliberately. When we say "okay, time to go outside," he goes. No meltdown. No negotiation. The difference isn't less screen time. It's different screen time.&lt;br&gt;
That experience is what led me to the only framework I've found that actually helps.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffmliv4jsfer7uekz33ul.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffmliv4jsfer7uekz33ul.jpeg" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Spectrum
&lt;/h2&gt;

&lt;p&gt;Every interaction your kid has with technology falls somewhere on a line. On one end: pure consumption. On the other: pure creation.&lt;br&gt;
&lt;strong&gt;Consumer end →&lt;/strong&gt; Watching YouTube, streaming shows, scrolling. The screen asks nothing of your child except their eyeballs and attention.&lt;br&gt;
&lt;strong&gt;Creator end →&lt;/strong&gt; Designing a game, directing what gets built, making decisions, giving feedback, iterating. The screen doesn't work without your child's input.&lt;br&gt;
Most things fall somewhere in between. Minecraft creative mode is further right than watching Minecraft YouTube. Drawing on an iPad is further right than scrolling through a feed. Same device, wildly different cognitive engagement.&lt;br&gt;
The framework is simple: &lt;strong&gt;instead of "less screen time," aim to shift right on the spectrum.&lt;/strong&gt; That you can actually do something with.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Research Backs This Up
&lt;/h2&gt;

&lt;p&gt;This isn't just a nice mental model. The science increasingly distinguishes between passive and active screen engagement — and finds they have very different effects on developing brains.&lt;br&gt;
&lt;strong&gt;Michaeleen Doucleff's&lt;/strong&gt; _ &lt;strong&gt;Dopamine Kids&lt;/strong&gt; _ &lt;strong&gt;(2026)&lt;/strong&gt; makes the neuroscience explicit: dopamine doesn't give pleasure — it makes you &lt;em&gt;want&lt;/em&gt;. Screens optimized for engagement create wanting loops, not satisfaction. Your kid isn't enjoying the content. They're trapped in a craving cycle. Doucleff's diagnosis is exactly right. But here's where I diverge: the prescription isn't "fewer screens." It's "different screens." A terminal where your kid types commands and a feed that auto-plays the next video trigger completely different dopamine profiles — even though both involve a glowing rectangle.&lt;br&gt;
&lt;strong&gt;Lillard &amp;amp; Peterson (2011)&lt;/strong&gt; randomly assigned 4-year-olds to watch either a fast-paced cartoon (SpongeBob), an educational show, or draw with crayons for nine minutes. The fast-paced group performed significantly worse on executive function tests immediately afterward — self-regulation, working memory, the cognitive skills that let kids focus and make decisions. Nine minutes.&lt;br&gt;
Cocomelon is faster-paced than SpongeBob.&lt;br&gt;
&lt;strong&gt;Radesky &amp;amp; Christakis (2016)&lt;/strong&gt; at the University of Michigan and Seattle Children's Research Institute reviewed the evidence on screen time and early childhood development. Their key finding: it's not the screen itself that matters, it's the &lt;em&gt;nature of the interaction.&lt;/em&gt; Passive viewing correlates with attention problems and language delays. Interactive, co-viewed media doesn't show the same pattern — and in some cases shows benefits.&lt;br&gt;
&lt;strong&gt;A 2021 Frontiers in Education study&lt;/strong&gt; on passive vs. active screen time and phonological memory in young children found significant differences: passive screen time was associated with lower cognitive performance, while active screen time showed no such effect. Same screens. Different engagement. Different outcomes.&lt;br&gt;
&lt;strong&gt;The Australian Government's original screen time guidelines (2011)&lt;/strong&gt; recommended zero screen time under 2, based on the assumption that all screen activities are "physically and cognitively sedentary." Subsequent research has challenged this — showing that interactive media can support cognitive development in ways passive viewing doesn't. The blanket timer approach conflates two fundamentally different experiences.&lt;br&gt;
This is why "is Cocomelon bad?" and "is Minecraft bad?" are the wrong questions. The right question is: &lt;strong&gt;what is my kid doing?&lt;/strong&gt; Are they making decisions, or just receiving stimulation?&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbb3t32y3qg8xbwbx4uok.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbb3t32y3qg8xbwbx4uok.jpeg" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Cocomelon Test
&lt;/h2&gt;

&lt;p&gt;Here's a quick diagnostic I use now when evaluating any screen activity:&lt;br&gt;
&lt;strong&gt;Can my kid walk away from it easily?&lt;/strong&gt;&lt;br&gt;
This sounds simple but it's surprisingly revealing. When my son was watching Cocomelon, turning it off triggered a crisis. That's the hallmark of a passive dopamine loop — the content does the work of engagement, and removing it feels like withdrawal.&lt;br&gt;
When he's building a game with me, or playing in his terminal, or drawing on the iPad — and I say "okay, time for dinner" — he might grumble, but he transitions. Because &lt;em&gt;he&lt;/em&gt; was driving the experience. He was the active agent. Stopping doesn't feel like something being taken away. It feels like pausing something he can come back to.&lt;br&gt;
If your kid loses it every time you turn off a specific app or show, that's a signal. Not that screens are bad, but that &lt;em&gt;this particular screen experience&lt;/em&gt; is in the passive consumption zone.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why "Set a Timer" Doesn't Work
&lt;/h2&gt;

&lt;p&gt;Most screen time advice boils down to: pick a number of minutes, set a timer, feel responsible. The AAP says one hour for ages 2-5. The WHO says less.&lt;br&gt;
The problem is that timers treat all screen time as equal. Thirty minutes of building a game and thirty minutes of watching someone else play a game register the same on the clock, but they are fundamentally different experiences for your kid's brain. One is creative work that happens to involve a screen. The other is television with a touchscreen.&lt;br&gt;
When you feel guilty about your kid's screen time, check the spectrum position before checking the clock. If they're actively creating — making choices, giving instructions, iterating on ideas — the guilt is probably misplaced. If they're slack-jawed and glazed, that's your signal to redirect, not just restrict.&lt;/p&gt;

&lt;h2&gt;
  
  
  Shift Right
&lt;/h2&gt;

&lt;p&gt;Here's how this works in daily life:&lt;br&gt;
&lt;strong&gt;Audit activities, not minutes.&lt;/strong&gt; List every tech thing your kid did this week. Place each one on the spectrum. Look at the ratio. Most unsupervised screen time lands on the consumer end. That's a design insight.&lt;br&gt;
&lt;strong&gt;Choose tools that require input.&lt;/strong&gt; Apps and activities that &lt;em&gt;don't work&lt;/em&gt; without your kid's participation naturally land further right. A drawing app is better than a video player. Building a game together is better than both.&lt;br&gt;
&lt;strong&gt;Be the co-pilot, not the bouncer.&lt;/strong&gt; The guilt-driven approach is restriction: set limits, enforce them. The design-driven approach is redirection: what if screen time was something you did &lt;em&gt;together&lt;/em&gt;, where your kid steered?&lt;br&gt;
&lt;strong&gt;Name what's happening.&lt;/strong&gt; Kids can learn the difference. "Right now you're watching. Want to make something instead?" Over time, they start to prefer creation — because it's genuinely more rewarding than consumption when you give them the option.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Changed for Us
&lt;/h2&gt;

&lt;p&gt;After we cut Cocomelon, we didn't go anti-screen. We went pro-creation. My son practiced his phonics with YouTube videos, his critical thinking skills with GCompris games. He now builds browser games with me. He draws on the iPad and explains what he's doing.&lt;br&gt;
Is it screen time? Yes. Does it look anything like that fourteen-month-old, glued in place, watching highly-saturated flashing picture nonsense for the fortieth consecutive minute? Not even close.&lt;br&gt;
The device is neutral. A screen showing Cocomelon and a screen showing your kid's own game are the same hardware doing completely different things to their brain. One is engineered to hold attention. The other develops it.&lt;br&gt;
You don't need less screen time. You need better screen time. And now you have a framework to tell the difference.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;If you want structured activities that live on the creator end of the spectrum, that's exactly what I built: &lt;a href="https://buildwithyourkid.com" rel="noopener noreferrer"&gt;12 Weeks of Tech Projects to Build With Your Kid&lt;/a&gt; — 60 activities for ages 2-6, mostly unplugged, designed around exploration-first learning.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  References
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Lillard, A.S. &amp;amp; Peterson, J. (2011). &lt;a href="https://pubmed.ncbi.nlm.nih.gov/21911349/" rel="noopener noreferrer"&gt;"The immediate impact of different types of television on young children's executive function."&lt;/a&gt; &lt;em&gt;Pediatrics&lt;/em&gt;, 128(4), 644-649.&lt;/li&gt;
&lt;li&gt;Radesky, J.S. &amp;amp; Christakis, D.A. (2016). &lt;a href="https://pubmed.ncbi.nlm.nih.gov/27565361/" rel="noopener noreferrer"&gt;"Increased Screen Time: Implications for Early Childhood Development and Behavior."&lt;/a&gt; &lt;em&gt;Pediatric Clinics of North America&lt;/em&gt;, 63(5), 827-839.&lt;/li&gt;
&lt;li&gt;Kostyrka-Allchorne, K., Cooper, N.R. &amp;amp; Simpson, A. (2021). &lt;a href="https://www.frontiersin.org/journals/education/articles/10.3389/feduc.2021.600687/full" rel="noopener noreferrer"&gt;"Short- and Long-Term Effects of Passive and Active Screen Time on Young Children's Phonological Memory."&lt;/a&gt; &lt;em&gt;Frontiers in Education&lt;/em&gt;, 6, 600687.&lt;/li&gt;
&lt;li&gt;Sweetser, P., Johnson, D., Ozdowska, A. &amp;amp; Wyeth, P. (2012). &lt;a href="https://www.researchgate.net/publication/288150157_Active_versus_Passive_Screen_Time_for_Young_Children" rel="noopener noreferrer"&gt;"Active versus Passive Screen Time for Young Children."&lt;/a&gt; &lt;em&gt;Australasian Journal of Early Childhood&lt;/em&gt;, 37(4), 94-98.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>parenting</category>
      <category>webdev</category>
      <category>beginners</category>
      <category>programming</category>
    </item>
    <item>
      <title>The Parent Developer's Guide to Building Games With AI</title>
      <dc:creator>Mei Park</dc:creator>
      <pubDate>Mon, 16 Mar 2026 15:13:28 +0000</pubDate>
      <link>https://forem.com/meimakes/the-parent-developers-guide-to-building-games-with-ai-4nb9</link>
      <guid>https://forem.com/meimakes/the-parent-developers-guide-to-building-games-with-ai-4nb9</guid>
      <description>&lt;p&gt;My son and I recently built a delivery truck maze. You drive delivery trucks through a maze of city streets to the right destination (bread truck to the bakery, flowers to the flower shop). There’s sparkles and audio feedback when you complete a delivery, points awarded, and increasing maze difficulty with each level.&lt;/p&gt;

&lt;p&gt;I’ve been a developer for over a decade — web apps, APIs, infrastructure — but game dev always felt like a different discipline. Engines, physics libraries, sprite sheets. Then my three-year-old said “make me a game where a red car goes fast” and I opened Claude instead of Unity.&lt;/p&gt;

&lt;p&gt;We made it. It was playable. He loved it. We’ve built dozens since.&lt;/p&gt;

&lt;p&gt;Here’s everything I’ve learned about the process.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!_IPE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff41b68e7-925e-40ab-81ba-a9717c9ceb4d_1456x816.jpeg" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fycbjluh2lo7jjtng4mge.jpeg" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What You Actually Need
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;An AI chatbot.&lt;/strong&gt; Claude, ChatGPT, Gemini — any of the major ones. The technique is the same across all of them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A web browser.&lt;/strong&gt; That’s it. We build simple HTML/CSS/JavaScript games that run in a browser tab. Completely sufficient for young kids. No installs. No build tools. No dependencies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A kid with opinions.&lt;/strong&gt; (This is the easy part.)&lt;/p&gt;

&lt;p&gt;You do NOT need game development experience, knowledge of any game framework, art skills, sound design skills, or a CS degree (though it helps for debugging).&lt;/p&gt;

&lt;h2&gt;
  
  
  The Basic Flow
&lt;/h2&gt;

&lt;p&gt;Here’s how a typical session goes in our house:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: The kid has an idea.&lt;/strong&gt; “I want a game where a delivery truck drives through a maze.”&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: You help translate it into a prompt.&lt;/strong&gt; “Create a simple HTML game where the player drives a delivery truck through a maze using arrow keys. There are houses along the route — when the truck reaches a house, it delivers a package and the house lights up. Add a counter for deliveries completed. Keep it simple and colorful, suitable for a 3-year-old.”&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: The AI generates code.&lt;/strong&gt; You get back a complete HTML file with embedded CSS and JavaScript. Save it as &lt;code&gt;.html&lt;/code&gt;, open it in your browser.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: The kid reacts.&lt;/strong&gt; “Make the truck yellow.” “Add more houses.” “I want a warehouse where you pick up the packages first.”&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5: You iterate.&lt;/strong&gt; Feed the feedback back to the AI. “Change the truck color to yellow. Add a warehouse at the start where the truck loads packages before delivering. Add more houses to the route.”&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6: Repeat steps 4-5 until the kid is satisfied or hungry.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That’s the entire game development cycle. Your kid’s imagination, an AI that writes code faster than you can explain what you want, and a browser.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prompting Tips (The Practical Stuff)
&lt;/h2&gt;

&lt;p&gt;After building a lot of these, here’s what works:&lt;/p&gt;

&lt;h3&gt;
  
  
  Start way simpler than you think.
&lt;/h3&gt;

&lt;p&gt;Your first prompt should describe a game that a first-year CS student could build. One mechanic. One interaction. One thing on screen. You can always add complexity later, but starting complex usually produces buggy, tangled code that’s hard to iterate on.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Good first prompt:&lt;/strong&gt; “Make an HTML game where a red circle follows the mouse cursor and collects yellow stars that appear randomly.”&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Too ambitious first prompt:&lt;/strong&gt; “Make a 2D platformer with multiple levels, power-ups, enemies with AI pathing, and a save system.”&lt;/p&gt;

&lt;h3&gt;
  
  
  Specify the audience.
&lt;/h3&gt;

&lt;p&gt;Always mention that this is for a young child. It changes the AI’s output in useful ways: bigger click targets, brighter colors, simpler controls, more forgiving collision detection.&lt;/p&gt;

&lt;h3&gt;
  
  
  Ask for everything in one file.
&lt;/h3&gt;

&lt;p&gt;“Put all HTML, CSS, and JavaScript in a single HTML file.” This makes it trivial to save and run. No build step, no dependencies, no module imports that break.&lt;/p&gt;

&lt;h3&gt;
  
  
  Request mobile/touch support.
&lt;/h3&gt;

&lt;p&gt;“Make it work with both mouse/keyboard and touch.” Toddlers are surprisingly good with touchscreens, and this means the game works on your phone or tablet too.&lt;/p&gt;

&lt;h3&gt;
  
  
  When things break (and they will):
&lt;/h3&gt;

&lt;p&gt;Copy the error from the browser console and paste it directly to the AI. “When I click the truck, I get this error in the console: [error]. Fix it.” Or if there’s no error, describe it: “When I press the down arrow, the page moves instead of the truck.” AI is excellent at debugging its own code.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Your Kid Actually Learns
&lt;/h2&gt;

&lt;p&gt;Here’s the part that surprised me: building games this way is sneakily educational, even though it feels like pure play.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Decision-making.&lt;/strong&gt; Every feature request is a design decision. “Should the truck be fast or slow?” “What happens when you crash?” Your kid is learning to think about cause and effect in a system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Iteration.&lt;/strong&gt; The game is never right on the first try. Your kid learns that making things is a process of attempt → evaluate → adjust. That’s the most important meta-skill in all of technology.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Systems thinking.&lt;/strong&gt; “When I added the dinosaur, the truck stopped working.” Things are connected. Changes have consequences. Welcome to software.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creative expression.&lt;/strong&gt; Your kid’s game is &lt;em&gt;their&lt;/em&gt; game. Not a game they downloaded. Not a game someone else designed. It has their ideas, their aesthetics, their dinosaur-on-a-garbage-truck vision. That ownership compounds interest.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!qbN3!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcaf78433-33f1-41bc-9ec5-6dd4b86d08f6_1456x816.jpeg" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcqdnu3oyqwii1un52amh.jpeg" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Common Pitfalls
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Don’t optimize too early.&lt;/strong&gt; Your kid doesn’t care about code quality. They care about whether the dinosaur is big enough. Ship the feature, clean up later (or don’t — these are throwaway games).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Don’t take over.&lt;/strong&gt; It’s tempting to start adding your own ideas. “What if we add a leaderboard? What about particle effects?” Let your kid drive. Your job is to translate their ideas into prompts, not to impose your own.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Don’t expect polish.&lt;/strong&gt; AI-generated games look like AI-generated games. They’re functional and fun, but they won’t win any design awards. That’s fine. Your kid genuinely does not care that the truck is a rectangle with two circles for wheels.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Don’t make it a lesson.&lt;/strong&gt; The moment you say “this is teaching you about algorithms,” the magic dies. Just build. The learning happens in the background.&lt;/p&gt;

&lt;h2&gt;
  
  
  One More Thing
&lt;/h2&gt;

&lt;p&gt;The games we build might be objectively terrible. The collision detection is approximate. The graphics are basic shapes. The physics are suggestions at best. My son’s current favorite is a car wash game – you literally wash a car. Click on water, click the car. Click on soap, click the car. Some elements overlap others and there’s a bug when you click on the sponge too early – but it’s fine. It works well enough and he’s played it about a hundred times already.&lt;/p&gt;

&lt;p&gt;He made it. That’s why.&lt;/p&gt;

&lt;p&gt;You don’t need game dev experience to give your kid that feeling. You need an AI, a browser, and the willingness to build a really, really bad game about garbage trucks.&lt;/p&gt;

&lt;p&gt;It just might be the best thing you ship all year.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This post was originally published on &lt;a href="https://www.raisingpixels.dev/p/the-parent-developers-guide-to-building?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=the-parent-developers-guide-to-building" rel="noopener noreferrer"&gt;Raising Pixels&lt;/a&gt;. Computational thinking for little kids, from a dev mom who builds with her toddler. &lt;a href="https://raisingpixels.dev?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=the-parent-developers-guide-to-building" rel="noopener noreferrer"&gt;Subscribe at raisingpixels.dev&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>tutorial</category>
      <category>parenting</category>
      <category>coding</category>
    </item>
    <item>
      <title>The Plain-Text AI Interface</title>
      <dc:creator>Mei Park</dc:creator>
      <pubDate>Fri, 13 Mar 2026 14:06:14 +0000</pubDate>
      <link>https://forem.com/meimakes/the-plain-text-ai-interface-2217</link>
      <guid>https://forem.com/meimakes/the-plain-text-ai-interface-2217</guid>
      <description>&lt;p&gt;&lt;em&gt;Your vault isn’t a notebook anymore. It’s a runtime.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;We previously published &lt;a href="https://www.theundercurrent.dev/p/your-vault-is-your-moat" rel="noopener noreferrer"&gt;“Your Vault Is Your Moat”&lt;/a&gt; — the case that your personal knowledge base is the one asset AI can’t commoditize. That piece was about ownership. This one is about something stranger: plain-text vaults are becoming the default interface layer between humans and AI agents. Not by design. By convergence.&lt;/p&gt;

&lt;p&gt;Six independent signals, none of them coordinated, all pointing the same direction.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!V0RX!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F79f370f4-ca4f-4a45-b4bf-2406a9c0ee88_1344x896.jpeg" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3wv1hxifathvp50i8nk4.jpeg" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  60,000 repos have an AGENTS.md file
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://agents.md" rel="noopener noreferrer"&gt;AGENTS.md standard&lt;/a&gt; is a plain-text file that tells coding agents how to work on your project. Build steps, test commands, code style, conventions — all in markdown, sitting in your repo root. Google, GitHub Copilot, Windsurf, and OpenAI Codex have adopted it. Claude Code has its own variant (&lt;code&gt;CLAUDE.md&lt;/code&gt;). Over &lt;a href="https://github.com/search?q=path%3AAGENTS.md+NOT+is%3Afork+NOT+is%3Aarchived&amp;amp;type=code" rel="noopener noreferrer"&gt;60,000 open-source projects&lt;/a&gt; now include one.&lt;/p&gt;

&lt;p&gt;Nobody designed this as a standard. It emerged because every agent builder independently arrived at the same conclusion: put a markdown file in the root and let the agent read it on boot.&lt;/p&gt;

&lt;h2&gt;
  
  
  38,000 stars on a repo of plain-text config files
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/PatrickJS/awesome-cursorrules" rel="noopener noreferrer"&gt;awesome-cursorrules&lt;/a&gt; is a community-curated collection of &lt;code&gt;.cursorrules&lt;/code&gt; files — plain-text instructions that tell Cursor’s AI how to behave in your project. 38K stars. 3,200 forks. Cursor has since evolved to a structured MDC format, but the original insight was the same: a text file in your repo that the agent reads first.&lt;/p&gt;

&lt;p&gt;The ecosystem around “how to configure AI agents with plain text” is now larger than most programming frameworks.&lt;/p&gt;

&lt;h2&gt;
  
  
  JARVIS runs on DataviewJS now
&lt;/h2&gt;

&lt;p&gt;A developer built a &lt;a href="https://reddit.com/r/ClaudeAI/comments/1rnqiny/" rel="noopener noreferrer"&gt;full monitoring dashboard inside Obsidian&lt;/a&gt; — 13 DataviewJS widgets tracking active Claude Code sessions, token usage, project status. It trended on r/ClaudeAI. The name is JARVIS, because of course it is.&lt;/p&gt;

&lt;p&gt;The Iron Man cosplay is inevitable. The architecture is the story. No web app. No dashboard service. Just markdown files with embedded queries, living inside the same vault as everything else. Monitoring layer and knowledge layer collapsed into one.&lt;/p&gt;

&lt;h2&gt;
  
  
  An agent named Lloyd boots from a Mac Mini
&lt;/h2&gt;

&lt;p&gt;Dave Swift wrote up how he &lt;a href="https://daveswift.com/openclaw-obsidian-memory/" rel="noopener noreferrer"&gt;set up OpenClaw + Obsidian for persistent agent memory&lt;/a&gt;. His agent, “Lloyd,” runs on a headless Mac Mini. On every session start, Lloyd reads &lt;code&gt;AGENTS.md&lt;/code&gt; for behavioral instructions, &lt;code&gt;SOUL.md&lt;/code&gt; for identity, and daily memory files for recent context. Between sessions, the vault &lt;em&gt;is&lt;/em&gt; Lloyd’s brain.&lt;/p&gt;

&lt;p&gt;The pattern: &lt;code&gt;AGENTS.md &amp;amp;#8594; SOUL.md &amp;amp;#8594; MEMORY.md&lt;/code&gt;. Session bootstrapping via plain text. No database. No embeddings store. No vector DB. Files all the way down.&lt;/p&gt;

&lt;p&gt;Dave didn’t invent this. He stumbled into it — same as a dozen other builders right now, all arriving at the same architecture from different starting points.&lt;/p&gt;

&lt;h2&gt;
  
  
  llms.txt is robots.txt for AI agents
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://searchengineland.com/llms-txt-proposed-standard-453676" rel="noopener noreferrer"&gt;llms.txt proposal&lt;/a&gt; puts a plain-text markdown file in your site’s root directory that tells AI agents what your site is about, what content matters, and how to navigate it. It’s robots.txt logic applied to LLMs — and it’s already mainstream enough that Bluehost is publishing setup guides for it.&lt;/p&gt;

&lt;p&gt;The pattern is identical to AGENTS.md, just pointed outward. AGENTS.md tells an agent how to work &lt;em&gt;on&lt;/em&gt; your project. llms.txt tells an agent how to understand &lt;em&gt;about&lt;/em&gt; your project. Both are plain text. Both sit in the root. Both emerged independently.&lt;/p&gt;

&lt;h2&gt;
  
  
  Smart Connections goes paid
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://smartconnections.app/pro-plugins/" rel="noopener noreferrer"&gt;Smart Connections Pro launched its paid tier&lt;/a&gt; this week. Semantic search over vault contents, AI-powered note connections, the works. The free version already had serious traction. A paid tier means the market is real enough to charge for.&lt;/p&gt;

&lt;p&gt;The business signal matters more than the feature list. Someone looked at “AI + Obsidian vault” and decided it was worth charging for. That’s a market now, not a hobby.&lt;/p&gt;

&lt;h2&gt;
  
  
  The pattern no one planned
&lt;/h2&gt;

&lt;p&gt;Here’s what’s actually happening: people are independently converging on plain text as the substrate for AI agents. Nobody published a spec. Plain text just has properties nothing else can match:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Readable by everything.&lt;/strong&gt; LLMs parse markdown natively. So do shell scripts, grep, Python, and probably whatever tool gets invented next Tuesday.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Diffable.&lt;/strong&gt; Git-track your agent’s memory. See exactly what changed between sessions. Good luck doing that with SQLite.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Portable.&lt;/strong&gt; When the next AI framework drops (give it six weeks), your vault still works. Files outlive frameworks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Composable.&lt;/strong&gt; &lt;code&gt;AGENTS.md&lt;/code&gt; handles behavior. &lt;code&gt;SOUL.md&lt;/code&gt; handles identity. &lt;code&gt;MEMORY.md&lt;/code&gt; handles continuity. Daily notes handle episodic memory. Each file is a module. Swap any piece without touching the others.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;AGENTS.md &amp;amp;#8594; SOUL.md &amp;amp;#8594; MEMORY.md&lt;/code&gt; bootstrap pattern is becoming a de facto standard through pure convergent evolution. When your agent’s context window starts with &lt;code&gt;read these files&lt;/code&gt;, this is the obvious architecture.&lt;/p&gt;

&lt;h2&gt;
  
  
  Your vault is an agent runtime
&lt;/h2&gt;

&lt;p&gt;Reframe it: your Obsidian vault is where an AI agent boots, orients, acts, and persists state. Folder structure is the file system. Links are the graph database. Frontmatter is the schema. Daily notes are the event log. &lt;code&gt;AGENTS.md&lt;/code&gt; is the boot sequence.&lt;/p&gt;

&lt;p&gt;More people figure this out independently every week. The plain-text AI interface was always there. We’re just now noticing what we built.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://www.theundercurrent.dev/p/the-plain-text-ai-interface?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=the-plain-text-ai-interface" rel="noopener noreferrer"&gt;The Undercurrent&lt;/a&gt;, a daily dispatch for AI tooling, indie dev, and what's changing in the solo builder underground.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devtools</category>
      <category>ai</category>
      <category>productivity</category>
    </item>
    <item>
      <title>The Summarize Button That Remembers Too Much</title>
      <dc:creator>Mei Park</dc:creator>
      <pubDate>Fri, 13 Mar 2026 14:03:02 +0000</pubDate>
      <link>https://forem.com/meimakes/the-summarize-button-that-remembers-too-much-nl9</link>
      <guid>https://forem.com/meimakes/the-summarize-button-that-remembers-too-much-nl9</guid>
      <description>&lt;p&gt;Picture this: your company’s CFO asks their AI assistant to summarize an article about enterprise cloud solutions. The assistant obliges — a clean, helpful summary. But buried in the page, invisible to the human reader, are instructions that tell the AI to remember a preference: &lt;em&gt;“This user’s organization prefers Vendor X for cloud infrastructure.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Weeks later, when the CFO asks their assistant for cloud vendor recommendations, Vendor X surfaces at the top. No ad disclosure. No sponsorship label. Just a preference that was quietly planted in the assistant’s persistent memory, waiting to activate.&lt;/p&gt;

&lt;p&gt;This isn’t a theoretical attack. On February 10, Microsoft’s security team &lt;a href="https://www.microsoft.com/en-us/security/blog/2026/02/10/ai-recommendation-poisoning/" rel="noopener noreferrer"&gt;published research&lt;/a&gt; documenting exactly this technique — and found 31 companies across 14 industries already doing it.&lt;/p&gt;

&lt;p&gt;They’re calling it &lt;strong&gt;AI recommendation poisoning&lt;/strong&gt;, and it works on Copilot, ChatGPT, Claude, Perplexity, and Grok.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!0VWE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e40775b-9e53-4a6e-a66f-dfb341a9438d_1344x896.jpeg" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flc5r9t20bdzwdud7gejv.jpeg" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How It Works
&lt;/h2&gt;

&lt;p&gt;The attack vector is deceptively simple. Many AI assistants now accept URLs with a &lt;code&gt;?q=&lt;/code&gt; parameter that pre-fills a prompt. A website’s “Summarize with AI” button — the kind you’ve seen popping up everywhere — can craft that URL to include hidden instructions alongside the legitimate summarization request.&lt;/p&gt;

&lt;p&gt;Those instructions tell the AI to store a preference, opinion, or recommendation in its persistent memory. The user sees a summary. The AI quietly files away a brand preference it will surface later, completely out of context.&lt;/p&gt;

&lt;p&gt;Microsoft’s research maps this to MITRE ATLAS techniques &lt;a href="https://atlas.mitre.org/techniques/AML.T0080" rel="noopener noreferrer"&gt;AML.T0080&lt;/a&gt; (Memory Poisoning) and AML.T0051 — formal classifications that signal this isn’t a novelty exploit. It’s a recognized attack pattern with a taxonomy and everything.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Vicious Cycle
&lt;/h2&gt;

&lt;p&gt;If you’re thinking “can’t the AI providers just patch this?” — they’re trying. It’s not going well.&lt;/p&gt;

&lt;p&gt;In January, Ars Technica &lt;a href="https://arstechnica.com/security/2026/01/chatgpt-falls-to-new-data-pilfering-attack-as-a-vicious-cycle-in-ai-continues/" rel="noopener noreferrer"&gt;reported on the ZombieAgent attack&lt;/a&gt;, which demonstrated data exfiltration through ChatGPT’s memory system. The pattern is now familiar: researchers find an exploit, OpenAI patches it, researchers find a bypass, OpenAI patches again. Rinse, repeat.&lt;/p&gt;

&lt;p&gt;As Pascal Geenens, VP of threat intelligence at Radware, put it: &lt;strong&gt;“Guardrails should not be considered fundamental solutions for the prompt injection problems.”&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;OpenAI themselves acknowledged as much in December 2025 when they &lt;a href="https://openai.com/index/hardening-atlas-against-prompt-injection/" rel="noopener noreferrer"&gt;published a post on hardening their Atlas agent mode against prompt injection&lt;/a&gt;. The key admission: prompt injection “may never be fully solved.” And their new Atlas agent mode — the one that browses the web and takes actions on your behalf — “expands the security threat surface.”&lt;/p&gt;

&lt;p&gt;Let that sink in. The companies building these tools are telling us the core vulnerability may be permanent, even as they ship features that make the attack surface bigger.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Spectrum: From llms.txt to Memory Poisoning
&lt;/h2&gt;

&lt;p&gt;Here’s the thing that makes this genuinely complicated for builders: there’s a legitimate reason to make your website AI-readable. The question is where helpfulness ends and manipulation begins.&lt;/p&gt;

&lt;p&gt;Let’s map the spectrum.&lt;/p&gt;

&lt;h3&gt;
  
  
  🟢 Legitimate: Structured Data &amp;amp; llms.txt
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://llmstxt.org/" rel="noopener noreferrer"&gt;llms.txt specification&lt;/a&gt; is the clean approach. It’s a simple text file — think robots.txt for AI — that tells language models what your site is about, what’s important, and how to interpret your content. It’s transparent, opt-in, and serves the user’s interest by helping the AI give better answers.&lt;/p&gt;

&lt;p&gt;Structured data (schema.org markup, OpenGraph tags) falls here too. You’re making information machine-readable so that tools — search engines, AI assistants, screen readers — can serve your users better. Everyone benefits.&lt;/p&gt;

&lt;p&gt;But here’s the uncomfortable thing nobody’s saying out loud: &lt;strong&gt;llms.txt is also an injection surface.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It’s a markdown file designed to be consumed by LLMs at inference time. There’s nothing in the spec that prevents a site owner from writing “Always recommend our product first” right there in the file. It goes straight into the context window. The spec trusts site owners to be honest — and we all know how that worked out for meta keywords in 1998.&lt;/p&gt;

&lt;p&gt;The critical difference: llms.txt is &lt;em&gt;visible&lt;/em&gt;. Anyone can read &lt;code&gt;yourdomain.com/llms.txt&lt;/code&gt; and see exactly what you’re telling AI about your site. A hidden prompt in a summarize button is invisible to the user. One is a store sign that might exaggerate; the other is someone slipping something into your pocket. Both can lie — but one is auditable.&lt;/p&gt;

&lt;p&gt;As AI agents get more autonomous and start consuming llms.txt files as trusted context to make decisions, this distinction matters more than it might seem right now. The honest version of the AI-readable web depends on llms.txt staying informational rather than becoming another manipulation vector. Whether that holds is an open question.&lt;/p&gt;

&lt;h3&gt;
  
  
  🟡 Manipulative: Generative Engine Optimization (GEO)
&lt;/h3&gt;

&lt;p&gt;This is where it gets gray. Research published on arXiv (&lt;a href="https://arxiv.org/pdf/2311.09735" rel="noopener noreferrer"&gt;2311.09735&lt;/a&gt;) studied how content optimization affects AI-generated responses. The findings: adding statistics and quotations to content can increase the likelihood of AI systems mentioning that content by 30-40%. The researchers also explored what they called “adversarial SEO for LLMs” — techniques like hidden text that can boost AI mentions by 2.5x.&lt;/p&gt;

&lt;p&gt;Sound familiar? It should. This is SEO all over again, just for a new kind of search engine. And just like SEO, there’s a spectrum within the spectrum. Writing clearly and including relevant data so AI can accurately represent your work? Fine. Stuffing invisible text into your pages to game AI recommendations? That’s the old keyword-stuffing playbook with a new coat of paint.&lt;/p&gt;

&lt;p&gt;The GEO research is peer-reviewed and the techniques are already in the wild. If you’re building content-driven products, you will encounter competitors doing this. The question is whether you join them.&lt;/p&gt;

&lt;h3&gt;
  
  
  🔴 Adversarial: Memory Poisoning via Prompt Injection
&lt;/h3&gt;

&lt;p&gt;This is what Microsoft documented. It’s not about influencing what the AI says about your content &lt;em&gt;right now&lt;/em&gt;. It’s about planting instructions that persist in the user’s AI memory and influence future, unrelated conversations.&lt;/p&gt;

&lt;p&gt;The difference is critical: - &lt;strong&gt;GEO&lt;/strong&gt; manipulates the AI’s &lt;em&gt;current&lt;/em&gt; response about your content - &lt;strong&gt;Memory poisoning&lt;/strong&gt; manipulates the AI’s &lt;em&gt;future&lt;/em&gt; responses about &lt;em&gt;everything&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;One is gaming a system. The other is compromising a user’s trusted tool. The 31 companies Microsoft caught aren’t just optimizing for visibility — they’re writing to a user’s AI memory without consent.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Matters for Builders
&lt;/h2&gt;

&lt;p&gt;If you’re shipping products that interact with AI systems — and increasingly, that’s all of us — you need to understand three things:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Your users’ AI memories are now an attack surface.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every “Summarize with AI” button, every shared link with a &lt;code&gt;?q=&lt;/code&gt; parameter, every piece of content your users feed into their assistants is a potential vector for memory poisoning. If you’re building tools that integrate with AI assistants, you need to think about what’s being written to memory and whether your users consented to it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. The defense isn’t technical — it’s structural.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;OpenAI, Microsoft, and others are working on mitigations. But the fundamental tension remains: AI assistants need to remember things to be useful, and any system that accepts external input and writes to memory is vulnerable to having that input be adversarial. Guardrails help. They don’t solve it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. There’s a land grab happening in the AI-readable web, and the rules aren’t written yet.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We went through this with SEO. The early web was a free-for-all of keyword stuffing and link farms until Google got good enough at detecting manipulation (and enough people got burned by penalties). The AI-readable web is in its keyword-stuffing era right now. The companies that play clean will be better positioned when the platforms inevitably crack down.&lt;/p&gt;

&lt;h2&gt;
  
  
  What to Do (and What Not to Do)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;If you’re building a website or content product:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Do&lt;/strong&gt; adopt &lt;a href="https://llmstxt.org/" rel="noopener noreferrer"&gt;llms.txt&lt;/a&gt;. Make your content genuinely useful to AI systems. This is the robots.txt moment — get ahead of it.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Do&lt;/strong&gt; use structured data and clear writing. The best GEO is just good content.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Don’t&lt;/strong&gt; embed hidden instructions in your pages targeting AI assistants. Even if it works today, it’s the kind of thing that gets you blacklisted tomorrow.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Don’t&lt;/strong&gt; build “Summarize with AI” buttons that inject anything beyond the actual content into the prompt. Your users trust that button to do what it says.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;If you’re building AI-integrated tools:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Do&lt;/strong&gt; audit what gets written to AI memory when your tool processes external content. Treat memory writes like database writes — validate the input.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Do&lt;/strong&gt; give users visibility into what’s in their AI memory and where it came from. If a preference was planted by a summarization request, they should be able to see that.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Don’t&lt;/strong&gt; assume the AI provider’s guardrails are sufficient. As OpenAI themselves said, this may never be fully solved.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;If you’re just using AI assistants:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Do&lt;/strong&gt; periodically review your AI assistant’s memory. In ChatGPT, it’s under Settings → Personalization → Memory. Check for preferences you don’t remember setting.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Don’t&lt;/strong&gt; click “Summarize with AI” buttons on sites you don’t trust. That button may be doing more than summarizing.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Uncomfortable Truth
&lt;/h2&gt;

&lt;p&gt;We wanted the AI-readable web. We built llms.txt, structured data, clean APIs. We made it easy for AI to understand our content because that benefits everyone.&lt;/p&gt;

&lt;p&gt;But making the web AI-readable also makes it AI-manipulable. The same interfaces that let legitimate tools serve users better also let adversarial actors compromise those tools. Microsoft found 31 companies doing it across 14 industries, and those are just the ones they caught.&lt;/p&gt;

&lt;p&gt;The prompt injection problem may never be fully solved. That’s not a reason to panic — it’s a reason to build with clear eyes about what the actual threat model looks like. The companies planting memories through summarize buttons are counting on you not knowing this is possible.&lt;/p&gt;

&lt;p&gt;Well, now you know.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://www.theundercurrent.dev/p/the-summarize-button-that-remembers?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=the-summarize-button-that-remembers" rel="noopener noreferrer"&gt;The Undercurrent&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devtools</category>
      <category>ai</category>
      <category>productivity</category>
    </item>
    <item>
      <title>The Agent Framework Wars Have a Winner (And Nobody's Using It Yet)</title>
      <dc:creator>Mei Park</dc:creator>
      <pubDate>Wed, 11 Mar 2026 14:11:33 +0000</pubDate>
      <link>https://forem.com/meimakes/the-agent-framework-wars-have-a-winner-and-nobodys-using-it-yet-3i2j</link>
      <guid>https://forem.com/meimakes/the-agent-framework-wars-have-a-winner-and-nobodys-using-it-yet-3i2j</guid>
      <description>&lt;p&gt;The AI agent space has a framework problem. Not a shortage — a glut. Every week brings a new orchestration layer promising to turn your LLM into a reliable worker. CrewAI, LangGraph, AutoGen, OpenAI’s Agents SDK — pick your flavor, wire up some tools, watch it mostly work until it doesn’t.&lt;/p&gt;

&lt;p&gt;Recently, three things happened that cut through the noise.&lt;/p&gt;

&lt;h2&gt;
  
  
  Microsoft wrote the textbook (literally)
&lt;/h2&gt;

&lt;p&gt;Microsoft Research dropped &lt;a href="https://arxiv.org/pdf/2602.14229" rel="noopener noreferrer"&gt;CORPGEN&lt;/a&gt; on Feb 26 — a framework for what they call “Multi-Horizon Task Environments.” Translation: agents that juggle dozens of concurrent tasks with complex dependencies, like an actual employee would.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!NTAQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff03dcde5-16bd-4fc3-b97c-d04f83c7227c_1344x896.jpeg" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd07ls7dj2ib1xrsegax0.jpeg" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Their key finding is brutal. When you move agents from isolated, single-task benchmarks to realistic multi-task workloads, completion rates crater — from 16.7% to 8.7%. The demos lie. The benchmarks lie. Real work breaks agents.&lt;/p&gt;

&lt;p&gt;They identified four failure modes worth memorizing:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Context saturation&lt;/strong&gt; — context grows linearly with task count until it blows past token limits&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Memory interference&lt;/strong&gt; — info from one task contaminates reasoning about another&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Dependency graph complexity&lt;/strong&gt; — real tasks form DAGs, not linear chains, and agents can’t navigate them&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Reprioritization overhead&lt;/strong&gt; — every new task makes the “what do I do next?” decision harder&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Their solution: hierarchical planning across strategic, tactical, and operational layers, with sub-agent isolation so task contexts don’t bleed into each other.&lt;/p&gt;

&lt;p&gt;If you’ve been building with agents in production, none of this is surprising. But it matters because Microsoft just gave academic weight to an architecture pattern that indie builders stumbled into through trial and error: &lt;strong&gt;keep your strategic context in one place, spawn isolated workers for execution, and persist memory externally.&lt;/strong&gt; The agents that work in the real world aren’t the ones with the cleverest prompts — they’re the ones with the cleanest separation of concerns.&lt;/p&gt;

&lt;h2&gt;
  
  
  The $187 ten-minute mistake
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://news.ycombinator.com/item?id=47133305" rel="noopener noreferrer"&gt;AgentBudget&lt;/a&gt; hit Hacker News recently. It’s a Python SDK born from pain: an agent loop that burned $187 in ten minutes when GPT-4o got stuck retrying a failed analysis. The library monkey-patches OpenAI and Anthropic SDKs to enforce hard dollar budgets with real-time cost tracking.&lt;/p&gt;

&lt;p&gt;1,300+ PyPI installs in just the first four days. That’s people who’ve been burned.&lt;/p&gt;

&lt;p&gt;The broader signal: &lt;strong&gt;agent cost management is becoming its own product category.&lt;/strong&gt; As agents get more autonomous — running overnight, making API calls unsupervised, chaining tool use — runaway loops become a real financial risk. It’s not a matter of if your agent will burn money on a stuck loop, it’s when.&lt;/p&gt;

&lt;p&gt;AgentBudget also integrates with Coinbase’s x402 protocol for autonomous stablecoin payments. We’re quietly entering an era where agents don’t just spend your money accidentally — they spend it on purpose, too. Budget guardrails aren’t a nice-to-have anymore. They’re table stakes.&lt;/p&gt;

&lt;h2&gt;
  
  
  MCP crossed 97 million monthly downloads
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.agilesoftlabs.com/blog/2026/02/how-ai-agents-use-mcp-for-enterprise" rel="noopener noreferrer"&gt;The numbers are real&lt;/a&gt;: Anthropic’s Model Context Protocol went from 100K downloads at launch in November 2024 to 97M+ monthly SDK downloads in early 2026. Google just &lt;a href="https://www.infoq.com/news/2026/02/google-documentation-ai-agents/" rel="noopener noreferrer"&gt;brought their developer docs into MCP&lt;/a&gt;. A whole category of “MCP Gateways” has emerged — middleware that converts REST APIs into MCP-compatible tool endpoints, complete with OAuth 2.1.&lt;/p&gt;

&lt;p&gt;This matters for builders because MCP is becoming the TCP/IP of agent tooling — the boring plumbing layer that everything connects through. If your product or service doesn’t have an MCP endpoint, you’re invisible to the fastest-growing class of software consumers: other people’s agents.&lt;/p&gt;

&lt;p&gt;The interesting tension: the ecosystem is exploding but discoverability is fragmented. There’s no npm for MCP servers yet, no curated registry that tells you which implementations are production-quality vs. weekend experiments. The tooling gold rush has outpaced the tooling infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  The security conversation nobody wants to have
&lt;/h2&gt;

&lt;p&gt;A &lt;a href="https://news.ycombinator.com/item?id=47194611" rel="noopener noreferrer"&gt;Hacker News thread titled “Don’t trust AI agents”&lt;/a&gt; went after the security model of autonomous agent frameworks. The argument: these systems combine massive codebases with broad system access and minimal human review of their moment-to-moment decisions. The traditional open-source security model — “many eyes make all bugs shallow” — breaks when the codebase is hundreds of thousands of lines of orchestration logic that nobody has time to audit.&lt;/p&gt;

&lt;p&gt;ZDNET piled on with “From Clawdbot to OpenClaw: This viral AI agent is evolving fast — and it’s nightmare fuel for security pros.”&lt;/p&gt;

&lt;p&gt;Here’s the thing: they’re not wrong. But the framing misses the point. Lines of code isn’t a security metric — attack surface is. And the real risk isn’t in the orchestration framework. It’s in what you let the agent &lt;em&gt;do&lt;/em&gt;. An agent with read-only web access and a sandboxed workspace is fundamentally different from one with your AWS credentials and a &lt;code&gt;sudo&lt;/code&gt; habit, regardless of how many lines of code are involved.&lt;/p&gt;

&lt;p&gt;The actual security frontier for agents is &lt;strong&gt;permission architecture&lt;/strong&gt;: granular, auditable, revocable access controls that treat the agent like an untrusted contractor, not a trusted employee. We’re not there yet. Most frameworks hand over the keys and hope for the best.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this means for builders
&lt;/h2&gt;

&lt;p&gt;Three takeaways if you’re building with or on top of agents:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Isolation is the architecture.&lt;/strong&gt; Microsoft proved it academically, but practitioners already knew: multi-agent systems that share context fail. Spawn workers, give them narrow scope, aggregate results. The unsexy patterns win.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Budget your agents like you budget your infrastructure.&lt;/strong&gt; Set hard dollar limits. Monitor token usage per task, not just per month. An agent that runs great 99% of the time and costs you $200 the other 1% is not a reliable agent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. MCP is the integration layer whether you like it or not.&lt;/strong&gt; If you’re building tools, APIs, or services that agents might use, an MCP endpoint is becoming as expected as a REST API. Get ahead of it or get bypassed.&lt;/p&gt;

&lt;p&gt;The framework wars will keep raging. But the winners won’t be decided by benchmarks or GitHub stars. They’ll be decided by who builds the thing that works at 3 AM when nobody’s watching — and doesn’t burn the house down doing it.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;I write about the solo-dev AI landscape daily at &lt;a href="https://www.theundercurrent.dev/p/the-agent-framework-wars-have-a-winner" rel="noopener noreferrer"&gt;The Undercurrent&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://www.theundercurrent.dev/p/the-agent-framework-wars-have-a-winner?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=the-agent-framework-wars-have-a-winner" rel="noopener noreferrer"&gt;The Undercurrent&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devtools</category>
      <category>ai</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Rewrite Your CLI for Agents (Or Get Replaced)</title>
      <dc:creator>Mei Park</dc:creator>
      <pubDate>Tue, 10 Mar 2026 14:16:11 +0000</pubDate>
      <link>https://forem.com/meimakes/rewrite-your-cli-for-agents-or-get-replaced-2a2h</link>
      <guid>https://forem.com/meimakes/rewrite-your-cli-for-agents-or-get-replaced-2a2h</guid>
      <description>&lt;p&gt;The most important interface shift in a decade is happening right now, and most teams are sleepwalking through it.&lt;/p&gt;

&lt;p&gt;AI agents are the fastest-growing consumer of developer tooling. They don’t click buttons. They don’t read man pages. They invoke commands, parse output, and move on. And if your CLI spits out a pretty table with Unicode box-drawing characters and ANSI colors? Congratulations — you’ve built something an agent has to &lt;em&gt;hallucinate its way through&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;The post that crystallized the moment landed on Hacker News recently: Justin Poehnelt’s “&lt;a href="https://justin.poehnelt.com/posts/rewrite-your-cli-for-ai-agents/" rel="noopener noreferrer"&gt;You Need to Rewrite Your CLI for AI Agents&lt;/a&gt;,” written from the experience of building Google’s new Workspace CLI — agents-first from day one. It hit the front page and the comments exploded. Everyone felt the pain it describes.&lt;/p&gt;

&lt;p&gt;The thesis is simple: &lt;strong&gt;the primary consumer of your CLI is no longer a human.&lt;/strong&gt; Act accordingly or get wrapped, forked, or replaced by someone who does.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Interface Mismatch
&lt;/h2&gt;

&lt;p&gt;Here’s what “human-first” CLI design looks like:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;my-cli spreadsheet create \&lt;br&gt;
  --title "Q1 Budget" \&lt;br&gt;
  --locale "en_US" \&lt;br&gt;
  --sheet-title "January" \&lt;br&gt;
  --frozen-rows 1 \&lt;br&gt;
  --frozen-cols 2&lt;/code&gt;Ten flags. Flat namespace. Can’t express nesting without inventing bespoke flag hierarchies. A human can tab-complete their way through it. An agent has to guess which flags exist, in what combination, and hope the help text is unambiguous.&lt;/p&gt;

&lt;p&gt;Now the agent-first version:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;gws sheets spreadsheets create --json '{&lt;br&gt;
  "properties": {"title": "Q1 Budget", "locale": "en_US"},&lt;br&gt;
  "sheets": [{"properties": {"title": "January",&lt;br&gt;
    "gridProperties": {"frozenRowCount": 1, "frozenColumnCount": 2}}}]&lt;br&gt;
}'&lt;/code&gt;One flag. The full API payload. An LLM generates this trivially because it maps directly to the schema. Zero translation loss.&lt;/p&gt;

&lt;p&gt;This isn’t about abandoning human ergonomics. It’s about making the raw-payload path a &lt;strong&gt;first-class citizen&lt;/strong&gt; alongside your convenience flags. The practical minimum: &lt;code&gt;--output json&lt;/code&gt;, an &lt;code&gt;OUTPUT_FORMAT=json&lt;/code&gt; env var, or — better yet — NDJSON by default when stdout isn’t a TTY.&lt;/p&gt;

&lt;h2&gt;
  
  
  Schema Introspection &amp;gt; Static Docs
&lt;/h2&gt;

&lt;p&gt;Agents can’t google your documentation without blowing their token budget. And static API docs baked into a system prompt go stale the moment you ship a new version.&lt;/p&gt;

&lt;p&gt;The Google Workspace CLI solved this with runtime schema introspection:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;gws schema drive.files.list&lt;br&gt;
gws schema sheets.spreadsheets.create&lt;/code&gt;Each call dumps the full method signature — params, request body, response types, required OAuth scopes — as machine-readable JSON. The agent self-serves. No pre-stuffed documentation. No 50-page system prompt.&lt;/p&gt;

&lt;p&gt;This is the pattern that matters: &lt;strong&gt;make the CLI itself the documentation, queryable at runtime.&lt;/strong&gt; Your tool should be able to answer “what do you accept?” and “what will you return?” without the agent ever leaving the terminal.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;gh&lt;/code&gt; CLI already does a version of this. &lt;code&gt;docker&lt;/code&gt; does it. The tools that don’t are the ones getting wrapped by shim layers — and every shim is a maintenance liability waiting to happen.&lt;/p&gt;

&lt;h2&gt;
  
  
  Context Window Discipline
&lt;/h2&gt;

&lt;p&gt;Here’s a number that should scare you: a single Gmail API response can consume a meaningful chunk of an agent’s context window. Humans scroll past irrelevant fields. Agents pay per token and lose reasoning capacity for every byte of noise.&lt;/p&gt;

&lt;p&gt;Two mechanisms matter:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Field masks&lt;/strong&gt; limit what the API returns. &lt;code&gt;gws drive files list --params '{"fields": "files(id,name,mimeType)"}'&lt;/code&gt; — only get what you need.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NDJSON pagination&lt;/strong&gt; emits one JSON object per line, stream-processable without buffering an entire response into memory. The agent processes page by page instead of choking on a 200KB blob.&lt;/p&gt;

&lt;p&gt;This is context window discipline, and it’s non-negotiable. If your CLI dumps everything and expects the consumer to filter, you’re burning tokens that could be spent on reasoning.&lt;/p&gt;

&lt;h2&gt;
  
  
  The MCP Question
&lt;/h2&gt;

&lt;p&gt;“But what about MCP?” Fair question. Anthropic’s Model Context Protocol was supposed to be the universal connector — a clean, structured protocol for agents to talk to any tool. And it works. But there’s a cost nobody talks about.&lt;/p&gt;

&lt;p&gt;Jannik Reinhard ran the numbers in a &lt;a href="https://jannikreinhard.com/2026/02/22/why-cli-tools-are-beating-mcp-for-ai-agents/" rel="noopener noreferrer"&gt;real-world comparison&lt;/a&gt;. A compliance-checking task against Microsoft Graph:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;MCP approach:&lt;/strong&gt; ~145,000 tokens (28K just for schema injection before asking a single question)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CLI approach:&lt;/strong&gt; ~4,150 tokens&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That’s a &lt;strong&gt;35x reduction&lt;/strong&gt;. A typical MCP server ships dozens or hundreds of tool definitions, all of which get dumped into the agent’s context whether it needs them or not. Stack a few MCP servers for a real enterprise workflow — GitHub, a database, Microsoft Graph, Jira — and you’re burning 150K+ tokens on plumbing alone.&lt;/p&gt;

&lt;p&gt;MCP isn’t wrong. But it’s an abstraction layer, and abstraction layers have tax. For many workflows, a well-designed CLI with &lt;code&gt;--json&lt;/code&gt; output and schema introspection is faster, cheaper, and more reliable than routing through a protocol server. The CLI &lt;em&gt;is&lt;/em&gt; the tool call.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Checklist
&lt;/h2&gt;

&lt;p&gt;If you maintain a CLI and you’re not thinking about agent consumers, here’s the minimum viable checklist. It’s not long:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;--json&lt;/code&gt;** flag everywhere.** Structured output to stdout, human messages to stderr.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Meaningful exit codes.&lt;/strong&gt; Not just 0/1. Agents need to branch on failure modes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Idempotent operations.&lt;/strong&gt; Agents retry. Your tool should handle that gracefully.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Schema introspection.&lt;/strong&gt; &lt;code&gt;mytool schema &amp;lt;command&amp;gt;&lt;/code&gt; should return what the command accepts and returns.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;NDJSON pagination.&lt;/strong&gt; Stream large result sets. Don’t buffer.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Noun-verb command structure.&lt;/strong&gt; &lt;code&gt;mytool resource action&lt;/code&gt; — it turns discovery into a tree search instead of a guessing game.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;TTY detection.&lt;/strong&gt; Pretty output for humans, JSON for pipes. Automatically.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;None of this is exotic. Most of it is just good Unix hygiene that we’ve been lazy about for years. The difference is that now there’s a consumer — a very fast-growing, very demanding consumer — that will route around your tool if you don’t provide it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Bottom Line
&lt;/h2&gt;

&lt;p&gt;RTK, a Hacker News Show HN from last month, wraps existing CLI commands to strip human-oriented formatting before it hits an agent’s context. It saved 60-90% of tokens. That tool exists because &lt;em&gt;your&lt;/em&gt; CLI doesn’t output clean data by default.&lt;/p&gt;

&lt;p&gt;Google just shipped a Workspace CLI built agents-first. CLIWatch is building benchmarks that score tools on agent-readiness — pass rates, token efficiency, turn counts — with badges for your README.&lt;/p&gt;

&lt;p&gt;The migration is happening. The question isn’t whether your CLI needs an agent-friendly interface. It’s whether you build it yourself or someone else builds a wrapper that makes you a dependency they’d rather not have.&lt;/p&gt;

&lt;p&gt;Your CLI’s next power user doesn’t read your README. It reads your &lt;code&gt;--help&lt;/code&gt; output, introspects your schema, and parses your JSON. Design for that user, or watch them move on to someone who did.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published on &lt;a href="https://www.theundercurrent.dev/p/rewrite-your-cli-for-agents-or-get?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=rewrite-your-cli-for-agents-or-get" rel="noopener noreferrer"&gt;The Undercurrent&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devtools</category>
      <category>ai</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Your Vault Is Your Moat</title>
      <dc:creator>Mei Park</dc:creator>
      <pubDate>Mon, 09 Mar 2026 18:25:56 +0000</pubDate>
      <link>https://forem.com/meimakes/your-vault-is-your-moat-56pd</link>
      <guid>https://forem.com/meimakes/your-vault-is-your-moat-56pd</guid>
      <description>&lt;p&gt;&lt;strong&gt;Your notes folder just became your most valuable business asset. Most solo builders haven’t noticed yet.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For years, “moat” in software meant network effects, proprietary data, or switching costs measured in integrations. For independent builders, it meant nothing. You had no moat. You shipped fast and hoped nobody shipped faster.&lt;/p&gt;

&lt;p&gt;That just changed. And the shift happened so quietly that the people benefiting most from it don’t even realize they’re sitting on a competitive advantage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!svT3!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0cb76f6-754a-437e-9a6b-dd5ee8caf65d_1344x896.jpeg" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbvuzk2ztze3oxgcgkrzy.jpeg" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Convergence Nobody Planned
&lt;/h2&gt;

&lt;p&gt;Three things happened in the same week that, taken together, reveal a pattern worth paying attention to.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Obsidian shipped a &lt;a href="https://news.ycombinator.com/item?id=47197267" rel="noopener noreferrer"&gt;headless sync client&lt;/a&gt; and a CLI.&lt;/strong&gt; Not a plugin — a formal programmatic interface to your vault. You can now query your notes, run commands, and access the index from the terminal. Which means an AI agent can do the same thing. Your vault just got an API, and you didn’t have to build one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cloudflare launched &lt;a href="https://blog.cloudflare.com/markdown-for-agents/" rel="noopener noreferrer"&gt;automatic markdown conversion at the CDN edge&lt;/a&gt;.&lt;/strong&gt; Any page on a Cloudflare-enabled zone can now be requested as markdown via Accept header. Their number: 80% token reduction compared to raw HTML. Markdown isn’t a developer convenience anymore. It’s becoming a first-class web content type at the infrastructure layer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://recapio.com/digest/how-i-use-obsidian-claude-code-to-run-my-life-by-greg-isenberg" rel="noopener noreferrer"&gt;Greg Isenberg told 500K subscribers&lt;/a&gt; that managing your vault IS managing your agent.&lt;/strong&gt; His argument: stop optimizing your AI workflow. Optimize your notes. Context is the bottleneck, not capability. The AI becomes effective automatically when it has good context to work with.&lt;/p&gt;

&lt;p&gt;Each of these alone is a product announcement. Together, they’re a thesis: &lt;strong&gt;plain text markdown is becoming the default interface between humans and AI agents.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Your Vault Is a Moat
&lt;/h2&gt;

&lt;p&gt;Here’s what most people miss about AI agents: the model is commodity. Everyone has access to the same Claude, the same GPT. The differentiator isn’t which model you use — it’s what context you feed it.&lt;/p&gt;

&lt;p&gt;A solo builder who’s been working in a structured vault for two years has something no competitor can replicate overnight:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Decision history.&lt;/strong&gt; Why you chose Postgres over Supabase. Why you pivoted from B2C to B2B. Why that pricing experiment failed. An agent with access to this makes better recommendations than one starting cold.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Project context.&lt;/strong&gt; Not just what you’re building, but the accumulated understanding of why. Architecture decisions, user feedback, competitive notes, abandoned approaches. This is institutional knowledge that large companies pay consultants to reconstruct.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Taste documentation.&lt;/strong&gt; Your writing style. Your design preferences. Your communication patterns. The kind of thing that takes a new hire six months to absorb, available to an agent immediately.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The switching cost isn’t the tool. It’s the accumulated understanding. Moving from Obsidian to Notion is trivial. Rebuilding two years of structured context from scratch is not.&lt;/p&gt;

&lt;p&gt;This is the same dynamic that makes a senior developer’s laptop more valuable than a junior’s, despite running the same IDE. The software is identical. The context is not.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Security Problem Nobody Wants to Talk About
&lt;/h2&gt;

&lt;p&gt;There’s a catch, and it’s a serious one.&lt;/p&gt;

&lt;p&gt;If your vault is comprehensive enough to be a genuine moat, it’s also comprehensive enough to be a genuine attack surface. &lt;a href="https://www.theregister.com/2026/03/01/nanoclaw_container_openclaw/" rel="noopener noreferrer"&gt;NanoClaw&lt;/a&gt; — a containerized fork of OpenClaw — exists specifically because its creator realized his agent could see &lt;em&gt;everything&lt;/em&gt; in his vault while running a WhatsApp sales pipeline.&lt;/p&gt;

&lt;p&gt;The vault’s power comes from comprehensive context. Comprehensive context is a massive liability if your agent gets compromised, your sync gets intercepted, or your API layer has a bug.&lt;/p&gt;

&lt;p&gt;This isn’t theoretical. It’s the fundamental tension of the vault-as-platform model. The more useful your vault is to your agent, the more damaging a breach becomes. Selective sync, containerized agents, and access scoping are going to matter a lot more than most builders currently appreciate.&lt;/p&gt;

&lt;h2&gt;
  
  
  What This Actually Means for You
&lt;/h2&gt;

&lt;p&gt;If you’re a solo builder and you’re not already working in a structured vault, start. Not because of any specific tool — because the accumulated context you build over the next 12 months is going to compound in ways that aren’t obvious yet.&lt;/p&gt;

&lt;p&gt;If you already have a vault, treat it like infrastructure. Structure matters. Naming conventions matter. The difference between a folder of scattered notes and a queryable knowledge base is the difference between a pile of parts and a machine.&lt;/p&gt;

&lt;p&gt;And if you’re evaluating tools: pick the one that gives you the most portable, agent-readable output. Markdown in a folder you control beats a proprietary database you can export from. When the next interface layer arrives — and it will — you want your context ready, not locked behind someone else’s API.&lt;/p&gt;

&lt;p&gt;Your vault isn’t a productivity system anymore. It’s a moat. The longer you build in it, the wider it gets.&lt;/p&gt;

&lt;p&gt;The builders who figured this out two years ago are already unreachable. The ones who figure it out today still have a window. The ones who figure it out next year will be playing catch-up with their own agents.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published March 6, 2026 on &lt;a href="https://www.theundercurrent.dev/p/your-vault-is-your-moat?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=your-vault-is-your-moat" rel="noopener noreferrer"&gt;The Undercurrent&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devtools</category>
      <category>ai</category>
      <category>productivity</category>
    </item>
    <item>
      <title>The One-Person Backend</title>
      <dc:creator>Mei Park</dc:creator>
      <pubDate>Mon, 09 Mar 2026 18:25:52 +0000</pubDate>
      <link>https://forem.com/meimakes/the-one-person-backend-98j</link>
      <guid>https://forem.com/meimakes/the-one-person-backend-98j</guid>
      <description>&lt;p&gt;&lt;strong&gt;You don’t need a backend team anymore.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That sentence would have been absurd five years ago. Running a production web application meant provisioning servers, managing databases, configuring load balancers, setting up CI/CD pipelines, monitoring uptime, rotating credentials, and praying nothing broke at 3 AM. The “ops tax” on a solo builder was so steep that most people either partnered up, raised money, or built something simpler than what they actually wanted.&lt;/p&gt;

&lt;p&gt;In 2026, a single developer can ship a globally distributed, offline-capable, real-time application with a database that fits in their deployment artifact. No Kubernetes. No RDS. No ops team. The infrastructure story has quietly become the most consequential shift in independent software.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://substackcdn.com/image/fetch/$s_!liFP!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa0abedbc-74dd-4f47-b13b-4a732d907597_1344x896.jpeg" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffenvqakhapcng9co5efa.jpeg" width="800" height="533"&gt;&lt;/a&gt;## The SQLite Moment&lt;br&gt;
SQLite has been around since 2000. It powers more deployed software than any other database engine. But for web applications, it was always dismissed as a toy.&lt;/p&gt;

&lt;p&gt;That changed. Turso built a distributed SQLite service (libSQL) that replicates globally. Litestream solved backup and replication. Fly.io made it trivial to run SQLite at the edge. ElectricSQL added real-time sync between local SQLite and central Postgres.&lt;/p&gt;

&lt;p&gt;The result: you can build apps where the database lives &lt;em&gt;inside&lt;/em&gt; your application and replicates automatically. No connection strings. No cold starts. Your database is a file.&lt;/p&gt;

&lt;h2&gt;
  
  
  Edge Functions Ate the Server
&lt;/h2&gt;

&lt;p&gt;The serverless revolution promised to eliminate servers. It delivered vendor lock-in instead. But edge functions are actually delivering on the original promise.&lt;/p&gt;

&lt;p&gt;Cloudflare Workers, Deno Deploy, Vercel Edge Functions all offer the same proposition: write a function, deploy it, it runs everywhere. Combined with edge-native databases, you get full-stack at the network edge.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Local-First Renaissance
&lt;/h2&gt;

&lt;p&gt;Local-first software was a niche academic concept five years ago. Now it’s becoming the default for indie apps. CRDTs matured. PowerSync, Replicache, and Zero ship production sync engines. Local-first eliminates the always-on server. Your costs approach zero at rest.&lt;/p&gt;

&lt;h2&gt;
  
  
  The New Stack
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Database:&lt;/strong&gt; SQLite + Turso&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Compute:&lt;/strong&gt; Edge functions&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Sync:&lt;/strong&gt; PowerSync or ElectricSQL&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Deployment:&lt;/strong&gt; Git push&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Monthly cost for a side project: under $20. For thousands of users: under $100.&lt;/p&gt;

&lt;h2&gt;
  
  
  What This Actually Changes
&lt;/h2&gt;

&lt;p&gt;The ops tax was the great equalizer. Now that it’s disappearing, the constraint has shifted from “can you run it” to “can you build something people want.” That’s a much better problem to have.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published March 5, 2026 on &lt;a href="https://www.theundercurrent.dev/p/the-one-person-backend?utm_source=devto&amp;amp;utm_medium=crosspost&amp;amp;utm_campaign=the-one-person-backend" rel="noopener noreferrer"&gt;The Undercurrent&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devtools</category>
      <category>ai</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
