<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: MVPBuilder_io</title>
    <description>The latest articles on Forem by MVPBuilder_io (@energetekk).</description>
    <link>https://forem.com/energetekk</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3830832%2Fd6cc0e21-a751-4618-9c33-96f26a19f7bd.jpeg</url>
      <title>Forem: MVPBuilder_io</title>
      <link>https://forem.com/energetekk</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/energetekk"/>
    <language>en</language>
    <item>
      <title>Day 4. VS Code open. Twenty minutes staring at the same function. Tab closed. You tell yourself you'll make up for it tomorrow. You don't.</title>
      <dc:creator>MVPBuilder_io</dc:creator>
      <pubDate>Thu, 23 Apr 2026 10:44:13 +0000</pubDate>
      <link>https://forem.com/energetekk/day-4-vs-code-open-twenty-minutes-staring-at-the-same-function-tab-closed-you-tell-yourself-59in</link>
      <guid>https://forem.com/energetekk/day-4-vs-code-open-twenty-minutes-staring-at-the-same-function-tab-closed-you-tell-yourself-59in</guid>
      <description>&lt;p&gt;The planning-execution gap in software development describes the condition where a developer can fully articulate what needs to be built, generate a complete implementation plan using AI, and still fail to ship — because knowledge of a system does not transfer into the daily discipline required to complete it. This is not a new problem. But AI tools have made it sharper, more visible, and — for experienced developers especially — more surprising when it hits.&lt;/p&gt;

&lt;p&gt;There is a structural description of this gap that holds across domains — someone who has studied music theory, can read notation, understands chord progressions, and still cannot sit down and play a Mozart sonata. The explanation is not intelligence or effort. It is that theoretical knowledge and practiced execution are two different systems. "Being able to play Mozart level is very different from knowing how to play piano," as a learning and career development coach put it — and the same split applies to software development. Knowing your tech stack, understanding architecture patterns, and generating a complete sprint plan does not mean you will open your editor at 9pm after your day job and actually build.&lt;/p&gt;

&lt;p&gt;AI has solved the first half. The second half is still yours.&lt;/p&gt;




&lt;h2&gt;The Overconfidence Mechanism&lt;/h2&gt;

&lt;p&gt;Hada is a Senior PM at Amazon with eight-plus years of experience working with AI systems. When she decided to transition roles, she went into the process with what she describes as high confidence. She knew the domain. She had the credentials. She had the AI tools to help her prepare.&lt;/p&gt;

&lt;p&gt;"I was very overconfident going into this process," she said later. "It took me easily nine months... after maybe 30 or 40 rejections, that's when I got this role."&lt;/p&gt;

&lt;p&gt;The tools gave her a complete preparation plan. They did not give her the daily follow-through to execute it when rejection compounded over months. More competence plus better tools did not produce less failure. It produced a sharper collision when reality did not match the plan the tools had generated.&lt;/p&gt;

&lt;p&gt;This is the overconfidence mechanism: AI tools compress the distance between not knowing what to do and having a complete roadmap. That compression feels like progress. Systematic research on human planning has documented for decades that people overestimate their ability to execute plans they themselves created — AI tools add a new variable by generating plans that feel even more credible because they were produced by something that processes information faster than the person holding them. The plan looks more complete. The gap between plan and execution stays exactly the same.&lt;/p&gt;




&lt;h2&gt;What the Numbers Confirm&lt;/h2&gt;

&lt;p&gt;METR's July 2025 study on experienced developers and AI tools found that participants completed real-world tasks 19% slower when using AI assistance — not faster. If you want the full analysis of what that means for side project development, I covered it in a &lt;a href="https://dev.to/energetekk/ai-made-experienced-devs-19-slower-heres-the-side-project-trap-that-created-3o5i"&gt;previous post&lt;/a&gt;. The short version here: the overhead of integrating AI suggestions into an existing mental model can outweigh the generation speed benefit. Experience, in this case, was a liability, not an asset.&lt;/p&gt;




&lt;h2&gt;The Checkpoint Condition&lt;/h2&gt;

&lt;p&gt;Security researcher Dr. Karsten Nohl has described a structural problem in AI deployment that offers a direct parallel here: without defined decision points where a human reviews and approves, the human role in any AI-assisted process dissolves into passive monitoring rather than active control. I made the full case for a &lt;a href="https://dev.to/energetekk/the-9010-rule-that-security-researchers-figured-out-before-developers-did-2nd0"&gt;human-in-the-loop accountability structure&lt;/a&gt; in an earlier post. What matters here is the mechanism.&lt;/p&gt;

&lt;p&gt;A checkpoint is not a check-in. A checkpoint is a point in time where something is either validated or it is not — and the absence of validation has a defined consequence. Enterprises that skip this structure end up with AI agents producing outputs that no one actually reviewed. Developers who skip this structure end up with sprint plans that no one actually enforced.&lt;/p&gt;

&lt;p&gt;Without a checkpoint, you are not running a sprint. You are running a plan that expires quietly.&lt;/p&gt;




&lt;h2&gt;Day 4&lt;/h2&gt;

&lt;p&gt;You had a plan. The plan was good — specific tasks, reasonable scope, your actual tech stack.&lt;/p&gt;

&lt;p&gt;Day 1: you set up the repo and made a list.&lt;br&gt;
Day 2: you read documentation for something you were not sure about.&lt;br&gt;
Day 3: you started the function but got interrupted.&lt;br&gt;
Day 4: VS Code open. Twenty minutes staring at the same function. Tab closed. You tell yourself you'll make up for it tomorrow.&lt;/p&gt;

&lt;p&gt;Nobody noticed. The plan did not notice. The AI that generated the plan did not notice.&lt;/p&gt;

&lt;p&gt;Knowing how to build something and being able to build it under real-world conditions are two separate competencies — the same gap that separates music theory students from performing pianists, and that separates developers with AI-generated roadmaps from developers who ship.&lt;/p&gt;

&lt;p&gt;AI coding tools eliminate the planning problem while leaving the execution problem intact: a developer can produce a technically correct 30-day roadmap in four minutes and abandon it by day three, because the tool that generated the plan has no mechanism to enforce it.&lt;/p&gt;

&lt;p&gt;This is not a motivation problem. It is a structure problem. Motivation is available — you wanted to build the thing. Structure is what was missing.&lt;/p&gt;




&lt;h2&gt;What a Deadline Actually Does&lt;/h2&gt;

&lt;p&gt;The word "deadline" sounds like pressure. What a hard deadline actually provides is visibility.&lt;/p&gt;

&lt;p&gt;A piano teacher who assigns a recital in six weeks is not adding pressure to a student's life. They are adding a structure that makes invisible daily decisions suddenly visible. Whether you practiced today matters because there is a point in six weeks where the result of every daily decision will be audible to other people in a room.&lt;/p&gt;

&lt;p&gt;Without the recital, practice is optional in a way that is very hard to feel in the moment. You can always practice tomorrow. The knowledge is not going anywhere. The gap between theory and execution remains comfortable because nothing makes it visible.&lt;/p&gt;

&lt;p&gt;A sprint is not a roadmap. A roadmap is a description of what needs to happen. A sprint is a time-bounded container with hard stops where something is either done or it is not — and a person who has reviewed it can confirm the difference.&lt;/p&gt;

&lt;p&gt;This is what AI tools cannot provide and courses do not provide: not more information about what to build, not more planning capability, but an external system with actual enforcement — daily tasks designed for your specific project context, and milestone reviews that make the gap between plan and execution visible to someone other than yourself.&lt;/p&gt;

&lt;p&gt;The music theory student who can read notation and explain chord progressions does not need more theory. They need to sit down at a piano on a fixed schedule with someone who will notice whether they played or not.&lt;/p&gt;




&lt;h2&gt;Where You Stand&lt;/h2&gt;

&lt;p&gt;Your project is probably not dead. It is probably paused in a state that feels recoverable until enough time passes that recovering it would require starting over.&lt;/p&gt;

&lt;p&gt;AI will give you a perfect plan. It won't notice when you skip Day 4.&lt;/p&gt;

&lt;p&gt;If someone asked you tomorrow what happened to your project, what would you say?&lt;/p&gt;




&lt;p&gt;Cohort #1 of MVP Builder is free. If you have a side project that is stuck and a day job that makes every evening a negotiation, the application is at mvpbuilder.io/pipeline — five steps, no pitch deck required.&lt;/p&gt;




&lt;h2&gt;Frequently Asked Questions&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Why do developers fail to ship side projects even when they know exactly what to build?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Knowing what to build does not create the daily discipline required to build it. Research on experienced developers shows that planning capability and execution follow-through are structurally separate — AI tools have improved the first while leaving the second unchanged.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is the AI planning-execution gap?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The AI planning-execution gap is the growing distance between a developer's ability to generate a complete project roadmap (now trivially easy with AI tools) and their ability to follow that roadmap to completion without external accountability structure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why did AI make some experienced developers slower, not faster?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A 2025 METR study found that experienced developers completed real-world tasks 19% slower when using AI tools — the overhead of integrating AI suggestions into an existing mental model outweighed the generation speed benefit.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is the difference between knowing how to code and being able to ship?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The same gap that separates a piano student who can read sheet music from one who can perform Mozart: theoretical competence does not automatically produce execution under pressure, deadlines, and competing priorities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What actually helps developers finish their side projects?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;External accountability structure with hard deadlines, daily calibrated tasks specific to the actual project, and human review of submitted proof — not more planning tools, not more AI-generated roadmaps, and not courses that add knowledge without enforcing output.&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>career</category>
      <category>ai</category>
      <category>sideprojects</category>
    </item>
    <item>
      <title>The 90/10 rule that security researchers figured out before developers did</title>
      <dc:creator>MVPBuilder_io</dc:creator>
      <pubDate>Thu, 16 Apr 2026 04:37:03 +0000</pubDate>
      <link>https://forem.com/energetekk/the-9010-rule-that-security-researchers-figured-out-before-developers-did-2nd0</link>
      <guid>https://forem.com/energetekk/the-9010-rule-that-security-researchers-figured-out-before-developers-did-2nd0</guid>
<description>&lt;h2&gt;The security researcher who wasn't talking about you&lt;/h2&gt;

&lt;p&gt;Dr. Karsten Nohl is a German security researcher, best known for publicly exposing critical vulnerabilities in GSM and SS7 mobile infrastructure — systems that affect how billions of phone calls are routed. He doesn't work in developer productivity. He's not affiliated with any side project tool. He was describing enterprise AI security pipelines when he said this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"The goal can't be to replace humans 100%. Let the machine do 90% of the work — but keep a human in the loop at every critical decision point. The same person who used to do the work themselves now supervises the machine."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And then, more pointedly:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"A lot of AI experiments are failing right now — exactly because people are chaining AI agents together, feeding sensible data in the front, getting a wrong result out the back."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;He was talking about enterprise security infrastructure. Not side projects. That's exactly what makes it useful — independent validation from a completely unrelated domain.&lt;/p&gt;

&lt;p&gt;The 90/10 model he described didn't emerge from a product manager optimizing conversion rates. It came from engineers trying to figure out why fully automated AI pipelines kept producing wrong outputs with high confidence.&lt;/p&gt;

&lt;p&gt;The answer was: no one was watching at the critical junctures.&lt;/p&gt;




&lt;h2&gt;The productivity paradox&lt;/h2&gt;

&lt;p&gt;In July 2025, METR published a study measuring experienced, professional developers on real software tasks — with and without AI tools. The finding: developers using AI tools were &lt;strong&gt;19% slower&lt;/strong&gt; on average.&lt;/p&gt;

&lt;p&gt;Not junior developers learning to code. Experienced engineers on real work.&lt;/p&gt;

&lt;p&gt;The follow-up findings didn't reverse this — they narrowed it. For some task types the effect is smaller than originally measured, but for complex tasks it remains directionally negative.&lt;/p&gt;

&lt;p&gt;Separately, BCG documented 39% more serious errors in AI-intensive work environments. The mechanism in both cases is the same: over-trust, reduced verification, and a diffuse sense that "the AI handled it" — even when it didn't.&lt;/p&gt;

&lt;p&gt;A developer describing this recently put it bluntly: he had 40 minutes budgeted per ticket because his manager assumed AI would speed things up. He committed to the next ticket anyway. At the end of the day, he didn't know what he had actually done.&lt;/p&gt;

&lt;p&gt;That's not a motivation problem. That's a structural problem. And it happens to be exactly what Nohl was describing — except his engineers weren't building side projects, they were running AI pipelines with production consequences.&lt;/p&gt;




&lt;h2&gt;Your brain has already checked out&lt;/h2&gt;

&lt;p&gt;Here's the part that removes the shame.&lt;/p&gt;

&lt;p&gt;Kahneman and Tversky's planning fallacy (1979) shows that humans systematically underestimate how long their own projects take and overestimate their future motivation. This is not a character flaw. It's how cognition works. You plan from best-case conditions, then execute in reality.&lt;/p&gt;

&lt;p&gt;Solo side projects are a perfect environment for this effect to compound. No deadline anyone else cares about. No one checking in. No consequence for letting the sprint slip a week. The only accountability is self-generated — and self-generated accountability is the weakest kind.&lt;/p&gt;

&lt;p&gt;There's a secondary mechanism that makes this worse: passive monitoring. When you're watching rather than doing — reviewing a plan, reading architecture docs, scanning an AI-generated task list — the brain shifts into an energy-saving mode. You're present, but not engaged. You feel like you're working. You're not building forward momentum.&lt;/p&gt;

&lt;p&gt;The planning problem is solved. What AI tools haven't touched is the execution problem — the Tuesday at 7pm when you have 45 minutes, the project is 90% done, and you open something else instead.&lt;/p&gt;

&lt;p&gt;This isn't about willpower. It's about the absence of checkpoint structure. When nothing external marks the difference between "Day 4 done" and "Day 4 skipped," the brain registers no loss. The project stays 80% complete, indefinitely.&lt;/p&gt;




&lt;h2&gt;What 90/10 actually means in practice&lt;/h2&gt;

&lt;p&gt;Back to Nohl's model. He described the architecture like this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Each of those AI agents reports back to a person who approves — and passes it on to the next."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Applied to a side project sprint, this translates directly:&lt;/p&gt;

&lt;p&gt;The 90% is automated daily continuity — prompts tailored to where you are in the build, what you've shipped so far, what's left. They arrive without you having to decide to open the project. They create a low-friction point of entry on the Tuesday at 7pm. They're not motivational content. They're structured questions that re-engage you with the actual work.&lt;/p&gt;

&lt;p&gt;The 10% is human judgment at the moments that determine whether the project ships or dies. Not every day — at the milestones. Day 13. Day 21. Day 30. Did you build what you said you'd build? Is the scope still coherent? Do you move forward or do we recalibrate?&lt;/p&gt;

&lt;p&gt;An accountability system for developers isn't about motivation. It's about creating the checkpoint structure that AI tools don't provide by default.&lt;/p&gt;

&lt;p&gt;The 90/10 model isn't a workaround — it's the architecture. Automate the daily continuity. Preserve human judgment for the moments that determine whether the project ships or dies.&lt;/p&gt;




&lt;h2&gt;One instantiation of this model&lt;/h2&gt;

&lt;p&gt;I'm testing this as a product. It's called MVP Builder.&lt;/p&gt;

&lt;p&gt;The structure is a 30-day sprint for developers with a full-time job. You apply with your project. Daily prompts are sent based on your stack, your tier (13, 21, or 30 days depending on where you are), and what you've built so far. At milestones, there's a checkpoint review before the next phase unlocks.&lt;/p&gt;

&lt;p&gt;Not an AI reviewing it. Me. Because right now, at Cohort #1, the human in the loop is the founder.&lt;/p&gt;

&lt;p&gt;That's the 10%. And it doesn't scale. That's exactly why Cohort #1 is free — the manual review is the product that I'm validating, not the automation layer.&lt;/p&gt;




&lt;h2&gt;The Gawdat objection&lt;/h2&gt;

&lt;p&gt;Mo Gawdat — former Chief Business Officer at Google X, founder of Emma AI — made a point worth taking seriously:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"If I had started Emma in 2022 it would have taken me 350 engineers and four years. It took less than three months and basically four of us."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If AI tools enable that kind of leverage, doesn't the 90/10 model become unnecessary? Can't you just ship faster and skip the checkpoint structure entirely?&lt;/p&gt;

&lt;p&gt;Steel-man accepted. Gawdat's experience is real. But the constraint set is completely different.&lt;/p&gt;

&lt;p&gt;Gawdat is a full-time founder with co-founders and twelve years of institutional knowledge from Google X. The developers in MVP Builder's ICP are running a side project on 5–10 hours per week, competing with a full-time job, without co-founders, without a team, and without anyone who will notice if the project stalls at 80%.&lt;/p&gt;

&lt;p&gt;Same AI tools. Completely different execution environment.&lt;/p&gt;

&lt;p&gt;Gawdat has the external structure built into his setup — co-founders provide daily accountability by default. A solo developer with a full-time job doesn't have that. The tools don't create it. That's the gap.&lt;/p&gt;




&lt;h2&gt;The actual question&lt;/h2&gt;

&lt;p&gt;If you've been building with AI tools and the project still isn't shipped, the uncomfortable question isn't whether the tools are good enough. They're good enough. The architecture question is what's missing.&lt;/p&gt;

&lt;p&gt;A plan is not a checkpoint system. An AI-generated task list is not a deadline. A repo with working code is not a shipped product.&lt;/p&gt;

&lt;p&gt;Cohort #1 is free. Application takes 2 minutes: &lt;a href="https://mvpbuilder.io/pipeline?utm_source=devto&amp;amp;utm_medium=essay&amp;amp;utm_campaign=cohort1" rel="noopener noreferrer"&gt;mvpbuilder.io/pipeline&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;If the 90% is in place and Day 4 still gets skipped, the question isn't whether AI is useful — it's whether anyone is watching.&lt;/h2&gt;

&lt;h2&gt;FAQ&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Why are experienced developers slower with AI tools?&lt;/strong&gt;&lt;br&gt;
The METR study (July 2025) found experienced developers were 19% slower on real software tasks when using AI tools. The primary causes are over-trust in AI output, increased verification overhead, and reduced active engagement with the work. BCG separately documented 39% more serious errors in AI-intensive environments — consistent with a pattern where developers assume the AI handled something it didn't.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is the 90/10 model for AI-assisted development?&lt;/strong&gt;&lt;br&gt;
The 90/10 model — described independently by security researcher Dr. Karsten Nohl in the context of enterprise AI pipelines — proposes that AI should handle approximately 90% of routine, repeatable work, while a human remains in the loop at every critical decision point. Applied to software development: automate daily continuity (prompts, reminders, task framing), preserve human judgment for milestone reviews that determine whether a project ships or stalls.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is the planning fallacy and why does it affect side projects?&lt;/strong&gt;&lt;br&gt;
The planning fallacy (Kahneman &amp;amp; Tversky, 1979) describes the systematic human tendency to underestimate effort and overestimate future motivation when planning personal projects. Solo side projects are especially vulnerable because there are no external deadlines, no one watching, and no consequence for slipping the timeline. The result: projects stay 80% complete indefinitely, not because the developer isn't capable, but because there's no external structure creating a meaningful difference between "done today" and "done next week."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is MVP Builder?&lt;/strong&gt;&lt;br&gt;
MVP Builder is a structured 30-day sprint for developers with a full-time job who have a stalled side project. Participants apply with their project, receive daily prompts calibrated to their build stage and tech stack, and go through milestone checkpoint reviews at Days 13, 21, and 30 depending on their tier. Cohort #1 is free. The product tests the hypothesis that what developers need isn't a better plan — it's a checkpoint system that holds them to the one they already have.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;The checkpoint model described here connects to a broader pattern — the planning-execution gap that AI tools create. I explored that in &lt;a href="https://dev.to/energetekk/day-4-vs-code-open-twenty-minutes-staring-at-the-same-function-tab-closed-you-tell-yourself-59in"&gt;a follow-up post&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>development</category>
      <category>productivity</category>
      <category>ai</category>
      <category>sideprojects</category>
    </item>
    <item>
      <title>Buildspace shut down. AI got better. Developers are still not shipping.</title>
      <dc:creator>MVPBuilder_io</dc:creator>
      <pubDate>Mon, 13 Apr 2026 04:37:33 +0000</pubDate>
      <link>https://forem.com/energetekk/buildspace-shut-down-ai-got-better-developers-are-still-not-shipping-124l</link>
      <guid>https://forem.com/energetekk/buildspace-shut-down-ai-got-better-developers-are-still-not-shipping-124l</guid>
      <description>&lt;p&gt;In August 2024, Buildspace shut down.&lt;/p&gt;

&lt;p&gt;Buildspace wasn't a course. It wasn't an accelerator. It was a structure — a deadline, a cohort, someone watching. Farza called it "a place to build things." What he actually built was external accountability at scale.&lt;/p&gt;

&lt;p&gt;When it closed, tens of thousands of developers lost the one thing that had been making them ship.&lt;/p&gt;

&lt;p&gt;Then, six months later, every major AI coding tool got dramatically better.&lt;/p&gt;

&lt;p&gt;And somehow, developers are still not finishing their side projects.&lt;/p&gt;




&lt;h3&gt;The explanation nobody wants to hear&lt;/h3&gt;

&lt;p&gt;In July 2025, METR published a study that should have been a bigger deal.&lt;/p&gt;

&lt;p&gt;They measured the productivity of experienced, professional developers on real software tasks — with and without AI tools. The result: developers using AI tools were &lt;strong&gt;19% slower&lt;/strong&gt; on average. Not junior developers. Not people learning to code. Experienced engineers, on real work.&lt;/p&gt;

&lt;p&gt;The immediate reaction was what you'd expect. Denial. "That's not my experience." "The study is flawed." "Wait for better models."&lt;/p&gt;

&lt;p&gt;But a follow-up study in February 2026 didn't reverse the finding. It narrowed it — the effect is smaller than originally measured, but still directionally negative for complex tasks.&lt;/p&gt;

&lt;p&gt;The uncomfortable explanation isn't that AI tools are bad.&lt;/p&gt;

&lt;p&gt;It's that AI tools solved the wrong problem.&lt;/p&gt;




&lt;h3&gt;The planning problem vs. the execution problem&lt;/h3&gt;

&lt;p&gt;Every stuck developer I've talked to has the same story. They know what to build. They have a tech stack. They've probably sketched the architecture in a notebook, a Notion doc, or a Claude conversation.&lt;/p&gt;

&lt;p&gt;The problem isn't planning. It was never planning.&lt;/p&gt;

&lt;p&gt;The problem is the Tuesday at 7pm when you have 45 minutes, you open your IDE, and somehow you end up watching something else entirely. Or you do open the project — and you spend the time refactoring a file you've already refactored twice, because starting the thing you actually need to build is harder than it looks.&lt;/p&gt;

&lt;p&gt;AI tools made the planning step faster, cheaper, and more detailed than ever. You can get a full architecture in 10 minutes. A database schema. A component tree. A deployment strategy.&lt;/p&gt;

&lt;p&gt;None of that generates the discipline to keep showing up for the next 30 days.&lt;/p&gt;

&lt;p&gt;Buildspace understood this. The product wasn't the curriculum. It was the cohort, the checkpoint, and the fact that someone would notice if you disappeared on Day 4.&lt;/p&gt;

&lt;p&gt;When Buildspace closed, nobody replaced what it actually was.&lt;/p&gt;




&lt;h3&gt;What happens without external structure&lt;/h3&gt;

&lt;p&gt;The research on this goes deeper than the METR study.&lt;/p&gt;

&lt;p&gt;Kahneman and Tversky's planning fallacy (1979) shows that humans systematically underestimate how long tasks take — and overestimate their future motivation. This isn't a character flaw. It's how cognition works. You plan from a best-case scenario, then execute in reality.&lt;/p&gt;

&lt;p&gt;Solo side projects are a perfect storm for this. No deadline anyone else cares about. No one watching. No consequence for slipping the timeline. The only accountability is self-generated — and self-generated accountability is the weakest kind.&lt;/p&gt;

&lt;p&gt;AI tools made this worse, not better. The gap between "I have a plan" and "I shipped" is not a knowledge problem. AI closed the knowledge gap. The execution gap got wider.&lt;/p&gt;




&lt;h3&gt;What I'm testing&lt;/h3&gt;

&lt;p&gt;I'm running an experiment on this. It's called MVP Builder.&lt;/p&gt;

&lt;p&gt;The idea is simple: a structured 30-day sprint for developers with a full-time job. You apply with your project. You get daily prompts tailored to your stack and where you are in the sprint. And there are milestone checkpoints where someone reviews your progress before you move forward.&lt;/p&gt;

&lt;p&gt;Not an AI reviewing it. Me. Because right now, at Cohort #1, the human in the loop is the founder.&lt;/p&gt;

&lt;p&gt;That doesn't scale. That's exactly why Cohort #1 is free.&lt;/p&gt;

&lt;p&gt;I'm not selling a solution. I'm testing a hypothesis: that what developers with side projects actually need isn't a better plan — it's a system that holds them to the one they already have.&lt;/p&gt;

&lt;p&gt;If that resonates: &lt;a href="https://mvpbuilder.io/pipeline?utm_source=devto&amp;amp;utm_medium=essay&amp;amp;utm_campaign=cohort1" rel="noopener noreferrer"&gt;mvpbuilder.io/pipeline&lt;/a&gt;. Applications are open. 8 spots. No credit card.&lt;/p&gt;

&lt;p&gt;If it doesn't: I'd still genuinely like to know what &lt;em&gt;has&lt;/em&gt; worked for you. The comments are the interesting part.&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>career</category>
      <category>ai</category>
      <category>webdev</category>
    </item>
    <item>
      <title>The applicant who didn't reply taught me more than the ones who did.</title>
      <dc:creator>MVPBuilder_io</dc:creator>
      <pubDate>Fri, 03 Apr 2026 07:13:21 +0000</pubDate>
      <link>https://forem.com/energetekk/the-applicant-who-didnt-reply-taught-me-more-than-the-ones-who-did-141l</link>
      <guid>https://forem.com/energetekk/the-applicant-who-didnt-reply-taught-me-more-than-the-ones-who-did-141l</guid>
      <description>&lt;p&gt;Two weeks ago, a developer with a real project applied to MVP Builder's beta cohort.&lt;/p&gt;

&lt;p&gt;The project was legitimate. The tech stack was specific. They'd clearly already built something — not just an idea in a Notion doc.&lt;/p&gt;

&lt;p&gt;I sent a qualification email. One real question: "Is this ending up at a URL or a local download?"&lt;/p&gt;

&lt;p&gt;Silence.&lt;/p&gt;

&lt;p&gt;Seven days later I sent a follow-up. One sentence. No pressure, no countdown, just: "Still here if you want to jump in."&lt;/p&gt;

&lt;p&gt;More silence.&lt;/p&gt;

&lt;p&gt;I closed the spot this week.&lt;/p&gt;

&lt;h2&gt;The thing I kept almost missing&lt;/h2&gt;

&lt;p&gt;My first reaction was frustration. I'd spent time on the application, written a personal email, held a spot.&lt;/p&gt;

&lt;p&gt;Then I realized: the silence is the answer. And it's probably the most useful data I collected this week.&lt;/p&gt;

&lt;p&gt;Here's why.&lt;/p&gt;

&lt;p&gt;The entire premise of MVP Builder is that the blocking problem isn't technical ability — it's consistency under no external pressure. Showing up on Day 4 when you don't feel like it. Responding to a daily prompt when work was exhausting. Moving forward when there's no one immediately watching.&lt;/p&gt;

&lt;p&gt;A developer who doesn't respond to two emails over two weeks isn't a bad person. But they're showing you exactly what happens when the stakes are low and the friction is minimal. If you don't reply to one email — one, with no deadline — how do you respond to Day 11 when the sprint gets hard?&lt;/p&gt;

&lt;p&gt;This is the actual sprint problem. Not the technical parts.&lt;/p&gt;

&lt;h2&gt;What the selection process is really for&lt;/h2&gt;

&lt;p&gt;I thought I was running a qualification process to find good projects. Turns out I'm also running a behavioral filter.&lt;/p&gt;

&lt;p&gt;The developers who reply quickly, who ask clarifying questions, who tell me their stack without being asked — those are the ones who are already in motion. The sprint doesn't need to create that momentum from scratch. It channels it.&lt;/p&gt;

&lt;p&gt;The ones who go quiet aren't necessarily unqualified on paper. But shipping a product requires dozens of decisions under low motivation and zero external accountability. The application process is the smallest version of that test.&lt;/p&gt;

&lt;p&gt;A lot of early-stage products gate behind an application for marketing reasons — exclusivity, perceived value. I started doing it for that reason too. Now I think the filtering is actually the feature.&lt;/p&gt;

&lt;h2&gt;
  
  
  The 48-hour heuristic
&lt;/h2&gt;

&lt;p&gt;Not scientifically validated. But: if someone doesn't reply to a one-sentence follow-up within 48 hours when there's a free cohort spot on the table — I move on.&lt;/p&gt;

&lt;p&gt;Not because they're a bad candidate. Because I'm optimizing for sprint completion rate, not cohort size. A sprint that ends with 3 people who shipped is worth more to the next cohort than one with 8 people who dropped out at Day 10.&lt;/p&gt;

&lt;p&gt;The beta's goal is proof: that this format produces finished products. One person shipping and able to say "I went from stuck to deployed in 21 days" is the asset. Eight half-finished sprints is noise.&lt;/p&gt;

&lt;p&gt;So the silence was useful. It made the decision simple.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where I am now
&lt;/h2&gt;

&lt;p&gt;Two applications this week. Both closed — one too early in the process, one via the silence rule.&lt;/p&gt;

&lt;p&gt;Still recruiting for Cohort #1. Five to eight spots. Free.&lt;/p&gt;

&lt;p&gt;The pipeline is open. The application takes about four minutes. I respond personally to every one.&lt;/p&gt;

&lt;p&gt;If you've got a project that's been "almost done" for more than a month, that's probably the tier you need.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Cohort #1 is free:&lt;/strong&gt; &lt;a href="https://mvpbuilder.io/pipeline?utm_source=devto&amp;amp;utm_medium=article&amp;amp;utm_campaign=cohort1&amp;amp;utm_content=post4" rel="noopener noreferrer"&gt;https://mvpbuilder.io/pipeline?utm_source=devto&amp;amp;utm_medium=article&amp;amp;utm_campaign=cohort1&amp;amp;utm_content=post4&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Building in public. Previous post: AI made experienced devs 19% slower. Here's the side-project trap it created.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>indiehacker</category>
      <category>buildinpublic</category>
      <category>sideprojects</category>
      <category>productivity</category>
    </item>
    <item>
      <title>AI made experienced devs 19% slower. Here's the side-project trap it created.</title>
      <dc:creator>MVPBuilder_io</dc:creator>
      <pubDate>Fri, 27 Mar 2026 12:56:07 +0000</pubDate>
      <link>https://forem.com/energetekk/ai-made-experienced-devs-19-slower-heres-the-side-project-trap-that-created-3o5i</link>
      <guid>https://forem.com/energetekk/ai-made-experienced-devs-19-slower-heres-the-side-project-trap-that-created-3o5i</guid>
      <description>&lt;h2&gt;
  
  
  METR measured it in 2025: senior devs with AI coding assistants worked 19% slower and thought they were 43% faster. For side projects, that gap can quietly kill your product.
&lt;/h2&gt;




&lt;p&gt;In July 2025, METR ran a controlled trial with experienced software developers using AI coding assistants.&lt;/p&gt;

&lt;p&gt;The result: they worked &lt;strong&gt;19% slower&lt;/strong&gt; than without AI tools. Not faster.&lt;/p&gt;

&lt;p&gt;Worse: they &lt;em&gt;thought&lt;/em&gt; they were working &lt;strong&gt;43% faster&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;A 62-percentage-point gap between perception and reality.&lt;/p&gt;

&lt;p&gt;I kept thinking about what this means for side projects specifically — because I think it matters more there than anywhere else.&lt;/p&gt;

&lt;h2&gt;
  
  
  The planning fallacy didn't go away
&lt;/h2&gt;

&lt;p&gt;Daniel Kahneman named it in 1979: we systematically underestimate how long our own tasks take, and overestimate how much we'll get done. External reference points help. Deadlines help. Someone watching helps.&lt;/p&gt;

&lt;p&gt;The assumption was: AI will fix this. AI will plan better, build faster, reduce rework.&lt;/p&gt;

&lt;p&gt;METR's data says otherwise. Not because AI tools are bad — they're genuinely capable. But because the planning fallacy is a &lt;em&gt;cognitive&lt;/em&gt; pattern, not a &lt;em&gt;speed&lt;/em&gt; problem. The issue isn't how fast you can execute a decision. It's whether the decision was right in the first place.&lt;/p&gt;

&lt;h2&gt;
  
  
  What actually changed
&lt;/h2&gt;

&lt;p&gt;AI tools like Claude Code, Cursor, and Codex shifted something real. The problem isn't "I can't build this feature." You can build almost anything faster than two years ago.&lt;/p&gt;

&lt;p&gt;The problem shifted:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;"I'm building very fast in the wrong direction."&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Without structure, every session starts with: &lt;em&gt;what do I actually work on today?&lt;/em&gt; With AI, you can execute that decision faster — but if the decision is wrong, you've just done more damage more efficiently.&lt;/p&gt;

&lt;p&gt;For a solo dev with a full-time job, this compounds. You have 90 minutes on a Tuesday evening. You fire up Claude Code. It helps you build. But toward what? The commitment is to the project in your head — not to the specific thing that makes a user pay.&lt;/p&gt;

&lt;h2&gt;
  
  
  The side project-specific problem
&lt;/h2&gt;

&lt;p&gt;Side projects don't have a PM. No sprint planning. No "why are we building this?" meeting that forces articulation. No one asking "is this the right thing to ship this week?"&lt;/p&gt;

&lt;p&gt;For a full-time team, structure is often annoying overhead. For a solo dev with 8 hours a week, it's the difference between shipping in three months or archiving in six.&lt;/p&gt;

&lt;p&gt;AI makes you more productive within a session. It doesn't know that you skipped Day 4. It doesn't know that you spent your last three sessions on a feature nobody asked for. It won't ask you "is this the thing that makes someone pay?"&lt;/p&gt;

&lt;p&gt;That accountability — the external forcing function — has to come from somewhere else.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I built
&lt;/h2&gt;

&lt;p&gt;I ran into this loop three times across three different products. I started building a system to break it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;MVP Builder&lt;/strong&gt; is a structured 30-day sprint with one daily AI prompt, delivered to your inbox each morning. The prompt isn't generic. It knows your project — your stack, what's built, what's left, what you said yesterday — and asks you to move one specific thing forward.&lt;/p&gt;

&lt;p&gt;Three tiers, based on where you actually are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Bronze (13 days):&lt;/strong&gt; Idea only → working prototype&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Silver (21 days):&lt;/strong&gt; Started but stuck → shippable product&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gold (30 days):&lt;/strong&gt; Almost done → actually shipped&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The structure is the feature. The daily prompt is the external "what do I work on today?" so you don't spend your 90 minutes answering that question yourself.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where I am now
&lt;/h2&gt;

&lt;p&gt;Cohort #1 is free. Beta. 5–8 spots. I'm in Day 10 of recruiting.&lt;/p&gt;

&lt;p&gt;The product works — the cron system, the prompt pipeline, the milestone tracking. The part I'm still figuring out is distribution. Which is its own kind of side project problem.&lt;/p&gt;




&lt;p&gt;If you've shipped something after a long "almost done" phase — or if you're currently in one — I'd genuinely like to know what finally broke the loop.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cohort #1 is free:&lt;/strong&gt; &lt;a href="https://mvpbuilder.io/pipeline?utm_source=devto&amp;amp;utm_medium=article&amp;amp;utm_campaign=cohort1&amp;amp;utm_content=post3" rel="noopener noreferrer"&gt;mvpbuilder.io/pipeline&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Building in public. Previous post: &lt;a href="https://dev.to/energetekk/i-built-3-mvps-that-never-shipped-heres-what-i-learned-18cp"&gt;I built 3 MVPs that never shipped. Here's what I learned.&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you want to understand why this slowdown compounds on side projects, and what the piano problem has to do with it, I followed up on this in &lt;a href="https://dev.to/energetekk/day-4-vs-code-open-twenty-minutes-staring-at-the-same-function-tab-closed-you-tell-yourself-59in"&gt;a later post&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>buildinpublic</category>
      <category>indiehacker</category>
    </item>
    <item>
      <title>I built 3 MVPs that never shipped. Here's what I learned.</title>
      <dc:creator>MVPBuilder_io</dc:creator>
      <pubDate>Thu, 19 Mar 2026 12:46:54 +0000</pubDate>
      <link>https://forem.com/energetekk/i-built-3-mvps-that-never-shipped-heres-what-i-learned-18cp</link>
      <guid>https://forem.com/energetekk/i-built-3-mvps-that-never-shipped-heres-what-i-learned-18cp</guid>
      <description>&lt;p&gt;I want to tell you about three projects I spent hundreds of hours on that nobody ever used. Not because I ran out of time. Because I never shipped them.&lt;/p&gt;

&lt;h2&gt;
  
  
  The three projects
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Project 1: Freelancer time tracker&lt;/strong&gt;&lt;br&gt;
Core timer working within two weeks. Then: reporting dashboard, CSV export, team features, client portal — for a product with zero users. Archived when I changed jobs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Project 2: Notion template marketplace&lt;/strong&gt;&lt;br&gt;
60% done before the backend rewrites started. "The architecture wasn't clean enough." Month four: clean code, no product, no energy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Project 3: AI habit tracker&lt;/strong&gt;&lt;br&gt;
Built it, used it myself for two weeks, stopped when I couldn't answer "why would anyone pay for this?"&lt;/p&gt;

&lt;p&gt;Three codebases on GitHub. Zero shipped products.&lt;/p&gt;
&lt;h2&gt;
  
  
  The pattern I kept missing
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;I was optimizing for building, not for shipping.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Without a hard external forcing function, the project always slipped. Re-entry friction accumulated. Every session started with: &lt;em&gt;what do I work on today?&lt;/em&gt; If that question took 20 minutes to answer — or worse, if I answered it wrong and spent 3 hours on something that didn't move the needle — I'd slowly start avoiding the project altogether.&lt;/p&gt;

&lt;p&gt;Eventually it entered "almost done permanently." Touching it meant confronting how much was left. So I just didn't.&lt;/p&gt;
&lt;h2&gt;
  
  
  How I fixed it — the technical side
&lt;/h2&gt;

&lt;p&gt;I built MVP Builder: a structured 30-day sprint with one focused daily prompt, delivered 07:00–09:00 local time.&lt;/p&gt;

&lt;p&gt;The constraint: &lt;strong&gt;Vercel's free tier allows a maximum of 2 cron jobs, each running at most once per day.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;My solution: two cron jobs, one for Europe and one for the Americas:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;0 6 * * *   → EU: CET 07:00, CEST 08:00
0 14 * * *  → Americas: MST 07:00, CST 08:00, EST 09:00
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Timezone filtering (app-side, not cron-side)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;toZonedTime&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;date-fns-tz&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;isInDeliveryWindow&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;timezone&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;zoned&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;toZonedTime&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="nx"&gt;timezone&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;hour&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;zoned&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getHours&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;hour&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="mi"&gt;7&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;hour&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mi"&gt;9&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// fail-open: send rather than skip&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Why app-side filtering instead of more cron jobs? Because cron runs at a fixed UTC time — it can't know each user's local hour. The filter runs per-user at send time.&lt;/p&gt;

&lt;p&gt;Why &lt;code&gt;fail-open&lt;/code&gt;? An invalid timezone (typo, legacy string) shouldn't mean the user never gets their prompt. Missing one day is worse than getting it slightly off-window.&lt;/p&gt;
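
&lt;p&gt;To make the mechanics concrete, here's a minimal standalone sketch of the per-user filtering step (the function name and the plain-array input are mine; the real handler reads users from the database). It uses &lt;code&gt;Intl&lt;/code&gt; directly instead of &lt;code&gt;date-fns-tz&lt;/code&gt; so it runs with zero dependencies:&lt;/p&gt;

```javascript
// Sketch: filter a list of users down to those whose local wall-clock
// hour is 07 or 08 (i.e. the 07:00-09:00 delivery window).
function usersInWindow(users, now = new Date()) {
  return users.filter((user) => {
    try {
      // Intl gives the wall-clock hour in the user's zone, no extra deps.
      const hour = Number(
        new Intl.DateTimeFormat('en-US', {
          timeZone: user.timezone,
          hour: 'numeric',
          hour12: false,
        }).format(now)
      );
      return [7, 8].includes(hour); // same window as hour >= 7, hour below 9
    } catch {
      return true; // fail-open, as in isInDeliveryWindow above
    }
  });
}
```

&lt;p&gt;Same behavior as &lt;code&gt;isInDeliveryWindow&lt;/code&gt;: in-window users pass, and an unrecognized timezone fails open rather than silently dropping someone.&lt;/p&gt;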

&lt;h3&gt;
  
  
  Idempotency: 20h lookback, not 24h
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;twentyHoursAgo&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;alreadySent&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;supabase&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;users&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;select&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;id&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;eq&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;id&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;gt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;last_prompt_sent_at&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;twentyHoursAgo&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toISOString&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;single&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;alreadySent&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// already sent today&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Why 20h and not 24h?&lt;/strong&gt; DST transitions. A 24h lookback on a DST-switch day can either skip a user or double-send. 20h is safely within any DST shift (max ±1h) while still preventing duplicates within the same day.&lt;/p&gt;
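
&lt;p&gt;The lookback logic boils down to a single comparison. A standalone sketch (the helper name is mine; in production this is the Supabase query above):&lt;/p&gt;

```javascript
// Sketch: has enough time passed since the last send?
function shouldSend(lastSentAt, now, lookbackHours = 20) {
  if (!lastSentAt) return true; // never sent before
  const lookbackMs = lookbackHours * 60 * 60 * 1000;
  return now.getTime() - lastSentAt.getTime() >= lookbackMs;
}
```

&lt;p&gt;With sends roughly 24h apart, the 20h threshold blocks a same-day duplicate (for example, the 14:00 UTC cron firing 8 hours after the 06:00 one) while still allowing the next day's send even if DST shifts it an hour earlier.&lt;/p&gt;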

&lt;h3&gt;
  
  
  Timezone capture at signup (silent, no UI change)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;detected&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;Intl&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;DateTimeFormat&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;resolvedOptions&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nx"&gt;timeZone&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="c1"&gt;// → include in form payload, no UI element needed&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One line. No dropdown. No user decision fatigue. Works on every modern browser.&lt;/p&gt;

&lt;h2&gt;
  
  
  The schema (relevant columns)
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;ALTER&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;users&lt;/span&gt; &lt;span class="k"&gt;ADD&lt;/span&gt; &lt;span class="k"&gt;COLUMN&lt;/span&gt; &lt;span class="n"&gt;timezone&lt;/span&gt; &lt;span class="nb"&gt;TEXT&lt;/span&gt; &lt;span class="k"&gt;DEFAULT&lt;/span&gt; &lt;span class="s1"&gt;'Europe/Berlin'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;ALTER&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;users&lt;/span&gt; &lt;span class="k"&gt;ADD&lt;/span&gt; &lt;span class="k"&gt;COLUMN&lt;/span&gt; &lt;span class="n"&gt;last_prompt_sent_at&lt;/span&gt; &lt;span class="n"&gt;TIMESTAMPTZ&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;DEFAULT 'Europe/Berlin'&lt;/code&gt; matters: new users without a detected timezone still get a prompt at a reasonable hour rather than silently skipping.&lt;/p&gt;
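
&lt;p&gt;If you want belt-and-suspenders on top of the column default, a small server-side guard can validate the client-detected value before storing it (a sketch; the function name and fallback behavior are my assumption, not the production code):&lt;/p&gt;

```javascript
// Sketch: keep a client-supplied timezone only if Intl recognizes it,
// otherwise fall back to the same default as the database column.
function normalizeTimezone(candidate, fallback = 'Europe/Berlin') {
  if (typeof candidate !== 'string') return fallback;
  try {
    // Intl throws a RangeError for unknown IANA zone identifiers.
    new Intl.DateTimeFormat('en-US', { timeZone: candidate });
    return candidate;
  } catch {
    return fallback; // mirrors DEFAULT 'Europe/Berlin' above
  }
}
```

&lt;p&gt;&lt;code&gt;Intl&lt;/code&gt; rejects unknown IANA identifiers with a &lt;code&gt;RangeError&lt;/code&gt;, so anything that survives this check is a timezone the delivery filter can actually use.&lt;/p&gt;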

&lt;h2&gt;
  
  
  What's different this time
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;I shipped it.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;$0 MRR. Beta cohort is free. The product that helps developers ship has to ship first — otherwise the whole premise falls apart.&lt;/p&gt;

&lt;p&gt;Three tiers based on where you actually are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Bronze (13 days):&lt;/strong&gt; Idea → working prototype&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Silver (21 days):&lt;/strong&gt; Started but stuck → shippable product&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gold (30 days):&lt;/strong&gt; Almost done → actually shipped&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each morning: one focused prompt, 30–90 minutes max. No sprawling to-do lists. One milestone checkpoint with proof of work. That's it.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Cohort #1 is free. 5–8 spots: &lt;a href="https://mvpbuilder.io/pipeline?utm_source=devto&amp;amp;utm_medium=article&amp;amp;utm_campaign=cohort1" rel="noopener noreferrer"&gt;mvpbuilder.io/pipeline&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you've shipped something after a long struggle — or if you're currently stuck at "almost done" — drop it in the comments. I read every one.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Building in public. Will post updates on what the beta cohort looks like.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>nextjs</category>
      <category>indiehacker</category>
      <category>buildinpublic</category>
    </item>
    <item>
      <title>I have 8 unfinished side projects. Here's what I learned.</title>
      <dc:creator>MVPBuilder_io</dc:creator>
      <pubDate>Wed, 18 Mar 2026 07:50:43 +0000</pubDate>
      <link>https://forem.com/energetekk/i-have-8-unfinished-side-projects-heres-what-i-learned-1mli</link>
      <guid>https://forem.com/energetekk/i-have-8-unfinished-side-projects-heres-what-i-learned-1mli</guid>
      <description>&lt;p&gt;I counted them last month. Eight. Eight folders on my hard drive with names like &lt;code&gt;invoice-tool-v2&lt;/code&gt;, &lt;code&gt;habit-tracker-final&lt;/code&gt;, &lt;code&gt;freelance-dashboard-REAL&lt;/code&gt;. All started with energy. All dead somewhere between week two and week four.&lt;/p&gt;

&lt;p&gt;I have a full-time job. I'm a decent developer. I'm not lazy.&lt;/p&gt;

&lt;p&gt;So what's actually going on?&lt;/p&gt;




&lt;h2&gt;
  
  
  The pattern I kept ignoring
&lt;/h2&gt;

&lt;p&gt;Every project died the same way.&lt;/p&gt;

&lt;p&gt;Week one: excited, building fast, everything feels possible. Week two: first real obstacle. I "take a break" to think. Week three: I open the folder, feel vaguely guilty, close it again.&lt;/p&gt;

&lt;p&gt;Week four: it's over. I just haven't admitted it yet.&lt;/p&gt;

&lt;p&gt;The problem wasn't skill. It wasn't the tech stack. It wasn't even time — I have 30-45 minutes most evenings.&lt;/p&gt;

&lt;p&gt;The problem was structure. I had no system that forced me to make the next decision when motivation ran out. And motivation always runs out.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I tried (and why it didn't work)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Notion boards.&lt;/strong&gt; Great for the first three days. Then the board becomes the project.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;YouTube tutorials.&lt;/strong&gt; Learned a lot. Shipped nothing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"I'll do it on the weekend."&lt;/strong&gt; I have a full-time job. Weekends don't work the way I think they will on Tuesday.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Accountability partners.&lt;/strong&gt; The other person always dropped off first. Or I did.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deadlines I set for myself.&lt;/strong&gt; Completely ignored, every time.&lt;/p&gt;

&lt;p&gt;A deadline you set for yourself is just a suggestion.&lt;/p&gt;




&lt;h2&gt;
  
  
  The real bottleneck
&lt;/h2&gt;

&lt;p&gt;At some point I stopped asking "what should I build?" and started asking "why do I never finish anything?"&lt;/p&gt;

&lt;p&gt;The answer wasn't inspiring. I needed external structure. Not coaching, not a course — just something that tells me what to do next today, specific to my project, and holds me to a milestone I can't quietly move.&lt;/p&gt;

&lt;p&gt;That's not a personality flaw. That's just how accountability works. Most people need some version of it.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I'm testing now
&lt;/h2&gt;

&lt;p&gt;I built &lt;strong&gt;MVP Builder&lt;/strong&gt; — a structured 30-day sprint for solo developers with full-time jobs.&lt;/p&gt;

&lt;p&gt;The idea is simple: every morning you get a single AI prompt tailored to your specific project and where you are in the sprint. Not generic advice. Not a framework to study. One concrete task for today.&lt;/p&gt;

&lt;p&gt;At day 13, 21, or 30 (depending on your track — Bronze, Silver, Gold), you submit a milestone proof. Not a vague "I made progress" — a working link, a video, something real. It gets reviewed.&lt;/p&gt;

&lt;p&gt;I'm not going to tell you it's the perfect solution. It's a beta. Cohort #1 is free. I'm testing whether structured external accountability actually moves the needle for developers like me.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I've learned from the 8 dead projects
&lt;/h2&gt;

&lt;p&gt;A few things that are obvious in hindsight:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Shipping something imperfect beats planning something perfect.&lt;/strong&gt; Every time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The gap between "almost done" and "done" is psychological, not technical.&lt;/strong&gt; The last 20% is where motivation collapses and you need a different fuel.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solo builders don't fail because they lack ideas or skill.&lt;/strong&gt; They fail because nothing forces them to make the next decision when they don't feel like it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A deadline you didn't set yourself is worth ten you did.&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;If any of this resonates — if you have your own graveyard of almost-finished projects — I'm looking for 5 to 10 developers for the first cohort.&lt;/p&gt;

&lt;p&gt;Free. No credit card. You just need an idea and 30-45 minutes a day.&lt;/p&gt;

&lt;p&gt;→ &lt;a href="https://mvpbuilder.io/pipeline" rel="noopener noreferrer"&gt;mvpbuilder.io/pipeline&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;What killed your last side project? Genuinely curious.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>sideprojects</category>
      <category>buildinpublic</category>
      <category>productivity</category>
      <category>devjournal</category>
    </item>
  </channel>
</rss>
