<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Alex Cloudstar</title>
    <description>The latest articles on Forem by Alex Cloudstar (@alexcloudstar).</description>
    <link>https://forem.com/alexcloudstar</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1190670%2F18910089-3a37-4072-9b4c-289211f053eb.JPG</url>
      <title>Forem: Alex Cloudstar</title>
      <link>https://forem.com/alexcloudstar</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/alexcloudstar"/>
    <language>en</language>
    <item>
      <title>Running Local AI Models for Coding in 2026: When Cloud Tools Are Not the Answer</title>
      <dc:creator>Alex Cloudstar</dc:creator>
      <pubDate>Sat, 04 Apr 2026 11:25:25 +0000</pubDate>
      <link>https://forem.com/alexcloudstar/running-local-ai-models-for-coding-in-2026-when-cloud-tools-are-not-the-answer-gfb</link>
      <guid>https://forem.com/alexcloudstar/running-local-ai-models-for-coding-in-2026-when-cloud-tools-are-not-the-answer-gfb</guid>
      <description>&lt;p&gt;I pay for Claude Pro. I pay for a Cursor subscription. I have an Anthropic API key that costs me somewhere between $80 and $200 a month depending on how deep into agentic coding I go. And last month, I started running a local model on my MacBook for about 40% of my coding tasks.&lt;/p&gt;

&lt;p&gt;Not because the local models are better. They are not. Not because I am trying to save money, although that is a nice side effect. I started because I was on a flight from Bucharest to London, had no internet for three hours, and realized that my entire development workflow had become dependent on a connection to someone else's servers.&lt;/p&gt;

&lt;p&gt;That bothered me more than it should have.&lt;/p&gt;

&lt;p&gt;I am not here to tell you that local AI models are replacing cloud tools. They are not, and anyone who says otherwise is either selling something or has not tried to use a 7B parameter model for complex architectural reasoning. What I am here to tell you is that local models have reached a point where they are genuinely useful for a specific set of tasks, and understanding when to use them versus when to use cloud tools is becoming a real skill.&lt;/p&gt;




&lt;h2&gt;
  
  
  The State of Local AI for Coding in Q1 2026
&lt;/h2&gt;

&lt;p&gt;The numbers tell a clear story. Ollama, the most popular tool for running LLMs locally, hit 52 million monthly downloads in Q1 2026. That is a 520x increase from 100,000 downloads in Q1 2023. This is not a niche hobby anymore. Developers are doing this at scale.&lt;/p&gt;

&lt;p&gt;The reason is straightforward: the models got good enough. Not as good as Claude Opus or GPT-5, but good enough for a meaningful percentage of everyday coding tasks.&lt;/p&gt;

&lt;p&gt;Qwen3-Coder from Alibaba is the one that changed my mind. It uses a mixture-of-experts architecture that activates only 3 billion parameters from an 80 billion total. The result is a model that runs on consumer hardware with performance that sits surprisingly close to models 10 to 20 times larger on coding benchmarks. DeepSeek R1 14B is another strong option, especially for reasoning-heavy tasks. And Meta's Llama 4 is competitive enough that it has become the default starting point for a lot of developers experimenting with local setups.&lt;/p&gt;

&lt;p&gt;If you have been following the &lt;a href="https://dev.to/blog/open-source-ai-models-2026"&gt;open source AI model race&lt;/a&gt;, you know the gap between open-weights and proprietary models has shrunk from years to months. That trend is what makes local coding models viable now rather than a year ago.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Developers Are Going Local
&lt;/h2&gt;

&lt;p&gt;There are four reasons I keep hearing, and they map to my own experience.&lt;/p&gt;

&lt;h3&gt;
  
  
  Privacy and IP Protection
&lt;/h3&gt;

&lt;p&gt;This is the big one for anyone working on proprietary code. When you send your codebase to a cloud API, you are trusting that provider with your intellectual property. Most providers have clear policies about not training on your data, and I generally trust those policies. But "trust" and "guarantee" are different things.&lt;/p&gt;

&lt;p&gt;If you work at a company with strict data handling requirements, or you are building something where the code itself is the competitive advantage, or you are working with client code under NDA, running a local model means the code never leaves your machine. Period. No trust required. No compliance review needed. No data processing agreements to negotiate.&lt;/p&gt;

&lt;p&gt;I have talked to developers at defense contractors, healthcare startups, and fintech companies who switched to local models specifically because their legal teams could not approve sending proprietary code to third-party APIs. For them, local is not a preference. It is a requirement.&lt;/p&gt;

&lt;h3&gt;
  
  
  Zero Latency for Simple Tasks
&lt;/h3&gt;

&lt;p&gt;Cloud AI tools are fast, but they are not instant. There is always network latency. There is always the possibility of the service being slow, rate-limited, or down entirely. For complex tasks where you need frontier-model intelligence, that latency is worth it. For simple tasks like autocomplete, small refactors, and inline suggestions, it adds friction.&lt;/p&gt;

&lt;p&gt;A local model running on a good GPU or Apple Silicon responds in milliseconds for short completions. There is no spinner. No waiting for the network. The experience feels like a supercharged version of traditional IDE intelligence rather than a round-trip to a remote server.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cost at Scale
&lt;/h3&gt;

&lt;p&gt;The math on cloud AI costs gets uncomfortable when you do it honestly. If you are an indie hacker or solo developer paying $20 to $50 a month for AI tools, the cost is manageable. But it adds up.&lt;/p&gt;

&lt;p&gt;Running Ollama locally costs nothing per token after the initial hardware investment. If you already have a MacBook with 16GB or more of RAM, or a desktop with a decent GPU, your marginal cost for AI completions is essentially your electricity bill. For developers who would otherwise spend $100 to $300 a month on API calls, the payback period on hardware bought for local inference is measured in months, not years.&lt;/p&gt;
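&lt;p&gt;The payback claim is easy to sanity-check with back-of-the-envelope arithmetic. The figures below are hypothetical placeholders, not a quote for any particular machine:&lt;/p&gt;

```shell
# Back-of-the-envelope payback period for a hardware upgrade.
# The numbers are hypothetical: a $600 upgrade (e.g. extra RAM) set against
# $150/month of API spend you would stop paying.
hardware_cost=600
monthly_api_savings=150
# Ceiling division: whole months until the upgrade has paid for itself.
payback_months=$(( (hardware_cost + monthly_api_savings - 1) / monthly_api_savings ))
echo "payback: $payback_months months"
```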

&lt;h3&gt;
  
  
  Offline Capability
&lt;/h3&gt;

&lt;p&gt;This is the one that hooked me. I travel frequently. I work from coffee shops with unreliable wifi. I code on trains. Having a coding assistant that works regardless of connectivity is not a luxury. It is a practical workflow improvement.&lt;/p&gt;

&lt;p&gt;The models are stored on your machine. Ollama runs as a local server. Your editor connects to localhost. No internet required. If you have experienced the frustration of losing your AI assistant mid-task because of a flaky connection, you understand why this matters.&lt;/p&gt;




&lt;h2&gt;
  
  
  Setting Up a Local Coding Workflow
&lt;/h2&gt;

&lt;p&gt;Here is the practical setup I use. It took about twenty minutes to get running the first time.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Install Ollama
&lt;/h3&gt;

&lt;p&gt;On macOS:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew &lt;span class="nb"&gt;install &lt;/span&gt;ollama
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On Linux:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://ollama.com/install.sh | sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Ollama runs as a local API server on port 11434. Once installed, it is always available.&lt;/p&gt;
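&lt;p&gt;You can confirm the server is alive directly over HTTP. This sketch assumes the default port and uses the &lt;code&gt;/api/tags&lt;/code&gt; endpoint, which lists the models you have pulled locally:&lt;/p&gt;

```shell
# Check whether the Ollama server is reachable on its default port (11434).
# /api/tags returns JSON listing locally pulled models.
if curl -sf --max-time 2 http://localhost:11434/api/tags 2>/dev/null 1>/dev/null; then
  status="up"
  curl -s http://localhost:11434/api/tags
else
  status="down"
  echo "Ollama is not running; start it with: ollama serve"
fi
echo "server status: $status"
```

The guard means the snippet is safe to run anywhere, whether or not Ollama is installed.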

&lt;h3&gt;
  
  
  Step 2: Pull a Coding Model
&lt;/h3&gt;

&lt;p&gt;For general coding assistance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ollama pull qwen3-coder
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For reasoning-heavy tasks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ollama pull deepseek-r1:14b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For a balance of speed and capability:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ollama pull llama4:scout
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The download size ranges from 4GB to 30GB depending on the model and quantization. Plan accordingly if you are on a metered connection.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Connect to Your Editor
&lt;/h3&gt;

&lt;p&gt;Most modern editors support local model connections. In VS Code, extensions like Continue and Cody can point to a local Ollama endpoint. The configuration is usually as simple as setting the API URL to &lt;code&gt;http://localhost:11434&lt;/code&gt;.&lt;/p&gt;
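&lt;p&gt;As an illustration, a Continue configuration pointing at a local Ollama model looks roughly like this. The exact schema varies between Continue versions, so treat the field names as indicative rather than authoritative:&lt;/p&gt;

```json
{
  "models": [
    {
      "title": "Qwen3 Coder (local)",
      "provider": "ollama",
      "model": "qwen3-coder",
      "apiBase": "http://localhost:11434"
    }
  ]
}
```

The only essential detail is that the provider points at the local endpoint instead of a cloud API.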

&lt;p&gt;For terminal-based workflows, you can use Ollama directly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ollama run qwen3-coder &lt;span class="s2"&gt;"Refactor this function to use async/await instead of promises: &lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;cat &lt;/span&gt;src/utils/fetch.ts&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 4: Build a Hybrid Workflow
&lt;/h3&gt;

&lt;p&gt;This is the part most guides skip, and it is the most important part. You do not want to use local models for everything. You want to use them for the right things.&lt;/p&gt;




&lt;h2&gt;
  
  
  When Local Beats Cloud (And When It Does Not)
&lt;/h2&gt;

&lt;p&gt;After three months of running a hybrid setup, here is my honest breakdown.&lt;/p&gt;

&lt;h3&gt;
  
  
  Local Wins
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Autocomplete and inline suggestions.&lt;/strong&gt; Fast, private, zero-cost. Local models handle this well because the context window is small and the expected output is short. This is where the latency advantage is most noticeable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Small refactors and transformations.&lt;/strong&gt; Rename a variable across a file. Convert a callback to async/await. Extract a function. Add TypeScript types to a JavaScript file. These are pattern-matching tasks where even a 7B model performs well.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Boilerplate generation.&lt;/strong&gt; Writing test scaffolding, adding CRUD endpoints that follow an existing pattern, generating type definitions from JSON. Tasks where the structure is predictable and the creativity required is low.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Documentation and comments.&lt;/strong&gt; Generating JSDoc comments, writing README sections, explaining what a function does. Local models handle this adequately because the task is more about summarization than reasoning.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Private or sensitive code.&lt;/strong&gt; Anything where you genuinely cannot or should not send the code to a third party. This is not about paranoia. It is about real constraints that many developers face.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cloud Wins
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Complex architectural reasoning.&lt;/strong&gt; When you need to think through how multiple systems interact, plan a migration strategy, or design a new feature that touches many parts of a codebase, frontier models are significantly better. The gap here is not close.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Large-context tasks.&lt;/strong&gt; Local models typically run with 8K to 32K context windows in practice (larger is possible but slow). Cloud models like Claude handle 200K tokens. If your task requires understanding a large codebase or a long conversation history, cloud is the only realistic option.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://dev.to/blog/agentic-coding-2026"&gt;Agentic workflows&lt;/a&gt;.&lt;/strong&gt; Multi-step tasks where the AI needs to read files, run commands, evaluate output, and iterate require a level of capability that local models do not reliably provide yet. The planning and execution quality of frontier models for agentic coding is in a different league.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Debugging complex issues.&lt;/strong&gt; When you paste a stack trace and ask "why is this happening," the reasoning capability gap matters. Frontier models catch subtle issues that local models miss.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Code review and security analysis.&lt;/strong&gt; Evaluating code for architectural problems, security vulnerabilities, or subtle bugs requires the kind of deep reasoning where model size matters most.&lt;/p&gt;




&lt;h2&gt;
  
  
  Hardware Reality Check
&lt;/h2&gt;

&lt;p&gt;Let me be honest about what you need, because I have seen too many guides that gloss over this.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Apple Silicon Macs (M1/M2/M3/M4):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;16GB RAM: Can run 7B to 14B models comfortably. Adequate for autocomplete and small tasks.&lt;/li&gt;
&lt;li&gt;32GB RAM: Can run 30B to 34B models. This is the sweet spot for a good local coding experience.&lt;/li&gt;
&lt;li&gt;64GB+ RAM: Can run 70B+ models. Approaches the quality ceiling for local inference.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Linux/Windows with NVIDIA GPU:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;RTX 3060 (12GB VRAM): 7B to 14B models. Similar to the 16GB Mac experience.&lt;/li&gt;
&lt;li&gt;RTX 4080/4090 (16-24GB VRAM): 30B to 70B models with quantization. Excellent performance.&lt;/li&gt;
&lt;li&gt;Dual GPU setups: Can split larger models across cards. Enthusiast territory but increasingly common.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What does not work well:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;8GB RAM machines. You can technically run small models, but the experience is painful.&lt;/li&gt;
&lt;li&gt;CPUs without GPU offloading. Inference is too slow for interactive use.&lt;/li&gt;
&lt;li&gt;Older Intel Macs. The performance is not competitive. Save your time.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you are thinking about upgrading hardware specifically for local AI, the best value right now is a MacBook Pro with 32GB of unified memory or a desktop Linux box with an RTX 4070 Ti Super (16GB VRAM). Both will run the most useful coding models at interactive speeds.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Models I Actually Use
&lt;/h2&gt;

&lt;p&gt;After testing more models than I care to count, here is what I have settled on for daily use.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Qwen3-Coder (primary coding model):&lt;/strong&gt; Best overall coding performance for its resource requirements. The mixture-of-experts architecture means it punches well above its weight. I use this for autocomplete, small refactors, and boilerplate generation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DeepSeek R1 14B (reasoning tasks):&lt;/strong&gt; When I need the model to think through a problem rather than just pattern-match, this is the one. It is slower but noticeably better at explaining why something is wrong or suggesting architectural improvements.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Llama 4 Scout (general purpose):&lt;/strong&gt; Good all-rounder. I use it when I want to ask a question about code without needing specialist coding capability. Useful for documentation tasks and explaining concepts.&lt;/p&gt;

&lt;p&gt;The key insight is that you do not need one model for everything. Switching between models in Ollama takes seconds. Having two or three models pulled and ready to use lets you match the model to the task.&lt;/p&gt;
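&lt;p&gt;Keeping track of a small model library is two commands. Guarded so the snippet is safe to run even on a machine without Ollama:&lt;/p&gt;

```shell
# Inspect your local model library.
# `ollama list` shows models on disk; `ollama ps` shows models loaded in memory.
if command -v ollama 1>/dev/null 2>/dev/null; then
  have_ollama="yes"
  ollama list
  ollama ps
else
  have_ollama="no"
  echo "ollama binary not found"
fi
```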




&lt;h2&gt;
  
  
  The Honest Limitations
&lt;/h2&gt;

&lt;p&gt;This post would not be honest if I did not talk about where it all falls apart.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Quality ceiling is real.&lt;/strong&gt; Even the best local coding models are noticeably worse than Claude Opus or GPT-5 for anything beyond straightforward tasks. If you have been using &lt;a href="https://dev.to/blog/claude-code-vs-cursor-vs-github-copilot-2026"&gt;Claude Code for agentic workflows&lt;/a&gt;, the local experience will feel like a significant downgrade for complex work. This is not a marginal difference. It is a category difference.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Context window constraints bite hard.&lt;/strong&gt; Most local models run effectively at 8K to 32K tokens. That sounds like a lot until you realize that a medium-sized file plus a prompt can eat half of it. For multi-file tasks, you are constantly managing what the model can see. Cloud tools with 200K context windows make this a non-issue.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No tool use or agentic capability.&lt;/strong&gt; Local models through Ollama do not read your file system, run your tests, or iterate on their output. You are copy-pasting code in and out. This is fine for targeted tasks, but it means you cannot replicate the &lt;a href="https://dev.to/blog/agentic-coding-2026"&gt;agentic coding experience&lt;/a&gt; that makes cloud tools so powerful.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Model management is your problem.&lt;/strong&gt; You need to decide which models to download, when to update them, and how to manage disk space. A single model can be 4 to 30GB. If you have three or four models pulled, that is a significant chunk of storage. Nobody is managing this for you.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Quantization tradeoffs.&lt;/strong&gt; Most local models run with 4-bit quantization to fit in consumer hardware. This reduces quality compared to the full-precision model. For simple tasks the difference is minimal. For tasks that push the model's capability, the quality loss becomes noticeable.&lt;/p&gt;




&lt;h2&gt;
  
  
  A Practical Hybrid Strategy
&lt;/h2&gt;

&lt;p&gt;Here is the workflow I have settled into after three months of experimentation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Default to local for:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Autocomplete and inline completions (always local, always fast)&lt;/li&gt;
&lt;li&gt;Single-file refactors and transformations&lt;/li&gt;
&lt;li&gt;Generating boilerplate that follows existing patterns&lt;/li&gt;
&lt;li&gt;Writing docs, comments, and test scaffolding&lt;/li&gt;
&lt;li&gt;Any task involving code I cannot send to a cloud provider&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Switch to cloud for:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Multi-file features and complex implementations&lt;/li&gt;
&lt;li&gt;Architectural decisions and code review&lt;/li&gt;
&lt;li&gt;Agentic workflows (Claude Code, Cursor Agent mode)&lt;/li&gt;
&lt;li&gt;Debugging that requires reasoning about system behavior&lt;/li&gt;
&lt;li&gt;Anything that needs large context windows&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The 30-second rule:&lt;/strong&gt; If I can describe the task in one sentence and the expected output is less than 50 lines of code, I try it locally first. If the output is not good enough, I switch to cloud. The switching cost is low. The savings in API costs and latency add up over time.&lt;/p&gt;
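&lt;p&gt;The "try local first" habit can be sketched as a tiny shell helper. This is a sketch, not a finished tool: the model name and fallback message are placeholders for whatever you have pulled and whatever cloud workflow you prefer:&lt;/p&gt;

```shell
# ask_local sends a one-sentence prompt to a local model; if Ollama (or the
# model) is unavailable, it prints a reminder to escalate to a cloud tool.
# "qwen3-coder" is just an example of a model you might have pulled.
ask_local() {
  prompt="$1"
  if command -v ollama 1>/dev/null 2>/dev/null; then
    ollama run qwen3-coder "$prompt" || echo "local run failed; escalate to cloud"
  else
    echo "no local model available; use a cloud tool for: $prompt"
  fi
}

ask_local "Convert this promise chain to async/await"
```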

&lt;p&gt;This is not about ideology. I am not anti-cloud. I wrote about the &lt;a href="https://dev.to/blog/ai-productivity-paradox-developers-2026"&gt;AI productivity paradox&lt;/a&gt; and how developers overestimate their speedup with AI tools. The same principle applies here: use the right tool for the task. Sometimes that tool runs on your machine. Sometimes it runs on someone else's.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Is Coming Next
&lt;/h2&gt;

&lt;p&gt;The trajectory of local AI models is steep. A year ago, running a useful coding model locally required enthusiast hardware and tolerance for slow inference. Today, a standard MacBook Pro handles it fine for targeted tasks.&lt;/p&gt;

&lt;p&gt;Three things are converging that will make local models significantly more capable in the next twelve months:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Smaller, smarter architectures.&lt;/strong&gt; The mixture-of-experts approach (activating a fraction of total parameters) is letting models deliver disproportionate quality for their compute requirements. Expect this trend to accelerate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hardware improvements.&lt;/strong&gt; Apple's next generation of chips, NVIDIA's consumer GPU roadmap, and NPU integration in Intel and AMD processors are all optimized for local inference. The hardware is meeting the software halfway.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Better tooling.&lt;/strong&gt; Ollama, LM Studio, and competitors are building richer integration points. Editor plugins are getting better at switching between local and cloud models seamlessly. The friction of running a hybrid setup is dropping fast.&lt;/p&gt;

&lt;p&gt;I do not think local models will replace cloud tools for serious development work in 2026. The capability gap for complex tasks is too large. But I do think the percentage of coding tasks where local is the right choice will grow from maybe 30% today to 50% or more by early 2027.&lt;/p&gt;

&lt;p&gt;If you have not tried running a local coding model yet, now is a good time to start. The setup cost is twenty minutes. The learning is worth having. And the next time you are on a plane with no wifi, you will be glad you did.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>devtools</category>
      <category>productivity</category>
      <category>programming</category>
    </item>
    <item>
      <title>AI Code Review Is the New Bottleneck: Why Faster Code Is Not Reaching Production Faster</title>
      <dc:creator>Alex Cloudstar</dc:creator>
      <pubDate>Sat, 04 Apr 2026 11:24:18 +0000</pubDate>
      <link>https://forem.com/alexcloudstar/ai-code-review-is-the-new-bottleneck-why-faster-code-is-not-reaching-production-faster-416a</link>
      <guid>https://forem.com/alexcloudstar/ai-code-review-is-the-new-bottleneck-why-faster-code-is-not-reaching-production-faster-416a</guid>
      <description>&lt;p&gt;A developer on my team opened eleven pull requests last Tuesday. Eleven. In a single day.&lt;/p&gt;

&lt;p&gt;Two years ago, that same developer averaged two or three PRs per week. The difference is not that he suddenly became five times more productive. The difference is Claude Code. He describes a feature, the agent implements it, he reviews the diff, and he opens the PR. The code-writing part of his job accelerated by an order of magnitude.&lt;/p&gt;

&lt;p&gt;The problem is what happened next. Those eleven PRs sat in review for an average of four days. Three of them took over a week. By the time the last one was approved and merged, the branch had conflicts with main that took another hour to resolve.&lt;/p&gt;

&lt;p&gt;He shipped more code than ever. The code reached production at roughly the same pace as before. And the two senior engineers who review most PRs on the team looked like they had been through a war by Friday.&lt;/p&gt;

&lt;p&gt;This is the story playing out across thousands of engineering teams right now, and nobody is talking about it with the urgency it deserves.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Numbers Behind the Bottleneck
&lt;/h2&gt;

&lt;p&gt;I wrote about the &lt;a href="https://dev.to/blog/ai-productivity-paradox-developers-2026"&gt;AI productivity paradox&lt;/a&gt; a few weeks ago, and the data on code review was the part that stuck with me the most. Let me put the full picture together.&lt;/p&gt;

&lt;p&gt;Faros AI analyzed telemetry from over 10,000 developers across 1,255 teams. On teams with high AI adoption:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Developers complete 21% more tasks&lt;/li&gt;
&lt;li&gt;PR merge volume increased 98%&lt;/li&gt;
&lt;li&gt;PR size increased 154%&lt;/li&gt;
&lt;li&gt;PR review time increased 91%&lt;/li&gt;
&lt;li&gt;Bug rates went up 9% per developer&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Read those numbers together. Almost double the PRs. Each one more than double the size. Review taking almost double the time. And more bugs getting through despite all that review effort.&lt;/p&gt;

&lt;p&gt;The Anthropic Agentic Coding Trends Report from 2026 adds another data point. AI-generated code now represents 41 to 42% of all code globally. The sustainable threshold for maintaining quality, according to industry benchmarks, sits between 25 and 40%. Teams above that threshold start seeing quality degradation that eats into the productivity gains.&lt;/p&gt;

&lt;p&gt;Google's DORA report found that every 25% increase in AI adoption correlated with a 1.5% decrease in delivery speed and a 7.2% drop in system stability. Not because the code was bad, but because the organizational processes around the code could not absorb the increased volume.&lt;/p&gt;

&lt;p&gt;The bottleneck moved. Writing code used to be the constraint. Now review, validation, and integration are the constraints. And most teams have not restructured to account for this shift.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Reviewing AI Code Is Fundamentally Different
&lt;/h2&gt;

&lt;p&gt;Here is something I did not fully appreciate until I spent a month deliberately tracking my review patterns.&lt;/p&gt;

&lt;p&gt;When I review a PR from a colleague, I have context. I know their skill level. I know the conversations we had about the approach. I can predict what patterns they will use because we have discussed them. I can skim sections I trust and focus on the parts that are novel or complex. My brain is filling in gaps with shared understanding.&lt;/p&gt;

&lt;p&gt;When I review an AI-generated PR, none of that context exists.&lt;/p&gt;

&lt;p&gt;The AI made decisions at every level: naming, structure, error handling patterns, import organization, test strategies, edge case coverage. Each decision might be reasonable in isolation, but I have to evaluate each one independently because I have no basis for trusting that the AI shares our team's conventions and judgment.&lt;/p&gt;

&lt;p&gt;This is why review time nearly doubles. It is not that the code is worse. It is that the review process is fundamentally different. You are not checking a colleague's implementation of a discussed approach. You are evaluating a foreign system's judgment across dozens of decision points you never discussed.&lt;/p&gt;

&lt;p&gt;I have noticed three specific patterns that make AI code review harder:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Plausible but wrong implementations.&lt;/strong&gt; AI-generated code compiles, passes basic tests, and looks correct at a glance. But it sometimes makes subtle mistakes that require deep domain knowledge to catch. A colleague might use the wrong date format for an API, but they would typically get the business logic right because they understand the domain. AI gets the syntax right but sometimes gets the semantics wrong.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Unfamiliar patterns.&lt;/strong&gt; Every team develops conventions over time. How errors are handled. How logging is structured. Where validation happens. AI-generated code follows its own conventions, which might be technically valid but inconsistent with the codebase. A reviewer has to decide whether to accept the AI's approach or request changes to match existing patterns. That decision takes mental energy on every PR.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Volume-induced fatigue.&lt;/strong&gt; When a developer opens three PRs in a week, giving each one proper attention is manageable. When they open ten or fifteen because AI is writing the code, the reviewer's attention budget gets spread thin. Study after study shows that review quality drops significantly after the first 200 to 400 lines of code reviewed in a session. AI-generated PRs routinely exceed this threshold individually.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Review Queue Death Spiral
&lt;/h2&gt;

&lt;p&gt;There is a pattern I am seeing on teams that have not addressed this, and it is not pretty.&lt;/p&gt;

&lt;p&gt;It starts with review queues getting longer. PRs that used to get reviewed in hours now sit for days. Developers notice this and start batching more changes into each PR to reduce the number of reviews needed. But larger PRs take longer to review, which makes the queue worse. Reviewers start doing faster, shallower reviews to get through the backlog. Bug rates go up. Production incidents increase. The team adds more process (required approvals, mandatory CI checks, additional reviewers) to catch the bugs. This makes the queue even longer.&lt;/p&gt;

&lt;p&gt;The death spiral is: more AI-generated code, bigger PRs, longer queues, shallower reviews, more bugs, more process, even longer queues.&lt;/p&gt;

&lt;p&gt;I have watched three teams I work with closely enter some version of this cycle in the past six months. The common factor was not the AI tools they used. It was that they accelerated code production without changing their review processes.&lt;/p&gt;

&lt;p&gt;This connects directly to why &lt;a href="https://dev.to/blog/shipping-speed-only-strategy-2026"&gt;shipping speed matters&lt;/a&gt;: speed only counts if the code actually reaches production. Writing code faster means nothing if it sits in a review queue for a week.&lt;/p&gt;




&lt;h2&gt;
  
  
  AI Code Review Tools: What Actually Works
&lt;/h2&gt;

&lt;p&gt;The obvious solution is to use AI to review the code that AI wrote. This is not as circular as it sounds, but it is not a silver bullet either.&lt;/p&gt;

&lt;p&gt;I have spent the past two months evaluating AI code review tools, and here is what I found.&lt;/p&gt;

&lt;h3&gt;
  
  
  CodeRabbit
&lt;/h3&gt;

&lt;p&gt;This is the most widely adopted tool, with over 2 million connected repositories and 13 million PRs reviewed. It integrates directly with GitHub and GitLab, running automated reviews on every PR.&lt;/p&gt;

&lt;p&gt;What it does well: catches common issues (security vulnerabilities, performance problems, style inconsistencies), provides line-by-line feedback, and learns from your repository's patterns over time. It achieves about 46% accuracy in detecting real-world runtime bugs through a combination of AST analysis and generative AI feedback.&lt;/p&gt;

&lt;p&gt;What it does not do: replace human review for architectural decisions, business logic validation, or anything that requires understanding the product context. Think of it as a very thorough first pass that handles the mechanical checks.&lt;/p&gt;

&lt;h3&gt;
  
  
  PR-Agent (Open Source)
&lt;/h3&gt;

&lt;p&gt;For teams that need data sovereignty (the code cannot leave their infrastructure), CodiumAI's PR-Agent is an open-source option that can run self-hosted. It provides automated descriptions, review comments, and code suggestions.&lt;/p&gt;

&lt;p&gt;What it does well: works within your infrastructure, customizable rules, good at catching patterns you define. For teams with strict data handling requirements (and if you are already exploring &lt;a href="https://dev.to/blog/local-ai-models-coding-ollama-2026"&gt;local AI models for privacy reasons&lt;/a&gt;, this fits the same philosophy), this is the best open-source option.&lt;/p&gt;

&lt;h3&gt;
  
  
  Qodana (JetBrains)
&lt;/h3&gt;

&lt;p&gt;If your team uses JetBrains IDEs, Qodana brings the same static analysis to your CI pipeline. It is not AI-powered in the generative sense, but it catches the class of issues that static analysis handles well: null pointer risks, type mismatches, unused code, and security vulnerabilities.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Realistic Impact
&lt;/h3&gt;

&lt;p&gt;Teams using AI code review tools report 30 to 60% reduction in PR cycle times and 25 to 35% decrease in production defect rates. Those numbers match what I have seen. But there is a critical nuance: the reduction in cycle time comes from automating the mechanical review, not from replacing the human review.&lt;/p&gt;

&lt;p&gt;The correct mental model is not "AI reviews the code instead of humans." It is "AI handles the first pass (style, security, common bugs) so that humans can focus the limited review time on architecture, logic, and domain correctness." Human reviewers still need to look at every PR. They just spend less time on the checklist items and more time on the judgment calls.&lt;/p&gt;




&lt;h2&gt;
  
  
  Restructuring Your Review Process for the AI Era
&lt;/h2&gt;

&lt;p&gt;Tools alone do not fix this. The process needs to change. Here is what is working for teams I have observed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Smaller PRs, Even with AI
&lt;/h3&gt;

&lt;p&gt;This sounds counterintuitive. AI can generate a complete feature in one shot, so why split it up? Because the review constraint has not changed. Humans can effectively review 200 to 400 lines of code in a sitting. AI-generated PRs that touch 1,000+ lines get superficial reviews regardless of how good the reviewer is.&lt;/p&gt;

&lt;p&gt;The discipline is to take the AI's complete output and split it into reviewable chunks. Feature flag the incomplete parts if needed. The extra five minutes of splitting saves hours of review time and catches more bugs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tiered Review Levels
&lt;/h3&gt;

&lt;p&gt;Not every PR needs the same level of scrutiny. AI-generated boilerplate (tests, CRUD endpoints, type definitions) can be reviewed at a lighter level than core business logic or security-sensitive code.&lt;/p&gt;

&lt;p&gt;Some teams I work with have adopted a three-tier system:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tier 1 (automated only):&lt;/strong&gt; Pure boilerplate, formatting changes, dependency updates. AI review tools handle these. A human does a 30-second sanity check.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tier 2 (standard review):&lt;/strong&gt; Feature implementations, bug fixes, refactors. One human reviewer with AI review tools providing the first pass.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tier 3 (deep review):&lt;/strong&gt; Security-sensitive code, architectural changes, payment/auth logic. Two human reviewers, pair review session, AI tools for static analysis only.&lt;/p&gt;

&lt;p&gt;The key is being explicit about which tier each PR falls into. When everything gets the same review process, either the important PRs get under-reviewed or the routine PRs clog the queue.&lt;/p&gt;

&lt;h3&gt;
  
  
  Review Time Boxing
&lt;/h3&gt;

&lt;p&gt;Set explicit time expectations for reviews at each tier. Tier 1: same day. Tier 2: within 24 hours. Tier 3: within 48 hours, with a scheduled review session. When review expectations are vague ("review when you get to it"), queues grow silently until they become a crisis.&lt;/p&gt;

&lt;h3&gt;
  
  
  Dedicated Review Rotations
&lt;/h3&gt;

&lt;p&gt;On teams with high AI-assisted output, having one or two developers on review rotation each day (instead of distributing reviews across everyone) produces better results. The reviewer can batch PRs, build context across related changes, and maintain review quality without the constant context-switching that kills depth.&lt;/p&gt;

&lt;p&gt;This is not a new idea, but it becomes necessary at AI-scale volumes. The alternative, where everyone reviews a few PRs between their own AI-assisted coding sessions, results in fragmented attention that catches fewer issues.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Code Review Skills Gap
&lt;/h2&gt;

&lt;p&gt;There is a career dimension to this that I think matters.&lt;/p&gt;

&lt;p&gt;The DORA report notes that code review expertise has become more valuable as the volume of AI-generated code requiring human evaluation has surged. The developers who are best at reviewing AI-generated code are not necessarily the fastest coders. They are the ones with the deepest understanding of system design, domain logic, and failure modes.&lt;/p&gt;

&lt;p&gt;This creates an interesting tension. AI tools make it possible for less experienced developers to produce more code. But the code still needs to be reviewed by someone who understands the system well enough to catch the mistakes that AI makes. The demand for review skills is growing faster than the supply.&lt;/p&gt;

&lt;p&gt;If you are a senior developer, investing in your code review skills is one of the highest-leverage things you can do right now. Not just reading diffs faster, but developing frameworks for evaluating AI-generated code specifically:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How to spot plausible-but-wrong business logic&lt;/li&gt;
&lt;li&gt;How to identify when AI-generated patterns diverge from team conventions&lt;/li&gt;
&lt;li&gt;How to assess whether AI-generated tests are actually testing meaningful behavior or just achieving coverage metrics&lt;/li&gt;
&lt;li&gt;How to review AI-generated code without getting fatigued by the volume&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This connects to the broader shift in what it means to be a senior developer in 2026. The job is becoming less about writing code and more about &lt;a href="https://dev.to/blog/ai-generated-code-technical-debt-2026"&gt;ensuring code quality at scale&lt;/a&gt;. Review is where that happens.&lt;/p&gt;




&lt;h2&gt;
  
  
  What About Fully Automated Review?
&lt;/h2&gt;

&lt;p&gt;I know what some of you are thinking. If AI can write the code and AI can review the code, why do we need humans in the loop at all?&lt;/p&gt;

&lt;p&gt;I tried this. For two weeks, I let AI review tools be the sole gatekeepers on a non-critical internal project. Here is what happened:&lt;/p&gt;

&lt;p&gt;The AI reviewer caught formatting issues, potential null pointer errors, and a legitimate SQL injection vulnerability that I had missed. It also approved a PR that had a subtle race condition in a caching layer, approved another that used the wrong currency conversion for a specific locale, and failed to notice that a refactor broke the contract with a downstream service because the tests only covered the happy path.&lt;/p&gt;

&lt;p&gt;The bugs it caught were the kind that static analysis and pattern matching handle well. The bugs it missed were the kind that require understanding how the system works beyond the code being reviewed.&lt;/p&gt;

&lt;p&gt;This is not a knock on the tools. They are genuinely useful. But fully automated review without human judgment is like having spell check without an editor. It catches the mechanical errors. It misses the things that actually matter to users.&lt;/p&gt;

&lt;p&gt;The sustainable model is AI handling the first pass and humans handling the judgment calls. Not either/or. Both.&lt;/p&gt;




&lt;h2&gt;
  
  
  A Practical Roadmap
&lt;/h2&gt;

&lt;p&gt;If your team is feeling the review bottleneck, here is a sequence that works.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Week 1: Measure the problem.&lt;/strong&gt; Track PR open-to-merge time, review queue depth, and reviewer workload distribution. You cannot fix what you are not measuring. Most teams are surprised by how bad the numbers actually are.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Week 2-3: Introduce AI review tooling.&lt;/strong&gt; Start with CodeRabbit or PR-Agent on your most active repositories. Let them run alongside human review for two weeks so your team calibrates trust in the tool's output.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Week 4: Implement PR size limits.&lt;/strong&gt; Set a soft maximum (400 lines is a good starting point). AI-generated PRs that exceed this should be split before review. This is the single highest-impact change you can make.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Week 5-6: Adopt tiered review.&lt;/strong&gt; Define your tiers, assign PRs to tiers, and set review time expectations for each. Make the tier assignment part of the PR template.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Month 2-3: Optimize.&lt;/strong&gt; Review rotation schedules, adjust tier definitions based on what you are learning, and start tracking defect rates by tier to validate that lighter reviews on Tier 1 are not letting bugs through.&lt;/p&gt;

&lt;p&gt;The goal is not to eliminate review. The goal is to match review effort to review value so that human attention goes where it matters most.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Bigger Picture
&lt;/h2&gt;

&lt;p&gt;The code review bottleneck is a symptom of a larger pattern: AI accelerates the parts of software development that were already fast (relative to the whole lifecycle) and does not yet help much with the parts that are slow.&lt;/p&gt;

&lt;p&gt;Writing code was never the primary bottleneck for most teams. Understanding requirements, making architectural decisions, reviewing changes, deploying safely, monitoring production, and responding to incidents take more total time than typing code. AI made the typing part ten times faster without proportionally improving any of the other parts.&lt;/p&gt;

&lt;p&gt;Teams that thrive with AI tools will be the ones that recognize this imbalance and restructure accordingly. Not by trying to make every part of the process AI-powered, but by deliberately investing human time and attention where it creates the most value.&lt;/p&gt;

&lt;p&gt;Right now, for most teams, that place is code review. Not because review is fun or glamorous, but because it is the gate between "code that exists" and "code that works in production." And that gate just got a lot more traffic.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>devtools</category>
      <category>productivity</category>
      <category>career</category>
    </item>
    <item>
      <title>The AI-Powered Agency: A Developer Playbook for Selling AI Services in 2026</title>
      <dc:creator>Alex Cloudstar</dc:creator>
      <pubDate>Thu, 02 Apr 2026 08:15:37 +0000</pubDate>
      <link>https://forem.com/alexcloudstar/the-ai-powered-agency-a-developer-playbook-for-selling-ai-services-in-2026-23ci</link>
      <guid>https://forem.com/alexcloudstar/the-ai-powered-agency-a-developer-playbook-for-selling-ai-services-in-2026-23ci</guid>
      <description>&lt;p&gt;A freelance brand designer I follow on X shared her numbers last month. In 2024, she was serving three to four clients at a time, billing around $150K per year. In 2025, she added AI to her workflow, not as a gimmick but as actual production infrastructure. She now serves fifteen to twenty concurrent clients, her annual revenue hit $720K, and she works fewer hours than before.&lt;/p&gt;

&lt;p&gt;She did not build a SaaS product. She did not raise money. She did not hire a team. She just got very good at using AI tools to deliver the same quality of work in a fraction of the time, and charged based on the value of the output rather than the hours it took.&lt;/p&gt;

&lt;p&gt;This is the model Y Combinator highlighted in their Spring 2026 Request for Startups. Their advice was blunt: instead of selling access to an AI tool for $50 a month, use the AI yourself and sell the finished work for $5,000. You are not a software company. You are a service company with near-zero marginal cost.&lt;/p&gt;

&lt;p&gt;For developers specifically, this model is even more powerful. Because you can build the automation layer that makes it scale. You are not just using AI tools. You are building systems around them that multiply your output without multiplying your time.&lt;/p&gt;

&lt;p&gt;I have been thinking about this model a lot, especially after writing about &lt;a href="https://dev.to/blog/one-person-startup-scaling-2026"&gt;the one-person startup&lt;/a&gt; and &lt;a href="https://dev.to/blog/freelancing-as-developer-guide-2026"&gt;the freelancing playbook&lt;/a&gt;. The AI-powered agency sits in the space between freelancing and SaaS, and for a lot of developers, it might be the better path than either.&lt;/p&gt;

&lt;p&gt;Let me break down how it works.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Services Beat SaaS for Most Developers Right Now
&lt;/h2&gt;

&lt;p&gt;I know this sounds like heresy in the indie hacker world. We have been conditioned to believe that SaaS is the holy grail. Recurring revenue, scalable, passive income while you sleep. The dream.&lt;/p&gt;

&lt;p&gt;The reality is different. SaaS requires &lt;a href="https://dev.to/blog/building-is-easy-distribution-is-the-moat-2026"&gt;distribution&lt;/a&gt;, and distribution is the part most developers are terrible at. You build the product, launch into silence, and spend months trying to get your first hundred users while burning through savings.&lt;/p&gt;

&lt;p&gt;An AI-powered agency flips the economics. You need five clients, not five thousand users. Each client pays $2,000 to $10,000 per month, not $29 per month. The revenue ramp is faster because you are selling to businesses that already have budget for the work you are replacing. You do not need to convince them that the problem exists. They are already paying someone to solve it.&lt;/p&gt;

&lt;p&gt;The numbers make this concrete. A solo developer running an AI-powered content agency with ten clients at $3,000 per month is at $360K annual revenue. Operating costs are minimal because the AI does most of the execution. That is a real business built in months, not years.&lt;/p&gt;

&lt;p&gt;Compare that to the typical SaaS path where the median indie hacker product takes twelve to eighteen months to reach $5K MRR. Both paths are valid. But if your goal is revenue and you do not have an existing audience, the agency model gets you there faster.&lt;/p&gt;




&lt;h2&gt;
  
  
  What AI-Powered Services Actually Sell
&lt;/h2&gt;

&lt;p&gt;Not every service translates well to the AI agency model. The sweet spot is work that meets three criteria: it is currently expensive because it requires skilled humans, AI can handle 70-90% of the execution with human oversight, and the output is measurable enough that clients can see the value.&lt;/p&gt;

&lt;p&gt;Here are the categories where I see developers building real agencies right now.&lt;/p&gt;

&lt;h3&gt;
  
  
  AI Automation and Workflow Building
&lt;/h3&gt;

&lt;p&gt;Businesses are drowning in manual processes. Data entry, report generation, lead qualification, invoice processing, customer onboarding sequences. They know these should be automated, but they do not have the technical skills to build the automation themselves.&lt;/p&gt;

&lt;p&gt;This is the most natural fit for developers. You audit a client's operations, identify the repetitive workflows, and build AI-powered automation using tools like n8n, Make, or custom code with the Claude API or OpenAI API. The client pays for the outcome, not the hours. A workflow that saves a business 20 hours per week is worth $2,000 to $5,000 per month to them, regardless of whether it took you two days or two weeks to build.&lt;/p&gt;

&lt;p&gt;The recurring revenue comes from maintenance, optimization, and building additional automations as the client sees results from the first ones. One good client can turn into $5K to $15K per month as you automate more of their operations.&lt;/p&gt;

&lt;h3&gt;
  
  
  Custom AI Agent Development
&lt;/h3&gt;

&lt;p&gt;This is the premium tier. Businesses want AI agents that handle specific tasks: customer support triage, lead scoring, content personalization, inventory management. Off-the-shelf solutions exist for some of these, but they are generic. Businesses with specific requirements need custom agents built for their data, their workflows, and their edge cases.&lt;/p&gt;

&lt;p&gt;If you understand &lt;a href="https://dev.to/blog/agentic-coding-2026"&gt;how AI agents work&lt;/a&gt; and can build with the &lt;a href="https://dev.to/blog/mcp-model-context-protocol-developer-guide-2026"&gt;Model Context Protocol&lt;/a&gt;, you have a skill set that is in massive demand. The agentic AI market hit $7.29 billion in 2025 and is projected to reach $9.14 billion this year. Demand for AI-related freelance skills grew 109% year-over-year as of February 2026, with far more demand than supply.&lt;/p&gt;

&lt;p&gt;Custom agent projects typically bill $5,000 to $15,000 per engagement, with ongoing maintenance contracts after deployment.&lt;/p&gt;

&lt;h3&gt;
  
  
  AI-Augmented Content Production
&lt;/h3&gt;

&lt;p&gt;This one is the easiest to start with because the sales conversation is simple. A business currently pays $5,000 per month for a content writer or agency to produce blog posts, social media content, email sequences, and documentation. You can deliver the same volume and quality, or better, for $3,000 per month because AI handles the first draft and you handle the strategy, editing, and optimization.&lt;/p&gt;

&lt;p&gt;The key is that you are selling finished work, not AI access. The client does not care that you used AI. They care that the blog posts rank, the emails convert, and the social content engages their audience. That is the deliverable.&lt;/p&gt;

&lt;p&gt;Developers have an edge here because you can build the pipeline. Instead of manually prompting ChatGPT for each piece of content, you build an automated workflow that pulls topics from SEO research, generates drafts based on the client's voice and guidelines, runs quality checks, and presents finished pieces for final review. What a traditional agency does with a team of five, you do with automation and oversight.&lt;/p&gt;

&lt;h3&gt;
  
  
  Data Analysis and Reporting
&lt;/h3&gt;

&lt;p&gt;Businesses generate mountains of data they do not know how to use. AI makes it possible to build custom analytics pipelines that transform raw data into actionable insights. Churn prediction models, customer segmentation, sales forecasting, competitive intelligence.&lt;/p&gt;

&lt;p&gt;A developer who can connect to a client's data sources, build an AI-powered analysis pipeline, and deliver a weekly or monthly report with genuine insights is worth $3,000 to $8,000 per month to mid-size businesses. Especially if those insights directly connect to revenue decisions.&lt;/p&gt;




&lt;h2&gt;
  
  
  How to Price Without Leaving Money on the Table
&lt;/h2&gt;

&lt;p&gt;Pricing is where most developer-turned-agency-owners mess up. They default to hourly rates because that is what freelancing taught them. Hourly rates are the worst pricing model for AI-powered services because the entire value proposition is that you can deliver results faster than traditional approaches.&lt;/p&gt;

&lt;p&gt;If you charge $100 per hour and AI reduces a 40-hour project to 8 hours, you just cut your revenue by 80% for delivering the same outcome. That is backwards.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Value-based pricing is the only model that makes sense for this.&lt;/strong&gt; You charge based on what the outcome is worth to the client, not how long it takes you.&lt;/p&gt;

&lt;p&gt;A client spending $8,000 per month on a content team will happily pay you $4,000 per month for the same output. They save $4,000. You spend maybe 15 hours per month on their account. Your effective hourly rate is over $260. Everyone wins.&lt;/p&gt;

&lt;p&gt;For automation projects, tie pricing to measurable impact. "This workflow saves your team 30 hours per week" has a calculable value. If those hours cost the company $50 each, you are saving them $6,000 per month. Charging $2,500 per month for that is an easy yes.&lt;/p&gt;

&lt;p&gt;Here are the pricing ranges I have seen working in 2026:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AI automation builds:&lt;/strong&gt; $3,000 to $10,000 per project, plus $500 to $2,000 monthly maintenance&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Custom AI agents:&lt;/strong&gt; $5,000 to $15,000 per engagement, plus $1,000 to $3,000 monthly support&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Content production:&lt;/strong&gt; $2,000 to $5,000 per month retainer&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data analysis and reporting:&lt;/strong&gt; $3,000 to $8,000 per month retainer&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Start at the lower end to build case studies and testimonials. Raise prices with every new client. If nobody pushes back on your pricing, you are too cheap.&lt;/p&gt;




&lt;h2&gt;
  
  
  Finding Your First Five Clients
&lt;/h2&gt;

&lt;p&gt;This is the hard part, and it is the same challenge I wrote about in the &lt;a href="https://dev.to/blog/building-is-easy-distribution-is-the-moat-2026"&gt;distribution article&lt;/a&gt;. Having the skills is not enough. You need to get in front of people who will pay for them.&lt;/p&gt;

&lt;p&gt;The good news is that you need five clients, not five thousand. This changes the strategy entirely. You do not need content marketing or SEO or a viral launch. You need targeted outreach and a compelling offer.&lt;/p&gt;

&lt;h3&gt;
  
  
  Start With Your Network
&lt;/h3&gt;

&lt;p&gt;The fastest path to your first client is someone who already knows and trusts you. Former employers, colleagues, friends who run businesses. Tell everyone you know what you are doing. Not a generic "I do AI consulting" pitch, but a specific offer: "I build AI automation that cuts your team's manual data entry by 80%. If you know any business spending more than 20 hours a week on repetitive tasks, I would love an intro."&lt;/p&gt;

&lt;p&gt;Specificity sells. Vagueness does not.&lt;/p&gt;

&lt;h3&gt;
  
  
  LinkedIn Outbound
&lt;/h3&gt;

&lt;p&gt;LinkedIn is still the best B2B cold outreach channel for service businesses. The approach that works: find businesses in your target niche, identify the operations lead or founder, and send a short message that demonstrates you understand their specific problem.&lt;/p&gt;

&lt;p&gt;Do not pitch in the first message. Share an insight about their industry. Reference something specific about their company. Build enough credibility that they respond. The pitch comes in the second or third exchange.&lt;/p&gt;

&lt;h3&gt;
  
  
  Build in Public as a Sales Channel
&lt;/h3&gt;

&lt;p&gt;If you are already &lt;a href="https://dev.to/blog/the-build-in-public-playbook-growing-your-personal-brand-on-x"&gt;building in public on X&lt;/a&gt;, shift some of your content toward the AI services angle. Share case studies from your work, even if the first ones are free projects you did to build your portfolio. Show the before and after. Show the numbers. Show the process.&lt;/p&gt;

&lt;p&gt;The developers I see landing the most agency clients in 2026 are the ones who consistently post about the results they are getting for clients. Not the tools they use or the code they write. The results. Businesses do not care about your tech stack. They care about outcomes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Free Audit as a Lead Magnet
&lt;/h3&gt;

&lt;p&gt;Offer a free automation audit to qualified businesses. You spend 30 to 60 minutes reviewing their operations and identify two or three areas where AI automation would save them significant time or money. You deliver the audit as a short document with specific recommendations and estimated impact.&lt;/p&gt;

&lt;p&gt;About 30 to 40% of businesses that receive a solid audit will ask you to implement the recommendations. That is your first project. The audit costs you an hour of work and positions you as the expert who already understands their business.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Tech Stack for Running an AI Agency Solo
&lt;/h2&gt;

&lt;p&gt;You already know most of the tools. The difference is how you combine them into a repeatable system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For building automations:&lt;/strong&gt; n8n (self-hosted for flexibility) or Make for visual workflow building. Claude API or OpenAI API for the intelligence layer. Postgres or Supabase for data storage. These are the building blocks that handle 90% of automation projects.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For custom agents:&lt;/strong&gt; Claude Code or Cursor for development. Vercel or Cloudflare for deployment. Your existing &lt;a href="https://dev.to/blog/solopreneur-automation-stack-2026"&gt;solopreneur automation stack&lt;/a&gt; for ops.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For running the business:&lt;/strong&gt; A CRM does not need to be fancy. Notion or a simple spreadsheet works for five to twenty clients. Stripe for invoicing and payments. Loom for recording client walkthroughs and deliverable explanations. Cal.com for scheduling.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For content production:&lt;/strong&gt; Your AI tool of choice for first drafts. A style guide per client that you feed into the prompt context. A review workflow that ensures quality before delivery. This is where &lt;a href="https://dev.to/blog/context-engineering-ai-coding-2026"&gt;context engineering&lt;/a&gt; skills directly translate to service quality. The better your prompts and context setup, the better your output, and the less time you spend editing.&lt;/p&gt;

&lt;p&gt;The total cost for this stack is under $200 per month. Your margins will be 70-85%.&lt;/p&gt;




&lt;h2&gt;
  
  
  From Freelancer to Agency Owner: The Mental Shift
&lt;/h2&gt;

&lt;p&gt;If you are coming from &lt;a href="https://dev.to/blog/freelancing-as-developer-guide-2026"&gt;freelancing&lt;/a&gt;, the biggest shift is not technical. It is how you think about the work.&lt;/p&gt;

&lt;p&gt;A freelancer sells time. An agency owner sells outcomes. This distinction changes everything about how you operate.&lt;/p&gt;

&lt;p&gt;When you sell time, the incentive is to work more hours. When you sell outcomes, the incentive is to deliver results as efficiently as possible. That means building systems, automating your delivery process, and investing in the infrastructure that lets you serve more clients without proportionally more effort.&lt;/p&gt;

&lt;p&gt;In practice, this means treating every client engagement as a chance to build a reusable system. The automation you build for Client A's invoice processing should be 70% reusable for Client B. The content pipeline you create for one client should be adaptable for the next. Every project should make the next one faster.&lt;/p&gt;

&lt;p&gt;This is the compounding advantage that developers have over non-technical agency owners. You can build the systems that create leverage. A non-developer running an AI content agency is manually prompting ChatGPT for every piece of content. You build a pipeline that handles the repetitive parts automatically. Over time, your capacity grows while your effort per client shrinks.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Honest Downsides
&lt;/h2&gt;

&lt;p&gt;I would be lying if I said this model has no drawbacks. It does.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Client management is real work.&lt;/strong&gt; Five clients means five sets of expectations, five communication threads, five different contexts to switch between. If you hate client work, an AI-powered agency will not fix that. It reduces the execution time but not the relationship management time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Revenue is not passive.&lt;/strong&gt; Unlike SaaS, when you stop working, revenue stops. You can mitigate this with retainer contracts and automation, but you are fundamentally exchanging your expertise for money. That is a service business, and service businesses require ongoing effort.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scope creep is constant.&lt;/strong&gt; Clients who pay $3,000 per month for content will inevitably ask for "just a quick landing page" or "can you also look at our email automation?" Setting boundaries is a skill you will need to develop fast.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scaling has a ceiling.&lt;/strong&gt; A solo AI agency can realistically serve ten to twenty clients. Beyond that, you either hire or you hit a wall. That might mean $300K to $600K in annual revenue, which is excellent, but it is not the unlimited scalability of SaaS. If your goal is a billion-dollar company, this is not the path. If your goal is a highly profitable business that funds your life, it absolutely is.&lt;/p&gt;




&lt;h2&gt;
  
  
  The First 90 Days: A Practical Roadmap
&lt;/h2&gt;

&lt;p&gt;If you are reading this and thinking about starting an AI-powered agency, here is the sequence I would follow.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Week 1-2: Pick your niche and service.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Do not try to be an "AI agency" that does everything. Pick one service type (automation, content, agents, or analytics) and one target industry. "I build AI automation for e-commerce businesses" is a better positioning than "I do AI stuff for whoever needs it." Read my take on &lt;a href="https://dev.to/blog/stop-validating-ideas-start-validating-pain"&gt;why validating the pain matters more than validating the idea&lt;/a&gt; before you commit.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Week 3-4: Build your portfolio with free or discounted work.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You need proof that you can deliver results. Offer two or three free automation audits to businesses in your target niche. If the audit is strong, offer to implement one recommendation at a steep discount in exchange for a case study and testimonial.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Week 5-8: Land your first paying client.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With a case study in hand, start outreach. LinkedIn, your network, and relevant online communities. The free audit offer is your lead magnet. Your case study is your credibility. Your pricing should be lower than you want it to be, because the goal right now is revenue and experience, not maximum margins.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Week 9-12: Systematize and raise prices.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After your first two or three paying clients, you will have a clear picture of what the delivery process looks like. Build the systems. Automate the repetitive parts. Document your processes. Then raise your prices for the next client. Repeat this cycle indefinitely.&lt;/p&gt;




&lt;h2&gt;
  
  
  Where This Model Goes
&lt;/h2&gt;

&lt;p&gt;I think the AI-powered agency is going to be one of the defining business models for developer-entrepreneurs in the next few years. It combines the technical skills developers already have with the AI leverage that makes solo operation viable at a scale that used to require a team.&lt;/p&gt;

&lt;p&gt;The developers who will do best with this model are the ones who treat it as a real business from day one. That means investing in &lt;a href="https://dev.to/blog/building-is-easy-distribution-is-the-moat-2026"&gt;distribution&lt;/a&gt;, pricing based on value, building systems that create leverage, and resisting the urge to build a SaaS product when the service business is already working.&lt;/p&gt;

&lt;p&gt;The agentic AI market is projected to hit $9.14 billion this year. The demand for AI-related freelance skills grew 109% year-over-year. Businesses are willing to pay premium prices for AI-powered services that deliver measurable results.&lt;/p&gt;

&lt;p&gt;The opportunity is here. The question is whether you are going to build another SaaS product that nobody uses, or start a service business that generates real revenue from month one.&lt;/p&gt;

&lt;p&gt;I know which one I would pick.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>startup</category>
      <category>freelancing</category>
      <category>productivity</category>
    </item>
    <item>
      <title>The Developer Freelancing Playbook: How to Land Clients, Set Rates, and Build a Business That Lasts in 2026</title>
      <dc:creator>Alex Cloudstar</dc:creator>
      <pubDate>Tue, 31 Mar 2026 10:33:18 +0000</pubDate>
      <link>https://forem.com/alexcloudstar/the-developer-freelancing-playbook-how-to-land-clients-set-rates-and-build-a-business-that-lasts-pki</link>
      <guid>https://forem.com/alexcloudstar/the-developer-freelancing-playbook-how-to-land-clients-set-rates-and-build-a-business-that-lasts-pki</guid>
      <description>&lt;p&gt;Three years ago, I took on my first freelance project. A friend of a friend needed a React dashboard built. I quoted $2,000 for what I estimated would be two weeks of work. It took five weeks, three scope changes, and a near-breakdown over a charting library that refused to cooperate. I made roughly $10 an hour after accounting for all the time I actually spent. My full-time salary at the time worked out to about $45 an hour.&lt;/p&gt;

&lt;p&gt;I almost never freelanced again.&lt;/p&gt;

&lt;p&gt;But I kept at it, mostly because I liked the idea of controlling my own time and choosing my own projects. Over the next two years, I raised my rates four times, fired two clients who were draining my energy, and eventually built a pipeline that consistently generates more inbound requests than I can take on.&lt;/p&gt;

&lt;p&gt;The gap between "developer who tries freelancing" and "developer who builds a freelancing business" is enormous. It is not about technical skill. Some of the best developers I know are terrible freelancers because they cannot price, sell, or manage clients. And some mid-level developers I know earn $150 an hour because they understand the business side.&lt;/p&gt;

&lt;p&gt;This is everything I have learned about freelancing as a developer, with the specific numbers, strategies, and mistakes that actually matter in 2026.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Freelance Developer Market in 2026
&lt;/h2&gt;

&lt;p&gt;Before we get into strategy, you need to understand the market you are entering. It has changed significantly in the last two years.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The demand side is strong but shifting.&lt;/strong&gt; Companies are spending more on software than ever, but they are also more cautious about full-time headcount. The post-2023 layoff wave made companies allergic to growing engineering teams aggressively. Many now prefer to hire freelancers and contractors for specific projects rather than committing to full-time salaries with benefits. This is good news for freelancers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI has raised the bar for what clients expect.&lt;/strong&gt; Two years ago, a client hired a freelance developer because they could not build the thing themselves. Today, many clients have tried building it with AI tools first. When they come to a freelancer, they have already seen what a &lt;a href="https://dev.to/blog/lets-talk-about-vibe-coding"&gt;vibe-coded&lt;/a&gt; prototype looks like. They are hiring you because they need it done properly, at production quality, with real architecture decisions. The bar for what counts as "professional developer work" is higher than it has ever been.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rates have compressed at the bottom and expanded at the top.&lt;/strong&gt; Junior and mid-level freelance rates have dropped because AI tools let clients get routine work done with far fewer hands. But senior rates have increased because the work that still gets outsourced to freelancers is more complex, more critical, and requires more judgment. The rate ceiling for a specialist freelancer has never been higher.&lt;/p&gt;

&lt;p&gt;Here is what the market looks like right now.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Freelance Developers Actually Charge in 2026
&lt;/h2&gt;

&lt;p&gt;I am going to be specific because vague advice like "charge what you are worth" is useless.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;By experience level (US market):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Junior developers (0 to 2 years): $40 to $60 per hour&lt;/li&gt;
&lt;li&gt;Mid-level developers (3 to 6 years): $70 to $100 per hour&lt;/li&gt;
&lt;li&gt;Senior developers (7 to 15 years): $100 to $160 per hour&lt;/li&gt;
&lt;li&gt;Specialist or staff-level (15+ years or niche expertise): $160 to $300 per hour&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;By region:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;United States: $70 to $150 per hour average&lt;/li&gt;
&lt;li&gt;Western Europe: $55 to $130 per hour&lt;/li&gt;
&lt;li&gt;Eastern Europe: $25 to $70 per hour&lt;/li&gt;
&lt;li&gt;Latin America: $20 to $60 per hour&lt;/li&gt;
&lt;li&gt;Canada: $45 to $100 per hour&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;By specialization:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Full-stack development: $50 to $150 per hour&lt;/li&gt;
&lt;li&gt;Mobile app development: $50 to $160 per hour&lt;/li&gt;
&lt;li&gt;Blockchain and smart contracts: $100 to $200 per hour&lt;/li&gt;
&lt;li&gt;Cybersecurity consulting: $80 to $180 per hour&lt;/li&gt;
&lt;li&gt;AI and machine learning: $90 to $200 per hour&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These ranges are wide because they reflect a wide market. Where you land within the range depends on three things: your positioning, your portfolio, and your ability to sell.&lt;/p&gt;

&lt;p&gt;Notice that technical skill is not on that list. It matters, obviously. But a developer who is slightly less skilled and far better at positioning will consistently outearn a more skilled developer who cannot communicate their value.&lt;/p&gt;

&lt;p&gt;This connects directly to something I wrote about in my &lt;a href="https://dev.to/blog/how-to-negotiate-salary-as-a-developer-what-i-wish-i-knew-earlier"&gt;salary negotiation guide&lt;/a&gt;. Knowing the market is step one. Positioning yourself at the right point in the market is where the real money is.&lt;/p&gt;




&lt;h2&gt;
  
  
  Finding Your First Clients
&lt;/h2&gt;

&lt;p&gt;The hardest part of freelancing is not the work. It is finding the first three to five clients. After that, referrals and reputation do most of the heavy lifting. But getting to that point requires deliberate effort.&lt;/p&gt;

&lt;p&gt;Here are the channels that actually produce results, ranked by effectiveness for developers specifically.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Your Existing Network
&lt;/h3&gt;

&lt;p&gt;This is the highest-converting channel and the one most developers underuse. You have worked with people. You have former colleagues, managers, and teammates who are now at other companies. You have friends who run businesses or know people who do.&lt;/p&gt;

&lt;p&gt;Send a direct, specific message to 20 to 30 people in your network. Not "Hey, I am freelancing now, let me know if you need anything." That is too vague to act on. Instead: "Hey, I am taking on freelance React and Node.js projects. If you know anyone who needs a senior developer for a 2 to 4 week project, I would love an introduction."&lt;/p&gt;

&lt;p&gt;Specific asks generate specific responses. Vague asks generate polite nods and silence.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Freelance Platforms (Strategically)
&lt;/h3&gt;

&lt;p&gt;Upwork, Toptal, and similar platforms get a bad reputation among senior developers. The reputation is partially deserved. The race to the bottom on pricing is real, especially on Upwork. But these platforms are still useful as a starting point if you use them correctly.&lt;/p&gt;

&lt;p&gt;The strategy: do not compete on price. Compete on specificity. Instead of bidding on "build me a website" projects, filter for projects that require your specific expertise. "Need a developer experienced with Stripe Connect multi-party payments" is a project where your specific knowledge commands a premium. Generic projects attract hundreds of bids. Specific projects attract ten.&lt;/p&gt;

&lt;p&gt;On Toptal, the vetting process is rigorous, but once you are in, the quality of clients and projects is significantly higher. The rates reflect that.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Content Marketing
&lt;/h3&gt;

&lt;p&gt;This is a slower channel but the highest ROI over time. Write about the problems you solve. Not generic "how to use React hooks" tutorials. Write about the specific, painful problems your ideal clients face. "How to migrate a legacy jQuery app to React without rewriting everything" is an article that attracts exactly the kind of client who would hire you to do that work.&lt;/p&gt;

&lt;p&gt;If you already have a blog or post regularly on platforms like Dev.to or LinkedIn, you are building a client pipeline whether you realize it or not. Every technical article you write is a demonstration of your expertise to a potential client who is searching for exactly that solution.&lt;/p&gt;

&lt;p&gt;I have seen this work for my own projects. The content I write about &lt;a href="https://dev.to/blog/bun-vs-nodejs-is-it-time-to-switch-in-2026"&gt;building with modern tools&lt;/a&gt; and &lt;a href="https://dev.to/blog/how-i-use-ai-tools-in-my-daily-workflow-and-where-i-dont"&gt;developer workflows&lt;/a&gt; has led to inbound inquiries from people who read the articles and thought, "This person clearly knows what they are doing."&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Cold Outreach (Done Right)
&lt;/h3&gt;

&lt;p&gt;Most developers hate cold outreach because they think it means spamming strangers with sales pitches. It does not have to be that way.&lt;/p&gt;

&lt;p&gt;Effective cold outreach for developers looks like this: find a company whose website or product has a specific technical problem you can identify. Maybe their site is slow, their mobile experience is broken, or they are using outdated technology. Send a short, specific email that identifies the problem and suggests a solution. Include a rough estimate of the impact.&lt;/p&gt;

&lt;p&gt;"I noticed your checkout flow takes 8 seconds to load on mobile. Based on industry data, that is costing you roughly 20 to 30 percent of mobile conversions. I can probably cut that to under 2 seconds. Happy to explain how if you are interested."&lt;/p&gt;

&lt;p&gt;That is not a sales pitch. That is a helpful observation from an expert. The response rate on emails like this is dramatically higher than generic "I am a freelance developer, do you need help?" messages.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Open Source and Community Contributions
&lt;/h3&gt;

&lt;p&gt;I wrote an &lt;a href="https://dev.to/blog/open-source-growth-strategy-developers-2026"&gt;entire article about open source as a growth strategy&lt;/a&gt;, and it applies to freelancing too. Contributing to popular open source projects puts your name and code in front of hiring decision-makers. Maintainers of popular projects regularly get asked for freelancer recommendations, and active contributors are the first names that come to mind.&lt;/p&gt;




&lt;h2&gt;
  
  
  Pricing: The Skill Most Developers Get Wrong
&lt;/h2&gt;

&lt;p&gt;Here is the uncomfortable truth about freelance pricing: most developers charge too little. Not because they are being humble. Because they are anchoring to their salary.&lt;/p&gt;

&lt;p&gt;If you earn $120,000 per year at a full-time job, your instinct is to divide that by 2,080 working hours and arrive at roughly $58 per hour. So you charge $60 per hour for freelance work and think you are doing well.&lt;/p&gt;

&lt;p&gt;You are not. Here is why.&lt;/p&gt;

&lt;p&gt;As a freelancer, you are paying for your own health insurance, retirement savings, equipment, software, accounting, and taxes. You are also spending time on unpaid work: finding clients, writing proposals, managing invoices, handling admin. A realistic estimate is that 30 to 40 percent of your time as a freelancer is non-billable.&lt;/p&gt;

&lt;p&gt;To match a $120,000 salary, you actually need to charge around $90 to $100 per hour, assuming 30 hours of billable work per week. And that is just to match your salary, not to earn more.&lt;/p&gt;

&lt;p&gt;The formula I use:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;(Target annual income + business expenses + taxes) / (billable hours per year) = minimum hourly rate&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A realistic figure for billable hours per year is 1,200 to 1,500, not 2,080. Anyone who tells you they bill 40 hours a week, 52 weeks a year is either lying or burning out.&lt;/p&gt;
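&lt;p&gt;The formula is easy to sanity-check in a few lines of Python. The expense, tax, and billable-hour figures below are illustrative assumptions, not numbers from this article:&lt;/p&gt;

```python
def minimum_hourly_rate(target_income, business_expenses, extra_taxes, billable_hours):
    """Minimum rate = (target income + expenses + taxes) / billable hours."""
    return (target_income + business_expenses + extra_taxes) / billable_hours

# Matching a $120,000 salary, assuming $10,000 in business costs,
# $15,000 in extra self-employment taxes, and 1,400 billable hours
# a year (all three figures are assumptions for illustration).
rate = minimum_hourly_rate(120_000, 10_000, 15_000, 1_400)
print(round(rate, 2))  # 103.57
```

&lt;p&gt;Note that even modest overhead pushes the break-even rate past $100 per hour, right in line with the $90 to $100 estimate above and well above the salary-anchored $58.&lt;/p&gt;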

&lt;h3&gt;
  
  
  Value-Based Pricing
&lt;/h3&gt;

&lt;p&gt;Hourly pricing is fine when you are starting out. But the real money in freelancing comes from value-based pricing, where you charge based on the value you deliver rather than the time you spend.&lt;/p&gt;

&lt;p&gt;If you build an e-commerce feature that increases a client's revenue by $50,000 per month, charging $5,000 for that project is a bargain for the client, and you have left an enormous amount of money on the table even if the work only took you 20 hours. The value of the work to the business is disconnected from the time it takes you to do it.&lt;/p&gt;

&lt;p&gt;Value-based pricing requires two things: understanding the client's business well enough to quantify the impact, and having the confidence to price based on that impact. Both come with experience.&lt;/p&gt;

&lt;p&gt;Start with hourly rates. Move to project rates once you can reliably estimate scope. Graduate to value-based pricing once you understand how your work affects the client's bottom line.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Client Management Playbook
&lt;/h2&gt;

&lt;p&gt;Finding clients is half the battle. Managing them is the other half. Most freelancing nightmares come from poor client management, not poor technical skills.&lt;/p&gt;

&lt;h3&gt;
  
  
  Always Use a Contract
&lt;/h3&gt;

&lt;p&gt;I cannot stress this enough. Every project needs a written agreement that covers scope, timeline, payment terms, revision limits, and intellectual property. No exceptions. Not even for friends. Especially not for friends.&lt;/p&gt;

&lt;p&gt;The contract protects both you and the client. It sets expectations clearly so there is no ambiguity about what is included and what costs extra. I learned about &lt;a href="https://dev.to/blog/5-common-mistakes-freelancers-make-with-invoicing-and-how-to-fix-them"&gt;invoicing mistakes&lt;/a&gt; the hard way, and most of them could have been prevented with a clear contract.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scope Creep Is Your Biggest Threat
&lt;/h3&gt;

&lt;p&gt;Every freelancer has experienced this. The client asks for "one small change" that turns into a complete redesign. Then another "quick addition" that requires a new database schema. Before you know it, you are doing twice the work for the same price.&lt;/p&gt;

&lt;p&gt;The solution is not to say no to everything. The solution is to document the scope clearly upfront and then, when changes come in, acknowledge them as changes. "That is a great idea. It is outside the original scope, so let me put together a quick estimate for adding it." This is professional, not adversarial. Good clients respect it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Get Paid Before You Start
&lt;/h3&gt;

&lt;p&gt;For new clients, always require a deposit before starting work. Fifty percent upfront is standard for smaller projects. For larger projects, a payment schedule tied to milestones works better: 30 percent upfront, 30 percent at the midpoint, 40 percent on completion.&lt;/p&gt;

&lt;p&gt;Never do significant work before receiving payment. I have heard too many stories of developers completing projects and then chasing payment for months. The deposit is not just about cash flow. It is a commitment signal. Clients who refuse to pay a deposit are clients who will be difficult to collect from later.&lt;/p&gt;

&lt;h3&gt;
  
  
  Weekly Updates, No Surprises
&lt;/h3&gt;

&lt;p&gt;Send a brief status update every week, even if the client does not ask for one. What you worked on, what is coming next, any blockers or decisions needed. This takes five minutes and prevents the most common freelance relationship problem: the client wondering what is happening and losing trust.&lt;/p&gt;




&lt;h2&gt;
  
  
  Building a Sustainable Freelance Business
&lt;/h2&gt;

&lt;p&gt;The difference between freelancing as a side hustle and freelancing as a business comes down to systems.&lt;/p&gt;

&lt;h3&gt;
  
  
  Build a Pipeline, Not a Roller Coaster
&lt;/h3&gt;

&lt;p&gt;The classic freelance cycle: work hard on a project, finish it, realize you have no next client, panic, scramble to find work, repeat. This feast-or-famine cycle is exhausting and unnecessary.&lt;/p&gt;

&lt;p&gt;The fix is to always be marketing, even when you are busy. Set aside three to five hours per week for pipeline activities regardless of your current workload. Update your portfolio. Publish a blog post. Reach out to past clients. Attend a virtual meetup. The work you do today to build your pipeline pays off in two to three months.&lt;/p&gt;

&lt;h3&gt;
  
  
  Specialize Aggressively
&lt;/h3&gt;

&lt;p&gt;Generalist freelancers compete with everyone. Specialist freelancers compete with almost nobody.&lt;/p&gt;

&lt;p&gt;"I build web apps" puts you in a pool of millions. "I build real-time data dashboards for fintech startups using React and D3" puts you in a pool of maybe a few hundred. The second positioning lets you charge two to three times more because the client knows you have solved their exact problem before.&lt;/p&gt;

&lt;p&gt;Specialization feels risky because you are narrowing your market. In practice, it works the opposite way. A smaller market with less competition and higher willingness to pay is better than a massive market where you are competing on price with developers charging $15 per hour.&lt;/p&gt;

&lt;h3&gt;
  
  
  Raise Your Rates Every Six Months
&lt;/h3&gt;

&lt;p&gt;Most freelancers set a rate and never change it. This is a mistake. Your skills improve. Your portfolio grows. Your reputation strengthens. Your rates should reflect that.&lt;/p&gt;

&lt;p&gt;Every six months, raise your rate for new clients by 10 to 20 percent. Existing clients get the old rate for current projects and the new rate for new projects. Some clients will not renew at the higher rate. That is fine. You replace them with clients who value your work at the current rate, and your effective hourly income goes up over time.&lt;/p&gt;

&lt;p&gt;If nobody ever pushes back on your rates, you are charging too little. A healthy ratio is roughly 70 to 80 percent acceptance. If everyone says yes immediately, raise your rate.&lt;/p&gt;
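&lt;p&gt;The compounding effect of those increases is easy to underestimate. Here is a quick sketch, where the $80 starting rate and the 15 percent step (the midpoint of the 10 to 20 percent range) are assumptions for illustration:&lt;/p&gt;

```python
rate = 80.0          # starting hourly rate (assumed for illustration)
for _ in range(4):   # four six-month increases = two years
    rate *= 1.15     # 15% raise each cycle
print(round(rate, 2))  # 139.92
```

&lt;p&gt;Four increases of 15 percent take an $80 rate to about $140, a 75 percent raise over two years without any single jump feeling dramatic to clients.&lt;/p&gt;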




&lt;h2&gt;
  
  
  The AI Question: Will Freelance Developers Be Replaced?
&lt;/h2&gt;

&lt;p&gt;I get asked this constantly, and the honest answer is nuanced.&lt;/p&gt;

&lt;p&gt;AI tools have already replaced certain types of freelance work. Simple landing pages, basic CRUD apps, and template-based websites are increasingly done with AI by the clients themselves. If your freelance business depends on work that AI can do in an afternoon, yes, you have a problem.&lt;/p&gt;

&lt;p&gt;But the freelance work that pays well in 2026 is the work that AI cannot do reliably. Complex architecture decisions. Migrating legacy systems. Performance optimization. Security audits. Building custom integrations between enterprise systems. Debugging production issues under pressure. These require judgment, experience, and the ability to understand a client's specific context. AI is a useful tool in all of these scenarios, but it is not a replacement.&lt;/p&gt;

&lt;p&gt;The developers I know who are earning the most as freelancers in 2026 use AI aggressively. They use Claude Code and Cursor to &lt;a href="https://dev.to/blog/agentic-coding-2026"&gt;write code faster&lt;/a&gt;, generate tests, and handle boilerplate. They use AI to draft proposals, write documentation, and automate repetitive tasks. The AI makes them faster, which means they deliver more value per hour, which justifies higher rates.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://dev.to/blog/ai-productivity-paradox-developers-2026"&gt;AI productivity paradox&lt;/a&gt; is real in employment settings where organizations absorb the gains. In freelancing, you keep the gains. If AI helps you finish a $10,000 project in 40 hours instead of 80, your effective rate just doubled. That is the freelancer's advantage.&lt;/p&gt;




&lt;h2&gt;
  
  
  When to Go Full-Time Freelance
&lt;/h2&gt;

&lt;p&gt;Not everyone should freelance full-time, and the timing matters more than most people think.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Start freelancing on the side.&lt;/strong&gt; Take on one or two projects while you still have a full-time job. This gives you time to build a portfolio, learn the business side, and establish a pipeline without the pressure of needing to pay rent from day one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Go full-time when you have three to six months of expenses saved and a pipeline of at least two to three clients.&lt;/strong&gt; The savings buffer gives you confidence to turn down bad projects and negotiate better rates. The pipeline means you are not starting from zero.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Do not quit your job because you hate it.&lt;/strong&gt; Quit because freelancing is working. Those are very different motivations, and they lead to very different outcomes. Desperation is a terrible negotiating position.&lt;/p&gt;

&lt;p&gt;I have seen developers make the leap too early, take terrible projects at terrible rates just to pay bills, burn out, and go back to full-time work convinced that "freelancing does not work." It works. But it works when you approach it as a business, not as an escape.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Playbook, Condensed
&lt;/h2&gt;

&lt;p&gt;If I were starting freelancing from scratch today, here is what I would do.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Month 1 to 2:&lt;/strong&gt; Keep my day job. Take one side project from my network. Do excellent work. Ask for a testimonial and a referral. Set up a simple portfolio page. Start writing one technical article per week about my specialty.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Month 3 to 4:&lt;/strong&gt; Take two to three more projects. Raise my rate from the first project. Build a profile on Toptal or a curated freelance platform. Continue writing. Start reaching out to past colleagues about potential work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Month 5 to 6:&lt;/strong&gt; Evaluate my pipeline. If I consistently have more inbound requests than I can handle, it is time to consider the transition. Save aggressively. Build a financial buffer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Month 7 onward:&lt;/strong&gt; Go full-time if the numbers work. Specialize. Raise rates. Build systems for pipeline management, client communication, and invoicing. Reinvest time savings from AI tools into higher-value work.&lt;/p&gt;

&lt;p&gt;The developers who treat freelancing as a real business, with real systems, real positioning, and real financial planning, are the ones who build careers they actually enjoy. The ones who wing it end up underpaid and overwhelmed.&lt;/p&gt;

&lt;p&gt;Choose which one you want to be, and plan accordingly.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Open Source as a Growth Engine: How Developers Are Using GitHub to Build Profitable Businesses in 2026</title>
      <dc:creator>Alex Cloudstar</dc:creator>
      <pubDate>Tue, 31 Mar 2026 10:33:17 +0000</pubDate>
      <link>https://forem.com/alexcloudstar/open-source-as-a-growth-engine-how-developers-are-using-github-to-build-profitable-businesses-in-2k82</link>
      <guid>https://forem.com/alexcloudstar/open-source-as-a-growth-engine-how-developers-are-using-github-to-build-profitable-businesses-in-2k82</guid>
      <description>&lt;p&gt;I spent the first half of 2025 trying every growth channel I could find for a developer tool I was building. Paid ads on Google. Cold DMs on X. Newsletter sponsorships. A Product Hunt launch that gave me a nice dopamine hit for 48 hours and then nothing. I was spending more time on distribution than on the product itself, and the results were mediocre at best.&lt;/p&gt;

&lt;p&gt;Then I open-sourced a utility library that solved one specific problem the tool addressed. I cleaned up the code, wrote a solid README, and pushed it to GitHub. No launch strategy. No coordinated social media blitz. Just a useful piece of code sitting on the internet.&lt;/p&gt;

&lt;p&gt;Within three weeks, it had 400 stars. Within two months, developers were opening issues, submitting PRs, and writing blog posts about it. My main product's signups tripled, not because I was marketing harder, but because developers who used the open source library discovered the paid product organically. The trust was already there. They had read my code, seen how I handled issues, and watched me respond to community feedback. The sales conversation was basically over before it started.&lt;/p&gt;

&lt;p&gt;That experience changed how I think about distribution entirely.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Open Source Works as a Growth Channel
&lt;/h2&gt;

&lt;p&gt;I have &lt;a href="https://dev.to/blog/building-is-easy-distribution-is-the-moat-2026"&gt;written before&lt;/a&gt; about how distribution is the only real moat for indie hackers in 2026. AI has commoditized the building part. Anyone can ship a functional product in a weekend. The hard part is getting people to trust you enough to pay for it.&lt;/p&gt;

&lt;p&gt;Open source solves the trust problem in a way that no other marketing channel can.&lt;/p&gt;

&lt;p&gt;When a developer reads your marketing page, they are skeptical. When they read your code, they form a real opinion. They can see how you think, how you structure problems, how you handle edge cases. Code is the most honest form of communication in our industry. You cannot fake competence in a public repository.&lt;/p&gt;

&lt;p&gt;The numbers back this up. GitHub now has over 200 million repositories, yet genuinely useful projects still break through, because developers are actively searching for tools that solve their problems. The micro SaaS segment is projected to grow from $15.7 billion to $59.6 billion by 2030, and a disproportionate share of successful developer tools started as open source projects.&lt;/p&gt;

&lt;p&gt;The growth mechanism is straightforward. You release useful code. Developers find it, use it, and trust it. When you offer a paid version with additional features or managed hosting, the trust transfer is immediate. The open source project becomes both the product and the distribution engine simultaneously.&lt;/p&gt;

&lt;p&gt;This is not theoretical. It is happening right now across the developer tool landscape.&lt;/p&gt;




&lt;h2&gt;
  
  
  The GitHub README Is Your New Landing Page
&lt;/h2&gt;

&lt;p&gt;Most developers treat their README as an afterthought. A quick description, maybe some install instructions, and that is it. This is a mistake that costs real growth.&lt;/p&gt;

&lt;p&gt;Your GitHub README is the first thing a potential user sees. For many developer tools, more people will read the README than will ever visit your marketing site. It needs to do the same job as a landing page: explain the problem, show the solution, make it easy to get started, and guide interested users toward your paid offering.&lt;/p&gt;

&lt;p&gt;Here is what a growth-optimized README looks like in practice:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A clear, specific headline.&lt;/strong&gt; Not "A modern framework for building things." Instead: "Generate type-safe API clients from your OpenAPI spec in 30 seconds." Specificity signals competence.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A screenshot or GIF within the first scroll.&lt;/strong&gt; Developers decide within seconds whether they care. A visual demonstration is worth more than three paragraphs of explanation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A "Get Started" section that works in under two minutes.&lt;/strong&gt; If a developer cannot go from zero to "this works" in under two minutes, you are losing a significant percentage of your potential users. One command to install. One command to run. Show the output.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A link to your paid product that feels natural, not promotional.&lt;/strong&gt; Something like "Need [feature]? Check out [Product Name] for teams" placed after the user has already seen the value of the free tool. The placement matters. After the value demonstration, not before.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Community signals.&lt;/strong&gt; A badge showing contributor count. A link to your Discord. A section showing who is using it. Social proof works in READMEs the same way it works on landing pages.&lt;/p&gt;

&lt;p&gt;I think about this the same way I think about &lt;a href="https://dev.to/blog/seo-for-indie-hackers-what-actually-moved-the-needle-for-me"&gt;SEO for indie hackers&lt;/a&gt;. The content that converts is the content that meets people where they already are and gives them exactly what they need. For developers, that place is GitHub.&lt;/p&gt;




&lt;h2&gt;
  
  
  Choosing What to Open Source
&lt;/h2&gt;

&lt;p&gt;This is where most founders get the strategy wrong. They think about open source as "giving away the product for free" and either refuse to do it or open source the entire thing and struggle to monetize.&lt;/p&gt;

&lt;p&gt;The right approach is surgical. You open source the part that solves a real, standalone problem. You keep the part that provides ongoing value to teams and businesses as the paid product.&lt;/p&gt;

&lt;p&gt;Here are the patterns that work:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The standalone utility.&lt;/strong&gt; A library or CLI tool that solves one specific problem well. It is useful on its own. It happens to integrate beautifully with your paid product, but it does not require the paid product to deliver value. The developers who adopt the utility become your warmest leads.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The self-hosted core.&lt;/strong&gt; The full application is open source and self-hostable. The paid product is the managed, hosted version with additional features like team management, analytics, priority support, and automatic updates. This is the model that companies like Plausible, Cal.com, and Supabase use. It works because the open source version proves the product works, and the paid version saves time and effort.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The framework or toolkit.&lt;/strong&gt; You build an opinionated framework for solving a category of problems. The framework is free. The premium ecosystem around it (templates, plugins, managed infrastructure) is paid. This works particularly well in developer tooling where the open source project creates a platform effect.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The integration layer.&lt;/strong&gt; Your open source project connects two or more popular tools. Developers adopt it because it solves a painful integration problem. The paid version adds monitoring, error handling, or advanced configuration that teams need in production.&lt;/p&gt;

&lt;p&gt;The key principle across all of these: the open source project must be genuinely useful on its own. If it feels like a demo or a stripped-down version, developers will resent it. And developer resentment spreads fast on Hacker News.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Launch Playbook That Actually Gets Stars
&lt;/h2&gt;

&lt;p&gt;I have seen a lot of open source launches fizzle because the developer pushed code to GitHub and waited. That does not work. GitHub has 200 million repositories. Nobody is browsing.&lt;/p&gt;

&lt;p&gt;The launches that gain traction follow a specific pattern, and the timing matters more than most people realize.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Two days before launch: publish content.&lt;/strong&gt; Write an article on Dev.to, Hashnode, or your personal blog explaining the problem your tool solves and why you built it. Use the "How I built X" format. Developers love these. The article should be indexed by Google before your launch traffic arrives, creating a search entry point that persists long after the launch buzz fades.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Launch morning: submit to Hacker News.&lt;/strong&gt; Use the "Show HN" format. Your title should follow "Show HN: [Product Name], a [short description of what it does]." Submit between 8 and 9 AM Eastern time for maximum visibility. A successful Show HN can drive 5,000 to 15,000 views to your repository and hundreds of stars in a single day.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Same day: post to Reddit.&lt;/strong&gt; Target the specific subreddits where your users live. For self-hostable tools, r/selfhosted is essential. For web dev tools, r/webdev. For JavaScript specifically, r/javascript or r/node. Write genuine posts that explain the problem and ask for feedback. Do not write marketing copy. Reddit users can smell promotion from a mile away.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Same day: share on X and LinkedIn.&lt;/strong&gt; A thread format works well on X. Lead with the problem, show the solution, include a screenshot or demo, link to the repo. On LinkedIn, a more narrative approach tends to perform better.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The critical timing detail:&lt;/strong&gt; do all of this within a 24 to 48 hour window. GitHub's trending algorithm rewards velocity. A burst of stars and forks in a short period is more likely to land you on the trending page than the same total spread over weeks. And the trending page creates a compounding loop, because developers browse it daily looking for new tools to try.&lt;/p&gt;

&lt;p&gt;After the initial launch, maintain momentum with regular releases. Every version bump is an excuse to post an update in the communities that cared about the initial launch. "We just shipped v1.2 with [feature] based on community feedback" is a story that resonates every time.&lt;/p&gt;




&lt;h2&gt;
  
  
  Building Community Around Your Project
&lt;/h2&gt;

&lt;p&gt;Stars are a vanity metric. I know that sounds harsh, and I have been guilty of checking star counts compulsively. But stars do not pay bills. Users do. Contributors do. And the path from star to user to customer runs through community.&lt;/p&gt;

&lt;p&gt;The community infrastructure that matters:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A Discord server or GitHub Discussions board.&lt;/strong&gt; Somewhere people can ask questions, report issues, and help each other. Discord tends to work better for real-time engagement. GitHub Discussions works better for searchable, async conversations that help future users.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"Good First Issue" labels.&lt;/strong&gt; This is one of the highest-leverage things you can do. Labeling issues as "good first issue" or "help wanted" signals that your project is welcoming to new contributors. Every contributor who submits a PR becomes an ambassador for your project. They share it with their network. They write about it. They feel ownership.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A CONTRIBUTING.md file.&lt;/strong&gt; It does not need to be long. It needs to exist. Explain how to set up the development environment, how to run tests, and how to submit a PR. Remove friction from the contribution process and more people will contribute.&lt;/p&gt;
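&lt;p&gt;A sketch of how short that file can be (the project name, commands, and section layout here are placeholders, not a standard):&lt;/p&gt;

```markdown
# Contributing

## Development setup

    git clone https://github.com/your-org/your-project
    cd your-project
    npm install

## Running tests

    npm test

## Submitting a pull request

1. Fork the repository and create a branch from `main`.
2. Add or update tests for any behavior change.
3. Open a PR describing the problem it solves and how to verify the fix.
```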

&lt;p&gt;&lt;strong&gt;Regular, visible maintenance.&lt;/strong&gt; Respond to issues promptly. Review PRs within a reasonable timeframe. Ship releases consistently. Nothing kills an open source project faster than visible neglect. If a developer sees that the last commit was three months ago and there are 50 open issues with no responses, they are moving on.&lt;/p&gt;

&lt;p&gt;The community building compounds over time. Early contributors become advocates. Advocates write blog posts and give conference talks. Those posts and talks drive new users who become the next wave of contributors. It is the most organic growth loop available to developer tool companies, and it costs nothing but time and genuine engagement.&lt;/p&gt;




&lt;h2&gt;
  
  
  Monetization: From Free Users to Paying Customers
&lt;/h2&gt;

&lt;p&gt;The hardest part of the open source growth strategy is the transition from "people love my free tool" to "people pay me for the premium version." Get this wrong and you end up with a popular project and zero revenue.&lt;/p&gt;

&lt;p&gt;The monetization models that work in 2026:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Managed hosting and cloud.&lt;/strong&gt; This is the most common and most proven model. Your open source tool can be self-hosted for free. Your paid offering is a managed cloud version that eliminates the ops burden. Developers choose to pay because their time is worth more than the subscription cost. Enterprise teams choose to pay because they need SLAs, uptime guarantees, and someone to call when things break.&lt;/p&gt;

&lt;p&gt;Pricing for this model typically starts at $10 to $50 per month for individual developers and scales to $500 to $2,000 per month for startup teams. Enterprise contracts start around $50,000 per year and go up from there, depending on the scale and support requirements.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Premium features.&lt;/strong&gt; The core product is open source. Advanced features like team collaboration, analytics, audit logs, SSO, and role-based access control are paid. This works well because the features that individuals need are usually different from the features that teams and enterprises need. You are not crippling the free product. You are adding value that only matters at scale.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Support and consulting.&lt;/strong&gt; This is the most straightforward model and the easiest to start with. Your open source project is completely free and full-featured. You charge for priority support, implementation help, custom development, and training. This works particularly well for complex tools where enterprises need guidance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dual licensing.&lt;/strong&gt; The open source version uses a copyleft license like AGPL-3.0, which requires anyone who modifies the code and distributes it, or offers it as a network service, to also open source their changes. Companies that want to use the code in proprietary products purchase a commercial license. This model works for infrastructure tools and libraries that get embedded in larger products.&lt;/p&gt;

&lt;p&gt;One critical mistake to avoid: do not gate features that the community expects to be free. If a feature was open source and you move it behind a paywall, you will face backlash. The boundary between free and paid should be clear from the beginning. Be transparent about what is free and what costs money, and the community will respect it.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Content Flywheel: Turning GitHub Activity into SEO Traffic
&lt;/h2&gt;

&lt;p&gt;Here is something that took me too long to figure out. Your open source project generates content opportunities that most founders completely ignore.&lt;/p&gt;

&lt;p&gt;Every significant feature you ship is a blog post. "How we implemented [feature] in [project name]" is a natural article that targets long-tail keywords, demonstrates expertise, and drives traffic back to your repo.&lt;/p&gt;

&lt;p&gt;Every common question in your issues or Discord is a documentation page. And documentation pages rank in Google. When someone searches "how to [do thing] with [technology]," your documentation can be the answer, and the reader is one click away from discovering your project.&lt;/p&gt;

&lt;p&gt;Every contributor story is social proof. A short case study about how a contributor went from first PR to core maintainer humanizes your project and attracts more contributors.&lt;/p&gt;

&lt;p&gt;Every integration you build is a partnership opportunity. When your open source tool integrates with Notion, Slack, or another popular platform, the integration becomes discoverable through the platform's marketplace. This is passive distribution that compounds as the platform grows. I touched on this when writing about &lt;a href="https://dev.to/blog/building-is-easy-distribution-is-the-moat-2026"&gt;distribution channels&lt;/a&gt;, and integrations remain one of the most underrated growth levers for developer products.&lt;/p&gt;

&lt;p&gt;The content flywheel looks like this: ship features, write about them, rank for related keywords, drive traffic to the repo, convert traffic to users, convert users to contributors, ship more features with contributor help. Each cycle reinforces the next.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Open Source Is Not Good For
&lt;/h2&gt;

&lt;p&gt;I want to be honest about where this strategy does not work, because the last thing you need is to spend six months building an open source community for a product that should never have been open source.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pure B2C consumer apps.&lt;/strong&gt; If your target user is not a developer and will never look at GitHub, open source is not a growth channel for you. It is a distribution strategy that only works when your users are the same people who browse GitHub.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Products where the value is in the data, not the code.&lt;/strong&gt; If your competitive advantage is a proprietary dataset, trained model, or curated content library, open sourcing the code does not give away the moat. But it also does not help much, because the code without the data is not particularly useful.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Markets where speed matters more than trust.&lt;/strong&gt; If you are in a category where the first product to market wins regardless of trust, spending months building an open source community might cost you the market. Sometimes a faster, closed-source approach with aggressive paid marketing is the right call.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solo founders who cannot maintain the project.&lt;/strong&gt; An open source project is a commitment. If you cannot respond to issues, review PRs, and ship updates consistently, the project will stagnate and hurt your brand more than help it. Be honest about your bandwidth before committing.&lt;/p&gt;

&lt;p&gt;For most developer-focused products built by technical founders, though, open source is one of the highest-ROI growth channels available. It costs nothing but time, builds genuine trust, and creates a compounding distribution engine that gets stronger as the community grows.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Playbook, Condensed
&lt;/h2&gt;

&lt;p&gt;If I were starting a new developer tool today, here is exactly what I would do.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Month 1:&lt;/strong&gt; Identify the core utility that solves a real problem on its own. Build it. Write a killer README. Set up community infrastructure. Prepare launch content.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Month 2:&lt;/strong&gt; Execute the coordinated launch. Hacker News, Reddit, Dev.to, X, LinkedIn, all within 48 hours. Respond to every issue and PR within 24 hours. Ship fixes fast. Get to the GitHub trending page.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Month 3 to 4:&lt;/strong&gt; Build the content flywheel. Write about every feature. Document everything. Encourage and support contributors. Start collecting emails from developers who want to know when the paid product launches.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Month 5 to 6:&lt;/strong&gt; Launch the paid product to a warm audience. The developers who have been using the open source tool are your first customers. They know the code works. They trust you. The conversion conversation is simple: "You have been using the free version. Here is what the paid version adds."&lt;/p&gt;

&lt;p&gt;This timeline is faster than most distribution strategies because the trust-building happens in parallel with the product development. You are not waiting to build an audience and then building the product. You are building both simultaneously, and the open source project is the bridge between them.&lt;/p&gt;

&lt;p&gt;The founders who figure this out in 2026 will have an unfair advantage. While everyone else is spending money on ads and cold outreach, they will have a growing community of developers who already know, use, and trust their code.&lt;/p&gt;

&lt;p&gt;That is a moat that no amount of &lt;a href="https://dev.to/blog/solopreneur-automation-stack-2026"&gt;AI-generated marketing&lt;/a&gt; can replicate.&lt;/p&gt;

</description>
      <category>github</category>
      <category>marketing</category>
      <category>opensource</category>
      <category>startup</category>
    </item>
    <item>
      <title>Return to Office Is Not a Productivity Strategy: What Actually Makes Developers Effective in 2026</title>
      <dc:creator>Alex Cloudstar</dc:creator>
      <pubDate>Mon, 30 Mar 2026 09:29:05 +0000</pubDate>
      <link>https://forem.com/alexcloudstar/return-to-office-is-not-a-productivity-strategy-what-actually-makes-developers-effective-in-2026-4hbo</link>
      <guid>https://forem.com/alexcloudstar/return-to-office-is-not-a-productivity-strategy-what-actually-makes-developers-effective-in-2026-4hbo</guid>
      <description>&lt;p&gt;A friend of mine, a staff engineer at a company you have heard of, got the email in January. Five days a week in the office starting March 1st. No exceptions. No negotiation.&lt;/p&gt;

&lt;p&gt;He had been remote for four years. Shipped two major platform migrations. Led the architecture review process for his entire org. His performance reviews were consistently top-tier. None of that mattered. The policy was universal.&lt;/p&gt;

&lt;p&gt;So he started interviewing. Had three offers within six weeks. Took one at a fully remote company for 15% more. His old company is still trying to backfill his role four months later.&lt;/p&gt;

&lt;p&gt;This story is playing out across the industry right now, and the data backs it up.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Numbers Tell a Clear Story
&lt;/h2&gt;

&lt;p&gt;The return-to-office push has accelerated dramatically. Fifty-four percent of Fortune 100 companies now require full-time in-person attendance, up from just 5% in 2023. Amazon, Dell, Meta, Google, Apple, and dozens more have tightened their hybrid policies or eliminated remote work entirely.&lt;/p&gt;

&lt;p&gt;But here is the part that should concern every engineering leader: 80% of companies that implemented RTO mandates have already lost talent because of it. Not just any talent. High performers are 16% more likely to leave when hit with an RTO mandate than average employees. The people you most want to keep are the ones most likely to walk.&lt;/p&gt;

&lt;p&gt;Among software engineers specifically, the numbers are even starker. Only 20% expect to return to full-time office work. Twenty-one percent say they would quit outright if forced, and another 49% would start looking. That is 70% of your engineering team either leaving or planning to leave.&lt;/p&gt;

&lt;p&gt;Meanwhile, the productivity argument that justifies these mandates does not hold up under scrutiny. Ninety percent of hybrid workers report equal or greater productivity compared to full-time office work. Fully remote companies grew revenue 1.7x faster than office-required companies between 2019 and 2024. And here is the kicker: despite companies increasing required office time by 12%, actual office attendance only went up 1 to 3%.&lt;/p&gt;

&lt;p&gt;People are badging in and leaving. The mandate creates the appearance of compliance without the collaboration it supposedly enables.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Debate Is Framed Wrong
&lt;/h2&gt;

&lt;p&gt;Most of the RTO conversation is about location. Should developers be in the office or at home? Three days or five? Tuesdays and Thursdays or whatever the team decides?&lt;/p&gt;

&lt;p&gt;This framing misses the point entirely.&lt;/p&gt;

&lt;p&gt;The question that actually matters for developer productivity is not where someone works. It is whether they can sustain focused, uninterrupted concentration on hard problems. That is what developers do. They think about complex systems, hold intricate mental models in their heads, and translate that thinking into code. This kind of work has a name: deep work. And deep work has specific environmental requirements that most office environments violate by design.&lt;/p&gt;

&lt;p&gt;Workers in open-plan offices, the default layout for most tech companies, are interrupted approximately once every three minutes. The average knowledge worker faces 31.6 interruptions per day. Each interruption costs 15 to 25 minutes of refocusing time. Engineers at enterprise organizations spend more than half their day bouncing between meetings and fragmented work sessions.&lt;/p&gt;

&lt;p&gt;The math is brutal. If a developer gets interrupted six times in a focused work session, they lose up to two and a half hours just getting back into the zone. That is not a productivity preference. That is a cognitive reality.&lt;/p&gt;
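&lt;p&gt;The arithmetic is simple enough to sketch directly. A minimal illustration (the function name is mine; the 15 to 25 minute refocus cost is the figure from the research cited above):&lt;/p&gt;

```typescript
// Illustrative sketch of the refocusing math described above.
// refocusMinutes is the per-interruption recovery cost (15 to 25 minutes).
export function lostHours(interruptions: number, refocusMinutes: number): number {
  return (interruptions * refocusMinutes) / 60;
}

// Six interruptions at the 25-minute worst case costs 2.5 hours:
// lostHours(6, 25) returns 2.5
```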

&lt;p&gt;Deep work matters more for software engineering than for almost any other knowledge work. A developer debugging a distributed system needs to hold the entire call chain, the data flow, the error propagation path, and the timing characteristics in their head simultaneously. One Slack ping, one shoulder tap, one "quick question" from across the desk, and that mental model collapses. Rebuilding it takes real time and real cognitive energy.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why RTO Mandates Actually Hurt Engineering Teams
&lt;/h2&gt;

&lt;p&gt;Let me walk through the specific ways office mandates damage developer productivity, because the abstract "interruptions are bad" argument does not capture the full picture.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Commute Tax
&lt;/h3&gt;

&lt;p&gt;The average American commute is 28 minutes each way. That is nearly an hour of dead time per day, close to five hours per week, that a remote developer spends working, exercising, sleeping, or doing anything more productive than sitting in traffic.&lt;/p&gt;

&lt;p&gt;But the real cost is not just the time. It is the energy. Arriving at the office after a stressful commute, you are already in a depleted cognitive state before writing a single line of code. Research consistently shows that commute stress correlates with lower job satisfaction, higher burnout, and reduced performance. For developers specifically, who need sustained focus, starting the day depleted is a significant handicap.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Meeting Proximity Problem
&lt;/h3&gt;

&lt;p&gt;Something counterintuitive happens when everyone is in the office: meetings multiply. Being physically present makes it trivially easy for someone to pull you into a meeting, ask you to "hop on a quick call," or schedule a "sync" because you are right there. Remote work creates just enough friction that people think twice before requesting someone's time.&lt;/p&gt;

&lt;p&gt;That friction is valuable. It acts as a natural filter against low-value interruptions. Remove the friction and you get more meetings, more interruptions, and less time for the work that actually matters.&lt;/p&gt;

&lt;p&gt;I have talked to dozens of developers who report the same pattern: their in-office days are dominated by meetings and their remote days are when they actually code. If that is the case, what exactly is the office adding?&lt;/p&gt;

&lt;h3&gt;
  
  
  The One-Size-Fits-All Problem
&lt;/h3&gt;

&lt;p&gt;Not every developer does the same kind of work. A junior engineer pairing with a senior mentor benefits from being physically present. A staff engineer designing a new system architecture needs four uninterrupted hours to produce anything meaningful. A mid-level developer fixing a complex bug needs quiet concentration. A team lead doing code review can do that from anywhere.&lt;/p&gt;

&lt;p&gt;Universal RTO mandates treat all of these situations identically. Everyone comes in, regardless of what their day actually looks like. The result is that the people who benefit least from office presence, typically your most senior and most productive engineers, are forced into an environment that actively works against how they do their best work.&lt;/p&gt;

&lt;p&gt;This is not a theory. It matches the attrition data. The engineers leaving over RTO mandates are disproportionately senior. They have the leverage to find remote work elsewhere, and they know from experience that their best output comes from controlled environments with minimal interruption.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Real Reasons Behind RTO
&lt;/h2&gt;

&lt;p&gt;If the productivity argument does not hold up, why are companies pushing so hard?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Real estate.&lt;/strong&gt; Companies signed long-term leases on expensive office space. Empty offices are embarrassing in board meetings and expensive on balance sheets. Filling them with employees is easier to justify than writing off a 10-year lease.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Control and visibility.&lt;/strong&gt; Some leaders genuinely believe that if they cannot see people working, people are not working. This is a management failure, not a location problem. If you cannot evaluate your engineers' output without watching them type, your management processes are broken.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Intentional attrition.&lt;/strong&gt; This one is uncomfortable but real. Twenty-five percent of executives and 18% of HR leaders admit they hoped RTO mandates would cause some employees to quit voluntarily. It is a way to reduce headcount without calling it a layoff and without paying severance. The problem, as the data shows, is that the employees who leave are not the ones you want to lose.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cultural inertia.&lt;/strong&gt; For many executives, "going to work" means going to a physical place. The idea that work is something you do rather than somewhere you go has not fully landed in boardrooms where the average leader is over 50 and built their career in an office.&lt;/p&gt;

&lt;p&gt;None of these reasons are about developer productivity. They are about organizational politics, real estate obligations, and management preferences.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Actually Makes Developers Productive
&lt;/h2&gt;

&lt;p&gt;Instead of debating location, let me talk about what the research and my own experience say actually drives developer output.&lt;/p&gt;

&lt;h3&gt;
  
  
  Uninterrupted Focus Blocks
&lt;/h3&gt;

&lt;p&gt;The single highest-leverage change any engineering organization can make is protecting focus time. Two-hour minimum blocks of uninterrupted concentration, no meetings, no Slack expectations, no shoulder taps. This is where deep work happens. This is where bugs get fixed, features get built, and architectures get designed.&lt;/p&gt;

&lt;p&gt;Some companies have implemented "maker schedules" where meetings are batched into specific windows and the rest of the day is protected. The developers at those companies consistently report higher satisfaction and higher output. The location barely matters. What matters is whether the calendar allows focus.&lt;/p&gt;

&lt;h3&gt;
  
  
  Async-First Communication
&lt;/h3&gt;

&lt;p&gt;The most productive engineering teams I have seen operate async-first. They write things down. Design decisions go into documents. Questions go into threads. Status updates go into dashboards. Synchronous communication (meetings and real-time chat) is reserved for situations that genuinely require it: brainstorming, conflict resolution, relationship building.&lt;/p&gt;

&lt;p&gt;This works regardless of whether the team is remote, hybrid, or in-office. An office full of people communicating asynchronously can be just as effective as a distributed team. The medium matters less than the discipline.&lt;/p&gt;

&lt;h3&gt;
  
  
  Autonomy Over Where and When
&lt;/h3&gt;

&lt;p&gt;Not everyone does their best work at the same time or in the same place. Some developers are most productive at 6am in a quiet home office. Others peak after lunch in a coffee shop. Others thrive in a collaborative office environment two days a week and need solitude the other three.&lt;/p&gt;

&lt;p&gt;The highest-performing teams I know give their engineers autonomy to figure out what works for them, with clear expectations about output and availability. They measure results, not hours in a chair. This is not radical. It is basic management: define what success looks like and let competent adults figure out how to get there.&lt;/p&gt;

&lt;h3&gt;
  
  
  Intentional Collaboration
&lt;/h3&gt;

&lt;p&gt;The strongest argument for office time is collaboration. And it is a real one. Whiteboard sessions, pair programming, team lunches, the serendipitous hallway conversation that sparks an idea. These things have genuine value.&lt;/p&gt;

&lt;p&gt;But they require intention. Forcing everyone into the office five days a week does not automatically create collaboration. It creates proximity. Proximity without purpose is just a room full of people wearing headphones to block out distractions so they can do the focused work they could be doing at home.&lt;/p&gt;

&lt;p&gt;Intentional collaboration means bringing the team together for specific purposes: planning sessions, design reviews, retrospectives, team building. Not every day, not as a default, but when physical presence actually adds value. The rest of the time, let people work where they work best.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Senior Engineers Should Do
&lt;/h2&gt;

&lt;p&gt;If you are a senior developer navigating an RTO mandate, think strategically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Quantify your deep work.&lt;/strong&gt; Track how much focused coding time you get on office days versus remote days. If the difference is significant, that is data you can bring to your manager. "I ship 40% less code on in-office days because I spend 3 hours in meetings and lose another 2 hours to interruptions" is a conversation starter that reframes RTO from a preference issue to a business issue.&lt;/p&gt;
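&lt;p&gt;If you want to automate the tracking, one minimal approach is to pull a day's meetings from a calendar export and sum the gaps long enough to count as focus blocks. Everything here (the names, the two-hour threshold, the minutes-from-midnight representation) is an illustrative assumption:&lt;/p&gt;

```typescript
// Sum the meeting-free gaps in a workday that are long enough to count
// as deep work. Times are minutes from midnight; the threshold is a choice.
type Interval = { start: number; end: number };

export function focusMinutes(
  dayStart: number,
  dayEnd: number,
  meetings: Interval[],
  minBlock: number = 120,
): number {
  const sorted = [...meetings].sort((a, b) => a.start - b.start);
  let cursor = dayStart;
  let total = 0;
  for (const m of sorted) {
    const gap = m.start - cursor;
    if (gap >= minBlock) total += gap; // a gap wide enough to be a focus block
    cursor = Math.max(cursor, m.end);
  }
  if (dayEnd - cursor >= minBlock) total += dayEnd - cursor;
  return total;
}
```

&lt;p&gt;Run it over a week of office days and a week of remote days; the difference between the two totals is the number to bring to the conversation.&lt;/p&gt;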

&lt;p&gt;&lt;strong&gt;Negotiate outcomes, not hours.&lt;/strong&gt; Instead of arguing about where you sit, argue about what you deliver. Propose a trial: let me work remotely for a month and measure my output against the previous in-office month. If the numbers are equal or better, the location argument dissolves.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Know your leverage.&lt;/strong&gt; The &lt;a href="https://dev.to/blog/developer-talent-paradox-ai-shortage-2026"&gt;talent market for senior engineers&lt;/a&gt; is brutally competitive right now: 87.5% of tech leaders describe hiring experienced engineers as "brutal." If your company is forcing you into an environment that hurts your productivity and they will not negotiate, there are companies that will. You are not stuck. You have options.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Protect your energy.&lt;/strong&gt; If RTO is non-negotiable, adapt. Block your calendar aggressively. Communicate your focus hours to your team. Wear noise-canceling headphones without apology. Find a quiet corner, a conference room, a hidden desk. Do whatever it takes to create the conditions for deep work within the constraints you have been given.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Engineering Leaders Should Do
&lt;/h2&gt;

&lt;p&gt;If you are running an engineering team, the RTO question is actually a productivity strategy question. Frame it that way.&lt;/p&gt;

&lt;h3&gt;
  
  
  Measure What Matters
&lt;/h3&gt;

&lt;p&gt;Stop counting butts in seats. Start measuring:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cycle time.&lt;/strong&gt; How long does it take from commit to production? This captures real velocity, not activity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Incident frequency.&lt;/strong&gt; Are your teams shipping reliable code? This reveals whether quality is suffering.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Developer satisfaction surveys.&lt;/strong&gt; Unhappy developers write worse code, miss edge cases, and quit. Satisfaction is a leading indicator of quality.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Retention rates by seniority.&lt;/strong&gt; If your seniors are leaving faster than your juniors, your environment is optimized for the wrong people.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If these metrics are healthy, your team is productive. If they are not, the problem is probably not location.&lt;/p&gt;

&lt;h3&gt;
  
  
  Offer Flexibility as a Retention Tool
&lt;/h3&gt;

&lt;p&gt;In a market where &lt;a href="https://dev.to/blog/ai-washing-layoffs-junior-developer-crisis-2026"&gt;AI is reshaping who gets hired&lt;/a&gt; and senior engineers have unprecedented leverage, flexibility is one of the most cost-effective retention tools available. It costs you nothing to let someone work from home two days a week. Replacing them when they leave costs six to nine months of salary in recruiting, onboarding, and lost productivity.&lt;/p&gt;

&lt;p&gt;The math is straightforward. Flexibility is cheaper than attrition.&lt;/p&gt;
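&lt;p&gt;The back-of-the-envelope version, with an illustrative salary (the six-to-nine-month replacement cost is the figure from the paragraph above; everything else is an assumption):&lt;/p&gt;

```typescript
// Rough cost of losing an engineer: monthsLost is the 6 to 9 months of
// salary-equivalent spent on recruiting, onboarding, and lost productivity.
export function replacementCost(annualSalary: number, monthsLost: number): number {
  return (annualSalary / 12) * monthsLost;
}

// For a $180,000 engineer, the low end alone is $90,000:
// replacementCost(180000, 6) returns 90000
// Two remote days a week costs the company nothing comparable.
```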

&lt;h3&gt;
  
  
  Design for Collaboration, Not Attendance
&lt;/h3&gt;

&lt;p&gt;If you want the benefits of in-person collaboration, design for it. Schedule collaborative activities on specific days. Make office days count by filling them with pair programming, design sessions, and team lunches. Make remote days count by protecting them for deep work.&lt;/p&gt;

&lt;p&gt;The worst outcome is the hybrid schedule where everyone comes in on Tuesday and Thursday but spends both days on Zoom calls with teammates in other offices. That is the worst of both worlds: commute time plus virtual meetings plus an open office. If your hybrid policy creates that situation, fix the policy.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Gen Z Factor
&lt;/h2&gt;

&lt;p&gt;One nuance worth addressing: a Federal Reserve Bank of New York, Harvard, and University of Virginia study found that younger software engineers are actually more likely to come to the office voluntarily than their senior colleagues. Engineers in the same office received 18% more coding feedback, which accelerated their growth.&lt;/p&gt;

&lt;p&gt;This matters. &lt;a href="https://dev.to/blog/developer-talent-paradox-ai-shortage-2026"&gt;Junior developers need mentorship&lt;/a&gt;, and physical proximity genuinely helps with that. The solution is not to force everyone into the office. It is to create structured mentorship that sometimes happens in person and sometimes happens through &lt;a href="https://dev.to/blog/agentic-coding-2026"&gt;agentic coding workflows&lt;/a&gt; and remote pair programming sessions.&lt;/p&gt;

&lt;p&gt;The senior engineers who are leaving over RTO mandates are the same people who should be mentoring juniors. Driving them away makes the mentorship problem worse, not better.&lt;/p&gt;




&lt;h2&gt;
  
  
  Where This Lands
&lt;/h2&gt;

&lt;p&gt;The RTO debate in 2026 feels like it is reaching a resolution, and the resolution is not "everyone back in the office" or "everyone stays remote." It is something more nuanced.&lt;/p&gt;

&lt;p&gt;The World Economic Forum published an article in March 2026 arguing that companies asking "how many days should employees be in the office?" are asking the wrong question. The right question is: "how do we create the conditions for our people to do their best work?"&lt;/p&gt;

&lt;p&gt;For developers, "best work" means sustained focus on hard problems. It means environments that protect concentration. It means communication norms that respect flow states. It means measuring output, not attendance.&lt;/p&gt;

&lt;p&gt;Some developers will do their best work in an office. Many will not. The companies that figure out how to support both, that design for outcomes instead of optics, will attract and retain the best engineers. The companies that force universal mandates and ignore the data will keep losing their most valuable people to competitors who read the same research and made a different choice.&lt;/p&gt;

&lt;p&gt;The office is a tool. It is not a strategy. And treating it like a strategy is costing the industry some of its best talent.&lt;/p&gt;

</description>
      <category>career</category>
      <category>discuss</category>
      <category>productivity</category>
      <category>workplace</category>
    </item>
    <item>
      <title>TypeScript 7.0 and Project Corsa: The Go Rewrite That Changes Everything</title>
      <dc:creator>Alex Cloudstar</dc:creator>
      <pubDate>Mon, 30 Mar 2026 09:28:32 +0000</pubDate>
      <link>https://forem.com/alexcloudstar/typescript-70-and-project-corsa-the-go-rewrite-that-changes-everything-3p62</link>
      <guid>https://forem.com/alexcloudstar/typescript-70-and-project-corsa-the-go-rewrite-that-changes-everything-3p62</guid>
      <description>&lt;p&gt;I remember the first time I ran &lt;code&gt;tsgo&lt;/code&gt; on a real project. Not a hello-world demo, not a contrived benchmark, but our actual production codebase. The kind with 1,200 TypeScript files, a tangled web of generics, and a build that takes long enough for me to make coffee.&lt;/p&gt;

&lt;p&gt;Seven seconds. Down from 74.&lt;/p&gt;

&lt;p&gt;I stared at the terminal for a moment, convinced something had gone wrong. Maybe it skipped type-checking. Maybe it bailed on some files. I ran it again with verbose output. Full type-check. All 1,200 files. Seven seconds.&lt;/p&gt;

&lt;p&gt;That moment is when TypeScript 7.0 stopped being "an interesting Microsoft project" and became something I needed to prepare for seriously. If you write TypeScript professionally, you need to prepare for it too. This is the biggest change to the TypeScript ecosystem since the language was created.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Happened and Why
&lt;/h2&gt;

&lt;p&gt;TypeScript's compiler has always been written in TypeScript (more precisely, TypeScript source that compiles itself down to the JavaScript it runs as). This was a deliberate choice by Anders Hejlsberg and the team back in 2012. Writing the compiler in its own language proved the language was capable and made it easy for the TypeScript community to contribute.&lt;/p&gt;

&lt;p&gt;But self-hosting has a ceiling. JavaScript is single-threaded. It cannot share memory between workers efficiently. Its garbage collector adds unpredictable pauses. For small projects, none of this matters. For monorepos with tens of thousands of files, these limitations mean builds that take minutes, editor startup that takes 30 seconds, and IntelliSense that lags behind your typing.&lt;/p&gt;

&lt;p&gt;Microsoft tried optimizing the existing compiler for years. They squeezed out improvements here and there. But the fundamental bottleneck was the runtime, not the algorithms. You cannot make a single-threaded JavaScript program do the work of a parallel native binary, no matter how clever your optimizations are.&lt;/p&gt;

&lt;p&gt;So the TypeScript team made a bold decision: rewrite the entire compiler in Go. Not a partial port. Not a thin native wrapper around the existing JavaScript code. A full, ground-up reimplementation of the TypeScript compiler as a native binary.&lt;/p&gt;

&lt;p&gt;They chose Go for pragmatic reasons. Go compiles to native code, has excellent concurrency primitives (goroutines and channels), manages memory efficiently, and has a simpler learning model than Rust or C++ for a team that was mostly coming from TypeScript/JavaScript backgrounds. The goal was not to pick the theoretically fastest language. It was to pick the language that would let the team ship a correct, fast compiler in a reasonable timeframe.&lt;/p&gt;

&lt;p&gt;The project was codenamed "Corsa." The binary is called &lt;code&gt;tsgo&lt;/code&gt;. And the results speak for themselves.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Benchmarks Are Not Marketing
&lt;/h2&gt;

&lt;p&gt;I am usually skeptical of benchmark numbers in announcement blog posts. They tend to cherry-pick the best case and bury the caveats. So I was pleasantly surprised that the TypeScript team published real-world numbers across a range of well-known projects.&lt;/p&gt;

&lt;p&gt;Here are the type-checking benchmarks comparing TypeScript 6.0 (&lt;code&gt;tsc&lt;/code&gt;) to TypeScript 7.0 (&lt;code&gt;tsgo&lt;/code&gt;):&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Project&lt;/th&gt;
&lt;th&gt;tsc (6.0)&lt;/th&gt;
&lt;th&gt;tsgo (7.0)&lt;/th&gt;
&lt;th&gt;Speedup&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;VS Code (1.5M lines)&lt;/td&gt;
&lt;td&gt;89.11s&lt;/td&gt;
&lt;td&gt;8.74s&lt;/td&gt;
&lt;td&gt;10.2x&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Sentry&lt;/td&gt;
&lt;td&gt;133.08s&lt;/td&gt;
&lt;td&gt;16.25s&lt;/td&gt;
&lt;td&gt;8.19x&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;TypeORM&lt;/td&gt;
&lt;td&gt;15.80s&lt;/td&gt;
&lt;td&gt;1.06s&lt;/td&gt;
&lt;td&gt;14.9x&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Playwright&lt;/td&gt;
&lt;td&gt;9.30s&lt;/td&gt;
&lt;td&gt;1.24s&lt;/td&gt;
&lt;td&gt;7.51x&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;VS Code's own codebase, 1.5 million lines of TypeScript, went from an 89-second type-check down to under nine seconds. That is not a marginal improvement. That is a different experience entirely.&lt;/p&gt;

&lt;p&gt;The memory improvements are equally significant: roughly 3x reduction in memory usage. For large projects that currently push Node.js to its heap limits during compilation, this alone removes a painful constraint.&lt;/p&gt;

&lt;p&gt;And watch mode, the thing most developers actually use day-to-day, now uses native file-system events instead of polling. The result is sub-100ms restart times when a file changes. You save, and by the time your eyes move from the editor to the terminal, the build is done.&lt;/p&gt;

&lt;p&gt;These are not theoretical numbers. Multiple teams are already running &lt;code&gt;tsgo&lt;/code&gt; in their daily workflows. The TypeScript team reports that stability is going well, with many teams using Corsa without blocking issues.&lt;/p&gt;




&lt;h2&gt;
  
  
  TypeScript 6.0: The Bridge You Need to Cross First
&lt;/h2&gt;

&lt;p&gt;Before you can think about TypeScript 7.0, you need to understand TypeScript 6.0. Microsoft shipped it on March 23, 2026, and it exists for one specific purpose: to prepare your codebase for the breaking changes coming in 7.0.&lt;/p&gt;

&lt;p&gt;TypeScript 6.0 is the last major version built on the original JavaScript compiler. There will be no 6.1. The JavaScript codebase is done. All engineering effort is now focused on the native port.&lt;/p&gt;

&lt;p&gt;What 6.0 does is introduce deprecation warnings for every feature that 7.0 will remove or change. If your project builds clean on TypeScript 6.0 with zero deprecation warnings, you are in good shape for the migration. If it does not, those warnings tell you exactly what to fix.&lt;/p&gt;

&lt;p&gt;Think of TypeScript 6.0 as a migration assistant. It is not exciting on its own. It is the step that makes the next step possible.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Breaks in TypeScript 7.0
&lt;/h2&gt;

&lt;p&gt;Let me be direct about the breaking changes, because some of them will affect a lot of projects.&lt;/p&gt;

&lt;h3&gt;
  
  
  Strict Mode Is On by Default
&lt;/h3&gt;

&lt;p&gt;This is the big one. TypeScript 7.0 enables &lt;code&gt;--strict&lt;/code&gt; by default. That means &lt;code&gt;noImplicitAny&lt;/code&gt;, &lt;code&gt;strictNullChecks&lt;/code&gt;, &lt;code&gt;strictFunctionTypes&lt;/code&gt;, and all the other strict flags are on unless you explicitly turn them off.&lt;/p&gt;

&lt;p&gt;If your project already uses &lt;code&gt;"strict": true&lt;/code&gt; in tsconfig.json, you are fine. Nothing changes for you.&lt;/p&gt;

&lt;p&gt;If your project has strict mode off, or partially on, this will surface a large number of new type errors. These are not new bugs. They are existing issues that your compiler was configured to ignore. Fixing them is the right thing to do, but doing it as part of a compiler migration adds unnecessary risk. Turn strict mode on now, in TypeScript 5.x or 6.0, and fix the errors before 7.0 arrives.&lt;/p&gt;
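&lt;p&gt;If you want to stage the change, a minimal tsconfig sketch looks like this (these are standard compiler options, nothing 7.0-specific):&lt;/p&gt;

```json
{
  "compilerOptions": {
    // Matches the TypeScript 7.0 default ahead of time:
    // enables noImplicitAny, strictNullChecks, strictFunctionTypes, etc.
    "strict": true
  }
}
```

&lt;p&gt;If the resulting error count is overwhelming, you can enable the individual strict flags one at a time instead of flipping &lt;code&gt;strict&lt;/code&gt; all at once.&lt;/p&gt;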

&lt;h3&gt;
  
  
  ES5 Target Is Gone
&lt;/h3&gt;

&lt;p&gt;TypeScript 7.0 drops support for &lt;code&gt;--target es5&lt;/code&gt;. The minimum target is now ES2015 (ES6). If you are still targeting ES5, you need to decide whether your users actually need ES5 compatibility. In 2026, the answer is almost certainly no. Every modern browser, Node.js version, and runtime supports ES2015. If you have a genuine need for ES5 output, you will need a separate transpilation step (Babel or similar) after TypeScript compilation.&lt;/p&gt;

&lt;p&gt;The default target in 7.0 is &lt;code&gt;es2025&lt;/code&gt;, which is a sensible default for most projects.&lt;/p&gt;

&lt;h3&gt;
  
  
  baseUrl Is Removed
&lt;/h3&gt;

&lt;p&gt;This one will hit a lot of projects. The &lt;code&gt;--baseUrl&lt;/code&gt; flag, which many codebases use for path aliases like &lt;code&gt;import { foo } from 'components/Foo'&lt;/code&gt;, is gone in TypeScript 7.0.&lt;/p&gt;

&lt;p&gt;The replacement is &lt;code&gt;paths&lt;/code&gt; with explicit mappings in your tsconfig.json, or switching your &lt;code&gt;moduleResolution&lt;/code&gt; to &lt;code&gt;bundler&lt;/code&gt; (if you use a bundler like &lt;a href="https://dev.to/blog/vite-8-rolldown-oxc-2026"&gt;Vite&lt;/a&gt; or webpack) or &lt;code&gt;nodenext&lt;/code&gt; (if you run in Node.js directly). The migration is usually straightforward, but if your project has hundreds of imports relying on baseUrl, it is worth doing the conversion now rather than during the 7.0 upgrade.&lt;/p&gt;
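&lt;p&gt;A before-and-after sketch, with an illustrative alias (your alias names will differ):&lt;/p&gt;

```json
{
  "compilerOptions": {
    // Before (removed in 7.0): "baseUrl": "./src"
    // After: explicit mappings; since TS 4.1, "paths" works without "baseUrl"
    "paths": {
      "components/*": ["./src/components/*"]
    }
  }
}
```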

&lt;h3&gt;
  
  
  Legacy Module Resolution Is Gone
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;--moduleResolution node10&lt;/code&gt; (the old &lt;code&gt;node&lt;/code&gt; option) is removed. You need to use &lt;code&gt;bundler&lt;/code&gt;, &lt;code&gt;nodenext&lt;/code&gt;, or &lt;code&gt;node18&lt;/code&gt; instead.&lt;/p&gt;

&lt;p&gt;This is a change that has been recommended for years. If you are still on &lt;code&gt;node10&lt;/code&gt; resolution, you are already in a configuration that does not match how modern tools resolve modules. The migration to &lt;code&gt;bundler&lt;/code&gt; or &lt;code&gt;nodenext&lt;/code&gt; is the right move regardless of TypeScript 7.0.&lt;/p&gt;

&lt;h3&gt;
  
  
  AMD, UMD, and SystemJS Output Removed
&lt;/h3&gt;

&lt;p&gt;If you are still emitting AMD, UMD, or SystemJS modules, TypeScript 7.0 will not do that for you anymore. The supported module outputs are ESM and CommonJS. Given that the &lt;a href="https://dev.to/blog/why-javascript-is-my-one-and-only-a-senior-developers-perspective"&gt;JavaScript ecosystem&lt;/a&gt; has firmly moved toward ESM, this is a reasonable cleanup. But if you have legacy build pipelines that depend on AMD or UMD output from TypeScript, plan your migration now.&lt;/p&gt;

&lt;h3&gt;
  
  
  Plugin API Changes
&lt;/h3&gt;

&lt;p&gt;This is the one that affects tool authors more than application developers. The current TypeScript compiler plugin API will not work in 7.0. A new API is being built, but the surface area is not finalized yet. If you maintain a TypeScript language service plugin, transformer, or any tool that uses the TypeScript compiler API (think: custom linters, code generators, documentation tools), this is a significant concern.&lt;/p&gt;

&lt;p&gt;The TypeScript team has acknowledged this and is working on a migration path, but "working on it" is not the same as "done." Keep an eye on the &lt;code&gt;microsoft/typescript-go&lt;/code&gt; repository for updates.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Practical Migration Path
&lt;/h2&gt;

&lt;p&gt;Here is my recommended approach, based on what the TypeScript team suggests and what I have seen work.&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase 1: Get to TypeScript 6.0
&lt;/h3&gt;

&lt;p&gt;Upgrade your project to TypeScript 6.0 and enable the &lt;code&gt;--deprecation&lt;/code&gt; flag. Fix every deprecation warning. This is the single most valuable preparation step because it surfaces exactly the issues 7.0 will enforce.&lt;/p&gt;

&lt;p&gt;Specific things to address:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Enable strict mode&lt;/strong&gt; if you have not already. Run &lt;code&gt;tsc --strict&lt;/code&gt; and work through the errors. This is the largest source of migration pain, and it has nothing to do with the Go rewrite. It is just good practice that 7.0 happens to enforce.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Switch away from baseUrl&lt;/strong&gt; for module resolution. Use explicit &lt;code&gt;paths&lt;/code&gt; mappings or change your &lt;code&gt;moduleResolution&lt;/code&gt; strategy.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Update your target&lt;/strong&gt; to at least ES2015. If you are targeting ES5, test whether your users actually need it. Drop it if you can.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Move off node10 module resolution.&lt;/strong&gt; Switch to &lt;code&gt;bundler&lt;/code&gt; if you use a build tool, or &lt;code&gt;nodenext&lt;/code&gt; if you target Node.js directly.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
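&lt;p&gt;Put together, a 7.0-ready tsconfig sketch might look like this (the values are illustrative; pick the target and resolution that match your runtime):&lt;/p&gt;

```json
{
  "compilerOptions": {
    "strict": true,
    "target": "es2022",
    "module": "esnext",
    "moduleResolution": "bundler",
    "paths": {
      "components/*": ["./src/components/*"]
    }
  }
}
```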

&lt;h3&gt;
  
  
  Phase 2: Test With the Preview Compiler
&lt;/h3&gt;

&lt;p&gt;The native preview compiler is available right now as &lt;code&gt;@typescript/native-preview&lt;/code&gt; on npm. Install it and run &lt;code&gt;tsgo&lt;/code&gt; alongside your existing &lt;code&gt;tsc&lt;/code&gt; to compare results.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;bun add &lt;span class="nt"&gt;-d&lt;/span&gt; @typescript/native-preview
npx tsgo &lt;span class="nt"&gt;--noEmit&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Compare the output against &lt;code&gt;tsc --noEmit&lt;/code&gt;. If both report the same errors (or tsgo reports no errors), you are in good shape. If tsgo surfaces new issues, fix them now while you still have the JavaScript compiler as a fallback.&lt;/p&gt;
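&lt;p&gt;One low-friction way to keep that comparison running is a pair of package.json scripts (the script names are my own convention):&lt;/p&gt;

```json
{
  "scripts": {
    "typecheck": "tsc --noEmit",
    "typecheck:native": "tsgo --noEmit"
  }
}
```

&lt;p&gt;Run both in CI for a while and treat any divergence as a bug to investigate before you switch.&lt;/p&gt;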

&lt;p&gt;The TypeScript team has run this comparison against 20,000 test cases and only 74 show differences. Your project will very likely produce identical results.&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase 3: Migrate When 7.0 Ships
&lt;/h3&gt;

&lt;p&gt;When TypeScript 7.0 hits stable (expected mid-to-late 2026), the migration should be straightforward if you did Phases 1 and 2. Update your &lt;code&gt;typescript&lt;/code&gt; dependency, replace &lt;code&gt;tsc&lt;/code&gt; with &lt;code&gt;tsgo&lt;/code&gt; in your build scripts, and verify everything still works.&lt;/p&gt;

&lt;p&gt;For most projects, this will be a version bump and nothing else. The breaking changes are all things you will have already addressed in Phase 1.&lt;/p&gt;




&lt;h2&gt;
  
  
  What This Means for the Ecosystem
&lt;/h2&gt;

&lt;p&gt;The performance improvements in TypeScript 7.0 have ripple effects across the entire &lt;a href="https://dev.to/blog/bun-vs-nodejs-is-it-time-to-switch-in-2026"&gt;JavaScript development toolchain&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Editor Experience Transforms
&lt;/h3&gt;

&lt;p&gt;VS Code's TypeScript integration, which powers IntelliSense, error highlighting, go-to-definition, rename, and find-all-references, is backed by the TypeScript language service. When that language service runs as a native binary instead of a Node.js process, every single editor operation gets faster.&lt;/p&gt;

&lt;p&gt;The TypeScript team reports approximately 8x improvement in editor startup time. That means opening a large project and having IntelliSense work immediately, instead of waiting 15 to 30 seconds for the language service to index your codebase. For developers who open and close projects frequently, or who work across multiple repositories, this is a quality-of-life improvement that compounds throughout the day.&lt;/p&gt;

&lt;h3&gt;
  
  
  CI Pipelines Get Faster
&lt;/h3&gt;

&lt;p&gt;If your CI pipeline runs &lt;code&gt;tsc --noEmit&lt;/code&gt; as a type-checking step (and it should), that step just got 8 to 10x faster. For large projects where type-checking takes 2 to 3 minutes, that drops to 15 to 20 seconds. In a team that pushes 50 PRs a day, the cumulative time savings are significant.&lt;/p&gt;

&lt;p&gt;This also means you can afford to run type-checking more often. Pre-commit hooks with type-checking become practical even for large codebases. The performance penalty that made developers skip type-checking in development effectively disappears.&lt;/p&gt;
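&lt;p&gt;As a sketch, a bare git hook is enough to try this (it assumes the native preview is installed; adapt for husky or lefthook if you use them):&lt;/p&gt;

```shell
#!/bin/sh
# .git/hooks/pre-commit -- abort the commit if type-checking fails
npx tsgo --noEmit || {
  echo "Type errors found; commit aborted."
  exit 1
}
```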

&lt;h3&gt;
  
  
  Monorepos Benefit the Most
&lt;/h3&gt;

&lt;p&gt;The biggest beneficiaries are large monorepos. TypeScript 7.0's Go runtime enables true parallel processing with shared-memory threads for multi-project builds. If your monorepo has 20 packages that depend on each other, &lt;code&gt;tsgo&lt;/code&gt; can build independent packages in parallel instead of sequentially. This is where the 10x improvement is most impactful, because the current compiler cannot parallelize across project references at all.&lt;/p&gt;
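&lt;p&gt;That parallelism builds on the same project references you may already have configured; a minimal root tsconfig sketch for a two-package monorepo:&lt;/p&gt;

```json
{
  // Root config: independent referenced projects can be checked in parallel
  "files": [],
  "references": [
    { "path": "./packages/core" },
    { "path": "./packages/app" }
  ]
}
```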




&lt;h2&gt;
  
  
  What This Does Not Change
&lt;/h2&gt;

&lt;p&gt;It is worth being clear about what stays the same.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The TypeScript language itself is unchanged.&lt;/strong&gt; Every type, every feature, every piece of syntax you know works exactly the same. This is a compiler rewrite, not a language redesign. Your code does not need to change unless it was using deprecated features that should have been updated anyway.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Your tsconfig.json structure is the same.&lt;/strong&gt; The configuration format is identical. &lt;code&gt;tsgo&lt;/code&gt; reads the same tsconfig files with the same options (minus the removed ones).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Your development workflow is the same.&lt;/strong&gt; You still write &lt;code&gt;.ts&lt;/code&gt; files, you still get type errors, you still compile to JavaScript. The output is the same JavaScript. The types are the same types. Everything that made TypeScript TypeScript is still there. It just runs faster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Type safety is identical.&lt;/strong&gt; The Go compiler implements the same type-checking rules. The 20,000 test case comparison with only 74 differences (mostly edge cases in unfinished features) demonstrates that the rewrite is faithful to the original semantics. You are not trading correctness for speed.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Bigger Picture
&lt;/h2&gt;

&lt;p&gt;TypeScript has always been pragmatic. It did not try to replace JavaScript. It added a type system that made JavaScript more manageable at scale. It did not require special tooling from day one. It compiled to plain JavaScript that ran anywhere.&lt;/p&gt;

&lt;p&gt;The Go rewrite follows the same philosophy. It is not changing what TypeScript is. It is removing a ceiling that limited how well TypeScript could serve large projects and fast development workflows. The teams building massive applications in TypeScript, the ones who felt the build time pain the most, are the ones who will benefit the most.&lt;/p&gt;

&lt;p&gt;There is also a broader lesson here about pragmatic engineering. The TypeScript team could have rewritten in Rust for maximum theoretical performance. They chose Go because it let them ship a correct compiler faster. They could have kept optimizing the JavaScript compiler for another five years. They chose a clean break because the ceiling was the runtime, not the algorithms. These are the kinds of engineering decisions that move ecosystems forward.&lt;/p&gt;

&lt;p&gt;If you are still on TypeScript 4.x or early 5.x, the migration path goes through 6.0 first. Get to 6.0, fix the deprecation warnings, test with the preview compiler, and you will be ready when 7.0 ships. The performance improvement alone is worth the effort, and the migration is designed to be incremental. You do not need to rewrite anything. You just need to stop using things that should have been updated years ago.&lt;/p&gt;

&lt;p&gt;TypeScript just got ten times faster. That is not a marketing number. It is a measured reality across real-world projects. And for the millions of developers who build with TypeScript every day, it changes the feel of the entire development experience. Not what you build, but how fast every step of building it becomes.&lt;/p&gt;

&lt;p&gt;That is the kind of improvement worth getting excited about.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Local-First Software Is Winning: A Developer Guide to Building Without the Cloud</title>
      <dc:creator>Alex Cloudstar</dc:creator>
      <pubDate>Sat, 28 Mar 2026 13:11:37 +0000</pubDate>
      <link>https://forem.com/alexcloudstar/local-first-software-is-winning-a-developer-guide-to-building-without-the-cloud-2dcl</link>
      <guid>https://forem.com/alexcloudstar/local-first-software-is-winning-a-developer-guide-to-building-without-the-cloud-2dcl</guid>
      <description>&lt;p&gt;I lost two hours of work last Tuesday because my internet went down for eleven minutes.&lt;/p&gt;

&lt;p&gt;Not because I was using some exotic cloud-dependent tool. I was editing a document in a mainstream SaaS product. The kind with a billion-dollar valuation and a "works offline" checkbox somewhere in their marketing materials. When my connection dropped, the app froze. When it came back, my changes were gone. The conflict resolution had decided that the empty server state was more recent than my local edits.&lt;/p&gt;

&lt;p&gt;Eleven minutes of downtime. Two hours of work. And a growing suspicion that maybe the way we build software has a fundamental problem.&lt;/p&gt;

&lt;p&gt;I am not the only one thinking this. Local-first software, a paradigm where your data lives on your device first and syncs to the cloud second, has been gaining serious momentum throughout 2026. FOSDEM dedicated an entire developer room to local-first, CRDTs, and sync engines this year. Companies like Linear and Figma have proven that local-first principles can power products used by millions. And the developer tooling has finally matured to the point where you do not need a PhD in distributed systems to build this way.&lt;/p&gt;

&lt;p&gt;Here is what local-first actually means, why it matters now, and how to start building with it.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Local-First Actually Means
&lt;/h2&gt;

&lt;p&gt;The term "local-first" comes from a 2019 research paper by Ink &amp;amp; Switch, a lab focused on the future of computing. Their definition is straightforward: local-first software keeps a full copy of your data on your device. The cloud exists for sync and backup, not as the source of truth.&lt;/p&gt;

&lt;p&gt;This is not the same as "offline mode." Most apps treat offline as a degraded state. You get a warning banner. Some features stop working. When you reconnect, the app tries to reconcile what happened while you were offline, and sometimes it gets that reconciliation wrong.&lt;/p&gt;

&lt;p&gt;Local-first flips this entirely. Your device is the primary. Reads and writes happen locally, instantly, with zero network latency. The network is secondary. When it is available, your changes sync to other devices and to a server. When it is not, you keep working normally. There is no degraded state because local is the default state.&lt;/p&gt;

&lt;p&gt;The seven principles from the original Ink &amp;amp; Switch paper still hold:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;No spinners.&lt;/strong&gt; Operations happen locally, so they feel instant.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Your data is yours.&lt;/strong&gt; It lives on your device in a format you control.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The network is optional.&lt;/strong&gt; Everything works offline. Sync happens when possible.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Collaboration works.&lt;/strong&gt; Multiple people can edit the same data, and conflicts resolve automatically.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data outlives the software.&lt;/strong&gt; Your data is stored in open formats, not locked in a proprietary cloud.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security and privacy by default.&lt;/strong&gt; Data does not have to leave your device unless you choose to share it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Full user control.&lt;/strong&gt; You decide where your data goes and who can access it.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If this sounds idealistic, consider that some of the most successful products of the past few years already follow these principles. Figma handles real-time collaboration with local-first techniques under the hood. Linear stores data locally and syncs in the background. Obsidian keeps everything as plain Markdown files on your machine. These are not niche experiments. They are products that millions of developers and designers use daily.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Local-First Is Gaining Ground in 2026
&lt;/h2&gt;

&lt;p&gt;Three things have converged to make local-first practical this year in ways it was not before.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cloud Fatigue Is Real
&lt;/h3&gt;

&lt;p&gt;The cloud solved enormous problems. It also created new ones. Monthly SaaS bills that compound across dozens of tools. Vendor lock-in that makes switching feel impossible. Outages that take down your entire workflow because a server in Virginia had a bad day. And a growing awareness, especially in Europe with stricter data regulations, that sending every keystroke to someone else's computer is not always a great idea.&lt;/p&gt;

&lt;p&gt;Developers are feeling this acutely. If you are building a product and every feature requires a round trip to a server, your infrastructure costs scale linearly with usage. If you are building local-first, most computation happens on the user's device. Your server costs drop dramatically because the server is doing sync, not processing.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Tooling Has Matured
&lt;/h3&gt;

&lt;p&gt;Two years ago, building a local-first app meant rolling your own sync layer. You needed deep knowledge of distributed systems, conflict resolution, and data replication. The developer experience was rough.&lt;/p&gt;

&lt;p&gt;That has changed. Tools like PowerSync, ElectricSQL, Zero, and Triplit have turned what used to be a research project into a framework you can npm install. The abstractions are solid. The documentation is real. You can build a local-first app today with the same level of effort as building a traditional client-server app.&lt;/p&gt;

&lt;h3&gt;
  
  
  Users Expect Instant
&lt;/h3&gt;

&lt;p&gt;We have trained users to expect instant responses. Every major app invests heavily in perceived performance. Skeleton screens, optimistic updates, background prefetching. All of these are workarounds for the fundamental latency of client-server architecture.&lt;/p&gt;

&lt;p&gt;Local-first eliminates the need for these workarounds. When your reads and writes happen locally, the UI is genuinely instant. Not "feels instant because we show a skeleton." Actually instant. Zero milliseconds of latency for every interaction. Users notice the difference even if they cannot articulate why the app "feels fast."&lt;/p&gt;




&lt;h2&gt;
  
  
  CRDTs: The Technology That Makes It Work
&lt;/h2&gt;

&lt;p&gt;The hardest problem in local-first is this: if two people edit the same data on different devices while offline, what happens when they reconnect?&lt;/p&gt;

&lt;p&gt;Traditional databases solve this with locks or "last write wins." One person's changes overwrite the other's. That is fine for a banking system where you want strict ordering. It is terrible for a collaborative document where both people's changes matter.&lt;/p&gt;

&lt;p&gt;Conflict-free Replicated Data Types, or CRDTs, solve this differently. They are data structures designed from the ground up to handle concurrent edits without conflicts. The math guarantees that no matter what order updates are received, every device eventually converges on the same state. No conflicts. No data loss. No central server needed to mediate.&lt;/p&gt;

&lt;p&gt;Here is the simplest way to think about it. Imagine two people have a shared shopping list. Person A adds "milk" while offline. Person B adds "eggs" while offline. When they reconnect, a CRDT-based list will have both "milk" and "eggs." Neither change is lost, and no conflict resolution was needed. The data structure itself handles it.&lt;/p&gt;
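&lt;p&gt;The production libraries implement far more sophisticated structures, but the core convergence idea fits in a few lines. Here is a toy grow-only set, the simplest CRDT; this is an illustration of the math, not how Automerge or Yjs work internally:&lt;/p&gt;

```typescript
// A grow-only set (G-Set): the simplest CRDT. State is the set of items
// ever added; merging is set union, which is commutative, associative,
// and idempotent -- so replicas converge in any merge order.
type GSet = Set<string>;

function add(replica: GSet, item: string): GSet {
  return new Set(replica).add(item);
}

// Merging two replicas is a union: no conflicts possible, nothing lost.
function merge(a: GSet, b: GSet): GSet {
  return new Set([...a, ...b]);
}

// Two offline edits to the same shopping list...
let replicaA: GSet = new Set(["bread"]);
let replicaB: GSet = new Set(["bread"]);
replicaA = add(replicaA, "milk"); // person A, offline
replicaB = add(replicaB, "eggs"); // person B, offline

// ...converge to the same state regardless of merge order.
const merged = merge(replicaA, replicaB);
console.log([...merged].sort()); // [ 'bread', 'eggs', 'milk' ]
```

&lt;p&gt;Because union is commutative, associative, and idempotent, it does not matter who syncs with whom first, or how many times. Real CRDTs extend this idea to support removal, ordering, and rich text.&lt;/p&gt;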

&lt;p&gt;This works for more complex scenarios too. CRDT implementations exist for text documents (collaborative editing like Google Docs), JSON-like objects (application state), counters, sets, maps, and more. Libraries like Automerge and Yjs have been battle-tested in production for years and handle edge cases that would take months to solve from scratch.&lt;/p&gt;

&lt;p&gt;The practical implication for you as a developer: you do not need to understand the mathematics behind CRDTs to use them. The libraries abstract the complexity. You interact with what feels like a normal JavaScript object, and the sync layer handles replication and conflict resolution behind the scenes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;repo&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Repo&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;network&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;BrowserWebSocketClientAdapter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;wss://sync.example.com&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)],&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;

&lt;span class="c1"&gt;// Create a document that syncs automatically&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;handle&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;repo&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;items&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[],&lt;/span&gt; &lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Shopping List&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;

&lt;span class="c1"&gt;// Local changes sync to all connected peers&lt;/span&gt;
&lt;span class="nx"&gt;handle&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;change&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;doc&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;doc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;items&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;push&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;milk&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;added&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That is it. You create a document, make changes locally, and the library handles syncing those changes to every connected device. No manual conflict resolution. No server-side merge logic. The CRDT guarantees convergence.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Current Tool Landscape
&lt;/h2&gt;

&lt;p&gt;The local-first ecosystem in 2026 has matured significantly. Here are the tools worth knowing about, organized by what they do best.&lt;/p&gt;

&lt;h3&gt;
  
  
  Sync Engines (Bring Your Own Backend)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;PowerSync&lt;/strong&gt; is the most production-ready option if you have an existing Postgres, MongoDB, or MySQL database. It syncs a subset of your server data to a local SQLite database on the client. Your backend stays the same. You add PowerSync as a sync layer between your existing database and your frontend. This is the lowest-friction path for teams migrating an existing app toward local-first.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ElectricSQL&lt;/strong&gt; takes a similar approach but focuses specifically on Postgres. It provides active-active sync between your Postgres database and SQLite on the client. If your stack is already Postgres-based, ElectricSQL gives you local-first capabilities without changing your data model.&lt;/p&gt;

&lt;h3&gt;
  
  
  Full-Stack Local-First Frameworks
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Zero&lt;/strong&gt; and &lt;strong&gt;Triplit&lt;/strong&gt; are designed for building collaborative apps from scratch. They handle the entire stack: local storage, sync, conflict resolution, and real-time updates. If you are building something like a Notion clone or a project management tool, these frameworks give you multiplayer collaboration out of the box.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Jazz&lt;/strong&gt; takes an opinionated approach to local-first, providing a framework where collaborative data structures are the foundation of your app. It is particularly good for apps where real-time collaboration is a core feature, not an afterthought.&lt;/p&gt;

&lt;h3&gt;
  
  
  CRDT Libraries
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Automerge&lt;/strong&gt; is the gold standard for document-level CRDTs. It handles complex nested data structures and has excellent support for undo/redo, branching, and merging. If you are building something with rich, structured data (think: a collaborative spreadsheet or design tool), Automerge gives you the most flexibility.&lt;/p&gt;
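
&lt;p&gt;Automerge's API is bigger than one example can show, but the convergence guarantee that all CRDTs share is easiest to see in the simplest one, a grow-only set, where merge is just set union. This is a hand-rolled sketch of the idea, not Automerge code:&lt;/p&gt;

```typescript
// Grow-only set (G-Set): the simplest CRDT. Elements are only ever added,
// and merge is set union, which is commutative, associative, and idempotent,
// so replicas converge no matter the order in which updates arrive.
class GSet<T> {
  private items = new Set<T>()
  add(item: T) { this.items.add(item) }
  merge(other: GSet<T>) { other.items.forEach((i) => this.items.add(i)) }
  values() { return [...this.items].sort() }
}

// Two replicas diverge while offline...
const phone = new GSet<string>()
const laptop = new GSet<string>()
phone.add('milk')
laptop.add('bread')

// ...and converge on reconnect, regardless of merge order
phone.merge(laptop)
laptop.merge(phone)
console.log(phone.values()) // ['bread', 'milk']
```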

&lt;p&gt;&lt;strong&gt;Yjs&lt;/strong&gt; is the go-to for text editing. It powers the real-time collaboration in many popular editors and has a massive ecosystem of bindings for frameworks like ProseMirror, TipTap, CodeMirror, and Monaco. If your app involves collaborative text editing, Yjs is likely the right choice.&lt;/p&gt;

&lt;h3&gt;
  
  
  Local Databases
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;SQLite (via wa-sqlite or better-sqlite3)&lt;/strong&gt; is the local database that powers most of these solutions. Running SQLite in the browser via WebAssembly has become reliable and performant. On mobile and desktop (Electron, Tauri), native SQLite gives you a proper relational database on the user's device.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;RxDB&lt;/strong&gt; provides a reactive, local-first database with built-in replication to any backend. It is framework-agnostic and works across browser, Node.js, React Native, and Electron.&lt;/p&gt;




&lt;h2&gt;
  
  
  When Local-First Makes Sense (and When It Does Not)
&lt;/h2&gt;

&lt;p&gt;Local-first is not the right architecture for everything. Here is a practical framework for deciding.&lt;/p&gt;

&lt;h3&gt;
  
  
  Local-First Is a Strong Fit When:
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Your app handles user-generated content.&lt;/strong&gt; Note-taking apps, design tools, writing tools, project management, personal databases. Anything where the user creates and owns data benefits enormously from local-first. The data belongs to the user conceptually, so it makes sense for it to live on their device.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Offline capability matters.&lt;/strong&gt; Field service apps, mobile apps used in areas with poor connectivity, tools for travelers. If your users cannot always count on a stable internet connection, local-first gives them a seamless experience regardless.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Real-time collaboration is a core feature.&lt;/strong&gt; CRDTs were designed for exactly this. If multiple people need to edit the same data simultaneously, local-first with CRDTs gives you collaboration that works even when participants are intermittently offline.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You want to reduce server costs.&lt;/strong&gt; When most computation happens on the client, your server does sync and backup rather than processing every request. This can dramatically reduce infrastructure costs, especially for apps with many concurrent users. For indie hackers and solo founders watching their &lt;a href="https://dev.to/blog/saas-pricing-indie-hackers-2026"&gt;SaaS costs&lt;/a&gt;, this matters.&lt;/p&gt;

&lt;h3&gt;
  
  
  Local-First Is Not Ideal When:
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;You need a single source of truth with strict consistency.&lt;/strong&gt; Financial transactions, inventory systems, booking platforms. Anything where two people should not be able to "claim" the same resource simultaneously. CRDTs provide eventual consistency, not strong consistency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Your data is primarily server-generated.&lt;/strong&gt; Analytics dashboards, news feeds, search engines. If the server produces the data and the client just displays it, local-first adds complexity without much benefit.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Your app depends on real-time server-side processing.&lt;/strong&gt; Live video streaming, multiplayer gaming with server authority, real-time bidding. These need a server in the loop for every interaction.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Regulatory requirements mandate server-side control.&lt;/strong&gt; Some industries require that data be stored and processed on controlled servers. Local-first can complicate compliance in these cases.&lt;/p&gt;




&lt;h2&gt;
  
  
  Getting Started: A Practical Path
&lt;/h2&gt;

&lt;p&gt;If you want to try local-first without rewriting your entire app, here is a pragmatic approach.&lt;/p&gt;

&lt;h3&gt;
  
  
  Start with One Feature
&lt;/h3&gt;

&lt;p&gt;Pick a feature in your app where offline capability would genuinely improve the experience. A note editor, a task list, a settings panel. Something self-contained.&lt;/p&gt;

&lt;p&gt;Build that one feature with a local-first approach. Use PowerSync or ElectricSQL to sync it with your existing database. Keep everything else the same. This gives you real experience with the paradigm without a risky full rewrite.&lt;/p&gt;

&lt;h3&gt;
  
  
  Choose Your Sync Strategy
&lt;/h3&gt;

&lt;p&gt;The biggest architectural decision is how data moves between devices. You have three main options:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Server-mediated sync&lt;/strong&gt; is the most familiar. Clients sync through your server. The server stores the canonical copy and mediates sync between devices. This is what PowerSync and ElectricSQL do. It is the easiest to reason about and fits naturally with existing backend architectures.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Peer-to-peer sync&lt;/strong&gt; removes the server from the sync path. Devices communicate directly, often using WebRTC. This gives you true decentralization but adds complexity around discovery (how do devices find each other?) and availability (what if the peer you need to sync with is offline?).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hybrid&lt;/strong&gt; combines both. Sync through a server when available, fall back to peer-to-peer when not. This gives you the reliability of server-mediated sync with the resilience of P2P. Most production local-first apps end up here.&lt;/p&gt;

&lt;h3&gt;
  
  
  Handle the Hard Parts
&lt;/h3&gt;

&lt;p&gt;Two things trip up developers new to local-first:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Schema changes.&lt;/strong&gt; When your data model evolves, you need to handle migrations on every client device, not just on the server. Plan for this from the start. Use versioned schemas and write migration logic that runs locally.&lt;/p&gt;
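
&lt;p&gt;One common shape for this is an ordered list of migration functions, where each client upgrades its own local data from whatever version it last saw. A sketch, with the document shape and the migrations themselves invented for illustration:&lt;/p&gt;

```typescript
// Hypothetical local document shape; a real app would persist this on-device
type LocalDoc = { schemaVersion: number; [key: string]: unknown }

// Each entry upgrades a document by exactly one version
const migrations: Array<(doc: LocalDoc) => LocalDoc> = [
  // v0 -> v1: rename `name` to `title`
  ({ name, ...rest }) => ({ ...rest, title: name, schemaVersion: 1 }),
  // v1 -> v2: add a `tags` field with a default value
  (doc) => ({ ...doc, tags: [], schemaVersion: 2 }),
]

// Runs on every client at startup, from the doc's current version forward
function migrate(doc: LocalDoc): LocalDoc {
  let current = doc
  while (current.schemaVersion < migrations.length) {
    current = migrations[current.schemaVersion](current)
  }
  return current
}

const migrated = migrate({ schemaVersion: 0, name: 'Shopping List' })
console.log(migrated.schemaVersion) // 2
```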

&lt;p&gt;&lt;strong&gt;Authorization.&lt;/strong&gt; In a traditional app, the server checks permissions on every request. In a local-first app, the data is already on the client before the server can say no. You need to think about authorization differently. Typically, the sync layer handles this by controlling which data subsets sync to which clients.&lt;/p&gt;




&lt;h2&gt;
  
  
  What This Means for the Future of Web Development
&lt;/h2&gt;

&lt;p&gt;I have been watching the &lt;a href="https://dev.to/blog/why-javascript-is-my-one-and-only-a-senior-developers-perspective"&gt;JavaScript ecosystem&lt;/a&gt; evolve for years, and local-first feels like one of the more significant shifts. Not because it replaces everything, but because it changes the default assumption.&lt;/p&gt;

&lt;p&gt;For a decade, the default assumption was: data lives on the server, the client asks for it. Local-first flips that to: data lives on the client, the server helps distribute it.&lt;/p&gt;

&lt;p&gt;This is not theoretical. It changes how you think about &lt;a href="https://dev.to/blog/stop-obsessing-over-the-perfect-stack"&gt;building products&lt;/a&gt;. You stop worrying about API latency because there is no API call for local reads. You stop worrying about loading states because the data is already there. You stop worrying about offline because offline is just the normal state with sync paused.&lt;/p&gt;

&lt;p&gt;The ecosystem is still young compared to traditional client-server tooling. You will hit rough edges. Documentation sometimes assumes distributed systems knowledge. Some tools are better suited for greenfield projects than migrations. But the trajectory is clear. Every major sync engine and CRDT library shipped significant improvements in the first quarter of 2026 alone.&lt;/p&gt;

&lt;p&gt;If you are building a new product this year, especially anything involving user-generated content or collaboration, local-first deserves serious consideration. The tools are ready. The developer experience is good. And your users will notice the difference the first time they lose internet and your app keeps working like nothing happened.&lt;/p&gt;




&lt;h2&gt;
  
  
  Resources to Go Deeper
&lt;/h2&gt;

&lt;p&gt;If you want to explore further, here are the starting points I found most useful:&lt;/p&gt;

&lt;p&gt;The original &lt;strong&gt;Ink &amp;amp; Switch paper&lt;/strong&gt; ("Local-First Software: You Own Your Data, in Spite of the Cloud") is still the best introduction to the philosophy. It is long but worth reading in full.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;lofi.so&lt;/strong&gt; maintains a directory of local-first projects, tools, and frameworks. It is the most comprehensive catalog of the ecosystem.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;FOSDEM 2026 Local-First track&lt;/strong&gt; recordings cover everything from beginner introductions to advanced CRDT implementations. The talks on sync engine internals are particularly good if you want to understand what these tools do under the hood.&lt;/p&gt;

&lt;p&gt;And if you want to see local-first in action, clone one of the example apps from Automerge, Yjs, or Zero. Build a simple collaborative todo list. Watch two browser tabs edit the same data. Disconnect one, make changes on both, reconnect, and watch them merge without conflicts. That moment when it just works is when local-first clicks.&lt;/p&gt;

&lt;p&gt;The cloud is not going away. But the assumption that every app needs to be cloud-first? That is already changing. And for the developers building the next generation of tools, that shift opens up possibilities that the always-online era never could.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>database</category>
      <category>softwaredevelopment</category>
      <category>systemdesign</category>
    </item>
    <item>
      <title>The Developer Talent Paradox: Why AI Is Making the Shortage Worse, Not Better</title>
      <dc:creator>Alex Cloudstar</dc:creator>
      <pubDate>Sat, 28 Mar 2026 13:11:36 +0000</pubDate>
      <link>https://forem.com/alexcloudstar/the-developer-talent-paradox-why-ai-is-making-the-shortage-worse-not-better-33a1</link>
      <guid>https://forem.com/alexcloudstar/the-developer-talent-paradox-why-ai-is-making-the-shortage-worse-not-better-33a1</guid>
      <description>&lt;p&gt;A CTO I know fired three junior developers in January. Replaced them with Claude Code and Cursor subscriptions. Saved roughly $280,000 in annual salary. Called it "AI-driven efficiency" in the board presentation.&lt;/p&gt;

&lt;p&gt;By March, his two senior engineers were drowning. They were spending four to five hours per day reviewing AI-generated code instead of doing architecture work. One of them quit. The other asked for a 40% raise and a title bump to stay. The $280,000 savings evaporated in eight weeks, and now the CTO is trying to hire a senior engineer in a market where the average time-to-fill for that role is four months.&lt;/p&gt;

&lt;p&gt;This story is not unique. It is the story of 2026.&lt;/p&gt;

&lt;p&gt;AI tools were supposed to solve the developer shortage. The pitch was simple: AI handles the routine work, developers become more productive, you need fewer people to ship the same output. Every tech executive heard this pitch. Many of them believed it. And the decision they made, cut junior headcount and lean on AI, is creating a talent crisis that will take a decade to unwind.&lt;/p&gt;

&lt;p&gt;The numbers are in. They tell a story that nobody in leadership wants to hear.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Numbers Behind the Paradox
&lt;/h2&gt;

&lt;p&gt;Let me lay out the data because this is one of those situations where the narrative and the reality are moving in opposite directions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The demand side is getting worse, not better.&lt;/strong&gt; Fifty percent of tech leaders now cite recruiting and retaining skilled technology workers as their number one business challenge in 2026. That is the highest it has ever been. Over 90% of organizations globally report severe IT talent shortages. The potential economic impact? More than $5.5 trillion in unrealized output.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Junior hiring has collapsed.&lt;/strong&gt; Entry-level developer opportunities have dropped approximately 67% since 2022. In the UK specifically, entry-level tech roles fell 46% in 2024, with projections hitting 53% by end of 2026. CS graduate unemployment sits at 6 to 7%, up from historical lows. Tech internship postings have declined 30% since 2023.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Senior demand is surging.&lt;/strong&gt; While junior roles disappear, 87.5% of tech leaders describe hiring experienced engineers as "brutal." Time-to-hire for senior roles has stretched to three to six months. Companies are competing aggressively for the same shrinking pool of senior talent, driving up salaries and making retention harder.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI adoption is near-universal but has not reduced headcount needs.&lt;/strong&gt; 84% of developers now use AI coding tools. Yet an NBER study surveying almost 6,000 executives across the US, UK, Germany, and Australia found that over 80% of firms report no measurable impact from AI on either employment or productivity over the past three years.&lt;/p&gt;

&lt;p&gt;Read that again. Eighty percent of firms. No impact. On employment or productivity. Despite near-universal adoption.&lt;/p&gt;

&lt;p&gt;The paradox is this: AI tools are everywhere, they have not reduced the need for developers, but companies have cut hiring as if they had.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Hollowed-Out Career Ladder
&lt;/h2&gt;

&lt;p&gt;I wrote about &lt;a href="https://dev.to/blog/ai-washing-layoffs-junior-developer-crisis-2026"&gt;AI washing and the junior developer crisis&lt;/a&gt; a few weeks ago. The response from readers was overwhelming, and the most common message I got was from senior engineers saying: "You described my company exactly."&lt;/p&gt;

&lt;p&gt;But the crisis goes deeper than layoffs disguised as innovation. There is a structural problem forming that will take years to become fully visible, and by then it will be extremely expensive to fix.&lt;/p&gt;

&lt;p&gt;Here is how the career ladder in software engineering has traditionally worked:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Junior developers&lt;/strong&gt; do the work that teaches them how software actually operates. They write boilerplate. They fix bugs. They handle simple features. They learn from code review. They absorb institutional knowledge. Over three to five years, they become mid-level.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mid-level developers&lt;/strong&gt; take on larger features, make architectural decisions within defined boundaries, and start mentoring juniors. Over another three to five years, they become seniors.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Senior developers&lt;/strong&gt; own systems. They make architectural decisions that affect the entire product. They review code. They mentor. They are the bridge between business requirements and technical implementation. They become staff engineers, architects, engineering managers, and eventually CTOs.&lt;/p&gt;

&lt;p&gt;When companies cut junior hiring, they do not just lose today's juniors. They lose the pipeline that produces tomorrow's seniors. A 67% hiring cliff between 2024 and 2026 means 67% fewer potential engineering leaders in 2031 to 2036. You cannot skip a generation and expect the pipeline to be fine.&lt;/p&gt;

&lt;p&gt;Some executives think AI fills the gap. "AI can do what juniors used to do, so we do not need juniors." This misunderstands what junior developers actually are. They are not cheap labor for simple tasks. They are the training ground for the humans who will run your engineering organization in five years.&lt;/p&gt;

&lt;p&gt;An AI agent can write a CRUD endpoint. It cannot develop judgment about when to push back on a product requirement. It cannot learn when a shortcut will create &lt;a href="https://dev.to/blog/ai-generated-code-technical-debt-2026"&gt;technical debt&lt;/a&gt; that costs ten times more to fix later. It cannot build the relationships with other engineers, product managers, and designers that make complex projects succeed.&lt;/p&gt;

&lt;p&gt;You need humans for that. And those humans need to start somewhere.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Code Review Bottleneck
&lt;/h2&gt;

&lt;p&gt;This is the part of the paradox that gets the least attention, and it is the part that is hurting senior engineers the most.&lt;/p&gt;

&lt;p&gt;When companies replace junior developers with AI coding tools, the work does not disappear. It shifts. AI generates the code that juniors used to write. But someone still needs to review that code. Someone needs to verify that it does not introduce &lt;a href="https://dev.to/blog/ai-generated-code-security-risks-2026"&gt;security vulnerabilities&lt;/a&gt;. Someone needs to check that it follows the codebase's patterns and does not create architectural problems.&lt;/p&gt;

&lt;p&gt;That someone is a senior engineer. And the volume of code they need to review has exploded.&lt;/p&gt;

&lt;p&gt;Before AI tools, a senior engineer might review two to three pull requests per day from junior team members. Each PR was relatively small because it was written by a human who can only type so fast. The review was also a teaching moment. The senior explains why a certain approach is better. The junior learns. Both benefit.&lt;/p&gt;

&lt;p&gt;Now, a single developer with an AI agent can generate ten pull requests in a day. The code compiles. The tests pass. But the senior still needs to read it, understand it, and verify that it is actually good. By some measures, AI-generated code carries roughly a 20% error rate and produces 8x more duplicated code blocks than human-written code. Those are not bugs that break immediately. They are quality issues that compound over time.&lt;/p&gt;

&lt;p&gt;The result: senior engineers spend most of their time reviewing AI-generated code instead of doing the work that actually requires their expertise. Architecture. System design. Mentoring. Strategic technical decisions. The highest-value activities that senior engineers are uniquely qualified to do are being crowded out by review work that used to be distributed across a team.&lt;/p&gt;

&lt;p&gt;A METR study found that experienced developers using AI tools took 19% longer on tasks while believing they were 20% faster. The perception gap is real. Teams feel more productive because more code is being generated. But velocity, actual features shipped and working correctly, is not improving at the rate the activity metrics suggest.&lt;/p&gt;

&lt;p&gt;This creates a vicious cycle. Senior engineers burn out. They leave. The remaining seniors have even more review burden. They burn out faster. The company tries to hire replacements and discovers that the market for senior engineers is brutally competitive because every other company is in the same position.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Context-Switching Tax
&lt;/h2&gt;

&lt;p&gt;There is another dimension to this that connects to what I wrote about &lt;a href="https://dev.to/blog/ai-brain-fry-developer-burnout-2026"&gt;AI brain fry&lt;/a&gt;: the sheer cognitive cost of working at "machine speed."&lt;/p&gt;

&lt;p&gt;Before AI tools, a developer might work on one or two problems in a day. Each problem required focused thought, design, implementation, and testing. The pace was human-scale. You had time to think deeply about each problem because the implementation took long enough to allow it.&lt;/p&gt;

&lt;p&gt;With AI generating code quickly, a developer can touch six different problems in a single day. Each one "only takes an hour with AI." But context-switching between six problems is brutally expensive for the human brain. Research consistently shows that every context switch costs 15 to 25 minutes of refocusing time. If you switch contexts six times in a day, you lose up to two and a half hours just getting your brain back into the right mode.&lt;/p&gt;

&lt;p&gt;The work feels faster because the typing is faster. But the thinking, the part that actually matters for code quality, is being squeezed. Developers are doing more shallow work across more problems instead of deep work on fewer problems. The output looks impressive in sprint metrics. The quality tells a different story.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Companies Should Actually Do
&lt;/h2&gt;

&lt;p&gt;The companies that will come out of this well are the ones treating the talent pipeline as infrastructure, not a line item to optimize.&lt;/p&gt;

&lt;h3&gt;
  
  
  Keep Hiring Juniors, But Change What They Do
&lt;/h3&gt;

&lt;p&gt;The role of a junior developer needs to evolve, not disappear. Instead of writing boilerplate that AI handles better, juniors should be learning through AI-assisted pair programming. Give them an AI tool and a senior mentor. Their job becomes: use AI to generate code, review what it produces, understand why certain approaches are better, and develop the judgment that turns a junior into a mid-level engineer.&lt;/p&gt;

&lt;p&gt;This is not charity. This is investment in your future senior engineering pipeline. The companies cutting junior hiring to save money in 2026 will be paying three times the market rate for senior engineers in 2030 because they will have no internal candidates to promote.&lt;/p&gt;

&lt;h3&gt;
  
  
  Protect Senior Engineers from Review Overload
&lt;/h3&gt;

&lt;p&gt;If your seniors are spending more than 30% of their time on code review, you have a structural problem. Solutions that work:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tiered review processes.&lt;/strong&gt; Not every AI-generated PR needs senior eyes. Use automated tools (linters, static analysis, security scanners) as a first pass. Have mid-level engineers handle routine reviews. Reserve senior review for architectural decisions and complex changes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI-assisted code review.&lt;/strong&gt; Use AI to pre-screen PRs for common issues before a human looks at them. This does not replace human review. It reduces the surface area that needs human attention.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Review budgets.&lt;/strong&gt; Set explicit limits on how many review hours seniors spend per week. Protect their time for architecture, mentoring, and deep technical work. If the review queue is always full, that is a signal to hire, not to squeeze more out of existing seniors.&lt;/p&gt;

&lt;h3&gt;
  
  
  Measure What Matters
&lt;/h3&gt;

&lt;p&gt;Stop measuring lines of code generated. Start measuring:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Time-to-resolution for production incidents.&lt;/strong&gt; This reveals actual code quality.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Onboarding time for new team members.&lt;/strong&gt; This reveals codebase health.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Senior engineer retention.&lt;/strong&gt; This reveals whether your team structure is sustainable.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Percentage of senior time spent on architecture vs review.&lt;/strong&gt; This reveals whether you are using your most expensive talent correctly.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  What Developers Should Do
&lt;/h2&gt;

&lt;p&gt;If you are navigating this market as an individual developer, the paradox creates both risks and opportunities.&lt;/p&gt;

&lt;h3&gt;
  
  
  If You Are a Junior Developer
&lt;/h3&gt;

&lt;p&gt;The market is genuinely tough for you right now. But here is the counterintuitive thing: the developers who enter the industry during this contraction will have less competition in three to five years. The pipeline is shrinking, which means fewer mid-level and senior candidates in the future. If you get in now, you are investing in a supply-constrained future.&lt;/p&gt;

&lt;p&gt;Focus on the skills AI cannot replace. &lt;a href="https://dev.to/blog/learning-to-code-2026"&gt;System design&lt;/a&gt;, debugging complex distributed systems, understanding business context, communicating technical tradeoffs to non-technical stakeholders. These are the skills that make someone a senior engineer, and they can only be learned through practice, not through AI tutoring.&lt;/p&gt;

&lt;p&gt;Build things. Ship them. Write about what you learned. The developers who stand out in 2026 are the ones with real projects and real scars, not the ones with the longest list of tutorial completions. I have &lt;a href="https://dev.to/blog/side-project-to-first-dollar-developer-monetization-2026"&gt;written about this before&lt;/a&gt;: ship something real, no matter how small.&lt;/p&gt;

&lt;h3&gt;
  
  
  If You Are a Senior Developer
&lt;/h3&gt;

&lt;p&gt;You have never had more leverage. Use it wisely.&lt;/p&gt;

&lt;p&gt;If your company is piling review burden on you without adjusting expectations elsewhere, that is a negotiation point. Articulate the problem in business terms: "I am spending 60% of my time reviewing AI-generated code. That means I am spending 40% of my time on architecture and system design. If you hired one additional reviewer, I could spend 80% on architecture, which would directly reduce our incident rate and improve shipping velocity."&lt;/p&gt;

&lt;p&gt;Numbers move conversations. Feelings do not.&lt;/p&gt;

&lt;p&gt;Also consider whether your current role is developing you or just consuming you. The best senior engineers in 2026 are the ones who actively protect time for deep work, maintain their skills through real technical challenges, and do not let the code review treadmill turn them into full-time reviewers with an engineering title.&lt;/p&gt;

&lt;h3&gt;
  
  
  If You Are a Founder
&lt;/h3&gt;

&lt;p&gt;If you are &lt;a href="https://dev.to/blog/building-is-easy-distribution-is-the-moat-2026"&gt;building a product&lt;/a&gt; as a solo founder or small team, the talent paradox actually works in your favor in one specific way: the best developers are frustrated. Many senior engineers at large companies are burned out from review overload and want to work on something where they can do real engineering again. If your startup offers meaningful technical challenges, autonomy, and the chance to build rather than review, you can attract talent that would otherwise be out of your price range.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Ten-Year View
&lt;/h2&gt;

&lt;p&gt;This is what concerns me most. The decisions being made in 2026, cutting junior hiring, overloading seniors, treating AI as a headcount replacement, are creating a compound problem.&lt;/p&gt;

&lt;p&gt;In 2026, it looks like cost savings.&lt;/p&gt;

&lt;p&gt;By 2028, it looks like a retention crisis as burned-out seniors leave faster than they can be replaced.&lt;/p&gt;

&lt;p&gt;By 2030, it looks like a skills gap. The juniors who were never hired in 2024 to 2026 are not available as mid-level engineers. The mid-level engineers who should have become seniors did not get the mentoring because their seniors were too busy reviewing AI code.&lt;/p&gt;

&lt;p&gt;By 2032, it looks like an industry-wide talent crisis that makes the current shortage seem mild.&lt;/p&gt;

&lt;p&gt;A Harvard study of 62 million workers found that when companies adopt generative AI, junior developer employment drops about 9 to 10% within six quarters. That is the real number, not the 67% headline. But even 9 to 10% sustained over multiple years creates a pipeline gap that compounds. And the companies cutting 67% of junior roles are going much further than what the data supports.&lt;/p&gt;

&lt;p&gt;The fix is not complicated. It is just uncomfortable. It means spending money now on talent development that will not pay off for three to five years. It means hiring juniors even when AI can technically do their current tasks. It means structuring teams so that seniors have time to mentor, not just review. It means measuring engineering health, not just engineering output.&lt;/p&gt;

&lt;p&gt;The companies that invest in their talent pipeline today will have a massive competitive advantage in 2030. The ones that optimize for quarterly savings will be the ones desperately trying to hire in the most competitive talent market the industry has ever seen.&lt;/p&gt;

&lt;p&gt;The paradox is clear. AI was supposed to make developers less necessary. Instead, it made the right developers more necessary than ever. And the industry is responding by producing fewer of them.&lt;/p&gt;

&lt;p&gt;That math does not work. The sooner we acknowledge it, the sooner we can start fixing it.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>career</category>
      <category>management</category>
      <category>productivity</category>
    </item>
    <item>
      <title>How to Build a Discord Bot with Bun and TypeScript in 2026</title>
      <dc:creator>Alex Cloudstar</dc:creator>
      <pubDate>Fri, 27 Mar 2026 08:05:44 +0000</pubDate>
      <link>https://forem.com/alexcloudstar/how-to-build-a-discord-bot-with-bun-and-typescript-in-2026-1hin</link>
      <guid>https://forem.com/alexcloudstar/how-to-build-a-discord-bot-with-bun-and-typescript-in-2026-1hin</guid>
      <description>&lt;p&gt;Every Discord bot tutorial on the internet uses Node.js. I get it. Node has been the default for a decade, and Discord.js was built to run on it. But in 2026, there is a better option.&lt;/p&gt;

&lt;p&gt;Bun runs TypeScript natively without a build step. It loads &lt;code&gt;.env&lt;/code&gt; files automatically with no dotenv package needed. It cold-starts in under 15ms, installs packages in seconds, and ships with a built-in test runner and bundler. If you are building a Discord bot today and you reach for Node.js by default, you are choosing a slower development experience for no reason.&lt;/p&gt;
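
&lt;p&gt;To make the no-dotenv point concrete: with a &lt;code&gt;.env&lt;/code&gt; file next to your entry point, Bun populates &lt;code&gt;process.env&lt;/code&gt; before your code runs. A small sketch (the &lt;code&gt;DISCORD_TOKEN&lt;/code&gt; name is just this guide's convention, and the helper is hypothetical):&lt;/p&gt;

```typescript
// Under Bun, process.env is already populated from .env: no dotenv import.
// requireEnv is a small hypothetical helper that fails fast on missing config.
function requireEnv(name: string): string {
  const value = process.env[name]
  if (!value) throw new Error(`${name} is not set. Add it to your .env file.`)
  return value
}

// const token = requireEnv('DISCORD_TOKEN') // throws early if .env lacks the key
```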

&lt;p&gt;This guide builds a Discord bot from scratch using Bun, TypeScript, and discord.js v14. By the end you will have a bot with slash commands, embed responses, event handling, and a deployment setup that actually makes sense. Every line of code is here, and it all runs on Bun.&lt;/p&gt;




&lt;h2&gt;
  
  
  What You Need Before Starting
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Bun&lt;/strong&gt; installed. If you have not installed it yet: &lt;code&gt;curl -fsSL https://bun.sh/install | bash&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;A Discord account with a server where you have admin privileges&lt;/li&gt;
&lt;li&gt;Basic TypeScript familiarity. You do not need to be an expert, but you should know what a type annotation looks like&lt;/li&gt;
&lt;li&gt;A code editor. VS Code works well here&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Step 1: Create Your Discord Application
&lt;/h2&gt;

&lt;p&gt;Before writing any code, you need a bot account in Discord's developer system.&lt;/p&gt;

&lt;p&gt;Go to &lt;a href="https://discord.com/developers/applications" rel="noopener noreferrer"&gt;discord.com/developers/applications&lt;/a&gt; and click &lt;strong&gt;New Application&lt;/strong&gt;. Give it a name, something like "my-bun-bot" for now. Click Create.&lt;/p&gt;

&lt;p&gt;In the left sidebar, click &lt;strong&gt;Bot&lt;/strong&gt;. You will see a section for the bot token. Click &lt;strong&gt;Reset Token&lt;/strong&gt;, confirm the action, and copy what it gives you. Store it somewhere safe immediately. You cannot see it again once you leave the page, and this token is the key to your bot's account.&lt;/p&gt;

&lt;p&gt;Under the &lt;strong&gt;Privileged Gateway Intents&lt;/strong&gt; section on the same page, enable &lt;strong&gt;Message Content Intent&lt;/strong&gt;. Your bot needs this to read message content in servers.&lt;/p&gt;

&lt;p&gt;Now go to &lt;strong&gt;OAuth2&lt;/strong&gt; in the sidebar, then &lt;strong&gt;URL Generator&lt;/strong&gt;. Under Scopes, check &lt;code&gt;bot&lt;/code&gt; and &lt;code&gt;applications.commands&lt;/code&gt;. Under Bot Permissions, check:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Read Messages/View Channels&lt;/li&gt;
&lt;li&gt;Send Messages&lt;/li&gt;
&lt;li&gt;Embed Links&lt;/li&gt;
&lt;li&gt;Use Slash Commands&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Copy the generated URL at the bottom of the page. Open it in your browser and add the bot to your test server.&lt;/p&gt;

&lt;p&gt;You also need your &lt;strong&gt;Application ID&lt;/strong&gt;. Go back to the General Information tab and copy it. You will need this alongside the token.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 2: Initialize the Project
&lt;/h2&gt;

&lt;p&gt;Create the project directory and let Bun set it up:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir &lt;/span&gt;discord-bot &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cd &lt;/span&gt;discord-bot
bun init &lt;span class="nt"&gt;-y&lt;/span&gt;
bun add discord.js
bun add &lt;span class="nt"&gt;-d&lt;/span&gt; @types/bun
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That is the entire dependency install. No &lt;code&gt;ts-node&lt;/code&gt;, no &lt;code&gt;nodemon&lt;/code&gt;, no &lt;code&gt;dotenv&lt;/code&gt;. Bun handles all of that natively.&lt;/p&gt;

&lt;p&gt;Create the folder structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;discord-bot/
  src/
    index.ts
    commands/
      ping.ts
      userinfo.ts
      serverstats.ts
    events/
      ready.ts
      interactionCreate.ts
    utils/
      deploy-commands.ts
    types.d.ts
  .env
  .gitignore
  package.json
  tsconfig.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create your &lt;code&gt;.gitignore&lt;/code&gt; right now before you forget:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.env
node_modules/
dist/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add this &lt;code&gt;tsconfig.json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"compilerOptions"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"target"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ESNext"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"module"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ESNext"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"moduleResolution"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"bundler"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"types"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"bun-types"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"strict"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"skipLibCheck"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"include"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"src/**/*"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;"types": ["bun"]&lt;/code&gt; entry resolves to the &lt;code&gt;@types/bun&lt;/code&gt; package you installed earlier and gives you type definitions for Bun-specific APIs like &lt;code&gt;import.meta.dir&lt;/code&gt; and &lt;code&gt;Bun.env&lt;/code&gt;. The &lt;code&gt;moduleResolution: "bundler"&lt;/code&gt; setting is the right option for projects that use a bundler or a runtime like Bun instead of standard Node module resolution.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 3: Set Up Environment Variables
&lt;/h2&gt;

&lt;p&gt;Create your &lt;code&gt;.env&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight properties"&gt;&lt;code&gt;&lt;span class="py"&gt;DISCORD_TOKEN&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;your_bot_token_here&lt;/span&gt;
&lt;span class="py"&gt;CLIENT_ID&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;your_application_id_here&lt;/span&gt;
&lt;span class="py"&gt;GUILD_ID&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;your_test_server_id_here&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;GUILD_ID&lt;/code&gt; is the ID of your test server. Right-click on your server name in Discord and click "Copy Server ID" (you need Developer Mode enabled in Discord settings for this option to appear).&lt;/p&gt;

&lt;p&gt;Here is the part where Bun saves you from yourself: you do not need &lt;code&gt;dotenv&lt;/code&gt;. Bun reads &lt;code&gt;.env&lt;/code&gt; files automatically when you run any script. Access your variables anywhere with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;token&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DISCORD_TOKEN&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;clientId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;CLIENT_ID&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;guildId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;GUILD_ID&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No &lt;code&gt;require('dotenv').config()&lt;/code&gt;. No setup. It just works.&lt;/p&gt;
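
&lt;p&gt;One thing Bun will not do for you is complain when a variable is missing: &lt;code&gt;process.env.DISCORD_TOKEN&lt;/code&gt; is simply &lt;code&gt;undefined&lt;/code&gt;, and &lt;code&gt;client.login()&lt;/code&gt; fails later with a less obvious error. A small guard like this, my own addition rather than part of the bot code in this guide, fails fast with a clear message instead:&lt;/p&gt;

```typescript
// Hypothetical helper (not from the bot code above): fail fast at startup
// instead of passing an undefined token to client.login().
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage at the top of src/index.ts:
// const token = requireEnv('DISCORD_TOKEN');
```

&lt;p&gt;The error now points at the exact variable you forgot, which is a much better debugging experience than a failed login.&lt;/p&gt;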




&lt;h2&gt;
  
  
  Step 4: Extend the Discord.js Client Type
&lt;/h2&gt;

&lt;p&gt;Discord.js ships a &lt;code&gt;Client&lt;/code&gt; class. We are going to attach a &lt;code&gt;commands&lt;/code&gt; collection to it. TypeScript needs to know about this addition, so create &lt;code&gt;src/types.d.ts&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;
&lt;span class="kr"&gt;interface&lt;/span&gt; &lt;span class="nx"&gt;Command&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nl"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;SlashCommandBuilder&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;interaction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;ChatInputCommandInteraction&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="k"&gt;void&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kr"&gt;declare&lt;/span&gt; &lt;span class="kr"&gt;module&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;discord.js&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kr"&gt;interface&lt;/span&gt; &lt;span class="nx"&gt;Client&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nl"&gt;commands&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Collection&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;Command&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This declaration merges with discord.js's own types. Anywhere you access &lt;code&gt;client.commands&lt;/code&gt; in TypeScript, the compiler now knows what shape it has.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 5: Create the Bot Entry Point
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;src/index.ts&lt;/code&gt; is where everything starts:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;intents&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="nx"&gt;GatewayIntentBits&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Guilds&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;GatewayIntentBits&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;GuildMessages&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;GatewayIntentBits&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;MessageContent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;commands&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Collection&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="c1"&gt;// Load commands dynamically from the commands directory&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;commandsPath&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;import&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;meta&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;dir&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;commands&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;commandFiles&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;readdirSync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;commandsPath&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;f&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;f&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;endsWith&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;.ts&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;

&lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;file&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;commandFiles&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;command&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;import&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;commandsPath&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
  &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;commands&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;command&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;command&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// Load events dynamically from the events directory&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;eventsPath&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;import&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;meta&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;dir&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;events&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;eventFiles&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;readdirSync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;eventsPath&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;f&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;f&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;endsWith&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;.ts&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;

&lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;file&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;eventFiles&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;import&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;eventsPath&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;once&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;once&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(...&lt;/span&gt;&lt;span class="nx"&gt;args&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;unknown&lt;/span&gt;&lt;span class="p"&gt;[])&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(...&lt;/span&gt;&lt;span class="nx"&gt;args&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(...&lt;/span&gt;&lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;unknown&lt;/span&gt;&lt;span class="p"&gt;[])&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(...&lt;/span&gt;&lt;span class="nx"&gt;args&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;login&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DISCORD_TOKEN&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Two things worth noting here. First, &lt;code&gt;import.meta.dir&lt;/code&gt; is Bun's equivalent of Node's &lt;code&gt;__dirname&lt;/code&gt;. It gives you the directory of the current file. Second, the &lt;code&gt;await import()&lt;/code&gt; calls at the top level work because Bun supports top-level &lt;code&gt;await&lt;/code&gt; in ES modules natively, as standardized in ES2022.&lt;/p&gt;

&lt;p&gt;The dynamic loading pattern means you never have to register new commands manually. Drop a &lt;code&gt;.ts&lt;/code&gt; file in the commands folder, and it loads automatically on next start.&lt;/p&gt;
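
&lt;p&gt;That convenience rests on an assumption: every file in the folder exports a &lt;code&gt;data&lt;/code&gt; object with a &lt;code&gt;name&lt;/code&gt; and an &lt;code&gt;execute&lt;/code&gt; function. If you want the loader to skip a malformed file instead of registering a broken entry, a runtime check along these lines works. This is a hypothetical addition, not part of the loader above:&lt;/p&gt;

```typescript
// Hypothetical guard: returns true only when a dynamically imported module
// has the shape the command loader expects.
function isCommandModule(mod: unknown): boolean {
  if (typeof mod !== 'object' || mod === null) return false;
  const m = mod as { data?: { name?: unknown }; execute?: unknown };
  if (typeof m.data?.name !== 'string') return false;
  return typeof m.execute === 'function';
}
```

&lt;p&gt;In the loader, only register a module when &lt;code&gt;isCommandModule(command)&lt;/code&gt; returns true, and log a warning otherwise, so one bad file does not crash the whole bot at startup.&lt;/p&gt;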




&lt;h2&gt;
  
  
  Step 6: Add Event Handlers
&lt;/h2&gt;

&lt;p&gt;Create &lt;code&gt;src/events/ready.ts&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;Events&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ClientReady&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;once&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Client&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Online as &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;tag&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Serving &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;guilds&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;size&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; server(s)`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;once: true&lt;/code&gt; export tells our loader to use &lt;code&gt;client.once&lt;/code&gt; instead of &lt;code&gt;client.on&lt;/code&gt;, so this handler fires exactly once when the bot connects.&lt;/p&gt;
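
&lt;p&gt;Under the hood, the discord.js &lt;code&gt;Client&lt;/code&gt; is a standard Node event emitter, so the &lt;code&gt;on&lt;/code&gt; versus &lt;code&gt;once&lt;/code&gt; distinction is exactly the one from &lt;code&gt;EventEmitter&lt;/code&gt;. A standalone sketch of the difference, with no discord.js involved:&lt;/p&gt;

```typescript
import { EventEmitter } from 'node:events';

const emitter = new EventEmitter();
let onCount = 0;
let onceCount = 0;

// .on keeps firing for every emit; .once detaches itself after the first.
emitter.on('ready', () => { onCount += 1; });
emitter.once('ready', () => { onceCount += 1; });

emitter.emit('ready');
emitter.emit('ready');
// onCount is now 2, onceCount is 1
```

&lt;p&gt;That is why &lt;code&gt;ready&lt;/code&gt; belongs on &lt;code&gt;once&lt;/code&gt;: reconnects aside, you only want your startup logging to run a single time.&lt;/p&gt;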

&lt;p&gt;Create &lt;code&gt;src/events/interactionCreate.ts&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;Events&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;InteractionCreate&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;once&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;interaction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Interaction&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;interaction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;isChatInputCommand&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;command&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;interaction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;commands&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;interaction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;commandName&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;command&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`No command found for: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;interaction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;commandName&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;command&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;interaction&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Error executing &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;interaction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;commandName&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;:`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;errorMessage&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Something went wrong running that command.&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;ephemeral&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;

    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;interaction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;replied&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nx"&gt;interaction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;deferred&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;interaction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;followUp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;errorMessage&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;interaction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;reply&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;errorMessage&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The error handling here covers the cases where an interaction has already been replied to or deferred. If you skip that check and try to call &lt;code&gt;reply()&lt;/code&gt; on an already-replied interaction, Discord throws and your bot process crashes.&lt;/p&gt;
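
&lt;p&gt;The rule behind that branch: once an interaction has been acknowledged in any way, replied to or deferred, only &lt;code&gt;followUp()&lt;/code&gt; is valid; otherwise &lt;code&gt;reply()&lt;/code&gt; still is. Reduced to a pure function, as a sketch of the decision rather than code from the handler above:&lt;/p&gt;

```typescript
// Mirrors the replied/deferred branch: which response method is safe to call?
function pickResponder(replied: boolean, deferred: boolean): 'followUp' | 'reply' {
  return replied || deferred ? 'followUp' : 'reply';
}
```

&lt;p&gt;Keeping this decision in one place matters once you add commands that call &lt;code&gt;deferReply()&lt;/code&gt; for long-running work, because those interactions count as acknowledged too.&lt;/p&gt;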




&lt;h2&gt;
  
  
  Step 7: Build Your First Slash Commands
&lt;/h2&gt;

&lt;p&gt;Create &lt;code&gt;src/commands/ping.ts&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;SlashCommandBuilder&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setName&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;ping&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setDescription&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Check how fast the bot is responding&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;interaction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;ChatInputCommandInteraction&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;sent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;interaction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;reply&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Measuring...&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;fetchReply&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;roundtrip&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;sent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;createdTimestamp&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;interaction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;createdTimestamp&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;ws&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;round&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;interaction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ws&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ping&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;interaction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;editReply&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Pong. Round-trip: **&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;roundtrip&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;ms** | WebSocket: **&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;ws&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;ms**`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create &lt;code&gt;src/commands/userinfo.ts&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;SlashCommandBuilder&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setName&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;userinfo&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setDescription&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Get information about a user&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addUserOption&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;option&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;
    &lt;span class="nx"&gt;option&lt;/span&gt;
      &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setName&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;user&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setDescription&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;The user to look up (defaults to you)&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setRequired&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;interaction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;ChatInputCommandInteraction&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;targetUser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;interaction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;user&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;??&lt;/span&gt; &lt;span class="nx"&gt;interaction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;member&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;interaction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;guild&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;members&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;targetUser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="k"&gt;catch&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;joinedAt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;member&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;joinedAt&lt;/span&gt;
    &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="s2"&gt;`&amp;lt;t:&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;floor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;member&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;joinedAt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getTime&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;)}&lt;/span&gt;&lt;span class="s2"&gt;:D&amp;gt;`&lt;/span&gt;
    &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Unknown&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;createdAt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`&amp;lt;t:&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;floor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;targetUser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;createdTimestamp&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;)}&lt;/span&gt;&lt;span class="s2"&gt;:D&amp;gt;`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;roles&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;member&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;roles&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cache&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;r&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@everyone&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;r&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toString&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;, &lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;None&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;embed&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;EmbedBuilder&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setAuthor&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;targetUser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;tag&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;iconURL&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;targetUser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;displayAvatarURL&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setThumbnail&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;targetUser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;displayAvatarURL&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;256&lt;/span&gt; &lt;span class="p"&gt;}))&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addFields&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;User ID&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;targetUser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;inline&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Account Created&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;createdAt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;inline&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Joined Server&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;joinedAt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;inline&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Roles&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;roles&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setColor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mh"&gt;0x5865F2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setTimestamp&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setFooter&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`Requested by &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;interaction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;tag&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;interaction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;reply&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;embeds&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;embed&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;&amp;lt;t:timestamp:D&amp;gt;&lt;/code&gt; syntax is Discord's built-in timestamp formatting. It renders the date in the user's own timezone automatically, which is a much better experience than hardcoding a date format.&lt;/p&gt;
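&lt;p&gt;As a quick illustration, the tag is just the Unix timestamp in seconds wrapped in &lt;code&gt;&amp;lt;t:...:style&amp;gt;&lt;/code&gt;. The &lt;code&gt;discordTimestamp&lt;/code&gt; helper below is a local convenience for this example, not a discord.js function:&lt;/p&gt;

```typescript
// Build a Discord timestamp tag from a Date. Style 'D' is a long date,
// 'R' is relative ("3 days ago"), 'f' is short date/time.
// `discordTimestamp` is a hypothetical local helper, not a discord.js API.
function discordTimestamp(date: Date, style: 'D' | 'R' | 'f' = 'D'): string {
  return `<t:${Math.floor(date.getTime() / 1000)}:${style}>`;
}

// 2026-01-01T00:00:00Z renders as a long date in each viewer's timezone:
discordTimestamp(new Date(Date.UTC(2026, 0, 1))); // → '<t:1767225600:D>'
```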




&lt;h2&gt;
  
  
  Step 8: Add a More Useful Command
&lt;/h2&gt;

&lt;p&gt;Create &lt;code&gt;src/commands/serverstats.ts&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;SlashCommandBuilder&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setName&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;serverstats&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setDescription&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Show statistics for this server&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;interaction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;ChatInputCommandInteraction&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;guild&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;interaction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;guild&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;guild&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;interaction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;reply&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;This command can only be used in a server.&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;ephemeral&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;guild&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;totalMembers&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;guild&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;memberCount&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;onlineMembers&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;guild&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;members&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="nx"&gt;m&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;m&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;presence&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;offline&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;m&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;presence&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="kc"&gt;undefined&lt;/span&gt;
  &lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;size&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;textChannels&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;guild&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;channels&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;c&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;type&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;size&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;voiceChannels&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;guild&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;channels&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;c&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;type&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;size&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;roles&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;guild&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;roles&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;size&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// subtract @everyone&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;boostLevel&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;guild&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;premiumTier&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;boostCount&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;guild&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;premiumSubscriptionCount&lt;/span&gt; &lt;span class="o"&gt;??&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;createdAt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`&amp;lt;t:&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;floor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;guild&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;createdTimestamp&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;)}&lt;/span&gt;&lt;span class="s2"&gt;:D&amp;gt;`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;embed&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;EmbedBuilder&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setTitle&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;guild&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setThumbnail&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;guild&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;iconURL&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;256&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;??&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addFields&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Members&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;totalMembers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toLocaleString&lt;/span&gt;&lt;span class="p"&gt;()}&lt;/span&gt;&lt;span class="s2"&gt; total`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;inline&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Channels&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;textChannels&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; text, &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;voiceChannels&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; voice`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;inline&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Roles&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;roles&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;inline&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Server Boosts&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`Level &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;boostLevel&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; (&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;boostCount&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; boosts)`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;inline&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Created&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;createdAt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;inline&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setColor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mh"&gt;0x57F287&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setTimestamp&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;interaction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;reply&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;embeds&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;embed&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
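&lt;p&gt;The magic numbers in the channel filters (0 for text, 2 for voice) are Discord's raw API channel types; discord.js also exposes them as the &lt;code&gt;ChannelType&lt;/code&gt; enum (&lt;code&gt;ChannelType.GuildText&lt;/code&gt;, &lt;code&gt;ChannelType.GuildVoice&lt;/code&gt;), which reads better. Here is a self-contained sketch of the same counting logic, with the enum values inlined as local constants so the snippet runs on its own:&lt;/p&gt;

```typescript
// Discord API channel type values. In a real bot, prefer importing
// ChannelType from 'discord.js' instead of hardcoding these numbers.
const GuildText = 0;
const GuildVoice = 2;

type MinimalChannel = { type: number };

// Count text and voice channels the same way the serverstats command does.
function countChannels(channels: MinimalChannel[]) {
  return {
    text: channels.filter(c => c.type === GuildText).length,
    voice: channels.filter(c => c.type === GuildVoice).length,
  };
}

countChannels([{ type: 0 }, { type: 0 }, { type: 2 }]); // → { text: 2, voice: 1 }
```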






&lt;h2&gt;
  
  
  Step 9: Register Slash Commands with Discord
&lt;/h2&gt;

&lt;p&gt;Slash commands have to be registered with Discord's API before they appear in the UI. Create &lt;code&gt;src/utils/deploy-commands.ts&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;commands&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;object&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[];&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;commandsPath&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;import&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;meta&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;dir&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;..&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;commands&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;commandFiles&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;readdirSync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;commandsPath&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;f&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;f&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;endsWith&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;.ts&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;

&lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;file&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;commandFiles&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;command&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;import&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;commandsPath&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
  &lt;span class="nx"&gt;commands&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;push&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;command&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toJSON&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;rest&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;REST&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;setToken&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DISCORD_TOKEN&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Registering &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;commands&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; slash command(s)...`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Guild commands update instantly. Use these during development.&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;rest&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;put&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="nx"&gt;Routes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;applicationGuildCommands&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;CLIENT_ID&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;GUILD_ID&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;commands&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Done. Commands are live in your test server.&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run this once:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;bun run src/utils/deploy-commands.ts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Guild-scoped commands (the kind registered to a specific server ID) go live instantly. Global commands, registered with &lt;code&gt;Routes.applicationCommands(clientId)&lt;/code&gt; instead, take up to an hour to propagate. Use guild commands during development, switch to global when you deploy.&lt;/p&gt;
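&lt;p&gt;To make the difference concrete, here is a small sketch of the two REST routes involved. The helpers below rebuild the path shapes that &lt;code&gt;Routes.applicationGuildCommands&lt;/code&gt; and &lt;code&gt;Routes.applicationCommands&lt;/code&gt; produce; the IDs and the &lt;code&gt;NODE_ENV&lt;/code&gt; gate are placeholders, not part of the original script:&lt;/p&gt;

```typescript
// Sketch: the only difference between guild and global registration is the route.
// These mirror the paths the discord.js Routes helpers generate (IDs are placeholders).
const guildCommandsRoute = (clientId: string, guildId: string) =>
  `/applications/${clientId}/guilds/${guildId}/commands`;

const globalCommandsRoute = (clientId: string) =>
  `/applications/${clientId}/commands`;

// A common pattern: pick the route from the environment so one deploy
// script serves both development and production.
const isProd = process.env.NODE_ENV === 'production';
const route = isProd
  ? globalCommandsRoute('1234567890')
  : guildCommandsRoute('1234567890', '0987654321');
console.log(route);
```

&lt;p&gt;In &lt;code&gt;deploy-commands.ts&lt;/code&gt; itself, that means swapping &lt;code&gt;Routes.applicationGuildCommands(clientId, guildId)&lt;/code&gt; for &lt;code&gt;Routes.applicationCommands(clientId)&lt;/code&gt; when you cut a production release.&lt;/p&gt;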




&lt;h2&gt;
  
  
  Step 10: Add Scripts and Run
&lt;/h2&gt;

&lt;p&gt;Update &lt;code&gt;package.json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"scripts"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"dev"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"bun --watch src/index.ts"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"start"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"bun run src/index.ts"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"deploy"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"bun run src/utils/deploy-commands.ts"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;bun --watch&lt;/code&gt; restarts the process automatically when any file changes. No nodemon, no extra package, built in.&lt;/p&gt;

&lt;p&gt;Start the bot:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;bun run dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see your ready event log in the terminal within a second or two. Go to your test server and type &lt;code&gt;/&lt;/code&gt; to see your slash commands listed. Run &lt;code&gt;/ping&lt;/code&gt;, &lt;code&gt;/userinfo&lt;/code&gt;, and &lt;code&gt;/serverstats&lt;/code&gt; to verify everything works.&lt;/p&gt;

&lt;p&gt;On my machine the bot is online about 12ms after the Bun process starts. The same bot on a Node.js and ts-node setup takes over a second to start. That difference matters more than it sounds, because you pay it on every restart during development. Over a day of active development, those seconds add up to minutes.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 11: Deployment
&lt;/h2&gt;

&lt;p&gt;When you are ready to run this somewhere permanently, you have two good options.&lt;/p&gt;

&lt;h3&gt;
  
  
  VPS with PM2
&lt;/h3&gt;

&lt;p&gt;If you have a Linux VPS (DigitalOcean, Hetzner, Vultr), install Bun on it and use PM2 to keep the process running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# On your VPS&lt;/span&gt;
git clone https://github.com/you/discord-bot.git
&lt;span class="nb"&gt;cd &lt;/span&gt;discord-bot
bun &lt;span class="nb"&gt;install
&lt;/span&gt;bun run deploy

&lt;span class="c"&gt;# Install PM2 globally via bun&lt;/span&gt;
bun add &lt;span class="nt"&gt;-g&lt;/span&gt; pm2

&lt;span class="c"&gt;# Start the bot under PM2&lt;/span&gt;
pm2 start src/index.ts &lt;span class="nt"&gt;--interpreter&lt;/span&gt; bun &lt;span class="nt"&gt;--name&lt;/span&gt; discord-bot
pm2 save
pm2 startup
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;PM2 will restart the bot automatically if it crashes and will survive server reboots after you run &lt;code&gt;pm2 startup&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Docker
&lt;/h3&gt;

&lt;p&gt;If you prefer containers, here is a minimal Dockerfile:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; oven/bun:1&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; package.json bun.lock ./&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;bun &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--frozen-lockfile&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["bun", "run", "src/index.ts"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Build and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker build &lt;span class="nt"&gt;-t&lt;/span&gt; discord-bot &lt;span class="nb"&gt;.&lt;/span&gt;
docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--env-file&lt;/span&gt; .env &lt;span class="nt"&gt;--name&lt;/span&gt; discord-bot discord-bot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The official &lt;code&gt;oven/bun&lt;/code&gt; Docker image is well-maintained and small. The &lt;code&gt;--frozen-lockfile&lt;/code&gt; flag ensures the install matches exactly what you tested locally.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Bun Instead of Node.js
&lt;/h2&gt;

&lt;p&gt;To be concrete about what you are actually getting:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Native TypeScript.&lt;/strong&gt; You write &lt;code&gt;.ts&lt;/code&gt; files and run them directly. No &lt;code&gt;ts-node&lt;/code&gt;, no &lt;code&gt;@babel/register&lt;/code&gt;, no build step that produces a &lt;code&gt;dist/&lt;/code&gt; folder you have to remember to rebuild. The DX improvement is real and consistent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No dotenv.&lt;/strong&gt; Bun reads &lt;code&gt;.env&lt;/code&gt; automatically. Remove one package from your dependencies, remove the setup call from your entry point, and stop wondering whether it loaded before the code that needs it ran.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Speed that actually matters in dev.&lt;/strong&gt; The 12ms startup versus 1,000ms+ for ts-node is the kind of difference you notice when you are restarting the bot ten times during a debugging session.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;bun --watch&lt;/code&gt; built in.&lt;/strong&gt; Hot reloading with no extra packages and no configuration file.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Faster installs.&lt;/strong&gt; &lt;code&gt;bun install&lt;/code&gt; is between 6x and 20x faster than &lt;code&gt;npm install&lt;/code&gt; in my experience. If you have a CI pipeline, this compounds quickly.&lt;/p&gt;




&lt;h2&gt;
  
  
  Common Mistakes to Avoid
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Forgetting to enable Gateway Intents.&lt;/strong&gt; Intents are global to your bot, not per-server. If message events never arrive, the usual cause is that Message Content Intent is not enabled in the Developer Portal, or that your client's &lt;code&gt;intents&lt;/code&gt; array does not request it. Both have to match, or Discord silently withholds the events.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deploying guild commands to the wrong guild.&lt;/strong&gt; Your &lt;code&gt;GUILD_ID&lt;/code&gt; has to match the server you added the bot to. Test server IDs and production server IDs are different.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Committing your &lt;code&gt;.env&lt;/code&gt; file.&lt;/strong&gt; Your bot token is a secret. Anyone who has it can log in as your bot, join any server it is in, and do whatever permissions allow. Confirm &lt;code&gt;.env&lt;/code&gt; is in your &lt;code&gt;.gitignore&lt;/code&gt; before your first commit.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Using global commands during development.&lt;/strong&gt; They take up to an hour to update. You will waste a lot of time wondering why your command changes are not showing up. Always use guild commands during development.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Not handling deferred interactions.&lt;/strong&gt; If your command does anything async that might take more than three seconds, use &lt;code&gt;interaction.deferReply()&lt;/code&gt; first. Discord will show a loading indicator and give you fifteen minutes to respond instead of three seconds.&lt;/p&gt;
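&lt;p&gt;A sketch of that pattern: the wrapper below defers automatically if a handler outlives a safety margin inside the 3-second window. The &lt;code&gt;Replyable&lt;/code&gt; type and the &lt;code&gt;replyWithin&lt;/code&gt; helper are illustrative stand-ins for the discord.js interaction surface, not library APIs:&lt;/p&gt;

```typescript
// Sketch: auto-defer if a handler outlives a safety margin inside Discord's
// 3-second window. `Replyable` is a minimal stand-in for the parts of a
// discord.js ChatInputCommandInteraction this pattern touches.
type Replyable = {
  deferReply: () => Promise<unknown>;
  reply: (content: string) => Promise<unknown>;
  editReply: (content: string) => Promise<unknown>;
};

async function replyWithin(
  interaction: Replyable,
  work: () => Promise<string>,
  marginMs = 2500, // leave headroom below the 3-second deadline
): Promise<void> {
  let deferred = false;
  const timer = setTimeout(() => {
    deferred = true;
    void interaction.deferReply(); // switches Discord to the 15-minute window
  }, marginMs);
  const content = await work();
  clearTimeout(timer);
  // A deferred interaction must be answered with editReply, not reply.
  if (deferred) await interaction.editReply(content);
  else await interaction.reply(content);
}
```

&lt;p&gt;Fast commands still get a normal reply; only the slow ones pay the deferral, and you never hit the "This interaction failed" timeout.&lt;/p&gt;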




&lt;h2&gt;
  
  
  What to Build Next
&lt;/h2&gt;

&lt;p&gt;Now that you have the scaffold, the extension points are straightforward.&lt;/p&gt;

&lt;p&gt;Adding a &lt;strong&gt;database&lt;/strong&gt; is the most natural next step for anything beyond toy commands. Prisma works with Bun and gives you a type-safe ORM with minimal boilerplate. Connect it to a Neon serverless Postgres database and you have a persistent, scalable backend for your bot without managing infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI commands&lt;/strong&gt; are increasingly common in Discord bots. With the Anthropic SDK available on npm, you can add a &lt;code&gt;/ask&lt;/code&gt; command that routes questions to Claude in a handful of lines. The bot token plus a Claude API key and you have a genuinely useful AI assistant living inside your Discord server.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rate limiting&lt;/strong&gt; becomes important once your bot is in more than a few servers. Discord enforces rate limits on API calls, and you want to handle them gracefully rather than having commands fail silently. The discord.js client handles basic rate limiting automatically, but you should add application-level limits for commands that call external APIs.&lt;/p&gt;
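&lt;p&gt;An application-level limit can be as small as a per-user cooldown map. This is a sketch; the function name, key format, and window length are illustrative, not from any library:&lt;/p&gt;

```typescript
// Sketch: a per-user, per-command cooldown for commands that hit external APIs.
const cooldowns = new Map<string, number>();

function allowCommand(
  userId: string,
  command: string,
  windowMs: number,
  now: number = Date.now(),
): boolean {
  const key = `${userId}:${command}`;
  const last = cooldowns.get(key);
  if (last !== undefined && now - last < windowMs) {
    return false; // still cooling down; tell the user to wait
  }
  cooldowns.set(key, now);
  return true;
}
```

&lt;p&gt;Call it at the top of a command's &lt;code&gt;execute&lt;/code&gt; function and bail out with an ephemeral reply when it returns false, before any external API call is made.&lt;/p&gt;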

&lt;p&gt;The scaffold you have built here handles all of these extensions cleanly. The dynamic command loading means adding features is a matter of dropping new files, not modifying the core bot logic.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Bun Advantage in Practice
&lt;/h2&gt;

&lt;p&gt;I have been building small Discord bots for a few years. The jump from a Node.js TypeScript setup to a Bun TypeScript setup is one of the more pleasant DX improvements I have made recently.&lt;/p&gt;

&lt;p&gt;The configuration overhead that used to be table stakes (ts-node or tsx, dotenv, nodemon, a tsconfig that actually worked) is gone. You write TypeScript, you run it, it works. The developer loop is tighter because the tooling gets out of the way.&lt;/p&gt;

&lt;p&gt;Discord.js v14 on Bun is stable and well-documented. The Bun team maintains official guides for it. There are community projects with thousands of stars running on this stack. It is not experimental.&lt;/p&gt;

&lt;p&gt;If you are building a Discord bot in 2026, this is the setup I recommend without hesitation.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>AI-Generated Code Is a Security Liability: What Every Developer Needs to Know in 2026</title>
      <dc:creator>Alex Cloudstar</dc:creator>
      <pubDate>Fri, 27 Mar 2026 08:05:43 +0000</pubDate>
      <link>https://forem.com/alexcloudstar/ai-generated-code-is-a-security-liability-what-every-developer-needs-to-know-in-2026-4da5</link>
      <guid>https://forem.com/alexcloudstar/ai-generated-code-is-a-security-liability-what-every-developer-needs-to-know-in-2026-4da5</guid>
      <description>&lt;p&gt;I shipped a production bug last month that I would have caught in 30 seconds if I had actually read the code. But I did not read it. Claude wrote it, the tests passed, and I merged it. The feature worked perfectly. The bug was not in the feature logic. It was in an endpoint I had asked Claude to create alongside it, an endpoint with no authentication check, sitting quietly in my codebase for nine days before a security scan caught it.&lt;/p&gt;

&lt;p&gt;I am not sharing this to be dramatic. I am sharing it because I suspect you have a similar story, or you will soon.&lt;/p&gt;

&lt;p&gt;Here is the situation in 2026: 85% of professional developers use AI coding tools, and more than half use them daily. Vibe coding is not a fringe experiment anymore. It is how code gets written. AI tools write entire features, not just autocomplete. And the research on what that means for security is genuinely alarming.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Numbers You Need to Know
&lt;/h2&gt;

&lt;p&gt;Veracode published their Spring 2026 GenAI Code Security update after testing over 150 large language models, the most comprehensive longitudinal study of its kind. The headline finding is this: across all models and all tasks, only 55% of code generation tasks result in secure code. In 45% of cases, the model introduces a known security flaw.&lt;/p&gt;

&lt;p&gt;Sit with that for a second. Nearly one in two AI-generated code outputs has a security problem. And this number has not improved in two years despite enormous advances in everything else these models do.&lt;/p&gt;

&lt;p&gt;Here is the part that makes it worse. AI models now exceed 95% syntax correctness. They almost never write code that fails to run. The gap between "code that works" and "code that works securely" is not closing. It is widening. Models are getting better at producing functional code faster, which means developers are reviewing less of it, which means more vulnerable code is reaching production.&lt;/p&gt;

&lt;p&gt;Even the best performers in Veracode's testing, the GPT-5 series with extended reasoning enabled, only hit 70 to 72% security pass rates. That is the ceiling right now. The highest-performing category in the most comprehensive study available still means roughly one in three outputs has a vulnerability.&lt;/p&gt;

&lt;p&gt;CodeRabbit did a separate analysis of 470 pull requests that mixed AI-generated and human-written code. Their finding: AI code has 1.7x more major issues overall and 2.74x higher security vulnerability rates compared to human-written code for equivalent tasks.&lt;/p&gt;

&lt;p&gt;And from the academic side, SWE-Agent with Claude 4 Sonnet solved 61% of tasks functionally correctly. Only 10.5% of those solutions were also secure.&lt;/p&gt;

&lt;p&gt;These numbers are not coming from critics of AI development. They are coming from organizations that actively use and support AI coding tools. The point is not to stop using AI. The point is to understand what it is not good at yet, and security is near the top of that list.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Most Common Vulnerabilities
&lt;/h2&gt;

&lt;p&gt;The same categories of security flaws show up across every model, every language, and every study. Knowing what to look for is half the battle.&lt;/p&gt;

&lt;h3&gt;
  
  
  Missing Input Sanitization
&lt;/h3&gt;

&lt;p&gt;This is the most common flaw identified across all languages and models in Veracode's testing. AI generates route handlers, form processors, and API endpoints that handle user input without sanitizing it. The code works. You can test it with clean inputs all day. It only breaks when someone sends it something it was not expecting, and attackers send things it was not expecting.&lt;/p&gt;

&lt;p&gt;A typical example from an Express route:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/search&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;query&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;raw&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`SELECT * FROM posts WHERE title LIKE '%&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;query&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;%'`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;results&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is SQL injection waiting to happen. AI generated this because it technically answers the question asked. The developer did not sanitize &lt;code&gt;query&lt;/code&gt; in the prompt, so the model did not sanitize it in the output. It mirrored the incomplete specification back perfectly.&lt;/p&gt;
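&lt;p&gt;To see why, build the same string with attacker-controlled input. This is just an illustration of the interpolation from the route above, with a classic payload:&lt;/p&gt;

```typescript
// Illustration: what the interpolated query from the route above becomes
// when the attacker controls `query`.
const buildQuery = (query: string) =>
  `SELECT * FROM posts WHERE title LIKE '%${query}%'`;

const malicious = `%'; DROP TABLE posts; --`;
console.log(buildQuery(malicious));
// The quote closes the LIKE literal early, and everything after it is
// interpreted as SQL rather than as a search term.
```

&lt;p&gt;With a parameterized query, the driver treats the whole input as data, never as SQL, and the payload is just a weird search string.&lt;/p&gt;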

&lt;h3&gt;
  
  
  Hardcoded Credentials
&lt;/h3&gt;

&lt;p&gt;AI systems produce example code. Example code has example values. The problem is that "example" API keys and connection strings look real enough that developers commit them, especially when they are generated in the middle of a larger feature and get lost in the noise.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Common pattern AI generates during scaffolding&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;stripe&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Stripe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;sk_live_4eC39HqLyjWDarjtT1zdp7dc&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Pool&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;connectionString&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;postgresql://admin:password123@localhost/myapp&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;GitHub's secret scanning catches some of these but not all, especially when developers are moving fast and not thinking about what the AI just dropped into their file.&lt;/p&gt;
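&lt;p&gt;One defensive habit that catches this class of mistake: read every secret from the environment and fail fast at startup. A minimal sketch, with illustrative names:&lt;/p&gt;

```typescript
// Sketch: require secrets from the environment so a missing or hardcoded
// value is caught at boot. Variable names here are illustrative.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Crashes at boot with a clear message instead of shipping a hardcoded key:
// const stripe = new Stripe(requireEnv('STRIPE_SECRET_KEY'));
```

&lt;p&gt;If the AI drops a literal key into a diff, the pattern makes it stand out: any constructor that does not go through &lt;code&gt;requireEnv&lt;/code&gt; is suspect.&lt;/p&gt;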

&lt;h3&gt;
  
  
  Over-Permissive Defaults
&lt;/h3&gt;

&lt;p&gt;AI defaults to configurations that work, not configurations that are restrictive. When you ask it to create an IAM role, an S3 bucket policy, or a database user, it gives you something that functions correctly for the task at hand. It does not think about least-privilege access.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Statement"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"*"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"*"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is a real IAM policy pattern that AI generates when asked to "make a role that can access my bucket." It works. It also gives that role access to everything in your AWS account.&lt;/p&gt;

&lt;h3&gt;
  
  
  Hallucinated Dependencies
&lt;/h3&gt;

&lt;p&gt;This one is particularly insidious. AI models suggest packages that do not exist. They generate import statements for libraries that sound plausible but were never published to npm, PyPI, or any registry. If a developer runs &lt;code&gt;bun add&lt;/code&gt; or &lt;code&gt;npm install&lt;/code&gt; on a hallucinated package name and an attacker has registered that name, the developer just installed malware.&lt;/p&gt;

&lt;p&gt;This is a cousin of dependency confusion and typosquatting attacks (researchers have started calling the hallucinated-name variant slopsquatting), and AI-generated code has made the attack surface significantly larger. You used to have to mistype a package name. Now you can import a perfectly spelled package that simply does not exist as a legitimate library yet.&lt;/p&gt;

&lt;p&gt;Always verify every package before installing it. Check the npm page, the linked GitHub repository, and the weekly download count. If it has under 1,000 weekly downloads and no clear maintainer, be skeptical.&lt;/p&gt;

&lt;h3&gt;
  
  
  Incomplete Access Control
&lt;/h3&gt;

&lt;p&gt;AI implements business logic without consistently enforcing authorization. It builds the feature you described. It does not think about who should and should not be able to use it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/admin/users&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;users&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;SELECT * FROM users&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;users&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This endpoint works. It returns users. What it does not do is check whether the person calling it has admin privileges. The developer asked for an admin endpoint to list users. The model created an endpoint that lists users. The "admin" part was a naming hint, not a security enforcement.&lt;/p&gt;
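&lt;p&gt;The fix is to make authorization an explicit step that runs before the business logic. A framework-agnostic sketch; the &lt;code&gt;User&lt;/code&gt; shape and role names are illustrative, and in Express this logic usually lives in middleware:&lt;/p&gt;

```typescript
// Sketch: wrap handlers in an explicit role check so authorization is
// enforced in code, not implied by a route name. Types are illustrative.
type User = { id: string; role: 'admin' | 'member' };

function requireRole<T>(role: User['role'], handler: (user: User) => T) {
  return (user: User | undefined): T => {
    if (!user || user.role !== role) {
      throw new Error('Forbidden'); // reject before any business logic runs
    }
    return handler(user);
  };
}

// The query only runs for callers who actually hold the admin role.
const listUsers = requireRole('admin', () => ['alice', 'bob']);
```

&lt;p&gt;The point is structural: when the check is a wrapper or middleware, a missing &lt;code&gt;requireRole&lt;/code&gt; on a new endpoint is visible in review, instead of being an absence nobody notices.&lt;/p&gt;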

&lt;h3&gt;
  
  
  SQL Injection Through String Concatenation
&lt;/h3&gt;

&lt;p&gt;Despite parameterized queries being the established solution for decades, AI still generates raw SQL string interpolation with frustrating regularity, especially in dynamic query scenarios.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// AI-generated, dangerous&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;getUserPosts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`SELECT * FROM posts WHERE user_id = &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; AND status = '&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;'`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="c1"&gt;// What it should look like&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;getUserPosts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;SELECT * FROM posts WHERE user_id = $1 AND status = $2&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first version is one curl command away from a full database dump.&lt;/p&gt;




&lt;h2&gt;
  
  
  Real Incidents: When This Goes Wrong in Production
&lt;/h2&gt;

&lt;p&gt;These are not theoretical risks. They are documented incidents from 2025 and 2026 where AI-generated code created real security breaches.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lovable, CVE-2025-48757.&lt;/strong&gt; A security researcher audited 1,645 apps built entirely with the Lovable AI builder. 170 of them (10.3%) had critical row-level security flaws. These were apps that users trusted with sensitive data, built by developers who shipped whatever the AI produced without auditing the database access patterns.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CurXecute, CVE-2025-54135.&lt;/strong&gt; A remote code execution vulnerability discovered in Cursor, the AI code editor itself. The flaw allowed attackers to execute arbitrary code on developers' machines with no user interaction required. The irony of an AI coding tool being the vector for RCE is not lost on anyone in the security community.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Moltbook incident.&lt;/strong&gt; Security firm Wiz identified a misconfigured database exposing 1.5 million authentication tokens, 35,000 email addresses, and private messages between users. The site in question was Moltbook, a social platform. The owner publicly stated he had not written a single line of code. The entire application was vibe coded. The root cause was a default configuration that any experienced developer would have caught, but when no experienced developer ever reads the code, it ships as-is.&lt;/p&gt;

&lt;p&gt;The CVE count tells the same story at scale. In January 2026, six new CVEs were directly traced to AI-generated code. In February, fifteen. In March, thirty-five. The curve is not flattening.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why AI Models Are Structurally Bad at Security
&lt;/h2&gt;

&lt;p&gt;Understanding why this happens makes it easier to compensate for it.&lt;/p&gt;

&lt;p&gt;AI models are optimized for functional correctness. The training signal rewards code that works. Security is orthogonal to correctness in most cases. A vulnerable SQL query runs fine. A hardcoded credential authenticates successfully. An over-permissive IAM role grants access without errors. Nothing breaks at the code level.&lt;/p&gt;

&lt;p&gt;Security requires adversarial thinking. It requires asking "how would someone abuse this?" Models are trained to be helpful. They produce what was asked for and assume good faith on the part of the caller. Adversarial reasoning is not in their training objective.&lt;/p&gt;

&lt;p&gt;Security also requires context that models do not have. What threat model is this application operating under? Who are the users and how much should they be trusted? What other systems does this endpoint connect to? What is the blast radius if this role is compromised? Models cannot answer these questions because they only see the code they are generating, not the system it lives in.&lt;/p&gt;

&lt;p&gt;The training data problem compounds everything. The corpus that these models trained on includes every insecure Stack Overflow answer, every tutorial that skips authentication for brevity, every open source project that cut corners on input validation. When a model has seen "works but insecure" patterns millions of times, it reproduces them.&lt;/p&gt;

&lt;p&gt;Finally, benchmarks do not measure security. Models are evaluated on HumanEval, SWE-bench, and MMLU. None of these measure whether the generated code is secure. So models get optimized for the benchmarks they are evaluated on, and security is not on the scorecard.&lt;/p&gt;




&lt;h2&gt;
  
  
  My Actual Review Process
&lt;/h2&gt;

&lt;p&gt;I changed how I review AI-generated code after the incident I opened with. Here is what I actually do now.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Read every line before merging.&lt;/strong&gt; This sounds obvious and I did not do it consistently before. If a function is too long to read carefully, it is too long to trust. I break large AI-generated outputs into smaller pieces and review each one before asking for the next.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Run SAST on every PR.&lt;/strong&gt; Static analysis tools like Semgrep, Snyk, and Veracode catch patterns your eyes miss. I have Semgrep running in my CI pipeline on every pull request. It catches injection patterns, hardcoded secrets, and unsafe deserialization before they reach the branch.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Verify every import.&lt;/strong&gt; Before &lt;code&gt;bun add&lt;/code&gt;-ing anything AI suggested, I check npm for the package. Does it exist? Does it have real downloads? Does it have a GitHub repository with commits? One hallucinated package can compromise your entire development environment.&lt;/p&gt;
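&lt;p&gt;As a sketch of what that check looks like in practice, here is a toy audit over package metadata. The &lt;code&gt;PackageSignals&lt;/code&gt; shape and the thresholds are my own illustrative assumptions, not npm's actual API:&lt;/p&gt;

```typescript
// Illustrative heuristic only -- the metadata shape and the thresholds
// below are assumptions for this sketch, not npm's actual API contract.
interface PackageSignals {
  exists: boolean;          // found in the registry at all
  weeklyDownloads: number;  // e.g. from the npm downloads endpoint
  hasRepository: boolean;   // package.json "repository" field present
  ageInDays: number;        // time since first publish
}

// Returns the reasons a suggested package looks risky; empty array = no red flags.
function auditPackage(signals: PackageSignals): string[] {
  const flags: string[] = [];
  if (!signals.exists) flags.push('package does not exist (possible hallucination)');
  if (signals.weeklyDownloads < 100) flags.push('almost no downloads');
  if (!signals.hasRepository) flags.push('no linked source repository');
  if (signals.ageInDays < 30) flags.push('published very recently');
  return flags;
}

// A hallucinated package fails every check outright.
const suspicious = auditPackage({ exists: false, weeklyDownloads: 0, hasRepository: false, ageInDays: 0 });
const healthy = auditPackage({ exists: true, weeklyDownloads: 250000, hasRepository: true, ageInDays: 1200 });
```

The exact thresholds matter less than the habit: any non-empty result means I go look at the package by hand before installing it.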

&lt;p&gt;&lt;strong&gt;Audit auth separately.&lt;/strong&gt; I make a specific pass through every AI-generated route handler focused only on authentication and authorization. Is there a session check? Is there a role check? Does the check happen before or after the database query? This is the category where AI is most consistently wrong.&lt;/p&gt;
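&lt;p&gt;The ordering I am looking for in that pass can be sketched in a few lines. The &lt;code&gt;Session&lt;/code&gt; and &lt;code&gt;Db&lt;/code&gt; types here are hypothetical stand-ins for whatever auth and data layer a project actually uses:&lt;/p&gt;

```typescript
// Sketch of the check ordering I review for; Session and Db are
// hypothetical stand-ins, not any specific framework's types.
type Session = { userId: string; role: 'admin' | 'user' } | null;

interface Db {
  getPostOwner(id: string): string;
  deletePost(id: string): void;
}

// Auth first, ownership second, query last. When an AI-generated handler
// runs the query before these checks, that is the finding.
function deletePostHandler(session: Session, postId: string, db: Db): 'ok' {
  if (!session) throw new Error('401: not authenticated');
  const owner = db.getPostOwner(postId);
  if (session.role !== 'admin' && owner !== session.userId) {
    throw new Error('403: not your post');
  }
  db.deletePost(postId);
  return 'ok';
}

// Stub data layer, for illustration only.
const deleted: string[] = [];
const stubDb: Db = {
  getPostOwner: () => 'user-1',
  deletePost: (id) => { deleted.push(id); },
};

const ownerResult = deletePostHandler({ userId: 'user-1', role: 'user' }, 'p1', stubDb);
```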

&lt;p&gt;&lt;strong&gt;Check environment variable handling.&lt;/strong&gt; Look for hardcoded fallbacks like &lt;code&gt;process.env.API_KEY || 'default_key_here'&lt;/code&gt;. Look for debug endpoints that expose internal state. Look for CORS configurations that default to &lt;code&gt;*&lt;/code&gt;.&lt;/p&gt;
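&lt;p&gt;A minimal fail-fast alternative to the hardcoded fallback looks like this. It is a sketch, not a library recommendation: the point is that a missing secret should crash at startup rather than silently authenticate with a dummy value:&lt;/p&gt;

```typescript
// Fail-fast replacement for `process.env.API_KEY || 'default_key_here'`.
// Reads a variable and throws if it is absent, so the process refuses
// to start half-configured instead of limping along with a dummy secret.
function requireEnv(env: Record<string, string | undefined>, name: string): string {
  const value = env[name];
  if (value === undefined || value === '') {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// In a real app this would be process.env, read once at boot.
const fakeEnv = { API_KEY: 'sk-test-123' };
const apiKey = requireEnv(fakeEnv, 'API_KEY');
```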

&lt;p&gt;&lt;strong&gt;Use AI to review AI.&lt;/strong&gt; Once I have a working implementation, I paste it back into a model and explicitly prompt for security review: "What security vulnerabilities exist in this code? Think like an attacker." The model that generated the code is not always the best reviewer, but a second model looking specifically for problems catches things the first pass missed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Keep a vulnerability log.&lt;/strong&gt; Every time I find a security issue in AI-generated code, I write it down. Patterns emerge. I know that Claude tends to miss rate limiting. I know that GPT tends to produce over-permissive CORS. Knowing your tools' specific blind spots makes you a better reviewer.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Junior Developer Problem
&lt;/h2&gt;

&lt;p&gt;There is a harder conversation underneath this one.&lt;/p&gt;

&lt;p&gt;Junior developers are the heaviest users of AI coding tools. They are also the least equipped to review code for security issues. When an AI tool gives a junior developer the output speed of a senior, it gives them the speed without the judgment.&lt;/p&gt;

&lt;p&gt;This is not a criticism of junior developers. The tooling is failing them. AI coding tools present generated code with enormous confidence. They do not flag uncertainty about security implications. They do not prompt developers to review the authentication logic. They just produce a clean, well-formatted, syntactically correct output and move on.&lt;/p&gt;

&lt;p&gt;The more dangerous version of this problem: organizations cutting senior engineering headcount because AI makes their junior teams "productive enough." The seniors who would have reviewed that code in a PR, who would have caught the missing auth check, who have the mental model of what can go wrong, are gone. The AI creates the productivity illusion. The security review capacity disappears quietly.&lt;/p&gt;

&lt;p&gt;If you are a senior engineer reading this, your security judgment is one of the things AI cannot replace right now. Make sure your organization understands that.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I Actually Want from These Tools
&lt;/h2&gt;

&lt;p&gt;The security gap is not inevitable. AI coding tools could address this directly if vendors prioritized it.&lt;/p&gt;

&lt;p&gt;What I want to see: built-in security scanning before code is presented to the developer. Not after you accept it. Before. If the model is about to generate an endpoint with no authentication check, it should flag that before I see the output.&lt;/p&gt;

&lt;p&gt;Threat model awareness based on project context. If I have described a multi-tenant SaaS, the model should know that row-level security is critical. If I have an admin panel, it should assume elevated risk on every generated endpoint.&lt;/p&gt;

&lt;p&gt;Automatic matching against known CVE patterns. The top 25 CWEs are well-documented. There is no technical reason AI tools cannot check generated code against these patterns before surfacing the output.&lt;/p&gt;
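&lt;p&gt;To illustrate how mechanical this matching can be, here is a toy single-rule scanner for the interpolated-SQL pattern from earlier. Real tools like Semgrep match on the syntax tree rather than regexes; this is only a sketch of the idea:&lt;/p&gt;

```typescript
// Toy single-rule scanner: flags template-literal SQL with interpolation
// (the CWE-89 shape shown earlier). Real SAST tools work on the AST;
// a regex is enough to show the check is mechanical.
const sqlInterpolation = /`[^`]*\b(SELECT|INSERT|UPDATE|DELETE)\b[^`]*\$\{[^}]+\}[^`]*`/i;

function flagSqlInjection(source: string): boolean {
  return sqlInterpolation.test(source);
}

const vulnerable = 'db.query(`SELECT * FROM posts WHERE user_id = ${userId}`);';
const safe = "db.query('SELECT * FROM posts WHERE user_id = $1', [userId]);";
```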

&lt;p&gt;Honest uncertainty. "This code is functional, but I am not confident it handles all edge cases in your auth flow" would be more valuable than a clean output that hides the gap.&lt;/p&gt;

&lt;p&gt;Default to restriction. Generate the most restrictive version of any configuration and let developers loosen it explicitly rather than generating permissive defaults.&lt;/p&gt;
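&lt;p&gt;As a concrete example of restrictive-by-default, here is an allowlist-based CORS origin check instead of &lt;code&gt;*&lt;/code&gt;. The origins are placeholders for whatever domains you actually serve:&lt;/p&gt;

```typescript
// Restrictive-by-default CORS: an explicit allowlist instead of '*'.
// The domains below are placeholders, not real deployments.
const allowedOrigins = new Set([
  'https://app.example.com',
  'https://admin.example.com',
]);

// Returns the value for the Access-Control-Allow-Origin header,
// or null, meaning: send no CORS header at all.
function corsOrigin(requestOrigin: string | undefined): string | null {
  if (!requestOrigin) return null;
  return allowedOrigins.has(requestOrigin) ? requestOrigin : null;
}
```

Loosening this means adding a line to the allowlist, which is exactly the kind of explicit decision a permissive default never forces you to make.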

&lt;p&gt;Some of these are starting to appear in tools like Cursor with their Security Rules feature and in Copilot's latest enterprise integrations. But they are not standard, not consistent, and not enough.&lt;/p&gt;




&lt;h2&gt;
  
  
  Where This Leaves Us
&lt;/h2&gt;

&lt;p&gt;AI coding tools are the most significant productivity improvement in software development since the IDE. I use them every single day and I am not going back to writing everything by hand.&lt;/p&gt;

&lt;p&gt;But the current reality is that AI tools are excellent at writing code that functions and unreliable at writing code that is secure. That gap is the developer's responsibility to close until the tools close it themselves.&lt;/p&gt;

&lt;p&gt;The developers who thrive in this environment are the ones who use AI for speed and bring their own security judgment to every output. They read the code. They run the scans. They think about who should not be able to call this endpoint. They treat AI-generated code the way you would treat code from a talented intern who has never thought about security before: grateful for the output, but obligated to review it carefully.&lt;/p&gt;

&lt;p&gt;That is not a criticism of the technology. It is a description of where the technology is right now. And knowing where it is means you can use it safely.&lt;/p&gt;

&lt;p&gt;The bug I shipped last month cost me two hours to find and fix. The bigger cost was what I learned from it: speed without review is not a productivity gain. It is a delayed liability.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>security</category>
      <category>vibecoding</category>
    </item>
    <item>
      <title>TanStack Start vs Next.js in 2026: Should You Actually Switch?</title>
      <dc:creator>Alex Cloudstar</dc:creator>
      <pubDate>Thu, 26 Mar 2026 10:04:33 +0000</pubDate>
      <link>https://forem.com/alexcloudstar/tanstack-start-vs-nextjs-in-2026-should-you-actually-switch-4b2l</link>
      <guid>https://forem.com/alexcloudstar/tanstack-start-vs-nextjs-in-2026-should-you-actually-switch-4b2l</guid>
      <description>&lt;p&gt;For the past four years, if someone asked me what full-stack React framework to use, the answer was Next.js without hesitation. It had the ecosystem, the docs, the deployment story, and the community. Recommending anything else felt like a contrarian take for the sake of being different.&lt;/p&gt;

&lt;p&gt;That changed in 2026.&lt;/p&gt;

&lt;p&gt;TanStack Start, built by Tanner Linsley (the person behind TanStack Query, TanStack Table, and TanStack Router), has matured into a genuinely compelling alternative. Not in a "this is interesting, keep an eye on it" way. In a "I built a production app with this and the developer experience made me question several choices I made with Next.js" way.&lt;/p&gt;

&lt;p&gt;This is not a "Next.js is dead" article. I still use Next.js for certain projects and recommend it in specific situations. But the days of it being the only serious option are over, and understanding when to use each framework is now a real skill.&lt;/p&gt;




&lt;h2&gt;
  
  
  What TanStack Start Actually Is
&lt;/h2&gt;

&lt;p&gt;TanStack Start is a full-stack React framework built on top of TanStack Router. It uses Vinxi (a Vite-based server framework from the SolidStart team) as its build engine and Nitro as its server runtime. If those names mean nothing to you, the practical summary is: it is fast, it is modern, and it is built on battle-tested foundations.&lt;/p&gt;

&lt;p&gt;The core philosophy is "just React." No React Server Components. No compiler magic. No &lt;code&gt;"use client"&lt;/code&gt; directives. You write React components the way you always have, and the framework gives you full-stack capabilities on top of that: server-side rendering, data loading, server functions, and middleware, all without requiring a new mental model for how React works.&lt;/p&gt;

&lt;p&gt;TanStack Start hit v1.0 in early 2025 after an extended beta. It is production-ready and people are shipping real applications with it. That matters because a year ago the answer to "should I use TanStack Start" was "maybe wait." That is no longer the case.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Routing Difference Is Bigger Than You Think
&lt;/h2&gt;

&lt;p&gt;I expected routing to be a minor differentiator. It turned out to be the single biggest reason developers switch.&lt;/p&gt;

&lt;h3&gt;
  
  
  TanStack Start: Type Safety That Actually Works
&lt;/h3&gt;

&lt;p&gt;TanStack Router (which powers TanStack Start) generates a fully typed route tree at build time. Every route parameter, every search parameter, every loader return type is known to TypeScript without you writing a single type annotation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Route definition with validated search params&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;Route&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;createFileRoute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/products/&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)({&lt;/span&gt;
  &lt;span class="na"&gt;validateSearch&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;object&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;category&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;string&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;optional&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="na"&gt;sort&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;enum&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;price&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;name&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;date&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]).&lt;/span&gt;&lt;span class="k"&gt;default&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;date&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="na"&gt;page&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;number&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="k"&gt;default&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="p"&gt;}),&lt;/span&gt;
  &lt;span class="na"&gt;loaderDeps&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;search&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;search&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt;
  &lt;span class="na"&gt;loader&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;deps&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// deps.search is fully typed here&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;fetchProducts&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;deps&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;search&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;component&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;ProductsPage&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;ProductsPage&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// search params are typed, validated, and reactive&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;category&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;sort&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;Route&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;useSearch&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;navigate&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;Route&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;useNavigate&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

  &lt;span class="c1"&gt;// TypeScript catches typos and wrong types at compile time&lt;/span&gt;
  &lt;span class="nf"&gt;navigate&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;search&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;sort&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;price&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;page&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The search params story is particularly good. In Next.js, &lt;code&gt;useSearchParams()&lt;/code&gt; gives you a &lt;code&gt;URLSearchParams&lt;/code&gt; object. Everything is a string. You parse it yourself, validate it yourself, and hope you remembered to handle the edge cases. In TanStack Start, search params are validated with Zod schemas, typed end to end, and reactive. You use them like state but they persist in the URL.&lt;/p&gt;

&lt;h3&gt;
  
  
  Next.js: Convention Over Configuration (With Some Gaps)
&lt;/h3&gt;

&lt;p&gt;Next.js uses folder-based routing. Your file structure is your route structure. &lt;code&gt;app/products/[id]/page.tsx&lt;/code&gt; creates the route &lt;code&gt;/products/:id&lt;/code&gt;. It works and it is intuitive.&lt;/p&gt;

&lt;p&gt;But the type safety story has gaps. Route params come in as &lt;code&gt;{ params: { id: string } }&lt;/code&gt;. Search params are accessed through &lt;code&gt;useSearchParams()&lt;/code&gt; which returns untyped strings. There is no built-in validation layer. You either add a library like &lt;code&gt;next-safe-navigation&lt;/code&gt; or you write your own parsing logic.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Next.js approach&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;ProductPage&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;params&lt;/span&gt; &lt;span class="p"&gt;}:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nl"&gt;params&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;searchParams&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useSearchParams&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="c1"&gt;// searchParams.get('sort') returns string | null&lt;/span&gt;
  &lt;span class="c1"&gt;// No validation, no type inference&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;sort&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;searchParams&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;sort&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;??&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;date&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="c1"&gt;// Is 'date' a valid sort value? TypeScript cannot tell you.&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For simple routes, this is fine. For applications with complex filtering, pagination, and multi-parameter URLs (dashboards, admin panels, search interfaces), the gap in developer experience is significant. I have written hundreds of lines of boilerplate in Next.js projects to get type-safe URL state that TanStack Start gives me for free.&lt;/p&gt;
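&lt;p&gt;For reference, this is the kind of hand-written parsing I mean: coerce, validate, and default every parameter yourself. The schema mirrors the earlier TanStack Start example and is illustrative, not taken from any library:&lt;/p&gt;

```typescript
// Hand-rolled version of what TanStack Start's validateSearch generates
// from a schema: coerce, validate, and default each URL parameter.
type Sort = 'price' | 'name' | 'date';

interface ProductSearch {
  category?: string;
  sort: Sort;
  page: number;
}

function parseProductSearch(params: URLSearchParams): ProductSearch {
  const sortRaw = params.get('sort');
  const sort: Sort =
    sortRaw === 'price' || sortRaw === 'name' || sortRaw === 'date' ? sortRaw : 'date';

  // Anything non-numeric or below 1 falls back to the first page.
  const pageRaw = Number(params.get('page'));
  const page = Number.isInteger(pageRaw) && pageRaw >= 1 ? pageRaw : 1;

  return { category: params.get('category') ?? undefined, sort, page };
}

const parsed = parseProductSearch(new URLSearchParams('sort=price&page=3'));
```

Multiply this by every route with filters or pagination and the boilerplate adds up quickly.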




&lt;h2&gt;
  
  
  Data Loading: Explicit vs Magic
&lt;/h2&gt;

&lt;p&gt;This is where the philosophical difference between the two frameworks is most visible.&lt;/p&gt;

&lt;h3&gt;
  
  
  TanStack Start: Loaders and Server Functions
&lt;/h3&gt;

&lt;p&gt;Data loading in TanStack Start uses &lt;code&gt;loader&lt;/code&gt; functions on each route. They run on the server, return typed data, and the component receives that data through hooks.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;Route&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;createFileRoute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/dashboard/&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)({&lt;/span&gt;
  &lt;span class="na"&gt;loader&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;stats&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;recentActivity&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;all&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;
      &lt;span class="nf"&gt;getDashboardStats&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
      &lt;span class="nf"&gt;getRecentActivity&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="p"&gt;]);&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;stats&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;recentActivity&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;component&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Dashboard&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;Dashboard&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// Fully typed, no loading states to manage&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;stats&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;recentActivity&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;Route&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;useLoaderData&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;DashboardView&lt;/span&gt; &lt;span class="nx"&gt;stats&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;stats&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="nx"&gt;activity&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;recentActivity&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="sr"&gt;/&amp;gt;&lt;/span&gt;&lt;span class="err"&gt;;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For mutations and server-side operations, you use &lt;code&gt;createServerFn&lt;/code&gt;. It creates a typed RPC endpoint that you call like a regular function from the client.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;updateProfile&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;createServerFn&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;POST&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;validator&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;object&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;string&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;min&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="na"&gt;bio&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;string&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;max&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;500&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="p"&gt;}))&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;update&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;where&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;getAuthUserId&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// In a component, call it like a function&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;updateProfile&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Alex&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;bio&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Builder&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="c1"&gt;// result is typed based on the handler return type&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The input validation, the type inference across the client-server boundary, and the simplicity of calling a server function like a local function add up to genuinely good developer experience.&lt;/p&gt;

&lt;h3&gt;
  
  
  Next.js: Server Components and Server Actions
&lt;/h3&gt;

&lt;p&gt;Next.js uses React Server Components as its primary data loading mechanism. Components marked as server components (the default in the App Router) can fetch data directly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// This is a Server Component by default&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;Dashboard&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;stats&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;getDashboardStats&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;activity&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;getRecentActivity&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;DashboardView&lt;/span&gt; &lt;span class="nx"&gt;stats&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;stats&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="nx"&gt;activity&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;activity&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="sr"&gt;/&amp;gt;&lt;/span&gt;&lt;span class="err"&gt;;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For mutations, you use Server Actions with the &lt;code&gt;"use server"&lt;/code&gt; directive.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;use server&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;updateProfile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;formData&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;FormData&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;formData&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;name&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;bio&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;formData&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;bio&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="c1"&gt;// Manual validation, manual type assertions&lt;/span&gt;
  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;update&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;where&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;getAuthUserId&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;bio&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="nf"&gt;revalidatePath&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/profile&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Server Components are powerful. Zero client-side JavaScript for server-rendered content. Streaming. Partial prerendering. The model is innovative, and when it works well, it works very well.&lt;/p&gt;

&lt;p&gt;But the complexity cost is real. You need to understand the server-client component boundary. You need to know when to add &lt;code&gt;"use client"&lt;/code&gt;. You need to reason about which components hydrate and which do not. Server Actions have awkward error handling and no built-in input validation. The caching layer, which was aggressively set to &lt;code&gt;force-cache&lt;/code&gt; in Next.js 14 and then changed to &lt;code&gt;no-store&lt;/code&gt; in Next.js 15, has been a consistent source of bugs and confusion.&lt;/p&gt;
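&lt;p&gt;Because a Server Action receives an untyped &lt;code&gt;FormData&lt;/code&gt;, validation and error signaling are entirely on you. Here is a minimal sketch of one common workaround, returning a discriminated result instead of throwing; the &lt;code&gt;db&lt;/code&gt; and &lt;code&gt;revalidatePath&lt;/code&gt; calls from the earlier snippet are assumed and left commented out:&lt;/p&gt;

```typescript
'use server';

// Sketch of manual validation for a Server Action. The `db` and
// `revalidatePath` calls from the earlier snippet are assumed to exist
// and are commented out here.
type ProfileInput = { name: string; bio: string };
type ActionResult = { ok: true } | { ok: false; error: string };

// Pure helper: narrow FormData entries to the shape we expect.
export function parseProfile(formData: FormData): ProfileInput | null {
  const name = formData.get('name');
  const bio = formData.get('bio');
  if (typeof name !== 'string' || name.trim() === '') return null;
  if (typeof bio !== 'string') return null;
  return { name: name.trim(), bio };
}

export async function updateProfile(formData: FormData): Promise<ActionResult> {
  const input = parseProfile(formData);
  if (!input) return { ok: false, error: 'Invalid profile data' };
  // await db.user.update({ where: { id: getAuthUserId() }, data: input });
  // revalidatePath('/profile');
  return { ok: true };
}
```

&lt;p&gt;None of this is provided by the framework; every action ends up re-implementing some version of it, which is exactly the boilerplate &lt;code&gt;createServerFn&lt;/code&gt; absorbs.&lt;/p&gt;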

&lt;p&gt;I have spent more time debugging Next.js caching behavior than I care to admit. TanStack Start's explicit caching through TanStack Query (where you set &lt;code&gt;staleTime&lt;/code&gt; and &lt;code&gt;gcTime&lt;/code&gt; yourself) is less magical but far more predictable.&lt;/p&gt;
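&lt;p&gt;To make "explicit caching" concrete, here is roughly what it looks like with TanStack Query's &lt;code&gt;queryOptions&lt;/code&gt; helper (v5 API); &lt;code&gt;fetchDashboardStats&lt;/code&gt; is a placeholder for your own fetcher:&lt;/p&gt;

```typescript
import { queryOptions } from '@tanstack/react-query';

// Sketch of explicit cache control; `fetchDashboardStats` is a
// placeholder, not a real API.
declare function fetchDashboardStats(): Promise<{ users: number }>;

export const dashboardStatsQuery = queryOptions({
  queryKey: ['dashboard-stats'],
  queryFn: fetchDashboardStats,
  staleTime: 60_000,  // data counts as fresh for 1 minute: no background refetch
  gcTime: 5 * 60_000, // unused cache entries are garbage-collected after 5 minutes
});
```

&lt;p&gt;There is no hidden default to discover later: the caching behavior is the two numbers you wrote down.&lt;/p&gt;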




&lt;h2&gt;
  
  
  Performance: Real Numbers
&lt;/h2&gt;

&lt;p&gt;Let me share the performance differences I have actually measured, not theoretical benchmarks.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;TanStack Start&lt;/th&gt;
&lt;th&gt;Next.js (App Router)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Dev server cold start&lt;/td&gt;
&lt;td&gt;300 to 500ms&lt;/td&gt;
&lt;td&gt;2 to 5 seconds&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;HMR update&lt;/td&gt;
&lt;td&gt;50 to 100ms&lt;/td&gt;
&lt;td&gt;200 to 500ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Client bundle (hello world)&lt;/td&gt;
&lt;td&gt;45 to 60 KB gzipped&lt;/td&gt;
&lt;td&gt;80 to 95 KB gzipped&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Build time (medium app)&lt;/td&gt;
&lt;td&gt;Noticeably faster&lt;/td&gt;
&lt;td&gt;Slower, especially with App Router&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;TTFB (SSR)&lt;/td&gt;
&lt;td&gt;Comparable&lt;/td&gt;
&lt;td&gt;Comparable&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Lighthouse score&lt;/td&gt;
&lt;td&gt;95 to 100&lt;/td&gt;
&lt;td&gt;90 to 98&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The dev server speed difference is the one you feel every day. TanStack Start uses Vite, which means near-instant cold starts and HMR so fast you barely notice it. Next.js improved significantly with Turbopack (stable in Next.js 15), but Vite still wins in most real-world setups.&lt;/p&gt;

&lt;p&gt;The bundle size difference comes from TanStack Start not shipping the React Server Components runtime, which adds roughly 15 to 20 KB to every Next.js App Router application. For a hello world this is noticeable in the numbers. For a large app, the difference becomes proportionally smaller but it is still there.&lt;/p&gt;

&lt;p&gt;Where Next.js has a genuine performance edge is static generation and ISR (Incremental Static Regeneration). If you are building a content-heavy site with thousands of pages that can be pre-rendered and cached at the CDN level, Next.js plus Vercel is hard to beat. TanStack Start can do SSG but it is not the framework's primary strength.&lt;/p&gt;
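&lt;p&gt;ISR in practice is a single export from a route segment. A minimal sketch for recent Next.js versions (where &lt;code&gt;params&lt;/code&gt; is async); the API URL is a placeholder:&lt;/p&gt;

```typescript
// app/posts/[slug]/page.tsx -- ISR sketch; the API URL is a placeholder.
export const revalidate = 3600; // re-generate this page at most once per hour

export default async function PostPage({
  params,
}: {
  params: Promise<{ slug: string }>;
}) {
  const { slug } = await params;
  const post = await fetch(`https://api.example.com/posts/${slug}`).then(
    (res) => res.json()
  );
  return <article>{post.title}</article>;
}
```

&lt;p&gt;Pages regenerate in the background while stale copies keep serving from the CDN, which is the behavior content sites want and the one TanStack Start does not try to own.&lt;/p&gt;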




&lt;h2&gt;
  
  
  The Deployment Question
&lt;/h2&gt;

&lt;p&gt;This is where I see a lot of developers making their decision, and it is worth being honest about.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Next.js works best on Vercel.&lt;/strong&gt; That is not a conspiracy theory. Features like ISR, Edge Middleware, &lt;code&gt;next/image&lt;/code&gt; optimization, and analytics are designed to work seamlessly on Vercel's infrastructure. You can self-host Next.js, and many companies do. But the experience degrades: community-maintained Docker images fill gaps that the official tooling does not cover, and features that "just work" on Vercel require extra configuration elsewhere.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TanStack Start deploys anywhere.&lt;/strong&gt; Because it uses Nitro as its server runtime (the same one Nuxt uses), it has built-in adapters for Node.js, Bun, Deno, Cloudflare Workers, AWS Lambda, Vercel, Netlify, and more. You are not locked into any platform. The deployment story is genuinely platform-agnostic.&lt;/p&gt;

&lt;p&gt;If you are already on Vercel and happy with it, this is not a problem. If you have opinions about where your code runs, or your company has specific infrastructure requirements, TanStack Start's flexibility is a meaningful advantage.&lt;/p&gt;




&lt;h2&gt;
  
  
  Ecosystem and Community: The Honest Trade-off
&lt;/h2&gt;

&lt;p&gt;Let me be straightforward. Next.js has a bigger ecosystem by a large margin.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;TanStack Start&lt;/th&gt;
&lt;th&gt;Next.js&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;GitHub stars (core)&lt;/td&gt;
&lt;td&gt;~30K (TanStack Router)&lt;/td&gt;
&lt;td&gt;~130K&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;npm weekly downloads&lt;/td&gt;
&lt;td&gt;Growing fast, still a fraction&lt;/td&gt;
&lt;td&gt;6 to 7 million&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Job listings&lt;/td&gt;
&lt;td&gt;Emerging, mostly startups&lt;/td&gt;
&lt;td&gt;Dominant in React market&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Third-party examples&lt;/td&gt;
&lt;td&gt;Hundreds&lt;/td&gt;
&lt;td&gt;Thousands&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Learning resources&lt;/td&gt;
&lt;td&gt;Good, growing&lt;/td&gt;
&lt;td&gt;Massive&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Community size&lt;/td&gt;
&lt;td&gt;~30K Discord members&lt;/td&gt;
&lt;td&gt;Fragmented but huge&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;If you hit a weird edge case at 2 AM, there are more Stack Overflow answers, blog posts, and GitHub discussions for Next.js. If you are hiring, more candidates know Next.js. If you need a specific integration example, Next.js probably has one.&lt;/p&gt;

&lt;p&gt;TanStack Start's community is smaller but highly engaged. The TanStack Discord is active, Tanner Linsley is responsive to issues, and the developers using it tend to be experienced engineers who chose it deliberately. The documentation is solid and improving. But you will sometimes be the first person to encounter a specific problem, and that requires a different comfort level.&lt;/p&gt;

&lt;p&gt;The trajectory matters though. TanStack Start's growth curve in 2025 and 2026 mirrors what Vite's adoption looked like in 2021 and 2022. Rapid uptake among experienced developers, with broader adoption following. Whether it reaches Next.js scale is an open question, but it has already crossed the threshold of "safe to use in production."&lt;/p&gt;




&lt;h2&gt;
  
  
  When to Use Each: My Honest Recommendation
&lt;/h2&gt;

&lt;p&gt;After building with both frameworks in production this year, here is how I think about the decision.&lt;/p&gt;

&lt;h3&gt;
  
  
  Choose TanStack Start when:
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;You are building an application.&lt;/strong&gt; Dashboards, SaaS products, admin panels, internal tools, anything where the primary experience is dynamic, interactive, and data-driven. This is where TanStack Start's type-safe routing, validated search params, and explicit data loading shine.&lt;/p&gt;
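&lt;p&gt;Validated search params are worth a concrete look, since dashboards live and die by their URL state. In TanStack Router, a route declares a &lt;code&gt;validateSearch&lt;/code&gt; function and the params come out typed everywhere that route is referenced. A sketch; the &lt;code&gt;/products&lt;/code&gt; route and its filters are made up:&lt;/p&gt;

```typescript
import { createFileRoute } from '@tanstack/react-router';

// Pure validator: coerce raw search params into a typed shape.
export function parseProductSearch(
  search: Record<string, unknown>
): { page: number; sort: 'price' | 'name' } {
  return {
    page: Number(search.page ?? 1),
    sort: search.sort === 'price' ? 'price' : 'name',
  };
}

// Hypothetical route: `search` is now { page: number; sort: 'price' | 'name' }
// in loaders, links, and components targeting this route.
export const Route = createFileRoute('/products')({
  validateSearch: parseProductSearch,
});
```

&lt;p&gt;A typo in a &lt;code&gt;Link&lt;/code&gt;'s search params to this route becomes a compile error, not a silently broken filter.&lt;/p&gt;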

&lt;p&gt;&lt;strong&gt;Type safety is a priority.&lt;/strong&gt; If your team cares about catching bugs at compile time rather than runtime, TanStack Start's end-to-end type inference is a genuine differentiator. No code generation, no extra packages, it just works.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You want to deploy anywhere.&lt;/strong&gt; If Vercel is not your platform of choice, or you want the flexibility to move later, TanStack Start's platform-agnostic deployment is a real benefit.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Your team already uses TanStack Query.&lt;/strong&gt; The integration between TanStack Start and TanStack Query is seamless. If you are already using Query for data fetching, Start feels like a natural extension.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You prefer explicit over magical.&lt;/strong&gt; If the mental model of "I know exactly what runs on the server and what runs on the client" appeals to you more than RSC's automatic splitting, TanStack Start will feel right.&lt;/p&gt;

&lt;h3&gt;
  
  
  Stick with Next.js when:
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;You are building a content site.&lt;/strong&gt; Blogs, marketing pages, documentation sites, e-commerce storefronts. Anything where pre-rendering, ISR, and CDN caching are critical to performance. Next.js was built for this and it shows.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You need React Server Components.&lt;/strong&gt; If shipping zero JavaScript for server-rendered content is important for your performance budget, RSC is a genuine innovation that TanStack Start does not offer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Your team already knows Next.js.&lt;/strong&gt; Migration cost is real. If your team is productive with Next.js and does not hit the pain points I described, switching for the sake of switching makes no sense.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You need the largest possible ecosystem.&lt;/strong&gt; More tutorials, more examples, more Stack Overflow answers, more candidates in the hiring pool. For some teams and some projects, this matters more than DX improvements.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You are invested in Vercel.&lt;/strong&gt; If Vercel is your deployment platform and you use their analytics, image optimization, and edge functions, Next.js is the natural choice and the integration is excellent.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Migration Question
&lt;/h2&gt;

&lt;p&gt;If you are considering moving an existing Next.js project to TanStack Start, here is what I would tell you.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Do not migrate for the sake of migrating.&lt;/strong&gt; If your Next.js app works, your team is productive, and you are not hitting significant pain points, stay where you are. Framework migrations are expensive and the grass is not always greener.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Consider it for your next project.&lt;/strong&gt; If you are starting something new and the use case fits (dynamic app, type safety matters, deployment flexibility needed), try TanStack Start for that project instead. Low risk, high learning value.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you do migrate, route by route.&lt;/strong&gt; You do not need to rewrite everything at once. Start with one section of your app, prove out the patterns, and expand from there. The routing model is different enough that a big-bang migration is likely to introduce bugs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The biggest lift is rethinking data loading.&lt;/strong&gt; Moving from Server Components and Server Actions to loaders and &lt;code&gt;createServerFn&lt;/code&gt; requires changing how you think about data flow. The concepts are similar but the patterns are different. Budget time for your team to internalize the new approach.&lt;/p&gt;
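&lt;p&gt;For a sense of scale, the Server Action shown earlier maps to roughly this shape as a server function. This is a sketch based on &lt;code&gt;createServerFn&lt;/code&gt;; the exact chaining varies by version, and &lt;code&gt;db&lt;/code&gt; and &lt;code&gt;getAuthUserId&lt;/code&gt; are assumed app helpers:&lt;/p&gt;

```typescript
import { createServerFn } from '@tanstack/react-start';

// Assumed app helpers, declared here only so the sketch type-checks.
declare const db: { user: { update(args: unknown): Promise<unknown> } };
declare function getAuthUserId(): string;

// Same profile update as the earlier Server Action, with validation
// built into the function definition instead of hand-rolled.
export const updateProfile = createServerFn({ method: 'POST' })
  .validator((data: { name: string; bio: string }) => {
    if (!data.name.trim()) throw new Error('Name is required');
    return data;
  })
  .handler(async ({ data }) => {
    await db.user.update({ where: { id: getAuthUserId() }, data });
    return { ok: true };
  });
```

&lt;p&gt;The shape is close enough that the port is mechanical; the hard part is deciding which data belongs in route loaders versus component-level queries.&lt;/p&gt;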




&lt;h2&gt;
  
  
  What I Am Watching
&lt;/h2&gt;

&lt;p&gt;A few things will shape how this comparison evolves over the next year.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Next.js and Turbopack.&lt;/strong&gt; If Turbopack closes the dev server speed gap with Vite (it is getting closer), one of TanStack Start's most tangible advantages shrinks. The Next.js team is investing heavily here.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TanStack Start's ecosystem growth.&lt;/strong&gt; The framework needs more third-party integrations, more learning resources, and a bigger community to become a mainstream recommendation. The technical foundation is strong but ecosystem breadth matters for adoption.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;React Server Components outside Next.js.&lt;/strong&gt; If RSC becomes available in other frameworks (including potentially TanStack Start), the "Next.js is the only way to use RSC" argument disappears. There are early signals this might happen.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tanner Linsley's roadmap.&lt;/strong&gt; TanStack Start benefits from being maintained by a developer with a strong track record of finishing what he starts. TanStack Query, Table, and Router are all excellent, well-maintained libraries. That track record gives me confidence in Start's long-term viability.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Bottom Line
&lt;/h2&gt;

&lt;p&gt;The React framework landscape is no longer a one-horse race. Next.js remains excellent for content-heavy sites and teams invested in the Vercel ecosystem. TanStack Start is the better choice for dynamic applications where type safety, explicit data flow, and deployment flexibility matter most.&lt;/p&gt;

&lt;p&gt;The question is not which framework is objectively better. It is which one fits the project you are building right now. And for the first time in years, the answer is not automatically Next.js.&lt;/p&gt;

&lt;p&gt;If you are a React developer who has not tried TanStack Start yet, build a small project with it. Not to switch, just to understand what the fuss is about. The type-safe routing alone will make you think differently about what a React framework should provide.&lt;/p&gt;

&lt;p&gt;That is how it started for me. I built one small tool with it. Then I built something bigger. And now I reach for it every time I am starting a new application that does not need static generation. That shift happened not because of hype but because the developer experience is genuinely that good.&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>nextjs</category>
      <category>react</category>
      <category>webdev</category>
    </item>
  </channel>
</rss>
