<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: DAVID VIEJO</title>
    <description>The latest articles on Forem by DAVID VIEJO (@david_viejo_4d48fdfa7cfff).</description>
    <link>https://forem.com/david_viejo_4d48fdfa7cfff</link>
    
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/david_viejo_4d48fdfa7cfff"/>
    <language>en</language>
    <item>
      <title>How I Use AI to Build a 55-Crate Rust Project (Honestly)</title>
      <dc:creator>DAVID VIEJO</dc:creator>
      <pubDate>Fri, 20 Mar 2026 06:50:57 +0000</pubDate>
      <link>https://forem.com/david_viejo_4d48fdfa7cfff/how-i-use-ai-to-build-a-55-crate-rust-project-honestly-2if6</link>
      <guid>https://forem.com/david_viejo_4d48fdfa7cfff/how-i-use-ai-to-build-a-55-crate-rust-project-honestly-2if6</guid>
      <description>&lt;p&gt;I build Temps alone. The codebase is 55 Rust crates. That's a lot of surface area for one person. So yes, I use AI assistance heavily. Claude, mostly.&lt;/p&gt;

&lt;p&gt;I want to be specific about how I use it, because the honest answer is more nuanced than "AI writes my code" or "I write everything myself."&lt;/p&gt;

&lt;h2&gt;
  
  
  What AI is actually good at
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Boilerplate.&lt;/strong&gt; Rust has a lot of it. Implementing a trait for a new type means writing the same three methods with the same signature patterns I've written fifty times. I'll describe what I need and let Claude draft the implementation. I read every line before it goes in, but the drafting is fast.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tests.&lt;/strong&gt; Writing unit tests is often more tedious than writing the code being tested. AI drafts test cases quickly. I review them for completeness, add the edge cases it missed (there are always edge cases it missed), and move on.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Documentation.&lt;/strong&gt; Rust has &lt;code&gt;rustdoc&lt;/code&gt;, and public APIs need doc comments. Describing what a function does in a comment is the kind of task where AI is faster than I am and the output is usually as good or better.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Debugging known error classes.&lt;/strong&gt; If I have a borrow checker error I've seen before, I'll paste it in and let Claude explain what's happening. This is faster than re-deriving the explanation from first principles every time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Translation between representations.&lt;/strong&gt; I have a Postgres schema and I want the corresponding Sea-ORM entity structs. That's mechanical translation. AI handles it in seconds. I verify the output and check the types.&lt;/p&gt;

&lt;h2&gt;
  
  
  What AI is not good at
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Architecture decisions.&lt;/strong&gt; I've tried asking Claude to help me decide how to structure a new subsystem and the answers are plausible but shallow. They don't account for the specific constraints of my codebase's design, the things I know from having built it over two years that aren't captured in any single context window.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Debugging race conditions.&lt;/strong&gt; When something is timing-dependent, when a test fails 1 in 20 runs, when a deployment works on the third retry but not the first, AI suggestions are often confident and wrong. These bugs require understanding the system's actual runtime behavior, not its code structure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Knowing when to say no.&lt;/strong&gt; AI will implement almost any feature I describe if I frame the request confidently. It rarely pushes back with "this is the wrong approach" or "this will create a problem in three months." That skepticism is something I have to supply myself.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Subtle Rust lifetime issues.&lt;/strong&gt; Straightforward borrow checker errors are fine. But the ones that involve async lifetimes, shared mutable state across task boundaries, or complex interaction between the proxy and the deployment engine — those require careful human reasoning. AI gets them wrong often enough that I've learned to treat its suggestions as a starting point, not an answer.&lt;/p&gt;

&lt;h2&gt;
  
  
  How I've changed how I work
&lt;/h2&gt;

&lt;p&gt;The biggest shift is that I've stopped writing first drafts of boilerplate myself. I used to start every new module by typing from scratch. Now I describe what I want in a comment and let Claude draft the structure, then I revise from there.&lt;/p&gt;

&lt;p&gt;This sounds minor, but it changed my relationship with tedious tasks. The parts of coding I used to procrastinate because they were boring are now the fastest parts. The cognitive overhead of starting a new file went to nearly zero.&lt;/p&gt;

&lt;p&gt;What it means is that my attention is available for the hard parts. Designing the state machine for a new deployment feature. Thinking through the failure modes. Deciding what to leave out. That's where I spend my time now.&lt;/p&gt;

&lt;h2&gt;
  
  
  The parts I want to get better
&lt;/h2&gt;

&lt;p&gt;Prompt engineering for Rust codebases is still somewhat rough. I've gotten better at providing context (I include relevant type signatures, trait bounds, and architectural constraints in my prompts) but there's still a lot of back-and-forth on anything non-trivial.&lt;/p&gt;

&lt;p&gt;I'd like better integration with the actual codebase — asking questions about my specific code rather than generic Rust patterns. That's getting better. But it's not there yet.&lt;/p&gt;

&lt;h2&gt;
  
  
  Honest assessment
&lt;/h2&gt;

&lt;p&gt;AI made me probably 30% faster overall. Not 10x, not 2x. 30%. The gains are concentrated in the boring parts, which means the hard parts (the architecture, the edge cases, the debugging) are still slow. But they're the same speed they would have been without AI. The tedium tax came down.&lt;/p&gt;

&lt;p&gt;For a solo developer building something this large, 30% faster is the difference between shipping and not shipping.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>rust</category>
      <category>productivity</category>
      <category>programming</category>
    </item>
    <item>
      <title>Why We Chose Rust for a Deployment Platform</title>
      <dc:creator>DAVID VIEJO</dc:creator>
      <pubDate>Fri, 20 Mar 2026 06:50:54 +0000</pubDate>
      <link>https://forem.com/david_viejo_4d48fdfa7cfff/why-we-chose-rust-for-a-deployment-platform-4gmf</link>
      <guid>https://forem.com/david_viejo_4d48fdfa7cfff/why-we-chose-rust-for-a-deployment-platform-4gmf</guid>
      <description>&lt;p&gt;Every Rust project eventually publishes a post about memory safety. This isn't going to be that post. Memory safety is real, and it matters, but it's not why we chose Rust for a deployment platform. There are better reasons, and they're more concrete.&lt;/p&gt;

&lt;h2&gt;
  
  
  The single binary problem
&lt;/h2&gt;

&lt;p&gt;Deployment tools have a distribution problem. Install a typical self-hosted PaaS and you're pulling Docker images, Node.js runtimes, PHP scripts, and a handful of dependencies that need to stay in sync. Something breaks in an update, and you're debugging which layer of the stack caused it.&lt;/p&gt;

&lt;p&gt;Temps ships as one binary. No runtime, no interpreter, no package manager to satisfy. Download a ~30MB file and you're done. That binary contains the proxy, the deployment engine, the analytics pipeline, the error tracking backend, the uptime monitor, and the AI gateway.&lt;/p&gt;

&lt;p&gt;Rust makes this possible. The compiler links everything statically. You cross-compile for &lt;code&gt;x86_64-unknown-linux-gnu&lt;/code&gt; or &lt;code&gt;aarch64-unknown-linux-gnu&lt;/code&gt; on a Mac, upload the artifact, and it runs. No "please install libssl-dev" errors on the target server.&lt;/p&gt;

&lt;p&gt;Go can do this too, but Go has a garbage collector.&lt;/p&gt;

&lt;h2&gt;
  
  
  GC pauses are the wrong tradeoff for a proxy
&lt;/h2&gt;

&lt;p&gt;The proxy is the most latency-sensitive piece of the platform. Every HTTP request passes through it. When you're routing hundreds of concurrent connections and doing TLS termination, a pause of even 10ms is visible to users.&lt;/p&gt;

&lt;p&gt;Go's garbage collector has gotten excellent. Modern Go applications see GC pauses in the 1-2ms range under normal conditions. But "normal conditions" is the problem. Under memory pressure, during a traffic spike, when large objects are being collected, pauses creep up. And a deployment platform sees traffic spikes by definition, because deployments themselves generate bursty internal traffic.&lt;/p&gt;

&lt;p&gt;Rust has no garbage collector. Memory is freed deterministically when the owning variable goes out of scope. The proxy's latency profile under load is the same as its latency profile at idle. That predictability matters when you're routing traffic for a production application.&lt;/p&gt;

&lt;h2&gt;
  
  
  50MB for an entire platform
&lt;/h2&gt;

&lt;p&gt;The full Temps process, handling active deployments, collecting analytics, running the proxy, and polling for uptime, idles at roughly 50MB of RAM.&lt;/p&gt;

&lt;p&gt;Compare that to what you'd need to assemble the equivalent toolkit:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Node.js analytics server: 200-400MB resident&lt;/li&gt;
&lt;li&gt;A Go-based proxy (Traefik): 50-100MB&lt;/li&gt;
&lt;li&gt;A Python error tracking backend: 300-500MB&lt;/li&gt;
&lt;li&gt;A monitoring daemon: 50-100MB&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Stack them together and you're at 600MB-plus before your first application is even deployed. On a small VPS with 2GB RAM, that's 30% of your available memory gone to tooling.&lt;/p&gt;

&lt;p&gt;Rust's compiled code is dense. There's no VM overhead, no JIT warm-up, no per-request interpreter work. The binary does exactly what the source code says, at hardware speed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cross-compilation without tears
&lt;/h2&gt;

&lt;p&gt;Temps runs on Hetzner ARM instances, on x86 bare metal, on Raspberry Pi clusters, and on Apple Silicon development machines (for local testing). Supporting that matrix without Rust would mean maintaining separate builds, separate runtimes, or a virtualization layer.&lt;/p&gt;

&lt;p&gt;With Rust, cross-compilation is a compile flag. The CI pipeline produces four target binaries from the same source: &lt;code&gt;x86_64-linux-gnu&lt;/code&gt;, &lt;code&gt;aarch64-linux-gnu&lt;/code&gt;, &lt;code&gt;x86_64-darwin&lt;/code&gt;, and &lt;code&gt;aarch64-darwin&lt;/code&gt;. The install script picks the right one. No containers, no QEMU, no emulation layer needed on the target.&lt;/p&gt;

&lt;h2&gt;
  
  
  The proxy handles thousands of connections without struggling
&lt;/h2&gt;

&lt;p&gt;Temps uses Pingora (Cloudflare's proxy engine, also written in Rust) for all traffic routing. Pingora was written because nginx's architecture doesn't handle connection reuse well at scale. Cloudflare processes trillions of requests per day through it.&lt;/p&gt;

&lt;p&gt;Pingora uses async Rust (tokio) for its I/O model. Each worker thread handles thousands of concurrent connections through non-blocking I/O, without the per-connection thread overhead of nginx's workers. The result is consistent throughput under load, not throughput that degrades as connection count climbs.&lt;/p&gt;

&lt;p&gt;For a platform that's doing zero-downtime deploys (spinning up new containers, running health checks, swapping routes under live traffic), this matters. The proxy doesn't buckle when deployment events create a burst of internal routing work.&lt;/p&gt;

&lt;h2&gt;
  
  
  A practical note
&lt;/h2&gt;

&lt;p&gt;None of this means Rust is the right tool for every problem. The landing page is Next.js. Infrastructure scripts are bash. Rust earns its complexity budget specifically where you need compile-time guarantees, low overhead, and predictable runtime behavior. The key question when evaluating Rust for a project is whether the proxy and concurrency requirements justify the learning curve and compile times. For a deployment platform's core, the answer was yes. For a lot of other projects, it wouldn't be.&lt;/p&gt;

</description>
      <category>rust</category>
      <category>devops</category>
      <category>architecture</category>
      <category>performance</category>
    </item>
    <item>
      <title>Session Replay: What It Is, How It Works, and When You Need It</title>
      <dc:creator>DAVID VIEJO</dc:creator>
      <pubDate>Fri, 20 Mar 2026 06:50:51 +0000</pubDate>
      <link>https://forem.com/david_viejo_4d48fdfa7cfff/session-replay-what-it-is-how-it-works-and-when-you-need-it-3a1f</link>
      <guid>https://forem.com/david_viejo_4d48fdfa7cfff/session-replay-what-it-is-how-it-works-and-when-you-need-it-3a1f</guid>
      <description>&lt;p&gt;Session replay records what users actually do on your site: mouse movements, clicks, scrolls, keypresses, and every DOM mutation. Play it back and you're watching a user's experience as they lived it.&lt;/p&gt;

&lt;p&gt;It sounds invasive. It kind of is. That tension is worth understanding before you pick a tool.&lt;/p&gt;

&lt;h2&gt;
  
  
  How session replay actually works
&lt;/h2&gt;

&lt;p&gt;Almost every session replay tool in existence — FullStory, Hotjar, LogRocket, PostHog Replay, Microsoft Clarity, and most self-hosted options — is built on the same foundation: &lt;strong&gt;rrweb&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;rrweb (record and replay the web) is an open-source library that works in two steps. First, it takes a full snapshot of the DOM at the start of a session. Then it records every subsequent mutation as a compact, serialized diff. Scroll positions, CSS changes, added or removed elements — every change gets timestamped and appended to the recording stream.&lt;/p&gt;

&lt;p&gt;On replay, rrweb reconstructs the DOM from the initial snapshot and replays the mutations in order, synced to a virtual timeline. What you see isn't a video file. It's a reconstructed DOM — which is why replays can be paused, scrubbed, and inspected as live HTML.&lt;/p&gt;

&lt;p&gt;The practical implications of this architecture matter:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Storage is small.&lt;/strong&gt; A typical 5-minute session compresses to 100-500KB. Much cheaper to store than video.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What gets captured is everything rendered.&lt;/strong&gt; rrweb captures the full DOM, which means user-visible text, form fields, error messages, and anything else on the page. Including sensitive content that you didn't intend to capture.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Network requests are captured separately.&lt;/strong&gt; Most tools also let you record XHR/fetch requests alongside the DOM replay, which helps correlate user actions with API calls.&lt;/p&gt;

&lt;h2&gt;
  
  
  The privacy problem is real
&lt;/h2&gt;

&lt;p&gt;In 2017, Princeton researchers found that session replay scripts on popular sites were capturing credit card numbers and passwords in plain text. The mechanism was simple: rrweb captures form field values by default. If your input isn't masked, the content gets recorded.&lt;/p&gt;

&lt;p&gt;Most tools have added automatic masking heuristics since then — detecting fields with &lt;code&gt;type="password"&lt;/code&gt;, matching patterns for card numbers, etc. But the fundamental issue remains: you're serializing the entire rendered DOM and shipping it somewhere. Unless you explicitly audit what gets captured, you can leak sensitive data.&lt;/p&gt;

&lt;p&gt;The tooling has three approaches to this:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Input masking:&lt;/strong&gt; Replace field values with placeholder characters (&lt;code&gt;*****&lt;/code&gt;). All major tools support this. It's on by default for password fields; for other sensitive fields, you have to configure it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Element exclusion:&lt;/strong&gt; Mark elements with a CSS class (often &lt;code&gt;rr-block&lt;/code&gt; or similar) to exclude them from recording entirely. The element is still visible in the replay as a gray box but its content isn't captured.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Text masking:&lt;/strong&gt; Replace all text content with &lt;code&gt;x&lt;/code&gt; characters. The highest-privacy option, but makes replays harder to use.&lt;/p&gt;

&lt;p&gt;The data-transfer issue is separate from what gets captured. When you use a SaaS replay tool, the serialized DOM snapshots — including any content you didn't mask — leave your infrastructure. Under GDPR, this likely constitutes a transfer of personal data, which requires a Data Processing Agreement with the vendor and, if you have EU users and the vendor is US-based, Standard Contractual Clauses.&lt;/p&gt;

&lt;h2&gt;
  
  
  The tools compared
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;FullStory&lt;/strong&gt; — The enterprise option. Deep session data, rage-click detection, DX Data analytics, and an API for programmatic access. Pricing starts around $300/mo for 1,000 sessions and climbs steeply at volume. Data lives on FullStory's US servers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hotjar&lt;/strong&gt; — The mid-market option. $39/mo for 100 recorded sessions per day (about 3,000/mo). Adds heatmaps and user surveys. Simpler interface than FullStory.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;LogRocket&lt;/strong&gt; — Developer-focused. In addition to session replay, it captures Redux state, network requests, and console logs. Particularly useful for debugging — you get the full application context, not just the visual replay. Pricing starts around $99/mo.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PostHog Replay&lt;/strong&gt; — Part of PostHog's broader product analytics platform. If you're already using PostHog for event analytics, the session replay is included (up to 5,000 recordings/mo on the free tier). Built on rrweb, same as the others. Self-hostable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Microsoft Clarity&lt;/strong&gt; — Free. No session limits. Surprisingly capable: session replay, heatmaps, rage-click detection. The catch is that data goes to Microsoft. Clarity's terms allow Microsoft to "use the data for Microsoft's business purposes." For non-critical sites where data residency isn't a concern, it's compelling given the price.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Self-hosted with rrweb&lt;/strong&gt; — You can use rrweb directly, record sessions to your own infrastructure, and build a replay UI on top of it. This is what tools like OpenReplay and Highlight.io do, and what some teams build in-house. OpenReplay is fully open-source and deployable with Docker Compose. You own the data. The tradeoff is operational overhead.&lt;/p&gt;

&lt;h2&gt;
  
  
  When do you actually need session replay?
&lt;/h2&gt;

&lt;p&gt;Not every app needs it. Session replay adds a non-trivial script to your page (usually 50-100KB) and generates storage costs. Worth it in these situations:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;High checkout or conversion funnel abandonment.&lt;/strong&gt; "12% of users drop off at step 3" is an analytics fact. Why they drop off requires watching them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Vague bug reports.&lt;/strong&gt; "It just doesn't work" tells you nothing. A session replay of that user shows you exactly what they clicked and what the page showed in response.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Onboarding confusion.&lt;/strong&gt; Where do new users get stuck? Watching 20 onboarding sessions tells you more than most quantitative analyses.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Accessibility issues.&lt;/strong&gt; Keyboard-only navigation problems, tab order issues, and focus traps show up clearly in replays in ways that automated tests miss.&lt;/p&gt;

&lt;p&gt;Less useful for high-traffic content sites where most pages are informational and user behavior is predictable. More useful for apps with complex workflows, multi-step forms, or frequent user-reported "broken" experiences.&lt;/p&gt;

&lt;h2&gt;
  
  
  Storage cost math
&lt;/h2&gt;

&lt;p&gt;If you're self-hosting, the storage math matters. At 200KB average per session (after compression), 10,000 sessions/month is 2GB. 100,000 sessions/month is 20GB. A year of sessions at that volume is 240GB — about $5-6/mo on object storage.&lt;/p&gt;

&lt;p&gt;For most teams, storage cost for self-hosted replay is negligible compared to SaaS pricing. The main operational cost is running the service and maintaining the replay infrastructure. Tools like OpenReplay abstract that away; raw rrweb requires you to build it.&lt;/p&gt;

&lt;p&gt;The question to ask: does the debugging and UX value of replay justify the cost (money, storage, operational overhead, and privacy audit work) at your current traffic level? For most teams under 1,000 sessions/day, the answer is yes if you're running a complex product. For simple sites, probably not.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>frontend</category>
      <category>ux</category>
      <category>privacy</category>
    </item>
    <item>
      <title>How Git-Push Deployments Work Under the Hood</title>
      <dc:creator>DAVID VIEJO</dc:creator>
      <pubDate>Fri, 20 Mar 2026 06:50:49 +0000</pubDate>
      <link>https://forem.com/david_viejo_4d48fdfa7cfff/how-git-push-deployments-work-under-the-hood-156i</link>
      <guid>https://forem.com/david_viejo_4d48fdfa7cfff/how-git-push-deployments-work-under-the-hood-156i</guid>
      <description>&lt;p&gt;&lt;code&gt;git push&lt;/code&gt; deploys to production. It's the workflow that Heroku popularized, Vercel polished, and dozens of tools since have copied. But most developers who use it every day don't know what's happening between the push and the live URL. Understanding the pipeline helps you debug it when it breaks and make smarter decisions about your deployment setup.&lt;/p&gt;

&lt;p&gt;Here's what happens at each step.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: The Webhook Fires
&lt;/h2&gt;

&lt;p&gt;When you push to a repository on GitHub, GitLab, or Gitea, the platform sends an HTTP POST to any webhooks registered for that repo. The payload includes the commit SHA, the branch name, the repository URL, and the pusher's identity.&lt;/p&gt;

&lt;p&gt;Your deployment platform registers one of these webhooks when you connect a repository. On Vercel, it happens automatically when you import a project. On Coolify, Dokploy, or a self-hosted tool, you configure it from the dashboard during project setup.&lt;/p&gt;

&lt;p&gt;The webhook request is just an HTTP POST. Your deployment server needs a public IP and an open port to receive it. This is why deployment platforms need to be accessible from the internet, not just from your private network.&lt;/p&gt;

&lt;p&gt;One thing worth knowing: GitHub will retry a webhook if your server doesn't respond with a 2xx status within 10 seconds. If your build system is slow to acknowledge, you can end up with duplicate builds. Well-built platforms deduplicate by commit SHA.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: The Build Queue
&lt;/h2&gt;

&lt;p&gt;The deployment server receives the webhook, parses the payload, and queues a build job. It stores the commit SHA, the branch, and a pointer to the repository.&lt;/p&gt;

&lt;p&gt;Queuing matters because pushes can come faster than builds complete. If two developers push within 30 seconds of each other, the second push should wait for the first build to finish (or cancel it, depending on the platform's configuration). Naively triggering a concurrent build for every push causes resource contention and race conditions on the traffic swap.&lt;/p&gt;

&lt;p&gt;Platforms handle this differently: Vercel runs builds in parallel on separate infrastructure. Coolify queues them on your server. The right behavior depends on whether you have enough build capacity to run concurrently.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Clone and Detect
&lt;/h2&gt;

&lt;p&gt;The build agent clones the repository at the specific commit SHA. This is always a specific SHA, not just the branch head, because the branch might advance between when the webhook fired and when the build starts.&lt;/p&gt;

&lt;p&gt;After cloning, the build system detects how to build the app. There are two common approaches:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dockerfile detection.&lt;/strong&gt; If a &lt;code&gt;Dockerfile&lt;/code&gt; is present at the root, use it. The developer has already specified the build process. This is the most explicit option and the least surprising.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Buildpacks.&lt;/strong&gt; If there's no Dockerfile, Cloud Native Buildpacks (CNB) scan the repository for language indicators: a &lt;code&gt;package.json&lt;/code&gt; suggests Node.js, a &lt;code&gt;requirements.txt&lt;/code&gt; suggests Python, a &lt;code&gt;go.mod&lt;/code&gt; suggests Go. The matching buildpack downloads the right runtime, installs dependencies, and produces a container image. Heroku pioneered this model; the CNB specification standardized it.&lt;/p&gt;

&lt;p&gt;The advantage of buildpacks: you push code without a Dockerfile and the platform figures it out. The downside: if your app needs something non-standard, the buildpack's defaults might not be right, and debugging why takes longer than just writing a Dockerfile.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: The Container Build
&lt;/h2&gt;

&lt;p&gt;The actual build runs inside Docker (or a Docker-compatible builder like BuildKit). For a Node.js app, this means &lt;code&gt;npm ci&lt;/code&gt; followed by your build command. For a Python app, &lt;code&gt;pip install&lt;/code&gt;. For a Go binary, &lt;code&gt;go build&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Build logs stream in real time to the dashboard. This is worth appreciating: you're watching exactly what would happen if you ran &lt;code&gt;docker build&lt;/code&gt; locally. If the build fails because a dependency version is missing or an environment variable is undefined, the error is right there in the log.&lt;/p&gt;

&lt;p&gt;A detail that matters for build speed: &lt;strong&gt;Docker layer caching.&lt;/strong&gt; A well-structured Dockerfile copies &lt;code&gt;package.json&lt;/code&gt; and runs &lt;code&gt;npm install&lt;/code&gt; before copying the rest of the source code. That way, the installed dependencies layer gets cached between builds, and only the application code layer gets rebuilt on each push. A poorly structured Dockerfile invalidates the cache on every build. The difference is 30 seconds versus 4 minutes for a typical Node.js app.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5: Health Check Before Traffic Switch
&lt;/h2&gt;

&lt;p&gt;Before the new container gets any production traffic, most deployment platforms run a health check. The new container starts, and the platform pings a health endpoint (usually &lt;code&gt;/health&lt;/code&gt; or just &lt;code&gt;/&lt;/code&gt;) and waits for a 200 response.&lt;/p&gt;

&lt;p&gt;This step is what makes zero-downtime deployment possible. The old container keeps serving requests while the new one warms up. If the new container never passes its health check (because the new code has a startup bug, or the database migration failed, or a required environment variable is missing), it never receives traffic. The old version stays live.&lt;/p&gt;

&lt;p&gt;Without a health check, the platform would just replace the old container with the new one and hope. Sometimes it works. Sometimes users get 502 errors for 10-30 seconds while the new container cold-starts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 6: The Traffic Swap
&lt;/h2&gt;

&lt;p&gt;Once the new container passes its health check, the proxy routes new requests to it. The mechanism varies by platform:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nginx-based platforms&lt;/strong&gt; update an upstream block and reload the nginx config. This works but has a brief gap where in-flight requests can be interrupted.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Traefik&lt;/strong&gt; (used by Coolify and Dokploy) supports dynamic configuration: it picks up the new container via Docker labels without restarting. In-flight requests on the old container are generally handled gracefully, though the behavior depends on Traefik's version and configuration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Edge networks&lt;/strong&gt; (Vercel, Cloudflare) route traffic via their global infrastructure with connection draining behavior, ensuring in-flight requests complete on the old version before it's removed.&lt;/p&gt;

&lt;p&gt;The key distinction is between "stop sending new requests to old container" and "wait for old requests to finish before stopping the old container." The second is harder to implement correctly, but it's the difference between zero-downtime and almost-zero-downtime.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 7: Cleanup
&lt;/h2&gt;

&lt;p&gt;After traffic moves to the new container, the old container stops. Container images from old deploys get retained for a configurable period (to support rollbacks) and then pruned. Build artifacts get cleaned up.&lt;/p&gt;

&lt;p&gt;This cleanup step is easy to neglect in a DIY setup and causes a subtle problem: if you're running frequent deploys, old Docker images accumulate and fill your disk. Platforms handle this automatically; a bare Docker setup needs a cron job running &lt;code&gt;docker system prune&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Full Pipeline, Summarized
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git push
  → webhook fires (HTTP POST to your deployment server)
  → build queued (deduplicated by SHA)
  → repository cloned at commit SHA
  → build detection (Dockerfile or buildpacks)
  → container build (Docker, logs stream to dashboard)
  → health check (new container must pass before traffic switches)
  → traffic swap (proxy re-routes requests to new container)
  → old container drains and stops
  → image cleanup
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every platform that does git-push deploys, whether it's Vercel, Netlify, Coolify, Heroku, or a self-hosted tool, runs some version of this same pipeline. The differences are in speed, correctness of the traffic swap, and what you have to configure manually versus what's automatic.&lt;/p&gt;

&lt;p&gt;When a deploy fails, it's almost always at one of three steps: the build (a code error), the health check (a startup bug or missing env var), or the traffic swap (a proxy misconfiguration). Knowing which step failed cuts your debugging time in half.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>deployment</category>
      <category>git</category>
      <category>webdev</category>
    </item>
    <item>
      <title>How to Deploy a Next.js App to a VPS (The Manual Way)</title>
      <dc:creator>DAVID VIEJO</dc:creator>
      <pubDate>Fri, 20 Mar 2026 06:50:46 +0000</pubDate>
      <link>https://forem.com/david_viejo_4d48fdfa7cfff/how-to-deploy-a-nextjs-app-to-a-vps-the-manual-way-2bjb</link>
      <guid>https://forem.com/david_viejo_4d48fdfa7cfff/how-to-deploy-a-nextjs-app-to-a-vps-the-manual-way-2bjb</guid>
      <description>&lt;p&gt;Most Next.js tutorials end at &lt;code&gt;npm run dev&lt;/code&gt;. The deployment section says "push to Vercel" and moves on.&lt;/p&gt;

&lt;p&gt;That's fine until you need to own your infrastructure, keep costs under control, or just understand what's actually happening when your app goes live. This post walks through deploying a Next.js app to a bare VPS, step by step. No platform, no abstraction layer. Just you, a server, and the commands.&lt;/p&gt;

&lt;p&gt;By the end, you'll understand every piece of the deployment pipeline. And you'll be able to decide whether you want to keep doing it manually or hand it off to a tool.&lt;/p&gt;

&lt;h2&gt;
  
  
  What You Need
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A VPS from any provider (Hetzner, DigitalOcean, Linode, Vultr, whatever). 2GB RAM minimum for a Next.js app with a build step. 4GB if you're running a database on the same box.&lt;/li&gt;
&lt;li&gt;A domain name pointed at your server's IP address.&lt;/li&gt;
&lt;li&gt;SSH access to the server.&lt;/li&gt;
&lt;li&gt;A Next.js app that builds successfully with &lt;code&gt;npm run build&lt;/code&gt; on your local machine.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This guide assumes Ubuntu 22.04 or 24.04. Debian works too with minor differences.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Set Up the Server
&lt;/h2&gt;

&lt;p&gt;SSH into your new server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ssh root@your-server-ip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Update packages and install the basics:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;apt update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt upgrade &lt;span class="nt"&gt;-y&lt;/span&gt;
apt &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; curl git ufw
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Set up the firewall. Open SSH, HTTP, and HTTPS. Close everything else:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ufw allow OpenSSH
ufw allow 80
ufw allow 443
ufw &lt;span class="nb"&gt;enable&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a non-root user (running everything as root is asking for trouble):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;adduser deploy
usermod &lt;span class="nt"&gt;-aG&lt;/span&gt; &lt;span class="nb"&gt;sudo &lt;/span&gt;deploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Copy your SSH key to the new user:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;rsync &lt;span class="nt"&gt;--archive&lt;/span&gt; &lt;span class="nt"&gt;--chown&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;deploy:deploy ~/.ssh /home/deploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Log out and log back in as the &lt;code&gt;deploy&lt;/code&gt; user from now on.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Install Node.js
&lt;/h2&gt;

&lt;p&gt;Don't install Node from apt. The version in Ubuntu's default repos is usually ancient. Use the NodeSource repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://deb.nodesource.com/setup_20.x | &lt;span class="nb"&gt;sudo&lt;/span&gt; &lt;span class="nt"&gt;-E&lt;/span&gt; bash -
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; nodejs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;node &lt;span class="nt"&gt;--version&lt;/span&gt;  &lt;span class="c"&gt;# Should be 20.x&lt;/span&gt;
npm &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If your project uses a specific Node version (check &lt;code&gt;.nvmrc&lt;/code&gt; or &lt;code&gt;engines&lt;/code&gt; in &lt;code&gt;package.json&lt;/code&gt;), match it here. Mismatched Node versions between local and server are one of the most common causes of "works on my machine" deployment failures.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Clone and Build Your App
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; /home/deploy
git clone https://github.com/your-username/your-nextjs-app.git
&lt;span class="nb"&gt;cd &lt;/span&gt;your-nextjs-app
npm ci
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;npm ci&lt;/code&gt; instead of &lt;code&gt;npm install&lt;/code&gt;. It installs from the lockfile exactly, which is what you want in production. &lt;code&gt;npm install&lt;/code&gt; can resolve to different versions.&lt;/p&gt;

&lt;p&gt;Set your environment variables:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cp&lt;/span&gt; .env.example .env.production
nano .env.production
&lt;span class="c"&gt;# Fill in your production values: DATABASE_URL, API keys, etc.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Build:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm run build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If this fails, fix it before continuing. Common issues: missing env vars that the build needs at compile time (Next.js bakes &lt;code&gt;NEXT_PUBLIC_*&lt;/code&gt; variables into the client bundle during build), or native dependencies that need build tools (&lt;code&gt;sudo apt install -y build-essential&lt;/code&gt;).&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Run the App with PM2
&lt;/h2&gt;

&lt;p&gt;You need a process manager. If you just run &lt;code&gt;npm start&lt;/code&gt; in your terminal and disconnect, the process dies. PM2 keeps it running, restarts it if it crashes, and manages logs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; pm2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Start your app:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pm2 start npm &lt;span class="nt"&gt;--name&lt;/span&gt; &lt;span class="s2"&gt;"nextjs-app"&lt;/span&gt; &lt;span class="nt"&gt;--&lt;/span&gt; start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check it's running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pm2 status
pm2 logs nextjs-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By default, Next.js starts on port 3000. Verify it's listening:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl http://localhost:3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Tell PM2 to start your app on server boot:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pm2 startup
pm2 save
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;pm2 startup&lt;/code&gt; command prints a line you need to copy and run with sudo. Don't skip it, or your app won't survive a server reboot.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5: Install and Configure Nginx
&lt;/h2&gt;

&lt;p&gt;Nginx sits in front of your Next.js app. It handles SSL termination, serves static files, and proxies dynamic requests to your Node process.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a site config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;nano /etc/nginx/sites-available/your-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight nginx"&gt;&lt;code&gt;&lt;span class="k"&gt;server&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kn"&gt;listen&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;server_name&lt;/span&gt; &lt;span class="s"&gt;yourdomain.com&lt;/span&gt; &lt;span class="s"&gt;www.yourdomain.com&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="n"&gt;/&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_pass&lt;/span&gt; &lt;span class="s"&gt;http://127.0.0.1:3000&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_http_version&lt;/span&gt; &lt;span class="mf"&gt;1.1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;Upgrade&lt;/span&gt; &lt;span class="nv"&gt;$http_upgrade&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;Connection&lt;/span&gt; &lt;span class="s"&gt;'upgrade'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;Host&lt;/span&gt; &lt;span class="nv"&gt;$host&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;X-Real-IP&lt;/span&gt; &lt;span class="nv"&gt;$remote_addr&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;X-Forwarded-For&lt;/span&gt; &lt;span class="nv"&gt;$proxy_add_x_forwarded_for&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_set_header&lt;/span&gt; &lt;span class="s"&gt;X-Forwarded-Proto&lt;/span&gt; &lt;span class="nv"&gt;$scheme&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;proxy_cache_bypass&lt;/span&gt; &lt;span class="nv"&gt;$http_upgrade&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Enable the site:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo ln&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; /etc/nginx/sites-available/your-app /etc/nginx/sites-enabled/
&lt;span class="nb"&gt;sudo &lt;/span&gt;nginx &lt;span class="nt"&gt;-t&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl reload nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;proxy_set_header Upgrade&lt;/code&gt; and &lt;code&gt;Connection 'upgrade'&lt;/code&gt; lines are for WebSocket support. If your app uses real-time features, these headers are required.&lt;/p&gt;

&lt;p&gt;At this point, your app should be accessible at &lt;code&gt;http://yourdomain.com&lt;/code&gt;. No SSL yet.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 6: SSL with Let's Encrypt
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; certbot python3-certbot-nginx
&lt;span class="nb"&gt;sudo &lt;/span&gt;certbot &lt;span class="nt"&gt;--nginx&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; yourdomain.com &lt;span class="nt"&gt;-d&lt;/span&gt; www.yourdomain.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Certbot modifies your Nginx config to add SSL, sets up automatic certificate renewal, and redirects HTTP to HTTPS.&lt;/p&gt;

&lt;p&gt;Verify the renewal timer is active:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl status certbot.timer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Certificates expire every 90 days. Certbot's timer renews them automatically. If you skip this check and the timer isn't running, your site goes down in 3 months with an expired cert. It happens more often than you'd think.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 7: Set Up Deployments
&lt;/h2&gt;

&lt;p&gt;Your app is live. Now you need a way to update it when you push new code.&lt;/p&gt;

&lt;p&gt;Create &lt;code&gt;/home/deploy/deploy.sh&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;
&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt;

&lt;span class="nb"&gt;cd&lt;/span&gt; /home/deploy/your-nextjs-app

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Pulling latest code..."&lt;/span&gt;
git pull origin main

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Installing dependencies..."&lt;/span&gt;
npm ci

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Building..."&lt;/span&gt;
npm run build

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Restarting..."&lt;/span&gt;
pm2 restart nextjs-app

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Done."&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;chmod&lt;/span&gt; +x /home/deploy/deploy.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Automating with GitHub Actions
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# .github/workflows/deploy.yml&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deploy&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;main&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;deploy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deploy to VPS&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;appleboy/ssh-action@v1&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.SERVER_IP }}&lt;/span&gt;
          &lt;span class="na"&gt;username&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;deploy&lt;/span&gt;
          &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.SSH_PRIVATE_KEY }}&lt;/span&gt;
          &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/home/deploy/deploy.sh&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add &lt;code&gt;SERVER_IP&lt;/code&gt; and &lt;code&gt;SSH_PRIVATE_KEY&lt;/code&gt; to your repo's GitHub Secrets.&lt;/p&gt;

&lt;p&gt;Now every push to &lt;code&gt;main&lt;/code&gt; triggers the deploy script over SSH.&lt;/p&gt;

&lt;h3&gt;
  
  
  What This Doesn't Handle
&lt;/h3&gt;

&lt;p&gt;The deploy script above has real gaps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;No health check.&lt;/strong&gt; If the new build is broken, PM2 restarts the old process, but there's a window where requests fail.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No rollback.&lt;/strong&gt; If the deploy breaks the app, you have to manually revert.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No preview environments.&lt;/strong&gt; Every push to &lt;code&gt;main&lt;/code&gt; goes straight to production.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Downtime during restart.&lt;/strong&gt; PM2's restart kills the old process and starts the new one. There's a 1-3 second gap with 502 errors.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can solve each of these individually (blue-green deploys with Nginx upstream toggling, a webhook server, PM2's cluster mode). But each solution adds complexity, and by the time you've built all of them, you've built a deployment platform.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 8: Monitoring (The Part Most Tutorials Skip)
&lt;/h2&gt;

&lt;p&gt;Your app is deployed. How do you know it's still running tomorrow?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Logs:&lt;/strong&gt; PM2 handles application logs. Rotate them or they'll fill your disk:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pm2 &lt;span class="nb"&gt;install &lt;/span&gt;pm2-logrotate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Uptime:&lt;/strong&gt; You need something that pings your site and alerts you when it's down. Free options: UptimeRobot, Betterstack (free tier), or a cron job that curls your health endpoint.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Error tracking:&lt;/strong&gt; When your app throws an unhandled exception in production, how do you find out? Sentry (free tier), LogRocket, or parsing PM2 logs manually.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Analytics:&lt;/strong&gt; Google Analytics, Plausible, Umami, or similar.&lt;/p&gt;

&lt;p&gt;Each of these is a separate tool, a separate account, a separate dashboard.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Full Stack of What You Just Built
&lt;/h2&gt;

&lt;p&gt;Let's count:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Running on your server:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Node.js (runtime)&lt;/li&gt;
&lt;li&gt;PM2 (process manager)&lt;/li&gt;
&lt;li&gt;Nginx (reverse proxy, SSL termination)&lt;/li&gt;
&lt;li&gt;Certbot (certificate renewal)&lt;/li&gt;
&lt;li&gt;Git (code delivery)&lt;/li&gt;
&lt;li&gt;GitHub Actions (build automation)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Still need but didn't set up:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Uptime monitoring (external service)&lt;/li&gt;
&lt;li&gt;Error tracking (external service)&lt;/li&gt;
&lt;li&gt;Analytics (external service)&lt;/li&gt;
&lt;li&gt;Log management (PM2 + logrotate, or external service)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That's 6 things on your server and 4 external services for a single Next.js app.&lt;/p&gt;

&lt;h2&gt;
  
  
  Or: Skip All of That
&lt;/h2&gt;

&lt;p&gt;The manual approach works. But there's a reason deployment platforms exist.&lt;/p&gt;

&lt;p&gt;If you want the &lt;code&gt;git push&lt;/code&gt; workflow without managing Nginx, PM2, Certbot, deploy scripts, and GitHub Actions yourself:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Vercel&lt;/strong&gt; is the obvious choice for Next.js. They built the framework. Free tier is generous. Costs scale fast with traffic.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Coolify&lt;/strong&gt; is open-source, self-hosted. Handles Docker, Traefik proxy, SSL, and git-push deploys. Good community.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dokploy&lt;/strong&gt; is another open-source option, simpler than Coolify, focused on minimal configuration.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kamal&lt;/strong&gt; (from the Rails team) deploys Docker containers to any server over SSH. Minimal abstraction.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Temps&lt;/strong&gt; is what I build. Single Rust binary that handles deployments plus analytics, error tracking, uptime monitoring, and session replay. One tool instead of 10. Smaller community, dashboard isn't as polished as Vercel's. Open source and free to self-host.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The point of this tutorial isn't to talk you out of doing it manually. It's to make sure you know what "deploying to production" actually involves, so you can make an informed choice.&lt;/p&gt;

</description>
      <category>nextjs</category>
      <category>deployment</category>
      <category>devops</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>I built an open-source Vercel alternative in Rust — here's what I learned</title>
      <dc:creator>DAVID VIEJO</dc:creator>
      <pubDate>Wed, 18 Feb 2026 08:54:40 +0000</pubDate>
      <link>https://forem.com/david_viejo_4d48fdfa7cfff/i-built-an-open-source-vercel-alternative-in-rust-heres-what-i-learned-3oel</link>
      <guid>https://forem.com/david_viejo_4d48fdfa7cfff/i-built-an-open-source-vercel-alternative-in-rust-heres-what-i-learned-3oel</guid>
      <description>&lt;p&gt;I come from a DevOps and blockchain background. I've spent years managing infrastructure, wrangling containers, and thinking about how systems should be architected. So when I started shipping web apps and saw what developers were paying for deployment platforms, something felt off.&lt;/p&gt;

&lt;p&gt;Vercel's DX is incredible — I won't pretend otherwise. &lt;code&gt;git push&lt;/code&gt; and your app is live. But then you look at the bill: $20/seat/month. Bandwidth overages. And that's just hosting. You still need Sentry for error tracking ($26/mo), something like PostHog for session replay and analytics, an uptime monitoring tool, maybe a transactional email service. Suddenly you're juggling six SaaS subscriptions for what is fundamentally one job: running your app and knowing what's happening inside it.&lt;/p&gt;

&lt;p&gt;As someone who's managed infrastructure professionally, I kept thinking: all of this can run on a single $20 VPS. The data is just HTTP requests, error payloads, and time-series metrics. There's no technical reason this needs to be six separate services.&lt;/p&gt;

&lt;p&gt;So I built &lt;a href="https://github.com/gotempsh/temps" rel="noopener noreferrer"&gt;Temps&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Temps, in one sentence
&lt;/h2&gt;

&lt;p&gt;An open-source, self-hosted deployment platform with built-in analytics, error tracking, session replay, uptime monitoring, and transactional email. Runs on any VPS. Dual-licensed under MIT and Apache 2.0.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://temps.sh/deploy.sh | sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's the entire install. One command, on any Linux server. From bare server to first deployment in under 3 minutes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Rust
&lt;/h2&gt;

&lt;p&gt;I didn't start with Rust. The first prototype was Node.js. It worked, but the resource footprint was brutal — the deployment server itself was eating 800MB of RAM just idling. When the thing that &lt;em&gt;deploys your apps&lt;/em&gt; needs more resources than the apps it's deploying, something is wrong.&lt;/p&gt;

&lt;p&gt;Rust brought that down dramatically. But the real win was &lt;a href="https://github.com/cloudflare/pingora" rel="noopener noreferrer"&gt;Cloudflare Pingora&lt;/a&gt; — their open-source proxy engine. Pingora handles reverse proxying, TLS termination (with dynamic SNI-based certificate loading), HTTP/2, and connection management. Building on top of it meant I got battle-tested networking code from a company that handles a significant chunk of internet traffic, instead of writing my own proxy from scratch.&lt;/p&gt;

&lt;p&gt;The stack ended up being:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Rust&lt;/strong&gt; — 51 workspace crates covering the entire platform&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Axum&lt;/strong&gt; — HTTP framework for the API&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sea-ORM&lt;/strong&gt; — database access layer&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pingora&lt;/strong&gt; — Cloudflare's proxy engine for reverse proxying and TLS&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bollard&lt;/strong&gt; — Docker API client for container management&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PostgreSQL + TimescaleDB&lt;/strong&gt; — app data + time-series analytics/metrics&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Everything runs as a single binary. No Kubernetes. No microservices. One process that handles deployments, proxying, analytics ingestion, error collection, monitoring, email, and more. My DevOps background made me appreciate this kind of simplicity — fewer moving parts means fewer things to debug at 3am.&lt;/p&gt;

&lt;h2&gt;
  
  
  The hard problems nobody warns you about
&lt;/h2&gt;

&lt;p&gt;Building a deployment platform sounds straightforward until you actually try it. Here's what surprised me.&lt;/p&gt;

&lt;h3&gt;
  
  
  Zero-downtime deployments
&lt;/h3&gt;

&lt;p&gt;The naive approach — stop old container, start new one — creates a gap. Even a 2-second gap means dropped requests and angry users.&lt;/p&gt;

&lt;p&gt;Temps uses a blue-green deployment pattern:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Build&lt;/strong&gt; the new container image&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deploy and health check&lt;/strong&gt; the new container &lt;em&gt;alongside&lt;/em&gt; the old one (HTTP health checks with a configurable timeout, up to 300 seconds)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Shift traffic&lt;/strong&gt; to the new container once health checks pass&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tear down&lt;/strong&gt; the old container only after the new one is confirmed healthy&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If the new container fails health checks or crashes, the old container stays running and the deployment is marked as failed. No downtime.&lt;/p&gt;

&lt;h3&gt;
  
  
  Framework auto-detection is a rabbit hole
&lt;/h3&gt;

&lt;p&gt;"Just detect the framework from the project files" sounds simple. In practice:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A project with both &lt;code&gt;next.config.js&lt;/code&gt; and a &lt;code&gt;Dockerfile&lt;/code&gt; — which one wins?&lt;/li&gt;
&lt;li&gt;A Python project with &lt;code&gt;requirements.txt&lt;/code&gt;, &lt;code&gt;Pipfile&lt;/code&gt;, AND &lt;code&gt;pyproject.toml&lt;/code&gt; — which dependency manager?&lt;/li&gt;
&lt;li&gt;A Node.js project — is it Next.js, Vite, Nuxt, Remix, Astro, NestJS, or plain Express?&lt;/li&gt;
&lt;li&gt;A monorepo with 4 different frameworks in subdirectories&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I built a detection system that reads &lt;code&gt;package.json&lt;/code&gt; dependencies, checks for framework-specific config files, and detects package managers from lock files (npm, yarn, pnpm, bun). It handles Next.js, Vite, Astro, Nuxt, Remix, NestJS, Vue, Express, Docusaurus, CRA, Rsbuild, Python, Go, Rust, Java, .NET, and anything with a Dockerfile. The Dockerfile always wins if present.&lt;/p&gt;

&lt;p&gt;Each detected preset generates a Dockerfile automatically. The result: most projects deploy with zero configuration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;bunx @temps-sdk/cli deploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No &lt;code&gt;temps.json&lt;/code&gt;. No &lt;code&gt;temps.yaml&lt;/code&gt;. No build configuration file. It just figures it out.&lt;/p&gt;

&lt;h3&gt;
  
  
  Sentry-compatible error tracking
&lt;/h3&gt;

&lt;p&gt;I didn't want to build a toy error tracker. I wanted something you could actually use in production instead of Sentry.&lt;/p&gt;

&lt;p&gt;The key decision: &lt;strong&gt;make it Sentry-compatible at the protocol level.&lt;/strong&gt; Temps implements the Sentry envelope format — it parses events, transactions, sessions, and spans using &lt;code&gt;relay-event-schema&lt;/code&gt; (Sentry's own Rust types). If you're already using &lt;code&gt;@sentry/nextjs&lt;/code&gt; or &lt;code&gt;sentry-sdk&lt;/code&gt; for Python, you change one line — the DSN endpoint — and your errors flow into Temps instead.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Before&lt;/span&gt;
&lt;span class="nx"&gt;Sentry&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;init&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;dsn&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://abc@sentry.io/123&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// After&lt;/span&gt;
&lt;span class="nx"&gt;Sentry&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;init&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;dsn&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://abc@your-server.com/123&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Same SDK. Same error grouping. Same stack traces. Source map support included. Zero per-event fees.&lt;/p&gt;

&lt;p&gt;I'd rather be compatible with the ecosystem than force people to learn a new tool.&lt;/p&gt;

&lt;h3&gt;
  
  
  Session replay with rrweb
&lt;/h3&gt;

&lt;p&gt;The session replay feature uses &lt;a href="https://www.rrweb.io/" rel="noopener noreferrer"&gt;rrweb&lt;/a&gt; — the same recording library used by PostHog, LogRocket, and others. The React SDK (&lt;code&gt;@temps-sdk/react-analytics&lt;/code&gt;) records DOM mutations and user interactions on the client, compresses them with zlib, and sends them to Temps where they're stored alongside the rest of your analytics data.&lt;/p&gt;

&lt;p&gt;You can watch real user sessions directly in the Temps dashboard, correlated with errors, page views, and performance data. No separate session replay subscription needed.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I got wrong
&lt;/h2&gt;

&lt;p&gt;I'll save you the hero narrative. I made plenty of mistakes building this solo.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I underestimated managed databases.&lt;/strong&gt; The first version required you to set up your own PostgreSQL and Redis. Nobody wanted to do that. Temps now provisions Postgres, Redis, S3 (via RustFS), and MongoDB alongside your apps — handles creation, backups, and teardown.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The first CLI was overengineered.&lt;/strong&gt; It had too many flags and options. I rewrote it to have sensible defaults for everything. Now the most common flow is two commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;bunx @temps-sdk/cli init
bunx @temps-sdk/cli deploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;I tried to build a dashboard before the CLI was solid.&lt;/strong&gt; The dashboard is nice for monitoring — it has a web terminal (xterm.js), a code editor (Monaco), charts (Recharts), and the rrweb session replay player. But engineers live in the terminal. Getting the CLI experience right first was the correct order of operations — I just didn't do it in that order.&lt;/p&gt;

&lt;h2&gt;
  
  
  What surprised me: the MCP server
&lt;/h2&gt;

&lt;p&gt;One thing I didn't plan from the start but ended up building: a &lt;a href="https://modelcontextprotocol.io/" rel="noopener noreferrer"&gt;Model Context Protocol&lt;/a&gt; server (&lt;code&gt;@temps-sdk/mcp&lt;/code&gt;). MCP is the standard that lets AI assistants interact with external tools.&lt;/p&gt;

&lt;p&gt;With the Temps MCP server, an AI agent like Claude can deploy your apps, check deployment status, and manage your infrastructure through natural language. You add it to your Claude Desktop config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"temps"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"npx"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"@temps-sdk/mcp"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And now your AI assistant can talk to your deployment platform directly. It's a small thing, but it fits a pattern I believe in: meet developers where they already work. If that's the terminal, build a great CLI. If that's an AI assistant, build an MCP server.&lt;/p&gt;

&lt;h2&gt;
  
  
  The economics
&lt;/h2&gt;

&lt;p&gt;Here's the math that motivated this whole project — what a typical developer or small team pays to run production apps:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;What you get with Temps&lt;/th&gt;
&lt;th&gt;Instead of paying for&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Git deployments + preview URLs&lt;/td&gt;
&lt;td&gt;Vercel / Netlify ($20+/mo)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Web analytics + funnels&lt;/td&gt;
&lt;td&gt;PostHog / Plausible ($0-450/mo)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Session replay&lt;/td&gt;
&lt;td&gt;PostHog / FullStory ($0-2000/mo)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Error tracking (Sentry-compatible)&lt;/td&gt;
&lt;td&gt;Sentry ($26+/mo)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Uptime monitoring + status pages&lt;/td&gt;
&lt;td&gt;Better Uptime / Pingdom ($20+/mo)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Managed Postgres/Redis/S3/MongoDB&lt;/td&gt;
&lt;td&gt;AWS RDS / ElastiCache ($50+/mo)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Transactional email + DKIM&lt;/td&gt;
&lt;td&gt;Resend / SendGrid ($20-100/mo)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Request logs + proxy&lt;/td&gt;
&lt;td&gt;Cloudflare ($0-200/mo)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;KV store + blob storage&lt;/td&gt;
&lt;td&gt;Vercel KV / S3 ($0-50/mo)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Total with Temps&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$0 (self-hosted on your VPS)&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;As an indie, that difference is real money. It's the difference between burning runway and not.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Temps supports today
&lt;/h2&gt;

&lt;p&gt;To be concrete about where the project is — this is a 51-crate Rust workspace, not a weekend project:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Frameworks:&lt;/strong&gt; Next.js, Vite, Astro, Nuxt, Remix, NestJS, Vue, Express, Docusaurus, Python, Go, Rust, Java, .NET, and anything with a Dockerfile.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deployment:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Git push deployments (GitHub and GitLab)&lt;/li&gt;
&lt;li&gt;Preview deployments per branch/PR&lt;/li&gt;
&lt;li&gt;Zero-downtime blue-green deployments&lt;/li&gt;
&lt;li&gt;Automatic SSL via Let's Encrypt (HTTP-01 and DNS-01)&lt;/li&gt;
&lt;li&gt;Custom domains with automatic TLS&lt;/li&gt;
&lt;li&gt;Environment variables and secrets (AES-256 encrypted at rest)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Built-in observability:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Web analytics with funnels and visitor tracking&lt;/li&gt;
&lt;li&gt;Session replay (rrweb-based)&lt;/li&gt;
&lt;li&gt;Error tracking (Sentry-compatible — same SDK, change one line)&lt;/li&gt;
&lt;li&gt;Uptime monitoring with alerts (email, Slack, webhooks)&lt;/li&gt;
&lt;li&gt;Request-level logging (method, path, status, response time)&lt;/li&gt;
&lt;li&gt;Performance tracking (Web Vitals)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Infrastructure:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Managed PostgreSQL, Redis, S3 (RustFS), and MongoDB&lt;/li&gt;
&lt;li&gt;KV store (&lt;code&gt;@temps-sdk/kv&lt;/code&gt; — Redis-like API)&lt;/li&gt;
&lt;li&gt;Blob storage (&lt;code&gt;@temps-sdk/blob&lt;/code&gt; — S3-compatible)&lt;/li&gt;
&lt;li&gt;Transactional email with DKIM verification&lt;/li&gt;
&lt;li&gt;Vulnerability scanning (Trivy-based)&lt;/li&gt;
&lt;li&gt;Status pages with incident management&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Developer tools:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;MCP server for AI agents (&lt;code&gt;@temps-sdk/mcp&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;React analytics SDK (&lt;code&gt;@temps-sdk/react-analytics&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Node.js SDK with Sentry-compatible error tracking (&lt;code&gt;@temps-sdk/node-sdk&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;TypeScript CLI (&lt;code&gt;bunx @temps-sdk/cli&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Web dashboard with terminal, code editor, and session replay player&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Infrastructure:&lt;/strong&gt; Runs on any Linux VPS — AWS, GCP, Azure, DigitalOcean, Hetzner, your own hardware.&lt;/p&gt;

&lt;h2&gt;
  
  
  Who should NOT use Temps
&lt;/h2&gt;

&lt;p&gt;I believe in being honest about trade-offs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;If you're on Vercel's free tier&lt;/strong&gt; — Vercel's free tier is genuinely great. Temps doesn't make sense until you're paying.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;If you need edge computing&lt;/strong&gt; — Temps runs on your servers, not a global edge network. If sub-50ms latency from every continent matters, Vercel or Cloudflare is better.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;If you want zero ops&lt;/strong&gt; — Temps is self-hosted. It's dramatically simpler than raw Docker or Kubernetes, but it's not zero-ops. You're still responsible for a server.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;If cost isn't a concern&lt;/strong&gt; — If you have the budget, Vercel's ecosystem and managed infrastructure is hard to beat.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What I'd tell someone building an open-source tool solo
&lt;/h2&gt;

&lt;p&gt;A few things I've learned that I wish someone told me:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Compatibility beats originality.&lt;/strong&gt; Making the error tracking Sentry-compatible (using their actual &lt;code&gt;relay-event-schema&lt;/code&gt; types) instead of inventing a new protocol was the single best technical decision. Users can try it with a one-line change and zero risk.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The install experience IS the product.&lt;/strong&gt; If your open-source tool takes more than 5 minutes to set up, most people will never try it. The one-liner install took an unreasonable amount of engineering effort — auto-detecting OS, architecture, setting up services, configuring PostgreSQL with TimescaleDB, initializing encryption keys — but it's worth it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Don't build for everyone.&lt;/strong&gt; Temps is for developers and small teams who are paying for hosting and want to own their infrastructure without the DevOps overhead. That's a specific group, and that's fine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Build for the workflow, not just the feature.&lt;/strong&gt; Adding the MCP server wasn't on my roadmap. But developers are increasingly working through AI assistants, and if Temps can be part of that workflow natively, it removes friction. Same logic applies to the CLI, the SDKs, the Sentry compatibility. Meet people where they are.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;

&lt;p&gt;If any of this resonates:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://temps.sh/deploy.sh | sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The CLI is free forever. The &lt;a href="https://github.com/gotempsh/temps" rel="noopener noreferrer"&gt;source code is on GitHub&lt;/a&gt; — 51 Rust crates, dual-licensed MIT/Apache 2.0.&lt;/p&gt;

&lt;p&gt;If you run into issues or want to chat, the &lt;a href="https://discord.gg/temps" rel="noopener noreferrer"&gt;Discord&lt;/a&gt; is active and I'm usually around.&lt;/p&gt;

&lt;p&gt;Temps isn't perfect — it's not. But I think the idea that your deployment platform should include observability, email, storage, and AI integration by default, at no extra cost, on infrastructure you control, is the right direction. And I'd rather build that in the open than behind a paywall.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;If you've dealt with SaaS cost sprawl or have opinions on self-hosted vs managed, I'd love to hear your take in the comments.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>devops</category>
      <category>opensource</category>
      <category>rust</category>
    </item>
  </channel>
</rss>
