<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Brus-Nockk</title>
    <description>The latest articles on Forem by Brus-Nockk (@brusnockk).</description>
    <link>https://forem.com/brusnockk</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F212532%2F941a0096-3d20-4f95-acb5-5b2b2d781d96.png</url>
      <title>Forem: Brus-Nockk</title>
      <link>https://forem.com/brusnockk</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/brusnockk"/>
    <language>en</language>
    <item>
      <title>How I Built a CLI Tool That Generates and Manages Its Own Infrastructure</title>
      <dc:creator>Brus-Nockk</dc:creator>
      <pubDate>Wed, 06 May 2026 20:48:25 +0000</pubDate>
      <link>https://forem.com/brusnockk/how-i-built-a-cli-tool-that-generates-and-manages-its-own-infrastructure-56oa</link>
      <guid>https://forem.com/brusnockk/how-i-built-a-cli-tool-that-generates-and-manages-its-own-infrastructure-56oa</guid>
      <description>&lt;p&gt;&lt;em&gt;A plain-English walkthrough of building SwiftDeploy — a declarative deployment tool in Go and Python&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;Imagine handing someone a single index card and saying: everything you need to know about how this system runs is on this card. No digging through config files, no cross-referencing documentation, no wondering whether what's on disk matches what's actually running. Just the card.&lt;/p&gt;

&lt;p&gt;That's the idea behind SwiftDeploy. You write one file — &lt;code&gt;manifest.yaml&lt;/code&gt; — and a CLI tool called &lt;code&gt;swiftdeploy&lt;/code&gt; reads it and generates everything else: the Nginx configuration, the Docker Compose file, the running containers. If you delete the generated files and run &lt;code&gt;swiftdeploy init&lt;/code&gt; again, the stack rebuilds identically. The manifest is the only thing that matters.&lt;/p&gt;

&lt;p&gt;Here's how every piece of it works.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why this matters
&lt;/h2&gt;

&lt;p&gt;Most deployment setups have a coordination problem. The Nginx config lives in one place. The Docker Compose file lives in another. The environment variables live somewhere else. They're all supposed to agree on the same port numbers, the same network name, the same image — but nothing enforces that. You change one and forget to update another, and the stack breaks in a way that takes an hour to trace.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Config drift&lt;/strong&gt; — when multiple config files that are supposed to agree on the same values silently fall out of sync over time.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;SwiftDeploy solves this by having only one file that a human ever edits. Everything else is generated from it. You can't have drift between files that are all derived from the same source.&lt;/p&gt;




&lt;h2&gt;
  
  
  The architecture in one sentence
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;swiftdeploy&lt;/code&gt; reads &lt;code&gt;manifest.yaml&lt;/code&gt;, renders Jinja2 templates into &lt;code&gt;nginx.conf&lt;/code&gt; and &lt;code&gt;docker-compose.yml&lt;/code&gt;, brings up a Go HTTP service and an Nginx reverse proxy as Docker containers, and manages their lifecycle through five subcommands.&lt;/p&gt;

&lt;p&gt;The CLI never sits inside a running container — it runs on the host and talks to Docker from the outside. A bug in the CLI cannot take down the running stack.&lt;/p&gt;




&lt;h2&gt;
  
  
  Piece 1: The manifest — one file to rule them all
&lt;/h2&gt;

&lt;p&gt;Everything starts here. &lt;code&gt;manifest.yaml&lt;/code&gt; describes the entire deployment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;swift-deploy-1-node:latest&lt;/span&gt;
  &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3000&lt;/span&gt;
  &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1.0.0"&lt;/span&gt;

&lt;span class="na"&gt;nginx&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
  &lt;span class="na"&gt;proxy_timeout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;30&lt;/span&gt;
  &lt;span class="na"&gt;contact&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ops@swiftdeploy.internal"&lt;/span&gt;

&lt;span class="na"&gt;network&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;swiftdeploy-net&lt;/span&gt;
  &lt;span class="na"&gt;driver_type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bridge&lt;/span&gt;

&lt;span class="na"&gt;mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;stable&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The CLI reads this file, resolves all values, and uses them as variables when rendering config templates. Nothing in the generated files is hardcoded.&lt;/p&gt;

&lt;p&gt;One design rule that matters: the manifest is immutable during normal operations. The only exception is the &lt;code&gt;mode&lt;/code&gt; field, which &lt;code&gt;swiftdeploy promote&lt;/code&gt; updates in-place when switching between deployment modes. Every other subcommand reads the manifest and generates from it — never writes back to it.&lt;/p&gt;

&lt;p&gt;This also comes with a layered override system. You can pass flags like &lt;code&gt;--nginx.port=9090&lt;/code&gt; at invocation time, and they take effect for that session only without touching the file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;hardcoded defaults → manifest.yaml → CLI flags (highest priority)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the same precedence pattern Docker and Kubernetes use: predictable, auditable, no surprises.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why flag overrides exist
&lt;/h2&gt;

&lt;p&gt;The manifest is the source of truth for &lt;em&gt;stored&lt;/em&gt; configuration — what the system looks like by default, in its resting state. But deployment is rarely one-size-fits-all.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Twelve-factor app&lt;/strong&gt; — a methodology for building software that runs cleanly in any environment. One of its core principles: configuration that varies between environments (ports, credentials, endpoints) should come from the environment, not from the codebase.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Consider what changes across environments without the flag system:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A developer running locally might need a different port because &lt;code&gt;8080&lt;/code&gt; is already taken by another service&lt;/li&gt;
&lt;li&gt;A CI pipeline might want to override the contact email to point to an automated alert channel&lt;/li&gt;
&lt;li&gt;A staging environment might need a longer &lt;code&gt;proxy_timeout&lt;/code&gt; because its upstream dependencies are slower&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without flags, each of these would require editing &lt;code&gt;manifest.yaml&lt;/code&gt; — which means either maintaining separate manifest files per environment, or making changes you then have to remember to revert. Both are sources of mistakes.&lt;/p&gt;

&lt;p&gt;With flags, &lt;code&gt;manifest.yaml&lt;/code&gt; stays clean and environment-agnostic:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# local dev — port conflict on 8080&lt;/span&gt;
swiftdeploy deploy &lt;span class="nt"&gt;--nginx&lt;/span&gt;.port&lt;span class="o"&gt;=&lt;/span&gt;9090

&lt;span class="c"&gt;# staging — slower upstreams, different contact&lt;/span&gt;
swiftdeploy deploy &lt;span class="nt"&gt;--nginx&lt;/span&gt;.proxy_timeout&lt;span class="o"&gt;=&lt;/span&gt;60 &lt;span class="nt"&gt;--nginx&lt;/span&gt;.contact&lt;span class="o"&gt;=&lt;/span&gt;staging-alerts@corp.com

&lt;span class="c"&gt;# CI — validate only, don't care about the contact field&lt;/span&gt;
swiftdeploy validate &lt;span class="nt"&gt;--services&lt;/span&gt;.image&lt;span class="o"&gt;=&lt;/span&gt;swift-deploy-1-node:ci-build-42
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The critical constraint: flags are session-only. They affect what gets generated and deployed, but they never write back to &lt;code&gt;manifest.yaml&lt;/code&gt;. The file stays as the committed, reviewable record of what the system looks like in its canonical state. Flags are the escape hatch for the edges, not a replacement for the centre.&lt;/p&gt;

&lt;p&gt;This is the same pattern Helm, Docker, and Kubernetes &lt;code&gt;kubectl&lt;/code&gt; all use — a base config with runtime overrides layered on top.&lt;/p&gt;
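&lt;p&gt;The dotted-flag syntax above can be parsed into a nested override mapping in a few lines. This is an illustrative sketch, not the article's actual parser: names are hypothetical, and values stay strings here, whereas a real CLI would also coerce types:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;def parse_overrides(args):
    """Turn ['--nginx.port=9090'] into {'nginx': {'port': '9090'}}."""
    overrides = {}
    for arg in args:
        if not (arg.startswith("--") and "=" in arg):
            continue
        dotted, value = arg[2:].split("=", 1)
        node = overrides
        *parents, leaf = dotted.split(".")
        for key in parents:
            node = node.setdefault(key, {})
        node[leaf] = value
    return overrides

parse_overrides(["--nginx.port=9090", "--nginx.contact=x@example.com"])
# {'nginx': {'port': '9090', 'contact': 'x@example.com'}}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The resulting dict has the same shape as the manifest, which is what makes the layered merge in the next section straightforward.&lt;/p&gt;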




&lt;h2&gt;
  
  
  Piece 2: The CLI and layered config resolution
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;swiftdeploy&lt;/code&gt; is a Python executable — no &lt;code&gt;.py&lt;/code&gt; extension, just a shebang line at the top (&lt;code&gt;#!/usr/bin/env python3&lt;/code&gt;) and &lt;code&gt;chmod +x&lt;/code&gt;. The OS reads the shebang and knows which interpreter to use. The language is invisible to whoever calls it.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Shebang&lt;/strong&gt; — the &lt;code&gt;#!&lt;/code&gt; line at the top of a script that tells the operating system which interpreter to use when the file is executed directly. &lt;code&gt;#!/usr/bin/env python3&lt;/code&gt; means "find Python 3 wherever it lives on this machine and use it."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It has five subcommands, each a thin wrapper that orchestrates the other pieces:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;init&lt;/code&gt; — generate configs from the manifest&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;validate&lt;/code&gt; — five pre-flight checks before anything touches Docker&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;deploy&lt;/code&gt; — init + bring the stack up + wait until healthy&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;promote&lt;/code&gt; — switch deployment mode with a rolling restart&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;teardown&lt;/code&gt; — bring everything down; &lt;code&gt;--clean&lt;/code&gt; also removes generated files&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The config resolution happens in &lt;code&gt;cli/config.py&lt;/code&gt;. It uses a deep merge:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Start from hardcoded Python defaults&lt;/li&gt;
&lt;li&gt;Merge in &lt;code&gt;manifest.yaml&lt;/code&gt; values (these win over defaults)&lt;/li&gt;
&lt;li&gt;Merge in any CLI flag overrides (these win over everything)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The result is a &lt;code&gt;ResolvedConfig&lt;/code&gt; dataclass — a clean, typed object with no missing fields that every other piece of the CLI works from.&lt;/p&gt;
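&lt;p&gt;A compact sketch of that merge order (illustrative only; the real &lt;code&gt;cli/config.py&lt;/code&gt; and its defaults will differ):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;def deep_merge(base, override):
    """Recursively merge override into base; override wins on conflict."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

DEFAULTS = {"nginx": {"port": 8080, "proxy_timeout": 30}}

def resolve(manifest, flag_overrides):
    # defaults, then manifest, then flags: later layers win
    return deep_merge(deep_merge(DEFAULTS, manifest), flag_overrides)

resolve({"nginx": {"proxy_timeout": 60}}, {"nginx": {"port": 9090}})
# {'nginx': {'port': 9090, 'proxy_timeout': 60}}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Because the merge is recursive, a flag that overrides one nested key leaves sibling keys from the manifest and the defaults untouched.&lt;/p&gt;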




&lt;h2&gt;
  
  
  Piece 3: Template rendering — generating configs, not writing them
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;nginx.conf&lt;/code&gt; and &lt;code&gt;docker-compose.yml&lt;/code&gt; are never written by hand. They are the output of rendering Jinja2 templates with the resolved config values.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Jinja2&lt;/strong&gt; — a Python templating engine. You write a file with &lt;code&gt;{{ variable }}&lt;/code&gt; placeholders, provide a dictionary of values, and Jinja2 produces the final file with all placeholders substituted. The same engine Django and Ansible use.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The templates live in &lt;code&gt;templates/&lt;/code&gt; and look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight nginx"&gt;&lt;code&gt;&lt;span class="k"&gt;upstream&lt;/span&gt; &lt;span class="s"&gt;swiftdeploy_app&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kn"&gt;server&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="err"&gt;{&lt;/span&gt; &lt;span class="kn"&gt;service_host&lt;/span&gt; &lt;span class="err"&gt;}}&lt;/span&gt;&lt;span class="p"&gt;:{&lt;/span&gt;&lt;span class="err"&gt;{&lt;/span&gt; &lt;span class="kn"&gt;service_port&lt;/span&gt; &lt;span class="err"&gt;}}&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kn"&gt;server&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kn"&gt;listen&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="err"&gt;{&lt;/span&gt; &lt;span class="kn"&gt;nginx_port&lt;/span&gt; &lt;span class="err"&gt;}}&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kn"&gt;proxy_read_timeout&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="err"&gt;{&lt;/span&gt; &lt;span class="kn"&gt;proxy_timeout&lt;/span&gt; &lt;span class="err"&gt;}}&lt;/span&gt;&lt;span class="s"&gt;s&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One deliberate constraint: templates contain no logic. No &lt;code&gt;if&lt;/code&gt; statements, no loops. All decisions happen in &lt;code&gt;cli/generator.py&lt;/code&gt; before rendering — the template receives flat, already-resolved values. Logic in templates is a maintenance trap; it belongs in code.&lt;/p&gt;

&lt;p&gt;The generator uses &lt;code&gt;StrictUndefined&lt;/code&gt; — if a variable is referenced in a template but not provided, rendering fails immediately with a clear error rather than silently producing a config with an empty field. Same philosophy as a compiler refusing to let you use an uninitialised variable.&lt;/p&gt;
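&lt;p&gt;The behaviour in isolation, using Jinja2's actual API (the template string here is just an illustration):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;from jinja2 import Environment, StrictUndefined, UndefinedError

env = Environment(undefined=StrictUndefined)
template = env.from_string("listen {{ nginx_port }};")

print(template.render(nginx_port=8080))    # listen 8080;

try:
    template.render()                      # nginx_port not provided
except UndefinedError as exc:
    print("render refused:", exc)          # fails loudly, no empty field
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With the default &lt;code&gt;Undefined&lt;/code&gt; behaviour, the second render would quietly produce &lt;code&gt;listen ;&lt;/code&gt; and the error would surface much later, when Nginx rejects the config.&lt;/p&gt;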

&lt;p&gt;Running &lt;code&gt;swiftdeploy init&lt;/code&gt; twice on the same manifest produces byte-identical output. This property — called idempotency — is what ensures you can safely delete generated files at any point and cleanly regenerate them.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Idempotent&lt;/strong&gt; — an operation you can run multiple times and always get the same result. &lt;code&gt;swiftdeploy init&lt;/code&gt; is idempotent: same manifest in, same config files out, regardless of how many times you run it.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Piece 4: The Go service — stable and canary modes
&lt;/h2&gt;

&lt;p&gt;The HTTP service is written in Go and runs the same binary in both modes. The mode is injected as an environment variable (&lt;code&gt;MODE=stable&lt;/code&gt; or &lt;code&gt;MODE=canary&lt;/code&gt;) by Docker Compose at startup.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Environment variable&lt;/strong&gt; — a named value set in a process's environment before it starts. Programs read these at startup rather than hardcoding configuration. Docker Compose injects them into containers via the &lt;code&gt;environment:&lt;/code&gt; block.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The service exposes three endpoints:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;GET /&lt;/code&gt; — welcome message with current mode, version, and server timestamp&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;GET /healthz&lt;/code&gt; — liveness check returning status and uptime in seconds&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;POST /chaos&lt;/code&gt; — simulates degraded behaviour (canary mode only)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The architecture inside the service is hexagonal — also called ports and adapters.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Hexagonal architecture (ports and adapters)&lt;/strong&gt; — a design pattern where the core business logic sits in the middle with no knowledge of the outside world. All external concerns — HTTP, databases, file systems — connect to it through defined interfaces called ports, with concrete implementations called adapters. You can swap an adapter without touching the core.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In practice this means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;internal/core/service.go&lt;/code&gt; — pure logic, zero external dependencies. Knows about modes, chaos states, responses. Knows nothing about HTTP.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;internal/core/ports.go&lt;/code&gt; — defines two interfaces: &lt;code&gt;ServicePort&lt;/code&gt; (what the HTTP layer calls in) and &lt;code&gt;ChaosStore&lt;/code&gt; (what the core calls out to for chaos state)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;internal/adapters/http/handler.go&lt;/code&gt; — translates HTTP requests into core calls, and core responses into HTTP responses&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;internal/adapters/store/memory.go&lt;/code&gt; — stores chaos state in memory, behind the &lt;code&gt;ChaosStore&lt;/code&gt; interface&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;cmd/main.go&lt;/code&gt; — the composition root; the only file allowed to know about all layers at once&lt;/li&gt;
&lt;/ul&gt;
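&lt;p&gt;In miniature, the split looks like this. The sketch is in Python for compactness even though the real service is Go, and the names only echo the article's structure, so treat it as shape, not implementation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;class ChaosStore:
    """Port: the storage interface the core calls out to."""
    def get_mode(self):
        raise NotImplementedError
    def set_mode(self, mode):
        raise NotImplementedError

class StubStore(ChaosStore):
    """Throwaway adapter standing in for internal/adapters/store."""
    def __init__(self):
        self.mode = "none"
    def get_mode(self):
        return self.mode
    def set_mode(self, mode):
        self.mode = mode

class Service:
    """Core: pure logic, no knowledge of HTTP or storage details."""
    def __init__(self, deploy_mode, store):
        self.deploy_mode = deploy_mode
        self.store = store
    def trigger_chaos(self, chaos_mode):
        # the canary-only policy lives here, not in the HTTP adapter
        if self.deploy_mode != "canary":
            raise PermissionError("chaos is canary-only")
        self.store.set_mode(chaos_mode)

# composition root: the only place that knows about every layer
service = Service(deploy_mode="stable", store=StubStore())
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;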

&lt;p&gt;The payoff: the &lt;code&gt;/chaos&lt;/code&gt; endpoint returning &lt;code&gt;403&lt;/code&gt; in stable mode is a &lt;em&gt;policy&lt;/em&gt; that lives in the core, not in the HTTP layer. The HTTP adapter just translates the domain error into a status code. Swap the HTTP adapter for a gRPC adapter and the policy travels with it automatically.&lt;/p&gt;




&lt;h2&gt;
  
  
  Piece 5: Chaos engineering — simulating failure on demand
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;POST /chaos&lt;/code&gt; endpoint is only active in canary mode. It accepts a JSON body and puts the service into a degraded state:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"mode"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"slow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nl"&gt;"duration"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;sleep&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;seconds&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;before&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;every&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;response&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"mode"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"error"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="nl"&gt;"rate"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;0.5&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;return&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;500&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;on&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;~&lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="err"&gt;%&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;of&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;requests&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"mode"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"recover"&lt;/span&gt;&lt;span class="w"&gt;                 &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;cancel&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;any&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;active&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;chaos&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Canary mode&lt;/strong&gt; — a deployment strategy where a new or experimental version of a service runs alongside the stable version, receiving a subset of traffic. The name comes from "canary in a coal mine" — if something goes wrong, the canary version shows it first before it affects everyone. Here, canary mode is the same binary as stable but with chaos capabilities unlocked.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Chaos state is held in memory in &lt;code&gt;MemoryChaosStore&lt;/code&gt;, protected by a &lt;code&gt;sync.RWMutex&lt;/code&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Mutex (mutual exclusion lock)&lt;/strong&gt; — a mechanism that ensures only one goroutine can write to a shared value at a time, while allowing many to read simultaneously. Without it, two concurrent requests modifying chaos state could corrupt each other's writes — a bug called a race condition.&lt;/p&gt;
&lt;/blockquote&gt;
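&lt;p&gt;The pattern in a few lines, with Python's &lt;code&gt;threading.Lock&lt;/code&gt; standing in for Go's &lt;code&gt;sync.RWMutex&lt;/code&gt; (Python's standard library has no reader-writer lock, but the shape is the same):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import threading

class MemoryChaosStore:
    """Chaos state behind a lock so concurrent requests cannot
    interleave their writes (a sketch, not the real Go store)."""
    def __init__(self):
        self._lock = threading.Lock()
        self._state = {"mode": "none"}

    def set_chaos(self, **state):
        with self._lock:          # one writer at a time
            self._state = dict(state)

    def get_chaos(self):
        with self._lock:          # never observe a half-written update
            return dict(self._state)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Because chaos state is multiple fields (mode plus a rate or duration), the lock guarantees a reader always sees one complete update, never a mix of two.&lt;/p&gt;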

&lt;p&gt;One intentional design choice: &lt;code&gt;/healthz&lt;/code&gt; is never wrapped by the chaos middleware. Chaos on health checks would cause Docker to think the container is unhealthy and restart it — not the intended behaviour. The health check must always tell the truth about liveness.&lt;/p&gt;




&lt;h2&gt;
  
  
  Piece 6: Nginx — the only public face of the stack
&lt;/h2&gt;

&lt;p&gt;Nginx sits in front of the Go service and is the only container with a port bound to the host. The service port (&lt;code&gt;3000&lt;/code&gt;) is declared with &lt;code&gt;expose:&lt;/code&gt; in Docker Compose — visible within the Docker network, never reachable from outside.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Reverse proxy&lt;/strong&gt; — a server that sits in front of your application and forwards incoming requests to it. External clients talk to the proxy, not to the application directly. This lets you add timeouts, logging, error handling, and SSL termination in one place without touching the application.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Nginx handles three things the Go service doesn't need to know about:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Custom access logs&lt;/strong&gt; — every request is written to a named volume in a specific format:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2026-05-03T10:00:00+00:00 | 200 | 0.001s | 172.18.0.2:3000 | GET / HTTP/1.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;JSON error bodies&lt;/strong&gt; — if the upstream service is unavailable, Nginx returns a structured JSON response instead of its default HTML error page:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"error"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Bad Gateway - upstream service unavailable"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"code"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"502"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"service"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"swiftdeploy-app"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"contact"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ops@swiftdeploy.internal"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Header forwarding&lt;/strong&gt; — the Go service sets &lt;code&gt;X-Mode: canary&lt;/code&gt; on every response in canary mode. Nginx passes this through to the client with &lt;code&gt;proxy_pass_header X-Mode&lt;/code&gt;. Nginx also adds &lt;code&gt;X-Deployed-By: swiftdeploy&lt;/code&gt; to every response from its side.&lt;/p&gt;

&lt;p&gt;Two things that look trivial but aren't:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;JSON error strings use ASCII only — multi-byte characters (like em dashes) in Nginx &lt;code&gt;return&lt;/code&gt; directives can cause the response body to be silently truncated&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;add_header&lt;/code&gt; directives are not inherited by named &lt;code&gt;@error&lt;/code&gt; locations — each error location repeats the header explicitly or it disappears from error responses&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Piece 7: The deploy and promote lifecycle
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;swiftdeploy deploy&lt;/code&gt; does three things in sequence:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Runs &lt;code&gt;init&lt;/code&gt; to ensure configs reflect the current manifest&lt;/li&gt;
&lt;li&gt;Runs &lt;code&gt;docker compose up -d&lt;/code&gt; to start the containers&lt;/li&gt;
&lt;li&gt;Polls &lt;code&gt;GET /healthz&lt;/code&gt; through Nginx (not directly to the service) every 2 seconds until it gets a 200, or fails after 60 seconds&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Polling through Nginx matters. External monitoring tools will also go through Nginx — if the service is up but Nginx isn't routing yet, a direct poll would give a false positive.&lt;/p&gt;
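&lt;p&gt;A minimal sketch of that wait loop (illustrative; the function name and the injectable &lt;code&gt;probe&lt;/code&gt; parameter are inventions for this example, not the real CLI's API):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import time
import urllib.request

def wait_until_healthy(url, timeout_s=60, interval_s=2, probe=None):
    """Poll until healthy, giving up after roughly timeout_s seconds.

    The default probe is an HTTP GET of `url` through Nginx, so the
    check exercises the same path external monitors will use.
    """
    def http_probe():
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.status == 200
        except OSError:
            return False          # refused or timed out: not up yet
    probe = probe or http_probe
    attempts = max(1, int(timeout_s / max(interval_s, 0.01)))
    for _ in range(attempts):
        if probe():
            return True
        time.sleep(interval_s)
    raise TimeoutError("stack not healthy after %ds" % timeout_s)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With the manifest above, the call would be &lt;code&gt;wait_until_healthy("http://localhost:8080/healthz")&lt;/code&gt;: port &lt;code&gt;8080&lt;/code&gt; is Nginx, not the service's &lt;code&gt;3000&lt;/code&gt;.&lt;/p&gt;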

&lt;p&gt;&lt;code&gt;swiftdeploy promote canary&lt;/code&gt; is more interesting:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Updates &lt;code&gt;mode: canary&lt;/code&gt; in &lt;code&gt;manifest.yaml&lt;/code&gt; in-place&lt;/li&gt;
&lt;li&gt;Regenerates &lt;code&gt;docker-compose.yml&lt;/code&gt; only — Nginx config is mode-agnostic&lt;/li&gt;
&lt;li&gt;Restarts the &lt;code&gt;app&lt;/code&gt; container only, with &lt;code&gt;--no-deps --force-recreate&lt;/code&gt; — Nginx is never touched&lt;/li&gt;
&lt;li&gt;Confirms the switch by hitting &lt;code&gt;GET /&lt;/code&gt; and verifying the response body says &lt;code&gt;"mode": "canary"&lt;/code&gt; and the &lt;code&gt;X-Mode: canary&lt;/code&gt; header is present&lt;/li&gt;
&lt;li&gt;If the restart fails, rolls &lt;code&gt;manifest.yaml&lt;/code&gt; back to &lt;code&gt;stable&lt;/code&gt; before returning an error&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The in-place manifest update uses &lt;code&gt;ruamel.yaml&lt;/code&gt; rather than PyYAML.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;YAML round-trip&lt;/strong&gt; — reading a YAML file, modifying a value in memory, and writing it back out. PyYAML does not preserve comments, key ordering, or quote styles on write — it produces a structurally correct but reformatted file. &lt;code&gt;ruamel.yaml&lt;/code&gt; preserves all of these, so the manifest looks exactly the same after a &lt;code&gt;promote&lt;/code&gt; as it did before, just with one field value changed.&lt;/p&gt;
&lt;/blockquote&gt;
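&lt;p&gt;The difference in miniature, assuming &lt;code&gt;ruamel.yaml&lt;/code&gt; is installed (the snippet mirrors the manifest's fields):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import io
from ruamel.yaml import YAML

yaml = YAML()      # round-trip mode by default: comments and order survive

source = """\
# deployment mode: stable or canary
mode: stable

nginx:
  port: 8080   # public port
"""

data = yaml.load(source)
data["mode"] = "canary"      # the single in-place edit promote performs

out = io.StringIO()
yaml.dump(data, out)
print(out.getvalue())        # same file, same comments, one value changed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Run the same round trip through PyYAML's &lt;code&gt;safe_load&lt;/code&gt;/&lt;code&gt;dump&lt;/code&gt; and both comments vanish, which would turn every &lt;code&gt;promote&lt;/code&gt; into a noisy diff in code review.&lt;/p&gt;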




&lt;h2&gt;
  
  
  Piece 8: Security by default
&lt;/h2&gt;

&lt;p&gt;A few practices baked in rather than bolted on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Go service runs as a non-root user inside the container (&lt;code&gt;uid 1001&lt;/code&gt;). The log directory is pre-owned by that user in the Dockerfile &lt;em&gt;before&lt;/em&gt; the named volume mounts over it — without this, Docker would create the volume owned by root and the non-root process couldn't write logs.&lt;/li&gt;
&lt;li&gt;Both containers use &lt;code&gt;cap_drop: ALL&lt;/code&gt; and add back only the specific Linux capabilities each needs. Nginx needs &lt;code&gt;SETUID&lt;/code&gt;/&lt;code&gt;SETGID&lt;/code&gt; to drop from its master process to worker processes; the Go service needs none.&lt;/li&gt;
&lt;li&gt;The Docker image is built in two stages: a full Go toolchain in the builder stage, and a bare Alpine base in the runtime stage. Only the compiled binary transfers between them. Final image size: ~12MB, well under the 300MB limit.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Multi-stage Docker build&lt;/strong&gt; — a Dockerfile technique where you use one image (the builder) to compile your code, then copy only the output into a second, minimal image (the runtime). The build tools, source code, and intermediate files never land in the final image.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  The ultimate test of the architecture
&lt;/h2&gt;

&lt;p&gt;A core constraint of this system is absolute reproducibility: you must be able to delete all generated files, re-run &lt;code&gt;swiftdeploy init&lt;/code&gt;, and verify they regenerate exactly as they were.&lt;/p&gt;

&lt;p&gt;Every design decision above serves this constraint:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Templates are committed; generated files are not (they go in &lt;code&gt;.gitignore&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;init&lt;/code&gt; is idempotent — same manifest always produces identical output&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ruamel.yaml&lt;/code&gt; keeps the manifest intact through &lt;code&gt;promote&lt;/code&gt; so a subsequent &lt;code&gt;init&lt;/code&gt; still has a valid source to read from&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;StrictUndefined&lt;/code&gt; in Jinja2 means a missing template variable fails loudly during &lt;code&gt;init&lt;/code&gt;, not silently at runtime&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If &lt;code&gt;swiftdeploy init&lt;/code&gt; breaks, the entire stack breaks. That single command is the load-bearing piece.&lt;/p&gt;
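&lt;p&gt;That property is also cheap to verify mechanically. A sketch of such a check (the file names come from the article; the &lt;code&gt;regen&lt;/code&gt; callback is an invention for this example, and in real use it would shell out to &lt;code&gt;swiftdeploy init&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import hashlib
import pathlib

GENERATED = ("nginx.conf", "docker-compose.yml")

def digests(directory):
    d = pathlib.Path(directory)
    return {name: hashlib.sha256((d / name).read_bytes()).hexdigest()
            for name in GENERATED}

def check_reproducible(directory, regen):
    """Hash the generated files, delete them, regenerate, compare."""
    before = digests(directory)
    for name in GENERATED:
        (pathlib.Path(directory) / name).unlink()
    regen()      # in real use: subprocess.run(["swiftdeploy", "init"])
    return digests(directory) == before
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If this check ever returns &lt;code&gt;False&lt;/code&gt;, something other than the manifest has leaked into the generated output, and the single-source-of-truth guarantee is broken.&lt;/p&gt;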

</description>
      <category>cli</category>
      <category>go</category>
      <category>infrastructure</category>
      <category>python</category>
    </item>
  </channel>
</rss>
