<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: ikaro</title>
    <description>The latest articles on Forem by ikaro (@ikaro1192).</description>
    <link>https://forem.com/ikaro1192</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3866225%2F97d1c406-a132-446a-b71a-06e6add34cf7.jpg</url>
      <title>Forem: ikaro</title>
      <link>https://forem.com/ikaro1192</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/ikaro1192"/>
    <language>en</language>
    <item>
      <title>Why I stopped writing Serverspec specs by hand</title>
      <dc:creator>ikaro</dc:creator>
      <pubDate>Wed, 06 May 2026 02:29:13 +0000</pubDate>
      <link>https://forem.com/ikaro1192/why-i-stopped-writing-serverspec-specs-by-hand-9b2</link>
      <guid>https://forem.com/ikaro1192/why-i-stopped-writing-serverspec-specs-by-hand-9b2</guid>
      <description>&lt;h2&gt;
  
  
  Every Serverspec suite I inherit looks like a different language
&lt;/h2&gt;

&lt;p&gt;I've been using &lt;a href="https://serverspec.org/" rel="noopener noreferrer"&gt;Serverspec&lt;/a&gt; for years and I still reach for it when I want to verify a host actually looks the way I asked it to. The DSL is great. The way it falls apart over time is not.&lt;/p&gt;

&lt;p&gt;The pattern goes like this. The suite starts small. Someone wants to share checks across roles, so a helper module appears. Someone else needs a custom matcher, then a &lt;code&gt;define_method&lt;/code&gt; to generate similar matchers. The team decides the inventory should live "in the specs themselves", so a &lt;code&gt;case node_role when ...&lt;/code&gt; shows up. A year later &lt;code&gt;spec_helper.rb&lt;/code&gt; is hundreds of lines long, has its own unit tests, and a single &lt;code&gt;it { ... }&lt;/code&gt; line means tracing two helpers and a metaprogrammed method to figure out what is actually being asserted.&lt;/p&gt;

&lt;p&gt;Nothing in there is malicious. The problem is structural: Serverspec specs are Ruby files. &lt;code&gt;describe&lt;/code&gt; and &lt;code&gt;it&lt;/code&gt; are just methods. Once you've put a DSL on top of a general-purpose language, declarative checks and arbitrary procedural code become syntactically indistinguishable, and the procedural side compounds faster on a long timeline. After a couple of years the suite is the original author's private dialect, and nobody else fully knows what it covers.&lt;/p&gt;
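To make that concrete, here is a toy DSL in plain Ruby (deliberately not RSpec or Serverspec internals) showing why declarative-looking blocks and procedural code are the same thing to the interpreter:

```ruby
# Toy DSL, just enough to show that "describe" and "it" are ordinary
# method calls with no special status.
CHECKS = []

def describe(subject)
  @current_subject = subject
  yield
end

def it
  CHECKS.push("#{@current_subject}: #{yield}")
end

# Looks declarative...
describe('nginx') { it { 'should be running' } }

# ...and so does this, even though it is procedural code generating checks:
describe('db') do
  3.times { |i| it { "generated check #{i}" } }
end
```

Both `describe` blocks look equally declarative at a glance; only reading the bodies reveals the loop.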

&lt;p&gt;I built &lt;a href="https://github.com/ikaro1192/PanInfraSpec" rel="noopener noreferrer"&gt;PanInfraSpec&lt;/a&gt; to keep my hands off that loop. It's a generator that takes &lt;a href="https://dhall-lang.org/" rel="noopener noreferrer"&gt;Dhall&lt;/a&gt; inputs and emits Serverspec &lt;code&gt;*_spec.rb&lt;/code&gt; files. The Ruby still does the actual checking, but I no longer write it. There is no &lt;code&gt;spec_helper.rb&lt;/code&gt; to grow custom matchers in, because &lt;code&gt;spec_helper.rb&lt;/code&gt; is generated and I don't edit it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ruby won't even tell you when the spec is wrong
&lt;/h2&gt;

&lt;p&gt;There's a related, smaller failure mode. Serverspec's resource API is dynamically dispatched, which means this is a perfectly valid Ruby file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="n"&gt;describe&lt;/span&gt; &lt;span class="n"&gt;service&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'nginx'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
  &lt;span class="n"&gt;it&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;should&lt;/span&gt; &lt;span class="n"&gt;be_runnning&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;   &lt;span class="c1"&gt;# spot the typo&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Three n's. RSpec doesn't care — &lt;code&gt;be_runnning&lt;/code&gt; resolves through &lt;code&gt;method_missing&lt;/code&gt; into something that doesn't fail loudly enough, the test runs, and the suite stays green. Same story for asserting a &lt;code&gt;running&lt;/code&gt; state on a &lt;code&gt;package&lt;/code&gt; resource (packages don't have one).&lt;/p&gt;
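The mechanism is easy to reproduce in plain Ruby. This toy class is not Serverspec's actual implementation, but it shows how a `method_missing`-based API can turn a typo into a silent success:

```ruby
# Toy model of the failure mode: a resource object whose method_missing
# answers any unknown predicate instead of raising NoMethodError.
class LooseResource
  def method_missing(name, *args)
    # A dynamically dispatched API often routes unknown predicates to a
    # generic "build a check" path, so a typo becomes a brand-new check.
    "stubbed check for #{name}"
  end

  def respond_to_missing?(_name, _include_private = false)
    true
  end
end

svc = LooseResource.new
puts svc.runnning?   # three n's, no error, truthy result
```

A plain Ruby object would have raised `NoMethodError` here; the dispatch layer is what swallows the typo.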

&lt;p&gt;Same root cause: a Ruby DSL on top of Ruby has no opinion about which calls were meant to be assertions. The spec-helper sprawl is the visible symptom; this is the quiet one that ships green tests for things that aren't actually running.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Dhall pitch in 30 seconds
&lt;/h2&gt;

&lt;p&gt;If you haven't touched Dhall: imagine &lt;strong&gt;JSON with types and functions, but no I/O and no unbounded loops&lt;/strong&gt;. Every program terminates, every program type-checks before it runs. You can &lt;code&gt;dhall freeze&lt;/code&gt; imports to pin them by SHA-256. There's no &lt;code&gt;eval&lt;/code&gt;, no exceptions, no way to accidentally hit the network from a config file.&lt;/p&gt;
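A minimal standalone taste (this snippet is mine, not from the PanInfraSpec repo):

```dhall
-- A typed record and a small function; no I/O exists in the language.
let Node = { hostname : Text, tags : List Text }

let mkWeb
    : Natural -> Node
    = \(n : Natural) -> { hostname = "web0${Natural/show n}", tags = [ "frontend" ] }

in  [ mkWeb 1, mkWeb 2 ] : List Node
```

`dhall type-check` accepts this, and a typo in a field name is rejected before anything runs.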

&lt;p&gt;Things you'd normally bolt onto YAML — Helm templates, JSON Schema validation, Jinja2 — are just &lt;em&gt;language features&lt;/em&gt; here. So if your problem is "I want a typed, reusable description of my infrastructure", Dhall fits without dragging Ruby (or Python, or Go templates) along for the ride.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the tool does
&lt;/h2&gt;

&lt;p&gt;PanInfraSpec takes two Dhall files as input:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;an &lt;strong&gt;inventory&lt;/strong&gt;: &lt;code&gt;[{ hostname, ip, role, tags, customAttributes }, ...]&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;a &lt;strong&gt;plan&lt;/strong&gt;: a list of &lt;code&gt;(selector, [assertions])&lt;/code&gt; pairs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;…and emits Serverspec Ruby out the other side. You then run &lt;code&gt;bundle exec rake spec&lt;/code&gt; like you always did.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgtsorb0qhqmw089aiksc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgtsorb0qhqmw089aiksc.png" alt="A flowchart diagram illustrating the PanInfraSpec architecture: It takes typed Dhall configuration files (inventory and plan) as input, transforms them into a Generic Semantic Intermediate Representation (IR), which is then processed by a Serverspec-specific emitter to generate the final Ruby *_spec.rb files." width="800" height="47"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The IR is deliberately backend-agnostic — Serverspec is the first emitter, &lt;a href="https://www.chef.io/products/chef-inspec" rel="noopener noreferrer"&gt;InSpec&lt;/a&gt; is the next one. The architecture is borrowed from Pandoc; that's also where the "Pan" comes from.&lt;/p&gt;

&lt;h2&gt;
  
  
  A five-minute tour
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew &lt;span class="nb"&gt;install &lt;/span&gt;ikaro1192/tap/paninfraspec        &lt;span class="c"&gt;# macOS&lt;/span&gt;
nix run github:ikaro1192/PanInfraSpec &lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="nt"&gt;--help&lt;/span&gt;  &lt;span class="c"&gt;# anywhere with Nix&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;.deb&lt;/code&gt; / &lt;code&gt;.rpm&lt;/code&gt; / tarball / Docker / &lt;code&gt;cabal build&lt;/code&gt; paths are covered in &lt;a href="https://github.com/ikaro1192/PanInfraSpec/blob/main/docs/install.md" rel="noopener noreferrer"&gt;docs/install.md&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Inventory
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let I = https://raw.githubusercontent.com/ikaro1192/PanInfraSpec/main/dhall/Inventory.dhall
let none = [] : List I.CustomAttribute

in  [ { hostname = "web01", ip = Some "10.0.1.10", role = "Web",       tags = [ "frontend", "metrics" ], customAttributes = none }
    , { hostname = "db01",  ip = None Text,        role = "DBPrimary", tags = [ "metrics" ],             customAttributes = none }
    ] : List I.Node
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you're on Terraform, &lt;code&gt;--from-terraform-state&lt;/code&gt; turns &lt;code&gt;aws_instance&lt;/code&gt; resources into this list directly.&lt;/p&gt;

&lt;h3&gt;
  
  
  Plan
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let Spec = https://raw.githubusercontent.com/ikaro1192/PanInfraSpec/main/dhall/Serverspec.dhall
let Plan = https://raw.githubusercontent.com/ikaro1192/PanInfraSpec/main/dhall/Plan.dhall

let nginxSpec =
      [ Spec.package "nginx" Spec.PackageState.Installed
      , Spec.service "nginx" Spec.ServiceState.Running
      , Spec.port 80 (Spec.PortState.WithProtocol "tcp")
      ]

in  Plan.make Spec.targetBackend
      [ Plan.onAll              [ Spec.command "uname -a" (Spec.CommandState.ExitCode 0) ]
      , Plan.forRole "Web"      nginxSpec
      , Plan.forTag  "metrics"  [ Spec.port 9090 Spec.PortState.Listening ]
      ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are exactly four selectors: &lt;code&gt;onAll&lt;/code&gt;, &lt;code&gt;forRole&lt;/code&gt;, &lt;code&gt;forTag&lt;/code&gt;, &lt;code&gt;forHost&lt;/code&gt;. They stack — a node matching multiple selectors gets the union of their assertions. There's no &lt;code&gt;if&lt;/code&gt;, no &lt;code&gt;for&lt;/code&gt;, no "matching loop" you have to write.&lt;/p&gt;

&lt;h3&gt;
  
  
  Generate
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;paninfraspec-gen &lt;span class="nt"&gt;--inventory&lt;/span&gt; examples/inventory.dhall &lt;span class="nt"&gt;--plan&lt;/span&gt; examples/plan.dhall &lt;span class="se"&gt;\&lt;/span&gt;
                 &lt;span class="nt"&gt;--target&lt;/span&gt; serverspec &lt;span class="nt"&gt;--out&lt;/span&gt; /tmp/out
&lt;span class="nb"&gt;cd&lt;/span&gt; /tmp/out &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; bundle &lt;span class="nb"&gt;exec &lt;/span&gt;rake spec
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For &lt;code&gt;web01&lt;/code&gt; (Web + metrics), the emitter produces:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="nb"&gt;require&lt;/span&gt; &lt;span class="s1"&gt;'spec_helper'&lt;/span&gt;

&lt;span class="c1"&gt;# src: examples/plan.dhall — Spec.package "nginx" Spec.PackageState.Installed&lt;/span&gt;
&lt;span class="n"&gt;describe&lt;/span&gt; &lt;span class="n"&gt;package&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'nginx'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
  &lt;span class="n"&gt;it&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;should&lt;/span&gt; &lt;span class="n"&gt;be_installed&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="c1"&gt;# src: examples/plan.dhall — Spec.service "nginx" Spec.ServiceState.Running&lt;/span&gt;
&lt;span class="n"&gt;describe&lt;/span&gt; &lt;span class="n"&gt;service&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'nginx'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
  &lt;span class="n"&gt;it&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;should&lt;/span&gt; &lt;span class="n"&gt;be_running&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="c1"&gt;# src: examples/plan.dhall — Spec.port 80 (Spec.PortState.WithProtocol "tcp")&lt;/span&gt;
&lt;span class="n"&gt;describe&lt;/span&gt; &lt;span class="n"&gt;port&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;80&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
  &lt;span class="n"&gt;it&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;should&lt;/span&gt; &lt;span class="n"&gt;be_listening&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;with&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'tcp'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="c1"&gt;# src: examples/plan.dhall — Spec.port 9090 Spec.PortState.Listening&lt;/span&gt;
&lt;span class="n"&gt;describe&lt;/span&gt; &lt;span class="n"&gt;port&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;9090&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
  &lt;span class="n"&gt;it&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;should&lt;/span&gt; &lt;span class="n"&gt;be_listening&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Assertions sharing the same &lt;code&gt;(kind, primaryKey)&lt;/code&gt; get merged into a single &lt;code&gt;describe&lt;/code&gt; block. Each block carries a &lt;code&gt;# src:&lt;/code&gt; comment pointing back at the Dhall expression that produced it, so you can navigate from generated Ruby back to the source.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I actually get for the trouble
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;There is nowhere for custom Ruby to live.&lt;/strong&gt; This is the main one. The generated &lt;code&gt;&amp;lt;host&amp;gt;_spec.rb&lt;/code&gt; files are a mechanical projection of the plan. They contain no helpers, no matcher definitions, no inventory logic. The &lt;code&gt;spec_helper.rb&lt;/code&gt; is also generated, and the only thing it does is flip between the &lt;code&gt;:exec&lt;/code&gt; and &lt;code&gt;:ssh&lt;/code&gt; backends. There is no place where "we needed something custom for this role" can quietly land.&lt;/p&gt;

&lt;p&gt;If a teammate genuinely needs to extend behaviour, they do it as a typed constructor in the IR, or reach for one explicit named escape hatch (more below). Both routes are visible in code review.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The &lt;code&gt;be_runnning&lt;/code&gt; class of bug is impossible too.&lt;/strong&gt; &lt;code&gt;Spec.service "nginx" Spec.PackageState.Installed&lt;/code&gt; is a Dhall type error: &lt;code&gt;Spec.service&lt;/code&gt; wants a &lt;code&gt;ServiceState&lt;/code&gt;. The file won't pass &lt;code&gt;dhall type-check&lt;/code&gt;, so it never gets as far as generating Ruby. State values are constructors of closed unions, not strings.&lt;/p&gt;
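Using the same module import as the plan example above, the broken version looks like this:

```dhall
let Spec = https://raw.githubusercontent.com/ikaro1192/PanInfraSpec/main/dhall/Serverspec.dhall

in  Spec.service "nginx" Spec.PackageState.Installed
-- rejected by the type checker: Spec.service expects a ServiceState
```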

&lt;p&gt;&lt;strong&gt;Contradictions are caught at generation time.&lt;/strong&gt; If two assertions fight — &lt;code&gt;command "uname -a"&lt;/code&gt; asserted to exit &lt;code&gt;0&lt;/code&gt; in one place and &lt;code&gt;1&lt;/code&gt; in another — &lt;code&gt;paninfraspec-gen&lt;/code&gt; exits non-zero in CI rather than producing Ruby that's guaranteed to fail.&lt;/p&gt;

&lt;p&gt;The obvious objection to all of this is the case where the expected value depends on the host — &lt;em&gt;&lt;code&gt;innodb_buffer_pool_size&lt;/code&gt; should be 70% of physical RAM&lt;/em&gt;, that kind of thing. Goss falls back to Go templates for this; classic Serverspec falls back to arbitrary Ruby in &lt;code&gt;let(:something) { ... }&lt;/code&gt;. Both quietly reintroduce the procedural-leakage problem. PanInfraSpec has a typed sub-IR for arithmetic instead: facts are declared per host (a shell command whose stdout becomes the value) and referenced by name from typed expression constructors, no Ruby strings involved. For cases the IR hasn't grown a constructor for yet, there's a single named escape hatch where raw Ruby is allowed in exactly one field — visible in code review, can't metastasize the way a &lt;code&gt;spec_helper.rb&lt;/code&gt; does.&lt;/p&gt;
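I haven't shown the fact syntax in this post, so treat the following as a purely hypothetical sketch of the shape; every name in it is invented:

```dhall
-- HYPOTHETICAL: these names are illustrations only, not the real API.
let Fact = { name : Text, command : Text }

-- A fact: a shell command whose stdout becomes a per-host value.
let memTotalKb
    : Fact
    = { name = "mem_total_kb"
      , command = "awk '/MemTotal/ {print $2}' /proc/meminfo"
      }

-- "70% of physical RAM" as typed data, not a Ruby string:
in  { op = "percentOf", percent = 70, fact = memTotalKb.name }
```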

&lt;h2&gt;
  
  
  What you give up
&lt;/h2&gt;

&lt;p&gt;You can't drop into Ruby in the middle of a spec file. If your workflow leans on that — a custom matcher per role, a &lt;code&gt;let&lt;/code&gt; that reads &lt;code&gt;/etc/whatever&lt;/code&gt; and parses it inline — that goes away. PanInfraSpec is opinionated that the "I'll just write Ruby here" door is exactly the door long-term breakage walks in through.&lt;/p&gt;

&lt;p&gt;That's a real tradeoff. It depends on how often the Ruby flexibility is paying off versus how often you're getting bitten by the kind of bug I started this post with.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrap
&lt;/h2&gt;

&lt;p&gt;If you've ever inherited a Serverspec suite and spent the first afternoon trying to figure out what the &lt;code&gt;spec_helper.rb&lt;/code&gt; was &lt;em&gt;doing&lt;/em&gt;, this might be interesting to you. If your team has the discipline to keep that file boring forever without a tool forcing it, you may not need this.&lt;/p&gt;

&lt;p&gt;The repo is at &lt;a href="https://github.com/ikaro1192/PanInfraSpec" rel="noopener noreferrer"&gt;ikaro1192/PanInfraSpec&lt;/a&gt;, MIT-licensed. There's a working set of examples in &lt;a href="https://github.com/ikaro1192/PanInfraSpec/tree/main/examples" rel="noopener noreferrer"&gt;&lt;code&gt;examples/&lt;/code&gt;&lt;/a&gt; — copy one, point it at a real host, see how it feels.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew &lt;span class="nb"&gt;install &lt;/span&gt;ikaro1192/tap/paninfraspec
nix run github:ikaro1192/PanInfraSpec &lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="nt"&gt;--help&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Issues, stars, and "actually I think you got X wrong" comments all welcome.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>testing</category>
      <category>infrastructure</category>
      <category>dhall</category>
    </item>
    <item>
      <title>Why I Built PureMyHA: A Lightweight MySQL 8.4 HA Manager in Haskell</title>
      <dc:creator>ikaro</dc:creator>
      <pubDate>Wed, 08 Apr 2026 13:45:11 +0000</pubDate>
      <link>https://forem.com/ikaro1192/why-i-built-puremyha-a-lightweight-mysql-84-ha-manager-in-haskell-2ood</link>
      <guid>https://forem.com/ikaro1192/why-i-built-puremyha-a-lightweight-mysql-84-ha-manager-in-haskell-2ood</guid>
      <description>&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;What is it?&lt;/strong&gt; PureMyHA is a lightweight, asynchronous High Availability manager built exclusively for MySQL 8.4.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Why Haskell?&lt;/strong&gt; To leverage its robust type system and fearless concurrency for rock-solid state management where failure is not an option.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;The Goal:&lt;/strong&gt; Provide automated failover, split-brain protection, and modern MySQL 8.4 syntax support without the complexity of full-scale database clustering systems.&lt;/li&gt;
&lt;li&gt;🔗 &lt;strong&gt;GitHub Repository:&lt;/strong&gt; &lt;a href="https://github.com/ikaro1192/PureMyHA" rel="noopener noreferrer"&gt;ikaro1192/PureMyHA&lt;/a&gt; &lt;em&gt;(If you find it interesting, a star would be awesome!)&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The State of MySQL 8.4 High Availability
&lt;/h2&gt;

&lt;p&gt;In modern database operations, the need to build highly available (HA) MySQL clusters from scratch on bare metal or IaaS is decreasing, thanks to managed services like Amazon Aurora. However, due to strict latency requirements, cost optimization, or existing infrastructure constraints, self-hosted MySQL HA is still a harsh reality for many of us.&lt;/p&gt;

&lt;p&gt;When looking at HA solutions for the recently released MySQL 8.4, the official &lt;em&gt;InnoDB Cluster&lt;/em&gt; is a common choice. But there's a catch: it relies on Group Replication, which requires synchronous commits across nodes. For workloads highly sensitive to write latency, this can be a dealbreaker.&lt;/p&gt;

&lt;h3&gt;
  
  
  What about community solutions?
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;Orchestrator&lt;/em&gt; (now under Percona) supports MySQL 8.4, but its semi-synchronous replication support is still in the Tech Preview stage.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Vitess&lt;/em&gt; fully supports MySQL 8.4 and asynchronous replication. However, Vitess is a massive, all-in-one database clustering system. For a modest setup of just a few nodes, it is simply overkill.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Missing Piece: PureMyHA
&lt;/h3&gt;

&lt;p&gt;To summarize, there is still a strong need for a lightweight HA tool based on standard asynchronous replication—a tool that stays out of the way and only intervenes during failures.&lt;/p&gt;

&lt;p&gt;That’s why I built PureMyHA: a simple yet powerful HA manager written in Haskell, fully compatible with MySQL 8.4's new syntax and authentication methods.&lt;/p&gt;

&lt;h2&gt;
  
  
  Core Features
&lt;/h2&gt;

&lt;p&gt;PureMyHA is designed to be minimal but heavily armed for production edge-cases. Instead of packing every possible database feature, it focuses purely on making asynchronous replication highly available and easy to operate.&lt;/p&gt;

&lt;p&gt;Here are the highlights:&lt;/p&gt;

&lt;h3&gt;
  
  
  🐬 1. Exclusively MySQL 8.4 Native
&lt;/h3&gt;

&lt;p&gt;We dropped legacy baggage. PureMyHA uses only modern syntax and defaults:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Strictly uses &lt;code&gt;SHOW REPLICA STATUS&lt;/code&gt; and &lt;code&gt;CHANGE REPLICATION SOURCE TO&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Fully supports &lt;code&gt;caching_sha2_password&lt;/code&gt; authentication (the default in 8.4), leaving &lt;code&gt;mysql_native_password&lt;/code&gt; in the past.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  🛡️ 2. Bulletproof Automatic Failover
&lt;/h3&gt;

&lt;p&gt;Failovers shouldn't be scary. PureMyHA ensures safe promotions with zero-data-loss semantics where possible:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Errant GTID Repair:&lt;/strong&gt; Automatically detects and neutralizes errant GTIDs by injecting empty transactions before promotion.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Split-Brain Auto-Fencing:&lt;/strong&gt; If multiple nodes act as a source, it can automatically enforce &lt;code&gt;super_read_only=ON&lt;/code&gt; to prevent write divergence.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Anti-Flap Protection:&lt;/strong&gt; Prevents endless failover loops during network instability by enforcing a configurable &lt;code&gt;recovery_block_period&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Consecutive Failure Thresholds:&lt;/strong&gt; Avoids false positives from momentary TCP timeouts by requiring N consecutive probe failures.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  🛠️ 3. Built for the Operator
&lt;/h3&gt;

&lt;p&gt;We know that maintaining clusters involves more than just waiting for crashes.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Zero-Downtime Config Reloads:&lt;/strong&gt; Tweak monitoring thresholds or webhooks and apply them via &lt;code&gt;SIGHUP&lt;/code&gt; without restarting the daemon.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Native CLONE Plugin Support:&lt;/strong&gt; Re-seed a lagging or broken replica effortlessly from a donor node with a single CLI command (&lt;code&gt;puremyha clone&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Granular CLI Controls:&lt;/strong&gt; Pause/resume auto-failover, exclude specific replicas during maintenance, or perform dry-runs for manual switchovers.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  📊 4. Observability &amp;amp; Integrations
&lt;/h3&gt;

&lt;p&gt;PureMyHA fits right into modern cloud-native stacks.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Prometheus Metrics:&lt;/strong&gt; Exposes a &lt;code&gt;/metrics&lt;/code&gt; endpoint out of the box, tracking replication lag, failure counts, and cluster health.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Kubernetes-Ready:&lt;/strong&gt; Provides &lt;code&gt;/health&lt;/code&gt; liveness probes.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Custom Hooks:&lt;/strong&gt; Trigger your own shell scripts (e.g., Slack alerts, DNS updates) via lifecycle events like &lt;code&gt;pre_failover&lt;/code&gt; or &lt;code&gt;on_lag_threshold_exceeded&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
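As a sketch of what a hook could look like (the PUREMYHA_* environment variable names here are my assumptions for illustration, not a documented interface):

```shell
#!/usr/bin/env bash
# Hypothetical pre_failover hook. The PUREMYHA_* variable names are
# assumptions; check the hook documentation for the real interface.
set -eu

cluster="${PUREMYHA_CLUSTER:-unknown}"
old_source="${PUREMYHA_OLD_SOURCE:-unknown}"
msg="failover starting: cluster=$cluster old_source=$old_source"
echo "$msg"

# Forward to Slack, for example (webhook URL supplied by your environment):
# curl -X POST -H 'Content-type: application/json' \
#      -d "{\"text\": \"$msg\"}" "$SLACK_WEBHOOK_URL"
```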

&lt;p&gt;Note: For the complete, exhaustive list of features and configuration options, check out the Feature Reference in the docs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture: The Unix Philosophy
&lt;/h2&gt;

&lt;p&gt;PureMyHA is designed around the classic Unix philosophy: separate the heavy background lifting from the user interface. It consists of two main components communicating over a local socket.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fymn35e9i6j5lbz83kmr4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fymn35e9i6j5lbz83kmr4.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;puremyhad&lt;/code&gt; (The Daemon):&lt;/strong&gt; The brain of the operation. This long-running background process handles topology auto-discovery, continuous health monitoring, and automatic failovers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;puremyha&lt;/code&gt; (The CLI):&lt;/strong&gt; The control panel. It sends commands to the daemon and formats responses for the operator.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Communication:&lt;/strong&gt; They communicate via a Unix domain socket (&lt;code&gt;/run/puremyhad.sock&lt;/code&gt;) using newline-delimited JSON (NDJSON), ensuring fast and secure local-only access.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Quick Start: Up and Running in Minutes
&lt;/h2&gt;

&lt;p&gt;One of the main goals of PureMyHA is operational simplicity. Because it compiles to a single, statically-linked binary, you don't need to set up external dependencies like etcd, Consul, or ZooKeeper.&lt;/p&gt;

&lt;p&gt;Here is how easily you can get a cluster under management.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Installation
&lt;/h3&gt;

&lt;p&gt;The easiest way to install PureMyHA is via the pre-built packages for your distribution. (Docker builds and source compilation via cabal are also available).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ DOWNLOAD_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; https://api.github.com/repos/ikaro1192/PureMyHA/releases/latest &lt;span class="se"&gt;\&lt;/span&gt;
  | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.assets[].browser_download_url'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;uname&lt;/span&gt; &lt;span class="nt"&gt;-m&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\.&lt;/span&gt;&lt;span class="s2"&gt;rpm$"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;wget &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DOWNLOAD_URL&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nv"&gt;$ FILE_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;basename&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DOWNLOAD_URL&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;rpm &lt;span class="nt"&gt;-ivh&lt;/span&gt; &lt;span class="nv"&gt;$FILE_NAME&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl &lt;span class="nb"&gt;enable&lt;/span&gt; &lt;span class="nt"&gt;--now&lt;/span&gt; puremyhad
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. The Minimal Configuration
&lt;/h3&gt;

&lt;p&gt;Configuration is done via a straightforward YAML file. You just need to define your cluster nodes and provide the monitoring credentials and hooks. The daemon handles the rest by auto-discovering the topology.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# cp -abi /etc/puremyha/config.yaml{.example,}&lt;/span&gt;
&lt;span class="c"&gt;# vim /etc/puremyha/config.yaml&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
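For orientation, here is a hypothetical sketch of what that YAML might contain; the key names are illustrative, and the shipped config.yaml.example is the authoritative reference:

```yaml
# HYPOTHETICAL sketch -- key names are illustrative, not the real schema.
cluster:
  name: main
  nodes:
    - host: db01
      port: 3306
    - host: db02
      port: 3306
    - host: db03
      port: 3306
monitoring:
  user: puremyha_monitor
  password: "..."
  failure_threshold: 3        # consecutive probe failures before failover
hooks:
  pre_failover: /usr/local/bin/notify-slack.sh
```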



&lt;h3&gt;
  
  
  3. Start the Daemon
&lt;/h3&gt;

&lt;p&gt;Start the daemon and confirm it is healthy. It will immediately connect to the nodes, map the replication tree, and begin continuous health monitoring. From there, use the &lt;code&gt;puremyha&lt;/code&gt; CLI to interact with it over the local Unix socket.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# systemctl start puremyhad &lt;/span&gt;
&lt;span class="c"&gt;# systemctl status puremyhad &lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Demo: Chaos in Action
&lt;/h2&gt;

&lt;p&gt;Setting up is easy, but how does PureMyHA handle a real fire? Let’s walk through a simulated disaster and recovery scenario using a 3-node cluster, controlling everything strictly through the CLI.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. The Initial State
&lt;/h3&gt;

&lt;p&gt;First, let's check our healthy cluster. db01 is our active primary (source), with two replicas smoothly following along.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# puremyha status
CLUSTER             HEALTH                   SOURCE              NODES PAUSED  RECOVERY BLOCKED
----------------------------------------------------------------------------------------
main                Healthy                  db01                3     no      2026-04-08T13:04:40Z
# puremyha topology
Cluster: main
[SOURCE] db01:3306 [Healthy]
  [REPLICA] db02:3306 [Healthy] lag=0s
  [REPLICA] db03:3306 [Healthy] lag=0s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. Pulling the Plug (Automatic Failover)
&lt;/h3&gt;

&lt;p&gt;Now, let's simulate a hard crash by stopping the MySQL service on our primary node, db01.&lt;/p&gt;

&lt;p&gt;Instantly, the background daemon detects the disruption, confirms quorum, and executes an automatic failover. When we check the topology again, we can see that db02 has been safely promoted to the new source, and db03 has been re-pointed to it. db01 is correctly marked as unreachable.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# puremyha topology
Cluster: main
[SOURCE] db02:3306 [Healthy]
  [REPLICA] db03:3306 [Healthy] lag=0s
  [REPLICA] db01:3306 [NodeUnreachable: Network.Socket.connect: &amp;lt;socket: 16&amp;gt;: does not exist (Connection refused)]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3. Bringing the Dead Node Back
&lt;/h3&gt;

&lt;p&gt;Let's fast-forward. We fixed the issue on db01 and brought the server back online.&lt;/p&gt;

&lt;p&gt;PureMyHA detects that db01 is alive again, but because of its strict safety principles, it doesn't just blindly start replicating. It waits for operator instruction, marking the node as [NotReplicating].&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# puremyha topology
Cluster: main
[SOURCE] db02:3306 [Healthy]
  [REPLICA] db03:3306 [Healthy] lag=0s
  [REPLICA] db01:3306 [NotReplicating]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  4. Rejoining the Cluster
&lt;/h3&gt;

&lt;p&gt;To bring db01 back into the fold safely, we use the &lt;code&gt;demote&lt;/code&gt; command. This instructs PureMyHA to configure db01 as a standard replica under our new source, db02.&lt;/p&gt;

&lt;p&gt;The cluster is fully healthy again!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# puremyha demote --host db01 --source db02&lt;/span&gt;
OK: Demote completed: db01 is now a replica
&lt;span class="c"&gt;# puremyha topology&lt;/span&gt;
Cluster: main
&lt;span class="o"&gt;[&lt;/span&gt;SOURCE] db02:3306 &lt;span class="o"&gt;[&lt;/span&gt;Healthy]
  &lt;span class="o"&gt;[&lt;/span&gt;REPLICA] db03:3306 &lt;span class="o"&gt;[&lt;/span&gt;Healthy] &lt;span class="nv"&gt;lag&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0s
  &lt;span class="o"&gt;[&lt;/span&gt;REPLICA] db01:3306 &lt;span class="o"&gt;[&lt;/span&gt;Healthy] &lt;span class="nv"&gt;lag&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  5. The Graceful Switchover
&lt;/h3&gt;

&lt;p&gt;Finally, to complete our maintenance, we want our original topology back with db01 at the helm. Since this is a planned operation, we demand zero data loss. We trigger a manual switchover.&lt;/p&gt;

&lt;p&gt;PureMyHA handles the delicate dance under the hood: locking writes, waiting for db01 to catch up to the exact GTID, promoting it, and re-pointing db02 and db03.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# puremyha switchover --to=db01&lt;/span&gt;
OK: Switchover completed
&lt;span class="c"&gt;# puremyha topology&lt;/span&gt;
Cluster: main
&lt;span class="o"&gt;[&lt;/span&gt;SOURCE] db01:3306 &lt;span class="o"&gt;[&lt;/span&gt;Healthy]
  &lt;span class="o"&gt;[&lt;/span&gt;REPLICA] db02:3306 &lt;span class="o"&gt;[&lt;/span&gt;Healthy] &lt;span class="nv"&gt;lag&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0s
  &lt;span class="o"&gt;[&lt;/span&gt;REPLICA] db03:3306 &lt;span class="o"&gt;[&lt;/span&gt;Healthy] &lt;span class="nv"&gt;lag&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And just like that, we are exactly back where we started. No missing transactions, no split-brain, no manual GTID math—just clean, predictable operations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Engineering Decisions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Why Haskell for an HA Manager?
&lt;/h3&gt;

&lt;p&gt;When building infrastructure tooling—especially an HA manager like PureMyHA—you are dealing with a hostile environment. Network partitions, split-brain scenarios, and manual interventions create a chaotic storm of asynchronous events.&lt;/p&gt;

&lt;p&gt;Managing this state machine correctly is the difference between keeping a database highly available and accidentally wiping out production. For handling this complex state management and concurrent execution safely, Haskell's language guarantees proved invaluable.&lt;/p&gt;

&lt;h4&gt;
  
  
  1. Pure Functions for State Transitions (Fearless Testing)
&lt;/h4&gt;

&lt;p&gt;The Problem: In many languages, I/O (like polling MySQL) and state mutations get tangled together. This creates a breeding ground for race conditions and makes thorough testing nearly impossible without complex mocking.&lt;/p&gt;

&lt;p&gt;The Haskell Solution: Haskell enforces a strict separation between side effects (I/O) and business logic. We modeled the core state transitions as a completely pure function (a reducer): &lt;code&gt;f : State × Event → State&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Because this logic is detached from the network or database, we can write lightning-fast, exhaustive unit tests that cover thousands of edge cases deterministically.&lt;/p&gt;
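&lt;p&gt;As a rough illustration of the shape of such a reducer (the type and constructor names here are hypothetical, not PureMyHA's actual internals):&lt;/p&gt;

```haskell
-- A minimal sketch of a pure state-transition reducer.
-- All names are illustrative; PureMyHA's real types are richer.

data Event
  = SourceUnreachable String  -- host that stopped answering
  | ReplicaCaughtUp String    -- replica that reached the target GTID
  | OperatorPause             -- operator asked the daemon to stand down
  deriving (Eq, Show)

data ClusterState
  = Healthy String            -- current source host
  | FailingOver String        -- promotion candidate
  | Paused
  deriving (Eq, Show)

-- | f : State × Event → State, with no I/O anywhere in sight.
step :: ClusterState -> Event -> ClusterState
step (Healthy src) (SourceUnreachable h)
  | src == h = FailingOver "db02"  -- candidate selection elided; hardcoded for the sketch
step (FailingOver c) (ReplicaCaughtUp h)
  | c == h = Healthy h
step _ OperatorPause = Paused
step s _ = s                       -- everything else: no transition
```

&lt;p&gt;Because &lt;code&gt;step&lt;/code&gt; never touches the network, a test is just an equality check on plain values.&lt;/p&gt;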

&lt;h4&gt;
  
  
  2. Lock-Free Concurrency with STM
&lt;/h4&gt;

&lt;p&gt;The Problem: Traditional concurrency models rely on mutexes and locks. This all too often leads to deadlocks, forgotten unlocks, or Time-of-Check to Time-of-Use (TOCTOU) bugs: fatal flaws for an HA manager.&lt;/p&gt;

&lt;p&gt;The Haskell Solution: Software Transactional Memory (STM). Inside an &lt;code&gt;atomically&lt;/code&gt; block, the entire flow of popping an event from a queue, calculating the new state, and writing it back is one indivisible transaction. If a conflict occurs, the Haskell runtime safely and automatically retries the transaction. Data corruption between threads is structurally impossible.&lt;/p&gt;
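&lt;p&gt;A minimal sketch of that transaction, using the pure reducer idea from the previous point (stand-in types so the example compiles on its own; PureMyHA's real state and events are richer):&lt;/p&gt;

```haskell
import Control.Concurrent.STM

-- Stand-in types so the sketch is self-contained.
data Event = Tick deriving (Eq, Show)

newtype ClusterState = ClusterState Int deriving (Eq, Show)

-- Placeholder pure reducer.
step :: ClusterState -> Event -> ClusterState
step (ClusterState n) Tick = ClusterState (n + 1)

-- Pop an event, run the pure reducer, publish the new state: one
-- indivisible transaction. On conflict the runtime retries; there is
-- no lock to forget and no TOCTOU window between read and write.
processOne :: TQueue Event -> TVar ClusterState -> IO ()
processOne queue stateVar = atomically $ do
  ev <- readTQueue queue        -- retries (blocks) until an event arrives
  st <- readTVar stateVar
  writeTVar stateVar (step st ev)
```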

&lt;h4&gt;
  
  
  3. Strict Modeling with Algebraic Data Types (ADTs)
&lt;/h4&gt;

&lt;p&gt;The Problem: Relying on boolean flags or string constants to track state leads to "impossible states" or loss of context (e.g., knowing a &lt;code&gt;TopologyDrift&lt;/code&gt; occurred, but losing the specific reason why).&lt;/p&gt;

&lt;p&gt;The Haskell Solution: ADTs allow us to model facts and events rigorously. Every state transition is explicit. Even better, the compiler enforces exhaustive pattern matching: if we introduce a new failure scenario or event type, the code will not compile until we have explicitly handled it everywhere in the system. No unhandled cases waiting to blow up at runtime.&lt;/p&gt;
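&lt;p&gt;A small illustration of the idea (hypothetical constructors, not PureMyHA's real event types). Instead of a bare flag, the drift event carries &lt;em&gt;why&lt;/em&gt; it happened, and GHC's incomplete-pattern warning is promoted to an error:&lt;/p&gt;

```haskell
{-# OPTIONS_GHC -Wincomplete-patterns -Werror=incomplete-patterns #-}
-- Hypothetical sketch: events modeled as ADTs that keep their context.

data DriftReason
  = ReplicaRepointed String String   -- replica, unexpected source
  | UnknownReplicaAppeared String
  deriving Show

data Observation
  = InSync
  | TopologyDrift DriftReason
  deriving Show

-- With incomplete patterns treated as errors, deleting any branch
-- below (or adding a new DriftReason constructor) becomes a compile
-- error, not a runtime surprise.
describe :: Observation -> String
describe InSync = "topology matches expectations"
describe (TopologyDrift (ReplicaRepointed r s)) =
  r ++ " unexpectedly replicates from " ++ s
describe (TopologyDrift (UnknownReplicaAppeared r)) =
  "unknown replica " ++ r ++ " joined"
```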

&lt;h4&gt;
  
  
  4. Natural Backpressure and Async Side-Effects
&lt;/h4&gt;

&lt;p&gt;The Problem: An HA manager often needs to run external hook scripts (like Slack alerts or DNS updates). If these scripts hang or events spike, it can block the main monitoring loop or cause memory exhaustion.&lt;/p&gt;

&lt;p&gt;The Haskell Solution: Haskell makes robust concurrency patterns trivial. Using bounded queues (&lt;code&gt;TBQueue&lt;/code&gt;) provides natural backpressure, preventing the system from being overwhelmed. Furthermore, executing side-effects asynchronously (fire-and-forget) while maintaining thread safety takes only a few lines of code using the &lt;code&gt;async&lt;/code&gt; library.&lt;/p&gt;
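&lt;p&gt;Sketched with hypothetical names, and with plain &lt;code&gt;forkIO&lt;/code&gt; from &lt;code&gt;base&lt;/code&gt; standing in for the &lt;code&gt;async&lt;/code&gt; library so the example stays dependency-free:&lt;/p&gt;

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.STM
import Control.Monad (forever, void)

-- A bounded queue of pending hook invocations. When the queue is
-- full, writeTBQueue blocks the producer: that blocking *is* the
-- backpressure, so an event spike cannot exhaust memory.
type HookQueue = TBQueue String

enqueueHook :: HookQueue -> String -> IO ()
enqueueHook q hook = atomically (writeTBQueue q hook)

-- Drain the queue on a background thread so a slow Slack or DNS hook
-- never stalls the monitoring loop (fire-and-forget).
startHookWorker :: HookQueue -> (String -> IO ()) -> IO ()
startHookWorker q runHook = void . forkIO . forever $ do
  hook <- atomically (readTBQueue q)
  runHook hook
```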

&lt;h3&gt;
  
  
  The Unix Philosophy: Simple by Deliberate Omission
&lt;/h3&gt;

&lt;p&gt;In an era where database tools tend to evolve into heavyweight, all-in-one platforms, PureMyHA takes a step back. It embraces the classic Unix philosophy: write programs that do one thing and do it well.&lt;/p&gt;

&lt;p&gt;Here are the core principles that guided its architecture:&lt;/p&gt;

&lt;h4&gt;
  
  
  🔪 1. Do One Thing Well (and Delegate the Rest)
&lt;/h4&gt;

&lt;p&gt;PureMyHA is a highly focused HA tool. It is not a topology manager, a query router, or a schema migration framework. It detects failures, promotes a replica safely, and gets out of the way.&lt;/p&gt;

&lt;p&gt;Furthermore, it strictly delegates what it does not own. For example, PureMyHA does not implement its own distributed consensus or leader election for the daemon itself. Its own high availability is delegated entirely to tools like Pacemaker, which are already purpose-built for that exact problem.&lt;/p&gt;

&lt;h4&gt;
  
  
  🛡️ 2. Correctness Before Convenience
&lt;/h4&gt;

&lt;p&gt;A failover that corrupts data is worse than no failover at all. Every decision made by PureMyHA is strictly GTID-aware. It actively detects and neutralizes errant GTIDs, waits for relay log application before promotion, and identifies split-brain scenarios before acting. Safety is never compromised for speed.&lt;/p&gt;

&lt;h4&gt;
  
  
  🚫 3. Simple by Deliberate Omission
&lt;/h4&gt;

&lt;p&gt;We explicitly target MySQL 8.4+ and nothing else. There is no support for legacy syntax, older authentication plugins (&lt;code&gt;mysql_native_password&lt;/code&gt;), or non-GTID topologies. Saying "no" to backward-compatibility layers keeps the codebase remarkably small, auditable, and easy to keep correct.&lt;/p&gt;

&lt;h4&gt;
  
  
  📦 4. Pure, Stateless, and Dependency-Free
&lt;/h4&gt;

&lt;p&gt;Pure Haskell: PureMyHA is built entirely on &lt;code&gt;mysql-haskell&lt;/code&gt;, a pure-Haskell MySQL client. There is no &lt;code&gt;libmysqlclient&lt;/code&gt; and no FFI. The result is a single, statically linked binary that you can drop in and run anywhere.&lt;/p&gt;

&lt;p&gt;Stateless by Design: The daemon itself holds no durable state. All topology knowledge is dynamically derived directly from MySQL on startup and continuously refreshed. If the daemon crashes, recovery is trivially safe—just restart it.&lt;/p&gt;

&lt;h4&gt;
  
  
  🔍 5. Transparent Operation
&lt;/h4&gt;

&lt;p&gt;Infrastructure operators need to trust their tools. PureMyHA provides dry-run modes, config hot-reloads, and pause/resume controls, giving operators full visibility and control over the cluster without ever requiring a daemon restart.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrapping Up
&lt;/h2&gt;

&lt;p&gt;The database ecosystem is inherently complex, and the stakes in production are incredibly high. However, by leveraging Haskell's robust concurrency guarantees and adhering strictly to the Unix philosophy—doing one thing and doing it flawlessly—I believe PureMyHA brings a breath of fresh air to modern MySQL 8.4 operations.&lt;/p&gt;

&lt;p&gt;It proves that infrastructure tooling can be incredibly safe without being highly complex.&lt;/p&gt;

&lt;h3&gt;
  
  
  Try It Out! 🚀
&lt;/h3&gt;

&lt;p&gt;If you are running MySQL 8.4 (or planning an upgrade soon) and feel that existing HA solutions are either overkill for your modest setup or too complex to audit, I highly encourage you to give PureMyHA a spin.&lt;/p&gt;

&lt;p&gt;Grab the binary, spin up a few local Docker containers, and try pulling the plug on a source node. I think you'll appreciate how smoothly and predictably it handles the chaos.&lt;/p&gt;

&lt;h3&gt;
  
  
  Star, Fork, and Contribute 🌟
&lt;/h3&gt;

&lt;p&gt;This project is completely open-source, and I would love to grow it with the community.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Drop a Star: If you found this article interesting or like the architectural approach, please consider giving the project a &lt;a href="https://github.com/ikaro1192/PureMyHA" rel="noopener noreferrer"&gt;⭐ Star on GitHub&lt;/a&gt;! It means the world to me and helps the project gain visibility.&lt;/li&gt;
&lt;li&gt;Issues &amp;amp; PRs Welcome: I am actively looking for feedback from real-world operators. Did you find an edge case? Do you have an idea for a new webhook trigger? Please open an Issue! Pull Requests—whether for code, tests, or documentation—are more than welcome.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's build a rock-solid, minimalist HA ecosystem for MySQL 8.4 together. Thanks for reading!&lt;/p&gt;

</description>
      <category>mysql</category>
      <category>database</category>
      <category>opensource</category>
      <category>haskell</category>
    </item>
  </channel>
</rss>
