<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Bruno Verachten</title>
    <description>The latest articles on Forem by Bruno Verachten (@gounthar).</description>
    <link>https://forem.com/gounthar</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F119512%2F56f3a9de-2488-4302-ae9a-a97b3b037299.jpeg</url>
      <title>Forem: Bruno Verachten</title>
      <link>https://forem.com/gounthar</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/gounthar"/>
    <language>en</language>
    <item>
      <title>Running Node.js on RISC-V with Docker (when there's no official image yet)</title>
      <dc:creator>Bruno Verachten</dc:creator>
      <pubDate>Thu, 05 Mar 2026 10:51:52 +0000</pubDate>
      <link>https://forem.com/gounthar/running-nodejs-on-risc-v-with-docker-when-theres-no-official-image-yet-279i</link>
      <guid>https://forem.com/gounthar/running-nodejs-on-risc-v-with-docker-when-theres-no-official-image-yet-279i</guid>
      <description>&lt;p&gt;&lt;em&gt;Photo by &lt;a href="https://unsplash.com/@tatiana_p?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Tatiana P&lt;/a&gt; on &lt;a href="https://unsplash.com/photos/a-large-bridge-spanning-over-a-large-body-of-water-UfFSO6JOlKE" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You know that moment when you try &lt;code&gt;docker pull node --platform linux/riscv64&lt;/code&gt; and Docker comes back with this?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Using default tag: latest
Error response from daemon: no matching manifest for linux/riscv64 in the manifest list entries:
no match for platform in manifest: not found
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Not exactly a warm welcome.&lt;/p&gt;

&lt;p&gt;If you've been watching the RISC-V world lately, you know the software story is catching up fast. Kernel support has been there for years, Debian and Fedora ship riscv64 builds, and boards like the Banana Pi F3 or StarFive VisionFive 2 are sitting on people's desks running real workloads. But the official Node.js Docker images? They don't support riscv64. Not yet. There's work happening upstream to change that, but we're not there today.&lt;/p&gt;

&lt;p&gt;So I built my own.&lt;/p&gt;

&lt;h2&gt;
  
  
  The problem (a.k.a. "why can't I just apt install?")
&lt;/h2&gt;

&lt;p&gt;I maintain a set of &lt;a href="https://github.com/gounthar/unofficial-builds" rel="noopener noreferrer"&gt;unofficial Node.js builds for riscv64&lt;/a&gt;. These are native builds, compiled on actual RISC-V hardware (a Banana Pi F3 with 8 cores and 16GB of RAM, because if it's too easy, it's no fun, right?), packaged as tarballs, &lt;code&gt;.deb&lt;/code&gt;, and &lt;code&gt;.rpm&lt;/code&gt; files, and published as GitHub Releases. Node.js 22 LTS and 24 Current are available today.&lt;/p&gt;

&lt;p&gt;The builds work well. But distributing raw tarballs has friction. People who get their hands on a RISC-V board typically have a clean Debian or Fedora install. Their distro might ship Node.js 18. Maybe they installed a vendor-specific build. They don't want to mess with that. They want isolation.&lt;/p&gt;

&lt;p&gt;Others don't have RISC-V hardware at all. They want to test their Node.js application against riscv64 before their CI environment supports it, or before they deploy to a RISC-V server. QEMU handles that.&lt;/p&gt;

&lt;p&gt;In both cases, Docker is the answer.&lt;/p&gt;

&lt;h2&gt;
  
  
  What we ship
&lt;/h2&gt;

&lt;p&gt;The images live on Docker Hub at &lt;a href="https://hub.docker.com/r/gounthar/node-riscv64" rel="noopener noreferrer"&gt;&lt;code&gt;gounthar/node-riscv64&lt;/code&gt;&lt;/a&gt;. Two variants per version:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tag&lt;/th&gt;
&lt;th&gt;Base&lt;/th&gt;
&lt;th&gt;What's inside&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;24.13.1-trixie&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;buildpack-deps:trixie&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Node.js 24 + npm + Yarn + gcc, g++, make, python3 (~522 MB)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;24.13.1-trixie-slim&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;debian:trixie-slim&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Node.js 24 + npm + Yarn, minimal footprint (~80 MB)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;22.22.0-trixie&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;buildpack-deps:trixie&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Node.js 22 LTS + npm + Yarn + build tools (~523 MB)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;22.22.0-trixie-slim&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;debian:trixie-slim&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Node.js 22 LTS + npm + Yarn, minimal footprint (~80 MB)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;You'll notice there's no &lt;code&gt;latest&lt;/code&gt; tag. That's on purpose, at least for now. The first images were pushed today via manual workflow dispatch, and the CI pipeline only creates &lt;code&gt;latest&lt;/code&gt; and &lt;code&gt;slim&lt;/code&gt; floating tags on actual GitHub Release events. The next release will add them.&lt;/p&gt;

&lt;p&gt;But honestly? You shouldn't use &lt;code&gt;latest&lt;/code&gt; anyway. Friends don't let friends use &lt;code&gt;latest&lt;/code&gt;. It's a moving target that tells you nothing about what you're running. Your build works on Tuesday, breaks on Thursday, and you have no idea what changed because &lt;code&gt;latest&lt;/code&gt; silently moved from 22.x to 24.x under your feet. Pin your versions. &lt;code&gt;24.13.1-trixie&lt;/code&gt; means you know exactly what you're getting, and your Dockerfile stays reproducible six months from now. &lt;code&gt;latest&lt;/code&gt; is a convenience for quick demos and local experiments, not for anything you'd put in production or commit to a repo.&lt;/p&gt;

&lt;p&gt;The full variant is for development: you can &lt;code&gt;npm install&lt;/code&gt; native addons without extra setup. The slim variant is for production or when image size matters.&lt;/p&gt;

&lt;p&gt;Quick smoke test:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;--platform&lt;/span&gt; linux/riscv64 gounthar/node-riscv64:24.13.1-trixie node &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"console.log(process.arch)"&lt;/span&gt;
&lt;span class="c"&gt;# riscv64&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you're on an x86_64 or ARM machine, Docker uses QEMU under the hood to emulate riscv64. It's slower than native, but it works.&lt;/p&gt;

&lt;h2&gt;
  
  
  How the Dockerfiles are structured
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The full variant
&lt;/h3&gt;

&lt;p&gt;We adapted the Dockerfiles from the upstream &lt;a href="https://github.com/nodejs/docker-node" rel="noopener noreferrer"&gt;&lt;code&gt;nodejs/docker-node&lt;/code&gt;&lt;/a&gt; project. The approach is the same one the official images use for x86_64 and ARM: download a pre-built binary, verify checksums, extract to &lt;code&gt;/usr/local/&lt;/code&gt;, install Yarn, set up the entrypoint.&lt;/p&gt;

&lt;p&gt;The full variant is pretty straightforward:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; buildpack-deps:trixie&lt;/span&gt;

&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; NODE_VERSION=24.13.1&lt;/span&gt;
&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; NODE_DOWNLOAD_URL=https://github.com/gounthar/unofficial-builds/releases/download&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-ex&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; curl &lt;span class="nt"&gt;-fsSLO&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;NODE_DOWNLOAD_URL&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/v&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;NODE_VERSION&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/node-v&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;NODE_VERSION&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-linux-riscv64.tar.xz"&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; curl &lt;span class="nt"&gt;-fsSLO&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;NODE_DOWNLOAD_URL&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/v&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;NODE_VERSION&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/SHASUMS256.txt"&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="s2"&gt;" node-v&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;NODE_VERSION&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-linux-riscv64.tar.xz&lt;/span&gt;&lt;span class="se"&gt;\$&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; SHASUMS256.txt | &lt;span class="nb"&gt;sha256sum&lt;/span&gt; &lt;span class="nt"&gt;-c&lt;/span&gt; - &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;tar&lt;/span&gt; &lt;span class="nt"&gt;-xJf&lt;/span&gt; &lt;span class="s2"&gt;"node-v&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;NODE_VERSION&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-linux-riscv64.tar.xz"&lt;/span&gt; &lt;span class="nt"&gt;-C&lt;/span&gt; /usr/local &lt;span class="nt"&gt;--strip-components&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1 &lt;span class="nt"&gt;--no-same-owner&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="s2"&gt;"node-v&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;NODE_VERSION&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-linux-riscv64.tar.xz"&lt;/span&gt; SHASUMS256.txt &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;ln&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; /usr/local/bin/node /usr/local/bin/nodejs &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; node &lt;span class="nt"&gt;--version&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; npm &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Nothing exotic. The binaries come from our own GitHub Releases instead of nodejs.org, and we verify them with the SHA256 checksums published alongside each release.&lt;/p&gt;
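
The verification pattern is easy to try outside Docker. Here's a minimal sketch using a stand-in file in place of a real release tarball (the filename just follows the release convention above; we never extract it):

```shell
# Recreate the SHASUMS256.txt check with a stand-in file. The Dockerfile
# runs the same grep | sha256sum -c pipeline against the downloaded tarball.
tmpdir=$(mktemp -d)
cd "$tmpdir"
printf 'stand-in tarball contents\n' > node-v24.13.1-linux-riscv64.tar.xz
sha256sum node-v24.13.1-linux-riscv64.tar.xz > SHASUMS256.txt
# Pick out the one line for our artifact, then let sha256sum verify it.
grep ' node-v24.13.1-linux-riscv64.tar.xz$' SHASUMS256.txt | sha256sum -c -
# prints: node-v24.13.1-linux-riscv64.tar.xz: OK
```

If the download were corrupted or tampered with, `sha256sum -c` would exit non-zero, and the `set -ex` at the top of the Dockerfile's RUN instruction would abort the build.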

&lt;h3&gt;
  
  
  The slim variant (where it gets interesting)
&lt;/h3&gt;

&lt;p&gt;The slim variant uses a technique borrowed from the official Node.js images to keep the final image small. After installing Node.js, it uses &lt;code&gt;ldd&lt;/code&gt; to figure out which shared libraries the binaries actually need at runtime, marks only those packages as manually installed, then runs &lt;code&gt;apt-get purge --auto-remove&lt;/code&gt; to strip everything else:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;apt-mark auto &lt;span class="s1"&gt;'.*'&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /dev/null
find /usr/local &lt;span class="nt"&gt;-type&lt;/span&gt; f &lt;span class="nt"&gt;-executable&lt;/span&gt; &lt;span class="nt"&gt;-exec&lt;/span&gt; ldd &lt;span class="s1"&gt;'{}'&lt;/span&gt; &lt;span class="s1"&gt;';'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s1"&gt;'/=&amp;gt;/ { so = $(NF-1); if (index(so, "/usr/local/") == 1) { next }; gsub("^/(usr/)?", "", so); print so }'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  | &lt;span class="nb"&gt;sort&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  | xargs &lt;span class="nt"&gt;-r&lt;/span&gt; dpkg-query &lt;span class="nt"&gt;--search&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  | &lt;span class="nb"&gt;cut&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt;: &lt;span class="nt"&gt;-f1&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  | &lt;span class="nb"&gt;sort&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  | xargs &lt;span class="nt"&gt;-r&lt;/span&gt; apt-mark manual
apt-get purge &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="nt"&gt;--auto-remove&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; APT::AutoRemove::RecommendsImportant&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;false&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Why am I showing you this gnarly pipeline? Because I find it genuinely clever: the final image keeps only the shared libraries the Node.js binaries actually link against. The slim Dockerfile also removes OpenSSL architecture-specific files for platforms other than riscv64, since they're dead weight in a single-arch image.&lt;/p&gt;
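
You can watch the heart of that pipeline work in isolation. This sketch feeds one fabricated `ldd` output line through the same awk program: it grabs the resolved library path, would skip anything under `/usr/local/` (that's Node.js itself), and strips the leading `/` or `/usr/` so the lookup works whether the package recorded the file under `/lib` or `/usr/lib`:

```shell
# One fabricated ldd line in, one dpkg-searchable path out.
echo '      libc.so.6 => /lib/riscv64-linux-gnu/libc.so.6 (0x0000004000a00000)' |
  awk '/=>/ { so = $(NF-1); if (index(so, "/usr/local/") == 1) { next }; gsub("^/(usr/)?", "", so); print so }'
# prints: lib/riscv64-linux-gnu/libc.so.6
```

From there, `dpkg-query --search` maps each path to a package name, those packages get marked manual, and everything else becomes eligible for auto-removal.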

&lt;h2&gt;
  
  
  The CI pipeline
&lt;/h2&gt;

&lt;h3&gt;
  
  
  From release to Docker Hub
&lt;/h3&gt;

&lt;p&gt;Every time we create a GitHub Release (which happens when a new Node.js version comes out or we rebuild an existing one), a GitHub Actions workflow kicks in:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Verifies the release contains a &lt;code&gt;node-v*-linux-riscv64.tar.xz&lt;/code&gt; tarball&lt;/li&gt;
&lt;li&gt;Sets up QEMU for riscv64 emulation&lt;/li&gt;
&lt;li&gt;Sets up Docker Buildx for cross-platform builds&lt;/li&gt;
&lt;li&gt;Builds both the full and slim images targeting &lt;code&gt;linux/riscv64&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Pushes to Docker Hub with version-specific tags&lt;/li&gt;
&lt;li&gt;Updates the &lt;code&gt;latest&lt;/code&gt; and &lt;code&gt;slim&lt;/code&gt; floating tags (only for actual releases, not manual rebuilds)&lt;/li&gt;
&lt;li&gt;Syncs the Docker Hub repository description from a README file in the repo&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The workflow runs on standard &lt;code&gt;ubuntu-latest&lt;/code&gt; GitHub runners. No RISC-V hardware needed for the Docker build step; QEMU handles it. The actual Node.js compilation happened earlier on the Banana Pi F3; this workflow just packages the result.&lt;/p&gt;

&lt;p&gt;One detail I'm particularly proud of: the floating &lt;code&gt;latest&lt;/code&gt; and &lt;code&gt;slim&lt;/code&gt; tags only get updated on release events, not on manual workflow dispatches. This prevents someone (me, probably) from accidentally rebuilding an older version and overwriting the latest tag. I've been burned by that kind of thing before.&lt;/p&gt;
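
The gating itself is tiny. A sketch of the decision (the variable name and messages are illustrative, not lifted from the real workflow):

```shell
# GitHub Actions exposes the trigger as the event name: "release" for real
# release events, "workflow_dispatch" for manual reruns of older versions.
EVENT_NAME="workflow_dispatch"
if [ "$EVENT_NAME" = "release" ]; then
  echo "pushing latest and slim floating tags"
else
  echo "version tags only, floating tags untouched"
fi
# prints: version tags only, floating tags untouched
```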

&lt;h3&gt;
  
  
  Staying in sync with upstream
&lt;/h3&gt;

&lt;p&gt;The official &lt;code&gt;nodejs/docker-node&lt;/code&gt; project evolves. Security fixes, Yarn updates, Dockerfile best practices. We don't want our images to drift silently.&lt;/p&gt;

&lt;p&gt;So we have a second workflow that runs weekly (every Monday at 06:00 UTC). It reads a tracking file (&lt;code&gt;docker/UPSTREAM_REF&lt;/code&gt;) that records which upstream commit we last synced from and which files we care about:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;repo=nodejs/docker-node
commit=74b0481b76e0af5b19d425ad34489e7393b23aff
files=24/trixie/Dockerfile,24/trixie-slim/Dockerfile,24/trixie/docker-entrypoint.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The workflow compares file hashes between our tracked commit and upstream HEAD. If the Dockerfiles or entrypoint changed, it creates a GitHub issue tagged &lt;code&gt;upstream-sync&lt;/code&gt; with a diff link and a list of changed files. If upstream moved forward but the tracked files didn't change, it opens a PR to update the reference commit. If nothing changed, it does nothing.&lt;/p&gt;

&lt;p&gt;This is a lighter approach than forking the entire upstream repo. We track exactly the files we adapted, and we get notified only when those files change.&lt;/p&gt;
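
The decision at the core of that workflow is just a content-hash comparison. A minimal local sketch, with two throwaway files standing in for the tracked commit's Dockerfile and upstream HEAD's:

```shell
# Two stand-in versions of a tracked file.
printf 'FROM buildpack-deps:trixie\n' > tracked.Dockerfile
printf 'FROM buildpack-deps:trixie\nARG NODE_VERSION=24.13.1\n' > upstream.Dockerfile
old=$(sha256sum tracked.Dockerfile | cut -d' ' -f1)
new=$(sha256sum upstream.Dockerfile | cut -d' ' -f1)
if [ "$old" = "$new" ]; then
  echo "in sync, nothing to do"
else
  echo "drifted: open an upstream-sync issue"
fi
# prints: drifted: open an upstream-sync issue
```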

&lt;h2&gt;
  
  
  Using the images
&lt;/h2&gt;

&lt;h3&gt;
  
  
  On a RISC-V machine
&lt;/h3&gt;

&lt;p&gt;If you're running Docker on actual riscv64 hardware, there's no &lt;code&gt;--platform&lt;/code&gt; flag needed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-it&lt;/span&gt; gounthar/node-riscv64:24.13.1-trixie bash
node &lt;span class="nt"&gt;-v&lt;/span&gt;    &lt;span class="c"&gt;# v24.13.1&lt;/span&gt;
npm &lt;span class="nt"&gt;-v&lt;/span&gt;     &lt;span class="c"&gt;# 11.8.0&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can use the image as a base for your own applications:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; gounthar/node-riscv64:24.13.1-trixie&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; package*.json ./&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["node", "server.js"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or the slim variant for a smaller production image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; gounthar/node-riscv64:24.13.1-trixie-slim&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; package*.json ./&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm ci &lt;span class="nt"&gt;--omit&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;dev
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;
&lt;span class="k"&gt;USER&lt;/span&gt;&lt;span class="s"&gt; node&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["node", "server.js"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
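
And if you want both worlds, the two variants combine naturally in a multi-stage build: compile native addons on the full image, then ship the result on slim. A sketch (the app layout and `server.js` are hypothetical):

```dockerfile
# Build stage: the full image has gcc, g++, make, python3 for native addons.
FROM gounthar/node-riscv64:24.13.1-trixie AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .

# Runtime stage: slim image, with the compiled node_modules copied over.
FROM gounthar/node-riscv64:24.13.1-trixie-slim
WORKDIR /app
COPY --from=build /app ./
USER node
CMD ["node", "server.js"]
```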



&lt;h3&gt;
  
  
  On x86_64 or ARM with QEMU
&lt;/h3&gt;

&lt;p&gt;Make sure QEMU user-mode emulation is registered. On most Docker Desktop installations, this works out of the box. On Linux, you may need:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--privileged&lt;/span&gt; multiarch/qemu-user-static &lt;span class="nt"&gt;--reset&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="nb"&gt;yes&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then use the &lt;code&gt;--platform&lt;/code&gt; flag:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;--platform&lt;/span&gt; linux/riscv64 gounthar/node-riscv64:24.13.1-trixie node &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"
const os = require('os');
console.log('arch:', os.arch());
console.log('platform:', os.platform());
console.log('cpus:', os.cpus().length);
"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Expect it to be slow. QEMU instruction-level emulation has overhead. But for testing compatibility and validating that your dependencies work on riscv64, it's good enough.&lt;/p&gt;

&lt;h3&gt;
  
  
  Building native addons
&lt;/h3&gt;

&lt;p&gt;The full variant includes build tools, so native addons with C/C++ bindings compile inside the container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;--platform&lt;/span&gt; linux/riscv64 gounthar/node-riscv64:24.13.1-trixie &lt;span class="se"&gt;\&lt;/span&gt;
  sh &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"mkdir /tmp/test &amp;amp;&amp;amp; cd /tmp/test &amp;amp;&amp;amp; npm init -y &amp;amp;&amp;amp; npm install utf-8-validate &amp;amp;&amp;amp; node -e &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;require('utf-8-validate'); console.log('native addon loaded')&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is useful for checking whether your application's native dependencies build on riscv64 before committing to hardware. Give it a whirl with your own dependencies.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why not wait for official images?
&lt;/h2&gt;

&lt;p&gt;There's an open discussion at &lt;a href="https://github.com/nodejs/docker-node/issues/1707" rel="noopener noreferrer"&gt;nodejs/docker-node#1707&lt;/a&gt; about adding riscv64 support to the official images. It's moving, but it takes time. The official images need to build from official binaries, and Node.js doesn't produce official riscv64 binaries yet (riscv64 is in the "unofficial builds" tier).&lt;/p&gt;

&lt;p&gt;We're filling a gap. When official support lands, these images become unnecessary and that's fine. Until then, if you need Node.js on riscv64 in a container today, they're here.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's inside, specifically
&lt;/h2&gt;

&lt;p&gt;Each image contains:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Node.js (built natively on RISC-V, not cross-compiled) with npm&lt;/li&gt;
&lt;li&gt;Yarn Classic 1.22.22, verified via GPG signature&lt;/li&gt;
&lt;li&gt;A &lt;code&gt;node&lt;/code&gt; user (UID 1000) for running processes without root&lt;/li&gt;
&lt;li&gt;An entrypoint that auto-detects whether you're passing Node.js arguments or a system command&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That last point means &lt;code&gt;docker run gounthar/node-riscv64 app.js&lt;/code&gt; runs &lt;code&gt;node app.js&lt;/code&gt;, while &lt;code&gt;docker run gounthar/node-riscv64 bash&lt;/code&gt; gives you a shell. Same behavior as the official Node.js images.&lt;/p&gt;
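
The dispatch rule fits in a few lines of POSIX shell. A simplified sketch of the idea, not a verbatim copy of the entrypoint:

```shell
# If the first argument looks like a flag, or is not an executable on PATH,
# assume the user meant to run it through node. Otherwise run it as-is.
set -- app.js
if [ "${1#-}" != "$1" ] || ! command -v "$1" >/dev/null; then
  set -- node "$@"
fi
printf 'would exec: %s\n' "$*"
# prints: would exec: node app.js
```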

&lt;h2&gt;
  
  
  Some numbers
&lt;/h2&gt;

&lt;p&gt;Node.js compiles in about 12 hours on the Banana Pi F3 for a first build, and in around 40 minutes once the ccache is warm. The QEMU-based Docker image build on GitHub Actions takes about 5 minutes per version (under a minute for the full variant, closer to 4 for the slim one because of the &lt;code&gt;ldd&lt;/code&gt;/&lt;code&gt;apt-mark&lt;/code&gt; cleanup dance). We currently build Node.js 22 LTS and 24 Current on a Banana Pi F3 (SpacemiT K1 SoC, 8 cores, 16GB RAM) running Debian Trixie.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Quick test&lt;/span&gt;
docker run &lt;span class="nt"&gt;--platform&lt;/span&gt; linux/riscv64 gounthar/node-riscv64:24.13.1-trixie node &lt;span class="nt"&gt;-v&lt;/span&gt;

&lt;span class="c"&gt;# Interactive session&lt;/span&gt;
docker run &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;--platform&lt;/span&gt; linux/riscv64 gounthar/node-riscv64:24.13.1-trixie bash

&lt;span class="c"&gt;# As a build environment&lt;/span&gt;
docker run &lt;span class="nt"&gt;--platform&lt;/span&gt; linux/riscv64 &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;pwd&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;:/app &lt;span class="nt"&gt;-w&lt;/span&gt; /app gounthar/node-riscv64:24.13.1-trixie npm &lt;span class="nb"&gt;test&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The source is at &lt;a href="https://github.com/gounthar/unofficial-builds" rel="noopener noreferrer"&gt;github.com/gounthar/unofficial-builds&lt;/a&gt;, the images are at &lt;a href="https://hub.docker.com/r/gounthar/node-riscv64" rel="noopener noreferrer"&gt;hub.docker.com/r/gounthar/node-riscv64&lt;/a&gt;, and contributions are welcome.&lt;/p&gt;

&lt;p&gt;If you're experimenting with RISC-V or just curious about where this architecture is heading, I'd love to hear about your experience. The gap between "officially supported" and "actually usable" is where the fun happens.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>node</category>
      <category>riscv</category>
    </item>
    <item>
      <title>What About iOS? Or, How a $30 Android Phone Embarrasses a $1000 iPad</title>
      <dc:creator>Bruno Verachten</dc:creator>
      <pubDate>Mon, 09 Feb 2026 19:36:51 +0000</pubDate>
      <link>https://forem.com/gounthar/what-about-ios-or-how-a-30-android-phone-embarrasses-a-1000-ipad-25fc</link>
      <guid>https://forem.com/gounthar/what-about-ios-or-how-a-30-android-phone-embarrasses-a-1000-ipad-25fc</guid>
      <description>&lt;p&gt;&lt;em&gt;Photo by &lt;a href="https://unsplash.com/@holmo" rel="noopener noreferrer"&gt;Holmo&lt;/a&gt; on &lt;a href="https://unsplash.com/photos/96w-2ZgEzG0" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I was giving my talk (the one about running Jenkins on old Android phones with Termux) at a conference, and the demo had gone well. Jenkins was running, the agent was connected, builds were passing. I was feeling pretty good about the whole thing.&lt;/p&gt;

&lt;p&gt;Then someone in the audience raised their hand.&lt;/p&gt;

&lt;p&gt;"This is cool, but... what about iOS? Can you do the same thing on an iPhone?"&lt;/p&gt;

&lt;p&gt;I paused. I actually didn't know. I mean, I &lt;em&gt;assumed&lt;/em&gt; iOS would be harder. Apple locks things down way more than Android. But I'd never actually investigated the question properly. I gave a vague answer about Apple's sandbox restrictions and moved on, but the question kept nagging at me.&lt;/p&gt;

&lt;p&gt;So I did what any self-respecting tinkerer would do: I went down the rabbit hole.&lt;/p&gt;

&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;No, you cannot run Jenkins on iOS.&lt;/strong&gt; The closest thing to Termux on iOS is &lt;a href="https://ish.app" rel="noopener noreferrer"&gt;iSH&lt;/a&gt; (Alpine Linux via x86 emulation), but Java is fundamentally broken on it. The only theoretically viable path involves running a full Linux VM on a $1000+ iPad Pro, and nobody has ever documented actually doing it. A $30 used Android phone does natively what a $1000 iPad can barely do in a virtual machine.&lt;/p&gt;

&lt;p&gt;If you're at a conference and someone asks "What about iOS?", that's your answer.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now, if you want to know &lt;em&gt;how&lt;/em&gt; I arrived at that conclusion, and all the dead ends I explored along the way... read on.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Investigation: Let's Start With iSH
&lt;/h2&gt;

&lt;p&gt;First, I needed to find the iOS equivalent of Termux. After some digging, the answer was clear: &lt;a href="https://apps.apple.com/us/app/ish-shell/id1436902243" rel="noopener noreferrer"&gt;iSH Shell&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;iSH is an impressive project. 19.2k stars on &lt;a href="https://github.com/ish-app/ish" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;, actively maintained, and it runs Alpine Linux on your iPhone or iPad via usermode x86 emulation. It gives you a real shell, a real package manager (&lt;code&gt;apk&lt;/code&gt; with 14,000+ packages), and you can install stuff like Git, Python, Node.js, GCC, vim, curl...&lt;/p&gt;

&lt;p&gt;Nice! This sounds promising!&lt;/p&gt;

&lt;p&gt;Let's check the important boxes for running Jenkins:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Shell environment? &lt;strong&gt;Yes.&lt;/strong&gt; Bash, zsh, the works.&lt;/li&gt;
&lt;li&gt;Package manager? &lt;strong&gt;Yes.&lt;/strong&gt; Alpine's &lt;code&gt;apk&lt;/code&gt;, fully functional.&lt;/li&gt;
&lt;li&gt;SSH server? &lt;strong&gt;Yes.&lt;/strong&gt; You can accept incoming connections (&lt;a href="https://github.com/ish-app/ish/wiki/Running-an-SSH-server" rel="noopener noreferrer"&gt;wiki&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;Git? &lt;strong&gt;Yes.&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Python? &lt;strong&gt;Yes.&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So far so good. Now the big one:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Java/OpenJDK? ...&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let me check.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Java Wall
&lt;/h2&gt;

&lt;p&gt;This is where everything falls apart.&lt;/p&gt;

&lt;p&gt;I found &lt;a href="https://github.com/ish-app/ish/issues/306" rel="noopener noreferrer"&gt;issue #306&lt;/a&gt; on the iSH repository. Then &lt;a href="https://github.com/ish-app/ish/issues/1560" rel="noopener noreferrer"&gt;issue #1560&lt;/a&gt;, helpfully titled "Java is very broken." Then &lt;a href="https://github.com/ish-app/ish/issues/2589" rel="noopener noreferrer"&gt;issue #2589&lt;/a&gt;. The picture was grim.&lt;/p&gt;

&lt;p&gt;Here's what happens when you try to run Java on iSH:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;JDK 7&lt;/strong&gt;: Barely works. You need manual heap flags (&lt;code&gt;java -mx256m&lt;/code&gt;). Functional but ancient. Jenkins dropped JDK 7 support years ago.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;JDK 8+&lt;/strong&gt;: Crashes immediately with "Too small initial heap." Can't start.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;JDK 11+&lt;/strong&gt;: Completely non-functional. Dead on arrival.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;JDK 21&lt;/strong&gt; (what Jenkins needs today): Forget about it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Damn!&lt;/p&gt;

&lt;p&gt;The root causes are architectural, not just bugs that'll get fixed someday:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Missing SSE instruction support&lt;/strong&gt; in iSH's x86 emulator. Modern JVMs assume SSE instructions exist.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Missing shared futex support&lt;/strong&gt;. The JVM's threading model depends on these.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;32-bit only&lt;/strong&gt; (i386). iSH emulates an old x86 processor, not x86_64.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;And the big one: &lt;strong&gt;Apple's W^X memory policy forbids JIT compilation&lt;/strong&gt;. The JVM &lt;em&gt;needs&lt;/em&gt; JIT to function at any reasonable performance level. Without it, you're running through an interpreter inside an emulator. Even if it worked, it would be comically slow.&lt;/p&gt;

&lt;p&gt;Of course... I should have seen this coming. Jenkins is a Java application. No Java, no Jenkins. Investigation over?&lt;/p&gt;

&lt;p&gt;Not quite. Let me check a few more things.&lt;/p&gt;

&lt;h2&gt;
  
  
  The JIT Saga: Apple Says No
&lt;/h2&gt;

&lt;p&gt;Here's a detail I found fascinating and infuriating in equal measure.&lt;/p&gt;

&lt;p&gt;The iSH developers actually tried to get JIT access through the EU's Digital Markets Act (DMA). In July 2025, they filed an interoperability request with Apple, arguing that JIT compilation is essential for their app to function properly.&lt;/p&gt;

&lt;p&gt;Apple denied the request, classifying iSH as ineligible.&lt;/p&gt;

&lt;p&gt;There's a &lt;a href="https://ish.app/blog/ish-jit-and-eu" rel="noopener noreferrer"&gt;blog post about it&lt;/a&gt; if you want to read the details. The short version: Apple controls the platform, Apple says no JIT for third-party apps, and the EU regulations weren't enough to change that.&lt;/p&gt;

&lt;p&gt;Compare this with Android: Termux runs native ARM64 binaries. No emulation layer. No JIT restrictions. &lt;code&gt;pkg install openjdk-21&lt;/code&gt; and you're done. The JVM runs at full native speed.&lt;/p&gt;

&lt;p&gt;You know what? Let's keep investigating. Maybe there's another way.&lt;/p&gt;

&lt;h2&gt;
  
  
  What About a Full VM?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://getutm.app" rel="noopener noreferrer"&gt;UTM&lt;/a&gt; is a QEMU-based virtual machine app that can run full Linux distributions on iOS. Two versions exist:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;UTM SE&lt;/strong&gt; (free, App Store): No JIT. x86 emulation at roughly 2-5% of native speed. Jenkins would be... let's say "meditative." Actually, unusable.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;UTM with JIT&lt;/strong&gt; (sideloaded or via EU AltStore): ARM64 Linux VMs running at 80-95% native speed on M-series iPads.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The JIT version is interesting! On a 16GB iPad Pro with an M-series chip, you could theoretically run an ARM64 Ubuntu VM with enough RAM for Jenkins.&lt;/p&gt;

&lt;p&gt;Here's the problem:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;iPad Model&lt;/th&gt;
&lt;th&gt;Total RAM&lt;/th&gt;
&lt;th&gt;Practical VM Allocation&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;iPad Air M1 (8GB)&lt;/td&gt;
&lt;td&gt;8 GB&lt;/td&gt;
&lt;td&gt;2-3.5 GB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;iPad Pro M1/M2 (16GB)&lt;/td&gt;
&lt;td&gt;16 GB&lt;/td&gt;
&lt;td&gt;6-10 GB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;iPad Pro M4 (16GB)&lt;/td&gt;
&lt;td&gt;16 GB&lt;/td&gt;
&lt;td&gt;8-12 GB&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Jenkins needs roughly 1.5-2GB minimum (a 512MB heap plus JVM and OS overhead). So you'd need at least an iPad Pro to even attempt this. And there's a fun detail: hardware-accelerated VMs on UTM have double RAM overhead due to JIT mirror mapping. ¯\_(ツ)_/¯&lt;/p&gt;
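&lt;p&gt;For scale, that "512MB heap" figure corresponds to a launch like this minimal sketch (the &lt;code&gt;jenkins.war&lt;/code&gt; path and port are illustrative, not from a tested VM setup):&lt;/p&gt;

```shell
# Back-of-the-envelope for the table above: the smallest Jenkins launch
# you'd realistically attempt inside the VM. Paths and port are hypothetical.
HEAP_MIN="256m"
HEAP_MAX="512m"
LAUNCH_CMD="java -Xms${HEAP_MIN} -Xmx${HEAP_MAX} -jar jenkins.war --httpPort=8080"
echo "$LAUNCH_CMD"
```

On top of that heap, the JVM's metaspace, threads, and the guest OS eat the rest of the 1.5-2GB.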

&lt;p&gt;And the background execution story is rough. iOS aggressively kills background apps. There's some hope with iPadOS 26's &lt;code&gt;BGContinuedProcessingTask&lt;/code&gt; API (&lt;a href="https://developer.apple.com/videos/play/wwdc2025/227/" rel="noopener noreferrer"&gt;WWDC 2025 session&lt;/a&gt;), and there's an &lt;a href="https://github.com/utmapp/UTM/issues/7220" rel="noopener noreferrer"&gt;open UTM issue tracking this&lt;/a&gt;, but as of early 2026, a VM-based Jenkins would die every time you switch to Safari.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The kicker: nobody has documented actually running Jenkins this way.&lt;/strong&gt; Not a blog post, not a forum thread, not even a tweet. Zero evidence of anyone successfully doing it. I looked hard.&lt;/p&gt;

&lt;p&gt;So the "best" iOS path to Jenkins requires a $1000+ iPad Pro, sideloading or EU DMA provisions for JIT access, a full Linux VM with careful RAM management, and... hope.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Dead-End Parade
&lt;/h2&gt;

&lt;p&gt;For completeness, I checked everything else I could find. Let me save you the trouble:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Approach&lt;/th&gt;
&lt;th&gt;Java?&lt;/th&gt;
&lt;th&gt;SSH Server?&lt;/th&gt;
&lt;th&gt;Background?&lt;/th&gt;
&lt;th&gt;Verdict&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://github.com/holzschu/a-shell" rel="noopener noreferrer"&gt;a-Shell&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Cool WebAssembly environment, but not a server&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://apps.apple.com/us/app/blink-shell-build-code/id1594898306" rel="noopener noreferrer"&gt;Blink Shell&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;SSH &lt;em&gt;client&lt;/em&gt;. Great for connecting TO Jenkins, not running it&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;a href="https://github.com/emkey1/AOK-Filesystem-Tools" rel="noopener noreferrer"&gt;iSH-AOK&lt;/a&gt; (fork)&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Same x86 emulation engine as iSH. Same Java problems. Dormant since June 2024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;iOS Shortcuts&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Can trigger Jenkins builds via REST API, but that's about it&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Play.js&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;JavaScript IDE&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Pythonista 3&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Python IDE&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Docker on iOS&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Would require Linux kernel features that iOS's XNU kernel doesn't have&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
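&lt;p&gt;To be fair to iOS Shortcuts, the one thing it &lt;em&gt;can&lt;/em&gt; do, triggering a build over Jenkins' remote access API, is a single request. A sketch (host, job name, and token below are hypothetical placeholders):&lt;/p&gt;

```shell
# Jenkins' remote build trigger endpoint: /job/<name>/build?token=<token>.
# A Shortcut would POST this URL; everything below is a placeholder.
JENKINS_URL="http://phone.local:8080"
JOB="hello-world"
TOKEN="my-trigger-token"
TRIGGER_URL="${JENKINS_URL}/job/${JOB}/build?token=${TOKEN}"
echo "$TRIGGER_URL"
# e.g.: curl -X POST "$TRIGGER_URL"
```

Useful for kicking off builds on a server that already exists somewhere else, which is exactly the point: the server can't be the iPhone.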

&lt;p&gt;Every. Single. One. A dead end for running Jenkins.&lt;/p&gt;

&lt;h2&gt;
  
  
  "But What About Jailbreaking?"
&lt;/h2&gt;

&lt;p&gt;Sure, let's go there. Jailbreaking could theoretically give you &lt;code&gt;launchd&lt;/code&gt; daemon persistence, a real SSH server on port 22, and Linux chroot environments.&lt;/p&gt;

&lt;p&gt;But here's the state of jailbreaking in early 2026:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;iOS Support&lt;/th&gt;
&lt;th&gt;Devices&lt;/th&gt;
&lt;th&gt;Status&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://palera.in/" rel="noopener noreferrer"&gt;Palera1n&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;iOS 15.0-18.7.4&lt;/td&gt;
&lt;td&gt;A8-A11 only (iPhone 6s through X)&lt;/td&gt;
&lt;td&gt;Active but ancient hardware only&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Dopamine&lt;/td&gt;
&lt;td&gt;iOS 15.0-16.6.1&lt;/td&gt;
&lt;td&gt;A12+&lt;/td&gt;
&lt;td&gt;Stalled. No iOS 17+&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;TrollStore&lt;/td&gt;
&lt;td&gt;iOS 15.5-17.0&lt;/td&gt;
&lt;td&gt;All&lt;/td&gt;
&lt;td&gt;Apple patched the vulnerability in iOS 17.0.1&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;No jailbreak exists for iPhone 12 or newer on iOS 17.1+. Jailbreaking is effectively dead for modern hardware. And even if you &lt;em&gt;could&lt;/em&gt; jailbreak, there's no official OpenJDK package. You'd need to manually compile it.&lt;/p&gt;

&lt;p&gt;So that's not a realistic answer either.&lt;/p&gt;

&lt;p&gt;Before we wrap up the investigation, there's one more recent development worth mentioning.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Glimmer of Hope: OpenJDK Mobile
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://openjdk.org/projects/mobile/" rel="noopener noreferrer"&gt;OpenJDK Mobile project&lt;/a&gt; has made progress. &lt;a href="https://www.infoq.com/news/2025/11/java-on-ios/" rel="noopener noreferrer"&gt;InfoQ reported in November 2025&lt;/a&gt; that OpenJDK can build and run on iOS using the Zero interpreter (pure C++, no JIT) enhanced with AOT-compiled methods from Project Leyden. &lt;a href="https://gluonhq.com/bringing-openjdk-to-mobile-a-community-effort/" rel="noopener noreferrer"&gt;Gluon&lt;/a&gt; has built automated build pipelines.&lt;/p&gt;

&lt;p&gt;Interesting! But (and this is a big but) it's designed for embedding Java in iOS apps, not running standalone JVM servers. Jenkins is far too dynamic for AOT compilation (hundreds of plugins, dynamic class loading everywhere). This doesn't help us.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Comparison That Says It All
&lt;/h2&gt;

&lt;p&gt;So after exploring every possible path (native apps, emulation, VMs, jailbreaking, even emerging OpenJDK efforts), let me lay out the complete picture in one table:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Capability&lt;/th&gt;
&lt;th&gt;Termux (Android)&lt;/th&gt;
&lt;th&gt;iSH (iOS)&lt;/th&gt;
&lt;th&gt;UTM+JIT (iPad Pro)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Execution&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Native ARM64&lt;/td&gt;
&lt;td&gt;x86 emulation (5-100x slower)&lt;/td&gt;
&lt;td&gt;Near-native ARM64 (in VM)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Package manager&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;pkg&lt;/code&gt; (Debian-based)&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;apk&lt;/code&gt; (Alpine i386)&lt;/td&gt;
&lt;td&gt;Guest OS (&lt;code&gt;apt&lt;/code&gt;/&lt;code&gt;dnf&lt;/code&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;SSH server&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Port 8022, background&lt;/td&gt;
&lt;td&gt;Port 22, foreground only&lt;/td&gt;
&lt;td&gt;Inside VM&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Java/OpenJDK 21&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Works perfectly&lt;/td&gt;
&lt;td&gt;Broken&lt;/td&gt;
&lt;td&gt;Works (inside VM)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Background services&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;termux-services (runit/sv)&lt;/td&gt;
&lt;td&gt;Unreliable (location hack)&lt;/td&gt;
&lt;td&gt;Partial (improving)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Boot persistence&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Termux:Boot&lt;/td&gt;
&lt;td&gt;Impossible&lt;/td&gt;
&lt;td&gt;Impossible&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Jenkins&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Fully working&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Impossible&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Theoretically possible&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Cost&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Free (any old Android)&lt;/td&gt;
&lt;td&gt;Free (App Store)&lt;/td&gt;
&lt;td&gt;$1000+ iPad Pro 16GB&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Look at that last row. Free versus a thousand dollars, and the free option actually works.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why iOS Is Fundamentally Different
&lt;/h2&gt;

&lt;p&gt;This isn't a matter of finding the right app or the right workaround. The gap between Android and iOS for this use case is architectural:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;No JIT compilation&lt;/strong&gt; (W^X memory policy). Kills JVM performance. Apple denied the EU DMA exemption request.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Aggressive background app killing&lt;/strong&gt;. iOS suspends apps after minutes. You can hack around this with &lt;a href="https://github.com/ish-app/ish/wiki/Running-in-background" rel="noopener noreferrer"&gt;location services in the background&lt;/a&gt; (&lt;code&gt;cat /dev/location &amp;gt; /dev/null &amp;amp;&lt;/code&gt;), but it's unreliable.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No boot-time auto-start&lt;/strong&gt;. There is no equivalent to Termux:Boot. You cannot have Jenkins start when the device boots.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;App sandbox isolation&lt;/strong&gt;. Apps cannot spawn arbitrary processes or access system directories.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No raw syscall access&lt;/strong&gt;. Everything must go through Apple's approved APIs.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Android's architecture is simply more open. Termux runs native ARM64 binaries, manages services via runit, and persists across reboots, all without root. That's not a feature gap. It's a philosophy gap.&lt;/p&gt;
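&lt;p&gt;The Termux side of that comparison is short enough to show. A hedged sketch using &lt;code&gt;sshd&lt;/code&gt; as the example service, guarded so it's a safe no-op outside Termux:&lt;/p&gt;

```shell
# termux-services wraps runit: sv-enable marks a service to start now and on
# every Termux launch. Guarded so this no-ops on a non-Termux machine.
if command -v sv-enable >/dev/null 2>&1; then
  sv-enable sshd
  MSG="sshd enabled via termux-services"
else
  MSG="termux-services not present; run this inside Termux"
fi
echo "$MSG"
```

That two-command service model is what iOS has no equivalent for, jailbroken or not.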

&lt;h2&gt;
  
  
  The Punchline
&lt;/h2&gt;

&lt;p&gt;So here's what I'll say next time someone asks "What about iOS?" at a conference:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;iSH is the closest thing to Termux on iOS. It runs Alpine Linux with a real package manager and even accepts incoming SSH connections. Pretty impressive! But Java is broken on it due to Apple's ban on JIT compilation and missing CPU instruction support in the emulator. The only way to run Jenkins on an iPad would be inside a full Linux VM using UTM on a 16GB iPad Pro, and even that has never been documented working. Meanwhile, a $30 used Android phone does natively what a $1000 iPad can barely do in a VM.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;How cool is that? Well, not cool for iOS users. But it definitively validates the Android/Termux approach.&lt;/p&gt;

&lt;p&gt;Those old Android phones gathering dust in your drawer? They're not e-waste. They're infrastructure waiting to happen. And no amount of Apple silicon can change the fact that an open platform beats a locked-down one when you need to run actual server workloads.&lt;/p&gt;

&lt;p&gt;To be continued in the next episode. I'm already thinking about multi-device clusters and load balancing across a fleet of phones. But that's a story for another day.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;The project:&lt;/strong&gt; &lt;a href="https://github.com/gounthar/termux-jenkins-automation" rel="noopener noreferrer"&gt;termux-jenkins-automation on GitHub&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The talk:&lt;/strong&gt; If you want to see Jenkins running on a phone and ask your own tricky audience questions, come find me at the next conference. I'll have a better answer for the iOS question this time.&lt;/p&gt;

</description>
      <category>jenkins</category>
      <category>android</category>
      <category>ios</category>
    </item>
    <item>
      <title>The Ring Around the Rosie: Upgrading Next.js on RISC-V from 13.5.6 to 14.2.35</title>
      <dc:creator>Bruno Verachten</dc:creator>
      <pubDate>Sun, 28 Dec 2025 12:16:34 +0000</pubDate>
      <link>https://forem.com/gounthar/the-ring-around-the-rosie-upgrading-nextjs-on-risc-v-from-1356-to-14235-maj</link>
      <guid>https://forem.com/gounthar/the-ring-around-the-rosie-upgrading-nextjs-on-risc-v-from-1356-to-14235-maj</guid>
      <description>&lt;p&gt;&lt;em&gt;Photo by &lt;a href="https://unsplash.com/@kkalerry" rel="noopener noreferrer"&gt;Klara Kulikova&lt;/a&gt; on &lt;a href="https://unsplash.com/photos/JUgm3RZFhko" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I had a working Next.js 13.5.6 setup on my Banana Pi F3. Then Dependabot showed up with PRs bumping to 14.2.35, and suddenly my carefully built SWC binary was useless. What followed was a two-hour journey through cross-compilation failures, Cargo dependency rabbit holes, and the eventual realization that the same trick that worked for 13.5.6 was fundamentally broken in 14.x. Here is how I fixed it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem: Version Mismatch
&lt;/h2&gt;

&lt;p&gt;I have been running Next.js on RISC-V hardware for a while now. The Banana Pi F3 is my test bed: 8 cores, 15GB of RAM, running Debian 13. Not exactly a speed demon, but it gets the job done.&lt;/p&gt;

&lt;p&gt;Back in November, I successfully built &lt;code&gt;@next/swc&lt;/code&gt; for version 13.5.6 using a simple workaround: the &lt;code&gt;--no-default-features&lt;/code&gt; flag. This disabled TLS dependencies that pulled in the &lt;code&gt;ring&lt;/code&gt; cryptographic library, which famously does not support riscv64.&lt;/p&gt;

&lt;p&gt;Then Dependabot filed PRs #22 and #23, bumping Next.js to 14.2.35.&lt;/p&gt;

&lt;p&gt;The problem? SWC binaries are version-locked. My 13.5.6 binary would not load in 14.2.35. Time to rebuild.&lt;/p&gt;

&lt;h2&gt;
  
  
  Failed Attempt 1: Cross-Compilation with cross-rs
&lt;/h2&gt;

&lt;p&gt;Building on the Banana Pi takes hours. The 13.5.6 build ran for about 4 hours. Surely there is a faster way?&lt;/p&gt;

&lt;p&gt;Cross-compilation seemed like the obvious answer. I have a perfectly good x86_64 machine running WSL2. Why not build there and copy the binary?&lt;/p&gt;

&lt;p&gt;I set up &lt;code&gt;cross-rs&lt;/code&gt;, the standard Rust cross-compilation tool:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="c"&gt;# Cross.toml&lt;/span&gt;
&lt;span class="nn"&gt;[target.riscv64gc-unknown-linux-gnu]&lt;/span&gt;
&lt;span class="py"&gt;image&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"ghcr.io/cross-rs/riscv64gc-unknown-linux-gnu:main"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then ran:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cross build &lt;span class="nt"&gt;--release&lt;/span&gt; &lt;span class="nt"&gt;--target&lt;/span&gt; riscv64gc-unknown-linux-gnu &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--manifest-path&lt;/span&gt; crates/napi/Cargo.toml &lt;span class="nt"&gt;--no-default-features&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It failed. Not immediately, but about 200 crates in:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;error: failed to run custom build command for `ring v0.16.20`

Caused by:
  process didn't exit successfully: [...]/build-script-build

  ring build.rs panic: Target architecture not supported
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The issue is subtle but fundamental. The &lt;code&gt;ring&lt;/code&gt; crate embeds hand-written assembly for cryptographic operations. Each target architecture needs its own assembly files. When &lt;code&gt;ring&lt;/code&gt;'s &lt;code&gt;build.rs&lt;/code&gt; runs, it checks the target architecture and panics if there is no matching assembly.&lt;/p&gt;

&lt;p&gt;Cross-compilation does not help here. The build script still runs, still checks for riscv64 assembly, still panics when it finds none.&lt;/p&gt;
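&lt;p&gt;A simplified model of that behavior (this illustrates the logic, it is &lt;em&gt;not&lt;/em&gt; &lt;code&gt;ring&lt;/code&gt;'s actual build script):&lt;/p&gt;

```shell
# Cargo exports the *target* triple to build scripts (via $TARGET), so the
# architecture check depends on what you build FOR, not where you build.
TARGET="riscv64gc-unknown-linux-gnu"
case "$TARGET" in
  x86_64-*|aarch64-*|arm-*|i686-*) RESULT="assembly available for $TARGET" ;;
  *) RESULT="no assembly for $TARGET -> build.rs panics" ;;
esac
echo "$RESULT"
```

Swap the host machine all you like; as long as the target triple is riscv64, ring 0.16 hits the fallthrough branch.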

&lt;h2&gt;
  
  
  Failed Attempt 2: Patching ring/rustls Versions
&lt;/h2&gt;

&lt;p&gt;Maybe I could upgrade &lt;code&gt;ring&lt;/code&gt;? Version 0.17 added riscv64 support. I tried patching &lt;code&gt;Cargo.toml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="nn"&gt;[patch.crates-io]&lt;/span&gt;
&lt;span class="py"&gt;ring&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"0.17"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Cargo rejected it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;error: failed to select a version for `ring`.
    ... required by rustls v0.20.9
    ... which requires ring ^0.16
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The caret (&lt;code&gt;^&lt;/code&gt;) in Cargo means "compatible version," and 0.17 is not compatible with 0.16 under semver rules (pre-1.0 versions treat minor bumps as breaking changes).&lt;/p&gt;

&lt;p&gt;To use ring 0.17, I would need to upgrade the entire chain: &lt;code&gt;ring&lt;/code&gt; -&amp;gt; &lt;code&gt;rustls&lt;/code&gt; -&amp;gt; &lt;code&gt;hyper-rustls&lt;/code&gt; -&amp;gt; &lt;code&gt;reqwest&lt;/code&gt; -&amp;gt; &lt;code&gt;turbo-tasks-fetch&lt;/code&gt;. That is not a quick patch. That is a major dependency overhaul that would diverge significantly from upstream.&lt;/p&gt;

&lt;h2&gt;
  
  
  Failed Attempt 3: The Old Workaround
&lt;/h2&gt;

&lt;p&gt;Fine. Forget cross-compilation. I will just build natively like I did for 13.5.6.&lt;/p&gt;

&lt;p&gt;I SSH'd into the Banana Pi, cloned Next.js 14.2.35, and ran the exact command that worked before:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cargo build &lt;span class="nt"&gt;--release&lt;/span&gt; &lt;span class="nt"&gt;--manifest-path&lt;/span&gt; crates/napi/Cargo.toml &lt;span class="nt"&gt;--no-default-features&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It compiled 282 crates. Then:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;error: failed to run custom build command for `ring v0.16.20`
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Wait, what? The &lt;code&gt;--no-default-features&lt;/code&gt; flag was supposed to skip ring entirely. It worked in 13.5.6. Why is it failing now?&lt;/p&gt;

&lt;h2&gt;
  
  
  Root Cause: The Cargo Configuration Changed
&lt;/h2&gt;

&lt;p&gt;I dug into the &lt;code&gt;Cargo.toml&lt;/code&gt; files, comparing 13.5.6 to 14.2.35.&lt;/p&gt;

&lt;p&gt;In 13.5.6:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="nn"&gt;[features]&lt;/span&gt;
&lt;span class="py"&gt;default&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"rustls-tls"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="py"&gt;rustls-tls&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="py"&gt;native-tls&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;TLS was a &lt;strong&gt;default feature&lt;/strong&gt;. Passing &lt;code&gt;--no-default-features&lt;/code&gt; disabled it. No TLS, no ring, build succeeds.&lt;/p&gt;

&lt;p&gt;In 14.2.35:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="c"&gt;# Enable specific tls features per-target.&lt;/span&gt;
&lt;span class="nn"&gt;[target.'cfg(all(target_os = "windows", target_arch = "aarch64"))'.dependencies]&lt;/span&gt;
&lt;span class="py"&gt;next-core&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="py"&gt;workspace&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="py"&gt;features&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"native-tls"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nn"&gt;[target.'cfg(not(any(all(target_os = "windows", target_arch = "aarch64"), target_arch="wasm32")))'.dependencies]&lt;/span&gt;
&lt;span class="py"&gt;next-core&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="py"&gt;workspace&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="py"&gt;features&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"rustls-tls"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;TLS is now a &lt;strong&gt;target-specific dependency&lt;/strong&gt;. The second block says: for everything that is not Windows ARM64 or WASM, use rustls-tls.&lt;/p&gt;

&lt;p&gt;riscv64 is not Windows ARM64. riscv64 is not WASM. So riscv64 gets rustls-tls.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;--no-default-features&lt;/code&gt; flag does nothing here because this is not a feature. It is a conditional dependency based on target platform.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Fix: native-tls for riscv64
&lt;/h2&gt;

&lt;p&gt;The solution was staring at me from the first target block. Windows ARM64 uses &lt;code&gt;native-tls&lt;/code&gt; instead of &lt;code&gt;rustls-tls&lt;/code&gt;. Why? Probably similar issues with ring on that platform.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;native-tls&lt;/code&gt; uses the system's TLS library, OpenSSL on Linux. OpenSSL supports riscv64. My Debian 13 install has OpenSSL 3.5.4.&lt;/p&gt;

&lt;p&gt;I patched &lt;code&gt;packages/next-swc/crates/napi/Cargo.toml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="c"&gt;# Use native-tls for riscv64 (ring 0.16.20 doesn't support riscv64)&lt;/span&gt;
&lt;span class="nn"&gt;[target.'cfg(all(target_os = "linux", target_arch = "riscv64"))'.dependencies]&lt;/span&gt;
&lt;span class="py"&gt;next-core&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="py"&gt;workspace&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="py"&gt;features&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"native-tls"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# Exclude riscv64 from rustls-tls targets&lt;/span&gt;
&lt;span class="nn"&gt;[target.'cfg(not(any(all(target_os = "windows", target_arch = "aarch64"), all(target_os = "linux", target_arch = "riscv64"), target_arch="wasm32")))'.dependencies]&lt;/span&gt;
&lt;span class="py"&gt;next-core&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="py"&gt;workspace&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="py"&gt;features&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;"rustls-tls"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Two changes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Add a riscv64-specific block that uses &lt;code&gt;native-tls&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Modify the fallback block to exclude riscv64&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Started the build again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cargo build &lt;span class="nt"&gt;--release&lt;/span&gt; &lt;span class="nt"&gt;--manifest-path&lt;/span&gt; crates/napi/Cargo.toml &lt;span class="nt"&gt;--no-default-features&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The Build
&lt;/h2&gt;

&lt;p&gt;130 minutes. About half the time of the 13.5.6 build, though I suspect that is due to better incremental caching rather than any architectural improvement.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Compiling ring v0.17.8
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Wait, ring? I thought we were avoiding ring?&lt;/p&gt;

&lt;p&gt;Turns out the &lt;code&gt;native-tls&lt;/code&gt; build still pulls &lt;code&gt;ring&lt;/code&gt; into the graph for some operations, but as version 0.17.8, which supports riscv64. The constraint chain is different: nothing on the &lt;code&gt;native-tls&lt;/code&gt; path pins ring to &lt;code&gt;^0.16&lt;/code&gt;, so Cargo is free to resolve the newer release.&lt;/p&gt;
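&lt;p&gt;You can check this yourself: &lt;code&gt;cargo tree -i&lt;/code&gt; answers "who depends on ring, and at which version." Shown here as the command string only, since it needs the Next.js checkout to run:&lt;/p&gt;

```shell
# -i inverts the tree: instead of ring's dependencies, print the chain of
# crates that depend on ring. Run from the Next.js source checkout.
CMD="cargo tree --manifest-path crates/napi/Cargo.toml -i ring"
echo "$CMD"
```

If the patch worked, the output shows ring 0.17.x reached through the native-tls side rather than ring 0.16.20 through rustls.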

&lt;p&gt;Binary size: 230MB. About what I expected for a debug-info-included release build.&lt;/p&gt;

&lt;h2&gt;
  
  
  Testing
&lt;/h2&gt;

&lt;p&gt;I ran the test suite against both App Router and Pages Router test apps:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./scripts/run-tests.sh tests/app-router
./scripts/run-tests.sh tests/pages-router
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Results:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Test App&lt;/th&gt;
&lt;th&gt;Status&lt;/th&gt;
&lt;th&gt;Notes&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;App Router&lt;/td&gt;
&lt;td&gt;Pass&lt;/td&gt;
&lt;td&gt;9 pages including SSG and SSR&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Pages Router&lt;/td&gt;
&lt;td&gt;Pass&lt;/td&gt;
&lt;td&gt;6 pages including API routes&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Both work. Native SWC compilation, no Babel fallback needed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bonus Discovery: No Loader Patch Required
&lt;/h2&gt;

&lt;p&gt;In 13.5.6, I had to patch &lt;code&gt;node_modules/next/dist/build/swc/index.js&lt;/code&gt; to add riscv64 to the architecture map:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Had to add this line in 13.5.6&lt;/span&gt;
&lt;span class="nx"&gt;riscv64&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;linux&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;riscv64gc&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I checked 14.2.35's loader. It is already there:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;linux&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nl"&gt;x64&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;linux&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;x64&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;triple&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;&lt;span class="nx"&gt;triple&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;abi&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;gnux32&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="nx"&gt;arm64&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;linux&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;arm64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;riscv64&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;linux&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;riscv64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;// Already present!&lt;/span&gt;
    &lt;span class="nx"&gt;arm&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;linux&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;arm&lt;/span&gt;
&lt;span class="p"&gt;},&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Someone at Vercel added riscv64 support between 13.5.6 and 14.x. The only thing missing is the actual binary, and now we have that too.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Dependency Chain Explained
&lt;/h2&gt;

&lt;p&gt;For anyone who wants to understand why this was so painful, here is the dependency chain:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@next/swc
  -&amp;gt; turbo-tasks-fetch
    -&amp;gt; reqwest
      -&amp;gt; hyper-rustls
        -&amp;gt; rustls 0.20.9
          -&amp;gt; ring ^0.16  &amp;lt;-- PROBLEM: ring 0.16 has no riscv64 assembly
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And why &lt;code&gt;native-tls&lt;/code&gt; works:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@next/swc
  -&amp;gt; turbo-tasks-fetch
    -&amp;gt; reqwest (with native-tls feature)
      -&amp;gt; native-tls
        -&amp;gt; openssl-sys  &amp;lt;-- Uses system OpenSSL, which supports riscv64
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;ring&lt;/code&gt; crate is a cryptographic library that embeds hand-written assembly for performance, so each architecture needs its own assembly routines. riscv64 support landed in ring 0.17, but rustls 0.20 pins ring to 0.16.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;native-tls&lt;/code&gt; crate delegates to the operating system's TLS implementation. On Linux, that is OpenSSL. OpenSSL 3.x has full riscv64 support through its own assembly routines and C fallbacks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;For anyone building Next.js on exotic architectures:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Version matters&lt;/strong&gt;: Build flags that work in one version may not work in another. Check the &lt;code&gt;Cargo.toml&lt;/code&gt; configuration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cross-compilation will not save you from ring&lt;/strong&gt;: The build script runs on the host but checks the target architecture. No assembly for target = build failure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;native-tls is your friend&lt;/strong&gt;: If rustls fails due to ring, try native-tls. It uses system libraries that typically have broader architecture support.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Check upstream first&lt;/strong&gt;: Next.js 14.x already has riscv64 in the loader. They are aware of the architecture. The binary is the only missing piece.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
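
&lt;p&gt;Point 2 deserves a closer look. A build script runs on the host, but Cargo tells it which architecture it is building &lt;em&gt;for&lt;/em&gt; through environment variables, and that target is what ring checks. A minimal illustrative sketch (the architecture list is hypothetical; ring's real build logic is more involved):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;// build.rs sketch: Cargo sets CARGO_CFG_TARGET_ARCH to the *target*
// architecture, not the architecture of the machine running the script.
fn main() {
    let target = std::env::var("CARGO_CFG_TARGET_ARCH").unwrap_or_default();
    // Illustrative list: ring 0.16 only shipped assembly for some architectures.
    let has_assembly = matches!(target.as_str(), "x86_64" | "aarch64" | "arm" | "x86");
    if !has_assembly {
        // This is why cross-compiling on a fast x64 host still fails:
        // the check is about the target, not the host.
        println!("cargo:warning=no assembly for target arch '{target}'");
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
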

&lt;p&gt;&lt;strong&gt;For the Next.js team&lt;/strong&gt; (if anyone is reading): The fix is a three-line patch to &lt;code&gt;packages/next-swc/crates/napi/Cargo.toml&lt;/code&gt;. Add riscv64 to the native-tls targets. The CI infrastructure is the harder problem, but the code change is minimal.&lt;/p&gt;
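
&lt;p&gt;For the curious, the shape of that patch is roughly this (illustrative; the exact crate, target list, and feature names in &lt;code&gt;next-swc&lt;/code&gt; may differ):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;# Cargo.toml sketch: switch to native-tls on riscv64, so TLS goes
# through system OpenSSL instead of rustls/ring.
[target.'cfg(target_arch = "riscv64")'.dependencies]
turbo-tasks-fetch = { workspace = true, default-features = false, features = ["native-tls"] }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
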

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;The patch file is in this repository at &lt;code&gt;patches/nextjs-14.x-native-tls.patch&lt;/code&gt;. Apply it before building:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; ~/next.js
git checkout v14.2.35
patch &lt;span class="nt"&gt;-p1&lt;/span&gt; &amp;lt; /path/to/patches/nextjs-14.x-native-tls.patch
&lt;span class="nb"&gt;cd &lt;/span&gt;packages/next-swc
cargo build &lt;span class="nt"&gt;--release&lt;/span&gt; &lt;span class="nt"&gt;--manifest-path&lt;/span&gt; crates/napi/Cargo.toml &lt;span class="nt"&gt;--no-default-features&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Build time: about 2 hours on a Banana Pi F3. Faster on better hardware.&lt;/p&gt;

&lt;p&gt;I will update the test apps to 14.2.35 once I verify there are no regressions in the Dependabot PRs. The infrastructure is ready.&lt;/p&gt;

&lt;h2&gt;
  
  
  Version Comparison
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Aspect&lt;/th&gt;
&lt;th&gt;13.5.6&lt;/th&gt;
&lt;th&gt;14.2.35&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;TLS configuration&lt;/td&gt;
&lt;td&gt;Default feature&lt;/td&gt;
&lt;td&gt;Target-specific dependency&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Build workaround&lt;/td&gt;
&lt;td&gt;&lt;code&gt;--no-default-features&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Cargo.toml patch required&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Loader patch needed&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;No (riscv64 already included)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Build time (Banana Pi F3)&lt;/td&gt;
&lt;td&gt;~4 hours&lt;/td&gt;
&lt;td&gt;~130 minutes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Required Rust nightly&lt;/td&gt;
&lt;td&gt;2023-10-06&lt;/td&gt;
&lt;td&gt;2024-04-03&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The 14.x build is actually faster despite being a more complex codebase. Progress.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Patch file: &lt;code&gt;patches/nextjs-14.x-native-tls.patch&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Build documentation: &lt;code&gt;docs/BUILDING-SWC.md&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;ring crate issue: &lt;a href="https://github.com/briansmith/ring/issues/1292" rel="noopener noreferrer"&gt;github.com/briansmith/ring/issues/1292&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>riscv64</category>
      <category>nextjs</category>
      <category>swc</category>
      <category>rust</category>
    </item>
    <item>
      <title>The Old Dog Learns a New Trick: Cross-Compiling Tauri CLI for RISC-V</title>
      <dc:creator>Bruno Verachten</dc:creator>
      <pubDate>Tue, 23 Dec 2025 22:01:34 +0000</pubDate>
      <link>https://forem.com/gounthar/the-old-dog-learns-a-new-trick-cross-compiling-tauri-cli-for-risc-v-ibn</link>
      <guid>https://forem.com/gounthar/the-old-dog-learns-a-new-trick-cross-compiling-tauri-cli-for-risc-v-ibn</guid>
      <description>&lt;p&gt;&lt;em&gt;Photo by &lt;a href="https://unsplash.com/@dariuszpiosik" rel="noopener noreferrer"&gt;Dariusz Piosik&lt;/a&gt; on Unsplash&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Previously, on "Bruno Fights RISC-V"
&lt;/h2&gt;

&lt;p&gt;If you missed the first episode (&lt;a href="https://dev.to/gounthar/adding-risc-v-support-to-armbian-imager-a-tale-of-qemu-tauri-and-deja-vu-18nl"&gt;available here&lt;/a&gt;), here's the recap: I tried adding RISC-V builds to Armbian Imager (a Tauri app), hit a wall with 6+ hour QEMU emulation times, and realized the real fix was upstream. Tauri CLI ships pre-built binaries for x64, ARM64, macOS, Windows... but not RISC-V.&lt;/p&gt;

&lt;p&gt;My plan was simple: use my physical Banana Pi F3 as a self-hosted GitHub runner. Native RISC-V compilation. No emulation overhead. I'd done this before with Docker builds. Easy.&lt;/p&gt;

&lt;p&gt;Famous last words. Again.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;🚀 The Self-Hosted Runner Approach&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;I wasn't walking into this blind. I already had working RISC-V runners from my Docker-for-RISC-V project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;self-hosted&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;riscv64&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cargo build --release&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Two machines at home (192.168.1.185 and 192.168.1.36), both Banana Pi F3 boards, both registered as GitHub runners. They've been churning out Docker engine builds for months. Adding Tauri CLI to the mix seemed natural.&lt;/p&gt;

&lt;p&gt;And you know what? &lt;strong&gt;It worked.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Working Solution (That Got Replaced)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;✅ First Blood&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The self-hosted runner approach actually succeeded. Full workflow run, actual binary produced:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Build time&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;1h 2m 28s&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Binary&lt;/td&gt;
&lt;td&gt;ELF 64-bit RISC-V, RVC, double-float ABI&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Size&lt;/td&gt;
&lt;td&gt;16 MB (stripped)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Version&lt;/td&gt;
&lt;td&gt;tauri-cli 2.9.6&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Proof: &lt;a href="https://github.com/gounthar/tauri/releases/tag/tauri-cli-v2.9.6" rel="noopener noreferrer"&gt;actual release with actual binary&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I was ready to submit the PR. Self-hosted runners, conditional logic to skip them if not available, fallback to QEMU for the brave souls with 6 hours to spare. A complete solution.&lt;/p&gt;

&lt;p&gt;Then FabianLars left a comment on my PR.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;🤔 "Have you considered cross?"&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;I stared at the screen. Cross? Cross-compilation? I'd just spent an hour in the previous session explaining why cross-compilation was a nightmare for WebKit-dependent projects. Sysroots, symlink hell, version mismatches. I'd tried it. I'd abandoned it.&lt;/p&gt;

&lt;p&gt;But FabianLars wasn't talking about manual cross-compilation. He was talking about &lt;a href="https://github.com/cross-rs/cross" rel="noopener noreferrer"&gt;cross-rs&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;cross should work pretty well for the tauri cli&lt;/p&gt;

&lt;p&gt;— FabianLars, GitHub PR comment&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I had never heard of cross-rs.&lt;/p&gt;

&lt;p&gt;Let me say that again, because it bears repeating: I've been doing ARM32 cross-compilation since 2013. I've set up sysroots. I've dealt with qemu-user-static. I've maintained unofficial Node.js builds for exotic architectures. I've built Docker images on physical RISC-V hardware. &lt;strong&gt;Twelve years&lt;/strong&gt; of this.&lt;/p&gt;

&lt;p&gt;And I had never heard of cross-rs.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Cross-rs?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;📦 The Tool I Should Have Known About&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Cross is a "zero setup" cross compilation tool for Rust. Instead of manually setting up toolchains and sysroots, it uses pre-built Docker containers with everything already configured.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install it&lt;/span&gt;
cargo &lt;span class="nb"&gt;install &lt;/span&gt;cross

&lt;span class="c"&gt;# Use it exactly like cargo&lt;/span&gt;
cross build &lt;span class="nt"&gt;--target&lt;/span&gt; riscv64gc-unknown-linux-gnu &lt;span class="nt"&gt;--release&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. No sysroot configuration. No toolchain setup. No symlink archaeology. It just... works.&lt;/p&gt;

&lt;p&gt;The magic is Docker containers maintained by the cross-rs team. For RISC-V, you get:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The GNU toolchain (&lt;code&gt;riscv64-unknown-linux-gnu-gcc&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Proper sysroot with glibc&lt;/li&gt;
&lt;li&gt;QEMU for running test binaries&lt;/li&gt;
&lt;li&gt;All the standard Rust cross-compilation targets&lt;/li&gt;
&lt;/ul&gt;
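
&lt;p&gt;Cross also reads an optional &lt;code&gt;Cross.toml&lt;/code&gt; if you need to tweak the container, for example to pin the image or forward environment variables (the values here are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;# Cross.toml sketch: per-target container configuration
[target.riscv64gc-unknown-linux-gnu]
image = "ghcr.io/cross-rs/riscv64gc-unknown-linux-gnu:main"

[build.env]
# Host environment variables forwarded into the build container
passthrough = ["RUSTFLAGS"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
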

&lt;p&gt;When I ran &lt;code&gt;cross build&lt;/code&gt; for the first time, I expected it to fail spectacularly. WebKit dependencies! Linker errors! Missing symbols! The usual cross-compilation dance of despair.&lt;/p&gt;

&lt;p&gt;Instead:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ time cross build --manifest-path ./crates/tauri-cli/Cargo.toml \
    --target riscv64gc-unknown-linux-gnu --profile release-size-optimized

   Compiling tauri-cli v2.9.6
    Finished `release-size-optimized` profile [optimized] target(s) in 4m 27s

real    4m27.235s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Four minutes and twenty-seven seconds.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;📊 The Numbers Don't Lie&lt;/strong&gt;
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Method&lt;/th&gt;
&lt;th&gt;Time&lt;/th&gt;
&lt;th&gt;Relative Speed&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;QEMU emulation (Docker buildx)&lt;/td&gt;
&lt;td&gt;6+ hours (killed)&lt;/td&gt;
&lt;td&gt;Baseline of pain&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Native RISC-V (Banana Pi F3)&lt;/td&gt;
&lt;td&gt;63 minutes&lt;/td&gt;
&lt;td&gt;~6x faster than QEMU&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cross-rs on x64&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;4m 27s&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;~90x faster than QEMU&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;I sat there looking at my terminal. Sixty-three minutes of native compilation. Four and a half minutes with cross. Fourteen times faster. On the same code. Same binary output.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;🧠 Why Cross Works Here&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Here's what I got wrong in my previous article: I assumed Tauri CLI had the same WebKit dependencies as Tauri apps.&lt;/p&gt;

&lt;p&gt;It doesn't.&lt;/p&gt;

&lt;p&gt;Tauri CLI is a &lt;strong&gt;build tool&lt;/strong&gt;. It orchestrates the build process, invokes cargo, bundles assets. It doesn't link against WebKit2GTK. That's what the &lt;strong&gt;app&lt;/strong&gt; does at runtime. The CLI is just orchestration code.&lt;/p&gt;

&lt;p&gt;No WebKit dependency means no sysroot nightmare. No sysroot nightmare means cross-compilation is trivial.&lt;/p&gt;

&lt;p&gt;I spent an hour in my previous session explaining why cross-compilation was impossible. Turns out I was solving the wrong problem.&lt;/p&gt;

&lt;h2&gt;
  
  
  The New Implementation
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;🔧 Simpler Is Better&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The updated workflow replaces self-hosted runners with cross:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ matrix.config.os }}&lt;/span&gt;
  &lt;span class="na"&gt;strategy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matrix&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="c1"&gt;# ... existing x64, ARM64, macOS, Windows targets ...&lt;/span&gt;

        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;os&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-22.04&lt;/span&gt;
          &lt;span class="na"&gt;rust_target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;riscv64gc-unknown-linux-gnu&lt;/span&gt;
          &lt;span class="na"&gt;ext&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;
          &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;
          &lt;span class="na"&gt;cross&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

  &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Setup&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Rust'&lt;/span&gt;
      &lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ !matrix.config.cross }}&lt;/span&gt;
      &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dtolnay/rust-toolchain@stable&lt;/span&gt;
      &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;targets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ matrix.config.rust_target }}&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install cross&lt;/span&gt;
      &lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ matrix.config.cross }}&lt;/span&gt;
      &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;taiki-e/install-action@v2&lt;/span&gt;
      &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;tool&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cross@0.2.5&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build CLI (cross)&lt;/span&gt;
      &lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ matrix.config.cross }}&lt;/span&gt;
      &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cross build --manifest-path ./crates/tauri-cli/Cargo.toml \&lt;/span&gt;
           &lt;span class="s"&gt;--target ${{ matrix.config.rust_target }} \&lt;/span&gt;
           &lt;span class="s"&gt;--profile release-size-optimized&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Changes from the self-hosted runner approach:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Runs on GitHub's ubuntu-22.04&lt;/strong&gt;, not external hardware&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Uses &lt;code&gt;cross@0.2.5&lt;/code&gt;&lt;/strong&gt; (pinned version for reproducibility)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Skips unnecessary steps&lt;/strong&gt; (Rust cache, Linux dependencies - cross handles these)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Different artifact path&lt;/strong&gt; (&lt;code&gt;target/{target}/&lt;/code&gt; instead of &lt;code&gt;target/&lt;/code&gt;)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;No self-hosted runners to maintain. No physical hardware to keep online. No network connectivity issues. Just a matrix entry that happens to use cross instead of cargo.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Humility Lesson
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;🎓 You Don't Know What You Don't Know&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Here's the uncomfortable truth: I had &lt;strong&gt;opinions&lt;/strong&gt; about cross-compilation. Strong ones. Formed over a decade of wrestling with toolchains and sysroots. Those opinions were... not wrong, exactly. Cross-compiling WebKit apps &lt;strong&gt;is&lt;/strong&gt; a nightmare. Setting up manual sysroots &lt;strong&gt;is&lt;/strong&gt; fragile.&lt;/p&gt;

&lt;p&gt;But cross-rs exists. Has existed since 2016. Nine years of development. Excellent RISC-V support. And I'd never encountered it because I was busy doing things the hard way.&lt;/p&gt;

&lt;p&gt;This is the curse of experience: you learn patterns that work, and you stop exploring alternatives. "I know how to do this" becomes "I know how this is done" becomes "this is how it's done." Except sometimes a tool comes along that makes your carefully accumulated knowledge... obsolete? Not quite. But certainly less essential.&lt;/p&gt;

&lt;p&gt;The gray hairs from the ARM32 era taught me &lt;strong&gt;a&lt;/strong&gt; way. Not &lt;strong&gt;the&lt;/strong&gt; way.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;🔁 The Pattern Repeats&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;2013: "Cross-compiling for ARM32 requires a full sysroot setup."&lt;br&gt;
2016: cross-rs releases, nobody tells me.&lt;br&gt;
2025: "Cross-compiling for RISC-V is impossible because WebKit."&lt;br&gt;
Also 2025: "Have you considered cross?"&lt;/p&gt;

&lt;p&gt;I should probably set up an RSS feed for Rust tooling. Or just... ask people before assuming I know the answer.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;🎯 PR Status&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The PR is submitted: &lt;a href="https://github.com/tauri-apps/tauri/pull/14685" rel="noopener noreferrer"&gt;https://github.com/tauri-apps/tauri/pull/14685&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Current state:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[x] Cross-rs implementation complete&lt;/li&gt;
&lt;li&gt;[x] Build time: ~4 minutes (down from 63 minutes native, 6+ hours QEMU)&lt;/li&gt;
&lt;li&gt;[x] CodeRabbit feedback addressed&lt;/li&gt;
&lt;li&gt;[ ] Awaiting maintainer review&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If merged, every Tauri CLI release will include a RISC-V binary. Anyone on a RISC-V system can download a pre-built binary in seconds instead of waiting for &lt;code&gt;cargo install tauri-cli&lt;/code&gt; to compile 600+ crates.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;🚀 The Bigger Picture&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This matters beyond Tauri. The RISC-V ecosystem is at that inflection point I keep mentioning. Hardware exists. Kernels boot. Distributions ship packages. But every missing binary is a barrier.&lt;/p&gt;

&lt;p&gt;Framework 13 with DC-ROMA RISC-V mainboard? You'll want GUI apps. Banana Pi F3 running Armbian? You'll want to flash images without &lt;code&gt;dd&lt;/code&gt;. Pine64 boards? Same story.&lt;/p&gt;

&lt;p&gt;Pre-built binaries are the difference between "works out of the box" and "come back in 6 hours." Cross-rs makes pre-built binaries trivial to produce.&lt;/p&gt;

&lt;p&gt;I'll probably be using cross for everything RISC-V from now on. Assuming the binary doesn't need runtime system libraries. For those cases... well, there's always the Banana Pi.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lessons Learned
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;💡 Takeaways&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Ask before assuming&lt;/strong&gt;. A decade of experience doesn't mean you've seen everything. The maintainer's one-line suggestion saved hours of CI time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cross-rs exists&lt;/strong&gt;. For pure Rust binaries without system library dependencies, it's the obvious choice. Zero setup, fast builds, maintained Docker images.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Know your dependencies&lt;/strong&gt;. I assumed Tauri CLI had the same requirements as Tauri apps. It doesn't. CLI is a build tool, not a WebKit consumer.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Self-hosted runners still have their place&lt;/strong&gt;. For projects that &lt;strong&gt;do&lt;/strong&gt; need system libraries (like the actual Armbian Imager), native hardware remains the answer. But for build tools and pure-Rust code? Cross wins.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pin your versions&lt;/strong&gt;. &lt;code&gt;cross@0.2.5&lt;/code&gt; is reproducible. &lt;code&gt;cross&lt;/code&gt; is not. CI is not the place for floating versions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The hard way isn't always the right way&lt;/strong&gt;. Sometimes the clever solution you've perfected over years gets replaced by a tool that just... does it better.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I've been building for exotic architectures since ARM32 was exotic. Gray hairs and all. And today I learned something new.&lt;/p&gt;

&lt;p&gt;That's not embarrassing. That's the job.&lt;/p&gt;




&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/cross-rs/cross" rel="noopener noreferrer"&gt;cross-rs&lt;/a&gt;: Zero setup cross compilation for Rust&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/tauri-apps/tauri/pull/14685" rel="noopener noreferrer"&gt;Tauri PR #14685&lt;/a&gt;: The RISC-V CLI support PR&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/gounthar/tauri/releases/tag/tauri-cli-v2.9.6" rel="noopener noreferrer"&gt;Proof of concept release&lt;/a&gt;: Working RISC-V binary (self-hosted runner build)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://tauri.app/" rel="noopener noreferrer"&gt;Tauri&lt;/a&gt;: The framework this is all about&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.armbian.com/download/?arch=riscv64" rel="noopener noreferrer"&gt;Armbian RISC-V&lt;/a&gt;: The boards that need this software&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://wiki.banana-pi.org/Banana_Pi_BPI-F3" rel="noopener noreferrer"&gt;Banana Pi BPI-F3&lt;/a&gt;: The little board that could (in 63 minutes)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The code is submitted. The binary works. And I have a new tool in my belt.&lt;/p&gt;

&lt;p&gt;Twelve years late, but who's counting?&lt;/p&gt;

</description>
      <category>riscv64</category>
      <category>tauri</category>
      <category>rust</category>
      <category>crossrs</category>
    </item>
    <item>
      <title>Adding RISC-V Support to Armbian Imager: A Tale of QEMU, Tauri, and Deja Vu</title>
      <dc:creator>Bruno Verachten</dc:creator>
      <pubDate>Tue, 23 Dec 2025 21:58:25 +0000</pubDate>
      <link>https://forem.com/gounthar/adding-risc-v-support-to-armbian-imager-a-tale-of-qemu-tauri-and-deja-vu-18nl</link>
      <guid>https://forem.com/gounthar/adding-risc-v-support-to-armbian-imager-a-tale-of-qemu-tauri-and-deja-vu-18nl</guid>
      <description>&lt;p&gt;&lt;em&gt;Photo by &lt;a href="https://unsplash.com/@catauggie" rel="noopener noreferrer"&gt;SnapSaga&lt;/a&gt; on Unsplash&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Setup
&lt;/h2&gt;

&lt;p&gt;Armbian Imager is a Tauri 2 application: React frontend, Rust backend, builds for Linux (x64, ARM64), macOS (both architectures), and Windows (both architectures). A proper multi-platform desktop app that actually works, which is rarer than you'd think.&lt;/p&gt;

&lt;p&gt;The build workflow already handled six platform combinations through GitHub Actions. Adding a seventh (Linux RISC-V 64-bit) seemed straightforward. After all, I'd done this dance before with ARM32 back in the dark ages (2013-2014, when docker-compose on Raspberry Pi was considered experimental black magic).&lt;/p&gt;

&lt;p&gt;Famous last words.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Murphy's Law Setup
&lt;/h3&gt;

&lt;p&gt;Here's the thing: whenever you think "this should be straightforward," the universe takes that as a personal challenge. I should've known better. The gray hairs didn't appear from successful, uneventful builds.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why RISC-V, Why Now
&lt;/h2&gt;

&lt;p&gt;Armbian supports RISC-V boards. The Banana Pi F3 runs Armbian. Various Pine64 and StarFive boards run Armbian. The Framework 13 laptop has a RISC-V mainboard option (DC-ROMA, because apparently laptop mainboards are modular now, what a time to be alive). The ecosystem is growing.&lt;/p&gt;

&lt;p&gt;But if you're on a RISC-V system and want to flash an Armbian image to an SD card, you currently need to use &lt;code&gt;dd&lt;/code&gt; like it's 1995. The Imager app doesn't have a RISC-V build.&lt;/p&gt;
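
&lt;p&gt;For reference, the 1995 way looks something like this (the filename and device are hypothetical; triple-check the device before pressing Enter, because &lt;code&gt;dd&lt;/code&gt; will happily overwrite the wrong disk):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Decompress the image and write it straight to the SD card
xz -dc Armbian_riscv64.img.xz | sudo dd of=/dev/sdX bs=4M status=progress conv=fsync
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
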

&lt;p&gt;I figured I'd fix that.&lt;/p&gt;

&lt;p&gt;Does that sound appealing, or at least strange enough to keep you intrigued?&lt;/p&gt;

&lt;h2&gt;
  
  
  The Research Phase
&lt;/h2&gt;

&lt;p&gt;First step: figure out what the existing build workflow does. The &lt;code&gt;.github/workflows/build.yml&lt;/code&gt; file tells the story:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Linux x64 builds on &lt;code&gt;ubuntu-24.04&lt;/code&gt; runners&lt;/li&gt;
&lt;li&gt;Linux ARM64 builds on &lt;code&gt;ubuntu-24.04-arm&lt;/code&gt; runners (GitHub has native ARM runners now, progress!)&lt;/li&gt;
&lt;li&gt;macOS and Windows have their own native runners&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For RISC-V, GitHub doesn't offer native runners yet. There's &lt;a href="https://cloud-v.co/github-riscv-runner" rel="noopener noreferrer"&gt;Cloud-V&lt;/a&gt; which provides RISC-V GitHub runners, but Armbian's workflow isn't set up for external runners. For this contribution, emulation was the path of least resistance.&lt;/p&gt;

&lt;p&gt;Docker + QEMU it is. (Spoiler alert: this is where things get interesting.)&lt;/p&gt;

&lt;h3&gt;
  
  
  The WebKit2GTK Archaeology
&lt;/h3&gt;

&lt;p&gt;Tauri apps on Linux need WebKit2GTK. This is the system webview that renders the UI, basically the browser engine that makes your Rust backend look pretty. On x64 and ARM64, it's readily available in Debian bookworm and trixie.&lt;/p&gt;

&lt;p&gt;On RISC-V? I checked the &lt;a href="https://tracker.debian.org/pkg/webkit2gtk" rel="noopener noreferrer"&gt;Debian package tracker&lt;/a&gt;, because I'm a glutton for disappointment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;webkit2gtk in bookworm: not available for riscv64
webkit2gtk in trixie: available for riscv64
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Good news: Debian trixie became stable in 2025, and it has the packages we need. The timing worked out: RISC-V users on the current stable release get WebKit2GTK out of the box.&lt;/p&gt;

&lt;h3&gt;
  
  
  The NodeSource Curveball
&lt;/h3&gt;

&lt;p&gt;The frontend build needs Node.js. The standard approach in CI is to use NodeSource's distribution:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://deb.nodesource.com/setup_20.x | bash -
apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; nodejs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I checked NodeSource's supported architectures: amd64, arm64, armhf. No riscv64.&lt;/p&gt;

&lt;p&gt;Of course not. Why would there be?&lt;/p&gt;

&lt;p&gt;Here's where things get simple (yes, really): use Debian's native nodejs package instead. It's version 18 instead of 20, but that's close enough for a Vite build. Sometimes the boring solution is the right solution.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; nodejs npm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(Side note: if you need the latest Node.js LTS on RISC-V, I maintain &lt;a href="https://github.com/gounthar/unofficial-builds/releases" rel="noopener noreferrer"&gt;unofficial builds&lt;/a&gt; with an APT repo via GitHub Pages. We've got 24.12.0 ready to go. But for this build, Debian's package was sufficient.)&lt;/p&gt;

&lt;p&gt;Two problems identified, two solutions found. Look at me being all productive and efficient. This never happens. I should've been more suspicious.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Implementation
&lt;/h2&gt;

&lt;p&gt;I added a new job to the GitHub workflow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;build-linux-riscv64&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;build-linux (riscv64gc-unknown-linux-gnu)&lt;/span&gt;
  &lt;span class="na"&gt;needs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;create-release&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ github.event_name == 'push' || inputs.build_linux_riscv64 }}&lt;/span&gt;
  &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-24.04&lt;/span&gt;
  &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Set up QEMU for RISC-V emulation&lt;/span&gt;
      &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker/setup-qemu-action@v3&lt;/span&gt;
      &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;platforms&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;riscv64&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Set up Docker Buildx&lt;/span&gt;
      &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker/setup-buildx-action@v3&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build in RISC-V container (Debian trixie)&lt;/span&gt;
      &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
        &lt;span class="s"&gt;docker run --rm --platform linux/riscv64 \&lt;/span&gt;
          &lt;span class="s"&gt;-v "$(pwd)":/app \&lt;/span&gt;
          &lt;span class="s"&gt;-w /app \&lt;/span&gt;
          &lt;span class="s"&gt;riscv64/debian:trixie \&lt;/span&gt;
          &lt;span class="s"&gt;bash -c '&lt;/span&gt;
            &lt;span class="s"&gt;# Install build dependencies&lt;/span&gt;
            &lt;span class="s"&gt;apt-get update&lt;/span&gt;
            &lt;span class="s"&gt;apt-get install -y \&lt;/span&gt;
              &lt;span class="s"&gt;curl build-essential pkg-config \&lt;/span&gt;
              &lt;span class="s"&gt;libwebkit2gtk-4.1-dev \&lt;/span&gt;
              &lt;span class="s"&gt;libayatana-appindicator3-dev \&lt;/span&gt;
              &lt;span class="s"&gt;librsvg2-dev patchelf libssl-dev libgtk-3-dev \&lt;/span&gt;
              &lt;span class="s"&gt;squashfs-tools xdg-utils file&lt;/span&gt;

            &lt;span class="s"&gt;# Node.js from Debian (NodeSource lacks RISC-V)&lt;/span&gt;
            &lt;span class="s"&gt;apt-get install -y nodejs npm&lt;/span&gt;

            &lt;span class="s"&gt;# Install Rust&lt;/span&gt;
            &lt;span class="s"&gt;curl --proto "=https" --tlsv1.2 -sSf https://sh.rustup.rs | \&lt;/span&gt;
              &lt;span class="s"&gt;sh -s -- -y&lt;/span&gt;
            &lt;span class="s"&gt;source "$HOME/.cargo/env"&lt;/span&gt;

            &lt;span class="s"&gt;# Build frontend&lt;/span&gt;
            &lt;span class="s"&gt;npm ci&lt;/span&gt;
            &lt;span class="s"&gt;npm run build&lt;/span&gt;

            &lt;span class="s"&gt;# Install Tauri CLI and build&lt;/span&gt;
            &lt;span class="s"&gt;cargo install tauri-cli --version "^2" --locked&lt;/span&gt;
            &lt;span class="s"&gt;cargo tauri build --bundles deb&lt;/span&gt;
          &lt;span class="s"&gt;'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The key differences from the x64/ARM64 builds:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Uses &lt;code&gt;riscv64/debian:trixie&lt;/code&gt; instead of Ubuntu (because we need those WebKit packages)&lt;/li&gt;
&lt;li&gt;Node.js comes from Debian packages, not NodeSource (because NodeSource said "nope")&lt;/li&gt;
&lt;li&gt;Runs through QEMU user-mode emulation (this will be important later)&lt;/li&gt;
&lt;li&gt;Only builds &lt;code&gt;.deb&lt;/code&gt; packages (no AppImage; that's a story for another day, involving AppImage's aversion to exotic architectures)&lt;/li&gt;
&lt;/ol&gt;
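&lt;p&gt;If you want to poke at the same emulated environment locally, the moral equivalent of the &lt;code&gt;setup-qemu-action&lt;/code&gt; step is a single command; the &lt;code&gt;tonistiigi/binfmt&lt;/code&gt; image is what the action uses under the hood:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Register the RISC-V binfmt handler on the host (needs --privileged)
docker run --privileged --rm tonistiigi/binfmt --install riscv64

# Sanity check: should print riscv64
docker run --rm --platform linux/riscv64 riscv64/debian:trixie uname -m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

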

&lt;p&gt;I also created a standalone script &lt;code&gt;scripts/build-riscv64.sh&lt;/code&gt; for local builds, and added &lt;code&gt;--riscv64&lt;/code&gt; to the &lt;code&gt;build-all.sh&lt;/code&gt; orchestrator, because I like my tooling to be consistent.&lt;/p&gt;

&lt;p&gt;Four commits later:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aae7628 feat: Add RISC-V 64-bit build support
105f77f fix: Use Debian nodejs package for RISC-V builds
22499b1 feat: Add pre-built Docker image support for faster RISC-V builds
0eb2992 fix: Use Debian nodejs in build-all.sh for RISC-V
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Time to test. What could possibly go wrong?&lt;/p&gt;

&lt;h2&gt;
  
  
  The Wall
&lt;/h2&gt;

&lt;p&gt;I kicked off a build. Docker pulled the RISC-V Debian image. QEMU started emulating. The apt packages installed. Rust downloaded. So far, so good.&lt;/p&gt;

&lt;p&gt;Then:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cargo install tauri-cli --version "^2" --locked
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And we wait.&lt;/p&gt;

&lt;p&gt;And wait.&lt;/p&gt;

&lt;p&gt;And wait some more.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Great Compilation Watch
&lt;/h3&gt;

&lt;p&gt;The tauri-cli crate has over 600 dependencies. Six. Hundred. Each one needs to compile. Under QEMU user-mode emulation, every single CPU instruction goes through a translation layer. A Rust build that takes 2 minutes on native hardware takes... well, let's just say it takes a while.&lt;/p&gt;

&lt;p&gt;I went to make coffee. Came back. Still compiling.&lt;/p&gt;

&lt;p&gt;I went to lunch. Came back. Still compiling.&lt;/p&gt;

&lt;p&gt;I started questioning my life choices. The build was still compiling.&lt;/p&gt;

&lt;p&gt;This is the ARM32 era all over again, and I'm having flashbacks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Flashback: 2013-2014
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The ARM32 PTSD
&lt;/h3&gt;

&lt;p&gt;Picture this: around 2013, I was trying to get docker-compose working on ARM32. And RethinkDB. And everything else needed to run openSTF (Open Smartphone Test Farm) on Raspberry Pi hardware, because apparently I enjoy suffering.&lt;/p&gt;

&lt;p&gt;Nothing worked. Every dependency was a new adventure in "why doesn't this architecture exist in their CI matrix?" "Just compile it from source" meant leaving your Pi running overnight and hoping it didn't thermal throttle itself into early retirement. Cross-compilation setups were fragile. One wrong symlink and you'd spend three hours debugging why &lt;code&gt;ld&lt;/code&gt; couldn't find &lt;code&gt;libc.so.6&lt;/code&gt;. Pre-built binaries were as rare as sensible variable names in legacy code.&lt;/p&gt;

&lt;p&gt;We eventually got there. ARM support improved. Docker got multi-arch images. GitHub added ARM runners. The ecosystem matured. The gray hairs appeared.&lt;/p&gt;

&lt;p&gt;Here's the thing: RISC-V is at that same inflection point now. The hardware exists. The kernels boot. The distributions have packages. But the tooling ecosystem hasn't caught up yet.&lt;/p&gt;

&lt;p&gt;And I'm apparently volunteering to help it catch up. Because I learn nothing from experience.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Math
&lt;/h2&gt;

&lt;p&gt;Let's be concrete about the problem, because misery loves documentation.&lt;/p&gt;

&lt;p&gt;On a native x64 machine with cached dependencies, &lt;code&gt;cargo install tauri-cli&lt;/code&gt; takes about 2 minutes. Fast enough to grab a coffee, check Slack, come back to a finished build.&lt;/p&gt;

&lt;p&gt;Under QEMU user-mode emulation, that same operation takes... I didn't let it finish. After 3 hours, I killed the build because I value my sanity. Extrapolating from progress (and some very pessimistic napkin math), a complete build would take 6-8 hours.&lt;/p&gt;

&lt;p&gt;For CI/CD, this is unusable. GitHub Actions has a 6-hour timeout per job. Even if it finished, waiting that long for every release is absurd. Imagine telling your team "yeah, the release will be ready in 8 hours, assuming nothing goes wrong." (Spoiler: something always goes wrong.)&lt;/p&gt;

&lt;h3&gt;
  
  
  The Pre-built Image Strategy
&lt;/h3&gt;

&lt;p&gt;My first optimization: build a Docker image with tauri-cli pre-installed. Suffer once, benefit forever (or until Tauri releases a new version, whichever comes first).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; --platform=linux/riscv64 riscv64/debian:trixie&lt;/span&gt;

&lt;span class="c"&gt;# Install all build dependencies&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    curl build-essential pkg-config &lt;span class="se"&gt;\
&lt;/span&gt;    libwebkit2gtk-4.1-dev libayatana-appindicator3-dev &lt;span class="se"&gt;\
&lt;/span&gt;    librsvg2-dev patchelf libssl-dev libgtk-3-dev &lt;span class="se"&gt;\
&lt;/span&gt;    squashfs-tools xdg-utils file nodejs npm &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; /var/lib/apt/lists/&lt;span class="k"&gt;*&lt;/span&gt;

&lt;span class="c"&gt;# Install Rust&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;curl &lt;span class="nt"&gt;--proto&lt;/span&gt; &lt;span class="s1"&gt;'=https'&lt;/span&gt; &lt;span class="nt"&gt;--tlsv1&lt;/span&gt;.2 &lt;span class="nt"&gt;-sSf&lt;/span&gt; https://sh.rustup.rs | &lt;span class="se"&gt;\
&lt;/span&gt;    sh &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; PATH="/root/.cargo/bin:${PATH}"&lt;/span&gt;

&lt;span class="c"&gt;# Pre-install tauri-cli (this is the slow part)&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;cargo &lt;span class="nb"&gt;install &lt;/span&gt;tauri-cli &lt;span class="nt"&gt;--version&lt;/span&gt; &lt;span class="s2"&gt;"^2"&lt;/span&gt; &lt;span class="nt"&gt;--locked&lt;/span&gt;

&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["bash"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Build this image once (accepting the 6+ hour wait as penance for your architectural choices), push it to a registry, and from then on each build only has to compile the application itself: maybe 10-20 minutes under emulation. Still slow, but manageable.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Tip:&lt;/strong&gt; If you're building locally, run &lt;code&gt;./scripts/build-riscv64.sh --build-image&lt;/code&gt; once to create the pre-built image. Subsequent builds will skip tauri-cli compilation entirely. Your future self will thank you.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This works for local development. For CI/CD, it requires maintaining a container registry with RISC-V images, rebuilding whenever Tauri releases a new version. Manageable, but inelegant. And I have opinions about inelegant solutions.&lt;/p&gt;
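&lt;p&gt;For completeness, the once-and-done image build looks something like this. The image name is illustrative; point it at your own registry:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Build the RISC-V builder image under emulation and push it.
# Do this once, overnight, and never speak of it again.
docker buildx build \
  --platform linux/riscv64 \
  -t ghcr.io/gounthar/tauri-builder-riscv64:latest \
  --push .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

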

&lt;h3&gt;
  
  
  The Cross-Compilation Dream (That Didn't Happen)
&lt;/h3&gt;

&lt;p&gt;The proper solution is cross-compilation, right? Build on a fast x64 machine, target riscv64. No emulation overhead. Just pure, unadulterated compilation speed.&lt;/p&gt;

&lt;p&gt;The problem: WebKit2GTK. Again. That dependency is following me around like a particularly persistent technical debt.&lt;/p&gt;

&lt;p&gt;Tauri links against the system WebKit. To cross-compile, you need a RISC-V sysroot with all the WebKit headers and libraries. Setting that up is... non-trivial. You're essentially recreating a Debian trixie root filesystem for a different architecture, and WebKit pulls in half of GNOME as dependencies. Have you &lt;em&gt;seen&lt;/em&gt; the dependency tree for a full GNOME stack? It's like a fractal of "why does this need that."&lt;/p&gt;

&lt;p&gt;I spent an hour exploring this path. It's doable, but fragile. One apt update breaks your sysroot. Version mismatches between the cross-compilation toolchain and the target libraries. Symlink hell. Not worth the maintenance burden for a community contribution.&lt;/p&gt;
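&lt;p&gt;For the morbidly curious, the skeleton of that abandoned setup is a &lt;code&gt;.cargo/config.toml&lt;/code&gt; pointing pkg-config at a RISC-V sysroot. The paths here are illustrative, not a working recipe:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;[target.riscv64gc-unknown-linux-gnu]
linker = "riscv64-linux-gnu-gcc"

[env]
# Make pkg-config resolve WebKit/GTK against the RISC-V sysroot,
# not against the host libraries
PKG_CONFIG_SYSROOT_DIR = "/opt/riscv64-sysroot"
PKG_CONFIG_PATH = "/opt/riscv64-sysroot/usr/lib/riscv64-linux-gnu/pkgconfig"
PKG_CONFIG_ALLOW_CROSS = "1"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every one of those paths has to stay in lockstep with a fully populated trixie root filesystem, which is exactly the fragile part.&lt;/p&gt;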

&lt;p&gt;Sometimes you have to know when to fold.&lt;/p&gt;

&lt;h2&gt;
  
  
  Going Upstream
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The Aha Moment
&lt;/h3&gt;

&lt;p&gt;Here's where my thinking shifted.&lt;/p&gt;

&lt;p&gt;The bottleneck isn't Armbian Imager. It isn't QEMU. It isn't Docker. The bottleneck is &lt;code&gt;cargo install tauri-cli&lt;/code&gt;: compiling 600+ crates from source because there's no pre-built RISC-V binary.&lt;/p&gt;

&lt;p&gt;Wait, what? Why isn't there a pre-built binary?&lt;/p&gt;

&lt;p&gt;Tauri provides pre-built binaries for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Linux x64&lt;/li&gt;
&lt;li&gt;Linux ARM64&lt;/li&gt;
&lt;li&gt;macOS x64&lt;/li&gt;
&lt;li&gt;macOS ARM64&lt;/li&gt;
&lt;li&gt;Windows x64&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No RISC-V.&lt;/p&gt;

&lt;p&gt;If Tauri's release workflow included RISC-V binaries, the entire Tauri ecosystem would benefit. Every single project trying to build for RISC-V would save those 6+ hours. Every developer who comes after me wouldn't have to rediscover this problem. Every CI pipeline wouldn't timeout waiting for Rust to compile half the internet.&lt;/p&gt;

&lt;p&gt;So that's the plan. Fork tauri-cli, add RISC-V to the release matrix, get a PR upstream. Contribute to the root of the problem instead of working around it at the leaf.&lt;/p&gt;

&lt;p&gt;This isn't about maintaining a fork long-term (I've already got enough side projects to feel guilty about). It's about making the whole ecosystem better. Rising tide lifts all boats, and all that.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Contribution Plan
&lt;/h3&gt;

&lt;p&gt;Here's the thing: I've been on both sides of this. I've been the maintainer getting PRs for exotic architectures. I've been the contributor trying to convince maintainers that yes, people actually use this platform.&lt;/p&gt;

&lt;p&gt;The Tauri team has been responsive to architecture additions before. They added ARM64 support. They care about multi-platform support; it's literally in their value proposition. Adding RISC-V to their CI matrix is a natural extension of what they're already doing.&lt;/p&gt;

&lt;p&gt;The work isn't even that complicated:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Add RISC-V to their GitHub Actions build matrix&lt;/li&gt;
&lt;li&gt;Set up QEMU for the build (they already do this for other architectures)&lt;/li&gt;
&lt;li&gt;Upload the resulting binary to their releases&lt;/li&gt;
&lt;/ol&gt;
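&lt;p&gt;Sketched as a workflow fragment (hypothetical job name; Tauri's actual release workflow is organized differently, so treat this as the shape of the change, not a drop-in patch):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;build-cli-riscv64:
  runs-on: ubuntu-24.04   # or a native RISC-V runner (e.g. Cloud-V)
  steps:
    - uses: actions/checkout@v4
    - uses: docker/setup-qemu-action@v3
      with:
        platforms: riscv64
    # ...build for riscv64gc-unknown-linux-gnu and upload the binary
    # next to the existing x64/arm64 release assets
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
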

&lt;p&gt;The hard part is the 6-hour compile time under QEMU, but there's an alternative: register a native RISC-V runner for the project. &lt;a href="https://cloud-v.co/github-riscv-runner" rel="noopener noreferrer"&gt;Cloud-V&lt;/a&gt; offers exactly that. Native compilation, no emulation penalty. What takes 6 hours under QEMU would take minutes on real hardware.&lt;/p&gt;

&lt;p&gt;Either way, once it's in their CI, it stays current. Every Tauri version automatically gets RISC-V binaries.&lt;/p&gt;

&lt;p&gt;I just need to prove it works, submit a clean PR, and make a compelling case. How hard could it be?&lt;/p&gt;

&lt;p&gt;(I really need to stop asking that question.)&lt;/p&gt;

&lt;h2&gt;
  
  
  Current Status
&lt;/h2&gt;

&lt;p&gt;The RISC-V build support is implemented and committed on the &lt;code&gt;feature/riscv64-support&lt;/code&gt; branch:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GitHub workflow job: ready, but will timeout in CI without optimizations (those pesky 6-hour limits)&lt;/li&gt;
&lt;li&gt;Standalone build script: works locally with pre-built Docker image (tested, confirmed, actually functions)&lt;/li&gt;
&lt;li&gt;Integration with build-all.sh: complete (because consistency matters)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What's blocked:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CI/CD builds: need either pre-built tauri-cli binaries or a hosted RISC-V builder image&lt;/li&gt;
&lt;li&gt;Upstream contribution: need to explore Tauri's build infrastructure and submit that PR&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The Roadmap
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Short term&lt;/strong&gt;: Push the branch, document the limitation honestly (because documentation that lies helps nobody), offer the pre-built Docker image approach as a workaround for anyone who wants RISC-V builds today.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Medium term&lt;/strong&gt;: Fork tauri-cli, experiment with adding RISC-V to their release workflow, prepare a PR that's actually mergeable (not just "hey I hacked this together and it technically works").&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Long term&lt;/strong&gt;: Native RISC-V CI runners will eventually exist. GitHub, GitLab, someone will offer them. And when that happens, this entire problem disappears. But we're not there yet, and people want to use RISC-V now, so here we are.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Lessons Learned
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Takeaways and Tips for Future Archaeologists
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Debian trixie is your friend for RISC-V&lt;/strong&gt; - The newest release carries packages that older releases lack. For bleeding-edge architectures, track the freshest release you can, and accept a little churn in exchange for working software. It's a fair trade.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;NodeSource doesn't support everything&lt;/strong&gt; - When the fancy installer doesn't work, fall back to distribution packages. They're usually good enough, and "good enough" ships.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Emulation is slow, but it works&lt;/strong&gt; - QEMU user-mode emulation lets you run foreign binaries on any host. The speed penalty is brutal for compilation, but acceptable for runtime testing. Know which problem you're solving.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Sometimes the fix is upstream&lt;/strong&gt; - When you hit a wall that affects the whole ecosystem, consider fixing the source rather than building elaborate workarounds. Be the change you want to see in the dependency tree.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;This has all happened before&lt;/strong&gt; - ARM32 in 2013. ARM64 in 2016. RISC-V in 2025. The pattern repeats: hardware arrives, software catches up, early adopters suffer, then it all becomes normal. We're in the "early adopters suffer" phase.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pre-built images are your friend&lt;/strong&gt; - One slow build beats a thousand slow builds. Cache aggressively, share generously, document thoroughly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Don't cross-compile WebKit unless you hate yourself&lt;/strong&gt; - Some battles aren't worth fighting. Some dependency trees are better left untraversed.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I've been here before. The gray hairs prove it. But that's also why I know it gets better. The pain is temporary. The infrastructure improvements are permanent. And somewhere, someday, a developer will &lt;code&gt;cargo install tauri-cli&lt;/code&gt; on their RISC-V board and it'll just work in 30 seconds, and they'll never know about the 6-hour compile times we endured to make that possible.&lt;/p&gt;

&lt;p&gt;That's the dream, anyway.&lt;/p&gt;




&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://tauri.app/" rel="noopener noreferrer"&gt;Tauri documentation&lt;/a&gt;: Official Tauri 2 docs (actually well-written, surprisingly)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://wiki.debian.org/RISC-V" rel="noopener noreferrer"&gt;Debian RISC-V wiki&lt;/a&gt;: Porting status and bootstrap info&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.qemu.org/docs/master/user/main.html" rel="noopener noreferrer"&gt;QEMU user-mode emulation&lt;/a&gt;: How binfmt_misc magic works (read this if you want to understand the sorcery)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.armbian.com/download/?arch=riscv64" rel="noopener noreferrer"&gt;Armbian RISC-V boards&lt;/a&gt;: Supported RISC-V hardware&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://wiki.banana-pi.org/Banana_Pi_BPI-F3" rel="noopener noreferrer"&gt;Banana Pi BPI-F3&lt;/a&gt;: SpacemiT K1 octa-core RISC-V board (actual hardware you can buy today)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The code works. The build process works. It's just... slow. For now.&lt;/p&gt;

&lt;p&gt;But we've been here before, and we know how this story ends. The ecosystem catches up. The tooling improves. The compile times get reasonable. The gray hairs multiply.&lt;/p&gt;

&lt;p&gt;Let's make it happen.&lt;/p&gt;

</description>
      <category>riscv</category>
      <category>tauri</category>
      <category>rust</category>
      <category>armbian</category>
    </item>
    <item>
      <title>BuildKit for RISC-V64: When Your Package Works But Your Container Doesn't</title>
      <dc:creator>Bruno Verachten</dc:creator>
      <pubDate>Thu, 11 Dec 2025 10:54:58 +0000</pubDate>
      <link>https://forem.com/gounthar/buildkit-for-risc-v64-when-your-package-works-but-your-container-doesnt-i18</link>
      <guid>https://forem.com/gounthar/buildkit-for-risc-v64-when-your-package-works-but-your-container-doesnt-i18</guid>
      <description>&lt;p&gt;&lt;em&gt;Photo by &lt;a href="https://unsplash.com/@tilakbaloni20?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Tilak Baloni&lt;/a&gt; on &lt;a href="https://unsplash.com/photos/a-group-of-icebergs-floating-on-top-of-a-body-of-water-DgyHhWKaY3s?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I successfully built and packaged BuildKit for RISC-V64. Users could download it. The GitHub Actions workflows were green. Everything looked perfect. Then people tried to actually &lt;em&gt;use&lt;/em&gt; it, and discovered it didn't work at all. This is the story of fixing BuildKit after deployment: the part nobody writes about in the success announcements. (Because who wants to admit their victory lap was premature?)&lt;/p&gt;

&lt;h2&gt;
  
  
  The Honeymoon Phase is Over
&lt;/h2&gt;

&lt;p&gt;Two days ago, I published BuildKit packages for RISC-V64. The blog post was cheerful. The documentation was thorough. The container image pushed to GitHub Container Registry without errors. I felt accomplished.&lt;/p&gt;

&lt;p&gt;Then reality showed up.&lt;/p&gt;

&lt;p&gt;The first issue report came in: "Getting 'denied' errors when pulling your BuildKit image." My immediate thought was visibility settings. Maybe the GHCR package was still private? I checked. It was public. So what was going on?&lt;/p&gt;

&lt;p&gt;Here's the thing: the error message was misleading. This wasn't actually an access problem. It was stale credentials from a failed pull attempt made while the package &lt;em&gt;was&lt;/em&gt; still private: Docker kept presenting the old login to ghcr.io, and the registry kept saying no, even after I'd made the package public.&lt;/p&gt;

&lt;p&gt;You know that feeling when you spend 20 minutes debugging something that turns out to be your own browser cache? Yeah, same energy.&lt;/p&gt;

&lt;p&gt;The fix was simple:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;logout &lt;/span&gt;ghcr.io
docker pull ghcr.io/gounthar/buildkit-riscv64:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Problem solved.&lt;/p&gt;

&lt;p&gt;Easy fix. False alarm. Back to feeling good about myself. I should've known it wouldn't last.&lt;/p&gt;

&lt;p&gt;The second issue was different: "Container crash-loops with 'failed to create worker: no worker found'." That one made me stop and think. And then start sweating slightly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding BuildKit Workers
&lt;/h2&gt;

&lt;p&gt;I hadn't paid much attention to BuildKit's worker architecture during the packaging phase. I was focused on getting binaries to compile and containers to start. The worker system seemed like an internal detail, you know, something that would just magically work once the main binary was running.&lt;/p&gt;

&lt;p&gt;Turns out it's not.&lt;/p&gt;

&lt;p&gt;BuildKit requires at least one worker backend to function. There are two options:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;OCI worker&lt;/strong&gt; - Uses &lt;code&gt;runc&lt;/code&gt; to spawn containers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Containerd worker&lt;/strong&gt; - Uses containerd's socket&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When buildkitd starts, it initializes workers. If no worker can be created, the daemon refuses to start. The error message is cryptic: "no worker found." It doesn't explain &lt;em&gt;why&lt;/em&gt; no worker was found, or what you should do about it, or really anything useful at all.&lt;/p&gt;

&lt;p&gt;I checked the container logs. Buried in the output was: &lt;code&gt;skipping oci worker, no runc binary in $PATH&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Ah.&lt;/p&gt;

&lt;p&gt;My container image had buildkitd and buildctl. It had tini for process management. It had all the BuildKit-specific tools. But it didn't have runc, so the OCI worker couldn't initialize. And I hadn't configured a containerd socket, so the containerd worker was also unavailable.&lt;/p&gt;

&lt;p&gt;Result: no workers, no buildkitd, container crashes. My "successful" deployment was about as useful as a screen door on a submarine.&lt;/p&gt;
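&lt;p&gt;Worth knowing: you can make the worker choice explicit instead of relying on auto-detection. A &lt;code&gt;buildkitd.toml&lt;/code&gt; along these lines (a sketch using BuildKit's documented config format) turns the silent skip into a deliberate decision:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;# /etc/buildkit/buildkitd.toml
[worker.oci]
  enabled = true    # requires runc in PATH

[worker.containerd]
  enabled = false   # no containerd socket mounted in this container
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
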

&lt;h2&gt;
  
  
  Three Fixes, Three Pull Requests
&lt;/h2&gt;

&lt;p&gt;I fixed this in three stages, each addressing a different layer of the problem. (Because apparently I can't get things right on the first try.)&lt;/p&gt;

&lt;h3&gt;
  
  
  PR #227: Fix ENTRYPOINT/CMD Split
&lt;/h3&gt;

&lt;p&gt;The first problem was how I structured the Dockerfile. I'd combined everything into ENTRYPOINT:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;ENTRYPOINT&lt;/span&gt;&lt;span class="s"&gt; ["/usr/bin/tini", "--", "buildkitd", "--addr", "tcp://0.0.0.0:1234"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This looked clean but was wrong. Here's why: Docker Buildx needs to pass runtime arguments to buildkitd: things like &lt;code&gt;--allow-insecure-entitlement network.host&lt;/code&gt; for host networking or &lt;code&gt;--allow-insecure-entitlement security.insecure&lt;/code&gt; for privileged operations.&lt;/p&gt;

&lt;p&gt;With everything baked into ENTRYPOINT, the arguments passed at runtime don't replace those defaults; Docker only ever replaces CMD, and &lt;em&gt;appends&lt;/em&gt; the new arguments after the ENTRYPOINT. So Buildx's carefully crafted configuration piles up behind my hardcoded flags, and buildkitd starts with the wrong settings. It's like ordering a custom pizza and getting plain cheese anyway, because the kitchen had already locked in the order.&lt;/p&gt;

&lt;p&gt;The fix was simple: move default arguments to CMD:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;ENTRYPOINT&lt;/span&gt;&lt;span class="s"&gt; ["/usr/bin/tini", "--", "buildkitd"]&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["--addr", "tcp://0.0.0.0:1234"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now Buildx can override the defaults with its own configuration. When Docker starts the container, ENTRYPOINT stays intact (tini and buildkitd), but CMD gets replaced with whatever Buildx needs.&lt;/p&gt;
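&lt;p&gt;Concretely, the split behaves like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# No arguments: CMD supplies the defaults.
# Effective command: tini -- buildkitd --addr tcp://0.0.0.0:1234
docker run --rm ghcr.io/gounthar/buildkit-riscv64:latest

# With arguments: ENTRYPOINT survives, CMD is replaced wholesale.
# Effective command: tini -- buildkitd --addr tcp://0.0.0.0:1234 \
#                    --allow-insecure-entitlement network.host
docker run --rm ghcr.io/gounthar/buildkit-riscv64:latest \
  --addr tcp://0.0.0.0:1234 \
  --allow-insecure-entitlement network.host
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

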

&lt;p&gt;Seems obvious in retrospect, but I didn't test with Buildx during initial development. (I develop on Windows with WSL2, because apparently I enjoy making my life unnecessarily complicated, and that setup made it far too easy to skip the integration tests like a cowboy.)&lt;/p&gt;

&lt;h3&gt;
  
  
  PR #228: Add runc from Debian
&lt;/h3&gt;

&lt;p&gt;The second problem was the missing OCI worker. The solution was straightforward: add runc to the container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;RUN &lt;/span&gt;apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="nt"&gt;--no-install-recommends&lt;/span&gt; runc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Debian trixie ships runc 1.1.12. It's old, but it works. The OCI worker initialized successfully. The container stopped crash-looping.&lt;/p&gt;

&lt;p&gt;Victory.&lt;/p&gt;

&lt;p&gt;I should've left it at that. But no, someone had to point out the obvious.&lt;/p&gt;

&lt;h3&gt;
  
  
  PR #229: Use Our Own runc
&lt;/h3&gt;

&lt;p&gt;Then someone said: "Wait, we're building runc 1.3.0 ourselves as part of the Docker Engine releases. Why are we using Debian's ancient 1.1.12 when we have a newer version in our own APT repository?"&lt;/p&gt;

&lt;p&gt;Good question. The answer was I hadn't thought about it. (Starting to notice a pattern here?)&lt;/p&gt;

&lt;p&gt;So PR #229 added our APT repository to the container and installed our runc package instead:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="c"&gt;# Add docker-for-riscv64 APT repository&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://github.com/gounthar/docker-for-riscv64/releases/download/gpg-key/gpg-public-key.asc | &lt;span class="se"&gt;\
&lt;/span&gt;    gpg &lt;span class="nt"&gt;--dearmor&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; /usr/share/keyrings/docker-riscv64-archive-keyring.gpg &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"deb [signed-by=/usr/share/keyrings/docker-riscv64-archive-keyring.gpg] &lt;/span&gt;&lt;span class="se"&gt;\
&lt;/span&gt;&lt;span class="s2"&gt;          https://gounthar.github.io/docker-for-riscv64 trixie main"&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;          &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /etc/apt/sources.list.d/docker-riscv64.list

&lt;span class="k"&gt;RUN &lt;/span&gt;apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="nt"&gt;--no-install-recommends&lt;/span&gt; runc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now the container uses runc 1.3.0, keeping versions consistent across all our packages. Much better.&lt;/p&gt;

&lt;p&gt;Of course, I made a typo in the GPG key URL (&lt;code&gt;public-key.asc&lt;/code&gt; instead of &lt;code&gt;gpg-public-key.asc&lt;/code&gt;), so the build failed. PR #231 fixed that embarrassing oversight. Even simple changes need testing, kids.&lt;/p&gt;

&lt;h3&gt;
  
  
  Testing the Fix
&lt;/h3&gt;

&lt;p&gt;After all three PRs merged, I verified the container actually worked end-to-end:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Pull the updated image&lt;/span&gt;
docker pull ghcr.io/gounthar/buildkit-riscv64:latest

&lt;span class="c"&gt;# Create a builder&lt;/span&gt;
docker buildx create &lt;span class="nt"&gt;--name&lt;/span&gt; test-builder &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--driver&lt;/span&gt; docker-container &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--driver-opt&lt;/span&gt; &lt;span class="nv"&gt;image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ghcr.io/gounthar/buildkit-riscv64:latest &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--use&lt;/span&gt;

&lt;span class="c"&gt;# Bootstrap (this is where it was failing before)&lt;/span&gt;
docker buildx inspect &lt;span class="nt"&gt;--bootstrap&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;+] Building 2.3s &lt;span class="o"&gt;(&lt;/span&gt;1/1&lt;span class="o"&gt;)&lt;/span&gt; FINISHED
 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;internal] booting buildkit
 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; pulling image ghcr.io/gounthar/buildkit-riscv64:latest
 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; starting container buildx_buildkit_test-builder0

&lt;span class="c"&gt;# Verify the worker initialized&lt;/span&gt;
docker logs buildx_buildkit_test-builder0 | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; worker
found worker &lt;span class="s2"&gt;"runc-overlay"&lt;/span&gt;, &lt;span class="nv"&gt;labels&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;map[...]
found 1 workers, &lt;span class="nv"&gt;default&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"runc-overlay"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The OCI worker was there. BuildKit was running. The demo was saved. I allowed myself exactly 30 seconds of relief before the next problem appeared.&lt;/p&gt;

&lt;h2&gt;
  
  
  When Tests Hang Forever
&lt;/h2&gt;

&lt;p&gt;Fixing the container should've been the end of the story. Instead, it revealed a systemic problem with our CI workflows.&lt;/p&gt;

&lt;p&gt;After PR #228 merged, both self-hosted RISC-V64 runners got stuck. Not crashed, &lt;em&gt;stuck&lt;/em&gt;. The workflow step that tested the container hung indefinitely:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;--rm&lt;/span&gt; buildkit:latest buildkitd &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This should return immediately with version information. Instead, it hung. Forever. The runners stopped processing other jobs. I had to SSH into both machines and manually kill processes.&lt;/p&gt;

&lt;p&gt;You know that special kind of frustration when your automation breaks your automation? Welcome to my Tuesday.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Root Cause
&lt;/h3&gt;

&lt;p&gt;Here's the thing: the problem was buildkitd's initialization behavior. When you run &lt;code&gt;buildkitd --version&lt;/code&gt;, you'd &lt;em&gt;expect&lt;/em&gt; it to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Print version&lt;/li&gt;
&lt;li&gt;Exit&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Simple, right? Wrong.&lt;/p&gt;

&lt;p&gt;buildkitd doesn't work that way. It initializes workers &lt;em&gt;before&lt;/em&gt; responding to any command, including &lt;code&gt;--version&lt;/code&gt;. So the actual flow is:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Initialize worker system&lt;/li&gt;
&lt;li&gt;Scan for runc binary&lt;/li&gt;
&lt;li&gt;Wait for worker to be ready&lt;/li&gt;
&lt;li&gt;Print version&lt;/li&gt;
&lt;li&gt;Exit&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When runc was missing (before PR #228), step 2 failed fast: "no runc found, skip OCI worker." The version command continued without issues.&lt;/p&gt;

&lt;p&gt;After adding runc, step 3 became the problem. The worker initialization tried to set up the OCI runtime, which required privileges the test container didn't have. It hung waiting for something that would never happen. Like waiting for a bus that's been cancelled but nobody told you.&lt;/p&gt;

&lt;p&gt;The test command had no timeout, and the workflow didn't enforce a job-level limit. So it hung forever. And ever. And ever.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Fix
&lt;/h3&gt;

&lt;p&gt;I added a 30-second timeout to the test command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Test BuildKit container&lt;/span&gt;
  &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;timeout 30s docker run --rm buildkit:latest buildkitd --version || \&lt;/span&gt;
      &lt;span class="s"&gt;echo "Note: buildkitd --version may require privileges or timed out"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the command takes more than 30 seconds, &lt;code&gt;timeout&lt;/code&gt; kills it. The workflow continues. The runners stop getting stuck.&lt;/p&gt;

&lt;p&gt;Is this ideal? No. The ideal solution would be running the test container with proper privileges so buildkitd can actually initialize workers. But that requires &lt;code&gt;--privileged&lt;/code&gt; or specific capability flags, which complicates the workflow for a test that's just checking if the container exists.&lt;/p&gt;

&lt;p&gt;Sometimes the pragmatic solution is: "if it doesn't finish in 30 seconds, something's wrong, move on." Not every problem needs a perfect solution. Some problems just need a timeout.&lt;/p&gt;
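&lt;p&gt;To see why the blunt approach is safe, here's a minimal sketch using plain GNU coreutils (nothing project-specific): &lt;code&gt;timeout&lt;/code&gt; kills the wrapped command when the limit expires and reports exit status 124, so a hung command can never wedge the job.&lt;br&gt;
&lt;/p&gt;

```shell
# timeout sends SIGTERM when the limit expires and exits with
# status 124, so the caller can tell "hung" apart from "failed".
st=0
timeout 1s sleep 5 || st=$?
echo "exit status: $st"
```

&lt;p&gt;This prints &lt;code&gt;exit status: 124&lt;/code&gt;; the 30-second wrapper in the workflow behaves the same way.&lt;/p&gt;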

&lt;h2&gt;
  
  
  Why Buildx Needs Our Image
&lt;/h2&gt;

&lt;p&gt;While debugging, I learned something about Docker Buildx's default behavior. When you create a builder without specifying an image, Buildx uses &lt;code&gt;moby/buildkit:buildx-stable-1&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;That's an official multi-arch image maintained by the Docker team. It supports amd64, arm64, s390x, and ppc64le, but not riscv64. (Why would it? We're still the weird cousins at the architecture family reunion.)&lt;/p&gt;

&lt;p&gt;So if you're on RISC-V64 and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker buildx create &lt;span class="nt"&gt;--name&lt;/span&gt; mybuilder &lt;span class="nt"&gt;--use&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Buildx tries to pull &lt;code&gt;moby/buildkit:buildx-stable-1&lt;/code&gt;. The pull succeeds. Docker can pull multi-arch manifests even when your platform isn't supported. But when the container starts, you get:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;exec /sbin/docker-init: no such file or directory
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The image has binaries for amd64, arm64, etc. It doesn't have riscv64. The container can't run. It's like downloading a Windows installer on macOS and being confused when it doesn't work.&lt;/p&gt;

&lt;p&gt;The solution is explicit:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker buildx create &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--name&lt;/span&gt; mybuilder &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--driver&lt;/span&gt; docker-container &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--driver-opt&lt;/span&gt; &lt;span class="nv"&gt;image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ghcr.io/gounthar/buildkit-riscv64:latest &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--use&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now Buildx uses our RISC-V64 image instead of the official one. Everything works.&lt;/p&gt;

&lt;p&gt;This is why our APT packages (which install buildkitd and buildctl as standalone binaries) aren't sufficient for Buildx integration. Buildx needs a container. The binaries alone aren't enough. It's a different use case requiring a different solution.&lt;/p&gt;

&lt;h2&gt;
  
  
  Documentation Updates
&lt;/h2&gt;

&lt;p&gt;After fixing the technical problems, I updated the documentation to prevent future confusion. (Because if I don't document it now, I'll forget it in three weeks and have to debug the same issues all over again.)&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;README.md&lt;/strong&gt;: Added a warning about the official BuildKit image's limitations. Explained that Option 1 (container image) is required for Buildx, while Option 3 (APT binaries) is for standalone use.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Release notes template&lt;/strong&gt;: Added clarifying text to the APT installation section explaining that it provides standalone binaries, not Buildx integration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Workflow file&lt;/strong&gt;: Added comments explaining why the timeout exists and what the test actually validates.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The goal was making the distinction clear: installing BuildKit binaries vs. using BuildKit with Docker Buildx are different use cases requiring different solutions. One gives you the tools. The other gives you the integration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lessons From Post-Deployment Debugging
&lt;/h2&gt;

&lt;p&gt;Let's talk about what I learned from this mess. (Because if I don't extract lessons, it's just pain without purpose.)&lt;/p&gt;

&lt;h3&gt;
  
  
  Testing in Isolation Isn't Enough
&lt;/h3&gt;

&lt;p&gt;I tested that the container built. I tested that buildkitd and buildctl worked. I didn't test the worker initialization path or the Buildx integration. Those failures only appeared when users tried real workflows.&lt;/p&gt;

&lt;p&gt;Here's the thing: testing individual components is necessary but not sufficient. You need to test how components interact with the systems that will use them. It's not enough to verify the car starts; you need to verify it actually drives.&lt;/p&gt;

&lt;h3&gt;
  
  
  Error Messages Lie Sometimes
&lt;/h3&gt;

&lt;p&gt;"Access denied" was actually cached credentials. "No worker found" was actually missing runc. The actual problem is often one layer deeper than the error message suggests.&lt;/p&gt;

&lt;p&gt;When debugging, I've learned to ignore the surface error and look at what's actually failing in the logs. Error messages are like symptoms: they point in a direction, but they're not the diagnosis.&lt;/p&gt;

&lt;h3&gt;
  
  
  Version Consistency Matters
&lt;/h3&gt;

&lt;p&gt;Using Debian's runc 1.1.12 worked. But mixing our runc 1.3.0 builds with Debian's old version created potential compatibility issues down the line. Better to use our own packages consistently.&lt;/p&gt;

&lt;p&gt;This applies everywhere: if you're building something yourself, use your builds everywhere. Don't mix upstream and self-built components unless there's a good reason. Consistency prevents weird edge cases six months from now.&lt;/p&gt;

&lt;h3&gt;
  
  
  Timeouts Are Not Optional
&lt;/h3&gt;

&lt;p&gt;The workflow hung because I didn't anticipate buildkitd's initialization behavior. A simple 30-second timeout would've prevented the runners from getting stuck.&lt;/p&gt;

&lt;p&gt;Every command that might hang should have a timeout. Always. Even commands that "should never hang." Especially those commands, actually.&lt;/p&gt;
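&lt;p&gt;Besides wrapping individual commands, GitHub Actions can enforce a job-level limit as a backstop. A hypothetical sketch (the key is &lt;code&gt;timeout-minutes&lt;/code&gt;; the job name and labels here are illustrative, not the repository's actual workflow):&lt;br&gt;
&lt;/p&gt;

```yaml
# Hypothetical job stanza: cap the whole job so a hung step can't
# wedge the self-hosted runner even if a command-level timeout is missed.
jobs:
  test-container:
    runs-on: [self-hosted, riscv64]
    timeout-minutes: 15
```

&lt;p&gt;Belt and braces: the command-level &lt;code&gt;timeout&lt;/code&gt; catches the known hang, and the job-level limit catches the ones you haven't met yet.&lt;/p&gt;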

&lt;h3&gt;
  
  
  Documentation Needs User Perspective
&lt;/h3&gt;

&lt;p&gt;I documented what the package contained and how to install it. I didn't document &lt;em&gt;why&lt;/em&gt; you'd choose one installation method over another. That context only became clear after users tried the wrong approach and got confused.&lt;/p&gt;

&lt;p&gt;Good documentation anticipates misunderstandings and addresses them proactively. It's not enough to explain what something does; you need to explain when to use it and when not to.&lt;/p&gt;

&lt;h2&gt;
  
  
  Current Status
&lt;/h2&gt;

&lt;p&gt;BuildKit for RISC-V64 now works. The container initializes workers correctly. Buildx integration works. The APT packages provide standalone binaries for people who need them. The documentation explains the differences.&lt;/p&gt;

&lt;p&gt;The image is at &lt;code&gt;ghcr.io/gounthar/buildkit-riscv64:latest&lt;/code&gt;. The packages are in the &lt;code&gt;apt-repo&lt;/code&gt; branch. The workflows run weekly and track upstream releases.&lt;/p&gt;

&lt;p&gt;It took three PRs to fix the container (#227, #228, #229), one more to fix my typo (#231), one workflow update to fix the CI (#230), and several documentation updates to clarify the installation options. None of this was in the original "successful deployment" announcement.&lt;/p&gt;

&lt;p&gt;That's typical, right? The first version works in theory. The second version works in practice. The difference is usually user feedback and debugging time. And maybe a bit of humility.&lt;/p&gt;

&lt;h2&gt;
  
  
  Takeaways &amp;amp; Tips for the Team
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Test integration, not just compilation&lt;/strong&gt; - Your binaries might work perfectly in isolation and fail completely when integrated with the tools that actually use them&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Add timeouts to everything&lt;/strong&gt; - Even commands that "should never hang" will eventually hang when you least expect it&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cache invalidation is hard&lt;/strong&gt; - "Access denied" might just be cached credentials from when the resource actually was denied&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Worker initialization isn't optional&lt;/strong&gt; - BuildKit requires at least one worker (OCI or containerd) to function; the daemon won't start without it&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ENTRYPOINT vs CMD matters&lt;/strong&gt; - Put the static parts in ENTRYPOINT, put the configurable parts in CMD, or runtime arguments won't work&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use your own packages&lt;/strong&gt; - If you're building runc 1.3.0, use that instead of Debian's 1.1.12; version consistency prevents future headaches&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Document the why, not just the what&lt;/strong&gt; - Explain when to use container images vs APT packages; users need context, not just instructions&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;The BuildKit container is available at &lt;a href="https://github.com/gounthar/docker-for-riscv64" rel="noopener noreferrer"&gt;https://github.com/gounthar/docker-for-riscv64&lt;/a&gt;. The fixes discussed here are in pull requests #227, #228, #229, #230, and #231. Documentation is in README.md and BUILDKIT-TESTING.md.&lt;/p&gt;

&lt;p&gt;If you're running Docker on RISC-V64 and want to try multi-platform builds, the setup is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker pull ghcr.io/gounthar/buildkit-riscv64:latest
docker buildx create &lt;span class="nt"&gt;--name&lt;/span&gt; riscv-builder &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--driver&lt;/span&gt; docker-container &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--driver-opt&lt;/span&gt; &lt;span class="nv"&gt;image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ghcr.io/gounthar/buildkit-riscv64:latest &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--use&lt;/span&gt;
docker buildx inspect &lt;span class="nt"&gt;--bootstrap&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then build something:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker buildx build &lt;span class="nt"&gt;--platform&lt;/span&gt; linux/riscv64,linux/amd64 &lt;span class="nt"&gt;-t&lt;/span&gt; yourimage:latest &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It should work. If it doesn't, open an issue with logs. (And I'll probably discover yet another thing I didn't test properly.)&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This article continues from &lt;a href="https://bruno.verachten.fr/2025/12/09/buildkit-for-riscv64-when-your-demo-decides-to-betray-you/" rel="noopener noreferrer"&gt;BuildKit for RISC-V64: When Your Demo Decides to Betray You&lt;/a&gt;, which covered the initial build and packaging process.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>buildkit</category>
      <category>riscv64</category>
      <category>docker</category>
      <category>containers</category>
    </item>
    <item>
      <title>BuildKit for RISC-V64: When Your Demo Decides to Betray You</title>
      <dc:creator>Bruno Verachten</dc:creator>
      <pubDate>Wed, 10 Dec 2025 09:26:51 +0000</pubDate>
      <link>https://forem.com/gounthar/buildkit-for-risc-v64-when-your-demo-decides-to-betray-you-18pn</link>
      <guid>https://forem.com/gounthar/buildkit-for-risc-v64-when-your-demo-decides-to-betray-you-18pn</guid>
      <description>&lt;p&gt;Picture this: I'm preparing a tech demo, feeling pretty confident about showing Docker on RISC-V64. Everything's going great until Step 5, where I need Docker Buildx multi-platform builds. Which needs BuildKit. Which doesn't exist for RISC-V64.&lt;/p&gt;

&lt;p&gt;You know that special kind of panic when you realize your demo has a massive hole in it? That's where I was. I could've just skipped that section, mumbled something about "future work," and moved on. But where's the fun in that? Instead, I spent the next four hours going down a rabbit hole of build automation, packaging quirks, and version detection bugs. Spoiler: I won.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem: No BuildKit, No Demo
&lt;/h2&gt;

&lt;p&gt;Let's talk about Docker Buildx for a second. It's that CLI plugin everyone uses for multi-platform builds—the &lt;code&gt;docker buildx build&lt;/code&gt; command you've probably typed a hundred times. But here's the thing: Buildx itself doesn't actually do the work. It's more like a friendly interface that talks to the real workhorse.&lt;/p&gt;

&lt;p&gt;That workhorse? BuildKit.&lt;/p&gt;

&lt;p&gt;BuildKit (from &lt;code&gt;moby/buildkit&lt;/code&gt;) is the actual build engine. It runs as a daemon called &lt;code&gt;buildkitd&lt;/code&gt; and handles all the heavy lifting—parallel builds, caching, cross-platform compilation, the works. When you run &lt;code&gt;docker buildx create&lt;/code&gt;, Docker spins up a BuildKit container behind the scenes to do its magic.&lt;/p&gt;

&lt;p&gt;On RISC-V64, that "magic" looked more like a disaster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker buildx create &lt;span class="nt"&gt;--name&lt;/span&gt; mybuilder &lt;span class="nt"&gt;--use&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;docker buildx inspect &lt;span class="nt"&gt;--bootstrap&lt;/span&gt;
&lt;span class="nb"&gt;exec&lt;/span&gt; /sbin/docker-init: no such file or directory
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Ouch.&lt;/p&gt;

&lt;p&gt;The official &lt;code&gt;moby/buildkit:buildx-stable-1&lt;/code&gt; image? Doesn't support RISC-V64. Without a working BuildKit daemon, Buildx is just a pretty CLI that does nothing. My carefully planned demo was officially stuck.&lt;/p&gt;




&lt;h2&gt;
  
  
  Analysis: What Does BuildKit Actually Need?
&lt;/h2&gt;

&lt;p&gt;Now, before I went all cowboy and started writing code, I did what any reasonable person would do: I investigated. What would it actually take to get BuildKit running on RISC-V64?&lt;/p&gt;

&lt;p&gt;Here's where things got interesting.&lt;/p&gt;

&lt;h3&gt;
  
  
  Native RISC-V64 Support Already Exists
&lt;/h3&gt;

&lt;p&gt;Unlike Docker Engine, BuildKit has native RISC-V64 support baked right in. Check out their &lt;code&gt;docker-bake.hcl&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;target&lt;/span&gt; &lt;span class="s2"&gt;"binaries-cross"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;platforms&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="s2"&gt;"darwin/amd64"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s2"&gt;"darwin/arm64"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s2"&gt;"linux/amd64"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s2"&gt;"linux/arm/v7"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s2"&gt;"linux/arm64"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s2"&gt;"linux/s390x"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s2"&gt;"linux/ppc64le"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s2"&gt;"linux/riscv64"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# &amp;lt;-- Look at that!&lt;/span&gt;
    &lt;span class="s2"&gt;"windows/amd64"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s2"&gt;"windows/arm64"&lt;/span&gt;
  &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;RISC-V64 is sitting right there in their cross-compilation targets. The machinery exists. The build system knows what to do. Nobody had yet bothered to build and package it for distribution.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pure Go, No CGO Nightmares
&lt;/h3&gt;

&lt;p&gt;Even better news: BuildKit's core components are pure Go. Let me show you what I mean:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="c"&gt;# From BuildKit's Dockerfile&lt;/span&gt;
&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; CGO_ENABLED=0  # buildkitd&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; CGO_ENABLED=0  # buildctl&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You know what this means? No wrestling with C libraries. No cross-compilation toolchain nightmares. No hunting down RISC-V64 versions of random dependencies. Just pure, beautiful Go that compiles to a single static binary.&lt;/p&gt;

&lt;p&gt;My BananaPi F3 runner with Go 1.25.3? Totally sufficient.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Build Command
&lt;/h3&gt;

&lt;p&gt;The actual build command turned out to be almost embarrassingly simple:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;buildkit
&lt;span class="nv"&gt;GOOS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;linux &lt;span class="nv"&gt;GOARCH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;riscv64 &lt;span class="nv"&gt;CGO_ENABLED&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0 make binaries
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. This produces statically linked &lt;code&gt;buildkitd&lt;/code&gt; and &lt;code&gt;buildctl&lt;/code&gt; binaries with zero runtime dependencies. If it's too easy, it's no fun, right?&lt;/p&gt;

&lt;p&gt;Wrong. I was about to discover that packaging these innocent-looking binaries would be where the real adventure began.&lt;/p&gt;




&lt;h2&gt;
  
  
  Phase 1: Building the Binaries
&lt;/h2&gt;

&lt;p&gt;I created &lt;code&gt;buildkit-weekly-build.yml&lt;/code&gt; following the patterns I'd already established for Docker Engine, CLI, and Compose in this repository. The workflow's pretty straightforward:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Clone the BuildKit submodule&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;make binaries&lt;/code&gt; with the RISC-V64 target&lt;/li&gt;
&lt;li&gt;Create a GitHub release with the binaries&lt;/li&gt;
&lt;li&gt;Tag releases as &lt;code&gt;buildkit-vX.Y.Z-riscv64&lt;/code&gt; for official versions&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The first build succeeded on the first try. (I know, I was suspicious too.)&lt;/p&gt;

&lt;p&gt;Let's verify what we actually got:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;file buildkitd
buildkitd: ELF 64-bit LSB executable, UCB RISC-V, version 1 &lt;span class="o"&gt;(&lt;/span&gt;SYSV&lt;span class="o"&gt;)&lt;/span&gt;, statically linked

&lt;span class="nv"&gt;$ &lt;/span&gt;ldd buildkitd
not a dynamic executable

&lt;span class="nv"&gt;$ &lt;/span&gt;./buildkitd &lt;span class="nt"&gt;--version&lt;/span&gt;
buildkitd github.com/moby/buildkit v0.26.2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Clean RISC-V64 binaries, statically linked, working version detection. Phase 1 complete. Time to package these bad boys.&lt;/p&gt;




&lt;h2&gt;
  
  
  Phase 2: RPM Packaging - The Suspiciously Easy One
&lt;/h2&gt;

&lt;p&gt;I created &lt;code&gt;build-buildkit-rpm.yml&lt;/code&gt; with a spec file in &lt;code&gt;rpm-buildkit/&lt;/code&gt;. The workflow downloads the binaries from the release, packages them up, and uploads the RPM.&lt;/p&gt;

&lt;p&gt;First attempt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;buildkit-0.26.2-1.riscv64.rpm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It... just worked?&lt;/p&gt;

&lt;p&gt;RPM tooling on Fedora had absolutely no complaints about Go binaries. Clean build. No warnings. No errors. Perfect.&lt;/p&gt;

&lt;p&gt;This set completely unrealistic expectations for what came next.&lt;/p&gt;




&lt;h2&gt;
  
  
  Phase 3: Debian Packaging - The dh_dwz Saga
&lt;/h2&gt;

&lt;p&gt;Debian packaging started the same way. I created &lt;code&gt;build-buildkit-package.yml&lt;/code&gt; with packaging files in &lt;code&gt;debian-buildkit/&lt;/code&gt;. Download binaries, run &lt;code&gt;dpkg-buildpackage&lt;/code&gt;, upload the .deb.&lt;/p&gt;

&lt;p&gt;Here's where things got tricky:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dh_dwz
dwz: buildkitd (section .debug_info): '.debug_info' section not present
dh_dwz: error: dwz -q -- buildkitd buildctl returned exit code 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Great. Just great.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Even Is dwz?
&lt;/h3&gt;

&lt;p&gt;So, &lt;code&gt;dwz&lt;/code&gt; is a DWARF optimizer. It compresses debug information in ELF binaries to reduce package size. Debian's build system runs it by default because Debian cares deeply about making packages smaller.&lt;/p&gt;

&lt;p&gt;Here's the problem: Go binaries don't use traditional DWARF debug sections. Go has its own debug format that's completely different. When &lt;code&gt;dwz&lt;/code&gt; tries to parse Go binaries, it gets confused and fails spectacularly.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Fix (That I Should've Remembered)
&lt;/h3&gt;

&lt;p&gt;The solution? Tell Debian's build system to skip &lt;code&gt;dwz&lt;/code&gt; entirely. I added an override to &lt;code&gt;debian-buildkit/rules&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight make"&gt;&lt;code&gt;&lt;span class="nl"&gt;override_dh_dwz&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;
    &lt;span class="c"&gt;# Skip DWARF compression - Go binaries don't have compatible debug info&lt;/span&gt;
    &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Skipping dwz - not applicable to Go binaries"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now here's the embarrassing part: this pattern already existed in the repository for other Go packages (docker-cli, docker-compose). I'd dealt with this exact problem before. I just... forgot to copy it when creating the BuildKit packaging.&lt;/p&gt;

&lt;p&gt;(I work on Windows with WSL2, because apparently I enjoy making my life unnecessarily complicated, and now I'm forgetting my own workarounds on top of it.)&lt;/p&gt;

&lt;p&gt;PR #222 added this fix, and I moved on feeling slightly sheepish.&lt;/p&gt;




&lt;h2&gt;
  
  
  Phase 4: The Version Detection Crisis
&lt;/h2&gt;

&lt;p&gt;With &lt;code&gt;dh_dwz&lt;/code&gt; fixed, I confidently triggered another build. Surely this would work now, right?&lt;/p&gt;

&lt;p&gt;New error:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dch: error: new version '0.0.20251209' is less than the current version '0.17.3'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Wait, what?&lt;/p&gt;

&lt;p&gt;The workflow was trying to package a dev build instead of the official release. But why?&lt;/p&gt;

&lt;h3&gt;
  
  
  The Dev Build Problem
&lt;/h3&gt;

&lt;p&gt;Let me explain how my weekly build workflow creates releases. There are two types:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Official releases&lt;/strong&gt;: &lt;code&gt;buildkit-v0.26.2-riscv64&lt;/code&gt; (tracks upstream versions)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dev builds&lt;/strong&gt;: &lt;code&gt;buildkit-v20251209-dev&lt;/code&gt; (weekly snapshots from main branch)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Dev builds get a synthetic version number using the format &lt;code&gt;0.0.YYYYMMDD&lt;/code&gt;. This makes them easy to identify as development snapshots, but it also makes them &lt;em&gt;lower&lt;/em&gt; than any real semver version like &lt;code&gt;0.17.3&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The packaging workflow was auto-detecting the most recent release and finding a dev build first. It tried to update the changelog from &lt;code&gt;0.17.3&lt;/code&gt; to &lt;code&gt;0.0.20251209&lt;/code&gt;, which Debian rightfully rejected as a downgrade.&lt;/p&gt;

&lt;p&gt;Here's the thing: Debian doesn't let you go backward in version numbers. That would break the entire package management system. So my "clever" versioning scheme for dev builds was actually creating a mess.&lt;/p&gt;
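&lt;p&gt;You can reproduce the comparison with plain version sorting (a minimal sketch using GNU &lt;code&gt;sort -V&lt;/code&gt;, independent of the actual packaging tooling):&lt;br&gt;
&lt;/p&gt;

```shell
# sort -V orders version strings the way package tools compare them:
# the synthetic dev version sorts below the real release, which is
# exactly why dch rejected the "update" as a downgrade.
printf '%s\n' 0.0.20251209 0.17.3 | sort -V
```

&lt;p&gt;The dev version &lt;code&gt;0.0.20251209&lt;/code&gt; comes out first, i.e. lowest.&lt;/p&gt;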

&lt;h3&gt;
  
  
  Fix 1: Only Match Official Releases
&lt;/h3&gt;

&lt;p&gt;I changed the awk pattern in both packaging workflows. Before, it looked like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Before: matches any buildkit-v* tag&lt;/span&gt;
&lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="nt"&gt;-F&lt;/span&gt;&lt;span class="s1"&gt;'\t'&lt;/span&gt; &lt;span class="s1"&gt;'$3 ~ /^buildkit-v/ {print $3; exit}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This matched everything—official releases and dev builds. I needed it to be more specific:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# After: only matches semver format&lt;/span&gt;
&lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="nt"&gt;-F&lt;/span&gt;&lt;span class="s1"&gt;'\t'&lt;/span&gt; &lt;span class="s1"&gt;'$3 ~ /^buildkit-v[0-9]+\.[0-9]+\.[0-9]+/ {print $3; exit}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now &lt;code&gt;buildkit-v0.26.2-riscv64&lt;/code&gt; matches perfectly. &lt;code&gt;buildkit-v20251209-dev&lt;/code&gt; doesn't match at all.&lt;/p&gt;
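&lt;p&gt;A quick way to convince yourself: feed both tag styles through the pattern. The tab-separated input here mimics the three-column output the workflow parses (sample data, not the real release list):&lt;/p&gt;

```shell
# The third column holds the tag; the dev tag fails the semver regex,
# the official tag matches and is printed.
printf 'a\tb\tbuildkit-v20251209-dev\na\tb\tbuildkit-v0.26.2-riscv64\n' |
  awk -F'\t' '$3 ~ /^buildkit-v[0-9]+\.[0-9]+\.[0-9]+/ {print $3; exit}'
# prints buildkit-v0.26.2-riscv64
```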

&lt;p&gt;PR #224 implemented this fix.&lt;/p&gt;

&lt;h3&gt;
  
  
  Fix 2: Auto-Detection Should Default to Nothing
&lt;/h3&gt;

&lt;p&gt;The workflow_dispatch input had a hardcoded default that was causing problems:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Before&lt;/span&gt;
&lt;span class="na"&gt;workflow_dispatch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;inputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;release_tag&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;default&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;buildkit-v20251209-dev'&lt;/span&gt;  &lt;span class="c1"&gt;# Hardcoded to dev build&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This meant every manual trigger defaulted to a dev build unless you explicitly changed it. Not great.&lt;/p&gt;

&lt;p&gt;Changed to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# After&lt;/span&gt;
&lt;span class="na"&gt;workflow_dispatch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;inputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;release_tag&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;BuildKit&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;release&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;tag&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;(leave&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;empty&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;to&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;auto-detect&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;latest&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;official)'&lt;/span&gt;
      &lt;span class="na"&gt;required&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="na"&gt;default&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When the input is empty, the workflow auto-detects the latest official release using the semver awk pattern. No more accidental dev build packaging.&lt;/p&gt;
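&lt;p&gt;Putting the two fixes together, the detection step reduces to a few lines of shell. This is a simplified sketch: &lt;code&gt;TAGS&lt;/code&gt; stands in for the tag list the real workflow gets from &lt;code&gt;gh release list&lt;/code&gt;, and the variable names are illustrative:&lt;/p&gt;

```shell
RELEASE_TAG=""   # simulates an empty workflow_dispatch input
# Stand-in for the tag list returned by `gh release list`
TAGS='buildkit-v20251209-dev
buildkit-v0.26.2-riscv64'

if [ -z "$RELEASE_TAG" ]; then
  # Auto-detect: first tag that matches the semver pattern
  RELEASE_TAG=$(printf '%s\n' "$TAGS" \
    | grep -E '^buildkit-v[0-9]+\.[0-9]+\.[0-9]+' | head -n1)
fi
echo "$RELEASE_TAG"   # buildkit-v0.26.2-riscv64
```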

&lt;p&gt;PR #225 completed this fix.&lt;/p&gt;




&lt;h2&gt;
  
  
  Phase 5: Success (Finally!)
&lt;/h2&gt;

&lt;p&gt;After merging all three PRs (#222, #224, #225), I triggered the Debian workflow manually with &lt;code&gt;buildkit-v0.26.2-riscv64&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The build completed without errors. The release now has everything we need:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;File&lt;/th&gt;
&lt;th&gt;Size&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;buildkitd&lt;/code&gt; (binary)&lt;/td&gt;
&lt;td&gt;~45 MB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;buildctl&lt;/code&gt; (binary)&lt;/td&gt;
&lt;td&gt;~18 MB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;buildkit-0.26.2-1.riscv64.rpm&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;~58 MB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;buildkit_0.26.2-riscv64-1_riscv64.deb&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;~58 MB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;buildkit-dbgsym_0.26.2-riscv64-1_riscv64.deb&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;~300 KB&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;My demo's Step 5? No longer blocked.&lt;/p&gt;




&lt;h2&gt;
  
  
  Installation: Actually Using This Thing
&lt;/h2&gt;

&lt;p&gt;So you want to try this yourself? Here's how.&lt;/p&gt;

&lt;h3&gt;
  
  
  Debian/Ubuntu
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Download and install&lt;/span&gt;
wget https://github.com/gounthar/docker-for-riscv64/releases/download/buildkit-v0.26.2-riscv64/buildkit_0.26.2-riscv64-1_riscv64.deb
&lt;span class="nb"&gt;sudo &lt;/span&gt;dpkg &lt;span class="nt"&gt;-i&lt;/span&gt; buildkit_0.26.2-riscv64-1_riscv64.deb

&lt;span class="c"&gt;# Verify it actually works&lt;/span&gt;
buildkitd &lt;span class="nt"&gt;--version&lt;/span&gt;
buildctl &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Fedora/RHEL
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Download and install&lt;/span&gt;
wget https://github.com/gounthar/docker-for-riscv64/releases/download/buildkit-v0.26.2-riscv64/buildkit-0.26.2-1.riscv64.rpm
&lt;span class="nb"&gt;sudo &lt;/span&gt;dnf &lt;span class="nb"&gt;install&lt;/span&gt; ./buildkit-0.26.2-1.riscv64.rpm

&lt;span class="c"&gt;# Verify it actually works&lt;/span&gt;
buildkitd &lt;span class="nt"&gt;--version&lt;/span&gt;
buildctl &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  What This Actually Enables
&lt;/h2&gt;

&lt;p&gt;With BuildKit properly packaged, Docker users on RISC-V64 can now:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Run buildkitd as a system service&lt;/strong&gt; for persistent build caching (no more rebuilding everything every time)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use Buildx for multi-platform builds&lt;/strong&gt; targeting riscv64 alongside amd64, arm64, and other architectures&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build container images locally&lt;/strong&gt; without relying on remote builders or QEMU emulation&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;My demo's Step 5 works now. More importantly, anyone else trying to use Docker Buildx on RISC-V64 won't hit the same wall I did.&lt;/p&gt;




&lt;h2&gt;
  
  
  Key Learnings (So You Don't Have to Learn Them the Hard Way)
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Go binaries need &lt;code&gt;override_dh_dwz&lt;/code&gt; in Debian packaging. The dwz optimizer cannot handle Go's debug format and will fail the build. Just skip it entirely—Go binaries are already pretty well optimized.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; When supporting both official releases and dev builds, use semver regex patterns to distinguish them. &lt;code&gt;v[0-9]+\.[0-9]+\.[0-9]+&lt;/code&gt; matches real versions. &lt;code&gt;vYYYYMMDD-dev&lt;/code&gt; does not. This prevents accidental downgrades in package version numbers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Unlike Docker Engine (which required multiple patches and workarounds), BuildKit requires zero RISC-V64 patches. It builds directly from upstream with &lt;code&gt;CGO_ENABLED=0&lt;/code&gt;. Sometimes things are actually easier than you expect.&lt;/p&gt;
&lt;/blockquote&gt;
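&lt;p&gt;For reference, the first note boils down to one empty override target in &lt;code&gt;debian/rules&lt;/code&gt;. A minimal sketch (the rest of the packaging is omitted; this is illustrative, not the project's actual rules file):&lt;/p&gt;

```make
#!/usr/bin/make -f

%:
	dh $@

# dwz cannot process Go's DWARF output, so skip the step entirely.
override_dh_dwz:
```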




&lt;h2&gt;
  
  
  References &amp;amp; Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;BuildKit repository: &lt;a href="https://github.com/moby/buildkit" rel="noopener noreferrer"&gt;https://github.com/moby/buildkit&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Docker Buildx: &lt;a href="https://github.com/docker/buildx" rel="noopener noreferrer"&gt;https://github.com/docker/buildx&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Release with packages: &lt;a href="https://github.com/gounthar/docker-for-riscv64/releases/tag/buildkit-v0.26.2-riscv64" rel="noopener noreferrer"&gt;https://github.com/gounthar/docker-for-riscv64/releases/tag/buildkit-v0.26.2-riscv64&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Build analysis: BUILDKIT-RISCV64-ANALYSIS.md&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Takeaways &amp;amp; Tips
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Always check upstream build configs first&lt;/strong&gt; - BuildKit already supported RISC-V64; I just needed to actually build it&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Go binaries need special handling in Debian&lt;/strong&gt; - Add &lt;code&gt;override_dh_dwz&lt;/code&gt; to your rules file or the build will fail&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Version schemes matter for packaging&lt;/strong&gt; - Dev builds with &lt;code&gt;0.0.YYYYMMDD&lt;/code&gt; versions will confuse package managers expecting semver&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use regex patterns to filter releases&lt;/strong&gt; - Distinguish official releases from dev builds automatically with semver patterns&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RPM packaging is often easier than .deb&lt;/strong&gt; - No dwz issues, cleaner workflows, fewer surprises&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Total time from blocked demo to working packages: approximately 4 hours. Not bad for bringing multi-platform build support to an entire architecture.&lt;/p&gt;

&lt;p&gt;If you're running Docker on RISC-V64 and want to try Buildx, give it a whirl; the packages are ready and waiting.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Header image: Photo by &lt;a href="https://unsplash.com/@diegogonzalez" rel="noopener noreferrer"&gt;Diego González&lt;/a&gt; on &lt;a href="https://unsplash.com/photos/qVv32VbZfac" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>buildkit</category>
      <category>riscv64</category>
      <category>docker</category>
      <category>buildx</category>
    </item>
    <item>
      <title>Fixing Concurrent GitHub Actions Workflows: Multi-Architecture Package Repository Guide</title>
      <dc:creator>Bruno Verachten</dc:creator>
      <pubDate>Tue, 25 Nov 2025 16:35:00 +0000</pubDate>
      <link>https://forem.com/gounthar/fixing-concurrent-github-actions-workflows-multi-architecture-package-repository-guide-emi</link>
      <guid>https://forem.com/gounthar/fixing-concurrent-github-actions-workflows-multi-architecture-package-repository-guide-emi</guid>
      <description>&lt;p&gt;Building and distributing software packages across multiple architectures (x86_64, aarch64, riscv64) sounds great in theory. But when I tried to automate the entire pipeline with GitHub Actions—from Docker builds to APT/RPM repositories—everything started colliding. Workflows fought over the same repository branch, RPM builds failed with mysterious "185+ unpackaged files" errors, and dependency declarations became stale. Here's the technical journey of making it all work smoothly, with lessons about concurrent git operations, RPM spec file semantics, and the reset-and-restore pattern.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Multi-Architecture CI/CD Builds Matter for Modern DevOps
&lt;/h2&gt;

&lt;p&gt;I had built an impressive automated infrastructure for OpenSCAD: Docker builds for three architectures, automated Debian and RPM package extraction, and automatic updates to APT and RPM repositories hosted on GitHub Pages. On paper, it was beautiful. In practice, my workflows were fighting each other like drunk sailors.&lt;/p&gt;

&lt;p&gt;Why build for three architectures? Because RISC-V (riscv64) is the future of open hardware, ARM (aarch64) is everywhere from Raspberry Pis to data centers, and x86_64 is still the dominant desktop architecture. Supporting all three means OpenSCAD can run on the widest possible range of hardware—from experimental RISC-V development boards to production ARM servers to traditional Intel/AMD machines.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Here's why this matters for the OpenSCAD community specifically:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;ARM64 adoption&lt;/strong&gt;: Apple Silicon Macs, Raspberry Pi 4/5, and ARM-based cloud instances are increasingly common for developers and makers. Not supporting ARM64 means alienating a growing user base.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RISC-V experimentation&lt;/strong&gt;: While still niche, RISC-V is becoming the go-to architecture for open hardware projects, educational institutions, and researchers. Supporting it now positions OpenSCAD as the CAD tool for the open hardware movement.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;x86_64 compatibility&lt;/strong&gt;: Still the dominant architecture for Windows and Linux desktops where most 3D modeling happens. Can't abandon the core user base.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The metric that convinced me this was worth the effort? GitHub's download statistics showed 15% of macOS users and 8% of Linux users were on ARM64 architectures as of November 2024. That's not a rounding error—that's a significant chunk of potential users.&lt;/p&gt;

&lt;p&gt;But automating daily builds across all these platforms? That's where things got interesting.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Challenge: GitHub Actions Concurrency Conflicts
&lt;/h2&gt;

&lt;p&gt;The symptoms were varied and frustrating:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Concurrent workflow conflicts&lt;/strong&gt;: Multiple packaging workflows would try to update the &lt;code&gt;gh-pages&lt;/code&gt; branch simultaneously, causing git push failures&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RPM packaging failures&lt;/strong&gt;: The build would succeed, but RPM complained about "185+ unpackaged files found" and refused to create packages&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Debian dependency issues&lt;/strong&gt;: Packages built fine but couldn't install on Debian Trixie because they declared dependencies for the old Bookworm versions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;YAML syntax errors&lt;/strong&gt;: Multi-line commit messages in workflows were silently failing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stale documentation&lt;/strong&gt;: The README hadn't been updated to reflect the new repository structure and installation methods&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each issue seemed simple in isolation. Together, they represented a systemic problem with my build infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Complete Fix: Eight Commits
&lt;/h2&gt;

&lt;p&gt;Before diving into the technical details, here's the roadmap of fixes that solved these issues:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;ef3f001f8&lt;/strong&gt;: Fixed Debian package dependencies (libmimalloc2.1→libmimalloc3, libzip4→libzip5)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;fe9a7d3b7&lt;/strong&gt;: Fixed RPM packaging by changing &lt;code&gt;%dir&lt;/code&gt; to recursive inclusion&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ad0452a22&lt;/strong&gt;: Added concurrency control to release workflow&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;1f809a98b&lt;/strong&gt;: Implemented reset-and-restore pattern in RPM repository update&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;e3791b3c1&lt;/strong&gt;: Added reset-and-restore pattern to APT repository update&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;15ac24c20&lt;/strong&gt;: Fixed YAML syntax for commit messages&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;928536698&lt;/strong&gt;: Added retry logic with --clobber for asset uploads&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;(Final)&lt;/strong&gt;: Comprehensively updated README.md&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each commit addressed a specific issue, making the changes reviewable and revertible if needed. Now let's explore how each solution works.&lt;/p&gt;

&lt;h2&gt;
  
  
  Solution Part 1: Taming Concurrent Workflows
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Issue 1: The Concurrent Repository Update Problem
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Understanding the Collision
&lt;/h4&gt;

&lt;p&gt;The architecture was straightforward: when a Docker build completed, it triggered multiple packaging workflows in parallel - one for Debian packages, one for RPM packages. Each workflow would:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Checkout the &lt;code&gt;gh-pages&lt;/code&gt; branch&lt;/li&gt;
&lt;li&gt;Download the artifacts&lt;/li&gt;
&lt;li&gt;Generate repository metadata&lt;/li&gt;
&lt;li&gt;Commit and push changes&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The problem? When two workflows ran simultaneously, both would checkout the same commit, make different changes, and try to push. The second one would fail because the branch had moved forward.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Here's what the actual git error looked like:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;git push origin gh-pages
&lt;span class="go"&gt;To https://github.com/gounthar/openscad.git
&lt;/span&gt;&lt;span class="gp"&gt; ! [rejected]        gh-pages -&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;gh-pages &lt;span class="o"&gt;(&lt;/span&gt;non-fast-forward&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="go"&gt;error: failed to push some refs to 'https://github.com/gounthar/openscad.git'
hint: Updates were rejected because the tip of your current branch is behind
hint: its remote counterpart. Integrate the remote changes (e.g.
hint: 'git merge origin/gh-pages') before pushing again.
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That "non-fast-forward" message is git's polite way of saying: "Someone else changed this branch while you were working, and I don't know how to combine your changes with theirs."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How workflow_run triggers actually work&lt;/strong&gt;: GitHub Actions has a &lt;code&gt;workflow_run&lt;/code&gt; trigger that fires when another workflow completes. The key thing to understand is that these triggers are &lt;em&gt;asynchronous&lt;/em&gt;—multiple workflows can trigger simultaneously from the same completion event. There's no built-in queuing or serialization. This means when the Docker build finishes for all three architectures, both the APT packaging workflow and the RPM packaging workflow receive their triggers at approximately the same instant (within milliseconds of each other).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What GitHub's API actually returns&lt;/strong&gt; when you query for workflow runs using &lt;code&gt;gh run list&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;gh run list &lt;span class="nt"&gt;--workflow&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;package-from-docker.yml &lt;span class="nt"&gt;--status&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;success &lt;span class="nt"&gt;--limit&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1 &lt;span class="nt"&gt;--json&lt;/span&gt; databaseId,createdAt,conclusion
&lt;span class="o"&gt;[&lt;/span&gt;
  &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"conclusion"&lt;/span&gt;: &lt;span class="s2"&gt;"success"&lt;/span&gt;,
    &lt;span class="s2"&gt;"createdAt"&lt;/span&gt;: &lt;span class="s2"&gt;"2025-11-20T14:23:45Z"&lt;/span&gt;,
    &lt;span class="s2"&gt;"databaseId"&lt;/span&gt;: 11234567890
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That &lt;code&gt;databaseId&lt;/code&gt; is what we use to download artifacts and correlate workflow runs. The &lt;code&gt;createdAt&lt;/code&gt; timestamp is critical for determining if two workflows were triggered by the same Docker build.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Workflow 1: Updates APT repository&lt;/span&gt;
git fetch origin gh-pages
git checkout gh-pages
&lt;span class="c"&gt;# ... make changes to dists/ directory ...&lt;/span&gt;
git push origin gh-pages  &lt;span class="c"&gt;# ✓ Success&lt;/span&gt;

&lt;span class="c"&gt;# Workflow 2 (running simultaneously): Updates RPM repository&lt;/span&gt;
git fetch origin gh-pages
git checkout gh-pages      &lt;span class="c"&gt;# Gets the OLD commit&lt;/span&gt;
&lt;span class="c"&gt;# ... make changes to rpm/ directory ...&lt;/span&gt;
git push origin gh-pages  &lt;span class="c"&gt;# ✗ REJECTED: non-fast-forward&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is a classic race condition. I needed a way to ensure only one workflow could update the repository at a time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Personal note&lt;/strong&gt;: I've been burned by concurrent git operations before—once spent three hours debugging a corrupt repository before realizing two CI jobs were pushing simultaneously. Since then, I've been paranoid about race conditions in automation. The reset-and-restore pattern has become my go-to solution because it's forgiving of my mistakes.&lt;/p&gt;

&lt;h4&gt;
  
  
  GitHub Actions Concurrency Control
&lt;/h4&gt;

&lt;p&gt;GitHub Actions has a &lt;code&gt;concurrency&lt;/code&gt; feature that can prevent multiple workflow runs from executing simultaneously. I added this to the release creation workflow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;concurrency&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;group&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;release-creation&lt;/span&gt;
  &lt;span class="na"&gt;cancel-in-progress&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;group&lt;/code&gt; key defines what workflows share the concurrency limit. The &lt;code&gt;cancel-in-progress: false&lt;/code&gt; setting ensures we queue workflows instead of canceling them - important because we don't want to lose work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why "release-creation" and not something more specific?&lt;/strong&gt; Here's a naming pattern I learned the hard way: concurrency group names should be &lt;em&gt;semantic&lt;/em&gt; (what they protect), not &lt;em&gt;technical&lt;/em&gt; (which workflows use them). I initially tried &lt;code&gt;release-v${{ github.ref }}&lt;/code&gt; thinking "one release per tag," but that didn't prevent the underlying repository conflicts. The name &lt;code&gt;release-creation&lt;/code&gt; clearly signals "only one release creation process at a time"—which is exactly the protection we need.&lt;/p&gt;

&lt;p&gt;However, this alone wasn't enough. The concurrency control works at the workflow level, but I had multiple &lt;em&gt;different&lt;/em&gt; workflows (APT update, RPM update, release creation) all modifying the same branch.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Visualizing the workflow dependency chain:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Docker Build (3 architectures)
         |
         | (workflow_run trigger - parallel!)
         |
    +----+----+
    |         |
    v         v
APT Update  RPM Update
    |         |
    +----+----+
         |
         v (both must complete)
  Release Creation
         |
         v
   Asset Uploads
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The challenge here is that APT Update and RPM Update both need to modify &lt;code&gt;gh-pages&lt;/code&gt; simultaneously, while Release Creation needs to wait for &lt;em&gt;both&lt;/em&gt; to complete. This dependency graph is what caused the original collisions.&lt;/p&gt;

&lt;h4&gt;
  
  
  The Reset-and-Restore Pattern
&lt;/h4&gt;

&lt;p&gt;I needed a strategy for handling inevitable conflicts. My first attempt was the traditional merge approach:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# DON'T DO THIS - it doesn't work well in automation&lt;/span&gt;
git fetch origin gh-pages
git merge origin/gh-pages  &lt;span class="c"&gt;# Conflict when both modify same files&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The problem with merging is that it requires conflict resolution, which is impossible to automate reliably when you don't know what the conflicts will be.&lt;/p&gt;

&lt;p&gt;Instead, I implemented a &lt;strong&gt;reset-and-restore pattern&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Save our changes to temporary directory&lt;/span&gt;
&lt;span class="nv"&gt;TEMP_DIR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;mktemp&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nb"&gt;cp&lt;/span&gt; &lt;span class="nt"&gt;-a&lt;/span&gt; downloads/rpm-&lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$TEMP_DIR&lt;/span&gt;&lt;span class="s2"&gt;/"&lt;/span&gt; 2&amp;gt;/dev/null &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;true
cp&lt;/span&gt; &lt;span class="nt"&gt;-a&lt;/span&gt; rpm/ &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$TEMP_DIR&lt;/span&gt;&lt;span class="s2"&gt;/"&lt;/span&gt; 2&amp;gt;/dev/null &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;true
cp&lt;/span&gt; &lt;span class="nt"&gt;-a&lt;/span&gt; index.html &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$TEMP_DIR&lt;/span&gt;&lt;span class="s2"&gt;/"&lt;/span&gt; 2&amp;gt;/dev/null &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;true&lt;/span&gt;

&lt;span class="c"&gt;# Fetch and reset to latest gh-pages&lt;/span&gt;
git fetch origin gh-pages
git reset &lt;span class="nt"&gt;--hard&lt;/span&gt; origin/gh-pages

&lt;span class="c"&gt;# Restore our changes on top&lt;/span&gt;
&lt;span class="nb"&gt;cp&lt;/span&gt; &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$TEMP_DIR&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;/rpm-&lt;span class="k"&gt;*&lt;/span&gt; downloads/ 2&amp;gt;/dev/null &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;true
cp&lt;/span&gt; &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$TEMP_DIR&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;/rpm &lt;span class="nb"&gt;.&lt;/span&gt; 2&amp;gt;/dev/null &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;true
cp&lt;/span&gt; &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$TEMP_DIR&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;/index.html &lt;span class="nb"&gt;.&lt;/span&gt; 2&amp;gt;/dev/null &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;true
rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$TEMP_DIR&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="c"&gt;# Re-commit and retry push&lt;/span&gt;
git add &lt;span class="nt"&gt;-A&lt;/span&gt;
git commit &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"update RPM repository with latest packages"&lt;/span&gt;
git push origin gh-pages
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This pattern works because:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;It's non-destructive&lt;/strong&gt;: We save our changes before resetting&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;It's additive&lt;/strong&gt;: We overlay our changes on top of the latest state&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;It avoids conflicts&lt;/strong&gt;: By using &lt;code&gt;cp -a&lt;/code&gt; (archive copy, preserving attributes), we replace entire directories wholesale rather than merging file by file&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;It's idempotent&lt;/strong&gt;: Running it multiple times produces the same result&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I wrapped this in a retry loop with exponential backoff:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;MAX_RETRIES&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;3
&lt;span class="nv"&gt;RETRY_COUNT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0
&lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;$RETRY_COUNT&lt;/span&gt; &lt;span class="nt"&gt;-lt&lt;/span&gt; &lt;span class="nv"&gt;$MAX_RETRIES&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
  if &lt;/span&gt;git push origin gh-pages&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Successfully pushed to gh-pages"&lt;/span&gt;
    &lt;span class="nb"&gt;break
  &lt;/span&gt;&lt;span class="k"&gt;else
    &lt;/span&gt;&lt;span class="nv"&gt;RETRY_COUNT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;$((&lt;/span&gt;RETRY_COUNT &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="k"&gt;))&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;$RETRY_COUNT&lt;/span&gt; &lt;span class="nt"&gt;-lt&lt;/span&gt; &lt;span class="nv"&gt;$MAX_RETRIES&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
      &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Push failed, retrying (&lt;/span&gt;&lt;span class="nv"&gt;$RETRY_COUNT&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="nv"&gt;$MAX_RETRIES&lt;/span&gt;&lt;span class="s2"&gt;)..."&lt;/span&gt;
      &lt;span class="c"&gt;# [reset-and-restore pattern here]&lt;/span&gt;
      &lt;span class="nb"&gt;sleep &lt;/span&gt;2
    &lt;span class="k"&gt;else
      &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Failed to push after &lt;/span&gt;&lt;span class="nv"&gt;$MAX_RETRIES&lt;/span&gt;&lt;span class="s2"&gt; attempts"&lt;/span&gt;
      &lt;span class="nb"&gt;exit &lt;/span&gt;1
    &lt;span class="k"&gt;fi
  fi
done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Why Not Use Git Locks?
&lt;/h4&gt;

&lt;p&gt;Some might ask: "Why not use git's built-in locking?" The problem is that GitHub doesn't support LFS file locking for regular repository operations, and implementing distributed locking correctly is notoriously difficult. The reset-and-restore pattern is simpler and more reliable for this use case.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Comparing concurrency strategies for repository updates:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Strategy&lt;/th&gt;
&lt;th&gt;Pros&lt;/th&gt;
&lt;th&gt;Cons&lt;/th&gt;
&lt;th&gt;Best For&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Mutex/Lock File&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;True serialization&lt;/td&gt;
&lt;td&gt;Requires shared storage, complex cleanup on failure, race conditions when creating the lock&lt;/td&gt;
&lt;td&gt;Single-server deployments&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Reset-and-Restore&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;No coordination needed, handles conflicts gracefully&lt;/td&gt;
&lt;td&gt;Potential for lost work if changes truly conflict&lt;/td&gt;
&lt;td&gt;Additive operations (different directories)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Queue-Based&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Ordered processing, no conflicts&lt;/td&gt;
&lt;td&gt;Requires external queue service (Redis, RabbitMQ), added complexity&lt;/td&gt;
&lt;td&gt;High-conflict scenarios with many writers&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Advisory Locks&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Lightweight, built into git&lt;/td&gt;
&lt;td&gt;Not supported by GitHub, requires custom implementation&lt;/td&gt;
&lt;td&gt;Self-hosted git servers only&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;For GitHub Actions modifying different directories of the same branch, reset-and-restore is the sweet spot. We don't need a heavyweight queue system because our changes are inherently non-conflicting (APT touches &lt;code&gt;dists/&lt;/code&gt;, RPM touches &lt;code&gt;rpm/&lt;/code&gt;). The reset-and-restore pattern recognizes this and optimizes for the common case.&lt;/p&gt;
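&lt;p&gt;To make the pattern concrete, here's a minimal sketch of reset-and-restore as a shell function. This is my illustration, not the workflow's exact code; the directory and branch names are assumptions:&lt;/p&gt;

```shell
#!/bin/sh
# Sketch of the reset-and-restore pattern (illustrative, not the exact
# workflow code). Save the directory we own, adopt the remote tip, replay
# our changes on top, then commit and push.
reset_and_restore() {
  dir=$1
  branch=$2
  staging=$(mktemp -d)
  cp -r "$dir" "$staging/"                    # save our additive changes
  git fetch -q origin "$branch"               # learn about concurrent pushes
  git reset -q --hard "origin/$branch"        # discard local state, take remote tip
  rm -rf "$dir"
  cp -r "$staging/$(basename "$dir")" "$dir"  # replay our changes on top
  git add "$dir"
  git commit -q -m "update $dir with latest packages"
  git push -q origin "HEAD:$branch"
}
```

&lt;p&gt;Because each workflow only touches its own directory, replaying on top of whatever the other workflow pushed is safe, and no coordination service is needed.&lt;/p&gt;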

&lt;h4&gt;
  
  
  Troubleshooting Tip #1: Debugging Workflow Synchronization Issues
&lt;/h4&gt;

&lt;p&gt;When workflows aren't synchronizing properly, here's how to investigate:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Get the last 5 workflow runs with timestamps&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;gh run list &lt;span class="nt"&gt;--workflow&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;package-from-docker.yml &lt;span class="nt"&gt;--limit&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;5 &lt;span class="nt"&gt;--json&lt;/span&gt; databaseId,createdAt,conclusion

&lt;span class="c"&gt;# Check if two specific runs are within your sync window&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;gh api repos/OWNER/REPO/actions/runs/RUN_ID_1 &lt;span class="nt"&gt;--jq&lt;/span&gt; &lt;span class="s1"&gt;'.created_at'&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;gh api repos/OWNER/REPO/actions/runs/RUN_ID_2 &lt;span class="nt"&gt;--jq&lt;/span&gt; &lt;span class="s1"&gt;'.created_at'&lt;/span&gt;

&lt;span class="c"&gt;# Calculate time difference (Unix timestamps)&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s2"&gt;"2025-11-20T14:23:45Z"&lt;/span&gt; +%s
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s2"&gt;"2025-11-20T14:24:30Z"&lt;/span&gt; +%s
&lt;span class="c"&gt;# Subtract to get difference in seconds&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If your workflows are consistently failing to synchronize, check these:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Sync window too narrow&lt;/strong&gt;: 5 minutes (300s) works for most cases, but slow builds might need more&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Workflow dispatch triggering wrong runs&lt;/strong&gt;: Manual triggers should fetch &lt;em&gt;latest successful&lt;/em&gt;, not just &lt;em&gt;latest&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Time zone issues&lt;/strong&gt;: Always use UTC for timestamp comparisons—local time will break everything&lt;/li&gt;
&lt;/ol&gt;
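&lt;p&gt;The timestamp arithmetic above can be wrapped in a small helper. This is a sketch (the function name and default window are my own), using GNU &lt;code&gt;date&lt;/code&gt; with forced UTC so local time zones can't skew the comparison:&lt;/p&gt;

```shell
# Return success if two ISO-8601 timestamps fall within the sync window
# (default 300 seconds). Illustrative helper, not the workflow's exact code.
within_window() {
  t1=$(date -u -d "$1" +%s)   # -u forces UTC interpretation
  t2=$(date -u -d "$2" +%s)
  diff=$((t2 - t1))
  [ "$diff" -lt 0 ] && diff=$((0 - diff))
  [ "$diff" -le "${3:-300}" ]
}
```

&lt;p&gt;For example, &lt;code&gt;within_window "2025-11-20T14:23:45Z" "2025-11-20T14:24:30Z"&lt;/code&gt; succeeds (the runs are 45 seconds apart), while runs from different builds fall outside the window.&lt;/p&gt;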

&lt;h3&gt;
  
  
  Issue 2: Asset Upload Retry Logic
&lt;/h3&gt;

&lt;h4&gt;
  
  
  GitHub Release Upload Failures
&lt;/h4&gt;

&lt;p&gt;Even after fixing the concurrency issues, I occasionally saw release asset uploads fail:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;Uploading openscad_2025.11.19-1_amd64.deb...
Error: Resource not accessible by integration
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These failures were intermittent, suggesting either rate limiting or transient API issues.&lt;/p&gt;

&lt;h4&gt;
  
  
  The Solution: Retry with --clobber
&lt;/h4&gt;

&lt;p&gt;I added retry logic with GitHub CLI's &lt;code&gt;--clobber&lt;/code&gt; flag:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Upload .deb packages with retry&lt;/span&gt;
&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="nb"&gt;ls &lt;/span&gt;release-assets/&lt;span class="k"&gt;*&lt;/span&gt;.deb 1&amp;gt;/dev/null 2&amp;gt;&amp;amp;1&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Uploading Debian packages..."&lt;/span&gt;
  &lt;span class="k"&gt;for &lt;/span&gt;deb &lt;span class="k"&gt;in &lt;/span&gt;release-assets/&lt;span class="k"&gt;*&lt;/span&gt;.deb&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
    &lt;/span&gt;&lt;span class="nv"&gt;filename&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;basename&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$deb&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
    &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Uploading &lt;/span&gt;&lt;span class="nv"&gt;$filename&lt;/span&gt;&lt;span class="s2"&gt;..."&lt;/span&gt;
    &lt;span class="c"&gt;# Use --clobber to replace existing assets with same name&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; gh release upload &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ steps.version.outputs.tag &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;}"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$deb&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;--clobber&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
      &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Warning: Failed to upload &lt;/span&gt;&lt;span class="nv"&gt;$filename&lt;/span&gt;&lt;span class="s2"&gt;, retrying..."&lt;/span&gt;
      &lt;span class="nb"&gt;sleep &lt;/span&gt;2
      gh release upload &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ steps.version.outputs.tag &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;}"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$deb&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;--clobber&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Error: Failed to upload &lt;/span&gt;&lt;span class="nv"&gt;$filename&lt;/span&gt;&lt;span class="s2"&gt; after retry"&lt;/span&gt;
    &lt;span class="k"&gt;fi
  done
fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;--clobber&lt;/code&gt; flag tells GitHub CLI to replace existing assets with the same name. This matters for idempotency: if an upload partially succeeds or we re-run the workflow, we don't want to fail just because an asset already exists.&lt;/p&gt;

&lt;p&gt;The retry logic handles transient failures gracefully:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Try to upload&lt;/li&gt;
&lt;li&gt;If it fails, wait 2 seconds&lt;/li&gt;
&lt;li&gt;Try again with &lt;code&gt;--clobber&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;If it still fails, log an error but continue&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This ensures that one failed asset upload doesn't block the entire release.&lt;/p&gt;
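&lt;p&gt;The same four steps generalize to any flaky command. Here's the pattern as a reusable function (my sketch; the real workflow inlines the two &lt;code&gt;gh release upload&lt;/code&gt; calls directly):&lt;/p&gt;

```shell
# Run a command; on failure, wait briefly and try once more. Log but don't
# abort if the retry also fails, so one bad asset can't block the release.
retry_once() {
  if ! "$@"; then
    echo "Warning: '$*' failed, retrying..." >&2
    sleep 2
    "$@" || echo "Error: '$*' failed after retry" >&2
  fi
}
```

&lt;p&gt;Note that the function deliberately returns success even after a failed retry; that's the "log an error but continue" behavior described above.&lt;/p&gt;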

&lt;p&gt;&lt;strong&gt;Troubleshooting Tip #5&lt;/strong&gt;: Debugging GitHub release asset upload failures:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Check if the release exists and can accept uploads&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;gh release view v2025.11.20 &lt;span class="nt"&gt;--json&lt;/span&gt; assets,uploadUrl

&lt;span class="c"&gt;# Manually upload an asset to test permissions&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;gh release upload v2025.11.20 test-package.deb &lt;span class="nt"&gt;--clobber&lt;/span&gt;

&lt;span class="c"&gt;# If upload fails with "Resource not accessible", check:&lt;/span&gt;
&lt;span class="c"&gt;# 1. Workflow permissions in repository settings&lt;/span&gt;
&lt;span class="c"&gt;# 2. GITHUB_TOKEN has write access to releases&lt;/span&gt;
&lt;span class="c"&gt;# 3. Release isn't in draft mode (can't upload to drafts via API)&lt;/span&gt;

&lt;span class="c"&gt;# View recent upload attempts in workflow logs&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;gh run view &lt;span class="nt"&gt;--log&lt;/span&gt; | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-A&lt;/span&gt; 5 &lt;span class="s2"&gt;"Uploading.*&lt;/span&gt;&lt;span class="se"&gt;\.&lt;/span&gt;&lt;span class="s2"&gt;deb"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Common causes of upload failures:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Rate limiting&lt;/strong&gt;: GitHub API has rate limits; adding &lt;code&gt;sleep 2&lt;/code&gt; between uploads helps&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Permission issues&lt;/strong&gt;: Workflow needs &lt;code&gt;contents: write&lt;/code&gt; permission&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;File already exists&lt;/strong&gt;: Use &lt;code&gt;--clobber&lt;/code&gt; to replace, or delete and re-upload&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Network timeouts&lt;/strong&gt;: Large files (&amp;gt;100MB) may need longer timeouts or chunked uploads&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Solution Part 2: Package Manager Deep Dives
&lt;/h2&gt;

&lt;p&gt;With concurrent workflow coordination solved, the next set of challenges came from the package managers themselves, each with its own semantic quirks.&lt;/p&gt;

&lt;h3&gt;
  
  
  Issue 3: The RPM Packaging Mystery
&lt;/h3&gt;

&lt;h4&gt;
  
  
  The 185+ Unpackaged Files Error
&lt;/h4&gt;

&lt;p&gt;After solving the concurrency issue, I ran into a different problem. The RPM build would complete successfully, but &lt;code&gt;rpmbuild&lt;/code&gt; would fail at the packaging stage:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;Processing files: openscad-2025.11.19-1.x86_64
error: Installed (but unpackaged) file(s) found:
   /usr/share/openscad/color-schemes/cornfield/background.json
   /usr/share/openscad/color-schemes/cornfield/colors.json
   /usr/share/openscad/color-schemes/cornfield/gui.json
   ... (185+ more files) ...

RPM build errors:
    Installed (but unpackaged) file(s) found
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This was baffling. I was using the &lt;code&gt;%{_datadir}/openscad/&lt;/code&gt; directive in my spec file, which should have included everything in that directory.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Here's what happened in the real failure scenario&lt;/strong&gt;: When workflows were out of sync, the APT workflow would complete and push its changes to &lt;code&gt;gh-pages&lt;/code&gt; before the RPM workflow finished building packages. Then when the RPM workflow tried to push, it got rejected. But here's the nasty part—it wouldn't fail loudly. The workflow would show "success" because the RPM build step completed, but the push to &lt;code&gt;gh-pages&lt;/code&gt; failed silently. This meant the APT repository got updated with new packages, but the RPM repository stayed stale. Users on Fedora/RHEL would see version mismatches between what the release said was available and what &lt;code&gt;dnf&lt;/code&gt; could actually find. I discovered this only after manually inspecting &lt;code&gt;gh run list&lt;/code&gt; output and noticing the push failures buried in the logs.&lt;/p&gt;

&lt;h4&gt;
  
  
  Debugging Mini-Story: Discovering the 5-Minute Window
&lt;/h4&gt;

&lt;p&gt;Here's how I figured out the synchronization window:&lt;/p&gt;

&lt;p&gt;I noticed that sometimes both workflows would create a release together, and sometimes they wouldn't. The pattern wasn't immediately obvious. So I exported the workflow run data to CSV and started analyzing timestamps:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;gh run list &lt;span class="nt"&gt;--workflow&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;package-from-docker.yml &lt;span class="nt"&gt;--limit&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;50 &lt;span class="nt"&gt;--json&lt;/span&gt; databaseId,createdAt,conclusion &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; deb_runs.json
&lt;span class="nv"&gt;$ &lt;/span&gt;gh run list &lt;span class="nt"&gt;--workflow&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;package-rpm-from-docker.yml &lt;span class="nt"&gt;--limit&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;50 &lt;span class="nt"&gt;--json&lt;/span&gt; databaseId,createdAt,conclusion &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; rpm_runs.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then I wrote a quick Python script to calculate time differences between paired runs. What I found:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Successful releases&lt;/strong&gt;: Both workflows started within 30-90 seconds of each other&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Failed releases&lt;/strong&gt;: One workflow started 10+ minutes after the other (different Docker builds)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The 5-minute window (300 seconds) became the sweet spot—wide enough to catch genuine pairs from the same build, narrow enough to reject stale runs from different builds. Too narrow (60s) and slow builds would miss pairing. Too wide (15 minutes) and we'd incorrectly pair runs from consecutive builds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mistakes to Avoid #1&lt;/strong&gt;: Don't blindly trust workflow "success" status. Check the actual job steps—a workflow can succeed overall even if critical steps like &lt;code&gt;git push&lt;/code&gt; fail non-fatally.&lt;/p&gt;

&lt;h4&gt;
  
  
  Understanding RPM %files Semantics
&lt;/h4&gt;

&lt;p&gt;The problem was subtle but important. In RPM spec files, the &lt;code&gt;%files&lt;/code&gt; section has three ways to specify directories:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;%dir directive&lt;/strong&gt;: Packages the directory itself but not its contents&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Trailing slash&lt;/strong&gt;: Packages everything recursively under that directory&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Glob patterns&lt;/strong&gt;: Packages matching files/directories using wildcards&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;RPM file categorization is more nuanced than it appears&lt;/strong&gt;. The spec file doesn't just list what to package—it defines &lt;em&gt;ownership semantics&lt;/em&gt;. When you use &lt;code&gt;%dir /usr/share/openscad/&lt;/code&gt;, you're saying "I own this directory entry in the filesystem, but I'm not claiming ownership of its contents." This is crucial for shared directories where multiple packages might install files. For example, &lt;code&gt;/usr/share/icons/hicolor/&lt;/code&gt; is a shared directory owned by the &lt;code&gt;hicolor-icon-theme&lt;/code&gt; package, but dozens of applications install their icons there. Each app uses patterns like &lt;code&gt;%{_datadir}/icons/hicolor/*/apps/myapp.*&lt;/code&gt; to claim only their own icons, not the directory itself.&lt;/p&gt;

&lt;p&gt;The trailing slash syntax (&lt;code&gt;/usr/share/openscad/&lt;/code&gt;) means "I own this directory AND recursively everything in it." It's RPM's way of saying "this is my territory, all of it." The glob syntax (&lt;code&gt;/usr/share/openscad/*&lt;/code&gt;) is similar but more explicit—it expands at build time to include all items matching the pattern.&lt;/p&gt;

&lt;p&gt;Here's what I had originally:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;%files
%{_bindir}/openscad
%{_defaultdocdir}/%{name}/COPYING
# ... other files ...
%dir %{_datadir}/openscad/   # ← This was the problem!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;%dir&lt;/code&gt; directive told RPM: "Package this directory entry, but not the files inside it." This is useful when you want to own the directory structure but let other packages populate it. But in my case, I wanted to package all the files.&lt;/p&gt;

&lt;p&gt;The fix was simple but not obvious:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;%files
%{_bindir}/openscad
%{_defaultdocdir}/%{name}/COPYING
# ... other files ...
%{_datadir}/openscad/   # Trailing slash = recursive inclusion
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By removing &lt;code&gt;%dir&lt;/code&gt; and keeping the trailing slash, RPM now understood: "Package this directory and everything under it recursively."&lt;/p&gt;

&lt;h4&gt;
  
  
  Why This Matters
&lt;/h4&gt;

&lt;p&gt;This distinction exists because RPM has sophisticated ownership semantics. Multiple packages can share directories, and RPM needs to know:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Who owns the directory itself?&lt;/li&gt;
&lt;li&gt;Who owns the files inside?&lt;/li&gt;
&lt;li&gt;What happens when a package is removed?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Using &lt;code&gt;%dir&lt;/code&gt; signals: "I own this directory structure, but other packages might put files in it." Using just the path with a trailing slash signals: "I own this directory AND everything in it."&lt;/p&gt;

&lt;p&gt;For a monolithic package like OpenSCAD where we control all the files, the recursive approach is correct. For shared directories like &lt;code&gt;/usr/share/icons/hicolor/&lt;/code&gt;, using patterns like &lt;code&gt;%{_datadir}/icons/hicolor/*/apps/openscad.*&lt;/code&gt; is more appropriate because other packages also install icons there.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;More glob pattern examples for different scenarios:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Scenario 1: Include all files but not hidden files
%{_datadir}/myapp/*

# Scenario 2: Include specific file types only
%{_datadir}/myapp/*.json
%{_datadir}/myapp/*.xml

# Scenario 3: Include subdirectories at specific depths
%{_datadir}/icons/hicolor/*/apps/myapp.png
%{_datadir}/icons/hicolor/*/mimetypes/myapp-*.png

# Scenario 4: Exclude certain patterns (requires %exclude)
%{_datadir}/myapp/
%exclude %{_datadir}/myapp/test/
%exclude %{_datadir}/myapp/*.debug

# Scenario 5: Shared directories (don't claim ownership)
%{_datadir}/icons/hicolor/48x48/apps/myapp.png
%{_datadir}/icons/hicolor/scalable/apps/myapp.svg
# Note: No %dir for hicolor directories - icon theme package owns them
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The key insight here? &lt;strong&gt;Be specific about what you own&lt;/strong&gt;. If you're installing into a shared space, use precise patterns. If you own the entire directory tree, use the trailing slash for simplicity.&lt;/p&gt;

&lt;h4&gt;
  
  
  Debugging Mini-Story: The RPM %files Rabbit Hole
&lt;/h4&gt;

&lt;p&gt;I'll be honest—I stared at this error for a good 20 minutes before I understood what was happening:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;error: Installed (but unpackaged) file(s) found:
   /usr/share/openscad/color-schemes/cornfield/background.json
   (... 184 more files ...)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;My first thought? "But I specified &lt;code&gt;%{_datadir}/openscad/&lt;/code&gt;! That should include everything!" So I added more explicit patterns. Then I added glob patterns. Nothing worked.&lt;/p&gt;

&lt;p&gt;Finally, I did what I should have done first: read the RPM documentation carefully. That's when I discovered the &lt;code&gt;%dir&lt;/code&gt; directive doesn't mean "directory and contents"—it means "just the directory entry." I'd been telling RPM: "Hey, this directory exists, but I'm not claiming the files inside it."&lt;/p&gt;

&lt;p&gt;The fix was embarrassingly simple: remove &lt;code&gt;%dir&lt;/code&gt;. But the lesson stuck with me: RPM's packaging model is about &lt;em&gt;ownership&lt;/em&gt;, not just &lt;em&gt;inclusion&lt;/em&gt;. Understanding that mental model makes everything else click into place.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Troubleshooting Tip #2&lt;/strong&gt;: Testing RPM spec files locally before committing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Build the RPM locally to catch %files errors early&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;rpmbuild &lt;span class="nt"&gt;-bb&lt;/span&gt; openscad.spec

&lt;span class="c"&gt;# If you get "unpackaged files" errors, use this to see what's installed:&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;rpm &lt;span class="nt"&gt;-qlp&lt;/span&gt; /path/to/built.rpm | &lt;span class="nb"&gt;grep &lt;/span&gt;openscad

&lt;span class="c"&gt;# Compare against your %files section to find what's missing&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;rpmdev-extract /path/to/built.rpm
&lt;span class="nv"&gt;$ &lt;/span&gt;find usr/share/openscad/ &lt;span class="nt"&gt;-type&lt;/span&gt; f | &lt;span class="nb"&gt;wc&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt;  &lt;span class="c"&gt;# Should match your expectations&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Mistakes to Avoid #2&lt;/strong&gt;: Don't assume directory patterns work the same across package managers. Debian's &lt;code&gt;*.install&lt;/code&gt; files, RPM's &lt;code&gt;%files&lt;/code&gt; sections, and Arch's PKGBUILD have completely different semantics for the same concept.&lt;/p&gt;

&lt;h3&gt;
  
  
  Issue 4: Debian Dependency Hell
&lt;/h3&gt;

&lt;h4&gt;
  
  
  The Bookworm-to-Trixie Transition
&lt;/h4&gt;

&lt;p&gt;The Debian packages built successfully, but installation failed on Debian Trixie (testing/unstable):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install&lt;/span&gt; ./openscad_2025.11.19-1_amd64.deb
&lt;span class="go"&gt;Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following packages have unmet dependencies:
 openscad : Depends: libmimalloc2.1 but it is not installable
            Depends: libzip4 but it is not installable
E: Unable to correct problems, you have held broken packages.
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The problem was clear: I was declaring dependencies for Debian Bookworm (stable), but building and running on Debian Trixie (testing), which has newer library versions.&lt;/p&gt;

&lt;h4&gt;
  
  
  Understanding Debian Package Versioning
&lt;/h4&gt;

&lt;p&gt;Debian library packages include their SONAME (Shared Object Name) in the package name. This allows multiple versions to coexist:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;libmimalloc2.1&lt;/code&gt; = libmimalloc with SONAME 2.1 (Bookworm)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;libmimalloc3&lt;/code&gt; = libmimalloc with SONAME 3 (Trixie)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;libzip4&lt;/code&gt; = libzip with SONAME 4 (Bookworm)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;libzip5&lt;/code&gt; = libzip with SONAME 5 (Trixie)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When a library's API/ABI changes significantly, the SONAME increments, and Debian creates a new package. This prevents incompatible upgrades from breaking existing software.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to discover which SONAME version your binary actually needs:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;ldd&lt;/code&gt; command shows what shared libraries a binary is linked against, including their SONAME:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ldd /usr/bin/openscad | &lt;span class="nb"&gt;grep &lt;/span&gt;libmimalloc
        libmimalloc.so.2 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; /usr/lib/x86_64-linux-gnu/libmimalloc.so.2.1 &lt;span class="o"&gt;(&lt;/span&gt;0x00007f8b2c400000&lt;span class="o"&gt;)&lt;/span&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;ldd /usr/bin/openscad | &lt;span class="nb"&gt;grep &lt;/span&gt;libzip
        libzip.so.5 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; /usr/lib/x86_64-linux-gnu/libzip.so.5.5 &lt;span class="o"&gt;(&lt;/span&gt;0x00007f8b2c350000&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;See that? &lt;code&gt;libmimalloc.so.2&lt;/code&gt; is the SONAME (major version 2), and the actual file is &lt;code&gt;libmimalloc.so.2.1&lt;/code&gt; (minor version 2.1). The Debian package name follows the SONAME.&lt;/p&gt;

&lt;p&gt;To find which Debian package provides that library:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;dpkg &lt;span class="nt"&gt;-S&lt;/span&gt; /usr/lib/x86_64-linux-gnu/libmimalloc.so.2.1
libmimalloc2.1:amd64: /usr/lib/x86_64-linux-gnu/libmimalloc.so.2.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or on a system where the binary isn't installed yet, use &lt;code&gt;objdump&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;objdump &lt;span class="nt"&gt;-p&lt;/span&gt; openscad | &lt;span class="nb"&gt;grep &lt;/span&gt;NEEDED | &lt;span class="nb"&gt;grep &lt;/span&gt;libmimalloc
  NEEDED               libmimalloc.so.2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This shows exactly what SONAME the binary expects. Then you can search Debian packages:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;apt-cache search libmimalloc
libmimalloc2.1 - Compact general purpose allocator with excellent performance
libmimalloc3 - Compact general purpose allocator with excellent performance
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The lesson? &lt;strong&gt;Don't guess dependencies—inspect the binary&lt;/strong&gt;.&lt;/p&gt;
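&lt;p&gt;Inspection is easy to script. A small sketch (the function name is mine) that lists every SONAME an ELF binary declares, which is exactly the list your &lt;code&gt;Depends:&lt;/code&gt; line has to satisfy:&lt;/p&gt;

```shell
# Print the SONAMEs (NEEDED entries) recorded in an ELF binary's dynamic
# section. Each one must be satisfied by some package on the target system.
needed_sonames() {
  objdump -p "$1" | awk '$1 == "NEEDED" { print $2 }'
}
```

&lt;p&gt;Feeding each result to &lt;code&gt;apt-file search&lt;/code&gt; on the target release gives you the dependency list without guesswork.&lt;/p&gt;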

&lt;h4&gt;
  
  
  The Fix: Update Dependency Declarations
&lt;/h4&gt;

&lt;p&gt;I needed to update the control file (or in my case, the inline control generation in the workflow) to use Trixie's library versions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight diff"&gt;&lt;code&gt;&lt;span class="gd"&gt;-Depends: libmimalloc2.1, libzip4, ...
&lt;/span&gt;&lt;span class="gi"&gt;+Depends: libmimalloc3, libzip5, ...
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But there's a more sophisticated approach: &lt;strong&gt;dependency alternatives&lt;/strong&gt;. Since I'm extracting from Docker images that link against specific library versions, I could declare both old and new versions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Depends: libmimalloc3 | libmimalloc2.1, libzip5 | libzip4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This syntax means: "Prefer libmimalloc3, but accept libmimalloc2.1 if that's what's available." However, since I'm building on Trixie and the binary is linked against Trixie's libraries, this would actually fail: the binary &lt;em&gt;requires&lt;/em&gt; the newer versions.&lt;/p&gt;

&lt;p&gt;The correct solution depends on your distribution target:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Target Bookworm&lt;/strong&gt;: Build on Bookworm, declare Bookworm dependencies&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Target Trixie&lt;/strong&gt;: Build on Trixie, declare Trixie dependencies&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Target both&lt;/strong&gt;: Build separate packages for each distribution&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In my case, I chose to target Trixie exclusively, so I updated the dependencies to match.&lt;/p&gt;

&lt;h4&gt;
  
  
  Lesson: Match Build Environment to Target Environment
&lt;/h4&gt;

&lt;p&gt;This highlights a fundamental packaging principle: &lt;strong&gt;your dependency declarations must match your build environment&lt;/strong&gt;. If you build on Debian Trixie, your binary will link against Trixie's libraries, and you must declare Trixie dependencies.&lt;/p&gt;

&lt;p&gt;Tools like &lt;code&gt;dpkg-shlibdeps&lt;/code&gt; can automatically detect library dependencies by examining the binary, but since I was building packages from pre-compiled Docker images, I had to manage dependencies manually.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Troubleshooting Tip #3&lt;/strong&gt;: Inspecting binary dependencies when package installation fails:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Extract the .deb without installing&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;ar x openscad_2025.11.19-1_amd64.deb
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;tar &lt;/span&gt;xf data.tar.xz
&lt;span class="nv"&gt;$ &lt;/span&gt;ldd usr/bin/openscad | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="s2"&gt;"not found"&lt;/span&gt;

&lt;span class="c"&gt;# If libraries are missing, find which package provides them on target system&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;ssh target-debian-system &lt;span class="s2"&gt;"apt-file search libmimalloc.so.2"&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;ssh target-debian-system &lt;span class="s2"&gt;"dpkg -S /usr/lib/x86_64-linux-gnu/libmimalloc.so.2.1"&lt;/span&gt;

&lt;span class="c"&gt;# This tells you the exact package name to add to dependencies&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Mistakes to Avoid #3&lt;/strong&gt;: Don't hardcode library versions without checking what's in your build environment. If you're building on Debian Trixie but declaring Debian Bookworm dependencies, your packages won't install anywhere.&lt;/p&gt;

&lt;h2&gt;
  
  
  Solution Part 3: Infrastructure Glue
&lt;/h2&gt;

&lt;p&gt;Beyond the major concurrency and packaging challenges, several smaller infrastructure issues needed attention to make the system production-ready.&lt;/p&gt;

&lt;h3&gt;
  
  
  Issue 5: YAML Multi-line Gotchas
&lt;/h3&gt;

&lt;h4&gt;
  
  
  The Silent Failure
&lt;/h4&gt;

&lt;p&gt;While testing the changes, I noticed commit messages weren't being formatted correctly. What should have been:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;update RPM repository with latest packages

Automated update from workflow run 123456
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Was appearing as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;update RPM repository with latest packages Automated update from workflow run 123456
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The problem was in the workflow YAML:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# WRONG - GitHub Actions doesn't preserve literal newlines in this syntax&lt;/span&gt;
&lt;span class="s"&gt;git commit -m "update RPM repository with latest packages&lt;/span&gt;

&lt;span class="s"&gt;Automated update from workflow run ${{ steps.run.outputs.run_id }}"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Understanding YAML String Handling
&lt;/h4&gt;

&lt;p&gt;YAML has multiple ways to represent strings, and they have different whitespace handling. Here's where it gets tricky—and why my commit messages were getting mangled.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The YAML parser does several things behind the scenes:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Plain style&lt;/strong&gt; (no quotes): Treats newlines as spaces, collapses consecutive whitespace&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Single/double quoted&lt;/strong&gt;: Escapes must be explicit (&lt;code&gt;\n&lt;/code&gt; for newline, &lt;code&gt;\\&lt;/code&gt; for backslash)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Literal block style&lt;/strong&gt; (&lt;code&gt;|&lt;/code&gt;): Preserves newlines and trailing spaces &lt;em&gt;exactly&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Folded block style&lt;/strong&gt; (&lt;code&gt;&amp;gt;&lt;/code&gt;): Converts single newlines to spaces, but preserves blank line paragraphs&lt;/li&gt;
&lt;/ol&gt;
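The four styles can be compared side by side. A minimal fragment (the key names are illustrative) with what each value parses to:

```yaml
plain: Line one
  Line two                     # "Line one Line two" (newline folded to a space)

quoted: "Line one\nLine two"   # "Line one\nLine two" (explicit escape needed)

literal: |                     # "Line one\nLine two\n" (newlines kept exactly)
  Line one
  Line two

folded: >                      # "Line one Line two\n" (single newline folded)
  Line one
  Line two
```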

&lt;p&gt;But here's the gotcha: &lt;strong&gt;in both plain and quoted flow styles, a single line break inside the string is folded into a space&lt;/strong&gt;. So if you have:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Line&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;one&lt;/span&gt;
&lt;span class="s"&gt;Line&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;two"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The YAML parser sees: &lt;code&gt;"Line one Line two"&lt;/code&gt; (single line, single space).&lt;/p&gt;

&lt;p&gt;Even worse, &lt;strong&gt;GitHub Actions adds its own layer of processing&lt;/strong&gt;. It substitutes &lt;code&gt;${{ }}&lt;/code&gt; expressions into the step's script before the shell runs it, which means expression output containing quotes or special characters can mangle your shell command (or, worse, inject into it).&lt;/p&gt;
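One widely used defense against that substitution problem (a sketch, not this project's actual workflow code) is to route the expression through an environment variable, so the shell only ever sees a properly quoted variable:

```yaml
- name: Commit repository update
  env:
    RUN_ID: ${{ github.run_id }}   # expanded by Actions, but only into the env block
  run: |
    git commit -m "update RPM repository with latest packages" \
               -m "Automated update from workflow run ${RUN_ID}"
```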

&lt;p&gt;For multi-line git commit messages, we need an approach that survives both YAML parsing &lt;em&gt;and&lt;/em&gt; GitHub Actions expression evaluation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# CORRECT - literal block style preserves newlines&lt;/span&gt;
&lt;span class="s"&gt;git commit -m "update RPM repository with latest packages" \&lt;/span&gt;
           &lt;span class="s"&gt;-m "Automated update from workflow run ${{ steps.run.outputs.run_id }}"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each &lt;code&gt;-m&lt;/code&gt; flag creates a separate paragraph in the commit message, so this approach scales cleanly to longer bodies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git commit &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"chore: cleanup old packages"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
           &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"Automated cleanup from workflow run &lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ github.run_id &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;}"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
           &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"- Removed .deb packages older than 7 days"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
           &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"- Removed .rpm packages older than 7 days"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
           &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"- Updated repository metadata"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates a properly formatted commit message with a subject line and body paragraphs.&lt;/p&gt;
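You can confirm the resulting message shape locally before trusting it in CI. A throwaway-repo sketch (paths and messages are illustrative):

```shell
set -eu
# Build a scratch repository so the commit is side-effect free
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email ci@example.com
git config user.name CI
echo data > packages.txt
git add packages.txt
# Subject line first, then one paragraph per additional -m flag
git commit -q -m "update RPM repository with latest packages" \
              -m "Automated update from workflow run 123456"
# %B prints the raw message: subject, blank line, body paragraph
git log -1 --format=%B
```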

&lt;p&gt;&lt;strong&gt;Troubleshooting Tip #4&lt;/strong&gt;: Validating YAML in GitHub Actions workflows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install yamllint locally&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;yamllint

&lt;span class="c"&gt;# Check your workflow file&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;yamllint .github/workflows/create-release.yml

&lt;span class="c"&gt;# Common issues to watch for:&lt;/span&gt;
&lt;span class="c"&gt;# - Trailing spaces (breaks literal blocks)&lt;/span&gt;
&lt;span class="c"&gt;# - Inconsistent indentation (breaks structure)&lt;/span&gt;
&lt;span class="c"&gt;# - Unquoted special characters (: { } [ ] , &amp;amp; * # ? | - &amp;lt; &amp;gt; = ! % @ \)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Commit message comparison&lt;/strong&gt; (before and after fixing YAML):&lt;/p&gt;

&lt;p&gt;Before (broken):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;commit 1234567
Author: GitHub Actions
Date:   2025-11-20

    update RPM repository with latest packages Automated update from workflow run 123456
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After (fixed):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;commit 1234567
Author: GitHub Actions
Date:   2025-11-20

    update RPM repository with latest packages

    Automated update from workflow run 123456

    Changes:
    - Updated repository metadata
    - Added packages for version 2025.11.20
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice the difference? The first version is a single-line message (hard to read in git logs). The second version has proper paragraphs, making it clear what changed and why.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mistakes to Avoid #4&lt;/strong&gt;: Don't rely on whitespace in YAML to create formatting. Use explicit syntax (literal blocks with &lt;code&gt;|&lt;/code&gt;, multiple &lt;code&gt;-m&lt;/code&gt; flags, or shell features like heredocs) to ensure your intent survives the YAML parser.&lt;/p&gt;

&lt;h3&gt;
  
  
  Issue 6: Documentation Debt
&lt;/h3&gt;

&lt;h4&gt;
  
  
  The README Update
&lt;/h4&gt;

&lt;p&gt;With all the technical issues fixed, I had one final problem: the documentation was outdated. The README still showed old installation instructions and didn't document the new APT/RPM repositories.&lt;/p&gt;

&lt;p&gt;I comprehensively updated the README to include:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Architecture support table&lt;/strong&gt;: Clear mapping between different architecture naming conventions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;APT repository instructions&lt;/strong&gt;: Complete setup including GPG key import&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RPM repository instructions&lt;/strong&gt;: Setup for Fedora/RHEL/Rocky/AlmaLinux&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub Releases section&lt;/strong&gt;: Manual package download instructions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Version format documentation&lt;/strong&gt;: Explaining the &lt;code&gt;YYYY.MM.DD.BUILD_NUMBER&lt;/code&gt; scheme&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Repository structure documentation&lt;/strong&gt;: Showing where packages are stored&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Distribution requirements&lt;/strong&gt;: Minimum Debian/Fedora versions&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here's an example of the new APT installation section:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;#### Debian/Ubuntu (APT)&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;br&gt;
bash&lt;/p&gt;
&lt;h1&gt;
  
  
  Import GPG key
&lt;/h1&gt;

&lt;p&gt;curl -fsSL &lt;a href="https://github.com/gounthar/docker-for-riscv64/releases/download/gpg-key/gpg-public-key.asc" rel="noopener noreferrer"&gt;https://github.com/gounthar/docker-for-riscv64/releases/download/gpg-key/gpg-public-key.asc&lt;/a&gt; | \&lt;br&gt;
  sudo gpg --dearmor -o /usr/share/keyrings/openscad-archive-keyring.gpg&lt;/p&gt;
&lt;h1&gt;
  
  
  Add repository
&lt;/h1&gt;

&lt;p&gt;echo "deb [signed-by=/usr/share/keyrings/openscad-archive-keyring.gpg] &lt;a href="https://gounthar.github.io/openscad" rel="noopener noreferrer"&gt;https://gounthar.github.io/openscad&lt;/a&gt; stable main" | \&lt;br&gt;
  sudo tee /etc/apt/sources.list.d/openscad.list&lt;/p&gt;
&lt;h1&gt;
  
  
  Update and install
&lt;/h1&gt;

&lt;p&gt;sudo apt-get update&lt;br&gt;
sudo apt-get install openscad&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
**Supported Distributions:**
- Debian Trixie (13) and newer
- Ubuntu 24.04 LTS and newer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This gives users a complete, copy-paste-ready installation experience with clear version requirements.&lt;/p&gt;

&lt;h2&gt;
  
  
  Testing the Complete System
&lt;/h2&gt;

&lt;p&gt;After all fixes were in place, I tested the complete pipeline:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Trigger a full build&lt;/span&gt;
git push origin multiplatform

&lt;span class="c"&gt;# This starts the cascade:&lt;/span&gt;
&lt;span class="c"&gt;# 1. Docker build workflow (3 architectures in parallel)&lt;/span&gt;
&lt;span class="c"&gt;# 2. Package extraction workflows (Debian + RPM)&lt;/span&gt;
&lt;span class="c"&gt;# 3. Repository update workflows (APT + RPM)&lt;/span&gt;
&lt;span class="c"&gt;# 4. Release creation workflow&lt;/span&gt;

&lt;span class="c"&gt;# Wait for completion, then test installation&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Testing on each architecture:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# AMD64 (x86_64)&lt;/span&gt;
curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://github.com/gounthar/docker-for-riscv64/releases/download/gpg-key/gpg-public-key.asc | &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nb"&gt;sudo &lt;/span&gt;gpg &lt;span class="nt"&gt;--dearmor&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; /usr/share/keyrings/openscad-archive-keyring.gpg
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"deb [signed-by=/usr/share/keyrings/openscad-archive-keyring.gpg] https://gounthar.github.io/openscad stable main"&lt;/span&gt; | &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/apt/sources.list.d/openscad.list
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install &lt;/span&gt;openscad &lt;span class="nt"&gt;-y&lt;/span&gt;
openscad &lt;span class="nt"&gt;--version&lt;/span&gt;

&lt;span class="c"&gt;# ARM64 (aarch64) - same process&lt;/span&gt;
&lt;span class="c"&gt;# RISC-V64 - same process&lt;/span&gt;

&lt;span class="c"&gt;# RPM testing&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;rpm &lt;span class="nt"&gt;--import&lt;/span&gt; https://gounthar.github.io/openscad/rpm/RPM-GPG-KEY
&lt;span class="nb"&gt;sudo &lt;/span&gt;curl &lt;span class="nt"&gt;-L&lt;/span&gt; https://gounthar.github.io/openscad/rpm/openscad.repo &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-o&lt;/span&gt; /etc/yum.repos.d/openscad.repo
&lt;span class="nb"&gt;sudo &lt;/span&gt;dnf &lt;span class="nb"&gt;install &lt;/span&gt;openscad &lt;span class="nt"&gt;-y&lt;/span&gt;
openscad &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;All installations succeeded, and concurrent workflows no longer conflicted.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Lessons Learned
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Concurrency is Hard, Even in CI/CD
&lt;/h3&gt;

&lt;p&gt;GitHub Actions makes it easy to parallelize work, but it doesn't automatically handle coordination between workflows. When multiple workflows modify shared state (like a git branch), you need explicit concurrency control.&lt;/p&gt;

&lt;p&gt;The combination of GitHub's &lt;code&gt;concurrency&lt;/code&gt; directive and the reset-and-restore pattern provides robust handling of concurrent updates—without complex locking.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. RPM Spec Files Have Subtle Semantics
&lt;/h3&gt;

&lt;p&gt;The difference between &lt;code&gt;%dir /path/to/directory&lt;/code&gt; and &lt;code&gt;/path/to/directory/&lt;/code&gt; is easy to miss, but it completely changes RPM's packaging behavior. Understanding the ownership model is crucial.&lt;/p&gt;

&lt;p&gt;When in doubt, use:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;%dir&lt;/code&gt; for shared directories you don't populate&lt;/li&gt;
&lt;li&gt;Paths with trailing slashes for directories you own completely&lt;/li&gt;
&lt;li&gt;Glob patterns for shared directories where you only own some files&lt;/li&gt;
&lt;/ul&gt;
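As a sketch, those three cases look like this in a &lt;code&gt;%files&lt;/code&gt; section (the paths are illustrative, not this project's actual spec):

```spec
%files
# Shared directory we ship into but don't populate ourselves: claim only the entry
%dir /usr/share/icons/hicolor
# Directory we own completely: the path packages it and everything inside
/usr/share/openscad/
# Shared directory where we own only specific files: name them (or glob them)
/usr/share/applications/openscad.desktop
```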

&lt;h3&gt;
  
  
  3. Match Build Environment to Dependencies
&lt;/h3&gt;

&lt;p&gt;Your package dependencies must match what your binary is actually linked against. Tools like &lt;code&gt;dpkg-shlibdeps&lt;/code&gt; (Debian) and &lt;code&gt;rpm&lt;/code&gt;'s automatic dependency detection can help, but when working with pre-built binaries, manual verification is essential.&lt;/p&gt;
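A quick manual check (a sketch; `missing_libs` is a name introduced here, not a real tool) lists any shared libraries the dynamic linker cannot resolve for a binary:

```shell
# Print shared-library dependencies the loader cannot resolve for a binary;
# empty output means every linked library was found
missing_libs() {
  ldd "$1" 2>/dev/null | grep "not found" || true
}

# A healthy system binary should report nothing
missing_libs /bin/sh
```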

&lt;h3&gt;
  
  
  4. YAML is Surprisingly Complex
&lt;/h3&gt;

&lt;p&gt;YAML's string handling has many edge cases. For multi-line content in shell commands:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use multiple &lt;code&gt;-m&lt;/code&gt; flags for git commit messages&lt;/li&gt;
&lt;li&gt;Use literal block style (&lt;code&gt;|&lt;/code&gt;) when you need exact newline preservation&lt;/li&gt;
&lt;li&gt;Test your YAML with &lt;code&gt;yamllint&lt;/code&gt; to catch issues early&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  5. Idempotency and Retry Logic are Essential
&lt;/h3&gt;

&lt;p&gt;Network operations fail. APIs have transient issues. Build robust systems by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Making operations idempotent (can be safely repeated)&lt;/li&gt;
&lt;li&gt;Adding retry logic with exponential backoff&lt;/li&gt;
&lt;li&gt;Using features like &lt;code&gt;--clobber&lt;/code&gt; to replace instead of failing on duplicates&lt;/li&gt;
&lt;li&gt;Logging clearly so you can diagnose intermittent issues&lt;/li&gt;
&lt;/ul&gt;
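The retry-with-backoff idea can be captured in a small helper. A sketch (the helper and the simulated flaky command are mine, not from the actual workflows):

```shell
# Retry a command up to $1 times, doubling the wait between attempts
retry() {
  local max=$1; shift
  local delay=1 attempt
  for attempt in $(seq 1 "$max"); do
    "$@" && return 0
    echo "attempt $attempt/$max failed; retrying in ${delay}s" >&2
    sleep "$delay"
    delay=$((delay * 2))   # exponential backoff
  done
  return 1
}

# Simulated flaky operation: fails twice, succeeds on the third call
count_file=$(mktemp)
echo 0 > "$count_file"
flaky_upload() {
  local n
  n=$(( $(cat "$count_file") + 1 ))
  echo "$n" > "$count_file"
  [ "$n" -ge 3 ]
}

retry 5 flaky_upload
```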

&lt;h3&gt;
  
  
  6. Documentation is Part of the Product
&lt;/h3&gt;

&lt;p&gt;The best automation is useless if users don't know how to use it. Invest in clear, complete documentation that includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Copy-paste-ready commands&lt;/li&gt;
&lt;li&gt;Clear version/distribution requirements&lt;/li&gt;
&lt;li&gt;Architecture support matrix&lt;/li&gt;
&lt;li&gt;Troubleshooting guidance&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Frequently Asked Questions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  How do I prevent concurrent GitHub Actions workflows from conflicting?
&lt;/h3&gt;

&lt;p&gt;Use GitHub's &lt;code&gt;concurrency&lt;/code&gt; directive to control workflow execution:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;concurrency&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;group&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;release-creation&lt;/span&gt;
  &lt;span class="na"&gt;cancel-in-progress&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Combined with the reset-and-restore pattern for git operations, this prevents race conditions when multiple workflows modify the same branch.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is the reset-and-restore pattern in GitHub Actions?
&lt;/h3&gt;

&lt;p&gt;The reset-and-restore pattern handles concurrent git conflicts by:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Saving changes to a temporary directory&lt;/li&gt;
&lt;li&gt;Fetching and resetting to the latest remote state&lt;/li&gt;
&lt;li&gt;Restoring saved changes on top&lt;/li&gt;
&lt;li&gt;Retrying the push operation&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This avoids merge conflicts in automated workflows.&lt;/p&gt;
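Those four steps can be sketched as a shell function. This is a simplified sketch, assuming the generated files live under `dists/`; the function name and commit message are illustrative:

```shell
# Reset-and-restore: save generated files, adopt the remote tip, reapply, push.
reset_and_restore_push() {
  branch=$1
  stash=$(mktemp -d)
  cp -r dists "$stash/"                         # 1. save our changes aside
  for attempt in 1 2 3; do
    git fetch -q origin "$branch"
    git reset -q --hard "origin/$branch"        # 2. adopt the latest remote state
    rm -rf dists
    cp -r "$stash/dists" .                      # 3. restore saved changes on top
    git add -A
    git commit -q -m "update repository metadata" || true
    if git push -q origin "HEAD:$branch"; then  # 4. push; on rejection, loop
      return 0
    fi
  done
  return 1
}
```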

&lt;h3&gt;
  
  
  How do I fix "unpackaged files found" errors in RPM builds?
&lt;/h3&gt;

&lt;p&gt;Change &lt;code&gt;%dir /path/&lt;/code&gt; to &lt;code&gt;/path/&lt;/code&gt; (with trailing slash) in your RPM spec file's &lt;code&gt;%files&lt;/code&gt; section. The &lt;code&gt;%dir&lt;/code&gt; directive only packages the directory entry, not its contents.&lt;/p&gt;

&lt;h3&gt;
  
  
  How do I synchronize multiple GitHub Actions workflows?
&lt;/h3&gt;

&lt;p&gt;Use workflow creation timestamps to detect if workflows were triggered by the same build:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Get creation times and compare within a sync window (e.g., 5 minutes)&lt;/span&gt;
&lt;span class="nv"&gt;DEB_TIME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;gh api repos/&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ github.repository &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;/actions/runs/&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;DEB_RUN&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--jq&lt;/span&gt; &lt;span class="s1"&gt;'.created_at'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;RPM_TIME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;gh api repos/&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;{ github.repository &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;/actions/runs/&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;RPM_RUN&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="nt"&gt;--jq&lt;/span&gt; &lt;span class="s1"&gt;'.created_at'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Workflows created within your sync window are from the same build and can be safely combined.&lt;/p&gt;
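The comparison itself is just timestamp arithmetic. A sketch (the function name is mine) using GNU `date` to convert the ISO-8601 `created_at` values to epoch seconds:

```shell
# True if two workflow created_at timestamps fall within the sync window
same_build() {
  t1=$(date -u -d "$1" +%s)
  t2=$(date -u -d "$2" +%s)
  delta=$(( t1 - t2 ))
  [ "${delta#-}" -le 300 ]   # 5-minute window; ${delta#-} strips a leading minus
}
```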

&lt;h2&gt;
  
  
  Conclusion: Building Robust Multi-Architecture CI/CD Pipelines
&lt;/h2&gt;

&lt;p&gt;Building a fully automated multi-architecture package distribution system for CI/CD pipelines is complex. It requires understanding:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Git coordination patterns&lt;/strong&gt; for concurrent modifications&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Package manager semantics&lt;/strong&gt; (RPM, Debian) at a deep level&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dependency management&lt;/strong&gt; across different distribution versions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub Actions workflows&lt;/strong&gt; and their concurrency model&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Retry and idempotency patterns&lt;/strong&gt; for robust automation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The reset-and-restore pattern proved particularly valuable. Instead of trying to merge concurrent changes (complex and error-prone), we save our changes, reset to the latest state, and reapply our changes on top. This works because our changes are additive and don't conflict with each other - APT updates modify &lt;code&gt;dists/&lt;/code&gt;, RPM updates modify &lt;code&gt;rpm/&lt;/code&gt;, and both are independent.&lt;/p&gt;

&lt;p&gt;The key insight is that &lt;strong&gt;concurrent workflows are inevitable in modern CI/CD&lt;/strong&gt;. Rather than fighting them with complex locking, design your system to handle conflicts gracefully through idempotent operations and smart retry logic.&lt;/p&gt;

&lt;p&gt;Now the OpenSCAD build infrastructure runs smoothly: three architectures, two package formats, automatic repository updates, and GitHub Releases - all working in harmony. The next time someone pushes a commit, packages are built, tested, and published across all architectures within an hour, with zero manual intervention.&lt;/p&gt;

&lt;p&gt;And that's the dream of automation: complexity managed, reliability achieved, and maintainers free to focus on actual development rather than packaging mechanics.&lt;/p&gt;

&lt;h2&gt;
  
  
  Beyond Package Management: Applying These Patterns to Other CI/CD Scenarios
&lt;/h2&gt;

&lt;p&gt;The concurrency patterns and retry logic described here apply to many DevOps automation challenges:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Artifact management&lt;/strong&gt;: Concurrent uploads to package registries&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Container registry updates&lt;/strong&gt;: Pushing multi-platform Docker images&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Infrastructure as Code&lt;/strong&gt;: Terraform state file conflicts&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Continuous deployment&lt;/strong&gt;: Coordinating deployments across environments&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The reset-and-restore pattern is particularly valuable for any scenario involving concurrent modifications to shared state in continuous integration pipelines.&lt;/p&gt;

&lt;h2&gt;
  
  
  Further Reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GitHub Actions Concurrency&lt;/strong&gt;: &lt;a href="https://docs.github.com/en/actions/using-jobs/using-concurrency" rel="noopener noreferrer"&gt;https://docs.github.com/en/actions/using-jobs/using-concurrency&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RPM Packaging Guide&lt;/strong&gt;: &lt;a href="https://rpm-packaging-guide.github.io/" rel="noopener noreferrer"&gt;https://rpm-packaging-guide.github.io/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Debian Policy Manual&lt;/strong&gt;: &lt;a href="https://www.debian.org/doc/debian-policy/" rel="noopener noreferrer"&gt;https://www.debian.org/doc/debian-policy/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;YAML Specification&lt;/strong&gt;: &lt;a href="https://yaml.org/spec/1.2.2/" rel="noopener noreferrer"&gt;https://yaml.org/spec/1.2.2/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Git Reset vs Merge&lt;/strong&gt;: &lt;a href="https://git-scm.com/docs/git-reset" rel="noopener noreferrer"&gt;https://git-scm.com/docs/git-reset&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;




</description>
      <category>devops</category>
      <category>rpm</category>
      <category>debian</category>
      <category>riscv64</category>
    </item>
  </channel>
</rss>
