<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Depot</title>
    <description>The latest articles on Forem by Depot (@depot).</description>
    <link>https://forem.com/depot</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F5535%2F63da8ac5-60b6-4600-8cc4-e804910176f7.png</url>
      <title>Forem: Depot</title>
      <link>https://forem.com/depot</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/depot"/>
    <language>en</language>
    <item>
      <title>Optimize your Dockerfile for 5x faster builds</title>
      <dc:creator>The Charmizard</dc:creator>
      <pubDate>Fri, 23 Jan 2026 22:07:48 +0000</pubDate>
      <link>https://forem.com/depot/optimize-your-dockerfile-for-5x-faster-builds-214a</link>
      <guid>https://forem.com/depot/optimize-your-dockerfile-for-5x-faster-builds-214a</guid>
      <description>&lt;p&gt;Slow Docker builds are annoying. You make a small code change, run &lt;code&gt;docker build&lt;/code&gt;, and wait. This post shows you how to optimize your Dockerfile for faster builds.&lt;/p&gt;

&lt;p&gt;These Dockerfile optimization techniques will work whether you're building locally or in CI, and for any language. The examples in this post use a Node.js application with about 40 dependencies (including common packages like ORMs, UI libraries, and AWS SDKs) to represent a production React/Node application.&lt;/p&gt;

&lt;p&gt;Here's a preview of the biggest wins after optimization:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Before&lt;/th&gt;
&lt;th&gt;After&lt;/th&gt;
&lt;th&gt;Improvement&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Rebuild time (after code change)&lt;/td&gt;
&lt;td&gt;37.4s&lt;/td&gt;
&lt;td&gt;7.1s&lt;/td&gt;
&lt;td&gt;5.3x faster&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Final image size&lt;/td&gt;
&lt;td&gt;2.49 GB&lt;/td&gt;
&lt;td&gt;1.11 GB&lt;/td&gt;
&lt;td&gt;55% smaller&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Build times can vary by 10-15% between runs depending on system load, network conditions, and npm registry response times. The values in this post are averages across three builds.&lt;/p&gt;

&lt;h2&gt;
  
  
  The problem: A slow Dockerfile
&lt;/h2&gt;

&lt;p&gt;The example Dockerfile takes about 37 seconds to rebuild after code changes. The optimizations in the following sections bring that down to about 7 seconds. 37 seconds isn't &lt;em&gt;the worst&lt;/em&gt;, but saving 30 seconds per build adds up. More importantly, the benefits of these optimizations scale with your app.&lt;/p&gt;

&lt;p&gt;Here's an unoptimized Dockerfile that rebuilds everything on every code change:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; node:22&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm ci
&lt;span class="k"&gt;RUN &lt;/span&gt;npm run build
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["npm", "start"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you change a single line of application code, Docker invalidates the cache at the &lt;code&gt;COPY . .&lt;/code&gt; instruction and rebuilds everything after it. So even though &lt;code&gt;package.json&lt;/code&gt; didn't change, Docker reinstalls all the npm packages.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Build times:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First build: 37.1s&lt;/li&gt;
&lt;li&gt;After code change: 37.4s (still rebuilds everything)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The rebuild is about as slow as the first build: the cache invalidates when Docker reaches the first changed instruction (&lt;code&gt;COPY . .&lt;/code&gt;), and every instruction after it reruns. The root cause is instruction order. The Dockerfile copies all the files before installing dependencies, so any file change forces a full dependency reinstall.&lt;/p&gt;

&lt;h2&gt;
  
  
  Order instructions to maximize cache reuse
&lt;/h2&gt;

&lt;p&gt;The Docker build cache invalidates at the first changed instruction and rebuilds everything after it. To minimize rebuilds, order your Dockerfile from least to most frequently changed.&lt;/p&gt;

&lt;p&gt;For most apps, dependencies change less often than your source code, so you should install dependencies before you copy source files. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; node:22&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; package.json package-lock.json ./&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm ci
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm run build
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["npm", "start"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Build time improvement:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Before: 37.4s (rebuilds everything)&lt;/li&gt;
&lt;li&gt;After: 7.1s (reuses dependency cache)&lt;/li&gt;
&lt;li&gt;Improvement: 5.3x faster&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Reordering the Dockerfile instructions cut rebuild time by 81% (from 37.4 seconds to 7.1 seconds). The builder reuses the cached &lt;code&gt;npm ci&lt;/code&gt; layer when only the application code changes and the package manifest files remain unchanged.&lt;/p&gt;

&lt;h3&gt;
  
  
  How the Docker build cache reuses layers
&lt;/h3&gt;

&lt;p&gt;The builder calculates a cache checksum from &lt;code&gt;package.json&lt;/code&gt; and &lt;code&gt;package-lock.json&lt;/code&gt;. As long as those files don't change, the &lt;code&gt;npm ci&lt;/code&gt; layer stays cached—even when you modify other source files.&lt;/p&gt;

&lt;p&gt;For details on how the builder determines cache validity, see &lt;a href="https://dev.to/blog/ultimate-guide-to-docker-build-cache#cache-invalidation-for-add-and-copy-instructions"&gt;Cache invalidation for ADD and COPY instructions&lt;/a&gt; in our Ultimate guide to Docker build cache post.&lt;/p&gt;

&lt;h3&gt;
  
  
  Optimize instruction order for any language
&lt;/h3&gt;

&lt;p&gt;You can apply the principle of ordering instructions from least to most frequently changed for any language. Copy your dependency manifest files first, install dependencies, then copy your source code.&lt;/p&gt;

&lt;p&gt;Python example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; requirements.txt .&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Go example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; go.mod go.sum ./&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;go mod download
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Ruby example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; Gemfile Gemfile.lock ./&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;bundle &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Exclude unnecessary files from the build
&lt;/h2&gt;

&lt;p&gt;Every file in your build context can invalidate the cache. If you copy a directory that includes files Docker doesn't need, changing those files forces a rebuild even though the files don't affect your application.&lt;/p&gt;

&lt;h3&gt;
  
  
  Use a .dockerignore file
&lt;/h3&gt;

&lt;p&gt;A &lt;code&gt;.dockerignore&lt;/code&gt; file works like &lt;code&gt;.gitignore&lt;/code&gt; but for Docker builds. To exclude files and directories from the build context, create a &lt;code&gt;.dockerignore&lt;/code&gt; file in the same directory as your Dockerfile.&lt;/p&gt;

&lt;p&gt;An example &lt;code&gt;.dockerignore&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;node_modules
npm-debug.log
.git
.gitignore
README.md
.env
.vscode
.idea
dist
build
*.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Why exclude these files from your Docker build?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Some files are regenerated during the build anyway.

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;node_modules&lt;/code&gt;, &lt;code&gt;dist&lt;/code&gt;, &lt;code&gt;build&lt;/code&gt;: Regenerated by &lt;code&gt;npm ci&lt;/code&gt; and &lt;code&gt;npm run build&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Development files aren't needed at runtime.

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;.env&lt;/code&gt;: May contain secrets, changes frequently.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;.vscode&lt;/code&gt;, &lt;code&gt;.idea&lt;/code&gt;: Editor configs.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;README.md&lt;/code&gt;: Documentation.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Large files slow down context transfer.

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;.git&lt;/code&gt;: Version history (can be hundreds of MB).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;*.log&lt;/code&gt;: Accumulated logs.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  Build context size reduced
&lt;/h3&gt;

&lt;p&gt;Excluding unnecessary files reduces the build context size, which speeds up the initial context transfer to the builder and prevents cache invalidation from irrelevant file changes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Build context size:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Without &lt;code&gt;.dockerignore&lt;/code&gt;: 325 MB&lt;/li&gt;
&lt;li&gt;With &lt;code&gt;.dockerignore&lt;/code&gt;: 1.2 MB&lt;/li&gt;
&lt;li&gt;Reduction: 99.6% (from 325 MB to 1.2 MB)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without &lt;code&gt;.dockerignore&lt;/code&gt;, Docker copies your entire local &lt;code&gt;node_modules&lt;/code&gt; directory (which gets rebuilt anyway) and Git history (which isn't needed in the image).&lt;/p&gt;
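&lt;p&gt;To check how much context your own build transfers, grep the plain progress output (the exact wording of the line may vary between BuildKit versions):&lt;br&gt;
&lt;/p&gt;

```shell
# Print the context-transfer line from the build output
docker build --progress=plain . 2>&1 | grep -i 'transferring context'
```

&lt;p&gt;Run it once without a &lt;code&gt;.dockerignore&lt;/code&gt; file and once with it to reproduce the comparison above.&lt;/p&gt;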

&lt;h2&gt;
  
  
  Separate build and runtime dependencies
&lt;/h2&gt;

&lt;p&gt;Most apps need build tools that aren't required at runtime. Multi-stage builds let you use a full build environment to compile your application, then copy only the runtime artifacts to a minimal final image.&lt;/p&gt;

&lt;p&gt;First, let's see what a single-stage build includes, then we'll split it into stages.&lt;/p&gt;

&lt;h3&gt;
  
  
  Single-stage build with all dependencies
&lt;/h3&gt;

&lt;p&gt;A single-stage build includes everything:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; node:22&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; package.json package-lock.json ./&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm ci
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm run build
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["npm", "start"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This image includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build tools from the base image&lt;/li&gt;
&lt;li&gt;All npm dependencies (dev and production)&lt;/li&gt;
&lt;li&gt;Source code and build artifacts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Final image size: 2.49 GB&lt;/p&gt;

&lt;h3&gt;
  
  
  Multi-stage build splits build and runtime
&lt;/h3&gt;

&lt;p&gt;Multi-stage builds split the process into distinct stages:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="c"&gt;# Stage 1: deps - install all dependencies once&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;node:22&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;deps&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; package.json package-lock.json ./&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm ci

&lt;span class="c"&gt;# Stage 2: build - inherits from deps, compiles the application&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;deps&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;build&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm run build

&lt;span class="c"&gt;# Stage 3: prod - minimal runtime image with production dependencies only&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;node:22-slim&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;prod&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; package.json package-lock.json ./&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm ci &lt;span class="nt"&gt;--omit&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;dev
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=build /app/dist ./dist&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=build /app/public ./public&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["npm", "start"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The deps stage installs all dependencies once. The build stage inherits from deps (reusing the installed dependencies without reinstalling), copies the source code, and builds the application. The production stage starts fresh with &lt;code&gt;node:22-slim&lt;/code&gt; and runs &lt;code&gt;npm ci --omit=dev&lt;/code&gt; to install only production dependencies (no dev tools or testing frameworks).&lt;/p&gt;

&lt;p&gt;The key here is that the build stage uses &lt;code&gt;FROM deps&lt;/code&gt;, so it doesn't need to reinstall dependencies. This pattern is cleaner and more efficient than reinstalling dependencies in each stage.&lt;/p&gt;
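&lt;p&gt;One practical benefit of named stages: you can build and inspect any intermediate stage on its own with &lt;code&gt;--target&lt;/code&gt;. The &lt;code&gt;myapp&lt;/code&gt; tags here are placeholders:&lt;br&gt;
&lt;/p&gt;

```shell
# Build only up to the "build" stage (useful for debugging compile issues)
docker build --target build -t myapp:build .

# Build the final "prod" stage (the default when no --target is given)
docker build -t myapp:prod .
```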

&lt;p&gt;&lt;strong&gt;Image size improvement:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Single-stage build: 2.49 GB&lt;/li&gt;
&lt;li&gt;Multi-stage build: 1.11 GB&lt;/li&gt;
&lt;li&gt;Improvement: 55% smaller&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The multi-stage build maintains fast rebuild times for code changes (between 7 and 8 seconds) because the deps stage remains cached when only source code changes. The size reduction is an additional benefit that doesn't compromise rebuild speed.&lt;/p&gt;

&lt;p&gt;Cache benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Changes to the source code get rebuilt from the build stage forward, but don't affect earlier stages.&lt;/li&gt;
&lt;li&gt;Changes to the production stage don't invalidate the build stage cache.&lt;/li&gt;
&lt;li&gt;Each stage maintains its own cache layers independently.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: BuildKit executes independent stages in parallel. If your application has multiple components that can be built independently, you can build them simultaneously. For more information, see &lt;a href="https://depot.dev/blog/how-buildkit-parallelizes-your-builds#stage-parallelism" rel="noopener noreferrer"&gt;Stage parallelism&lt;/a&gt; in the How BuildKit parallelizes your builds post.&lt;/p&gt;
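&lt;p&gt;As a sketch, here's what independent stages look like. This assumes a hypothetical repository with separate &lt;code&gt;frontend/&lt;/code&gt; and &lt;code&gt;backend/&lt;/code&gt; directories; the two install-and-build stages share nothing, so BuildKit can run them at the same time:&lt;br&gt;
&lt;/p&gt;

```dockerfile
# These two stages are independent, so BuildKit builds them in parallel
FROM node:22 AS frontend-build
WORKDIR /app
COPY frontend/package.json frontend/package-lock.json ./
RUN npm ci
COPY frontend/ .
RUN npm run build

FROM node:22 AS backend-build
WORKDIR /app
COPY backend/package.json backend/package-lock.json ./
RUN npm ci
COPY backend/ .
RUN npm run build

# The final stage depends on both, so it runs last
FROM node:22-slim AS prod
WORKDIR /app
COPY --from=backend-build /app/dist ./dist
COPY --from=frontend-build /app/dist ./dist/public
CMD ["node", "dist/server.js"]
```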

&lt;h2&gt;
  
  
  Use cache mounts for package managers
&lt;/h2&gt;

&lt;p&gt;Cache mounts provide persistent storage for package manager caches across builds. Unlike the regular build cache that invalidates when dependencies change, cache mounts let the package manager maintain its own cache of downloaded packages.&lt;/p&gt;

&lt;p&gt;While code changes are more common, optimizing dependency updates matters when you're actively developing and frequently adding packages.&lt;/p&gt;

&lt;p&gt;Cache mounts are most useful when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your project has hundreds of dependencies.&lt;/li&gt;
&lt;li&gt;You frequently add or update individual dependencies.&lt;/li&gt;
&lt;li&gt;Package downloads are slow (large packages or slow network).&lt;/li&gt;
&lt;li&gt;You're building on the same machine or builder repeatedly.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Without cache mounts
&lt;/h3&gt;

&lt;p&gt;The baseline for this test uses the multi-stage Dockerfile from the previous section and adds a new dependency. Unlike code changes, which take about 7 seconds because the deps stage remains cached, dependency changes force the deps stage to rebuild along with all downstream stages.&lt;/p&gt;

&lt;p&gt;Standard dependency installation downloads packages every time dependencies change:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; package.json package-lock.json ./&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm ci
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you add a new dependency, the &lt;code&gt;npm ci&lt;/code&gt; layer rebuilds with an empty npm cache, forcing npm to re-download all packages.&lt;/p&gt;

&lt;p&gt;Build time without cache mounts: 32.3s&lt;/p&gt;

&lt;h3&gt;
  
  
  With cache mounts
&lt;/h3&gt;

&lt;p&gt;Cache mounts persist the npm cache directory across builds:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; package.json package-lock.json ./&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nt"&gt;--mount&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;cache,target&lt;span class="o"&gt;=&lt;/span&gt;/root/.npm &lt;span class="se"&gt;\
&lt;/span&gt;    npm ci
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;--mount=type=cache,target=/root/.npm&lt;/code&gt; option tells the builder to mount persistent storage at npm's cache directory. When dependencies change and the &lt;code&gt;npm ci&lt;/code&gt; layer needs to rebuild, npm can reuse packages from this persisted cache instead of downloading them again.&lt;/p&gt;
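&lt;p&gt;Cache mounts also accept optional &lt;code&gt;id&lt;/code&gt; and &lt;code&gt;sharing&lt;/code&gt; parameters. By default the cache is keyed by its target path and shared between concurrent builds; &lt;code&gt;sharing=locked&lt;/code&gt; serializes access if parallel builds writing to the same cache cause problems:&lt;br&gt;
&lt;/p&gt;

```dockerfile
# Explicitly name the cache and serialize concurrent access to it
RUN --mount=type=cache,id=npm-cache,target=/root/.npm,sharing=locked \
    npm ci
```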

&lt;p&gt;&lt;strong&gt;Build time improvement (adding one new package):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Without cache mount: 32.3s&lt;/li&gt;
&lt;li&gt;With cache mount: 28.5s&lt;/li&gt;
&lt;li&gt;Improvement: 3.8s faster (12% improvement)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The improvement here is modest because &lt;code&gt;npm ci&lt;/code&gt; is already fast with modern npm and a good network connection. The gap widens with larger dependency trees or slower registries.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cache mounts for other package managers
&lt;/h3&gt;

&lt;p&gt;Python (pip) example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nt"&gt;--mount&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;cache,target&lt;span class="o"&gt;=&lt;/span&gt;/root/.cache/pip &lt;span class="se"&gt;\
&lt;/span&gt;    pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Go example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nt"&gt;--mount&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;cache,target&lt;span class="o"&gt;=&lt;/span&gt;/go/pkg/mod &lt;span class="se"&gt;\
&lt;/span&gt;    go mod download
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Ruby example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nt"&gt;--mount&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;cache,target&lt;span class="o"&gt;=&lt;/span&gt;/usr/local/bundle/cache &lt;span class="se"&gt;\
&lt;/span&gt;    bundle &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each package manager has its own default cache location. The cache mount makes that location persistent across builds.&lt;/p&gt;
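&lt;p&gt;If you're unsure where a package manager keeps its cache, ask it directly (assuming the tool is installed in your build image):&lt;br&gt;
&lt;/p&gt;

```shell
npm config get cache     # usually /root/.npm when running as root
pip cache dir            # usually /root/.cache/pip
go env GOMODCACHE        # usually /go/pkg/mod in golang images
```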

&lt;h3&gt;
  
  
  Cache mount persistence
&lt;/h3&gt;

&lt;p&gt;Cache mounts persist on a single Docker daemon or builder instance. When you build locally, the cache persists between your local builds. In CI environments, cache mount persistence depends on your setup.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Ephemeral CI runners&lt;/strong&gt; (GitHub Actions default runners, fresh containers): Cache mounts don't persist between builds. You'll need to configure external cache backends.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Persistent builders&lt;/strong&gt; (self-hosted runners, Depot, BuildKit with remote cache): Cache mounts persist and provide speed benefits.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For details on using cache mounts in CI, see &lt;a href="https://depot.dev/blog/how-to-use-buildkit-cache-mounts-in-ci" rel="noopener noreferrer"&gt;How to use BuildKit cache mounts in CI providers&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Choose the right base image
&lt;/h2&gt;

&lt;p&gt;Your base image choice affects both image size and cache behavior. Smaller base images mean less data to download and store in cache layers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Base image options for Node.js
&lt;/h3&gt;

&lt;p&gt;The following table lists Node 22 base images. The image sizes vary between architectures and versions, but the use cases remain the same.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Image&lt;/th&gt;
&lt;th&gt;Use case&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;node:22&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Full development environment with build tools and system libraries.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;node:22-slim&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Roughly 75% smaller than the full base image. Production runtime without build tools or documentation.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;node:22-alpine&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Roughly 85% smaller than the full base image. Minimal runtime based on Alpine Linux.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Image sizes vary by Node version.&lt;/p&gt;
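&lt;p&gt;To compare the variants on your own machine, pull them and list their sizes (exact numbers depend on your architecture and the current Node release):&lt;br&gt;
&lt;/p&gt;

```shell
for tag in 22 22-slim 22-alpine; do
  docker pull "node:$tag"
done
docker images node --format 'node:{{.Tag}}  {{.Size}}'
```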

&lt;h3&gt;
  
  
  When to use each base image
&lt;/h3&gt;

&lt;p&gt;Use full base images (for example, &lt;code&gt;node:22&lt;/code&gt;) for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build stages in multi-stage builds where you need compilation tools&lt;/li&gt;
&lt;li&gt;Development environments&lt;/li&gt;
&lt;li&gt;Applications with native dependencies that need build toolchains&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Use slim images (for example, &lt;code&gt;node:22-slim&lt;/code&gt;) for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Production runtime stages&lt;/li&gt;
&lt;li&gt;Applications without native dependencies or with pre-compiled binaries&lt;/li&gt;
&lt;li&gt;When you need glibc compatibility (most Linux binaries expect glibc)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Use Alpine images (for example, &lt;code&gt;node:22-alpine&lt;/code&gt;) for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Minimal production deployments where size is critical&lt;/li&gt;
&lt;li&gt;Applications without native dependencies&lt;/li&gt;
&lt;li&gt;When you've verified your native dependencies work with musl libc&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Example build with optimized base images
&lt;/h3&gt;

&lt;p&gt;Here's an example of a production multi-stage build with appropriate base images:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="c"&gt;# Stage 1: deps - install all dependencies&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;node:22&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;deps&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; package.json package-lock.json ./&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm ci

&lt;span class="c"&gt;# Stage 2: build - inherits from deps for compilation&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;deps&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;build&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm run build

&lt;span class="c"&gt;# Stage 3: prod - uses slim image for smaller final image&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;node:22-slim&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;prod&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; package.json package-lock.json ./&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm ci &lt;span class="nt"&gt;--omit&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;dev
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=build /app/dist ./dist&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=build /app/public ./public&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["npm", "start"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This example gives you full access to build tools during compilation but keeps the final image small.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to find optimization opportunities
&lt;/h2&gt;

&lt;p&gt;To optimize effectively, measure where time is spent during your builds.&lt;/p&gt;

&lt;h3&gt;
  
  
  View detailed build output
&lt;/h3&gt;

&lt;p&gt;Use the plain progress output to show container output and see timing for each build step:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker build &lt;span class="nt"&gt;--progress&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;plain &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;--progress=plain&lt;/code&gt; output shows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Time spent on each instruction&lt;/li&gt;
&lt;li&gt;Whether layers came from cache (CACHED markers)&lt;/li&gt;
&lt;li&gt;Context transfer times&lt;/li&gt;
&lt;li&gt;Layer push/pull times&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To find optimization opportunities, look for steps that take more than 10 seconds and aren't marked as &lt;code&gt;CACHED&lt;/code&gt;.&lt;/p&gt;
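&lt;p&gt;In the plain output, a cached step looks roughly like this (step numbers and stage names will differ for your build):&lt;br&gt;
&lt;/p&gt;

```
#8 [deps 4/4] RUN npm ci
#8 CACHED
```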

&lt;h3&gt;
  
  
  Identify optimization opportunities
&lt;/h3&gt;

&lt;p&gt;Look for the following in your builds:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Long-running RUN instructions that execute frequently (good candidates for better caching or cache mounts)&lt;/li&gt;
&lt;li&gt;Large COPY operations that invalidate often (add to .dockerignore)&lt;/li&gt;
&lt;li&gt;Sequential operations that could be parallelized (multi-stage builds)&lt;/li&gt;
&lt;li&gt;Repeated package downloads that could use cache mounts&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Complete optimized example
&lt;/h2&gt;

&lt;p&gt;Optimization checklist:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reorder instructions (least to most frequently changed)&lt;/li&gt;
&lt;li&gt;Add &lt;code&gt;.dockerignore&lt;/code&gt; file&lt;/li&gt;
&lt;li&gt;Use multi-stage builds&lt;/li&gt;
&lt;li&gt;Add cache mounts for package managers&lt;/li&gt;
&lt;li&gt;Choose appropriate base images&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here's the full optimized Dockerfile incorporating all techniques:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="c"&gt;# Stage 1: deps - install all dependencies once&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;node:22&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;deps&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; package.json package-lock.json ./&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nt"&gt;--mount&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;cache,target&lt;span class="o"&gt;=&lt;/span&gt;/root/.npm &lt;span class="se"&gt;\
&lt;/span&gt;    npm ci

&lt;span class="c"&gt;# Stage 2: build - compile using the full dependency tree&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;deps&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;build&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm run build

&lt;span class="c"&gt;# Stage 3: prod - minimal runtime image with only production dependencies&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;node:22-slim&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;prod&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; package.json package-lock.json ./&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nt"&gt;--mount&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;cache,target&lt;span class="o"&gt;=&lt;/span&gt;/root/.npm &lt;span class="se"&gt;\
&lt;/span&gt;    npm ci &lt;span class="nt"&gt;--omit&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;dev
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=build /app/dist ./dist&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=build /app/public ./public&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["npm", "start"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
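
&lt;p&gt;One note: the &lt;code&gt;--mount=type=cache&lt;/code&gt; flags require BuildKit, which is the default builder in Docker Engine 23.0 and later. A typical build command for this Dockerfile (the image name here is just an example) looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# On older Docker engines, enable BuildKit explicitly
DOCKER_BUILDKIT=1 docker build --target prod -t my-app .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;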



&lt;p&gt;With the following &lt;code&gt;.dockerignore&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;node_modules
npm-debug.log
.git
.gitignore
README.md
.env
.vscode
.idea
dist
build
*.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Final results
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Optimization&lt;/th&gt;
&lt;th&gt;Primary benefit&lt;/th&gt;
&lt;th&gt;Impact&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Instruction reordering&lt;/td&gt;
&lt;td&gt;Code change rebuilds&lt;/td&gt;
&lt;td&gt;5.3x faster (biggest win)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Multi-stage builds&lt;/td&gt;
&lt;td&gt;Image size&lt;/td&gt;
&lt;td&gt;55% smaller&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cache mounts&lt;/td&gt;
&lt;td&gt;Dependency updates&lt;/td&gt;
&lt;td&gt;12% faster&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;.dockerignore&lt;/td&gt;
&lt;td&gt;Build context&lt;/td&gt;
&lt;td&gt;99.6% smaller context&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;These optimizations made rebuild time over 5x faster and reduced image size by 55%. If you need a place to start, go with instruction reordering: it delivers the biggest speed improvement for about 30 seconds of work.&lt;/p&gt;
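
&lt;p&gt;If you skimmed to this point: instruction reordering simply means copying your dependency manifests and installing dependencies before copying the rest of your source, so a code change no longer invalidates the cached &lt;code&gt;npm ci&lt;/code&gt; layer. A minimal before/after sketch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Before: any code change invalidates the npm ci layer
COPY . .
RUN npm ci

# After: npm ci re-runs only when the lockfile changes
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;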

&lt;p&gt;Building your Docker image with Depot can make your builds even faster. See &lt;a href="https://depot.dev/docs/container-builds/overview"&gt;Depot container builds&lt;/a&gt; for how it all works.&lt;/p&gt;

&lt;p&gt;Written by: Andrea Anderson - Technical Writer at Depot&lt;/p&gt;

</description>
      <category>docker</category>
      <category>devops</category>
      <category>tutorial</category>
      <category>cicd</category>
    </item>
    <item>
      <title>How BuildKit Parallelizes Your Builds</title>
      <dc:creator>The Charmizard</dc:creator>
      <pubDate>Fri, 21 Nov 2025 02:02:46 +0000</pubDate>
      <link>https://forem.com/depot/how-buildkit-parallelizes-your-builds-1fnf</link>
      <guid>https://forem.com/depot/how-buildkit-parallelizes-your-builds-1fnf</guid>
      <description>&lt;p&gt;When you run &lt;code&gt;docker build&lt;/code&gt;, you might assume your Dockerfile instructions execute one after another, like a traditional script. But behind the scenes, BuildKit is doing something far more sophisticated. At the heart of BuildKit lies a DAG (Directed Acyclic Graph) solver that transforms your Dockerfile into an optimized execution plan, identifying all parallelizable operations while maintaining dependency order.&lt;/p&gt;

&lt;p&gt;We've previously covered how BuildKit works &lt;a href="https://depot.dev/blog/buildkit-in-depth" rel="noopener noreferrer"&gt;here&lt;/a&gt;, but today we're discussing how BuildKit parallelizes your builds to make them faster and more efficient.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding BuildKit's DAG solver
&lt;/h2&gt;

&lt;p&gt;BuildKit parses build instructions into something called LLB (Low-Level Build) format, creating a dependency graph of all the operations needed to produce your final image:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkdtikdr9lqt77dsh6ipe.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkdtikdr9lqt77dsh6ipe.webp" alt="Diagram showing the flow of information through BuildKit: the Dockerfile to the BuildKit frontend, and then to the LLB, then to the BuildKit backend and finally to the Docker image." width="800" height="365"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;The DAG solver examines each instruction in your build and determines what it depends on. If instruction B needs the output of instruction A, they run sequentially. But if instructions B and C both only depend on A, they can run at the same time once A completes. This dependency analysis happens before any actual building starts, allowing BuildKit to create the most efficient execution plan possible.&lt;/p&gt;

&lt;p&gt;This graph-based approach is what enables BuildKit to be fully concurrent. Every node in the graph represents a build operation, and BuildKit can execute any nodes that don't have unmet dependencies. It's constantly looking for work it can parallelize, which is why modern container builds can be surprisingly fast when structured properly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Three levels of parallelism in action
&lt;/h2&gt;

&lt;p&gt;Once BuildKit has built its execution graph, parallelism happens at three distinct levels, each contributing to faster builds in different ways.&lt;/p&gt;

&lt;h3&gt;
  
  
  Stage parallelism
&lt;/h3&gt;

&lt;p&gt;The most visible form of parallelism happens when you have multiple stages in a multi-stage Dockerfile that don't depend on each other. Consider a typical web application with both frontend and backend components:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;node:20&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;frontend&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; frontend/ .&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; npm build

&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;golang:1.21&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;backend&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; backend/ .&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;go build &lt;span class="nt"&gt;-o&lt;/span&gt; server

&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;alpine&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;final&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=frontend /app/dist /static&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=backend /app/server /usr/bin/&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;BuildKit recognizes that the &lt;code&gt;frontend&lt;/code&gt; and &lt;code&gt;backend&lt;/code&gt; stages have no dependencies on each other. While your Node.js dependencies are installing and your React app is building, BuildKit is simultaneously compiling your Go backend on a separate thread or CPU core. The &lt;code&gt;final&lt;/code&gt; stage waits for both to complete, but you've effectively cut your build time by running these independent workloads in parallel.&lt;/p&gt;

&lt;h3&gt;
  
  
  Instruction parallelism
&lt;/h3&gt;

&lt;p&gt;Even within a single stage, BuildKit finds opportunities for parallel execution. When you have multiple &lt;code&gt;COPY&lt;/code&gt; instructions that don't depend on each other, or when different branches of your build graph can be resolved independently, BuildKit executes them concurrently.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foqtqgey2t77h57ylxdf4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foqtqgey2t77h57ylxdf4.png" alt="Instruction level parallelism" width="800" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is particularly noticeable when you're copying multiple directories or files that will be processed separately later. BuildKit can fetch all these resources in parallel rather than sequentially, shaving precious seconds off your build time. The solver is smart enough to understand which operations can safely overlap and which must wait for their dependencies.&lt;/p&gt;
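
&lt;p&gt;As a small illustrative sketch (the paths here are hypothetical), the sources for these two &lt;code&gt;COPY&lt;/code&gt; instructions can be fetched from the build context in parallel, even though the resulting layers are stacked in order:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM node:20
WORKDIR /app
# Neither COPY depends on the other's output, so BuildKit
# can resolve both sources concurrently
COPY assets/ ./assets/
COPY config/ ./config/
RUN npm ci
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;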

&lt;h3&gt;
  
  
  Deduplication across concurrent builds
&lt;/h3&gt;

&lt;p&gt;Though it's the least obvious, deduplication may be BuildKit's cleverest optimization. This isn't just caching: it's recognizing when multiple parts of your build are trying to do the exact same thing and ensuring that work happens only once.&lt;/p&gt;

&lt;p&gt;Imagine you're building multiple services that all start from the same base image and run &lt;code&gt;npm ci&lt;/code&gt; with identical &lt;code&gt;package.json&lt;/code&gt; files. Without deduplication, each service would run its own &lt;code&gt;npm ci&lt;/code&gt;, even if they're building simultaneously. But BuildKit is smarter than that. It uses content-addressable storage and checksums to identify when different build contexts would produce identical layers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6wdoz2d9qsr1f0ckovgf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6wdoz2d9qsr1f0ckovgf.png" alt="BuildKit deduplication within a build" width="800" height="1184"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When BuildKit detects this situation, something interesting happens. The first build starts computing the layer while the others wait. Once the first build completes that &lt;code&gt;npm ci&lt;/code&gt;, all the waiting builds immediately use that result and move on. The same operation that would have run three times runs only once.&lt;/p&gt;

&lt;p&gt;This deduplication happens automatically across concurrent builds on the same runner, whether they're triggered by different developers pushing to the same repository or a &lt;code&gt;docker bake&lt;/code&gt; command building multiple targets at once.&lt;/p&gt;
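
&lt;p&gt;For example, a minimal &lt;code&gt;docker-bake.hcl&lt;/code&gt; (target names and paths here are hypothetical) that builds two services at once looks like this; if both Dockerfiles produce an identical &lt;code&gt;npm ci&lt;/code&gt; layer, BuildKit computes it once and shares the result:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;group "default" {
  targets = ["api", "worker"]
}

target "api" {
  dockerfile = "api/Dockerfile"
  tags       = ["myorg/api:latest"]
}

target "worker" {
  dockerfile = "worker/Dockerfile"
  tags       = ["myorg/worker:latest"]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Running &lt;code&gt;docker buildx bake&lt;/code&gt; builds both targets concurrently on the same BuildKit instance, which is exactly the situation where this deduplication kicks in.&lt;/p&gt;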

&lt;h2&gt;
  
  
  Scaling beyond a single builder with Depot's auto-scaling
&lt;/h2&gt;

&lt;p&gt;While BuildKit's parallelism features are powerful, they're ultimately limited by the resources of the machine they're running on. A single builder can only do so much, even with perfect parallelization. This is where Depot's &lt;a href="https://depot.dev/docs/container-builds/how-to-guides/autoscaling" rel="noopener noreferrer"&gt;autoscaling&lt;/a&gt; feature changes the game.&lt;/p&gt;

&lt;p&gt;By default, all builds for a Depot project run on a single BuildKit instance per architecture. This maximizes cache utilization and deduplication benefits, as all builds share the same cache and can deduplicate work across each other. But when you have resource-intensive builds or high build volume, that single instance can become a bottleneck.&lt;/p&gt;

&lt;p&gt;With auto-scaling enabled, Depot automatically provisions additional BuildKit builders when your concurrent build limit is reached. Each new builder starts with a clone of the main builder's cache, so they're not starting from scratch. This means you can handle sudden spikes in build demand, like when multiple developers push changes simultaneously or when your CI/CD pipeline triggers numerous builds at once.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg4t75ln8m4zw426huxk5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg4t75ln8m4zw426huxk5.png" alt="Auto-scaling behavior" width="800" height="830"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The relationship between BuildKit's parallelism and Depot's auto-scaling is complementary. BuildKit optimizes execution within each builder, finding every opportunity to parallelize and deduplicate work. Depot's auto-scaling ensures you have enough builders to handle your workload, preventing any single builder from becoming overwhelmed.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this means for your builds
&lt;/h2&gt;

&lt;p&gt;The combination of BuildKit's intelligent DAG solver, multi-level parallelism, automatic deduplication, and Depot's autoscaling creates a build system that's both fast and efficient. It's not just about throwing more compute at the problem, but rather being smart about how that compute is used, ensuring that work is never duplicated unnecessarily and that every available CPU cycle is put to good use.&lt;/p&gt;

&lt;p&gt;Every build you run benefits from BuildKit's intelligent parallelism, with the DAG solver constantly finding ways to save time without sacrificing correctness. Depot's auto-scaling takes it further by ensuring you always have enough builders to handle the load. Together, they make "waiting for builds" a thing of the past.&lt;/p&gt;

&lt;p&gt;For more details on configuring build parallelism and auto-scaling in your Depot projects, see our &lt;a href="https://depot.dev/docs/container-builds/build-parallelism" rel="noopener noreferrer"&gt;build parallelism documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Author: &lt;br&gt;
Pedro Guerra&lt;br&gt;
Support Engineer @ Depot&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>performance</category>
      <category>devops</category>
      <category>docker</category>
    </item>
    <item>
      <title>A marketer's hour of hell: What I finally get about slow builds</title>
      <dc:creator>The Charmizard</dc:creator>
      <pubDate>Thu, 23 Oct 2025 18:07:43 +0000</pubDate>
      <link>https://forem.com/depot/a-marketers-hour-of-hell-what-i-finally-get-about-slow-builds-4jg4</link>
      <guid>https://forem.com/depot/a-marketers-hour-of-hell-what-i-finally-get-about-slow-builds-4jg4</guid>
      <description>&lt;p&gt;As a marketer and self-taught developer, I didn't fully grasp the pain of slow builds and broken AF CI/CD pipelines until this week when AWS US-East-1 went down. In my previous positions, I would ask for a change and the engineers would deal with it. From the outside looking in, if it took forever to deploy or they refused to deploy on Friday, or (insert other stereotype about engineers here), it was obviously just an excuse. Right?&lt;/p&gt;

&lt;p&gt;Wrong. And this week, my foot was rightfully shoved in my mouth.&lt;/p&gt;

&lt;p&gt;Here's the thing about those "stereotypes": seasoned developers aren't being dramatic when they take forever to deploy or refuse Friday deploys. They're being smart. And I finally get it now.&lt;/p&gt;

&lt;h2&gt;
  
  
  Life with fast builds
&lt;/h2&gt;

&lt;p&gt;As the head of developer marketing here at Depot, I'm spoiled. If a change needs to be made to the website:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a branch&lt;/li&gt;
&lt;li&gt;Make my changes and test locally&lt;/li&gt;
&lt;li&gt;Create the PR&lt;/li&gt;
&lt;li&gt;Merge and deploy to preview&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I see my changes in preview within 2 minutes, and then deploy to live within another 2 minutes. There's no "don't deploy" engineering culture here at Depot because, well... if something goes wrong, you can fix it within minutes instead of hours.&lt;/p&gt;

&lt;p&gt;That was my exact workflow until this week.&lt;/p&gt;

&lt;h2&gt;
  
  
  When everything slowed down
&lt;/h2&gt;

&lt;p&gt;When AWS US-East-1 went down, like much of the internet, we felt it too. I went to make a simple adjustment for tracking a link in &lt;a href="https://depot.dev/customers/posthog" rel="noopener noreferrer"&gt;PostHog&lt;/a&gt;. Made my changes, created my PR, and bam... lint failure. This happens all the time because I forget to run &lt;code&gt;fmt&lt;/code&gt;, so I quickly ran it and pushed without stopping to check locally, because the last 99 times I've run &lt;code&gt;fmt&lt;/code&gt; it hasn't made a damn bit of difference. (Lesson learned, btw.)&lt;/p&gt;

&lt;p&gt;Waited for checks, merged, and deployed.&lt;/p&gt;

&lt;p&gt;That push broke the docs pages. An essential piece to ANY devtool website.&lt;/p&gt;

&lt;p&gt;I panicked. Broke out in a cold sweat. Fixed the very small error and pushed the fix, but this time the build took almost 6x longer than usual to fail. Made one more small adjustment, pushed again, another 6x longer build that finally succeeded. Then a long slog to get the fix out to production.&lt;/p&gt;

&lt;p&gt;Overall, the docs page was down for close to an hour. With Depot, in my everyday workflow, with AWS US-East-1 firing on all cylinders, this fix would have taken me 10 minutes tops.&lt;/p&gt;

&lt;p&gt;That's 50 minutes our docs weren't serving our customers. 50 minutes of wasted time.&lt;/p&gt;

&lt;p&gt;Now imagine if that were your entire product going down for that long.&lt;/p&gt;

&lt;h2&gt;
  
  
  The realization
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://depot.dev" rel="noopener noreferrer"&gt;Depot&lt;/a&gt; makes our builds so fast that even a temporary slowdown felt painful. Our usual 2-minute builds stretched to 12+ minutes, and suddenly I understood the frustration engineering teams face daily when their builds crawl. If this brief taste of slow builds was this excruciating, I can't imagine dealing with it as your baseline.&lt;/p&gt;

&lt;p&gt;Those engineers who seem overly cautious about deploys? They're not making excuses. They're managing risk in an environment where a simple fix can take an hour instead of minutes. This week, I finally felt that pain myself and it gave me a whole new appreciation for why speed matters.&lt;/p&gt;

&lt;h2&gt;
  
  
  Related posts
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://depot.dev/blog/why-organizations-have-slow-actions-checkout" rel="noopener noreferrer"&gt;Why 98.5% of organizations have slow actions/checkout&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://depot.dev/blog/we-analyzed-66821-github-actions-runs" rel="noopener noreferrer"&gt;We analyzed 66,821 GitHub Actions runs: 9 hidden gems you're missing&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://depot.dev/blog/hidden-cost-of-self-hosting-ci-runners" rel="noopener noreferrer"&gt;The hidden cost of self hosting CI runners&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://depot.dev/blog/monorepos-worth-the-hype" rel="noopener noreferrer"&gt;Monorepos: Worth the hype?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://depot.dev/blog/guide-to-faster-rust-builds-in-ci" rel="noopener noreferrer"&gt;Guide to faster Rust builds in CI&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>marketing</category>
      <category>aws</category>
      <category>webdev</category>
      <category>productivity</category>
    </item>
    <item>
      <title>We analyzed 66,821 GitHub Actions runs: 9 hidden gems you're missing</title>
      <dc:creator>The Charmizard</dc:creator>
      <pubDate>Thu, 02 Oct 2025 14:41:33 +0000</pubDate>
      <link>https://forem.com/depot/we-analyzed-66821-github-actions-runs-9-hidden-gems-youre-missing-5al8</link>
      <guid>https://forem.com/depot/we-analyzed-66821-github-actions-runs-9-hidden-gems-youre-missing-5al8</guid>
      <description>&lt;p&gt;Do you stick to official GitHub Actions in your workflows? Makes sense. Why risk your CI pipeline on some random third-party action?&lt;/p&gt;

&lt;p&gt;Turns out, the biggest CI wins could be hiding in the actions marketplace, where other developers have solved the exact problems you're facing.&lt;/p&gt;

&lt;p&gt;We analyzed 66,821 workflow runs across all organizations using &lt;a href="https://depot.dev/docs/github-actions/runner-types" rel="noopener noreferrer"&gt;Depot's runners&lt;/a&gt; to uncover the third-party GitHub Actions ecosystem. The results revealed some hidden gems that could transform your workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scope of third-party adoption
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;189&lt;/strong&gt; third-party actions with multi-organization adoption (we filtered out single-organization actions to focus on broadly useful tools)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;63&lt;/strong&gt; authors/organizations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As you’d expect, the actions run the gamut from specialized tools for specific languages to general productivity enhancers.&lt;/p&gt;

&lt;h2&gt;
  
  
  9 third-party actions that are worth your time
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://depot.dev/customers" rel="noopener noreferrer"&gt;Depot customers&lt;/a&gt; are concerned about CI speed and naturally have adopted actions to speed up performance. Here are 9 actions with relatively low adoption that deliver significant value.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Build step optimization
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/dorny/paths-filter" rel="noopener noreferrer"&gt;dorny/paths-filter&lt;/a&gt; - &lt;em&gt;Used by 11% of organizations&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I think this is super cool. This action detects which files changed in a PR and sets outputs you can use to conditionally run jobs. It’s great when you want to run individual jobs or steps only when certain files change.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dorny/paths-filter@v3&lt;/span&gt;
  &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;changes&lt;/span&gt;
  &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;filters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
      &lt;span class="s"&gt;backend:&lt;/span&gt;
        &lt;span class="s"&gt;- 'src/api/**'&lt;/span&gt;
      &lt;span class="s"&gt;frontend:&lt;/span&gt;
        &lt;span class="s"&gt;- 'src/web/**'&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Run backend tests&lt;/span&gt;
  &lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;steps.changes.outputs.backend == 'true'&lt;/span&gt;
  &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pnpm run test:api&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. Fast Python package management
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/astral-sh/setup-uv" rel="noopener noreferrer"&gt;astral-sh/setup-uv&lt;/a&gt; - &lt;em&gt;Used by 7% of organizations&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Installing Python dependencies can take a really long time. Astral’s uv comes to the rescue.&lt;/p&gt;

&lt;p&gt;For my most recent Python project, I’ve switched over to uv and am getting about 6x faster installs. It’s great. By default, the &lt;code&gt;astral-sh/setup-uv&lt;/code&gt; action enables caching, which makes things even better.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;astral-sh/setup-uv@v6&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install dependencies&lt;/span&gt;
  &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;uv pip install -r requirements.txt&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3. Compilation caching
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/Mozilla-Actions/sccache-action" rel="noopener noreferrer"&gt;mozilla-actions/sccache-action&lt;/a&gt; - &lt;em&gt;Used by 6% of organizations&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We're always trying to speed up compiles. This action speeds up compilation for Rust, C++, and other compiled languages by caching compilation results across CI runs. Depot runners are already set up to &lt;a href="https://dev.to/blog/sccache-in-github-actions#enter-sccache"&gt;use &lt;code&gt;sccache&lt;/code&gt; to speed up builds&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For rocker, my Rust rewrite of docker, using sccache cut the build and test time by half.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mozilla-actions/sccache-action@v0.0.9&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build project&lt;/span&gt;
  &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;RUSTC_WRAPPER=sccache cargo build --release&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  4. System package caching
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/awalsh128/cache-apt-pkgs-action" rel="noopener noreferrer"&gt;awalsh128/cache-apt-pkgs-action&lt;/a&gt; - &lt;em&gt;Used by 2% of organizations&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In CI, installing packages from &lt;code&gt;apt&lt;/code&gt; can take a long time. This action caches packages, eliminating repeated downloads and installations. There are &lt;a href="https://github.com/awalsh128/cache-apt-pkgs-action/tree/master?tab=readme-ov-file#caveats" rel="noopener noreferrer"&gt;caveats&lt;/a&gt; if your package has pre- or post-install scripts, but it's well worth a try.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;awalsh128/cache-apt-pkgs-action@v1&lt;/span&gt;
  &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;packages&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;libssl-dev&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  5. Robust CI pipelines
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/nick-fields/retry" rel="noopener noreferrer"&gt;nick-fields/retry&lt;/a&gt; - &lt;em&gt;Used by 4% of organizations&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;As much as we engineers don’t want it to be true, it’s not uncommon for tests to be flaky. &lt;code&gt;nick-fields/retry&lt;/code&gt; can automatically retry failed steps with configurable backoff. I’m not exactly sure how to categorize this one; feels like it removes some of the need to restart CI runs, so, performance? In any case, it might be controversial, but it’s definitely pragmatic.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nick-fields/retry@v3&lt;/span&gt;
  &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;timeout_minutes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
    &lt;span class="na"&gt;max_attempts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pnpm run integration-tests&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  6. Better PR feedback
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/marocchino/sticky-pull-request-comment" rel="noopener noreferrer"&gt;marocchino/sticky-pull-request-comment&lt;/a&gt; - &lt;em&gt;Used by 4% of organizations&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This action creates and updates a single sticky comment on PRs instead of posting a new one on every run. That helps reviewers get context directly from the PR instead of digging through CI logs. This example shows how to post the contents of a file as a comment on the PR.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;marocchino/sticky-pull-request-comment@v2&lt;/span&gt;
  &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;coverage-results.md&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  7. Beautiful test result reports
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/dorny/test-reporter" rel="noopener noreferrer"&gt;dorny/test-reporter&lt;/a&gt; - &lt;em&gt;Used by 3% of organizations&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Another nice action by &lt;code&gt;dorny&lt;/code&gt;. With this, test failures are immediately visible in PR checks with detailed context. For me, it is pretty painful to search through &lt;code&gt;go test&lt;/code&gt; logs for the word &lt;code&gt;fail&lt;/code&gt;, since too many test names and log lines contain it. Here is a way to get a nice simplified view. I should note that this supports more than Go tests!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Run tests&lt;/span&gt;
  &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;go test -json ./... &amp;gt; testresults.json&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Test Report&lt;/span&gt;
  &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dorny/test-reporter@v2&lt;/span&gt;
  &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Go Tests&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;testresults.json&lt;/span&gt;
    &lt;span class="na"&gt;reporter&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;golang-json&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  8. Universal binary installer
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/taiki-e/install-action" rel="noopener noreferrer"&gt;taiki-e/install-action&lt;/a&gt; - &lt;em&gt;Used by 3% of organizations&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This simplifies and speeds up getting the right tools into the CI environment. It installs precompiled binaries from GitHub releases, with automatic caching and platform detection. The GitHub repo includes a &lt;a href="https://github.com/taiki-e/install-action/blob/main/TOOLS.md" rel="noopener noreferrer"&gt;list&lt;/a&gt; of all the tools it supports out of the box. Really nice and simple.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;taiki-e/install-action@v2&lt;/span&gt;
  &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;tool&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cargo-nextest,just,cargo-hack&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  9. Enforce PR standards
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/amannn/action-semantic-pull-request" rel="noopener noreferrer"&gt;amannn/action-semantic-pull-request&lt;/a&gt; - &lt;em&gt;Used by 3% of organizations&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I mean, I like and use conventional commits, but I figure your commit message is your own. If you want to enforce semantic PR titles for automated changelogs and a cleaner commit history, this action does it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;amannn/action-semantic-pull-request@v5&lt;/span&gt;
  &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;types&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
      &lt;span class="s"&gt;fix&lt;/span&gt;
      &lt;span class="s"&gt;feat&lt;/span&gt;
      &lt;span class="s"&gt;docs&lt;/span&gt;
      &lt;span class="s"&gt;ci&lt;/span&gt;
      &lt;span class="s"&gt;chore&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Honorable mentions
&lt;/h2&gt;

&lt;p&gt;The downside of having a really interesting dataset is that you have to pick only a few. I figured I’d share a few more metrics as bellwethers for our industry.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI and automation&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;anthropics/claude-code-action&lt;/code&gt; - AI-powered code review (4% adoption)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Testing and quality&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;chromaui/action&lt;/code&gt; - Visual regression testing (3% adoption)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;cypress-io/github-action&lt;/code&gt; - E2E testing with zero config (3% adoption)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;codecov/codecov-action&lt;/code&gt; - Coverage reporting (7% adoption)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Infrastructure&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;pulumi/actions&lt;/code&gt; - Infrastructure as code deployments (3% adoption)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;hashicorp/setup-terraform&lt;/code&gt; - Terraform workflows (4% adoption)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Security and secrets&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;dopplerhq/cli-action&lt;/code&gt; - Centralized secrets management (3% adoption)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;1password/load-secrets-action&lt;/code&gt; - 1Password integration (&amp;lt;1% adoption)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Language-specific&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;pnpm/action-setup&lt;/code&gt; - Fast Node.js package management (17% adoption)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ruby/setup-ruby&lt;/code&gt; - Ruby environment setup (5% adoption)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;dtolnay/rust-toolchain&lt;/code&gt; - Rust toolchain management (6% adoption)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;oven-sh/setup-bun&lt;/code&gt; - Bun JavaScript runtime (6% adoption)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The bottom line
&lt;/h2&gt;

&lt;p&gt;The GitHub Actions third-party ecosystem has matured into a sophisticated toolkit where specialized solutions often outperform general-purpose alternatives. Maybe it's time to see what you're missing?&lt;/p&gt;

</description>
      <category>github</category>
      <category>devops</category>
      <category>architecture</category>
      <category>programming</category>
    </item>
    <item>
      <title>Git is really cool, actually</title>
      <dc:creator>The Charmizard</dc:creator>
      <pubDate>Sat, 27 Sep 2025 00:17:00 +0000</pubDate>
      <link>https://forem.com/depot/git-is-really-cool-actually-3b59</link>
      <guid>https://forem.com/depot/git-is-really-cool-actually-3b59</guid>
      <description>&lt;p&gt;I have a favorite Git protocol operation. It's fetch.&lt;/p&gt;

&lt;p&gt;Before you judge me for having strong opinions about wire protocols, hear me out: fetch reveals the best things about Git's design. The conversation between your client and the server is simple on the surface: the client says "I have these commits" and the server responds with exactly what's missing. Nothing wasted or redundant. Just a stateless request-response exchange that manages to sync distributed repositories without either side maintaining complex session state. Let's break down how it works.&lt;/p&gt;

&lt;h2&gt;
  
  
  git's smart protocol is really cool, actually
&lt;/h2&gt;

&lt;p&gt;Modern Git clients and servers use a "&lt;a href="https://git-scm.com/docs/http-protocol" rel="noopener noreferrer"&gt;smart protocol&lt;/a&gt;". Essentially, it lets the Git client negotiate with the server over which objects it needs. On the surface, the Git smart protocol is pretty simple:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Client -&amp;gt; Server [what can you do?]&lt;/li&gt;
&lt;li&gt;Server -&amp;gt; Client [can list references, grab commits, and push up commits]&lt;/li&gt;
&lt;li&gt;Client -&amp;gt; Server [please list refs matching x filters, or send me some objects I'm missing, or take these new commits, or …]&lt;/li&gt;
&lt;li&gt;Server -&amp;gt; Client [here's what you need]&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;There are a couple things to note here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Git's smart protocol is request -&amp;gt; response. The client sends a single request to the server, and the server statelessly gives a response. This design makes it much easier to scale servers and reduces the amount of complexity (and bugs :P) of Git server implementations. And new Git features (like partial clones) can be added without worrying as much about backwards compatibility.&lt;/li&gt;
&lt;li&gt;The protocol works over multiple different transports: HTTPS, SSH, or even Git's own (legacy) protocol.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's the high level picture. But things get interesting when we dive into an actual operation. Let's take a closer look at my favorite Git protocol operation (if I dare to have a favorite): fetch.&lt;/p&gt;

&lt;h3&gt;
  
  
  example: fetch
&lt;/h3&gt;

&lt;p&gt;Fetch, of course, follows the same request/response pattern:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Client -&amp;gt; Server: "hey, I'd like to fetch [commit/branch]. I have these commits, so I don't need them or any commits before them. I'm done specifying what I want."&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here's an example of a real request. You can see something similar by running &lt;code&gt;GIT_TRACE_PACKET=1 git fetch origin main&lt;/code&gt; on a repo with some data to fetch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# 1. The server advertises capabilities&lt;/span&gt;
&lt;span class="c"&gt;# 2. The client asks what commits are available&lt;/span&gt;

&lt;span class="c"&gt;# 3. The server responds with commits and their corresponding branches&lt;/span&gt;
fetch&amp;lt; 8f2037f2b3df3d34baec05df27f41f9c4fe601cd refs/heads/main
fetch&amp;lt; 3a54ae1da97c7c4d404829232bc0a2b824c78d13 refs/tags/v0.0.1
...
fetch&amp;lt; 0000 &lt;span class="c"&gt;# we're done, server&lt;/span&gt;

&lt;span class="c"&gt;# 4. Client requests objects (wants/haves + pack optimizations)&lt;/span&gt;
fetch&amp;gt; &lt;span class="nb"&gt;command&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;fetch
fetch&amp;gt; &lt;span class="nv"&gt;agent&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;git/2.51.0-Darwin
fetch&amp;gt; object-format&lt;span class="o"&gt;=&lt;/span&gt;sha1
fetch&amp;gt; 0001
fetch&amp;gt; thin-pack
fetch&amp;gt; ofs-delta
&lt;span class="c"&gt;# Ask for main's tip; tell the server what we already have&lt;/span&gt;
fetch&amp;gt; want 8f2037f2b3df3d34baec05df27f41f9c4fe601cd
fetch&amp;gt; have 56cf9cd609ce7f39bc9bfeacea14cee05a137164
fetch&amp;gt; have 733ea9d857c90e23369942515c341ef56f16c680
fetch&amp;gt; have ed4b55cfbcd73e853c782d48346d929898dcf54a
... many more haves
fetch&amp;gt; 0000 &lt;span class="c"&gt;# we're done, please respond&lt;/span&gt;

&lt;span class="c"&gt;# 5. Server acks the common base(s), signals readiness&lt;/span&gt;
fetch&amp;lt; acknowledgments
fetch&amp;lt; ACK 56cf9cd609ce7f39bc9bfeacea14cee05a137164
fetch&amp;lt; ACK 733ea9d857c90e23369942515c341ef56f16c680
...
fetch&amp;lt; ready
fetch&amp;lt; 0001

&lt;span class="c"&gt;# Packfile over sideband (progress + data multiplexed)&lt;/span&gt;
fetch&amp;lt; packfile
sideband&amp;lt; &lt;span class="se"&gt;\2&lt;/span&gt;Enumerating objects: 16, &lt;span class="k"&gt;done&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
sideband&amp;lt; &lt;span class="se"&gt;\2&lt;/span&gt;Compressing objects: 100% &lt;span class="o"&gt;(&lt;/span&gt;9/9&lt;span class="o"&gt;)&lt;/span&gt;, &lt;span class="k"&gt;done&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
sideband&amp;lt; PACK ...                  &lt;span class="c"&gt;# (binary packfile)&lt;/span&gt;
sideband&amp;lt; &lt;span class="se"&gt;\2&lt;/span&gt;Total 16 &lt;span class="o"&gt;(&lt;/span&gt;delta 7&lt;span class="o"&gt;)&lt;/span&gt;, reused 14 &lt;span class="o"&gt;(&lt;/span&gt;delta 7&lt;span class="o"&gt;)&lt;/span&gt;, pack-reused 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
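&lt;p&gt;Each line in the trace above is framed as a "pkt-line": a 4-hex-digit length prefix (which counts itself) followed by the payload, with special packets like &lt;code&gt;0000&lt;/code&gt; (flush) and &lt;code&gt;0001&lt;/code&gt; (delimiter) marking section boundaries. Here's a minimal Python sketch of that framing (illustrative only, not a complete implementation):&lt;/p&gt;

```python
def pkt_line(payload: str) -> str:
    """Frame a payload as a Git pkt-line: 4 hex digits giving the
    total length (prefix included), then the payload itself."""
    data = payload.encode()
    return f"{len(data) + 4:04x}" + payload

FLUSH_PKT = "0000"  # special packet: "I'm done with this section"

# The client's first request line from the trace above:
print(pkt_line("command=fetch\n"))  # 0012command=fetch\n
```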



&lt;p&gt;This is pretty sick. The server only sends exactly the data needed, and nothing more. Sync is a hard problem, and this solution is elegant from the client's side.&lt;/p&gt;

&lt;p&gt;The main downside is that crawling the commit graph to figure out what needs to be sent is &lt;strong&gt;expensive&lt;/strong&gt; for the server. Lots of very smart folks have done a lot of work to optimize this (&lt;a href="https://gitlab.com/gitlab-org/gitaly/-/blob/a568252daff606a31d25915ab585e6f49f1238f6/doc/design_pack_objects_cache.md" rel="noopener noreferrer"&gt;GitLab has a cool doc on some of the work they did&lt;/a&gt;), but the first graph crawl of large repos will almost always be painful.&lt;/p&gt;

&lt;p&gt;If you were looking closely, you might have noticed this specific line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;sideband&amp;lt; PACK ...                  &lt;span class="c"&gt;# (binary packfile)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the server figures out what objects you're missing, how does it actually send them? That's where packfiles come in.&lt;/p&gt;

&lt;h3&gt;
  
  
  packfiles are really cool, actually
&lt;/h3&gt;

&lt;p&gt;Git has four kinds of objects:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Blobs&lt;/strong&gt; (raw file contents)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Trees&lt;/strong&gt; (directories pointing to blobs and other trees)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Commits&lt;/strong&gt; (metadata + pointers to a root tree and parent commits)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tags&lt;/strong&gt; (human-readable labels pointing to any object)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Put those together, and you get the full history of a project.&lt;/p&gt;

&lt;p&gt;Think of these objects like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Tag &lt;span class="o"&gt;(&lt;/span&gt;v1.0.0&lt;span class="o"&gt;)&lt;/span&gt;
 └── Commit &lt;span class="o"&gt;(&lt;/span&gt;abc123&lt;span class="o"&gt;)&lt;/span&gt;
     ├── Author:   Billy &amp;lt;billy@example.com&amp;gt;
     ├── Committer: Billy &amp;lt;billy@example.com&amp;gt;
     ├── Message:  &lt;span class="s2"&gt;"Add src/util.c"&lt;/span&gt;
     ├── Parent:   &lt;span class="o"&gt;(&lt;/span&gt;789abc&lt;span class="o"&gt;)&lt;/span&gt;
     └── Tree &lt;span class="o"&gt;(&lt;/span&gt;def456&lt;span class="o"&gt;)&lt;/span&gt;
         ├── Blob README.md   &lt;span class="o"&gt;(&lt;/span&gt;sha1:111111&lt;span class="o"&gt;)&lt;/span&gt;
         ├── Blob main.c      &lt;span class="o"&gt;(&lt;/span&gt;sha1:222222&lt;span class="o"&gt;)&lt;/span&gt;
         └── Tree src/ &lt;span class="o"&gt;(&lt;/span&gt;ghi789&lt;span class="o"&gt;)&lt;/span&gt;
             └── Blob util.c  &lt;span class="o"&gt;(&lt;/span&gt;sha1:333333&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
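&lt;p&gt;These object IDs are content addresses: a blob's ID is just the SHA-1 of a small header plus the raw bytes, which is what lets Git deduplicate identical content anywhere in history. A quick Python sketch of the computation (equivalent in spirit to &lt;code&gt;git hash-object&lt;/code&gt;):&lt;/p&gt;

```python
import hashlib

def blob_id(content: bytes) -> str:
    # Git hashes "blob <size>\0" followed by the raw content
    header = f"blob {len(content)}\0".encode()
    return hashlib.sha1(header + content).hexdigest()

# Identical content always yields the identical object ID,
# no matter which commit or directory it appears in.
print(blob_id(b"Hello world\n"))
```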



&lt;p&gt;Object files are normally just loose files in &lt;code&gt;.git/objects/&lt;/code&gt;. Sending all those objects over a network individually would be expensive and inefficient. Packfiles are Git's solution: they're like compressed tarballs for Git objects, but smarter. They compress, deduplicate, and send diffs instead of whole files.&lt;/p&gt;

&lt;p&gt;Here's a toy example. Suppose you had a blob with the contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Hello world
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And then you commit a change to make it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Hello universe
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Git can send a delta that says something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;reuse the first 6 characters ("Hello ") of the old blob
insert "universe"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Deltas also stack on top of each other (a delta against a delta), by default up to a chain depth of 50 before Git just stores a fresh full object. When you have hundreds or thousands of commits, this can save a ton of network bandwidth.&lt;/p&gt;
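&lt;p&gt;Conceptually, a delta is just a sequence of copy and insert instructions applied against a base object. Here's a toy Python sketch of that idea (the real pack delta encoding is a compact binary format, but the operations are the same):&lt;/p&gt;

```python
def apply_delta(base: bytes, ops) -> bytes:
    """Rebuild a target object from a base plus copy/insert ops."""
    out = bytearray()
    for op in ops:
        if op[0] == "copy":          # ("copy", offset, length)
            _, offset, length = op
            out += base[offset:offset + length]
        else:                        # ("insert", data)
            out += op[1]
    return bytes(out)

# The "Hello world" -> "Hello universe" example from above:
old = b"Hello world"
new = apply_delta(old, [("copy", 0, 6), ("insert", b"universe")])
print(new)  # b'Hello universe'
```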

&lt;p&gt;Packfiles also come with pack index files that map object IDs to their byte offsets inside the (sometimes very large) binary pack. Instead of scanning through a multi-gigabyte packfile, Git uses the index to find exactly what it needs. For example, when reversing deltas to reconstruct the final file, the index points Git to each delta in the chain.&lt;/p&gt;
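&lt;p&gt;The index is essentially a sorted mapping from object ID to byte offset, so a lookup is a binary search rather than a linear scan. A simplified sketch of that idea (real &lt;code&gt;.idx&lt;/code&gt; files add a fan-out table and checksums; the object IDs and offsets here are made up for illustration):&lt;/p&gt;

```python
import bisect

class PackIndex:
    """Toy pack index: sorted (oid, offset) pairs, binary-searched."""
    def __init__(self, entries):
        # entries: iterable of (hex_oid, byte_offset) pairs
        self.oids, self.offsets = zip(*sorted(entries))

    def offset_of(self, oid: str) -> int:
        i = bisect.bisect_left(self.oids, oid)
        if i < len(self.oids) and self.oids[i] == oid:
            return self.offsets[i]
        raise KeyError(oid)

idx = PackIndex([("8f2037f2", 3112), ("3a54ae1d", 12), ("56cf9cd6", 980)])
print(idx.offset_of("56cf9cd6"))  # 980
```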

&lt;p&gt;Put it all together, and you can clone the Linux kernel, with over a million commits and gigabytes of history, in a single highly compressed packfile.&lt;/p&gt;

&lt;h2&gt;
  
  
  Git's protocol sorta sucks
&lt;/h2&gt;

&lt;p&gt;Before I give any critiques, I can't stress enough that I think Git's protocol is genuinely beautiful. Git isn't trying to offer a complete development workflow; it's trying to offer a distributed version control system (VCS). With that goal, it's more than succeeded. It's the most popular VCS by far, and for good reason.&lt;/p&gt;

&lt;p&gt;However, we can also be honest and say that software development has changed since &lt;a href="https://git-scm.com/book/en/v2/Getting-Started-A-Short-History-of-Git" rel="noopener noreferrer"&gt;2005&lt;/a&gt;. Think of the rest of this section as a wish list of what I'd want from a future VCS.&lt;/p&gt;

&lt;h3&gt;
  
  
  Git &amp;amp; modern development
&lt;/h3&gt;

&lt;p&gt;At its core, Git only has two concepts: objects and refs. That simplicity is excellent for a content addressable storage system, but insufficient for modern software development. Reviews, pull requests, issues, and CI are all a major part of modern development, yet none of them exist in the protocol itself. We end up naming everything by branch names, then layering meaning on top. While the client speaks "blobs/trees/commits," the rest of the world speaks "did CI pass, and did two reviewers with CODEOWNERS approval sign off?"&lt;/p&gt;

&lt;p&gt;Authorization has a smaller but still notable gap. The protocol can say "you can fetch" and "you can push to these refs," and that's about it. Real policy: required reviews, linear history, protected paths, merge queues, and status checks all live outside the protocol. When a push is rejected, the wire just says "no." Was it a missing approval? A failing check? A merge queue gate? A wrong trailer? You have to click around a UI to learn the actual reason. We've built sophisticated policy engines around Git, but the client gets a shrug.&lt;/p&gt;

&lt;h3&gt;
  
  
  Out-of-band CI workflow
&lt;/h3&gt;

&lt;p&gt;CI exposes a different hole. There's no first class handshake like "prove this commit is green before I accept it" or "block until my queued build finishes." Instead, hosts stitch it together with webhooks and REST calls: you push, runners start, statuses get attached, dashboards turn red or green. It works, but it's out of band. The transport doesn't know your push is gated on a pipeline, so the workflow devolves into: push -&amp;gt; alt-tab -&amp;gt; browser -&amp;gt; refresh -&amp;gt; repeat.&lt;/p&gt;

&lt;p&gt;The protocol stays simple and stable; everything that makes software development feel modern happens off to the side.&lt;/p&gt;

&lt;h2&gt;
  
  
  FIN
&lt;/h2&gt;

&lt;p&gt;At the end of the day, Git’s protocol is still &lt;strong&gt;really cool&lt;/strong&gt; 😎. The whole design is built on a handful of simple ideas: content addressable objects, stateless request/response exchanges, and packfiles to make distribution efficient. Those ingredients helped to create the most popular VCS on the planet. The protocol is simple, extensible, and elegant.&lt;/p&gt;

&lt;p&gt;But it’s also clear that Git, by itself, isn’t the whole story anymore. Almost everything that makes modern development “modern” happens outside the protocol. Git’s wire format just shrugs and says: “objects and refs.” Hosting providers, plugins, and extensions do the rest. That separation is both a blessing (Git stays stable and compatible) and a curse (every workflow feels bolted on).&lt;/p&gt;

&lt;p&gt;So if you care about sync or VCS protocols, Git is both an inspiration and a challenge. Maybe the future is Git plus layers. Maybe it’s a brand new VCS that takes CI, reviews, and policy as first-class citizens. Either way, Git is here to stay for a while, and it's for a good reason.&lt;/p&gt;

&lt;h2&gt;
  
  
  Related posts
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://depot.dev/blog/github-actions-runner-architecture-part-1-the-listener" rel="noopener noreferrer"&gt;GitHub Actions Runner architecture: The listener&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://depot.dev/blog/what-is-a-tar-file" rel="noopener noreferrer"&gt;What is a tar file?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://depot.dev/blog/booting-containers-faster-with-estargz" rel="noopener noreferrer"&gt;Pulling containers faster with eStargz&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://depot.dev/blog/buildkit-in-depth" rel="noopener noreferrer"&gt;BuildKit in depth: Docker's build engine explained&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Written by Billy @ Depot&lt;/p&gt;

</description>
      <category>git</category>
      <category>productivity</category>
      <category>devops</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Collaborating with Claude on docs</title>
      <dc:creator>The Charmizard</dc:creator>
      <pubDate>Fri, 26 Sep 2025 13:20:38 +0000</pubDate>
      <link>https://forem.com/depot/collaborating-with-claude-on-docs-1p37</link>
      <guid>https://forem.com/depot/collaborating-with-claude-on-docs-1p37</guid>
      <description>&lt;p&gt;You've built a new feature, tested it, and are ready to finally ship it. You check in with the team to make sure everything is good to go before launching it for customers and then someone asks “Do we have docs for this?”&lt;/p&gt;

&lt;p&gt;Documentation often becomes an afterthought in software development, so I want to explore using a Claude agent to help me identify documentation gaps and generate initial drafts for the missing documentation.&lt;/p&gt;

&lt;p&gt;I’m using the Depot Claude remote sandboxes, because I think being able to return to my session and share it with the team will be useful. For more information on these, check out this &lt;a href="https://depot.dev/docs/agents/claude-code/quickstart" rel="noopener noreferrer"&gt;quickstart guide&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The workflow
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Understanding the change
&lt;/h3&gt;

&lt;p&gt;I want to start by making sure that Claude understands the change being made. For the simplicity of this example, I’ll choose a small change represented by a single commit to try out the workflow. I’m going to start a Depot Claude session and provide my own session id so it’s easy to reference again when I next resume it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;depot claude &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--org&lt;/span&gt; ORG_ID &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--repository&lt;/span&gt; https://github.com/depot/cli.git &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--branch&lt;/span&gt; main &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--session-id&lt;/span&gt; doc-hiding-summaries &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="s2"&gt;"Can you explain the changes from this commit: 4febf18142a6bfafaaad7258b9c34f2cb251cb07"&lt;/span&gt;


✓ Claude sandbox started!
  Session ID: doc-hiding-summaries
  Link: https://depot.dev/orgs/ORG_ID/claude/doc-hiding-summaries
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Going to the provided link, I can see that Claude has a good understanding of this change. For larger changes, you might need more back-and-forth to ensure alignment, but this single commit was straightforward enough.&lt;/p&gt;

&lt;h3&gt;
  
  
  Check for documentation gaps
&lt;/h3&gt;

&lt;p&gt;From here, I basically want to know whether this is already documented. If so, I may want to expand the existing documentation to cover this new change; if not, I want to add documentation for the feature in general. So we can resume the session and ask it to pull down the repo that hosts our docs and check for us.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;depot claude &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--org&lt;/span&gt; ORG_ID &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--resume&lt;/span&gt; doc-hiding-summaries &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="s2"&gt;"I want to check if there is already documentation for this feature in our docs repo. &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
  Can you pull down the docs repo's main branch and check? &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
  The repository is here: https://github.com/depot/docs.git"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It looks like we need to make sure the agent has permission to clone the repository. I’ll ask it to do this again but this time provide the &lt;code&gt;--allowedTools&lt;/code&gt; flag to specify the tools it’ll be allowed to use so that we don’t run into access issues.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;depot claude &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--org&lt;/span&gt; ORG_ID &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--resume&lt;/span&gt; doc-hiding-summaries &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--allowedTools&lt;/span&gt; &lt;span class="s2"&gt;"Bash(git:*)"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="s2"&gt;"I want to check if there is already documentation for this feature in our docs repo. &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
  Can you pull down the docs repo's main branch and check? &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
  The repository is here: https://github.com/depot/docs.git"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;--allowedTools "Bash(git:*)"&lt;/code&gt; flag grants Claude permission to use git commands, which is safer than bypassing all permission checks.&lt;/p&gt;

&lt;p&gt;Based on the output, I can see that it successfully cloned the docs repo and checked to see if this feature was already documented (it was not). Great, so we should probably add some new documentation about this environment variable; not just this particular change, but for its usage in general.&lt;/p&gt;

&lt;h3&gt;
  
  
  Generate a draft
&lt;/h3&gt;

&lt;p&gt;At this point, I think we know enough to start generating some documentation. I’m going to ask Claude to look holistically at how that environment variable is used to get a full understanding of it. I then want it to pick a place in the docs to add this documentation and pitch the change it would make. I’m still pretty hands-on when it comes to working with AI, as it’s still a newer workflow for me, so I like to see the plan before we make changes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;depot claude &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--org&lt;/span&gt; ORG_ID &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--resume&lt;/span&gt; doc-hiding-summaries &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="s2"&gt;"Since there is no documentation for this environment variable at all, can you look at more generally how it's used and what it can do? &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
  Then I'd like you to find a good place in the docs repo where you think it would make the most sense to add this documentation. &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
  And finally pitch the documentation change you think should be made based on your findings."&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I can see that Claude was able to successfully get a more holistic picture of how to use this environment variable, and it was able to find a good place for it. I disagree a bit with where Claude placed it on the page, but overall that seems mostly good to me. I’m going to provide a bit of feedback before doing anything else.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;depot claude &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--org&lt;/span&gt; ORG_ID &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--resume&lt;/span&gt; doc-hiding-summaries &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="s2"&gt;"I like what you've done. &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
  Instead of an Environment Variables section after authentication, can you create an FAQ section at the bottom of the page where you generally answer how to minimize the build output and rewrite this in that format"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This seems reasonable to me as a starting point and now I want to get feedback from others.&lt;/p&gt;

&lt;h3&gt;
  
  
  Back into the codebase
&lt;/h3&gt;

&lt;p&gt;At this point, I will ask Claude to check out a branch, commit this change, and then create a draft pull request to get this back into the codebase.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;depot claude &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--org&lt;/span&gt; ORG_ID &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--resume&lt;/span&gt; doc-hiding-summaries &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--allowedTools&lt;/span&gt; &lt;span class="s2"&gt;"Edit"&lt;/span&gt; &lt;span class="s2"&gt;"Bash(git:*)"&lt;/span&gt; &lt;span class="s2"&gt;"Bash(gh pr:*)"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="s2"&gt;"Can you create a branch, commit this change, and then create a pull request in the docs repo with this. &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
  I'd like you to make sure the link to this sandbox is in the description: https://depot.dev/orgs/ORG_ID/claude/doc-hiding-summaries in case anyone needs further context."&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I have a pull request now, which is great, and I can see the changes. This is also the point where CI checks run, and I can see that there are some formatting and lint errors. I’ll ask the agent to use our &lt;code&gt;package.json&lt;/code&gt; scripts to format these changes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;depot claude &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--org&lt;/span&gt; ORG_ID &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--resume&lt;/span&gt; doc-hiding-summaries &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--allowedTools&lt;/span&gt; &lt;span class="s2"&gt;"Edit"&lt;/span&gt; &lt;span class="s2"&gt;"Bash(git:*)"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="s2"&gt;"Can you check the package.json and run the formatter"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I officially have a PR with these changes, and it's passing all the checks, ready for iteration and formal review.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fthtsv03305jr1h2wizlr.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fthtsv03305jr1h2wizlr.webp" alt="A GitHub interface showing a draft PR" width="512" height="476"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqiqjutvux9wp5rfx2mt6.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqiqjutvux9wp5rfx2mt6.webp" alt="A GitHub interface showing the file changes from a draft pull request" width="512" height="337"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now it's time to work with the team to make sure the docs are right and follow our normal standards.&lt;/p&gt;

&lt;h3&gt;
  
  
  Team collaboration and context sharing
&lt;/h3&gt;

&lt;p&gt;This is where the collaborative aspect of Depot sandboxes becomes really valuable. When a teammate needs context about the code I'm documenting, they can access the same Claude session via the shared URL. All of the context (the original change analysis, repository exploration, and reasoning behind documentation decisions) is preserved and easily viewable by others.&lt;/p&gt;

&lt;p&gt;For example, if a reviewer questions why we placed the documentation in the FAQ section rather than a dedicated environment variables page, they can ask Claude directly in the same session. Claude has full context of the decision-making process and can explain the reasoning or help iterate on alternatives.&lt;/p&gt;

&lt;p&gt;This collaborative aspect eliminates the typical back-and-forth of "Why did you document it this way?" or "Can you explain what this feature actually does?"&lt;/p&gt;
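&lt;p&gt;For example, a reviewer can resume the named session from their own terminal using the same flags shown earlier (ORG_ID and the session name are the placeholders from the examples above):&lt;/p&gt;

```shell
# Resume the shared session by name and ask a follow-up question in context.
depot claude \
  --org ORG_ID \
  --resume doc-hiding-summaries \
  "Why is this documented in the FAQ rather than a dedicated page?"
```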

&lt;h2&gt;
  
  
  A more complex experiment
&lt;/h2&gt;

&lt;p&gt;The first workflow I tried used a very tiny feature that was entirely undocumented. I want to try this out on a larger feature that was built with many more changes and iterations and has some partial documentation. I tend to stay hands-on as I dip my toes into incorporating agents into my workflow, but if you want to experiment with freer rein and more complex changes, you could try a prompt like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;depot claude &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--org&lt;/span&gt; ORG_ID &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--repository&lt;/span&gt; https://github.com/depot/cli.git &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--branch&lt;/span&gt; main &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--session-id&lt;/span&gt; docs-claude-sandboxes &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--allowedTools&lt;/span&gt; &lt;span class="s2"&gt;"Edit"&lt;/span&gt; &lt;span class="s2"&gt;"Bash(git:*)"&lt;/span&gt; &lt;span class="s2"&gt;"Bash(gh pr:*)"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="s2"&gt;"We have recently added changes to the depot claude command. &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
  It now defaults to starting claude in a remote sandbox rather than using a local claude on your local machine. &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
  Can you validate that we have docs for this in the docs repo: https://github.com/depot/docs.git and then add or update them and generally make sure they are up to date. &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
  I want to show examples both with how to start the depot claude locally and remotely. &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
  Then please create a branch and commit this change and then create a draft pull request so that i can review it"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I tried it, and this ended up being faster than manually combing through the code and writing these docs myself. I still have tweaks I want to make, but it's a great starting point to iterate from. The hardest part of most of these workflows was making sure I had the correct &lt;code&gt;--allowedTools&lt;/code&gt; configured.&lt;/p&gt;

&lt;h2&gt;
  
  
  When this workflow shines
&lt;/h2&gt;

&lt;p&gt;Based on this experiment, I've found this approach works particularly well for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Complex features spanning multiple files&lt;/strong&gt;: Claude can analyze the entire codebase to understand feature scope and usage patterns.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cross-repository documentation&lt;/strong&gt;: When your docs live in a separate repo from your code, Claude can work across both seamlessly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Post-development documentation&lt;/strong&gt;: Great for that common scenario where features are built first and docs come later.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Limitations
&lt;/h2&gt;

&lt;p&gt;Currently this interaction is limited to the CLI, so there's no way to kick off these remote sessions from the UI; that means a lot of back and forth between the terminal and the browser. This may improve in the future, but it's the current state of the world.&lt;/p&gt;

&lt;p&gt;Additionally, the workflow requires some familiarity with the &lt;code&gt;--allowedTools&lt;/code&gt; syntax to grant appropriate permissions safely. You can use the &lt;code&gt;--dangerously-skip-permissions&lt;/code&gt; flag as an alternative, but, as the name implies, this can be risky. You'll need to experiment with permissions based on what actions you want Claude to take.&lt;/p&gt;
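&lt;p&gt;As a rough guide, grant only what the task at hand needs and widen from there; these patterns are the ones used in the examples throughout this post:&lt;/p&gt;

```shell
# Editing files only:
#   --allowedTools "Edit"
# Editing plus git operations (branch, commit, push):
#   --allowedTools "Edit" "Bash(git:*)"
# Editing, git, and opening pull requests via the GitHub CLI:
#   --allowedTools "Edit" "Bash(git:*)" "Bash(gh pr:*)"
```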

&lt;p&gt;It's a great starting point, but at this stage you likely still need human review to make sure the docs land in the right place and capture the right context for what you want documented. These models do a great job, but it's still wise to double-check everything and push back if something seems incorrect.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final thoughts
&lt;/h2&gt;

&lt;p&gt;This is my first time experimenting with this workflow and I wanted to see how I could use these sandboxes to improve my process of adding docs after I’ve finished with a feature. I could see this being especially useful with not-so-tiny changes or when you have different people working on the features than writing the documentation.&lt;/p&gt;

&lt;p&gt;There are a lot more iterations of this workflow that can be explored. Some interesting possibilities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Make this part of the PR process, kicked off automatically by a code change&lt;/li&gt;
&lt;li&gt;Add context to a particular session over time during development&lt;/li&gt;
&lt;li&gt;Point the agent at an existing style guide to ensure consistent documentation&lt;/li&gt;
&lt;/ul&gt;
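&lt;p&gt;As a sketch of the first idea, a CI job could start a session when a change merges. This is untested and hypothetical: ORG_ID is a placeholder, and it assumes the depot CLI and a token are available in the CI environment:&lt;/p&gt;

```shell
# Hypothetical CI step: kick off a docs session for a merged change.
depot claude \
  --org ORG_ID \
  --repository https://github.com/depot/docs.git \
  --branch main \
  --session-id "docs-$GITHUB_SHA" \
  --allowedTools "Edit" "Bash(git:*)" "Bash(gh pr:*)" \
  "Review the change in commit $GITHUB_SHA, update the docs to match, and open a draft pull request"
```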

&lt;p&gt;Want to try this yourself? Pick a small feature that needs docs, &lt;a href="https://depot.dev/docs/agents/claude-code/quickstart" rel="noopener noreferrer"&gt;set up a Claude session with &lt;code&gt;depot claude&lt;/code&gt;&lt;/a&gt;, and walk through the workflow.&lt;/p&gt;

&lt;p&gt;Written by Iris @ Depot&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>tutorial</category>
      <category>devops</category>
    </item>
    <item>
      <title>Using AI as my engineering copilot (not autopilot)</title>
      <dc:creator>The Charmizard</dc:creator>
      <pubDate>Tue, 09 Sep 2025 13:29:09 +0000</pubDate>
      <link>https://forem.com/depot/using-ai-as-my-engineering-copilot-not-autopilot-458</link>
      <guid>https://forem.com/depot/using-ai-as-my-engineering-copilot-not-autopilot-458</guid>
      <description>&lt;p&gt;AI has suddenly become accessible in ways that actually matter for my day-to-day engineering work. I'm not just telling AI to build something and hoping for the best. I still want control over how things are implemented, especially when I'm the one who has to maintain the code.&lt;/p&gt;

&lt;p&gt;Let me share how I've integrated AI (primarily Claude) into my workflow while keeping my hands firmly on the wheel.&lt;/p&gt;

&lt;h2&gt;
  
  
  The problem with "just make it work"
&lt;/h2&gt;

&lt;p&gt;I recently asked Claude to help with a feature involving switching between table and filtered views on a GitHub Actions log detail page. Technically, it worked. But the implementation? Lots of &lt;code&gt;useEffect&lt;/code&gt; hooks triggering each other to achieve the final result. It worked, but it was much harder to read and reason through, which makes it easier to introduce unexpected bugs and behaviors in future changes. If I were reviewing this code in a PR, I would leave comments asking whether there was a more straightforward way to achieve the result. There was, and with the guidance of "let's do this in a more declarative way that doesn't rely so heavily on useEffects ...", I got a much cleaner solution.&lt;/p&gt;

&lt;p&gt;Making something work isn't always enough. As engineers, most of our time is spent reading code, not writing it. If the logic is convoluted and hard to follow, we've solved one problem while creating another.&lt;/p&gt;

&lt;h2&gt;
  
  
  How I actually use AI
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Navigating unfamiliar territory
&lt;/h3&gt;

&lt;p&gt;At Depot, we have multiple codebases, and I'm definitely more familiar with some than others. When I need to dive into unfamiliar code, I don't ask Claude to "explain this entire codebase" since that's too broad to be useful. Instead, I focus my questions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"I'm looking for where validation happens for this specific type of request. What are all the different contexts?"&lt;/li&gt;
&lt;li&gt;"I found something in this file. Is this relevant to what I'm looking for? What does it actually do?"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While Claude is super powerful, it's not always right, and I often find myself pushing back: "You said this, but I found this other thing that seems to contradict it. Am I misunderstanding?" About half the time, Claude responds with some version of "You know what, you're right." It can be a great starting point while you find your bearings in an unfamiliar codebase, letting you both learn as you go, but it's always worth double-checking.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pairing with a very fast typist
&lt;/h3&gt;

&lt;p&gt;I come from a background of formal pair programming where someone is the driver who primarily handles typing and making the changes, and the other is the navigator who guides the direction of what changes are made. With Claude's interactive mode, I've adapted this approach. I'm the navigator; Claude is the driver.&lt;/p&gt;

&lt;p&gt;Here's how it works: I'll start with "Here's a page I want as a boilerplate. Create a new one based on this but just render hello world." Then: "Now fetch this specific data from our database" (with details about the where clause and which table to use).&lt;/p&gt;

&lt;p&gt;Every step, I'm reviewing the code, making sure it makes sense. The difference from human pairing is the 30-second wait while Claude types, but the principle is the same. I'm guiding the implementation based on how I would build it, just faster.&lt;/p&gt;

&lt;p&gt;Sometimes Claude tries to be "helpful" and re-adds things I deliberately removed. So I've learned to say: "Hey, I've made some changes. Start from here and move forward."&lt;/p&gt;

&lt;p&gt;In general, I've found that using interactive mode to review each change is useful for making sure Claude is on the right track. It gives me opportunities to tweak direction and try out different implementations along the way.&lt;/p&gt;

&lt;h3&gt;
  
  
  The tedious stuff
&lt;/h3&gt;

&lt;p&gt;AI really shines here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Comparing log files to spot timing differences or anomalies (so much faster than my human eyes)&lt;/li&gt;
&lt;li&gt;Making a specific component generic so I can reuse it&lt;/li&gt;
&lt;li&gt;Finding every place in the codebase where we update certain records so I can add a new field consistently&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For verification, I'll implement something myself, then ask: "I intended to do X. Did I miss anywhere?" It's like having a second pair of eyes that never gets tired.&lt;/p&gt;

&lt;h3&gt;
  
  
  Data visualization and queries
&lt;/h3&gt;

&lt;p&gt;We've set up MCP to give Claude read-only access to some databases. Now I can ask: "Have we seen an increase in build failures in the last hour?" Claude figures out which tables to query, writes the SQL, executes it, and gives me a graph with the most recent failing builds.&lt;/p&gt;

&lt;p&gt;There are still times when I write one-off queries myself, but Claude does a great job, especially across multiple databases, of pulling relevant data to help me identify and visualize trends.&lt;/p&gt;

&lt;h3&gt;
  
  
  Workflow intelligence
&lt;/h3&gt;

&lt;p&gt;These agents help with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Providing insight into the key changes made since the last time I looked at some code.&lt;/li&gt;
&lt;li&gt;Acting as an automated code reviewer that catches when my PR description says one thing but the implementation does another (great at finding typos, occasionally catches real issues)&lt;/li&gt;
&lt;li&gt;Generating test cases based on existing code (though you still need human judgment about what's worth testing)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All of these workflows work great, but they have one major limitation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Remote agents for holding context
&lt;/h3&gt;

&lt;p&gt;One of the limitations of using Claude sessions locally is losing context from session to session. Switching machines or switching contexts can mean starting from scratch, since Claude doesn't remember previous conversations or the decisions made along the way.&lt;/p&gt;

&lt;p&gt;This is where Depot's &lt;a href="https://depot.dev/docs/agents/overview" rel="noopener noreferrer"&gt;remote agent sandboxes&lt;/a&gt; can help. Instead of running Claude locally, it runs in a persistent cloud environment with full access to your project files, git history, and terminal.&lt;/p&gt;

&lt;p&gt;This can be useful when I want Claude to do something periodically without my having to re-provide the same context, and I don't need interactive mode. You can get started with our &lt;a href="https://depot.dev/docs/agents/claude-code/quickstart" rel="noopener noreferrer"&gt;Claude Code quickstart guide&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The bottom line
&lt;/h2&gt;

&lt;p&gt;AI has been a very powerful tool for accelerating my work, not replacing my thinking. It helps me understand unfamiliar concepts and codebases while still leaving me in control of the architecture and implementation of the changes I make. It can do a lot of the tedious work for you and implement a surprising amount on its own, but it works even better at creating maintainable solutions when you provide insight and guidance along the way.&lt;/p&gt;

&lt;p&gt;The key is maintaining that balance: letting AI handle the tedious parts while keeping control over the architecture, patterns, and maintainability of the code. Because at the end of the day, I'm not just trying to make something work. I'm building something that my team and I will need to understand, modify, and maintain for months or years to come.&lt;/p&gt;

&lt;h2&gt;
  
  
  FAQ
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What's the difference between AI as a copilot vs autopilot?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Using AI as an autopilot means letting it make architectural decisions and write code without oversight. As a copilot, you guide the implementation while AI handles the mechanical work—like having a very fast typist who follows your directions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How can I share Claude Code sessions with my team?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Depot's Claude Code sessions allow you to share and resume AI coding sessions across your entire team. Anyone in your organization can pick up where you left off, making collaboration with AI agents seamless.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Can I run Claude Code in CI/CD pipelines?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Yes! With Depot's remote agent sandboxes and Claude Code in GitHub Actions, you can run AI coding agents in your CI workflows. This enables automated code reviews, updates, and maintenance tasks directly in your development pipeline.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What are remote agent sandboxes?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Remote agent sandboxes provide persistent cloud environments where AI agents like Claude Code can work with full filesystem context, Git integration, and project history. No more starting from scratch—your AI agent maintains context across sessions.&lt;/p&gt;

&lt;p&gt;Written by Iris @ Depot&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>tutorial</category>
      <category>learning</category>
      <category>devops</category>
    </item>
    <item>
      <title>From Go code to container image with Depot API</title>
      <dc:creator>The Charmizard</dc:creator>
      <pubDate>Thu, 28 Aug 2025 16:33:28 +0000</pubDate>
      <link>https://forem.com/depot/from-go-code-to-container-image-with-depot-api-1ddp</link>
      <guid>https://forem.com/depot/from-go-code-to-container-image-with-depot-api-1ddp</guid>
      <description>&lt;p&gt;In this post, we're going to create a more specialized container building system using the lower-level &lt;a href="https://depot.dev/docs/container-builds/reference/api-overview" rel="noopener noreferrer"&gt;Depot API&lt;/a&gt; Go SDK. This is a more advanced use case that builds on our previous &lt;a href="https://depot.dev/blog/go-code-to-container-depot-api" rel="noopener noreferrer"&gt;blog post&lt;/a&gt;. Here, we are going to create a tool that gets tar content and builds a container.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Depot's build flow works
&lt;/h2&gt;

&lt;p&gt;With the Depot API, there are three steps to build a container.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Register a build with Depot.&lt;/li&gt;
&lt;li&gt;Acquire a Depot build machine.&lt;/li&gt;
&lt;li&gt;Configure, build, and push a container.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Before you start
&lt;/h2&gt;

&lt;p&gt;Before you start, ensure you have a &lt;a href="https://depot.dev/docs/core-concepts#projects" rel="noopener noreferrer"&gt;Depot project setup&lt;/a&gt; and an &lt;a href="https://depot.dev/docs/container-builds/reference/api-authentication" rel="noopener noreferrer"&gt;organization API token&lt;/a&gt; generated.&lt;/p&gt;
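&lt;p&gt;The code below reads the API token and project ID from environment variables, so export them first (both values are placeholders):&lt;/p&gt;

```shell
# Placeholders: substitute your organization API token and project ID.
export DEPOT_TOKEN=your-org-api-token
export DEPOT_PROJECT_ID=your-project-id
```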

&lt;h2&gt;
  
  
  Install the Go package
&lt;/h2&gt;

&lt;p&gt;To start, let's add these imports and run &lt;code&gt;go mod tidy&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s"&gt;"context"&lt;/span&gt;
    &lt;span class="s"&gt;"encoding/json"&lt;/span&gt;
    &lt;span class="s"&gt;"log"&lt;/span&gt;
    &lt;span class="s"&gt;"os"&lt;/span&gt;
    &lt;span class="s"&gt;"time"&lt;/span&gt;

    &lt;span class="s"&gt;"github.com/depot/depot-go/build"&lt;/span&gt;
    &lt;span class="s"&gt;"github.com/depot/depot-go/machine"&lt;/span&gt;
    &lt;span class="n"&gt;cliv1&lt;/span&gt; &lt;span class="s"&gt;"github.com/depot/depot-go/proto/depot/cli/v1"&lt;/span&gt;
    &lt;span class="s"&gt;"github.com/moby/buildkit/client"&lt;/span&gt;
    &lt;span class="s"&gt;"github.com/moby/buildkit/session"&lt;/span&gt;
    &lt;span class="s"&gt;"github.com/moby/buildkit/session/upload/uploadprovider"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Register a build
&lt;/h2&gt;

&lt;p&gt;First, we request a new build from the Depot API. This request associates the build with a specific project's isolated cache. Once registered, the Depot API responds with the new build's ID and a one-time build token. We defer reporting the build result, &lt;code&gt;buildErr&lt;/code&gt;, to the API; any non-nil error marks the build as failed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="n"&gt;token&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Getenv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"DEPOT_TOKEN"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;project&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Getenv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"DEPOT_PROJECT_ID"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;req&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;cliv1&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;CreateBuildRequest&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;ProjectId&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;project&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="n"&gt;build&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;build&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NewBuild&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;token&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Fatal&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;buildErr&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt;
&lt;span class="k"&gt;defer&lt;/span&gt; &lt;span class="n"&gt;build&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Finish&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;buildErr&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Acquire a builder machine
&lt;/h2&gt;

&lt;p&gt;Next, we use the build ID and build token to request a new ephemeral builder machine.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;buildkit&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;machine&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Machine&lt;/span&gt;
&lt;span class="n"&gt;buildkit&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;buildErr&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;machine&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Acquire&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;build&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ID&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;build&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Token&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"arm64"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;buildErr&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since you're interacting directly with the underlying BuildKit in Depot, you must remember to release the machine when the build completes. By default, machines stay on for two minutes after a build to handle additional builds.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;defer&lt;/span&gt; &lt;span class="n"&gt;buildkit&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Release&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Connect to BuildKit
&lt;/h2&gt;

&lt;p&gt;With the machine returned by the &lt;code&gt;Acquire&lt;/code&gt; call, we connect to its BuildKit instance using mutual TLS. Once connected, we can use all the power of BuildKit in Depot.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;buildkitClient&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Client&lt;/span&gt;
&lt;span class="n"&gt;buildkitClient&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;buildErr&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;buildkit&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Connect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;buildErr&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Configure the build
&lt;/h2&gt;

&lt;p&gt;We create a BuildKit solver configuration. BuildKit conceptualizes builds as a graph of operations that are solved. In this case, we are going to configure the solver to receive a tar that contains the Dockerfile and all the needed content to run the build.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="c"&gt;/*
 * howdy.tar.gz is a compressed tar archive that contains the Dockerfile and
 * any other files needed to build the image.
 */&lt;/span&gt;
&lt;span class="n"&gt;r&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"howdy.tar.gz"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;uploader&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;uploadprovider&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;New&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="c"&gt;// Special buildkit URL for HTTP over gRPC over gRPC.&lt;/span&gt;
&lt;span class="n"&gt;contextURL&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;uploader&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;r&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;solverOptions&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;SolveOpt&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;Frontend&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"dockerfile.v0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;FrontendAttrs&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="k"&gt;map&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="s"&gt;"platform"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"linux/arm64"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;"context"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;  &lt;span class="n"&gt;contextURL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c"&gt;// The tar file&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="n"&gt;Session&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Attachable&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;uploader&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We provide BuildKit with a session-attachable uploader. This is worth a brief explanation, because BuildKit's approach to client/server communication is rather unique: it is, in essence, a method for a server to request content from a client.&lt;/p&gt;

&lt;p&gt;An attachable is a BuildKit concept for a gRPC service hosted on the &lt;strong&gt;&lt;em&gt;client&lt;/em&gt;&lt;/strong&gt; on top of a bi-directional gRPC stream. This client-hosted service can be called by the server. In other words, the server tunnels gRPC requests back to the client.&lt;/p&gt;

&lt;p&gt;BuildKit's uploadprovider takes this tunneling approach further by layering HTTP/1.1 requests on top of the gRPC-over-gRPC connection. As you can imagine, this layered networking can be brittle in the face of network drops, so build retries are not uncommon.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;┌────────────────┐                                                             ┌────────────────┐
│Buildkit Client │                                                             │ Buildkit Server│
└────────────────┘                                                             └────────────────┘
         │                                                                              │
         │                                                                              │
        ┌┴─┐   Layer 1: Client calls Server Session Service                          ┌──┤
        │  │───(Bidirectional gRPC)────────────────────────────────────────────────▶ │  │
        │  │                                                                         │  │
        │ ┌┴─┐                   Layer 2: Server calls Client hosted Upload Service  │  │
        │ │  │ ◀────────────────────────────────────(Bi-Di gRPC tunneled over gRPC)──│  │
        │ │  │                                                                       │  │
        │ │  │                                                                       │  │
        │ │  │                                      Layer 3: Server asks for upload  │  │
        │ │ ┌┴─┐ ◀───────────────────────────────────(HTTP/1.1 over gRPC over gRPC)──│  │
        │ │ │  │                                                                     │  │
        │ │ │  │                                                                     │  │
        │ │ │  │ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ▶ │  │
        │ │ └┬─┘                                                                     │  │
        │ │  │                                                                       │  │
        │ │  │─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─▶ │  │
        │ └┬─┘                                                                       │  │
        │  │                                                                         │  │
        │  │ ◀ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ └─┬┘
        └┬─┘                                                                           │
         │                                                                             │
         │                                                                             │
         ▼                                                                             ▼
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Nevertheless, we serve the tar file from the client to the BuildKit server over this protocol.&lt;/p&gt;
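
&lt;p&gt;To make the layering concrete, here's a minimal sketch using only the Go standard library: &lt;code&gt;net.Pipe&lt;/code&gt; stands in for the gRPC-tunneled stream, and the client answers an HTTP/1.1 request that the server sends back over it. This illustrates the pattern only; the URL and payload are made up, and this is not BuildKit's actual wire format.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;package main

import (
    "bufio"
    "fmt"
    "io"
    "net"
    "net/http"
    "strings"
)

func main() {
    // net.Pipe stands in for the bidirectional stream between the
    // BuildKit server and the build client.
    serverEnd, clientEnd := net.Pipe()

    // The *client* hosts the content: it parses an HTTP request
    // arriving on its end of the stream and replies with the bytes.
    go func() {
        req, err := http.ReadRequest(bufio.NewReader(clientEnd))
        if err != nil {
            return
        }
        body := "tar-bytes-for-" + req.URL.Path
        resp := http.Response{
            StatusCode:    http.StatusOK,
            ProtoMajor:    1,
            ProtoMinor:    1,
            ContentLength: int64(len(body)),
            Body:          io.NopCloser(strings.NewReader(body)),
        }
        resp.Write(clientEnd)
    }()

    // The *server* asks the client for the build context over the
    // same stream, then reads the response body.
    req, _ := http.NewRequest("GET", "http://client/context.tar", nil)
    req.Write(serverEnd)
    resp, _ := http.ReadResponse(bufio.NewReader(serverEnd), req)
    payload, _ := io.ReadAll(resp.Body)
    fmt.Println(string(payload))
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Running this prints &lt;code&gt;tar-bytes-for-/context.tar&lt;/code&gt;: the server pulled the content from the client over the same connection the client opened.&lt;/p&gt;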

&lt;h2&gt;
  
  
  Stream build output
&lt;/h2&gt;

&lt;p&gt;Next, we can run a goroutine that prints the output of each step of the build.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="n"&gt;buildStatusCh&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="nb"&gt;make&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;chan&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;SolveStatus&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;go&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;enc&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NewEncoder&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Stdout&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;enc&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;SetIndent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"  "&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;status&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="k"&gt;range&lt;/span&gt; &lt;span class="n"&gt;buildStatusCh&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;enc&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Encode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;status&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;BuildKit will return status messages for each step of the build. Here, we're printing them as JSON to stdout.&lt;/p&gt;

&lt;h2&gt;
  
  
  Build the container
&lt;/h2&gt;

&lt;p&gt;Lastly, we ask BuildKit to solve our request to build the container. The build will reuse cached steps stored in the project cache.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;buildErr&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;buildkitClient&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Solve&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;solverOptions&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;buildStatusCh&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;buildErr&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this example, you could build a service that receives a tar and creates an image.&lt;/p&gt;
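
&lt;p&gt;Here's a hedged sketch of what such a service could look like, using only the standard library. The &lt;code&gt;buildImage&lt;/code&gt; function is a stand-in for the Solve flow above; a real implementation would feed the tar stream to the BuildKit session's upload provider and return the resulting image reference.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;package main

import (
    "fmt"
    "io"
    "net/http"
    "net/http/httptest"
    "strings"
)

// buildImage is a stand-in for the Solve flow above: a real service
// would hand the tar stream to the BuildKit session's upload provider.
func buildImage(buildCtx io.Reader) (string, error) {
    data, err := io.ReadAll(buildCtx)
    if err != nil {
        return "", err
    }
    return fmt.Sprintf("sha256:fake-digest-over-%d-bytes", len(data)), nil
}

// buildHandler accepts a tar build context via POST and responds with
// the (fake) image digest.
func buildHandler(w http.ResponseWriter, r *http.Request) {
    if r.Method != http.MethodPost {
        http.Error(w, "POST a tar build context", http.StatusMethodNotAllowed)
        return
    }
    digest, err := buildImage(r.Body)
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    fmt.Fprintln(w, digest)
}

func main() {
    // Exercise the handler in-process; a real service would instead
    // register it and call http.ListenAndServe.
    req := httptest.NewRequest(http.MethodPost, "/build", strings.NewReader("pretend-tar"))
    rec := httptest.NewRecorder()
    http.HandlerFunc(buildHandler).ServeHTTP(rec, req)
    fmt.Print(rec.Body.String())
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The 11-byte &lt;code&gt;pretend-tar&lt;/code&gt; payload yields &lt;code&gt;sha256:fake-digest-over-11-bytes&lt;/code&gt;; swapping &lt;code&gt;buildImage&lt;/code&gt; for the Solve flow turns this into a real tar-to-image service.&lt;/p&gt;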

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;With Depot's Go SDK, you can build sophisticated container building services without managing BuildKit infrastructure yourself. This approach gives you the performance benefits of Depot's optimized build machines—including native ARM64 support, persistent caches, and sub-second machine provisioning—while maintaining full control over the build process.&lt;/p&gt;

&lt;p&gt;The pattern we've explored here powers many production services that need programmatic container builds: CI/CD platforms, development environments, and SaaS applications that containerize user code. By leveraging Depot's API, you get enterprise-grade build performance without the operational overhead.&lt;/p&gt;

&lt;p&gt;Ready to get started? Generate an &lt;a href="https://depot.dev/docs/container-builds/reference/api-authentication" rel="noopener noreferrer"&gt;organization API token&lt;/a&gt; and explore our &lt;a href="https://depot.dev/docs/container-builds/reference/api-overview" rel="noopener noreferrer"&gt;API documentation&lt;/a&gt; to begin building your own container services.&lt;/p&gt;

&lt;h2&gt;
  
  
  FAQ
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What is Depot's Go SDK?&lt;/strong&gt;&lt;br&gt;
Depot's Go SDK provides low-level access to BuildKit infrastructure for programmatic container builds. It allows you to build Docker images from your applications without managing BuildKit servers yourself.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;When should I use the API instead of the CLI?&lt;/strong&gt;&lt;br&gt;
Use Depot's API when you need to build container images programmatically, such as in SaaS platforms that package user code or CI/CD systems that need custom build logic. The CLI is better for interactive development workflows.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Do I need to manage BuildKit infrastructure?&lt;/strong&gt;&lt;br&gt;
No, Depot handles all BuildKit infrastructure including machine provisioning, scaling, and maintenance. You get ephemeral build machines that spin up in seconds and automatically clean up after builds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How does caching work with the API?&lt;/strong&gt;&lt;br&gt;
Each Depot project gets isolated cache storage that persists across builds. The BuildKit solver automatically reuses cached layers, dramatically speeding up subsequent builds with similar content.&lt;/p&gt;

&lt;h2&gt;
  
  
  Related posts
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://depot.dev/blog/depot-api" rel="noopener noreferrer"&gt;Now available: Depot API&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://depot.dev/blog/depot-with-dagger" rel="noopener noreferrer"&gt;Building with Depot and Dagger&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://depot.dev/blog/how-to-use-buildkit-cache-mounts-in-ci" rel="noopener noreferrer"&gt;How to use BuildKit cache mounts in CI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://depot.dev/blog/buildkit-in-depth" rel="noopener noreferrer"&gt;BuildKit in-depth: Docker's build engine explained&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Written by: Chris Goller @ Depot&lt;/p&gt;

</description>
      <category>go</category>
      <category>tutorial</category>
      <category>docker</category>
      <category>startup</category>
    </item>
    <item>
      <title>Guide to faster Rust builds in CI</title>
      <dc:creator>The Charmizard</dc:creator>
      <pubDate>Fri, 08 Aug 2025 20:51:12 +0000</pubDate>
      <link>https://forem.com/depot/guide-to-faster-rust-builds-in-ci-3njl</link>
      <guid>https://forem.com/depot/guide-to-faster-rust-builds-in-ci-3njl</guid>
      <description>&lt;p&gt;We’ll be building the &lt;a href="https://github.com/zed-industries/zed" rel="noopener noreferrer"&gt;Zed&lt;/a&gt; project, a code editor written in Rust. Zed is a large codebase with many dependencies, making it a good candidate for exploring build optimizations. We'll also be starting off using Depot's GitHub Actions runners, which have a ton of &lt;a href="https://depot.dev/blog/introducing-github-actions-ultra-runners" rel="noopener noreferrer"&gt;optimizations already implemented&lt;/a&gt;. I would know, I &lt;a href="https://depot.dev/blog/accelerating-builds-improve-ec2-boot-time" rel="noopener noreferrer"&gt;worked on some of them!&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Just for fun, I built Zed with GitHub’s equivalent runner. We ran the same &lt;a href="https://github.com/depot/zed-test/actions/runs/16666944005/job/47175083235" rel="noopener noreferrer"&gt;baseline workflow&lt;/a&gt; on &lt;code&gt;ubuntu-latest-8&lt;/code&gt;. The &lt;a href="https://github.com/depot/zed-test/actions/runs/16679327806" rel="noopener noreferrer"&gt;GitHub runner workflow&lt;/a&gt; took 43m 16s - a staggering 61.1% slower than Depot’s 26m 51s baseline. The easiest optimization you can make to your CI workflows is to use Depot!&lt;/p&gt;

&lt;h2&gt;
  
  
  Toolchain optimizations
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Mold linker
&lt;/h3&gt;

&lt;p&gt;First, let’s work on optimizing the Rust toolchain itself. The fewer changes we need to make to this codebase, the better. We can start by replacing the default linker (lld on Linux) with &lt;a href="https://github.com/rui314/mold" rel="noopener noreferrer"&gt;mold&lt;/a&gt;. This linker is designed to be really fast, especially for projects with large build outputs.&lt;/p&gt;
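
&lt;p&gt;If you want to try it on your own project, a common setup looks like this (assuming &lt;code&gt;mold&lt;/code&gt; and &lt;code&gt;clang&lt;/code&gt; are installed on the runner):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Option 1: wrap the build so mold replaces the default linker
mold -run cargo build --release

# Option 2: tell rustc to link through clang, which then uses mold
export RUSTFLAGS="-C linker=clang -C link-arg=-fuse-ld=mold"
cargo build --release
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;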

&lt;h4&gt;
  
  
  Results
&lt;/h4&gt;

&lt;p&gt;The mold linker showed negligible results for the Zed codebase. When applied only to release builds (not tests), the &lt;a href="https://github.com/depot/zed-test/actions/runs/16679565239" rel="noopener noreferrer"&gt;mold workflow&lt;/a&gt; completed in 28m 49s - just 0.7% slower than baseline.&lt;/p&gt;

&lt;p&gt;Test execution took 15m 27s and release builds took 11m 35s. The minimal difference doesn't justify the added complexity of configuring a different linker for the Zed codebase.&lt;/p&gt;

&lt;h3&gt;
  
  
  Nightly features
&lt;/h3&gt;

&lt;p&gt;The nightly Rust compiler has several features that can help to speed up builds, if you’re willing to switch Rust compiler versions. The two that we’ll use in this example are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;-Z share-generics&lt;/code&gt;: This feature allows the compiler to share generic code across different compilation units, which can significantly reduce build times for large projects.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-Z threads=8&lt;/code&gt;: This feature allows the compiler to parse files and expand macros in parallel.&lt;/li&gt;
&lt;/ul&gt;
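
&lt;p&gt;A sketch of wiring these flags up in a CI step, assuming &lt;code&gt;rustup&lt;/code&gt; is available (the flags must be passed via &lt;code&gt;RUSTFLAGS&lt;/code&gt;, not just by installing nightly):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;rustup toolchain install nightly
export RUSTFLAGS="-Z threads=8 -Z share-generics=y"
cargo +nightly build --release
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;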

&lt;h4&gt;
  
  
  Results
&lt;/h4&gt;

&lt;p&gt;Switching to Rust nightly provided significant benefits. The &lt;a href="https://github.com/depot/zed-test/actions/runs/16731664175" rel="noopener noreferrer"&gt;nightly workflow&lt;/a&gt; completed in 26m 30s - a 7.3% improvement over baseline.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Total time: 26m 30s (7.3% faster than baseline)&lt;/li&gt;
&lt;li&gt;Test execution: 15m 43s (similar to baseline)&lt;/li&gt;
&lt;li&gt;Build time: 9m 1s (22.7% faster than baseline)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The nightly features (&lt;code&gt;-Z threads=8&lt;/code&gt; and &lt;code&gt;-Z share-generics=y&lt;/code&gt;) dramatically improve build times by allowing the compiler to do more work concurrently and avoid redundant compilation of generic code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Caching strategies
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Caching dependency downloads
&lt;/h3&gt;

&lt;p&gt;Let’s get into the real meat of speeding up Rust builds: caching. Cargo, the Rust package manager, can download hundreds of dependencies before compiling them. This can be slow, especially for large projects with many dependencies. By caching these downloads, we can speed up build times significantly.&lt;/p&gt;
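
&lt;p&gt;On GitHub Actions, a minimal sketch of this caching step looks like the following (the cache key and paths are illustrative; tune them for your project):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;- name: Cache cargo downloads
  uses: actions/cache@v4
  with:
    path: |
      ~/.cargo/registry
      ~/.cargo/git
    key: cargo-${{ runner.os }}-${{ hashFiles('**/Cargo.lock') }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;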

&lt;h4&gt;
  
  
  Results
&lt;/h4&gt;

&lt;p&gt;Caching Cargo dependencies showed good improvements when combined with nightly features. The &lt;a href="https://github.com/depot/zed-test/actions/runs/16731664369" rel="noopener noreferrer"&gt;cargo caching workflow with nightly&lt;/a&gt; completed in 26m 18s - 8.0% faster than baseline.&lt;/p&gt;

&lt;p&gt;By caching the &lt;code&gt;~/.cargo&lt;/code&gt; directories between runs, we avoid re-downloading and re-indexing hundreds of crates.&lt;/p&gt;

&lt;h3&gt;
  
  
  Sccache
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/mozilla/sccache" rel="noopener noreferrer"&gt;sccache&lt;/a&gt; is a compiler cache for Rust that can significantly speed up builds by caching compiled artifacts. It works by storing the output of compilation in a local or remote cache, so that subsequent builds can reuse these artifacts instead of recompiling everything from scratch.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://depot.dev/docs/cache/reference/sccache" rel="noopener noreferrer"&gt;Depot Cache&lt;/a&gt; has built-in support for sccache storage, allowing you to share cached artifacts across different CI runs and local development environments. Using the &lt;code&gt;depot&lt;/code&gt; CLI, you can easily set up sccache to use Depot as its cache backend by running the &lt;code&gt;depot cargo&lt;/code&gt; command.&lt;/p&gt;
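
&lt;p&gt;Outside of the &lt;code&gt;depot&lt;/code&gt; CLI, the generic way to enable sccache for any Rust build is to point &lt;code&gt;RUSTC_WRAPPER&lt;/code&gt; at it (a sketch, assuming sccache is already installed):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;export RUSTC_WRAPPER=sccache
cargo build --release
sccache --show-stats  # inspect cache hit rates after the build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;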

&lt;h4&gt;
  
  
  Results
&lt;/h4&gt;

&lt;p&gt;Using &lt;code&gt;depot cargo&lt;/code&gt; with sccache showed promising results with a warm cache:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cold cache&lt;/strong&gt;: &lt;a href="https://github.com/depot/zed-test/actions/runs/16731664252" rel="noopener noreferrer"&gt;33m 23s&lt;/a&gt; (16.8% slower than baseline, 26.8% slower than previous optimization)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Test execution: 13m 18s (12.3% faster)&lt;/li&gt;
&lt;li&gt;Build time: 14m 54s (27.7% slower)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Warm cache&lt;/strong&gt;: &lt;a href="https://github.com/depot/zed-test/actions/runs/16732584997" rel="noopener noreferrer"&gt;25m 18s&lt;/a&gt; (11.5% faster than baseline, 4.0% faster than previous optimization)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Test execution: 13m 12s (12.9% faster)&lt;/li&gt;
&lt;li&gt;Build time: 10m 48s (7.4% faster)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With a warm cache, sccache provides meaningful speedups for both test and build times.&lt;/p&gt;

&lt;h2&gt;
  
  
  Compiler backend alternatives
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Compiler backends?
&lt;/h3&gt;

&lt;p&gt;For those of you who aren't compiler nerds, &lt;code&gt;rustc&lt;/code&gt; uses LLVM as its backend for machine code generation. You can think of LLVM as a library that the Rust compiler uses to generate machine code for different architectures. However, LLVM can be slow to compile large projects, since it’s designed for optimized machine code first, and compile speed second. This is where alternative backends come in.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cranelift for debug builds and tests
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://cranelift.dev/" rel="noopener noreferrer"&gt;Cranelift&lt;/a&gt; is an alternative backend for the Rust compiler, designed to compile fast enough to be used in development iterations. It trades off some runtime performance for faster compile times, making it a great choice for debugging and testing builds.&lt;/p&gt;

&lt;h4&gt;
  
  
  Results
&lt;/h4&gt;

&lt;p&gt;Cranelift fails to compile the Zed codebase on both stable and nightly Rust. The &lt;a href="https://github.com/depot/zed-test/actions/runs/16730115341" rel="noopener noreferrer"&gt;cranelift workflow&lt;/a&gt; shows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;error: asm! and global_asm! sym operands are not yet supported
error: could not compile &lt;span class="sb"&gt;`&lt;/span&gt;wasmtime-fiber&lt;span class="sb"&gt;`&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;lib&lt;span class="o"&gt;)&lt;/span&gt; due to 1 previous error
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is a known limitation of Cranelift - it doesn't support all LLVM features, particularly inline assembly. Many Rust projects that depend on low-level crates (like wasmtime in Zed's case) cannot use Cranelift as a drop-in replacement. This limitation exists regardless of whether you use stable or nightly Rust.&lt;/p&gt;

&lt;p&gt;For now, we'll &lt;a href="https://github.com/depot/zed-test/actions/runs/16727448022" rel="noopener noreferrer"&gt;revert this change&lt;/a&gt;. While Cranelift can provide dramatic speedups for projects it supports, it won't work for our example codebase.&lt;/p&gt;

&lt;h2&gt;
  
  
  CI-specific optimizations
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Cargo nextest with matrix builds for massive parallelization
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://nexte.st/" rel="noopener noreferrer"&gt;cargo-nextest&lt;/a&gt; is a next-generation test runner for Rust that can bring significant performance improvements over the default &lt;code&gt;cargo test&lt;/code&gt;. The main advantage is that it runs each test in its own separate process, allowing us to run many tests in parallel.&lt;/p&gt;
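
&lt;p&gt;Trying it is usually a two-line change in CI (here we install from source for simplicity; prebuilt binaries are faster to fetch):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cargo install cargo-nextest --locked
cargo nextest run --workspace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;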

&lt;h4&gt;
  
  
  Results
&lt;/h4&gt;

&lt;p&gt;Cargo nextest showed excellent performance with a warm sccache. The &lt;a href="https://github.com/depot/zed-test/actions/runs/16733819334" rel="noopener noreferrer"&gt;nextest workflow&lt;/a&gt; completed in 18m 34s - a 35.0% improvement over baseline. Test execution took 10m 46s, which is 28.9% faster than baseline.&lt;/p&gt;

&lt;p&gt;Nextest delivers substantial performance improvements when combined with warm sccache, making it an excellent choice for CI pipelines.&lt;/p&gt;

&lt;h3&gt;
  
  
  Architecture optimization
&lt;/h3&gt;

&lt;p&gt;The single biggest impact on build times is often the hardware you’re running on. More CPU cores generally mean faster builds, especially for highly parallelizable tasks like Rust compilation. Doubling your CI costs isn’t always an option, of course.&lt;/p&gt;

&lt;h4&gt;
  
  
  Results
&lt;/h4&gt;

&lt;p&gt;To test the impact of more CPU cores, we doubled the runner size from 8 to 16 cores. The &lt;a href="https://github.com/depot/zed-test/actions/runs/16733860911" rel="noopener noreferrer"&gt;nextest-16-cores workflow&lt;/a&gt; results were interesting:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Total time: 21m 25s (25.1% faster than baseline)&lt;/li&gt;
&lt;li&gt;Test execution: 7m 31s (50.3% faster test execution than baseline)&lt;/li&gt;
&lt;li&gt;Build time: 11m 54s (slower than 8-core due to cache warmth differences)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The 16-core configuration showed excellent test parallelization (7m 31s vs 10m 46s on 8 cores), but the overall time was impacted by sccache variability. This demonstrates that raw CPU scaling isn't always the answer - cache effectiveness can be more important than core count.&lt;/p&gt;

&lt;h2&gt;
  
  
  Platform-specific optimizations
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Linux I/O optimizations with RAM disks
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/RAM_drive" rel="noopener noreferrer"&gt;RAM disks&lt;/a&gt; provide a filesystem backed by memory instead of persistent storage, offering dramatically faster I/O operations. For Rust builds, this can significantly reduce the time spent reading and writing intermediate compilation artifacts.&lt;/p&gt;

&lt;p&gt;Our runners reserve some memory for a RAM disk. When a job starts, VMs will automatically create and mount this RAM disk to the runner, which helps to make every I/O operation, including reading and writing files for compilation, much faster.&lt;/p&gt;

&lt;p&gt;For self-hosted runners or other CI providers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /mnt/ramdisk
&lt;span class="nb"&gt;sudo &lt;/span&gt;mount &lt;span class="nt"&gt;-t&lt;/span&gt; tmpfs &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;16G tmpfs /mnt/ramdisk

&lt;span class="c"&gt;# Alternative: ramfs (pure RAM, no size limits - use with caution!)&lt;/span&gt;
&lt;span class="c"&gt;# sudo mount -t ramfs ramfs /mnt/ramdisk&lt;/span&gt;

&lt;span class="c"&gt;# Configure Rust to use it&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;CARGO_TARGET_DIR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/mnt/ramdisk/target
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;TMPDIR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/mnt/ramdisk/tmp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;After testing various Rust build optimizations on the Zed codebase, here are the key findings:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Optimization&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Total&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Tests&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Build&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;vs Baseline&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;GitHub Runner&lt;/td&gt;
&lt;td&gt;43m 16s&lt;/td&gt;
&lt;td&gt;~30m&lt;/td&gt;
&lt;td&gt;~13m&lt;/td&gt;
&lt;td&gt;+51.2%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Baseline (Depot)&lt;/td&gt;
&lt;td&gt;28m 36s&lt;/td&gt;
&lt;td&gt;15m 10s&lt;/td&gt;
&lt;td&gt;11m 40s&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Mold linker&lt;/td&gt;
&lt;td&gt;28m 49s&lt;/td&gt;
&lt;td&gt;15m 27s&lt;/td&gt;
&lt;td&gt;11m 35s&lt;/td&gt;
&lt;td&gt;+0.7%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Nightly features&lt;/td&gt;
&lt;td&gt;26m 30s&lt;/td&gt;
&lt;td&gt;15m 43s&lt;/td&gt;
&lt;td&gt;9m 1s&lt;/td&gt;
&lt;td&gt;-7.3%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cargo caching&lt;/td&gt;
&lt;td&gt;26m 18s&lt;/td&gt;
&lt;td&gt;~14m&lt;/td&gt;
&lt;td&gt;~10m&lt;/td&gt;
&lt;td&gt;-8.0%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Depot cargo (warm)&lt;/td&gt;
&lt;td&gt;25m 18s&lt;/td&gt;
&lt;td&gt;13m 12s&lt;/td&gt;
&lt;td&gt;10m 48s&lt;/td&gt;
&lt;td&gt;-11.5%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cranelift&lt;/td&gt;
&lt;td&gt;Failed&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Nextest&lt;/td&gt;
&lt;td&gt;18m 34s&lt;/td&gt;
&lt;td&gt;10m 46s&lt;/td&gt;
&lt;td&gt;5m 46s&lt;/td&gt;
&lt;td&gt;-35.0%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Nextest (16 cores)&lt;/td&gt;
&lt;td&gt;21m 25s&lt;/td&gt;
&lt;td&gt;7m 31s&lt;/td&gt;
&lt;td&gt;11m 54s&lt;/td&gt;
&lt;td&gt;-25.1%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Fin
&lt;/h2&gt;

&lt;p&gt;The biggest wins came from using nextest with warm sccache (35.0% improvement on 8 cores) and combining multiple optimizations. Depot cargo with warm sccache provided an 11.5% improvement, while properly configured nightly features added 7.3%.&lt;/p&gt;

&lt;p&gt;Ready to speed up your own Rust builds? Start with &lt;a href="https://depot.dev/github-actions" rel="noopener noreferrer"&gt;Depot's optimized runners&lt;/a&gt; and work your way through these techniques - your future self (and your CI bill) will thank you.&lt;/p&gt;

&lt;h2&gt;
  
  
  FAQ
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What's the single most impactful optimization for Rust CI builds?&lt;/strong&gt; Cargo nextest with warm sccache provides exceptional performance. We achieved a 35.0% speedup on standard 8-core runners.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Should I use sccache for all my Rust builds?&lt;/strong&gt; It depends on your use case. sccache speeds up test compilation (11-14% faster) but can slow down release builds by up to 50%. Consider a hybrid approach: use sccache for tests and regular cargo for release builds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Is switching to Rust nightly worth it for build performance?&lt;/strong&gt; Yes, if configured properly. Nightly features like &lt;code&gt;-Z share-generics=y&lt;/code&gt; and &lt;code&gt;-Z threads=8&lt;/code&gt; provided a 7.3% overall speedup with 22.7% faster build times in our testing. Make sure to actually pass these flags via RUSTFLAGS, not just install the nightly toolchain.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why did Cranelift fail to compile the Zed codebase?&lt;/strong&gt; Cranelift doesn't support all LLVM features, particularly inline assembly. Projects with low-level dependencies (like wasmtime) often can't use Cranelift as a drop-in replacement, despite its potential for dramatic speedups.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What's the easiest optimization to implement right now?&lt;/strong&gt; Using cargo nextest. With warm sccache, this single change delivered a 35.0% speedup. It's a drop-in replacement for &lt;code&gt;cargo test&lt;/code&gt; that runs tests in parallel more efficiently.&lt;/p&gt;

&lt;p&gt;Author: Billy Batista @ Depot.dev&lt;/p&gt;

</description>
      <category>rust</category>
      <category>tutorial</category>
      <category>devops</category>
      <category>cicd</category>
    </item>
    <item>
      <title>A practical guide to debugging GitHub Actions</title>
      <dc:creator>The Charmizard</dc:creator>
      <pubDate>Fri, 25 Jul 2025 00:59:19 +0000</pubDate>
      <link>https://forem.com/depot/a-practical-guide-to-debugging-github-actions-17m1</link>
      <guid>https://forem.com/depot/a-practical-guide-to-debugging-github-actions-17m1</guid>
      <description>&lt;p&gt;GitHub jobs fail for reasons that make no sense all the time. You haven't changed anything in your code, yet the workflow breaks anyway. It's frustrating because you just want your changes to build and deploy. Debugging action runners can be a headache, but here's how to start untangling what's going wrong.&lt;/p&gt;

&lt;h2&gt;
  
  
  GitHub Actions failures
&lt;/h2&gt;

&lt;p&gt;GitHub and GitHub Actions are a huge part of so many engineers’ workflows, relied upon heavily in day-to-day work. For the most part, we don’t want to think about how a job gets run; we just want to see everything go green so changes can be merged and deployed.&lt;/p&gt;

&lt;p&gt;So when there are issues, it’s very disruptive to teams.&lt;/p&gt;

&lt;h3&gt;
  
  
  The team impact
&lt;/h3&gt;

&lt;p&gt;Not only are issues with actions and jobs generally frustrating, they can be pretty &lt;a href="https://depot.dev/blog/self-hosting-github-actions" rel="noopener noreferrer"&gt;costly to your team&lt;/a&gt;. If an error doesn’t seem relevant to the change, many people’s first reaction is to re-run the failed jobs. For flaky issues, a re-run may fix that particular instance, but it can also mean waiting a long time just for another failure to occur.&lt;/p&gt;

&lt;p&gt;These failures, especially in jobs that run a test suite, may cause people to stop trusting that particular check. That becomes a problem down the line if the failure is at some point flagging a real issue. These failures also have a way of occurring just when a change needs to land quickly: if you need to put out a quick fix and a job failure is blocking the deploy, that’s a big problem, especially during an incident or some other high-stakes circumstance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Debugging GitHub Actions
&lt;/h2&gt;

&lt;p&gt;In many cases these jobs are running a test suite, so sometimes the problem really is the change being made. Other jobs fail to build, which can also be the fault of the change. But what about the case where the problem isn’t the code change at all? The runner your job is executing on may be running out of memory or saturating its CPU. There’s also the case where your jobs aren’t failing, but they’re taking longer to run and it’s not clear why.&lt;/p&gt;

&lt;h3&gt;
  
  
  Utilizing GitHub Logs
&lt;/h3&gt;

&lt;p&gt;In the GitHub UI, you can look through the job logs to see what is going on with your job. When there is a really explicit error, the UI does a good job of scrolling to and highlighting the error that caused the job to fail. This is often useful for identifying whether the issue is related to the code change.&lt;/p&gt;

&lt;p&gt;If it’s not clearly related to the code change, it can serve as a starting point for investigating the actual issue. Sometimes the error that causes the job to fail is less obvious and requires more digging to determine the problem.&lt;/p&gt;

&lt;h4&gt;
  
  
  Log point-in-time resource usage
&lt;/h4&gt;

&lt;p&gt;When GitHub Actions jobs fail unexpectedly, memory exhaustion is often the culprit, but the symptoms aren't always obvious in the logs. Rather than guessing, you can add simple monitoring steps to your workflows that capture resource usage before and after critical operations. For memory diagnostics, adding &lt;code&gt;free -h&lt;/code&gt; and &lt;code&gt;df -h&lt;/code&gt; commands provides immediate visibility into RAM and disk utilization at key points in your build process. If CPU bottlenecks are suspected, logging system information with &lt;code&gt;nproc&lt;/code&gt;, &lt;code&gt;lscpu&lt;/code&gt;, and &lt;code&gt;uptime&lt;/code&gt; reveals processor constraints and load averages. Wrapping expensive operations with the &lt;code&gt;time&lt;/code&gt; command shows exactly where delays occur.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Check memory usage&lt;/span&gt;
  &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;echo "Memory usage before:"&lt;/span&gt;
    &lt;span class="s"&gt;free -h&lt;/span&gt;
    &lt;span class="s"&gt;df -h&lt;/span&gt;

    &lt;span class="s"&gt;# Your actual build/test steps here&lt;/span&gt;

    &lt;span class="s"&gt;echo "Memory usage after:"&lt;/span&gt;
    &lt;span class="s"&gt;free -h&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Monitor CPU usage&lt;/span&gt;
  &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;echo "CPU info:"&lt;/span&gt;
    &lt;span class="s"&gt;nproc&lt;/span&gt;
    &lt;span class="s"&gt;lscpu&lt;/span&gt;
    &lt;span class="s"&gt;echo "Load average:"&lt;/span&gt;
    &lt;span class="s"&gt;uptime&lt;/span&gt;
    &lt;span class="s"&gt;# Time your critical steps&lt;/span&gt;
    &lt;span class="s"&gt;time make build&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This approach requires no external dependencies or API keys, just a few extra workflow steps that output directly to your job logs, making it perfect for quick debugging sessions or permanent monitoring of resource-intensive builds. The downside is that you now have to search through your logs to find where these values were output, and keep track of them if they appear in multiple places.&lt;/p&gt;

&lt;h3&gt;
  
  
  Utilizing an observability tool
&lt;/h3&gt;

&lt;p&gt;Looking at logs is simple enough if you log this data in only a few places. But you may want to collect it more than once, which means searching through GitHub's logs to find every instance of these values. While identifying the failure-inducing error in GitHub's logs works rather well, viewing the more mundane logs is tedious and slow: each non-errored step starts collapsed, and its logs don't appear in search results until you expand the step and re-run the filter. To speed this up, I typically view the raw logs so I can search through them more quickly, though that forgoes a lot of the nice formatting that makes the logs easier to parse. A more practical alternative to trudging through these logs is to send this usage data as metrics to an external observability platform.&lt;/p&gt;

&lt;h4&gt;
  
  
  Report point-in-time resource usage to an external observability tool
&lt;/h4&gt;

&lt;p&gt;If you want these same point-in-time metrics to be viewable somewhere beyond your GitHub logs, you can collect and send them to an external observability tool like DataDog. For GitHub-hosted runners, this means capturing system metrics (memory usage, CPU load, disk utilization) at key moments, such as before builds, after tests, or periodically during long-running processes, and posting them to the platform's HTTP API during workflow execution.&lt;/p&gt;

&lt;p&gt;Point-in-time metrics are often enough for identifying resource bottlenecks, tracking performance trends over time, and correlating system state with job failures. The implementation is straightforward: collect metrics using standard Unix commands, format them as JSON, and send them via curl to DataDog's series endpoint with appropriate tags for filtering and analysis.&lt;/p&gt;

&lt;p&gt;Here's a simple example that captures and sends memory utilization during a workflow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Send metrics to DataDog&lt;/span&gt;
  &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;# Simple curl commands&lt;/span&gt;
    &lt;span class="s"&gt;MEM_USED=$(free | grep Mem | awk '{print ($3/$2) * 100.0}')&lt;/span&gt;
    &lt;span class="s"&gt;curl -X POST "https://api.datadoghq.com/api/v1/series" \&lt;/span&gt;
      &lt;span class="s"&gt;-H "DD-API-KEY: ${{ secrets.DD_API_KEY }}" \&lt;/span&gt;
      &lt;span class="s"&gt;-H "Content-Type: application/json" \&lt;/span&gt;
      &lt;span class="s"&gt;-d '{&lt;/span&gt;
        &lt;span class="s"&gt;"series": [{&lt;/span&gt;
          &lt;span class="s"&gt;"metric": "github.actions.memory.percent",&lt;/span&gt;
          &lt;span class="s"&gt;"points": [[$(date +%s), $MEM_USED]],&lt;/span&gt;
          &lt;span class="s"&gt;"tags": ["repo:${{ github.repository }}"]&lt;/span&gt;
        &lt;span class="s"&gt;}]&lt;/span&gt;
      &lt;span class="s"&gt;}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Agent on a self-hosted runner for continuous usage metrics
&lt;/h4&gt;

&lt;p&gt;If you're using GitHub-hosted runners, you're limited to point-in-time metrics collected during job execution. Self-hosted runners, however, allow continuous monitoring by installing observability agents directly on the runner infrastructure. With a DataDog agent running on your self-hosted runners, for example, you get baseline system metrics (CPU, memory, disk, network) collected every 30-60 seconds, which provides much richer insight into runner performance and resource utilization.&lt;/p&gt;

&lt;p&gt;Install the agent on your runner:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;DD_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;your_api_key bash &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;curl &lt;span class="nt"&gt;-L&lt;/span&gt; https://s3.amazonaws.com/dd-agent/scripts/install_script.sh&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add a step to your workflow to tag the metrics emitted from the agent:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Tag DataDog metrics for this job&lt;/span&gt;
  &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;# Add GitHub-specific tags to the runner's metrics&lt;/span&gt;
    &lt;span class="s"&gt;curl -X POST "https://api.datadoghq.com/api/v1/tags/hosts/$HOSTNAME" \&lt;/span&gt;
      &lt;span class="s"&gt;-H "Content-Type: application/json" \&lt;/span&gt;
      &lt;span class="s"&gt;-H "DD-API-KEY: ${{ secrets.DD_API_KEY }}" \&lt;/span&gt;
      &lt;span class="s"&gt;-d '{&lt;/span&gt;
        &lt;span class="s"&gt;"tags": [&lt;/span&gt;
          &lt;span class="s"&gt;"github_job:${{ github.job }}",&lt;/span&gt;
          &lt;span class="s"&gt;"github_repo:${{ github.repository }}",&lt;/span&gt;
          &lt;span class="s"&gt;"github_run_id:${{ github.run_id }}"&lt;/span&gt;
        &lt;span class="s"&gt;]&lt;/span&gt;
      &lt;span class="s"&gt;}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This provides a lot of data for identifying issues, but it's a heavier-handed approach than most teams need: you have to manage your own self-hosted runners and adopt an external observability tool. If you aren't already doing both, this is likely far too heavy and expensive an approach for debugging CI failures.&lt;/p&gt;

&lt;h2&gt;
  
  
  Our approach: built-in observability
&lt;/h2&gt;

&lt;p&gt;Like the last option, we use an agent to collect job data. Our setup deploys an OpenTelemetry (OTEL) collector directly on &lt;a href="https://depot.dev/products/github-actions" rel="noopener noreferrer"&gt;Depot action runners&lt;/a&gt; that continuously captures observability data while the job executes. We store the following to be graphed and viewed in the Depot app:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Memory utilization metrics with sub-second granularity throughout the job lifecycle&lt;/li&gt;
&lt;li&gt;CPU utilization metrics with detailed timing information&lt;/li&gt;
&lt;li&gt;Complete job logs with step associations&lt;/li&gt;
&lt;li&gt;Process-level resource utilization for granular visibility&lt;/li&gt;
&lt;li&gt;Out of memory (OOM) events with detailed timing information&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The goal of this approach is to collect the information that commonly reveals the problem, without requiring users to change their workflows or to adopt an observability or data visualization tool, which can be prohibitively expensive, just to access the data. Out-of-memory indicators give very clear feedback that the problem is memory constraints. Peak CPU and memory utilization show whether either resource is constraining the system and is ultimately the culprit behind the job failure or slowdown. Process-level metrics for a particular step let you identify which process is hogging resources and causing problems. Job logs and their timings help you pinpoint where other slowdowns occurred, while also providing an easier interface for searching through and identifying relevant events.&lt;/p&gt;

&lt;h2&gt;
  
  
  Taking action
&lt;/h2&gt;

&lt;p&gt;Now that you have data to identify what is actually causing the failures or slowdowns, you can take targeted action to alleviate the issue.&lt;/p&gt;

&lt;h3&gt;
  
  
  High memory usage
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Immediate solutions
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Upgrade runner types: Use a larger runner with more memory. This has the tradeoff of being more expensive.&lt;/li&gt;
&lt;li&gt;Implement build caching: Cache dependencies and build artifacts to reduce memory overhead during subsequent runs.&lt;/li&gt;
&lt;li&gt;Set memory limits: Prevent memory runaway by only allowing a limited amount of memory for a particular process.&lt;/li&gt;
&lt;/ul&gt;
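The memory-limit option above can be sketched as a workflow step. This is a minimal sketch; the 4 GB values and the Node.js focus are assumptions for illustration, so tune them for your runner and toolchain:

```shell
# Sketch: cap memory for a build step (values are assumptions; tune for your runner).
# Limit the Node.js heap to 4 GB for any node process launched in this step:
export NODE_OPTIONS="--max-old-space-size=4096"
# Cap virtual memory for subsequent commands in this shell (in KB, roughly 4 GB):
ulimit -v 4194304
echo "NODE_OPTIONS=$NODE_OPTIONS"
```

With limits like these, a runaway process fails fast with an out-of-memory error instead of stalling the whole runner, which makes the culprit much easier to spot in the logs.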

&lt;h4&gt;
  
  
  Optimize resource-heavy operations
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Docker builds: Use multi-stage builds and layer caching to reduce memory requirements.&lt;/li&gt;
&lt;li&gt;Large jobs: Break monolithic jobs into smaller sequential jobs that consume less memory.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Identify memory leaks
&lt;/h4&gt;

&lt;p&gt;If you see sudden memory spikes between "before" and "after" measurements, investigate:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Test suites that don't properly clean up resources&lt;/li&gt;
&lt;li&gt;Build processes that accumulate temporary files&lt;/li&gt;
&lt;li&gt;Applications that don't release memory between operations&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  CPU bottlenecks
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Immediate solutions
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Upgrade runner types: Use a larger runner with more CPU cores. This has the tradeoff of being more expensive.&lt;/li&gt;
&lt;li&gt;Limit parallel operations: Avoid overwhelming the runner's CPU cores.&lt;/li&gt;
&lt;/ul&gt;
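One way to limit parallel operations is to derive the worker count from the runner's actual core count instead of hard-coding a high value. A minimal sketch, where the commented tool flags are assumptions for your particular build system:

```shell
# Derive parallelism from the runner's core count rather than a fixed number.
JOBS=$(nproc)
echo "Running with $JOBS parallel jobs"
# Illustrative usages (flags are assumptions; check your tools' docs):
#   make -j"$JOBS" build
#   npx jest --maxWorkers="$JOBS"
```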

&lt;h4&gt;
  
  
  Split and distribute work
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Job matrices: Use matrix strategies to distribute CPU-intensive work across multiple runners simultaneously.&lt;/li&gt;
&lt;li&gt;Sequential jobs: Chain dependent jobs instead of running everything in parallel to reduce peak CPU load.&lt;/li&gt;
&lt;/ul&gt;
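As a sketch of the matrix approach, this hypothetical workflow fans a CPU-heavy test suite out across four runners; the `shard` values and the test command are assumptions for illustration:

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        shard: [1, 2, 3, 4]   # each shard runs on its own runner
    steps:
      - uses: actions/checkout@v4
      - run: npm test -- --shard=${{ matrix.shard }}/4
```

Each shard gets a full runner's worth of CPU, so peak load per machine drops even though total work is unchanged.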

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;When job failures happen, they can be frustrating and time-consuming. Debugging them means understanding the context so you can separate the real cause from the noise. The key to effective GitHub Actions debugging is picking the right strategy for your team’s scale and needs.&lt;/p&gt;

&lt;p&gt;Start simple with logging for one-off issues, and as your CI/CD usage grows, consider adding an external observability tool.&lt;/p&gt;

&lt;p&gt;If you're already using Depot as your job runner, you get &lt;a href="https://depot.dev/blog/introducing-github-job-details-observability-for-your-cicd-pipeline" rel="noopener noreferrer"&gt;built-in metrics in the job details&lt;/a&gt; to kickstart this investigation. The next time a job fails unexpectedly, you'll be equipped to pinpoint whether it's a code issue or a resource constraint and fix it faster.&lt;/p&gt;

&lt;h3&gt;
  
  
  FAQ
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Why do GitHub Actions jobs fail even when my code hasn't changed?&lt;/strong&gt; Because many failures are caused by the environment, not your code. Common causes include resource limits (memory, CPU, disk), flaky network dependencies, outdated cache layers, or GitHub-hosted runner instability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How can I tell if the failure is a real bug or just a flaky runner?&lt;/strong&gt; Start by checking the logs for consistent errors. If rerunning the job produces different results or passes without changes, it's likely a flaky issue caused by resource constraints, timing, or network hiccups.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What's the simplest way to debug a flaky job?&lt;/strong&gt; Add lightweight resource logging (&lt;code&gt;free -h&lt;/code&gt;, &lt;code&gt;df -h&lt;/code&gt;, &lt;code&gt;uptime&lt;/code&gt;) around critical steps and compare runs. This shows whether memory or CPU is maxing out. For deeper analysis, use an observability tool or a runner with built-in metrics (like Depot).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What's the fastest fix for 'No space left on device'?&lt;/strong&gt; Use &lt;code&gt;docker system prune&lt;/code&gt; to clean up unused layers, remove unnecessary artifacts, and ensure &lt;code&gt;.dockerignore&lt;/code&gt; excludes large files. If space is still tight, upgrade the runner size.&lt;/p&gt;
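A minimal sketch of that cleanup, guarded so it is a no-op where Docker is not installed:

```shell
# Reclaim disk space on a runner before a build.
if command -v docker >/dev/null 2>&1; then
  docker system prune -af --volumes   # remove unused images, containers, volumes, build cache
fi
df -h /                               # confirm free space afterward
```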

&lt;p&gt;Authored by Iris @ Depot&lt;/p&gt;

</description>
      <category>github</category>
      <category>webdev</category>
      <category>tutorial</category>
      <category>learning</category>
    </item>
    <item>
      <title>How we automated GitHub Actions Runner updates with Claude</title>
      <dc:creator>The Charmizard</dc:creator>
      <pubDate>Thu, 17 Jul 2025 00:41:26 +0000</pubDate>
      <link>https://forem.com/depot/how-we-automated-github-actions-runner-updates-with-claude-3eo3</link>
      <guid>https://forem.com/depot/how-we-automated-github-actions-runner-updates-with-claude-3eo3</guid>
      <description>&lt;p&gt;We recently launched &lt;a href="https://depot.dev/blog/now-available-claude-code-sessions-in-depot" rel="noopener noreferrer"&gt;Claude Code Sessions in Depot&lt;/a&gt;, a feature that allows you to share Claude code sessions with both developers and your CI workflows. In our previous blog post, we noted that "we've been using Claude Code at Depot since pretty much the moment it dropped," but we didn't elaborate on &lt;em&gt;how&lt;/em&gt;. This article will demonstrate one of our most valuable CI uses: consistently keeping our forks updated.&lt;/p&gt;

&lt;h2&gt;
  
  
  Background
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Problem: We waste hours updating the GHA upstream image
&lt;/h3&gt;

&lt;p&gt;All of our GHA runners run on their own isolated EC2 instances, so we need to build a machine image, called an AMI, that these runners can load. This sounds like it'd be pretty easy! GitHub keeps the definitions for their runner images &lt;a href="https://github.com/actions/runner-images" rel="noopener noreferrer"&gt;open source&lt;/a&gt;, so it'd just be a matter of modifying their source to work with our runners and then building the AMI. Unfortunately, it's not quite that simple. We make a large number of modifications to the image to improve performance, and on top of that, GitHub's runners run on Azure while ours run on AWS. So instead of using their source as-is, we need to modify it to work with our AMI build software. We do this by keeping upstream in a git submodule and running many of their scripts with a patch file that adapts them to our runners. This is a lot of work, and it takes a lot of time to keep up with their changes.&lt;/p&gt;

&lt;p&gt;Keeping our fork in sync with upstream requires a pretty significant amount of developer effort that compounds over time. Each pull from upstream involves pulling in dozens of commits, reviewing every change for compatibility issues, and ensuring that everything introduced by upstream will be compatible with our existing runner software.&lt;/p&gt;

&lt;p&gt;It isn't just the raw time investment that's a problem; it's the cognitive load of context switching from deep, intensive work to this task. Kyle, our co-founder and CEO here at Depot, talks about this frequently: context switching is one of the biggest hidden killers of developer productivity. It shatters whatever flow you had going, and quite frankly, it just doesn't feel great.&lt;/p&gt;

&lt;p&gt;For many months now, we've been wanting to automate this process. Generative AI is the hot new thing, and distilling large amounts of input into a smaller, focused output is one area where LLMs really shine.&lt;/p&gt;

&lt;h2&gt;
  
  
  What we needed Claude to do
&lt;/h2&gt;

&lt;p&gt;There are two main issues we want to tackle here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Keeping a patch file for arm64 up to date&lt;/li&gt;
&lt;li&gt;Knowing what changed between each update&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The first is keeping a file called &lt;code&gt;arm64.patch&lt;/code&gt; up to date. GitHub's runner images are built for x86_64, and we need to modify them to work on arm64; this patch file contains all the modifications required to get the upstream source building for our arm machine images. The second is knowing what actually changed since our last update. Many commits land in the upstream repository, but not all of them are relevant to us, so we need to quickly identify which changes are important or potentially breaking and which ones we can ignore. Using &lt;code&gt;depot claude&lt;/code&gt;, we can make this process pretty smooth with a CI workflow that consistently keeps us up to date!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution: Using Depot Claude to summarize and analyze changes&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Generating the arm64 patch
&lt;/h2&gt;

&lt;p&gt;Let's get the more tedious work out of the way first: regenerating our arm64 patches. Thinking about what we need Claude to do, we need it to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Attempt to apply the existing arm64 patch file&lt;/li&gt;
&lt;li&gt;If it fails, modify the patch file to apply cleanly, keeping in mind the goal of ensuring this new machine image will build and run correctly on arm64&lt;/li&gt;
&lt;li&gt;Repeat the first two tasks until the patch applies cleanly&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here's the actual prompt we're using as of writing, with a little bash to set up the context for Claude:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;'
# ARM64 Patch Conflict Resolution

## Context
We maintain ARM64 patches that modify the upstream GitHub Actions runner image to work on ARM64 architecture. These patches are ESSENTIAL and must be preserved. The patches failed to apply cleanly after an upstream update, and I need you to fix the conflicts while keeping all the ARM64 modifications.

## CRITICAL: YOU MUST MODIFY THE PATCH TO WORK FOR ARM64
The ARM64 patches are necessary for the runner to work on ARM64. You must:
1. Keep ALL existing ARM64 modifications
2. Only adjust line numbers or context to match the new upstream code
3. NEVER suggest removing a patch file or patch content
4. ALL files that are patched MUST remain patched
5. **CRITICAL**: The new patch MUST make it so that, post-patch, things will run for ARM64

## Your Task
You need to modify the arm64/arm64.patch file to resolve conflicts AND ensure ARM64 functionality. The patch application is failing, which means the line numbers or context in the patch no longer match the upstream files.

**IMPORTANT**: You are NOT just copying the patch file. You must MODIFY it to ensure ARM64 compatibility.

To fix this:
1. Look at the current arm64/arm64.patch file
2. Check the upstream files that are being patched to understand how they've changed
3. Update the patch file with the correct line numbers and context
4. Ensure all ARM64-specific changes are preserved and functional:
   - Change amd64 → arm64, x86_64 → aarch64
   - Update download URLs to ARM64 versions
   - Modify tool configurations for ARM64 (e.g., JAVA_HOME paths)
   - Remove/disable features that don't work on ARM64
   - Ensure binary installations use ARM64 binaries
5. **VERIFY**: The final patch must result in a working ARM64 runner image

## Failed Files
The following issues were detected:
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$failed_files&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;'

Please update the arm64/arm64.patch file to fix these conflicts. Remember:
- Keep ALL ARM64 modifications
- Only update line numbers and context
- Test that the patch applies cleanly
- DO NOT remove any patches or suggest workarounds that would lose ARM64 functionality
- **CRITICAL**: The new patch MUST make it so that, post-patch, things will run for ARM64
- You are MODIFYING the patch to ensure ARM64 compatibility, not just copying it
- You may not consider the task complete until `make` runs without issue
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We wrap &lt;code&gt;depot claude&lt;/code&gt; in a shell script that sets up the needed context, checks whether the patch file needs to be modified at all, and then runs the prompt. Once Claude modifies the patch file, we run it through our build process to verify that it applies correctly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;resolve_conflicts&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    print_message &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$YELLOW&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"Attempting automatic conflict resolution with depot claude..."&lt;/span&gt;
    &lt;span class="c"&gt;# grabs the error output from attempting to apply patches&lt;/span&gt;
    &lt;span class="nv"&gt;make_output&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;make 2&amp;gt;&amp;amp;1 &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;true&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

    &lt;span class="nv"&gt;failed_files&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;make_output&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

    &lt;span class="nv"&gt;prompt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;create_prompt &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$failed_files&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

    print_message &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$YELLOW&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"Running depot claude to resolve conflicts..."&lt;/span&gt;

    &lt;span class="nv"&gt;session_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"arm64-patch-&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt; +%Y%m%d-%H%M%S&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; depot claude &lt;span class="nt"&gt;--model&lt;/span&gt; opus &lt;span class="nt"&gt;--org&lt;/span&gt; cl0wyyk6k39487ebgraxasinja &lt;span class="nt"&gt;--session-id&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$session_id&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;--allowedTools&lt;/span&gt; &lt;span class="s2"&gt;"Read Edit Write Grep Glob Bash"&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$prompt&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
        &lt;/span&gt;print_message &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$RED&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"depot claude failed to run"&lt;/span&gt;
        &lt;span class="k"&gt;return &lt;/span&gt;1
    &lt;span class="k"&gt;fi

    &lt;/span&gt;print_message &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$YELLOW&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"Testing if conflicts were resolved..."&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;apply_arm64_patches&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
        &lt;/span&gt;print_message &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$GREEN&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"Conflicts resolved successfully!"&lt;/span&gt;
        &lt;span class="k"&gt;if &lt;/span&gt;generate_patch_file&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
            &lt;/span&gt;print_message &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$GREEN&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"Patch file updated successfully!"&lt;/span&gt;
            &lt;span class="k"&gt;return &lt;/span&gt;0
        &lt;span class="k"&gt;else
            &lt;/span&gt;print_message &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$RED&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"Failed to generate patch file after conflict resolution"&lt;/span&gt;
            &lt;span class="k"&gt;return &lt;/span&gt;1
        &lt;span class="k"&gt;fi
    else
        &lt;/span&gt;print_message &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$RED&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"Conflicts still exist after automatic resolution attempt."&lt;/span&gt;
        &lt;span class="k"&gt;return &lt;/span&gt;1
    &lt;span class="k"&gt;fi&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One of the really nice things about using &lt;code&gt;depot claude&lt;/code&gt; for this kind of task is that, after a workflow run, we can take a look at &lt;em&gt;why&lt;/em&gt; it made certain decisions. That lets us tweak the prompt as needed to make further improvements.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;⏺ Now &lt;span class="nb"&gt;let &lt;/span&gt;me check the upstream install-kubernetes-tools.sh file to see what has changed:

⏺ Read&lt;span class="o"&gt;(&lt;/span&gt;upstream/images/ubuntu/scripts/build/install-kubernetes-tools.sh&lt;span class="o"&gt;)&lt;/span&gt;
  ⎿  Read 50 lines &lt;span class="o"&gt;(&lt;/span&gt;ctrl+r to &lt;span class="nb"&gt;expand&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;

⏺ Update Todos
  ⎿  ☒ Examine the current arm64/arm64.patch file
     ☒ Check the failing file install-kubernetes-tools.sh &lt;span class="k"&gt;in &lt;/span&gt;upstream
     ☐ Analyze the reject file to understand the conflict
     ☐ Update the arm64.patch file with correct context and line numbers
     ☐ Test that the patch applies cleanly
     ☐ Run make to verify everything works

⏺ Let me check the reject file to understand exactly what failed:

⏺ Read&lt;span class="o"&gt;(&lt;/span&gt;arm64/generated/scripts/build/install-kubernetes-tools.sh.rej&lt;span class="o"&gt;)&lt;/span&gt;
  ⎿  Read 20 lines &lt;span class="o"&gt;(&lt;/span&gt;ctrl+r to &lt;span class="nb"&gt;expand&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;

⏺ I can see the issue. The patch is trying to apply changes &lt;span class="k"&gt;for &lt;/span&gt;minikube v1.34.0 with specific
  pinned version, but the upstream file now uses &lt;span class="s2"&gt;"latest"&lt;/span&gt; version with a different download pattern.
   Let me update the patch file to match the new upstream structure:

⏺ Update Todos
  ⎿  ☒ Examine the current arm64/arm64.patch file
     ☒ Check the failing file install-kubernetes-tools.sh &lt;span class="k"&gt;in &lt;/span&gt;upstream
     ☒ Analyze the reject file to understand the conflict
     ☐ Update the arm64.patch file with correct context and line numbers
     ☐ Test that the patch applies cleanly
     ☐ Run make to verify everything works

⏺ Now I&lt;span class="s1"&gt;'ll update the arm64.patch file to fix the kubernetes-tools section:
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, we run this bash script in a larger GHA workflow. It's set to run daily to ensure that we always have the latest patch file ready to go:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# this is a simplified version of the actual workflow file&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Update Upstream&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;schedule&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;cron&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;0&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;17&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*'&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;update-upstream&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;depot-ubuntu-latest-small&lt;/span&gt;
    &lt;span class="na"&gt;permissions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;contents&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;write&lt;/span&gt;
      &lt;span class="na"&gt;pull-requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;write&lt;/span&gt;

    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="c1"&gt;# ... a few setup steps&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Update upstream submodule&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;cd "$WORKTREE_DIR"&lt;/span&gt;
          &lt;span class="s"&gt;git submodule update --init --force&lt;/span&gt;

          &lt;span class="s"&gt;CURRENT_COMMIT=$(git submodule status upstream | awk '{print $1}' | sed 's/^[+-]//')&lt;/span&gt;
          &lt;span class="s"&gt;echo "Current submodule commit: $CURRENT_COMMIT"&lt;/span&gt;

          &lt;span class="s"&gt;cd upstream&lt;/span&gt;
          &lt;span class="s"&gt;git fetch origin main&lt;/span&gt;
          &lt;span class="s"&gt;LATEST_COMMIT=$(git rev-parse origin/main)&lt;/span&gt;
          &lt;span class="s"&gt;echo "Latest upstream commit: $LATEST_COMMIT"&lt;/span&gt;

          &lt;span class="s"&gt;if [ "$CURRENT_COMMIT" != "$LATEST_COMMIT" ]; then&lt;/span&gt;
            &lt;span class="s"&gt;echo "Update needed: $CURRENT_COMMIT -&amp;gt; $LATEST_COMMIT"&lt;/span&gt;
            &lt;span class="s"&gt;git checkout main&lt;/span&gt;
            &lt;span class="s"&gt;git pull origin main&lt;/span&gt;
            &lt;span class="s"&gt;cd ..&lt;/span&gt;
            &lt;span class="s"&gt;git add upstream&lt;/span&gt;
            &lt;span class="s"&gt;echo "CHANGES_EXIST=true" &amp;gt;&amp;gt; $GITHUB_ENV&lt;/span&gt;
          &lt;span class="s"&gt;else&lt;/span&gt;
            &lt;span class="s"&gt;echo "Submodule already up to date"&lt;/span&gt;
            &lt;span class="s"&gt;cd ..&lt;/span&gt;
            &lt;span class="s"&gt;echo "CHANGES_EXIST=false" &amp;gt;&amp;gt; $GITHUB_ENV&lt;/span&gt;
          &lt;span class="s"&gt;fi&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Generate ARM64 patches&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;GH_TOKEN&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.GITHUB_TOKEN }}&lt;/span&gt;
          &lt;span class="na"&gt;DEPOT_TOKEN&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.DEPOT_TOKEN }}&lt;/span&gt;
          &lt;span class="na"&gt;ANTHROPIC_API_KEY&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.ANTHROPIC_API_KEY }}&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;cd "$WORKTREE_DIR"&lt;/span&gt;
          &lt;span class="s"&gt;bash .github/scripts/generate-arm64-patches.sh&lt;/span&gt;

          &lt;span class="s"&gt;if [ "$(git status --porcelain arm64/ x86/ | wc -l)" -gt "0" ]; then&lt;/span&gt;
            &lt;span class="s"&gt;echo "ARM64_PATCHES_UPDATED=true" &amp;gt;&amp;gt; $GITHUB_ENV&lt;/span&gt;
          &lt;span class="s"&gt;else&lt;/span&gt;
            &lt;span class="s"&gt;echo "ARM64_PATCHES_UPDATED=false" &amp;gt;&amp;gt; $GITHUB_ENV&lt;/span&gt;
          &lt;span class="s"&gt;fi&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Commit ARM64 patches&lt;/span&gt;
        &lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;env.ARM64_PATCHES_UPDATED == 'true'&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;cd "$WORKTREE_DIR"&lt;/span&gt;
          &lt;span class="s"&gt;git add arm64/ x86/&lt;/span&gt;

          &lt;span class="s"&gt;COMMIT_MSG="Update ARM64 patches for upstream changes"$'\n\n'"- Applied patches using make"$'\n'"- Regenerated patches using make generate-patch"$'\n'"- Resolved any patch conflicts automatically"&lt;/span&gt;

          &lt;span class="s"&gt;git commit -m "$COMMIT_MSG"&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Push changes&lt;/span&gt;
        &lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;env.CHANGES_EXIST == 'true'&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;cd "$WORKTREE_DIR"&lt;/span&gt;
          &lt;span class="s"&gt;git push -u origin $BRANCH_NAME&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is already a &lt;strong&gt;major&lt;/strong&gt; improvement to the productivity of Depot developers. Testing, iterating on, and applying the patch file was not only time-consuming but also required a lot of context switching. Now we can simply run this script and let Depot Claude handle the heavy lifting, with human developers verifying Claude's output and making any necessary adjustments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Analyzing breaking changes
&lt;/h2&gt;

&lt;p&gt;Now let's go even further. The real heavy lifting in updating &lt;code&gt;upstream&lt;/code&gt; is reviewing all of the changes it brings in. Most are just version bumps or other minor details. The more serious changes can include:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Upgrades to the Linux kernel or other system libraries&lt;/li&gt;
&lt;li&gt;Buildkit or Buildx upgrades&lt;/li&gt;
&lt;li&gt;Major version upgrades of commonly used packages&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Given this context, being able to ask Depot Claude to summarize the changes and analyze them for potential breaking changes could be a huge time saver. This would still, of course, require a human touch to verify that the changes are acceptable, but it would allow us to focus on the most important changes rather than having to sift through every single line of the diff.&lt;/p&gt;

&lt;p&gt;One interesting problem is that there have been dozens of upstream changes in the past that caused issues. Constantly updating a prompt in git to account for each new one would be a pain. Instead, we can create an initial session file that all subsequent breaking-change analyses build on.&lt;/p&gt;
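
&lt;p&gt;As a rough sketch, assembling that list of past issues from git history might look like the following. The &lt;code&gt;fix:&lt;/code&gt; commit-message convention and the &lt;code&gt;arm64/&lt;/code&gt; and &lt;code&gt;x86/&lt;/code&gt; paths are assumptions based on the fix commits referenced in this post:&lt;/p&gt;

```shell
# Hedged sketch: collect past fix commits that touched the patch
# directories, so the initial Claude session can be seeded with known
# failure modes. The "fix" prefix and the arm64/ and x86/ paths are
# assumptions, not our exact tooling.
git log --oneline --no-decorate --grep='^fix' -- arm64/ x86/ \
  > previous-issues.txt

echo "Collected $(wc -l < previous-issues.txt) past fixes"
```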

&lt;p&gt;We already have a nice list of every issue that's been introduced in our Git history, so let's go ahead and feed that to a new &lt;code&gt;depot claude&lt;/code&gt; session:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;depot claude &lt;span class="nt"&gt;--session-id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;update-upstream

╭───────────────────────────────────────────────────╮
│ ✻ Welcome to Claude Code!                         │
│                                                   │
│   /help &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="nb"&gt;help&lt;/span&gt;, /status &lt;span class="k"&gt;for &lt;/span&gt;your current setup  │
│                                                   │
│   cwd: /Users/billy/Work/github-actions-image     │
╰───────────────────────────────────────────────────╯

 ※ Tip: Run claude &lt;span class="nt"&gt;--continue&lt;/span&gt; or claude &lt;span class="nt"&gt;--resume&lt;/span&gt; to resume a conversation

&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Here is a list of all previous upstream changes that have caused issues:
...
&lt;span class="k"&gt;**&lt;/span&gt;Upstream Change&lt;span class="k"&gt;**&lt;/span&gt;: Updated Docker Compose from 2.35.1 to 2.36.0/2.36.2
&lt;span class="k"&gt;**&lt;/span&gt;Issue&lt;span class="k"&gt;**&lt;/span&gt;: Version mismatch between different architectures and platforms
&lt;span class="k"&gt;**&lt;/span&gt;Fixes&lt;span class="k"&gt;**&lt;/span&gt;:
- &lt;span class="sb"&gt;`&lt;/span&gt;a341ec95&lt;span class="sb"&gt;`&lt;/span&gt; - fix: update ubuntu 22.04 docker compose to 2.36.0
- &lt;span class="sb"&gt;`&lt;/span&gt;711e1cf5&lt;span class="sb"&gt;`&lt;/span&gt; - fix: upgrade x86 docker compose to 2.36.0
- &lt;span class="sb"&gt;`&lt;/span&gt;9a0ad0d7&lt;span class="sb"&gt;`&lt;/span&gt; - fix: apply docker compose 2.36.0 to x86
- &lt;span class="sb"&gt;`&lt;/span&gt;4317f9c9&lt;span class="sb"&gt;`&lt;/span&gt; - fix: update arm64 and x86 patches &lt;span class="o"&gt;(&lt;/span&gt;updated to 2.36.2&lt;span class="o"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;**&lt;/span&gt;Issue&lt;span class="k"&gt;**&lt;/span&gt;: PowerShell execution permissions broken since 7.4.2
&lt;span class="k"&gt;**&lt;/span&gt;Fix&lt;span class="k"&gt;**&lt;/span&gt;: &lt;span class="sb"&gt;`&lt;/span&gt;1c87a95f&lt;span class="sb"&gt;`&lt;/span&gt; - fix: pwsh issue since 7.4.2 by changing execution permissions
 ...
 Do you understand so far?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we'll ask Depot Claude to summarize the changes in the upstream repository, and analyze them for potential breaking changes based on previous issues. We'll also ask it to provide its response in JSON, so that we can easily parse it later:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PROMPT_FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;'
Analyze the upstream changes and identify any breaking changes that might affect users of this GitHub Actions runner image.

Please respond with ONLY a JSON object (no markdown, no explanations before or after) in this format:
{
  "breaking_changes": [
    {
      "file": "filename",
      "description": "Brief description of the breaking change",
      "impact": "How this affects users",
      "mitigation": "How users can adapt"
    }
  ],
  "summary": "One-line summary of the analysis"
}

Changed files:
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$CHANGED_FILES&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PROMPT_FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="s2"&gt;Commit messages:"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PROMPT_FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$COMMIT_MESSAGES&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PROMPT_FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="s2"&gt;Diff content:"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PROMPT_FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DIFF_CONTENT&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PROMPT_FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We want &lt;code&gt;depot claude&lt;/code&gt; to start from the previous session we created, but save its output (and thus the analysis we can review later) to a new session. We can do this by combining the &lt;code&gt;--session-id&lt;/code&gt; and &lt;code&gt;--resume&lt;/code&gt; flags:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;depot claude &lt;span class="nt"&gt;--resume&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;update-upstream &lt;span class="nt"&gt;--session-id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"breaking-changes-&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt; +%Y%m%d&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;--model&lt;/span&gt; opus &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="nv"&gt;$PROMPT_FILE&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice that we're resuming from the &lt;code&gt;update-upstream&lt;/code&gt; session we made earlier, but saving to a new &lt;code&gt;breaking-changes&lt;/code&gt; session.&lt;/p&gt;

&lt;p&gt;Next, we'll take that JSON output and turn it into an easy-to-read text summary:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;JSON_CONTENT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;cat &lt;/span&gt;breaking-changes-analysis.json&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;JQ_OUTPUT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;mktemp&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;JQ_ERROR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;mktemp&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$JSON_CONTENT&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'
    if .breaking_changes | length &amp;gt; 0 then
        "## Breaking Changes Detected\n" +
        (.breaking_changes[] |
            "\n### " + .file +
            "\n**Impact:** " + .impact +
            "\n\n" + .description +
            "\n\n**Mitigation:** " + .mitigation + "\n"
        ) +
        "\n---\n**Summary:** " + .summary
    else
        .summary
    end
'&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$JQ_OUTPUT&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; 2&amp;gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$JQ_ERROR&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$JQ_OUTPUT&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; breaking-changes-summary.txt
    &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Breaking changes summary written to breaking-changes-summary.txt"&lt;/span&gt;
    &lt;span class="nb"&gt;cat &lt;/span&gt;breaking-changes-summary.txt
&lt;span class="k"&gt;else
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"jq processing failed:"&lt;/span&gt;
    &lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$JQ_ERROR&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Failed to process JSON with jq"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; breaking-changes-summary.txt
&lt;span class="k"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this new &lt;code&gt;breaking-changes-summary.txt&lt;/code&gt;, we can go ahead and commit those changes to a branch.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Check for breaking changes&lt;/span&gt;
&lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;breaking-changes&lt;/span&gt;
&lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;GH_TOKEN&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.GITHUB_TOKEN }}&lt;/span&gt;
    &lt;span class="na"&gt;DEPOT_TOKEN&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.DEPOT_TOKEN }}&lt;/span&gt;
    &lt;span class="na"&gt;ANTHROPIC_API_KEY&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.ANTHROPIC_API_KEY }}&lt;/span&gt;
&lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;cd "$WORKTREE_DIR"&lt;/span&gt;
    &lt;span class="s"&gt;bash .github/scripts/detect-breaking-changes.sh&lt;/span&gt;

    &lt;span class="s"&gt;echo "BREAKING_CHANGES&amp;lt;&amp;lt;EOF" &amp;gt;&amp;gt; $GITHUB_OUTPUT&lt;/span&gt;
    &lt;span class="s"&gt;cat breaking-changes-summary.txt &amp;gt;&amp;gt; $GITHUB_OUTPUT&lt;/span&gt;
    &lt;span class="s"&gt;echo "EOF" &amp;gt;&amp;gt; $GITHUB_OUTPUT&lt;/span&gt;

    &lt;span class="s"&gt;echo "BREAKING_CHANGES_SUMMARY&amp;lt;&amp;lt;EOF" &amp;gt;&amp;gt; $GITHUB_ENV&lt;/span&gt;
    &lt;span class="s"&gt;cat breaking-changes-summary.txt &amp;gt;&amp;gt; $GITHUB_ENV&lt;/span&gt;
    &lt;span class="s"&gt;echo "EOF" &amp;gt;&amp;gt; $GITHUB_ENV&lt;/span&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Commit upstream changes&lt;/span&gt;
&lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;cd "$WORKTREE_DIR"&lt;/span&gt;
    &lt;span class="s"&gt;git add upstream&lt;/span&gt;

    &lt;span class="s"&gt;CURRENT_SHA=$(git submodule status upstream | awk '{print $1}' | sed 's/^[+-]//')&lt;/span&gt;
    &lt;span class="s"&gt;NEW_SHA=$(cd upstream &amp;amp;&amp;amp; git rev-parse HEAD)&lt;/span&gt;

    &lt;span class="s"&gt;COMMIT_MSG="Update upstream repository"$'\n\n'"Updates upstream from $CURRENT_SHA to $NEW_SHA"$'\n\n'"Commits:"&lt;/span&gt;

    &lt;span class="s"&gt;# List all commits between old and new&lt;/span&gt;
    &lt;span class="s"&gt;cd upstream&lt;/span&gt;
    &lt;span class="s"&gt;COMMITS=$(git log --oneline --no-decorate $CURRENT_SHA..$NEW_SHA)&lt;/span&gt;
    &lt;span class="s"&gt;cd ..&lt;/span&gt;

    &lt;span class="s"&gt;COMMIT_MSG="${COMMIT_MSG}"$'\n'"${COMMITS}"&lt;/span&gt;

    &lt;span class="s"&gt;# Add information about breaking changes if any were found&lt;/span&gt;
    &lt;span class="s"&gt;if [ -n "$BREAKING_CHANGES_SUMMARY" ] &amp;amp;&amp;amp; [ "$BREAKING_CHANGES_SUMMARY" != "No significant breaking changes detected." ]; then&lt;/span&gt;
    &lt;span class="s"&gt;COMMIT_MSG="${COMMIT_MSG}"$'\n\n'"Breaking changes detected - see PR description for details"&lt;/span&gt;
    &lt;span class="s"&gt;fi&lt;/span&gt;

    &lt;span class="s"&gt;COMMIT_MSG="${COMMIT_MSG}"&lt;/span&gt;

    &lt;span class="s"&gt;git commit -m "$COMMIT_MSG"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We have some code that either creates a new PR or updates an existing one, depending on whether one already exists. The PR is kept in a draft state until a human engineer has time to review and merge the changes.&lt;/p&gt;
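
&lt;p&gt;A minimal sketch of that create-or-update step, using the &lt;code&gt;gh&lt;/code&gt; CLI. The &lt;code&gt;BRANCH_NAME&lt;/code&gt; variable and the title/body wiring are assumptions; the real workflow fills these in from earlier steps:&lt;/p&gt;

```shell
# Hedged sketch, not our exact workflow: look for an open PR on the
# branch, then either open a new draft PR or refresh the existing one.
EXISTING_PR="$(gh pr list --head "$BRANCH_NAME" --state open \
  --json number --jq '.[0].number')"

if [ -z "$EXISTING_PR" ]; then
  # No open PR for this branch yet: open one in draft state
  gh pr create --draft \
    --head "$BRANCH_NAME" \
    --title "Update upstream repository" \
    --body "$BREAKING_CHANGES_SUMMARY"
else
  # A PR already exists: update its description with the latest analysis
  gh pr edit "$EXISTING_PR" --body "$BREAKING_CHANGES_SUMMARY"
fi
```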

&lt;h2&gt;
  
  
  Fin
&lt;/h2&gt;

&lt;p&gt;What used to consume hours of developer time every week now runs automatically in the background. Our ARM64 patches stay fresh, breaking changes get flagged before they break anything, and we can focus on building features instead of maintaining forks.&lt;/p&gt;

&lt;p&gt;We're already looking at expanding this approach to the other forks we maintain, and all of us at Depot are excited about the improvements still to come. If you have a similar need to maintain forks, run and monitor many agents, or simply analyze your codebase, get started with &lt;code&gt;depot claude&lt;/code&gt; today!&lt;/p&gt;

&lt;p&gt;Author: Billy Batista @ Depot&lt;/p&gt;

</description>
      <category>githubactions</category>
      <category>github</category>
      <category>ai</category>
      <category>devops</category>
    </item>
    <item>
      <title>Depot Changelog: June 2025</title>
      <dc:creator>Kyle Galbraith</dc:creator>
      <pubDate>Thu, 10 Jul 2025 17:37:56 +0000</pubDate>
      <link>https://forem.com/depot/depot-changelog-june-2025-1ipk</link>
      <guid>https://forem.com/depot/depot-changelog-june-2025-1ipk</guid>
      <description>&lt;p&gt;We shipped some awesome new features and improvements in June. Things like our latest egress filtering capabilities, audit logging, and Windows runners. Here is everything we shipped&lt;/p&gt;

&lt;h2&gt;
  
  
  Egress filtering for GitHub Actions Runners
&lt;/h2&gt;

&lt;p&gt;We've shipped an awesome security feature to Depot GitHub Actions Runners. You can enable egress filtering to control exactly which IP addresses, hostnames, and CIDR ranges your GitHub Actions can talk to.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://depot.dev/blog/now-available-egress-filtering-for-github-actions-runners" rel="noopener noreferrer"&gt;Get all the details in our launch post&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Now available: Audit logging
&lt;/h2&gt;

&lt;p&gt;We've rolled out support for audit logging across Depot. This gives you fine-grained information about what actions are taken in your Depot organization.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://depot.dev/blog/now-available-audit-logging-for-improved-security" rel="noopener noreferrer"&gt;Read the announcement post&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Windows GitHub Actions runners are now GA
&lt;/h2&gt;

&lt;p&gt;We've completed all the work to make our Windows runners generally available to all organizations across Depot. You can see all of the nitty gritty details and runner labels for our &lt;a href="https://depot.dev/docs/github-actions/runner-types#windows-runners" rel="noopener noreferrer"&gt;Windows runners in our docs&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://depot.dev/blog/windows-github-actions-runners" rel="noopener noreferrer"&gt;You can also read our full launch post on our blog&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  depot cargo for faster Rust builds
&lt;/h2&gt;

&lt;p&gt;We released a new CLI command, &lt;code&gt;depot cargo&lt;/code&gt;, that automatically wraps your &lt;code&gt;cargo&lt;/code&gt; command with Depot Cache for dramatically faster Rust builds.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://depot.dev/changelog/2025-06-30-depot-cargo-command" rel="noopener noreferrer"&gt;Check out the changelog entry for how to use it&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Dependabot now runs on Depot GitHub Actions runners
&lt;/h2&gt;

&lt;p&gt;You can now run all of your Dependabot jobs on Depot GitHub Actions runners to take advantage of our Ultra Runners, faster caching, unlimited concurrency, and more.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://depot.dev/changelog/2025-06-24-dependabot-support" rel="noopener noreferrer"&gt;Check out our changelog entry for more details on how to enable it&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  And more good stuff...
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/depot/cli" rel="noopener noreferrer"&gt;depot CLI v2.88.0&lt;/a&gt; includes several bug fixes and new features&lt;/li&gt;
&lt;li&gt;Added support for &lt;code&gt;depot push&lt;/code&gt; to push without Docker config credentials -- &lt;a href="https://depot.dev/changelog/2025-06-10-depot-push-env-var-auth" rel="noopener noreferrer"&gt;more details in our changelog entry&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Fix for loading cache-only targets in &lt;code&gt;depot bake&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Improved documentation for building depot CLI from source&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>news</category>
      <category>showdev</category>
      <category>github</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
