<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Kyle Galbraith</title>
    <description>The latest articles on Forem by Kyle Galbraith (@kylegalbraith).</description>
    <link>https://forem.com/kylegalbraith</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F32075%2Fcd3410fa-21db-45fe-b9b8-5ecfcba3dab3.jpg</url>
      <title>Forem: Kyle Galbraith</title>
      <link>https://forem.com/kylegalbraith</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/kylegalbraith"/>
    <language>en</language>
    <item>
      <title>Depot Changelog: June 2025</title>
      <dc:creator>Kyle Galbraith</dc:creator>
      <pubDate>Thu, 10 Jul 2025 17:37:56 +0000</pubDate>
      <link>https://forem.com/depot/depot-changelog-june-2025-1ipk</link>
      <guid>https://forem.com/depot/depot-changelog-june-2025-1ipk</guid>
      <description>&lt;p&gt;We shipped some awesome new features and improvements in June. Things like our latest egress filtering capabilities, audit logging, and Windows runners. Here is everything we shipped&lt;/p&gt;

&lt;h2&gt;Egress filtering for GitHub Actions Runners&lt;/h2&gt;

&lt;p&gt;We've shipped an awesome security feature to Depot GitHub Actions Runners. You can enable egress filtering to control exactly which IP addresses, hostnames, and CIDR ranges your GitHub Actions can talk to.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://depot.dev/blog/now-available-egress-filtering-for-github-actions-runners" rel="noopener noreferrer"&gt;Get all the details in our launch post&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Now available: Audit logging&lt;/h2&gt;

&lt;p&gt;We've rolled out support for audit logging across Depot. This gives you fine-grained information about which actions are taken in your Depot organization.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://depot.dev/blog/now-available-audit-logging-for-improved-security" rel="noopener noreferrer"&gt;Read the announcement post&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Windows GitHub Actions runners are now GA&lt;/h2&gt;

&lt;p&gt;We've completed all the work to make our Windows runners generally available to all organizations across Depot. You can see all of the nitty-gritty details and runner labels for our &lt;a href="https://depot.dev/docs/github-actions/runner-types#windows-runners" rel="noopener noreferrer"&gt;Windows runners in our docs&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://depot.dev/blog/windows-github-actions-runners" rel="noopener noreferrer"&gt;You can also read our full launch post on our blog&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;depot cargo for faster Rust builds&lt;/h2&gt;

&lt;p&gt;We released a new CLI command, &lt;code&gt;depot cargo&lt;/code&gt;, that wraps your &lt;code&gt;cargo&lt;/code&gt; command with Depot Cache automatically for significantly faster Rust builds.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://depot.dev/changelog/2025-06-30-depot-cargo-command" rel="noopener noreferrer"&gt;Check out the changelog entry for how to use it&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Dependabot now runs on Depot GitHub Actions runners&lt;/h2&gt;

&lt;p&gt;You can now run all of your Dependabot jobs on Depot GitHub Actions runners to take advantage of our Ultra Runners, faster caching, unlimited concurrency, and more.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://depot.dev/changelog/2025-06-24-dependabot-support" rel="noopener noreferrer"&gt;Check out our changelog entry for more details on how to enable it&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;And more good stuff...&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/depot/cli" rel="noopener noreferrer"&gt;depot CLI v2.88.0&lt;/a&gt; includes several bug fixes and new features&lt;/li&gt;
&lt;li&gt;Added support for &lt;code&gt;depot push&lt;/code&gt; to push without Docker config credentials -- &lt;a href="https://depot.dev/changelog/2025-06-10-depot-push-env-var-auth" rel="noopener noreferrer"&gt;more details in our changelog entry&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Fixed loading cache-only targets in &lt;code&gt;depot bake&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Improved documentation for building the depot CLI from source&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>news</category>
      <category>showdev</category>
      <category>github</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Now available: Build autoscaling for everyone</title>
      <dc:creator>Kyle Galbraith</dc:creator>
      <pubDate>Wed, 02 Jul 2025 20:08:18 +0000</pubDate>
      <link>https://forem.com/depot/now-available-build-autoscaling-for-everyone-9fo</link>
      <guid>https://forem.com/depot/now-available-build-autoscaling-for-everyone-9fo</guid>
      <description>&lt;p&gt;When we first launched Depot, our goal was to make Docker image builds exponentially faster. Why? Because we experienced the absolute drudgery of waiting for container builds locally and in CI. The modern day equivalent of watching paint dry because saving and loading layer cache over networks negated all performance benefits of caching, and building multi-platform images required emulation, bringing builds to a crawl.&lt;/p&gt;

&lt;p&gt;So, we built the solution we had always wanted: a fast, shareable, and reliable container build service that could be used from any existing CI workflow or anywhere you were using &lt;code&gt;docker build&lt;/code&gt;. Today, Depot's container build service is used by thousands of developers to build Docker images faster than ever before, &lt;strong&gt;saving engineering teams tens of thousands of hours in build time every week&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;We don't bullshit about benchmarks; &lt;a href="https://depot.dev/benchmark/posthog" rel="noopener noreferrer"&gt;here is our benchmark for building PostHog's container images&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs3edolq61b4on09d5h9o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs3edolq61b4on09d5h9o.png" alt="Depot benchmark for PostHog container builds" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;What is the container build architecture?&lt;/h2&gt;

&lt;p&gt;Before we get into the details of how autoscaling works, it's worth understanding how Depot's container build service works &lt;strong&gt;without&lt;/strong&gt; it. When you run a container build, Depot runs an optimized version of BuildKit to process the build and cache layers to a persistent NVMe drive.&lt;/p&gt;

&lt;p&gt;Behind the scenes, the flow looks like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You run &lt;code&gt;depot build&lt;/code&gt;, which informs our control plane that you'd like to run a container build. We infer the architecture of the build based on the client architecture, or we look at the &lt;code&gt;--platform&lt;/code&gt; flag if it is specified.&lt;/li&gt;
&lt;li&gt;The control plane informs our provisioning system that a request for a container builder has been made for the specified platforms.&lt;/li&gt;
&lt;li&gt;The provisioning system spins up one or more BuildKit builders to process the build. The number of builders is determined by the number of platforms in the build request (i.e., a multi-platform build will spin up a builder for each platform, such as &lt;code&gt;linux/amd64&lt;/code&gt;, &lt;code&gt;linux/arm64&lt;/code&gt;, etc.).&lt;/li&gt;
&lt;li&gt;The authentication details and IP of the builders are returned to the client, and the &lt;code&gt;depot build&lt;/code&gt; command connects directly to the builder to run the build.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In this flow, each BuildKit builder is responsible for processing the build and caching layers to a persistent NVMe drive for the architecture it supports.&lt;/p&gt;

&lt;p&gt;By default, these BuildKit builders can process multiple jobs concurrently on the same host. This is a feature of BuildKit that enables deduplication of work across builds that share similar steps and layers.&lt;/p&gt;

&lt;p&gt;So in this model, multiple &lt;code&gt;depot build&lt;/code&gt; commands can run concurrently on the same BuildKit builder, and the builder will process them in parallel. This is great for most use cases, but it does have a limitation: &lt;strong&gt;the number of concurrent builds is limited by the resources of the single BuildKit builder&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;But one limitation has remained&lt;/h2&gt;

&lt;p&gt;One limitation has existed since the beginning: &lt;strong&gt;container builds could only run on a single BuildKit builder&lt;/strong&gt;. Today, we are excited to announce the general availability of &lt;strong&gt;container build autoscaling&lt;/strong&gt;, which removes that limitation. We'll also explain why you may not always want to use it.&lt;/p&gt;

&lt;h3&gt;How does container build autoscaling work?&lt;/h3&gt;

&lt;p&gt;With container build autoscaling, we can now automatically scale out your container builds to multiple BuildKit builders based on the number of concurrent builds you want to process on a single builder. This means that if you have a large number of concurrent builds, Depot will automatically spin up additional BuildKit builders to process them in parallel.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu9ylpn66d6bhxokp61tv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu9ylpn66d6bhxokp61tv.png" alt="Depot container build architecture with autoscaling" width="800" height="322"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We've added a step 5 to our previous flow:&lt;/p&gt;

&lt;ol start="5"&gt;
&lt;li&gt;The control plane and provisioning system automatically scale out additional BuildKit builders based on the number of concurrent builds you want processed on a single builder. The authentication details and IPs of the additional builders are returned to the client, and the &lt;code&gt;depot build&lt;/code&gt; command connects directly to all of the builders to run the build.&lt;/li&gt;
&lt;/ol&gt;
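&lt;p&gt;The scale-out decision itself is simple in principle: the builder count is driven by how many concurrent builds you allow on a single builder. Here's a minimal sketch of that logic in Python (the function name and policy are our illustration, not Depot's actual provisioning code):&lt;/p&gt;

```python
import math

def builders_needed(active_builds: int, max_builds_per_builder: int) -> int:
    """Return how many BuildKit builders are needed so that no builder
    runs more than max_builds_per_builder concurrent builds.

    Illustrative only; Depot's real provisioning system is more involved.
    """
    return math.ceil(max(active_builds, 0) / max_builds_per_builder)

# With a limit of 2 concurrent builds per builder:
print(builders_needed(1, 2))  # prints 1: a single build stays on one builder
print(builders_needed(5, 2))  # prints 3: five builds scale out to three builders
```

&lt;p&gt;Lowering the per-builder concurrency setting trades away BuildKit's cross-build deduplication in exchange for more isolated capacity per build.&lt;/p&gt;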

&lt;h3&gt;When should I use container build autoscaling?&lt;/h3&gt;

&lt;p&gt;This is by far the most important question to ask because of the tradeoffs involved. Container build autoscaling is a powerful feature that can significantly speed up your container builds, but it comes at a price: cache cloning and losing the ability to deduplicate work across builds that share similar steps and layers.&lt;/p&gt;

&lt;p&gt;That said, container build autoscaling is particularly useful when a single build can consume all of the resources of a single BuildKit builder, or when a large number of concurrent builds chews through those resources.&lt;/p&gt;

&lt;p&gt;In these cases, we recommend starting by &lt;strong&gt;sizing up your container builder&lt;/strong&gt;; the full list of available sizes is on our &lt;a href="https://depot.dev/pricing" rel="noopener noreferrer"&gt;pricing page&lt;/a&gt;. This allows you to run larger builds on a single builder without needing to scale out to multiple builders.&lt;/p&gt;

&lt;p&gt;However, for instances where you have a large number of concurrent builds or a single build that consumes all the resources of a single builder, container build autoscaling is a great way to speed up your builds.&lt;/p&gt;

&lt;h3&gt;How do I enable container build autoscaling?&lt;/h3&gt;

&lt;p&gt;To turn on container build autoscaling, you will need to navigate to your &lt;strong&gt;Depot project settings&lt;/strong&gt; for the container build project you want to enable it for. From there, you can enable the autoscaling feature in the settings tab and specify the number of concurrent builds you want to process on a single builder.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyn2ukkfb9ksm32vbr5zr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyn2ukkfb9ksm32vbr5zr.png" alt="Enabled horizontal autoscaling on a Depot project" width="800" height="322"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once enabled, all new builds will automatically scale out to multiple BuildKit builders based on the number of concurrent builds you specified. You can also adjust the number of concurrent builds at any time in the project settings.&lt;/p&gt;

&lt;h2&gt;What about the layer cache?&lt;/h2&gt;

&lt;p&gt;Autoscaling does come with tradeoffs, and one of those is the layer cache: each additional builder operates on a clone of the main builder's layer cache, and that clone is &lt;strong&gt;not written back to the main builder's layer cache&lt;/strong&gt;. As a result, the additional builders cannot share layers with the main builder, and you lose the deduplication of work across builds that share similar steps and layers.&lt;/p&gt;

&lt;h2&gt;Billing details&lt;/h2&gt;

&lt;p&gt;Container build autoscaling is available on &lt;strong&gt;all plans&lt;/strong&gt;. Any cache clones created by the autoscaling feature are not persisted beyond the lifetime of the builder. This means that when the autoscaled builders are terminated, their layer cache clones are also deleted. Thus, cache clones do not count towards storage billing.&lt;/p&gt;

&lt;h2&gt;What's next?&lt;/h2&gt;

&lt;p&gt;We're excited to make this feature generally available to everyone using Depot for container builds. We believe that this is going to help folks get even faster Docker image builds, and we can't wait to see the new use cases for build performance that this enables.&lt;/p&gt;

&lt;p&gt;We're also working to make autoscaling available on our Build API and to add more logic around how the cache clones are managed.&lt;/p&gt;

&lt;p&gt;If you have any questions or feedback about this feature, please reach out to us on &lt;a href="https://depot.dev/discord" rel="noopener noreferrer"&gt;Discord&lt;/a&gt; and let us know.&lt;/p&gt;

</description>
      <category>news</category>
      <category>docker</category>
      <category>devops</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Now available: Claude Code sessions in Depot</title>
      <dc:creator>Kyle Galbraith</dc:creator>
      <pubDate>Tue, 01 Jul 2025 21:01:30 +0000</pubDate>
      <link>https://forem.com/depot/now-available-claude-code-sessions-in-depot-33kd</link>
      <guid>https://forem.com/depot/now-available-claude-code-sessions-in-depot-33kd</guid>
      <description>&lt;p&gt;We've been using Claude Code at Depot since pretty much the moment it dropped. It's been a game changer for everything from our day to day development to debugging production issues. However, after using it for a while, we realized that we were all getting really annoyed by how challenging it was to share sessions with each other or resume them in ephemeral environments (like CI jobs!).&lt;/p&gt;

&lt;p&gt;So, in traditional Depot fashion, we went and fixed our own problem, and today we're releasing Claude Code sessions in Depot. Now available to all Depot users on any of our pricing plans.&lt;/p&gt;

&lt;h2&gt;So, what are Claude Code sessions in Depot?&lt;/h2&gt;

&lt;p&gt;We built a new command for the Depot CLI, &lt;code&gt;depot claude&lt;/code&gt;, that allows you to create, resume, and manage Claude Code sessions across your entire organization. This means you can now save your AI coding sessions in Depot, share them with your team, and pick up where you left off—no matter which machine or environment you're working in.&lt;/p&gt;

&lt;p&gt;With Claude Code sessions in Depot, you can now get all of the following right out of the box:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Developers and AI agents&lt;/strong&gt; - Hand off work between human developers and automated Claude agents&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Team members&lt;/strong&gt; - Share complex problem-solving sessions across time zones and teams&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Local and CI environments&lt;/strong&gt; - Start debugging locally and continue in CI, or vice versa&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Different stages of development&lt;/strong&gt; - Maintain context from design through implementation to review&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;How do you use Claude Code sessions in Depot?&lt;/h3&gt;

&lt;p&gt;To make use of Claude Code sessions in Depot, you first need to make sure you have the &lt;a href="https://depot.dev/docs/cli/installation" rel="noopener noreferrer"&gt;Depot CLI installed&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;With the CLI installed, run &lt;code&gt;depot login&lt;/code&gt; to authenticate your local CLI with your Depot organization. This is required to ensure your sessions are securely stored and accessible across your team.&lt;/p&gt;

&lt;p&gt;With that, you're ready to start using Claude Code sessions in Depot. To create and manage these sessions, simply use the &lt;code&gt;depot claude&lt;/code&gt; command instead of the regular &lt;code&gt;claude&lt;/code&gt; command. This command automatically handles session persistence, allowing you to create and resume sessions with ease.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Start a new session with a custom ID&lt;/span&gt;
depot claude &lt;span class="nt"&gt;--session-id&lt;/span&gt; feature-auth-redesign
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This opens Claude Code, but it is wrapped in Depot's session management. You can interact with Claude as you normally would, but your session state is now saved to your Depot organization. Every interaction you have with Claude will be stored securely in your organization, allowing you to resume it later from any machine or environment.&lt;/p&gt;

&lt;p&gt;If you don't specify a session ID, Depot will generate one automatically. You can also resume existing sessions by using the &lt;code&gt;--resume&lt;/code&gt; flag with the session ID.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Resume an existing session by ID&lt;/span&gt;
depot claude &lt;span class="nt"&gt;--resume&lt;/span&gt; feature-auth-redesign
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can pass any prompts or parameters you want to Claude, just like you would with the regular &lt;code&gt;claude&lt;/code&gt; command. For example, you can specify a prompt to continue working on a specific task:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Resume a session and provide a prompt&lt;/span&gt;
depot claude &lt;span class="nt"&gt;--resume&lt;/span&gt; feature-auth-redesign &lt;span class="nt"&gt;--model&lt;/span&gt; opus &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s2"&gt;"continue implementing the authentication flow"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can use this from CI, too! Install Claude Code in your CI environment, authenticate with Depot, and then run the &lt;code&gt;depot claude&lt;/code&gt; command to resume a session and continue working on it.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# In your GitHub Actions workflow&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;depot/setup-action@v1&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;npm install @anthropic-ai/claude-code&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;depot claude --resume pr-${{ github.event.pull_request.number }} \&lt;/span&gt;
 &lt;span class="s"&gt;-p "review this PR for security issues and best practices"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;Uhhh, I don't remember my session?&lt;/h3&gt;

&lt;p&gt;Yeah, we had the same problem, so we added a way to see what you have available to resume. You can list all available sessions in your organization using the &lt;code&gt;list-sessions&lt;/code&gt; command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;depot claude list-sessions
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From there, you can resume any session by selecting it from the list.&lt;/p&gt;

&lt;p&gt;When you resume a session, Depot will automatically load the last state of that session, including any code, prompts, and context you had previously set up. This means you can seamlessly continue your work without losing any progress. Once you exit again, your session state is saved back to Depot, ready for you or your teammates to pick up later.&lt;/p&gt;

&lt;h3&gt;Where could this be better?&lt;/h3&gt;

&lt;p&gt;There are a few areas worth being aware of as you begin using Depot's Claude Code sessions. Here is a quick list of things we know about that we are working on improving:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Work produced in a session is not automatically committed to your git repository; if you start a session, we don't automatically create a branch for Claude to work on. This is something we're looking to improve in the future, but for now we recommend creating a branch for your session and naming the session after that branch. This way, you can easily find the work done in that session.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Claude Code must still be installed in your environment. Depot's Claude Code sessions are a wrapper around the existing Claude Code functionality, so you still need to have the &lt;a href="https://docs.anthropic.com/en/docs/claude-code/overview" rel="noopener noreferrer"&gt;Claude Code CLI installed&lt;/a&gt; and configured in your environment. Depot's sessions make it easier to manage and resume those sessions across different machines and environments.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;How does it actually work?&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;claude&lt;/code&gt; CLI already saves each of its sessions as a line-delimited JSON file in &lt;code&gt;$HOME/.claude&lt;/code&gt;. To enable cross-machine session sharing, the &lt;code&gt;depot claude&lt;/code&gt; command executes the &lt;code&gt;claude&lt;/code&gt; binary already on your machine, watches for changes to that session file on disk, and saves any changes via the Depot API.&lt;/p&gt;

&lt;p&gt;To resume a session, the &lt;code&gt;depot&lt;/code&gt; CLI checks the name of the session being resumed, fetches its contents from the Depot API and writes them to disk, then executes the &lt;code&gt;claude&lt;/code&gt; CLI.&lt;/p&gt;

&lt;p&gt;In this way, sessions can be named, shared, and resumed from anywhere!&lt;/p&gt;
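&lt;p&gt;The watch-and-upload side of that loop can be sketched in a few lines of Python. This is a minimal illustration assuming the line-delimited JSON session file described above; the &lt;code&gt;upload&lt;/code&gt; callback stands in for the Depot API call and is our naming, not the CLI's:&lt;/p&gt;

```python
import json
from pathlib import Path
from typing import Callable

def sync_new_entries(session_file: Path, offset: int,
                     upload: Callable[[dict], None]) -> int:
    """Ship any session entries appended since byte position `offset`.

    Each line of the file is one JSON session entry. New lines are parsed
    and handed to `upload`; the returned byte offset lets the next sync
    ship only fresh entries. Illustrative sketch only.
    """
    with session_file.open("rb") as f:
        f.seek(offset)
        data = f.read()
    for line in data.splitlines():
        if line.strip():
            upload(json.loads(line))
    return offset + len(data)
```

&lt;p&gt;Resuming is the mirror image: fetch the stored entries from the API, write them back to the session file on disk, then execute the &lt;code&gt;claude&lt;/code&gt; binary so it picks up where the file left off.&lt;/p&gt;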

&lt;h2&gt;Why we built this&lt;/h2&gt;

&lt;p&gt;Honestly? Because it was fun and we needed it ourselves. But also because we believe that AI coding agents are going to fundamentally change how we build software. They are already helping us write code faster, debug issues more effectively, and even design new features.&lt;/p&gt;

&lt;p&gt;This makes our own use of Claude Code significantly more powerful. By allowing AI agents to maintain context across sessions, we can leverage their capabilities in a way that feels natural and integrated into our existing workflows.&lt;/p&gt;

&lt;p&gt;We can have a Claude Code agent start working on a complex feature, hand it off to a human developer for review, and then have the agent pick up where it left off—all while maintaining full context of the conversation and code changes. That feature can then be submitted as a pull request, where other agents review it, and we can pick up the session of that review agent to address any feedback.&lt;/p&gt;

&lt;p&gt;It's powerful stuff, and we think it will be a helpful tool to accelerate teams even more.&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Depot is focused on building tools and services that accelerate delivery pipelines. We started by accelerating Docker image builds, then added our technology and expertise to GitHub Actions, and followed up with our own remote cache service, Depot Cache.&lt;/p&gt;

&lt;p&gt;The combo of all these tools is helping teams build and ship software faster than ever before.&lt;/p&gt;

&lt;p&gt;With the addition of Claude Code sessions, we're taking another step towards accelerating a new facet of the software delivery pipeline by making it faster &amp;amp; simpler for engineers and coding agents to interact on the same codebase.&lt;/p&gt;

&lt;p&gt;We hope you enjoy this as much as we do. If you have any questions, feedback, or ideas for how we can make this even better, please hop into our &lt;a href="https://discord.gg/XpTfcVrr46" rel="noopener noreferrer"&gt;Community Discord&lt;/a&gt; and let us know!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>news</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Self-hosted GitHub Actions runners aren't free</title>
      <dc:creator>Kyle Galbraith</dc:creator>
      <pubDate>Tue, 03 Jun 2025 20:05:45 +0000</pubDate>
      <link>https://forem.com/depot/self-hosted-github-actions-runners-arent-free-2fib</link>
      <guid>https://forem.com/depot/self-hosted-github-actions-runners-arent-free-2fib</guid>
      <description>&lt;p&gt;We released &lt;a href="https://depot.dev/products/github-actions?utm_source=devto" rel="noopener noreferrer"&gt;Depot GitHub Actions Runners&lt;/a&gt; a year ago. Our runners are anywhere between 3-10x faster with 10x faster caching as well. They come pre-configured with a lot of slick automatic add-ons, like RAM disks for faster disk access in jobs that need it, and automatic integration with our remote cache service for tools like Bazel, Gradle, Turborepo, and others.&lt;/p&gt;

&lt;p&gt;Since launching, we've seen a lot of teams come to us from self-hosted GitHub Actions runners. Why? Because they're burned out from all of the operational overhead and complexity of it.&lt;/p&gt;

&lt;p&gt;In this post, we highlight the problems and hidden costs with self-hosted GitHub Actions runners.&lt;/p&gt;

&lt;h2&gt;Do you like operational overhead?&lt;/h2&gt;

&lt;p&gt;Self-hosting GitHub Actions runners isn't the "set it and forget it" option that folks may think it is.&lt;/p&gt;

&lt;p&gt;The truth is that self-hosted runners increase your operational overhead. The entire system will have to be monitored and maintained. This includes things like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Maintaining AMIs&lt;/strong&gt;: You'll need to regularly update your AMIs with the latest security patches and updates. GitHub publishes all of the &lt;a href="https://github.com/actions/runner-images" rel="noopener noreferrer"&gt;runner image definitions&lt;/a&gt; for Packer. But it's a time-consuming and error-prone process.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Babysitting infrastructure&lt;/strong&gt;: You can choose between running ephemeral runners or persistent runners. Ephemeral runners are easier to maintain but can be slower to start up and may not be as reliable. Persistent runners are faster but require more maintenance and can be more prone to failure. Either way, you'll need to monitor the underlying infrastructure and rapidly fix any issues that arise.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Debugging GitHub Runner API issues&lt;/strong&gt;: The GitHub Runner API has bugs and inconsistencies that can cause problems with your self-hosted runners. This can be frustrating to debug and a total time drain.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The overhead of self-hosting GitHub Actions is significantly more cumbersome and time-consuming than people expect when getting started.&lt;/p&gt;

&lt;h2&gt;Free isn't actually free with GitHub Actions&lt;/h2&gt;

&lt;p&gt;The biggest misconception about self-hosting GitHub Actions runners is that it's free. This is simply not true. There are a lot of hidden costs associated with self-hosting GitHub Actions runners that can add up quickly.&lt;/p&gt;

&lt;p&gt;While the cost per minute is $0, compared to the $0.008 per minute for GitHub-hosted runners, the reality is that self-hosting GitHub Actions runners can be significantly more expensive than you might think.&lt;/p&gt;

&lt;p&gt;The most obvious cost is infrastructure. You'll need to pay for the instances, storage, network transfer, and any other resources you use to support the runners.&lt;/p&gt;

&lt;p&gt;The infrastructure costs only become more problematic as your team and CI workloads grow over time.&lt;/p&gt;
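&lt;p&gt;A quick back-of-envelope comparison makes the point concrete. Every number below except GitHub's posted per-minute rate is an illustrative assumption, not a measurement:&lt;/p&gt;

```python
# Hedged back-of-envelope: hosted vs. self-hosted monthly cost.
GITHUB_HOSTED_PER_MIN = 0.008     # GitHub's posted rate for standard Linux runners

ci_minutes_per_month = 100_000    # assumed team-wide CI usage
instance_cost_per_month = 550.0   # assumed always-on runner fleet
maintenance_hours_per_month = 20  # assumed engineer time babysitting runners
engineer_hourly_cost = 100.0      # assumed fully loaded hourly rate

hosted = ci_minutes_per_month * GITHUB_HOSTED_PER_MIN
self_hosted = instance_cost_per_month + maintenance_hours_per_month * engineer_hourly_cost

print(f"GitHub-hosted: ${hosted:,.2f}/month")      # prints $800.00/month
print(f"Self-hosted:   ${self_hosted:,.2f}/month") # prints $2,550.00/month, before egress
```

&lt;p&gt;The exact crossover depends entirely on your usage, but the human line item is the one people forget to price in.&lt;/p&gt;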

&lt;p&gt;Here are some common ways folks try to combat the rising costs:&lt;/p&gt;

&lt;h3&gt;Leverage spot instances or savings plans to reduce instance costs&lt;/h3&gt;

&lt;p&gt;Spot instances are the first thing folks grab to reduce the cost footprint of self-hosted runners. These can certainly save you money, but you have to accept the risk of losing runners midway through a job, and live with the developer experience around that. The spot market can also be hit-or-miss, depending on the instance types you're requesting.&lt;/p&gt;

&lt;p&gt;When Spot fails, folks often turn to AWS Savings Plans. This is a great way to save money, but now you're financially locked into self-hosting, and you end up paying for unused capacity on nights and weekends.&lt;/p&gt;

&lt;h3&gt;Egress traffic&lt;/h3&gt;

&lt;p&gt;Egress traffic is another hidden cost that can add up quickly. If you're not careful with what you're sending out from the runner, you can spend truckloads of money on egress traffic.&lt;/p&gt;

&lt;h3&gt;The human cost&lt;/h3&gt;

&lt;p&gt;This is hard to calculate out of the gate, but it's there from the beginning and throughout the entire time you choose to self-host GitHub Actions. Humans always have to maintain and babysit the infrastructure and glue components for anything self-hosted.&lt;/p&gt;

&lt;p&gt;The reality is that self-hosting GitHub Actions runners can be a time sink. You'll need to maintain the infrastructure, monitor the runners, and troubleshoot any issues. You'll have to become an expert in GitHub Actions.&lt;/p&gt;

&lt;p&gt;This takes time away from the critical stuff like building features and shipping products for your customers and users.&lt;/p&gt;

&lt;p&gt;This also, ultimately, impacts the developer experience. Spending time dealing with CI issues can be frustrating and downright demoralizing for developers who want to get their work done. The constant context switching between working on the code that matters and dealing with CI issues or quirks like queue time can be a huge drain on morale. This is especially true if you're using self-hosted runners that are unreliable or slow.&lt;/p&gt;

&lt;h2&gt;
  
  
  There are tools to help
&lt;/h2&gt;

&lt;p&gt;There are solutions and workarounds. Before making the case for Depot, it's worth mentioning solutions we've seen folks try. While they come with their own set of problems, they do provide advantages over going it entirely alone.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://github.com/actions/actions-runner-controller" rel="noopener noreferrer"&gt;ARC&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Also known as &lt;code&gt;actions-runner-controller&lt;/code&gt;, ARC is a project from GitHub that allows you to run GitHub Actions runners inside of your own Kubernetes cluster. It is designed to orchestrate and scale based on the number of workflows running across your GitHub repository, organization, or enterprise connections.&lt;/p&gt;

&lt;p&gt;Effectively, the runners scale up and down as containers inside of your K8s cluster. This comes with tradeoffs: you need to manage docker-in-docker setups, choose autoscaling policies, and deal with the complexity of Kubernetes (no problem for some teams, a real burden for others).&lt;/p&gt;
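As a sketch of what this looks like in practice, here's a minimal manifest in the style of ARC's original &lt;code&gt;RunnerDeployment&lt;/code&gt; CRD. The repository name is a placeholder, and note that newer ARC releases favor Helm-installed runner scale sets over this resource:

```yaml
# Minimal ARC RunnerDeployment sketch (legacy summerwind-style CRD).
# "my-org/my-repo" is a placeholder; newer ARC versions use
# Helm-installed runner scale sets instead of this resource.
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: example-runner-deployment
spec:
  replicas: 2
  template:
    spec:
      repository: my-org/my-repo
```

Even in this minimal form, you still own the cluster, the scaling policy, and the upgrade path for the controller itself.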

&lt;p&gt;It &lt;em&gt;doesn't&lt;/em&gt; solve the human cost of maintaining the infrastructure or the cost of the infrastructure itself, and it's still prone to the instability and quirks of GitHub's API.&lt;/p&gt;

&lt;h3&gt;
  
  
  Self-hosted runners with the Terraform module: &lt;a href="https://github.com/github-aws-runners/terraform-aws-github-runner" rel="noopener noreferrer"&gt;&lt;code&gt;github-aws-runners/terraform-aws-github-runner&lt;/code&gt;&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Formerly known as the &lt;code&gt;philips-labs&lt;/code&gt; module, this strategy allows you to deploy self-hosted GitHub Actions runners on AWS using Terraform. Much like ARC, it provides the glue components in a self-contained module that you can deploy to your AWS account and get connected to your GitHub organization.&lt;/p&gt;

&lt;p&gt;It runs on EC2 instances and defaults to creating ephemeral runners on spot instances. You can bring your own AMIs, and it has similar scaling policies to ARC that you can choose from. However, the infrastructure footprint is even larger than ARC's: there are a lot of glue pieces running in Lambda, EC2, and S3, plus additional one-off things you have to configure, like a GitHub App to authenticate with GitHub. To date, it also only supports Linux and Windows runners.&lt;/p&gt;

&lt;p&gt;It comes with all of the infrastructure and human costs of self-hosting GitHub Actions runners, but it provides a lot of flexibility and control over the infrastructure. The complexity and challenges of dealing with GitHub API outages, quirks, and optimizing for queue times still rests with you.&lt;/p&gt;

&lt;h2&gt;
  
  
  What about Depot?
&lt;/h2&gt;

&lt;p&gt;Depot is a build acceleration platform for engineering teams that want to build faster without the operational overhead, complexity, and time drain of doing it all themselves. We focus on accelerating GitHub Actions by providing our own runners optimized for performance, reliability, and interfacing with GitHub Actions.&lt;/p&gt;

&lt;p&gt;If you use Depot GitHub Actions runners on our hosted platform, you get all of the following benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Up to 10x faster runners. We've optimized everything about the GitHub Actions runner to be faster. This includes things like fast I/O via ramdisks, faster caching, and automatic integration with our suite of remote cache supported tools like Bazel, Gradle, Turborepo, and others.&lt;/li&gt;
&lt;li&gt;Single tenant runners. We launch a runner within 3-5 seconds, and it is dedicated to your job. When your job is done, we nuke the runner from orbit so it will never be used again.&lt;/li&gt;
&lt;li&gt;No concurrency limits. Launch as many jobs as you'd like, we don't limit the number of jobs that you can run at a time.&lt;/li&gt;
&lt;li&gt;Per-second billing. We don't do the bizarre GitHub thing where they round your 30-second build up to 1 minute and charge you for it. We track usage by the second and bill by the minute, so you get two 30-second builds before we charge you for 1 minute.&lt;/li&gt;
&lt;li&gt;No egress charges. We don't charge you for egress traffic, or for ingress traffic, and we don't impose limits on either.&lt;/li&gt;
&lt;li&gt;Infrastructure is our concern, not yours. You don't have to think about instance costs, wasted capacity on nights and weekends, network or storage costs, or any of the other hidden costs that come with self-hosting GitHub Actions runners.&lt;/li&gt;
&lt;li&gt;No human costs. You don't have to maintain the infrastructure, monitor the runners, or troubleshoot GitHub API issues.&lt;/li&gt;
&lt;li&gt;No operational overhead. No AMIs to maintain, no GitHub Runner API quirks to debug, none of the complexities of self-hosting GitHub Actions runners. We take care of all of that for you.&lt;/li&gt;
&lt;/ul&gt;
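To make the per-second billing point concrete, here's a small sketch comparing round-up-per-minute billing to tracking seconds and billing the aggregate. The $0.008/minute rate is GitHub's list price mentioned above; the comparison itself is illustrative:

```python
import math

# Compare round-up-per-minute billing (each job rounded up to a whole
# minute) against tracking by the second and billing the aggregate.
RATE_PER_MINUTE = 0.008  # GitHub-hosted Linux 2-core list price

def rounded_per_job(job_seconds: list[int]) -> float:
    """Each job is rounded up to a whole minute before billing."""
    return sum(math.ceil(s / 60) for s in job_seconds) * RATE_PER_MINUTE

def tracked_per_second(job_seconds: list[int]) -> float:
    """Seconds are summed across jobs, then billed at the per-minute rate."""
    return sum(job_seconds) / 60 * RATE_PER_MINUTE

jobs = [30, 30]  # two 30-second builds
print(rounded_per_job(jobs))     # billed as 2 minutes -> 0.016
print(tracked_per_second(jobs))  # billed as 1 minute  -> 0.008
```

At thousands of short jobs per month, that 2x difference on sub-minute builds is real money.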

&lt;h3&gt;
  
  
  You can self-host the data plane if you want
&lt;/h3&gt;

&lt;p&gt;We also offer the ability to deploy the Depot data plane into your own environment. This means you get all the security &amp;amp; compliance benefits of self-hosted, leverage any compute commitments you have, and don't have any of the operations overhead of self-hosting GitHub Actions yourself. The pain and quirks of running GitHub Actions runners rest with us, and you get all of Depot's performance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Every team and organization is different. Some teams are happy to self-host GitHub Actions runners and deal with the overhead and complexity of it. Others are not.&lt;/p&gt;

&lt;p&gt;Either way, having all of the information and thinking through the complexities and challenges allows you to go into the process with eyes wide open and avoid surprises. We personally believe that all builds, whether they're GitHub Actions, Docker, Bazel, Gradle, or anything else, should be fast, reliable, and as easy to get started with as possible. You should get exponentially faster builds and never have to think about it again.&lt;/p&gt;

&lt;p&gt;That's why we built Depot. We think Depot is the best way to accelerate your builds and get the most out of your GitHub Actions runners with whatever flexibility you prefer, whether it's self-hosted or in our cloud. If you're interested in learning more, &lt;a href="https://depot.dev/sign-up?utm_source=devto" rel="noopener noreferrer"&gt;sign up for a 7-day free trial and give things a spin&lt;/a&gt;. If you have any questions or just want to bounce around ideas, feel free to hop into our &lt;a href="https://discord.gg/MMPqYSgDCg" rel="noopener noreferrer"&gt;Community Discord&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>githubactions</category>
      <category>github</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Faster Claude Code agents in GitHub Actions</title>
      <dc:creator>Kyle Galbraith</dc:creator>
      <pubDate>Mon, 02 Jun 2025 17:09:58 +0000</pubDate>
      <link>https://forem.com/depot/faster-claude-code-agents-in-github-actions-1p2h</link>
      <guid>https://forem.com/depot/faster-claude-code-agents-in-github-actions-1p2h</guid>
      <description>&lt;p&gt;It's been an exciting week as the news starts rolling in about bringing agentic coding out of the IDE and your own terminal and into other parts of our development workflows. Anthropic has brought my favorite agentic coding interaction, Claude Code, to GitHub Actions.&lt;/p&gt;

&lt;p&gt;This means that we can now use Claude Code to write and run code by simply asking Claude in pull requests and issues. Behind the scenes, Claude Code runs inside of GitHub Actions workflows that we control, making it easier to automate tasks and improve our development processes.&lt;/p&gt;

&lt;p&gt;In this post, we're going to explore how to use Claude Code in GitHub Actions and how to make it even faster and cheaper with the power of Depot GitHub Actions runners.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Claude Code?
&lt;/h2&gt;

&lt;p&gt;Claude Code is an agentic coding tool that lives in your terminal and has a deep understanding of your codebase. It can help you code faster: give it a specific prompt and it goes to work in the background. Until now, running on your local machine has been the only supported path. However, others have been working on bringing it to more places, like GitHub Actions.&lt;/p&gt;

&lt;p&gt;But now, an official Claude Code GitHub App and an official GitHub Action make it even easier to use Claude Code in your GitHub Actions workflows. This is a game changer, as it allows you to use GitHub Actions as a background automation platform: multiple Claude Code agents can work on your codebase in parallel, processing different tasks simultaneously and significantly speeding up your development process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuring Claude Code in GitHub Actions
&lt;/h2&gt;

&lt;p&gt;To get started, it's recommended that you install the official GitHub app for Claude. This app allows you to tag &lt;code&gt;@claude&lt;/code&gt; in your issues and PRs with a specific prompt to go work on.&lt;/p&gt;

&lt;p&gt;Installing the app can actually be done directly from &lt;code&gt;claude&lt;/code&gt; in your terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;claude /install-github-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This installs the GitHub App to your repository, adds your Anthropic API key as a secret, and configures a workflow in &lt;code&gt;.github/workflows/claude.yml&lt;/code&gt; that looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Claude Code&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;issue_comment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;types&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;created&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;pull_request_review_comment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;types&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;created&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;issues&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;types&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;opened&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;assigned&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;pull_request_review&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;types&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;submitted&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;claude&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
      &lt;span class="s"&gt;(github.event_name == 'issue_comment' &amp;amp;&amp;amp; contains(github.event.comment.body, '@claude')) ||&lt;/span&gt;
      &lt;span class="s"&gt;(github.event_name == 'pull_request_review_comment' &amp;amp;&amp;amp; contains(github.event.comment.body, '@claude')) ||&lt;/span&gt;
      &lt;span class="s"&gt;(github.event_name == 'pull_request_review' &amp;amp;&amp;amp; contains(github.event.review.body, '@claude')) ||&lt;/span&gt;
      &lt;span class="s"&gt;(github.event_name == 'issues' &amp;amp;&amp;amp; (contains(github.event.issue.body, '@claude') || contains(github.event.issue.title, '@claude')))&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;permissions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;contents&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;read&lt;/span&gt;
      &lt;span class="na"&gt;id-token&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;write&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout repository&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;fetch-depth&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Run Claude Code&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;claude&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;anthropics/claude-code-action@beta&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;anthropic_api_key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.ANTHROPIC_API_KEY }}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you've installed this, you must merge the pull request Claude opens before you can use it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Current gotchas
&lt;/h3&gt;

&lt;p&gt;Because this is a beta action, things are very much in flux and under development. There are a few things to note:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You will need to make sure you have the GitHub CLI installed (i.e., &lt;code&gt;gh&lt;/code&gt;) before you run the install command.&lt;/li&gt;
&lt;li&gt;If you get &lt;code&gt;Couldn't install GitHub App: gh: Not Found (HTTP 404)&lt;/code&gt; when running &lt;code&gt;/install-github-app&lt;/code&gt;, it means that your local &lt;code&gt;gh&lt;/code&gt; auth token doesn't have workflow permissions (i.e., it can't create the new &lt;code&gt;claude.yml&lt;/code&gt; workflow). You can fix this by running &lt;code&gt;gh auth refresh -h github.com -s workflow&lt;/code&gt; and then re-running the &lt;code&gt;/install-github-app&lt;/code&gt; command.&lt;/li&gt;
&lt;li&gt;The default tools in the action are pretty tightly scoped. They are listed below, and anything you add to &lt;code&gt;allowed_tools&lt;/code&gt; will be appended to the end of this list:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Edit&lt;/li&gt;
&lt;li&gt;Glob&lt;/li&gt;
&lt;li&gt;Grep&lt;/li&gt;
&lt;li&gt;LS&lt;/li&gt;
&lt;li&gt;Read&lt;/li&gt;
&lt;li&gt;Write&lt;/li&gt;
&lt;li&gt;mcp__github_file_ops__commit_files&lt;/li&gt;
&lt;li&gt;mcp__github_file_ops__delete_files&lt;/li&gt;
&lt;li&gt;mcp__github__update_issue_comment&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Having Claude Code open pull requests after completing tasks
&lt;/h2&gt;

&lt;p&gt;The initial example workflows are heavily focused on running Claude Code to tag issues, add comments to issues, and add pull request reviews. This is great, but it doesn't really take advantage of the full power of Claude Code.&lt;/p&gt;

&lt;p&gt;One of the things I'm most excited about is the ability to have Claude Code open pull requests after completing tasks. This is a great way to automate getting code changes reviewed and merged into your codebase.&lt;/p&gt;

&lt;p&gt;To do this, you must update the &lt;code&gt;permissions&lt;/code&gt; section of the workflow to include &lt;code&gt;pull-requests: write&lt;/code&gt;, so Claude can open pull requests. You must also include &lt;code&gt;mcp__github__create_pull_request&lt;/code&gt; in the &lt;code&gt;allowed_tools&lt;/code&gt; input.&lt;/p&gt;

&lt;p&gt;Here is what the Claude Code workflow looks like with these changes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight diff"&gt;&lt;code&gt;&lt;span class="p"&gt;name: Claude Code
&lt;/span&gt;&lt;span class="err"&gt;
&lt;/span&gt;&lt;span class="p"&gt;on:
&lt;/span&gt;  issue_comment:
    types: [created]
  pull_request_review_comment:
    types: [created]
  issues:
    types: [opened, assigned]
  pull_request_review:
    types: [submitted]
&lt;span class="err"&gt;
&lt;/span&gt;&lt;span class="p"&gt;jobs:
&lt;/span&gt;  claude:
    if: |
      (github.event_name == 'issue_comment' &amp;amp;&amp;amp; contains(github.event.comment.body, '@claude')) ||
      (github.event_name == 'pull_request_review_comment' &amp;amp;&amp;amp; contains(github.event.comment.body, '@claude')) ||
      (github.event_name == 'pull_request_review' &amp;amp;&amp;amp; contains(github.event.review.body, '@claude')) ||
      (github.event_name == 'issues' &amp;amp;&amp;amp; (contains(github.event.issue.body, '@claude') || contains(github.event.issue.title, '@claude')))
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write
&lt;span class="gi"&gt;+      pull-requests: write
&lt;/span&gt;    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 1
&lt;span class="err"&gt;
&lt;/span&gt;      - name: Run Claude Code
        id: claude
        uses: anthropics/claude-code-action@beta
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
&lt;span class="gi"&gt;+          allowed_tools: 'mcp__github__create_pull_request'
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With that change, you can now tag &lt;code&gt;@claude&lt;/code&gt; in your issues to have it open a pull request for the changes it made. Here is an example from our open-source docs repository, &lt;a href="https://github.com/depot/docs" rel="noopener noreferrer"&gt;&lt;code&gt;depot/docs&lt;/code&gt;&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3bf6s0y3ni1mezt3463q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3bf6s0y3ni1mezt3463q.png" alt="Claude Code opens pull request" width="800" height="930"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Speeding up Claude Code with Depot GitHub Actions runners
&lt;/h2&gt;

&lt;p&gt;What's particularly cool about Anthropic building agentic coding on top of GitHub Actions is that you control where the Claude Code agent runs. This means that you can use Depot GitHub Actions runners to run Claude Code in a much faster environment than the default GitHub-hosted runners.&lt;/p&gt;

&lt;p&gt;We've built out our GitHub Actions runners to have faster CPUs, ramdisks with faster I/O, and network speeds that make caching dramatically faster. They are also half the cost of GitHub-hosted runners, meaning you can run your Claude Code agents for a fraction of the cost.&lt;/p&gt;

&lt;p&gt;To move a Claude Code agent to run on Depot GitHub Actions runners, just update the &lt;code&gt;runs-on&lt;/code&gt; section of the workflow to use &lt;code&gt;depot-ubuntu-latest&lt;/code&gt; instead of &lt;code&gt;ubuntu-latest&lt;/code&gt;. Here is what the updated workflow looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight diff"&gt;&lt;code&gt;&lt;span class="p"&gt;name: Claude Code
&lt;/span&gt;&lt;span class="err"&gt;
&lt;/span&gt;&lt;span class="p"&gt;on:
&lt;/span&gt;  issue_comment:
    types: [created]
  pull_request_review_comment:
    types: [created]
  issues:
    types: [opened, assigned]
  pull_request_review:
    types: [submitted]
&lt;span class="err"&gt;
&lt;/span&gt;&lt;span class="p"&gt;jobs:
&lt;/span&gt;  claude:
    if: |
      (github.event_name == 'issue_comment' &amp;amp;&amp;amp; contains(github.event.comment.body, '@claude')) ||
      (github.event_name == 'pull_request_review_comment' &amp;amp;&amp;amp; contains(github.event.comment.body, '@claude')) ||
      (github.event_name == 'pull_request_review' &amp;amp;&amp;amp; contains(github.event.review.body, '@claude')) ||
      (github.event_name == 'issues' &amp;amp;&amp;amp; (contains(github.event.issue.body, '@claude') || contains(github.event.issue.title, '@claude')))
&lt;span class="gd"&gt;-    runs-on: ubuntu-latest
&lt;/span&gt;&lt;span class="gi"&gt;+    runs-on: depot-ubuntu-latest
&lt;/span&gt;    permissions:
      contents: read
      id-token: write
      pull-requests: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 1
&lt;span class="err"&gt;
&lt;/span&gt;      - name: Run Claude Code
        id: claude
        uses: anthropics/claude-code-action@beta
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          allowed_tools: 'mcp__github__create_pull_request'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Making this switch will make your Claude workflow significantly cheaper and faster. If you have a Claude Code agent that, on average, takes &lt;strong&gt;5 minutes&lt;/strong&gt;, it will cost you about &lt;strong&gt;$0.04/session&lt;/strong&gt; on GitHub-hosted runners. That same session on Depot GitHub Actions runners will cost you &lt;strong&gt;$0.02/session&lt;/strong&gt; on our faster default runners.&lt;/p&gt;

&lt;p&gt;But you can also tune this further with any of our runner types. For example, we have a &lt;code&gt;small&lt;/code&gt; runner with only 2 cores, 2 GB of RAM, and 100 GB of disk at just &lt;strong&gt;$0.002/minute&lt;/strong&gt;. This is an excellent option for Claude Code agents doing small tasks that don't require a lot of resources; the same 5-minute session costs &lt;strong&gt;$0.01&lt;/strong&gt; on this runner type.&lt;/p&gt;
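The arithmetic behind those numbers, as a quick sketch. The GitHub-hosted and Depot small rates are the ones quoted in this post; the Depot default rate is inferred from the $0.02-per-5-minute-session figure above:

```python
# Cost of a 5-minute Claude Code session at each per-minute rate.
# GitHub-hosted and Depot small rates are quoted in this post; the
# Depot default rate is inferred from the $0.02/session figure above.
def session_cost(minutes: float, rate_per_minute: float) -> float:
    return minutes * rate_per_minute

rates = {
    "GitHub-hosted (2-core)": 0.008,
    "Depot default": 0.004,
    "Depot small (2-core)": 0.002,
}

for name, rate in rates.items():
    print(f"{name}: ${session_cost(5, rate):.2f}/session")
# GitHub-hosted (2-core): $0.04/session
# Depot default: $0.02/session
# Depot small (2-core): $0.01/session
```

Small per-session differences compound fast once you have dozens of agents running daily.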

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;So there you have it - Claude Code on Depot runners is a powerful combo. Let's break down why this matters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Save money&lt;/strong&gt;: Running Claude on Depot runners costs half what you'd pay on GitHub-hosted runners. If you're pinching pennies, our small runners drop the price to just $0.01 per session for simple tasks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Speed things up&lt;/strong&gt;: Depot's faster CPUs, RAM disks, and network make everything run smoother. Your AI agents will thank you.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Get more done&lt;/strong&gt;: With Claude agents handling routine stuff in parallel, your team can focus on the interesting problems instead of the boring ones.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Super simple setup&lt;/strong&gt;: You're good to go with a few tweaks to your YAML file. No complex configuration needed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This whole "AI agents in CI" thing is still pretty new, but is clearly where some portion of development is heading. The coolest part is how easy it is to set up - install the Claude GitHub App, add those permissions for PR creation, and switch to depot-ubuntu-latest. That's literally it.&lt;/p&gt;

&lt;p&gt;I'm really excited to see what people build with this setup. Having multiple Claude agents working on different parts of your codebase while running on faster and cheaper infrastructure feels like a glimpse into the future of development.&lt;/p&gt;

&lt;p&gt;Want to try it yourself? &lt;a href="https://depot.dev/sign-up?utm_source=devto" rel="noopener noreferrer"&gt;Sign up for a 7-day free trial of Depot&lt;/a&gt; and get access to all our build performance for GitHub Actions, Docker, Bazel, Turborepo, Gradle, and much more.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kyle Galbraith&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;em&gt;CEO &amp;amp; Co-founder of Depot&lt;/em&gt;&lt;br&gt;&lt;br&gt;
&lt;em&gt;Platform Engineer who despises slow builds turned founder. Expat living in 🇫🇷&lt;/em&gt;&lt;/p&gt;




</description>
      <category>github</category>
      <category>cicd</category>
      <category>devops</category>
      <category>productivity</category>
    </item>
    <item>
      <title>BuildKit in depth: Docker's build engine explained</title>
      <dc:creator>Kyle Galbraith</dc:creator>
      <pubDate>Thu, 01 Feb 2024 12:28:45 +0000</pubDate>
      <link>https://forem.com/kylegalbraith/buildkit-in-depth-dockers-build-engine-explained-l2a</link>
      <guid>https://forem.com/kylegalbraith/buildkit-in-depth-dockers-build-engine-explained-l2a</guid>
      <description>&lt;p&gt;This article explains how BuildKit works in depth, why it's faster than Docker's previous build engine, and what it looks like under the hood.&lt;/p&gt;

&lt;p&gt;BuildKit is Docker's new default build engine &lt;a href="https://docs.docker.com/engine/release-notes/23.0/#2300" rel="noopener noreferrer"&gt;as of Docker Engine v23.0&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Despite BuildKit being used by millions of developers, the documentation out there is relatively sparse. This has led to it being seen as a bit of a black box. At &lt;a href="https://depot.dev/" rel="noopener noreferrer"&gt;Depot&lt;/a&gt;, we've been working with (and reverse engineering) BuildKit for years and have developed a deep understanding of it along the way. Now that we understand the inner workings, we have a better appreciation for it and want to share that knowledge with you.&lt;/p&gt;

&lt;p&gt;In this article, we explain how BuildKit works under the hood, covering everything from frontends and backends to LLB (low-level build) and DAGs (directed acyclic graphs). We help to demystify BuildKit and explain why it's such an improvement over Docker's original build engine.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is BuildKit?
&lt;/h2&gt;

&lt;p&gt;BuildKit is a build engine that takes a configuration file (such as a Dockerfile) and converts it into a built artifact (such as a Docker image). It's faster than Docker's original build engine due to its ability to optimize your build by parallelizing build steps whenever possible and through more advanced &lt;a href="https://docs.docker.com/build/guide/layers/" rel="noopener noreferrer"&gt;layer caching&lt;/a&gt; capabilities.&lt;/p&gt;

&lt;h2&gt;
  
  
  BuildKit speeds up Docker builds with parallelization
&lt;/h2&gt;

&lt;p&gt;A Dockerfile can consist of build &lt;em&gt;stages&lt;/em&gt;, each of which can contain one or more &lt;em&gt;steps&lt;/em&gt;. BuildKit can determine the dependencies between each stage in the build process, and if two stages can be run in parallel, they will be. Stages are a great way to break your Docker image build up into parallelizable units: for example, you could install your dependencies and build your application at the same time, then combine the two to form your final image.&lt;/p&gt;

&lt;p&gt;To take advantage of parallelization, you must rewrite your Dockerfile to use multi-stage builds. A &lt;em&gt;stage&lt;/em&gt; is a section of your Dockerfile that starts with a &lt;code&gt;FROM&lt;/code&gt; statement and continues until you reach another &lt;code&gt;FROM&lt;/code&gt; statement. Stages can be run in parallel, so by this mechanism, the steps in one stage can run in parallel with the steps in another.&lt;/p&gt;

&lt;p&gt;It's worth noting that the steps &lt;em&gt;within&lt;/em&gt; a stage run in a linear order, but the order in which stages run may not be linear. To determine the order in which stages will be run, BuildKit detects the name of each stage — which, for a Dockerfile, is the word after the &lt;code&gt;as&lt;/code&gt; keyword in a &lt;code&gt;FROM&lt;/code&gt; statement.&lt;/p&gt;

&lt;p&gt;To determine the stage that another stage depends on, we look at the word &lt;em&gt;after&lt;/em&gt; the &lt;code&gt;FROM&lt;/code&gt; keyword. In the example below, &lt;code&gt;FROM docker-image as stage1&lt;/code&gt; means that the &lt;code&gt;stage1&lt;/code&gt; stage depends on the Docker image from Docker Hub, and &lt;code&gt;FROM stage1 as stage2&lt;/code&gt; means that the &lt;code&gt;stage2&lt;/code&gt; stage depends on the &lt;code&gt;stage1&lt;/code&gt; stage. It's possible to chain many stages together in this way.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM docker-image as stage1
RUN command1

FROM stage1 as stage2
RUN command2

FROM stage2 as stage3
RUN command3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It's also possible to have multiple stages depend on one stage:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM docker-image as parent
…
FROM parent as child1
…
FROM parent as child2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;BuildKit is able to evaluate the structure of these &lt;code&gt;FROM&lt;/code&gt; statements and work out the dependency tree between the steps in each stage.&lt;/p&gt;

&lt;h3&gt;
  
  
  Optimize your Dockerfile to take advantage of parallelization
&lt;/h3&gt;

&lt;p&gt;Rewriting your Dockerfile to use multi-stage builds will allow you to take advantage of the speed improvements that BuildKit brings to Docker.&lt;/p&gt;

&lt;p&gt;In the example below you can see an unoptimized file with a single stage and an optimized file with two stages: one named &lt;code&gt;build&lt;/code&gt;, and a final stage that is unnamed (the last stage in a Dockerfile is conventionally left unnamed).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdepot.dev%2Fimages%2Fbuildkit-in-depth-image1.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdepot.dev%2Fimages%2Fbuildkit-in-depth-image1.webp" alt="Comparison of two Dockerfiles that both build and deploy a production Node application. One is unoptimized for BuildKit and has a single stage. The other is optimized for BuildKit, and uses two different stages that run in parallel. The code for both Dockerfiles is made available in GitHub later in the article."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;span&gt;&lt;br&gt;
  Optimizing this Dockerfile allows steps such as enabling Corepack to run in parallel with copying the package.json&lt;br&gt;
  file and the pnpm-lock.yaml file into the /app directory.&lt;br&gt;
&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;Once your Dockerfile has been optimized to run in multiple stages, BuildKit can run them in parallel. Below you can see the difference between using BuildKit to run multiple stages in parallel (for building and deploying a Node app) and running all steps sequentially (without multi-stage builds). This Node app deployment example will be used throughout this article, and the Dockerfiles — both optimized for BuildKit (multi-stage) and unoptimized (basic) — are &lt;a href="https://github.com/depot/examples/tree/main/buildkit" rel="noopener noreferrer"&gt;available on our GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdepot.dev%2Fimages%2Fbuildkit-in-depth-image2.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdepot.dev%2Fimages%2Fbuildkit-in-depth-image2.webp" alt="Flow diagram showing how the different  raw `RUN` endraw  statements of a Dockerfile are processed, with and without BuildKit. The “without BuildKit” diagram shows a sequential line of steps. The “with BuildKit” diagram shows two branches where stages are running in parallel."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;span&gt;&lt;br&gt;
  If you use BuildKit to parallelize your stages, your build will complete much faster.&lt;br&gt;
&lt;/span&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  BuildKit speeds up Docker builds with layer caching
&lt;/h2&gt;

&lt;p&gt;BuildKit is also able to improve build performance through clever use of layer caching. With layer caching, each step of your Dockerfile (such as &lt;code&gt;RUN&lt;/code&gt;, &lt;code&gt;COPY&lt;/code&gt;, and &lt;code&gt;ADD&lt;/code&gt;) is cached individually, as a separate reusable layer.&lt;/p&gt;

&lt;p&gt;Often, individual layers can be reused, as the results of a build step can be retrieved from the cache rather than rebuilt every time. This eliminates many steps from the build process and often dramatically increases overall build performance.&lt;/p&gt;

&lt;p&gt;The hierarchy of layers in BuildKit's layer cache is a tree structure: if one build step has changed between builds, that step plus all its child steps in the hierarchy must be rebuilt. With traditional single-stage builds, every step depends on the previous one, so a &lt;code&gt;RUN&lt;/code&gt; statement that invalidates the cache early in your Dockerfile is immensely frustrating, because every subsequent statement must be recomputed each time it runs. &lt;a href="https://depot.dev/blog/faster-builds-with-docker-caching#order-your-layers-wisely" rel="noopener noreferrer"&gt;The order of your statements&lt;/a&gt; in a Dockerfile therefore has a major impact on how well your build leverages caching.&lt;/p&gt;

&lt;p&gt;However, if you've optimized your Dockerfile for BuildKit, used multi-stage builds, and ordered your statements to maximize cache hits, you can reuse previous build results much more frequently.&lt;/p&gt;
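&lt;p&gt;As a rough mental model of that cascade, here is a minimal Python sketch (not BuildKit's actual implementation, and the Dockerfile steps shown are hypothetical) that derives each layer's cache key from its instruction chained with its parent's key:&lt;/p&gt;

```python
import hashlib

def layer_keys(instructions):
    """Derive a cache key per layer by hashing the instruction together
    with the parent layer's key (a simplified model; real BuildKit also
    hashes file contents, build args, and more)."""
    keys, parent = [], ""
    for inst in instructions:
        parent = hashlib.sha256((parent + inst).encode()).hexdigest()[:8]
        keys.append(parent)
    return keys

a = layer_keys(["FROM node:20", "COPY package.json .", "RUN npm install",
                "COPY . .", "RUN npm run build"])
b = layer_keys(["FROM node:20", "COPY package.json .", "RUN npm install --verbose",
                "COPY . .", "RUN npm run build"])

# The first two layers still match the cache; editing step 3 changes its
# key and, because each key folds in the parent key, every key after it.
print([x == y for x, y in zip(a, b)])  # → [True, True, False, False, False]
```

&lt;p&gt;Because every key incorporates its parent's key, a change to one instruction necessarily invalidates all of its descendants, which is why statement order matters so much.&lt;/p&gt;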
&lt;h2&gt;
  
  
  BuildKit under the hood
&lt;/h2&gt;

&lt;p&gt;"BuildKit builds are based on a binary intermediate format called LLB that is used for defining the dependency graph for processes running part of your build. tl;dr: LLB is to Dockerfile what LLVM IR is to C."&lt;/p&gt;

&lt;p&gt;— &lt;a href="https://github.com/moby/buildkit#exploring-llb" rel="noopener noreferrer"&gt;BuildKit's README file&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To truly understand how BuildKit works, let's unpack this statement. BuildKit has taken inspiration from compiler designers by creating an intermediate representation between the input and the output to its system. In compiler design, an intermediate representation is a data structure or some human-readable code (such as assembly language) that sits between the source code input and the machine code output. This intermediate representation is later converted into different types of machine code for each different machine the code needs to run on.&lt;/p&gt;

&lt;p&gt;BuildKit uses this same principle by inserting an intermediate representation between the Dockerfile input and the final Docker image. BuildKit's intermediate representation is known as a low-level build (LLB), which is a directed acyclic graph (DAG) data structure that sits at the heart of BuildKit's information flow.&lt;/p&gt;
&lt;h2&gt;
  
  
  The flow of information through BuildKit: frontends, backends and LLB
&lt;/h2&gt;

&lt;p&gt;Continuing with the compiler comparison, BuildKit also uses the concept of frontends and backends.&lt;/p&gt;

&lt;p&gt;The frontend is the part of BuildKit that takes the input (usually a Dockerfile) and converts it to LLB. BuildKit has frontends for a variety of different inputs including &lt;a href="https://nixos.org/" rel="noopener noreferrer"&gt;Nix&lt;/a&gt;, &lt;a href="https://openllb.github.io/hlb/" rel="noopener noreferrer"&gt;HLB&lt;/a&gt;, and &lt;a href="https://github.com/vito/bass" rel="noopener noreferrer"&gt;Bass&lt;/a&gt;, all of which take different inputs but build Docker images, and &lt;a href="https://github.com/denzp/cargo-wharf" rel="noopener noreferrer"&gt;CargoWharf&lt;/a&gt;, which is used to build something else entirely (a Rust project). This shows the versatility BuildKit has to build many different types of artifacts, even though the most common use currently is building Docker images from Dockerfiles.&lt;/p&gt;

&lt;p&gt;The backend takes the LLB as an input and converts it into a build artifact (such as a Docker image) for the machine architecture that you've specified. It builds the artifact by using a &lt;a href="https://opensource.com/article/21/9/container-runtimes" rel="noopener noreferrer"&gt;container runtime&lt;/a&gt; — either runc or containerd (which uses runc under the hood anyway).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdepot.dev%2Fimages%2Fbuildkit-in-depth-image3.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdepot.dev%2Fimages%2Fbuildkit-in-depth-image3.webp" alt="Diagram showing the flow of information through BuildKit: the Dockerfile to the BuildKit frontend, and then to the LLB, then to the BuildKit backend and finally to the Docker image."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;span&gt;&lt;br&gt;
  BuildKit's frontend acts as an interface between the input (Dockerfile) and the LLB. The backend is the interface&lt;br&gt;
  between the LLB and the output (Docker image).&lt;br&gt;
&lt;/span&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  BuildKit's LLB
&lt;/h2&gt;

&lt;p&gt;We've referred to the LLB a few times so far — but what exactly is it?&lt;/p&gt;

&lt;p&gt;It's a &lt;em&gt;directed acyclic graph (DAG) data structure&lt;/em&gt;, which is a special type of &lt;a href="https://en.wikipedia.org/wiki/Graph_(abstract_data_type)" rel="noopener noreferrer"&gt;graph data structure&lt;/a&gt;. In a DAG, each event is represented as a node with arrows that flow in a particular direction, hence the word “directed.” Arrows start at a &lt;em&gt;parent node&lt;/em&gt; and end on a &lt;em&gt;child node&lt;/em&gt;. Child nodes are only allowed to execute after &lt;em&gt;all&lt;/em&gt; parent nodes have finished executing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdepot.dev%2Fimages%2Fbuildkit-in-depth-image4.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdepot.dev%2Fimages%2Fbuildkit-in-depth-image4.webp" alt="A diagram of a directed acyclic graph (DAG). It consists of a number of nodes that have all been labeled alphabetically. All the nodes are connected by various arrows and the arrows are all pointing in one direction."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There can be no loops in a DAG, hence the word “acyclic.” This is necessary for modeling build steps: if a build process allowed loops, it would never complete, because two steps would each be waiting for the other to finish before starting.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdepot.dev%2Fimages%2Fbuildkit-in-depth-image5.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdepot.dev%2Fimages%2Fbuildkit-in-depth-image5.webp" alt="A small DAG is shown with just two nodes, A and B. An arrow points from A to B. A further arrow that points from B to A is highlighted red and crossed out to emphasize that this direction of travel is not allowed in a DAG. Allowing flow from A to B and then back to A again would count as a cycle, which is not allowed."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;BuildKit's LLB DAG is used to represent which build steps depend on each other and the order in which everything needs to happen. This ensures that certain steps don't occur before other steps are completed (like installing a package before downloading it).&lt;/p&gt;
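&lt;p&gt;The ordering guarantee can be sketched with a toy topological sort that groups nodes into “waves,” where every node in a wave depends only on nodes from earlier waves and can therefore run in parallel. The stage names below are hypothetical, and this is a conceptual model rather than BuildKit's actual scheduler:&lt;/p&gt;

```python
# Stage dependency graph for a hypothetical multi-stage Dockerfile:
# "deps" and "assets" both build FROM "base", and the final stage
# copies artifacts from both, so the middle two can run in parallel.
stage_graph = {
    "base": [],
    "deps": ["base"],
    "assets": ["base"],
    "final": ["deps", "assets"],
}

def parallel_waves(graph):
    """Group DAG nodes into waves; each node depends only on nodes in
    earlier waves, so everything within a wave can execute in parallel."""
    remaining = {node: set(parents) for node, parents in graph.items()}
    waves = []
    while remaining:
        ready = sorted(n for n, parents in remaining.items() if not parents)
        if not ready:
            raise ValueError("cycle detected: not a DAG")
        waves.append(ready)
        for n in ready:
            del remaining[n]
        for parents in remaining.values():
            parents.difference_update(ready)
    return waves

print(parallel_waves(stage_graph))  # → [['base'], ['assets', 'deps'], ['final']]
```

&lt;p&gt;The acyclic property is what makes this terminate: if the graph contained a cycle, no node would ever become ready.&lt;/p&gt;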

&lt;p&gt;In the case of Docker builds, BuildKit uses its Docker frontend to create the LLB from the Dockerfile. For example, &lt;a href="https://github.com/depot/examples/blob/main/buildkit/multi-stage/Dockerfile" rel="noopener noreferrer"&gt;this Dockerfile&lt;/a&gt; for building and deploying a Node app would create the following LLB DAG:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdepot.dev%2Fimages%2Fbuildkit-in-depth-image6.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdepot.dev%2Fimages%2Fbuildkit-in-depth-image6.webp" alt="A diagram of the LLB DAG for the above Dockerfile. Each node is a particular type of LLB operation such as SourceOp, ExecOp, or FileOp."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;span&gt;&lt;br&gt;
  In this LLB DAG, each node represents an operation that can happen. Each LLB operation can take one or more&lt;br&gt;
  filesystems as its input and output one or more filesystems.&lt;br&gt;
&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;To help you understand more about the LLB operations that your Dockerfile would translate to, we built a free tool that converts any given Dockerfile into LLB through a real-time editor. Our &lt;a href="https://depot.dev/dockerfile-explorer" rel="noopener noreferrer"&gt;Dockerfile Explorer&lt;/a&gt; is easy to use — simply paste your Dockerfile into the box on the left and then view the LLB operations on the right.&lt;/p&gt;

&lt;p&gt;Our Node Dockerfile creates a number of LLB operations, the first three of which can be viewed below. Each operation has a type such as SourceOp or ExecOp, a unique identifier in the form of a hash value, and some extra data like the environment and the commands to be run. The hash values indicate the dependencies between the operations. For example, the first ExecOp operation has a hash value of &lt;code&gt;0534a47f&lt;/code&gt;, and the second ExecOp operation takes as its input an operation with a hash of the same value (&lt;code&gt;0534a47f&lt;/code&gt;). This shows that these two operations are directly linked on the LLB DAG.&lt;/p&gt;
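&lt;p&gt;To illustrate that linkage, the sketch below reconstructs the DAG's edges purely by matching input digests. Only the &lt;code&gt;0534a47f&lt;/code&gt; hash comes from the example above; the other hashes and operation details are invented for illustration:&lt;/p&gt;

```python
# Simplified model of LLB operations: each op lists the digests of its
# inputs, and matching digests define the edges of the DAG.
ops = [
    {"type": "SourceOp", "digest": "a1b2c3d4", "inputs": []},
    {"type": "ExecOp",   "digest": "0534a47f", "inputs": ["a1b2c3d4"]},
    {"type": "ExecOp",   "digest": "9f8e7d6c", "inputs": ["0534a47f"]},
]

def edges(ops):
    """Reconstruct parent-to-child edges by resolving input digests."""
    by_digest = {op["digest"]: op for op in ops}
    return [(by_digest[d]["type"] + ":" + d, op["type"] + ":" + op["digest"])
            for op in ops for d in op["inputs"]]

print(edges(ops))
# → [('SourceOp:a1b2c3d4', 'ExecOp:0534a47f'), ('ExecOp:0534a47f', 'ExecOp:9f8e7d6c')]
```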

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdepot.dev%2Fimages%2Fbuildkit-in-depth-image7.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdepot.dev%2Fimages%2Fbuildkit-in-depth-image7.webp" alt="Screenshot of Depot's Dockerfile Explorer, showing three different operations: a SourceOp and two ExecOps. The output hash of the first ExecOp is highlighted, as is the input hash of the second ExecOp, and it shows the hashes are the same."&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  The different BuildKit LLB operations explained
&lt;/h2&gt;
&lt;h3&gt;
  
  
  SourceOp
&lt;/h3&gt;

&lt;p&gt;This loads source files or images from a source location, such as DockerHub, a Git repository, or your local build context.&lt;/p&gt;

&lt;p&gt;All SourceOp operations that originate from a Dockerfile are generated by its &lt;code&gt;FROM&lt;/code&gt; statements.&lt;/p&gt;
&lt;h3&gt;
  
  
  ExecOp
&lt;/h3&gt;

&lt;p&gt;ExecOp executes a command. It's the equivalent of a Dockerfile &lt;code&gt;RUN&lt;/code&gt; statement.&lt;/p&gt;
&lt;h3&gt;
  
  
  FileOp
&lt;/h3&gt;

&lt;p&gt;This is for operations that relate to files or directories, including Dockerfile statements such as &lt;code&gt;ADD&lt;/code&gt; (add a file or directory), &lt;code&gt;COPY&lt;/code&gt; (copy a file or directory), or &lt;code&gt;WORKDIR&lt;/code&gt; (set the working directory of your Docker container).&lt;/p&gt;

&lt;p&gt;It's possible to use this operation to copy the output of other steps in different stages into a single step. Using &lt;a href="https://github.com/depot/examples/blob/main/buildkit/multi-stage/Dockerfile" rel="noopener noreferrer"&gt;our example Dockerfile&lt;/a&gt;, the &lt;code&gt;COPY --from&lt;/code&gt; statement copies some of the resources from the output of the previous &lt;code&gt;build&lt;/code&gt; stage into the final stage.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM node:20
…
COPY --from=build /appbuild /app/build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can use the Dockerfile Explorer to see how BuildKit deals with this — it takes the output of the final step in each stage and adds them together.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdepot.dev%2Fimages%2Fbuildkit-in-depth-image8.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdepot.dev%2Fimages%2Fbuildkit-in-depth-image8.webp" alt="Screenshot within the Dockerfile Explorer. It shows that the hash of the final steps in the build stage is b71bec19."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdepot.dev%2Fimages%2Fbuildkit-in-depth-image9.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdepot.dev%2Fimages%2Fbuildkit-in-depth-image9.webp" alt="Screenshot within the Dockerfile Explorer. It shows that the hash of the penultimate step of the final stage is 3ca1e723 and that the final FileOp step (which copies files from the build stage to the final stage) takes both b71bec19 and 3ca1e723 as inputs."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  MergeOp
&lt;/h3&gt;

&lt;p&gt;MergeOp allows you to merge multiple inputs into a single flat layer (and is the underlying mechanism behind Docker's &lt;code&gt;COPY --link&lt;/code&gt;).&lt;/p&gt;

&lt;h3&gt;
  
  
  DiffOp
&lt;/h3&gt;

&lt;p&gt;This is a way of calculating the difference between two inputs and producing a single output with the difference represented as a new layer, which you might then want to merge into another layer using MergeOp.&lt;/p&gt;

&lt;p&gt;However, this operation is currently not available for the Dockerfile frontend.&lt;/p&gt;
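&lt;p&gt;Conceptually, MergeOp and DiffOp are inverses of a sort: one flattens layers together, the other extracts the delta between two filesystems. The Python sketch below models a layer as a plain mapping from file path to content to show that relationship (this is an illustrative model, not how BuildKit actually represents layers):&lt;/p&gt;

```python
# Conceptual model only: treat a layer as a mapping from file path to
# file content. Real layers also record deletions, permissions, and
# other metadata, which this sketch ignores.
def merge(*layers):
    """MergeOp: flatten several layers into one; later layers win."""
    merged = {}
    for layer in layers:
        merged.update(layer)
    return merged

def diff(lower, upper):
    """DiffOp: the files you would have to add or change to turn
    lower into upper, captured as a new layer."""
    return {path: content for path, content in upper.items()
            if lower.get(path) != content}

base     = {"/etc/os-release": "ubuntu", "/app/a.txt": "v1"}
modified = {"/etc/os-release": "ubuntu", "/app/a.txt": "v2", "/app/b.txt": "new"}

delta = diff(base, modified)
print(delta)  # → {'/app/a.txt': 'v2', '/app/b.txt': 'new'}

# Applying MergeOp to the base plus the diff reproduces the modified
# filesystem, mirroring how a DiffOp layer can be merged back in.
print(merge(base, delta) == modified)  # → True
```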

&lt;h3&gt;
  
  
  BuildOp
&lt;/h3&gt;

&lt;p&gt;This is an &lt;a href="https://github.com/moby/buildkit/blob/master/solver/pb/ops.proto#L186C1-L187C73" rel="noopener noreferrer"&gt;experimental&lt;/a&gt; operation that implements nested LLB builds (for example, running one LLB build that produces another dynamic LLB).&lt;/p&gt;

&lt;p&gt;This operation is also unavailable for the Dockerfile frontend.&lt;/p&gt;

&lt;h2&gt;
  
  
  BuildKit speeds up your Docker builds using its LLB DAG
&lt;/h2&gt;

&lt;p&gt;Although BuildKit can take multiple frontends, the Dockerfile frontend is by far the most popular. BuildKit uses its Dockerfile frontend to convert statements from your Dockerfile into a DAG of LLB operations (including SourceOp, ExecOp, and FileOp), and then uses that LLB to build an artifact, such as a Docker image, for the architectures you requested.&lt;/p&gt;

&lt;p&gt;At Depot, we've taken what was already great about BuildKit and further optimized it to build Docker images up to 20x faster on cloud builders with persistent caching. We've developed our own &lt;a href="https://depot.dev/blog/introducing-depot" rel="noopener noreferrer"&gt;drop-in replacement CLI&lt;/a&gt;, &lt;code&gt;depot build&lt;/code&gt;, that can be used to replace your existing &lt;code&gt;docker build&lt;/code&gt; wherever you're building images today. Sign up today for our &lt;a href="https://depot.dev/sign-up" rel="noopener noreferrer"&gt;7-day free trial&lt;/a&gt; and try it out for yourself.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>devops</category>
      <category>containers</category>
    </item>
    <item>
      <title>Build Docker images faster using build cache</title>
      <dc:creator>Kyle Galbraith</dc:creator>
      <pubDate>Sun, 07 Jan 2024 11:07:04 +0000</pubDate>
      <link>https://forem.com/kylegalbraith/build-docker-images-faster-using-build-cache-4bk4</link>
      <guid>https://forem.com/kylegalbraith/build-docker-images-faster-using-build-cache-4bk4</guid>
      <description>&lt;p&gt;When working with Docker, the faster we can build an image, the quicker our development workflows and deployment pipelines can be. Docker's build cache, also known as the layer cache, is a powerful tool that can significantly speed up an image build when it can be tapped into across builds. In this post, we'll explore how Docker's build cache works and share strategies for using it effectively to optimize your Dockerfiles &amp;amp; image builds.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding Docker Build Cache
&lt;/h2&gt;

&lt;p&gt;Before we dive into optimizations, let's understand how Docker's build cache works. Each instruction in a Dockerfile creates a layer in the final image. Think of these layers as building blocks, each adding new content on top of the previous layers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdepot.dev%2Fimages%2Fdocker-build-cache-layers.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdepot.dev%2Fimages%2Fdocker-build-cache-layers.webp" alt="Docker build cache layers"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When a layer changes, Docker invalidates the cache for that layer and all subsequent layers, requiring them to be rebuilt. For instance, if you modify a source file in your project, the &lt;code&gt;COPY&lt;/code&gt; command will have to run again to reflect those changes in the image, leading to cache invalidation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdepot.dev%2Fimages%2Fdocker-build-cache-layers-invalidation.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdepot.dev%2Fimages%2Fdocker-build-cache-layers-invalidation.webp" alt="How Docker build cache gets invalidated"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Tips for efficiently using the Docker build cache
&lt;/h2&gt;

&lt;p&gt;The more we can avoid cache invalidation, or the later we can have our cache invalidate, the faster our Docker image builds can be. Let's explore some strategies for doing just that.&lt;/p&gt;

&lt;h3&gt;
  
  
  Order your layers wisely
&lt;/h3&gt;

&lt;p&gt;Ordering our commands in a Dockerfile can play a huge role in leveraging the Docker layer cache and how often we invalidate it. Let's take a look at an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; node:20&lt;/span&gt;

&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is an inefficient Dockerfile. The &lt;code&gt;COPY&lt;/code&gt; command will invalidate the cache for all subsequent layers whenever a file changes in our project, forcing our &lt;code&gt;npm install&lt;/code&gt; and &lt;code&gt;npm run build&lt;/code&gt; commands to execute even if none of our dependencies changed. We can improve this by being more thoughtful about when we copy in our source code and install our dependencies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; node:20&lt;/span&gt;

&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; package.json package-lock.json /app/&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We've moved our source code copy to after our &lt;code&gt;npm install&lt;/code&gt; command. We copy in our &lt;code&gt;package.json&lt;/code&gt; and &lt;code&gt;package-lock.json&lt;/code&gt; to install our dependencies. We then copy in our source code and execute our &lt;code&gt;npm run build&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This is a small change that can have a significant impact on build time. Now, instead of every source code change forcing us to reinstall our dependencies, we only have to do so when our &lt;code&gt;package.json&lt;/code&gt; or &lt;code&gt;package-lock.json&lt;/code&gt; files change.&lt;/p&gt;
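&lt;p&gt;A small Python sketch makes the payoff of this reordering concrete. It models the cascade rule (once one step is rebuilt, every later step is too) and compares the two orderings when only source code changes. The step lists are simplified stand-ins for the Dockerfiles above:&lt;/p&gt;

```python
def rebuilt_steps(dockerfile, changed_inputs):
    """Return the steps that must re-run: a step is rebuilt if any of
    its inputs changed, or if any earlier step was rebuilt (cache
    invalidation cascades to every later layer)."""
    rebuilt, invalidated = [], False
    for step, inputs in dockerfile:
        if invalidated or inputs.intersection(changed_inputs):
            invalidated = True
            rebuilt.append(step)
    return rebuilt

# Each step is paired with the files it reads from the build context;
# "src" stands in for the application source files.
unoptimized = [
    ("COPY . .", {"src", "package.json", "package-lock.json"}),
    ("RUN npm install", set()),
    ("RUN npm run build", set()),
]
optimized = [
    ("COPY package.json package-lock.json /app/", {"package.json", "package-lock.json"}),
    ("RUN npm install", set()),
    ("COPY . .", {"src", "package.json", "package-lock.json"}),
    ("RUN npm run build", set()),
]

# Editing only source code re-runs everything in the unoptimized file,
# but skips npm install in the optimized one.
print(rebuilt_steps(unoptimized, {"src"}))  # → ['COPY . .', 'RUN npm install', 'RUN npm run build']
print(rebuilt_steps(optimized, {"src"}))    # → ['COPY . .', 'RUN npm run build']
```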

&lt;h3&gt;
  
  
  Keep your layers small and focused
&lt;/h3&gt;

&lt;p&gt;The less stuff in our build, the faster our Docker image build can be. By keeping our layers small and focused, we can keep our image smaller, cache smaller, and reduce the number of things that can invalidate the cache.&lt;/p&gt;

&lt;p&gt;We've written other posts about keeping Docker images small that are worth reading in conjunction with this post:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://depot.dev/blog/optimize-for-remote-docker-image-builds" rel="noopener noreferrer"&gt;How to optimize Docker image builds for Depot&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://depot.dev/blog/how-to-reduce-your-docker-image-size" rel="noopener noreferrer"&gt;How to reduce your Docker image size&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://depot.dev/blog/fast-dockerfiles-theory-and-practice" rel="noopener noreferrer"&gt;Fast Dockerfiles: theory and practice&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here are a few tips and tricks that are relevant to efficiently using the Docker build cache.&lt;/p&gt;

&lt;h4&gt;
  
  
  Avoid copying files that are not needed
&lt;/h4&gt;

&lt;p&gt;A common mistake we see is copying in files not needed in the final image. For instance, if we are building a Node.js application, we may inadvertently copy in our &lt;code&gt;node_modules&lt;/code&gt; directory when, in fact, we are running &lt;code&gt;npm install&lt;/code&gt; again in our Dockerfile.&lt;/p&gt;

&lt;p&gt;This is a waste of time and space. A good guiding principle is to only copy in the files and directories we know are needed in our final image. So, if we take our earlier example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; node:20&lt;/span&gt;

&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; package.json package-lock.json /app/&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In addition, our &lt;code&gt;docker build&lt;/code&gt; is invoked with the full context of our project directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;.git/
node_modules/
app/
  index.js
  package.json
  package-lock.json
README.md
Dockerfile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our &lt;code&gt;COPY&lt;/code&gt; command is copying in our entire build context; we can easily visualize our build context with our &lt;a href="https://depot.dev/blog/build-context" rel="noopener noreferrer"&gt;debug build context feature&lt;/a&gt;. In this example, we are copying in many unnecessary files and directories like &lt;code&gt;.git&lt;/code&gt;, &lt;code&gt;node_modules&lt;/code&gt;, and our &lt;code&gt;README&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;It is far better to be more specific with our &lt;code&gt;COPY&lt;/code&gt; command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; ./app /app&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, we are only copying the &lt;code&gt;app&lt;/code&gt; folder into our build and final image.&lt;/p&gt;

&lt;h4&gt;
  
  
  Use &lt;code&gt;.dockerignore&lt;/code&gt; to exclude files and directories
&lt;/h4&gt;

&lt;p&gt;Sometimes, knowing exactly what files and directories are needed in our final image can be tricky. So we can use a &lt;code&gt;.dockerignore&lt;/code&gt; file to explicitly define the files we know should be excluded. For our example above, we could create a &lt;code&gt;.dockerignore&lt;/code&gt; file with the following contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.git
node_modules
README.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Avoid unnecessary dependencies from package managers
&lt;/h4&gt;

&lt;p&gt;We commonly install dependencies into our images from package managers like &lt;code&gt;npm&lt;/code&gt;, &lt;code&gt;pip&lt;/code&gt;, &lt;code&gt;apt&lt;/code&gt;, &lt;code&gt;apk&lt;/code&gt;, etc. for Node, Python, Debian, and Alpine images. It's important to be mindful of what dependencies we are installing and if they are needed in our final image. There are tricks we can sometimes use like &lt;a href="https://depot.dev/blog/dockerfile-linting-issues#3-use---no-install-recommends-to-avoid-installing-unnecessary-packages" rel="noopener noreferrer"&gt;&lt;code&gt;--no-install-recommends&lt;/code&gt;&lt;/a&gt; to avoid package managers installing additional dependencies that are not needed.&lt;/p&gt;

&lt;p&gt;Sometimes dependencies are only needed for building our application, but not for running it; in those cases, we can leverage multi-stage builds to avoid having them in our final image.&lt;/p&gt;

&lt;h4&gt;
  
  
  Leverage the &lt;code&gt;RUN&lt;/code&gt; cache for finer-grained control
&lt;/h4&gt;

&lt;p&gt;Also known as &lt;a href="https://depot.dev/blog/how-to-use-buildkit-cache-mounts-in-ci" rel="noopener noreferrer"&gt;BuildKit cache mounts&lt;/a&gt;. This specialized cache allows us to do more fine-grained caching across builds. Here is an example of the RUN cache in action with a Ubuntu image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; ubuntu&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; /etc/apt/apt.conf.d/docker-clean
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="nt"&gt;--mount&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;cache,target&lt;span class="o"&gt;=&lt;/span&gt;/var/cache/apt &lt;span class="se"&gt;\
&lt;/span&gt;    apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt-get &lt;span class="nt"&gt;--no-install-recommends&lt;/span&gt; &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; gcc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Reduce the total number of layers
&lt;/h3&gt;

&lt;p&gt;The more layers we have in our image, the more layers we have to rebuild when the cache is invalidated, and the more opportunities for the cache to be invalidated. Below are some handy tips for reducing the number of layers in our image.&lt;/p&gt;

&lt;h4&gt;
  
  
  Combine multiple &lt;code&gt;RUN&lt;/code&gt; commands where possible
&lt;/h4&gt;

&lt;p&gt;The number one Dockerfile lint issue we've detected in Depot is &lt;a href="https://depot.dev/blog/dockerfile-linting-issues#1-multiple-consecutive-run-instructions" rel="noopener noreferrer"&gt;multiple consecutive run instructions&lt;/a&gt;. The more we combine &lt;code&gt;RUN&lt;/code&gt; commands, the fewer layers we will have in our image. For example, if we had a Dockerfile like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;RUN &lt;/span&gt;some-command-that-creates-big-file
&lt;span class="k"&gt;RUN &lt;/span&gt;some-command-that-removes-big-file
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates an unnecessary layer in our image: the layer that initially created the big file. We can combine these two commands into a single &lt;code&gt;RUN&lt;/code&gt; command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;RUN &lt;/span&gt;some-command-that-creates-big-file &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    some-command-that-removes-big-file
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates the big file and removes it within a single layer, saving us an intermediate layer with the large file present.&lt;/p&gt;

&lt;h4&gt;
  
  
  Be thoughtful about base images
&lt;/h4&gt;

&lt;p&gt;The base image we use can significantly impact the number of layers in our image. Choosing an image closely related to the application or service we are containerizing can avoid recreating unnecessary layers. It can also help us stay updated with security patches and other updates that a particular framework or tool may have.&lt;/p&gt;

&lt;p&gt;It's also worth considering using smaller base images to improve build performance and reduce final image size. For instance, if we are building a Node.js application, we may be able to use the &lt;code&gt;node:alpine&lt;/code&gt; image instead of the &lt;code&gt;node&lt;/code&gt; image. This can reduce both the number of layers and the final size of our image.&lt;/p&gt;

&lt;h4&gt;
  
  
  Take advantage of multi-stage builds
&lt;/h4&gt;

&lt;p&gt;A multi-stage build allows us to have multiple &lt;code&gt;FROM&lt;/code&gt; instructions in our Dockerfile. This can be useful for reducing the number of layers in our final image. For instance, if we are building a Node.js application, we may have a Dockerfile like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;node:20-alpine&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;build&lt;/span&gt;

&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; package.json yarn.lock tsconfig.json ./&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;yarn &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--immutable&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; src/ ./src/&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;yarn build

&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; node:20-alpine&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=build /app/node_modules /app/node_modules&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=build /app/dist /app/dist&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["node", "./dist/index.js"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;The Docker build cache, when leveraged correctly, can significantly speed up our Docker image builds. By being mindful of how we order our layers, what we copy into our image, and how we structure our Dockerfiles, we can make our builds faster and more efficient.&lt;/p&gt;

&lt;p&gt;Using the Docker build cache efficiently can speed up your internal development, CI/CD pipelines, and deployments. Depot adds another speed boost by persisting your cache automatically across a distributed storage cluster that your entire team and CI workflows can share. With even faster caching and native Intel &amp;amp; Arm CPUs for zero-emulation builds, we've seen Depot folks get up to 30x faster Docker image builds.&lt;/p&gt;

&lt;p&gt;If you want to learn more about how Depot can help you optimize your Docker image builds, &lt;a href="https://depot.dev" rel="noopener noreferrer"&gt;sign up for our free trial&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>docker</category>
    </item>
    <item>
      <title>How to build multi-platform Docker images in GitHub Actions</title>
      <dc:creator>Kyle Galbraith</dc:creator>
      <pubDate>Mon, 30 Oct 2023 10:02:00 +0000</pubDate>
      <link>https://forem.com/kylegalbraith/how-to-build-multi-platform-docker-images-in-github-actions-2gal</link>
      <guid>https://forem.com/kylegalbraith/how-to-build-multi-platform-docker-images-in-github-actions-2gal</guid>
      <description>&lt;p&gt;In this post, we will focus on building multi-platform Docker images, as well as Arm images, in GitHub Actions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Multi-platform images and Arm support
&lt;/h2&gt;

&lt;p&gt;By default, Docker images are built for the architecture of the machine running the build. If you build an image on an Intel machine, the image will be built for Intel. If you build an image on an Arm machine, the image will be built for Arm.&lt;/p&gt;

&lt;p&gt;If you want to build an image for a different architecture than the machine you are building on, you can specify the &lt;code&gt;--platform&lt;/code&gt; flag during a Docker build. For example, if you are building on an Intel machine and want to build an Arm image, you can specify &lt;code&gt;--platform linux/arm64&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker build &lt;span class="nt"&gt;--platform&lt;/span&gt; linux/arm64 &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A Docker image can also be built for multiple architectures simultaneously. This produces what is often referred to as a multi-platform or multi-architecture image. If you want to build a multi-platform Docker image for both Intel and Arm, you can specify multiple platforms in the &lt;code&gt;--platform&lt;/code&gt; flag.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker build &lt;span class="nt"&gt;--platform&lt;/span&gt; linux/arm64,linux/amd64 &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A multi-platform Docker image build triggers two builds, one for each architecture, and produces a single image that supports both platforms. But, to build that image, one of the architectures must be emulated using &lt;a href="https://www.qemu.org/"&gt;qemu emulation&lt;/a&gt;. If this is a multi-platform image built in CI, like GitHub Actions, the Arm portion (i.e., &lt;code&gt;linux/arm64&lt;/code&gt;) will be emulated.&lt;/p&gt;

&lt;p&gt;Alternatively, there is the option to configure &lt;code&gt;docker buildx build&lt;/code&gt; to use multiple builders, one for each platform. This method removes the need to emulate the non-host machine architecture. But, in exchange, you have to run your own native builders. For more details, we have a blog post on &lt;a href="https://depot.dev/blog/building-arm-containers#option-3-running-your-own-builder-instances"&gt;running your own builder instances&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building multi-platform Docker images in GitHub Actions
&lt;/h2&gt;

&lt;p&gt;This section assumes you know the basics of building Docker images in GitHub Actions. If you're new to the topic, we have a &lt;a href="https://depot.dev/blog/docker-layer-caching-in-github-actions#building-docker-images-in-github-actions"&gt;blog post&lt;/a&gt; that covers the fundamentals.&lt;/p&gt;

&lt;p&gt;To build a multi-platform Docker image in GitHub Actions, we must configure QEMU emulation and &lt;code&gt;buildx&lt;/code&gt; in our workflow.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build multi-platform Docker image&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;build-with-docker&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build multi-platform Docker image&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-20.04&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v3&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker/setup-qemu-action@v3&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker/setup-buildx-action@v3&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker/build-push-action@v5&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;
          &lt;span class="na"&gt;platforms&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;linux/amd64,linux/arm64&lt;/span&gt;
          &lt;span class="na"&gt;cache-from&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;type=gha&lt;/span&gt;
          &lt;span class="na"&gt;cache-to&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;type=gha,mode=max&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Three steps in our workflow are worth noting:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;docker/setup-qemu-action&lt;/code&gt; configures QEMU emulation for the Arm portion of our multi-platform Docker image build. This is required for multi-platform Docker image builds in GitHub Actions, as the hosted runners are Intel machines.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;docker/setup-buildx-action&lt;/code&gt; configures &lt;code&gt;buildx&lt;/code&gt; for our workflow. It's required for multi-platform Docker image builds in GitHub Actions that use &lt;code&gt;docker/build-push-action&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;docker/build-push-action&lt;/code&gt; builds and pushes our Docker image. We specify the &lt;code&gt;platforms&lt;/code&gt; argument to build our image for both Intel and Arm architectures. If we wanted to push our image to a registry, we could add an additional step above &lt;code&gt;docker/build-push-action&lt;/code&gt; to login to our registry and then specify the &lt;code&gt;push&lt;/code&gt; argument. We specify the &lt;code&gt;cache-from&lt;/code&gt; and &lt;code&gt;cache-to&lt;/code&gt; parameters to store the Docker layer cache via the GitHub Cache API.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker/login-action@v3&lt;/span&gt;
  &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;username&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.DOCKERHUB_USERNAME }}&lt;/span&gt;
    &lt;span class="na"&gt;password&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.DOCKERHUB_TOKEN }}&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker/build-push-action@v5&lt;/span&gt;
  &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;
    &lt;span class="na"&gt;platforms&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;linux/amd64,linux/arm64&lt;/span&gt;
    &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;your-repo&amp;gt;:&amp;lt;your-tag&amp;gt;&lt;/span&gt;
    &lt;span class="na"&gt;cache-from&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;type=gha&lt;/span&gt;
    &lt;span class="na"&gt;cache-to&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;type=gha,mode=max&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you wanted to build a Docker image for only Arm, you would change the &lt;code&gt;platforms&lt;/code&gt; argument to just &lt;code&gt;linux/arm64&lt;/code&gt; but keep the rest of the workflow to use QEMU emulation.&lt;/p&gt;
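&lt;p&gt;As a minimal sketch of that change, only the &lt;code&gt;platforms&lt;/code&gt; value in the build step needs to be updated:&lt;/p&gt;

```yaml
# Same workflow as before; the QEMU and buildx setup steps are unchanged.
- uses: docker/build-push-action@v5
  with:
    context: .
    platforms: linux/arm64
    cache-from: type=gha
    cache-to: type=gha,mode=max
```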

&lt;p&gt;This is a basic example of building multi-platform Docker images in GitHub Actions with QEMU emulation. It's functional, but it has limitations that are worth noting:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The Arm portion of the build is emulated using QEMU. We've seen up to 30x speedups when building on native Arm machines vs emulating Arm in CI via our benchmarks like &lt;a href="https://depot.dev/benchmark/temporal"&gt;Temporal's multi-platform build&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;The GitHub Cache API only supports a &lt;a href="https://docs.github.com/en/actions/using-workflows/caching-dependencies-to-speed-up-workflows#usage-limits-and-eviction-policy"&gt;maximum size of 10 GB&lt;/a&gt; for the entire repository, and each architecture must share that 10 GB limit.&lt;/li&gt;
&lt;li&gt;Loading and saving the cache over the network is slow, which can negate any performance benefit of cached layers for simple image builds.&lt;/li&gt;
&lt;li&gt;The cache is locked to GitHub Actions and can't be used in other systems or on local machines.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;An alternative to emulation is to run your own &lt;a href="https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners/using-self-hosted-runners-in-a-workflow"&gt;GitHub Action runners&lt;/a&gt; on Arm instances. This comes with a significant increase in infrastructure and complexity you must manage.&lt;/p&gt;

&lt;h2&gt;
  
  
  A managed solution
&lt;/h2&gt;

&lt;p&gt;We built Depot to eliminate the pain of emulation and other limitations above, not only in GitHub Actions but in all &lt;a href="https://depot.dev/integrations"&gt;CI providers&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Depot is a remote container build service that runs an optimized version of BuildKit on native cloud instances for Intel and Arm. We manage your Docker layer cache on fast NVMe SSDs that make your cache instantly available across builds. The layer cache can be used from your CI build in GitHub Actions and your local machine when you use &lt;code&gt;depot build&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Each build runs on a dedicated single-tenant BuildKit machine with native Intel or Arm CPUs. Each instance includes 16 CPUs, 32 GB of memory, and a persistent layer cache that can be expanded up to 500 GB.&lt;/p&gt;

&lt;p&gt;Depot effectively provides on-demand access to one or more remote BuildKit builders and automatically routes each build platform to the machine best suited to build for that platform.&lt;/p&gt;

&lt;p&gt;If you are interested in trying out Depot in your GitHub Actions workflow, check out our &lt;a href="https://depot.dev/docs/integrations/github-actions"&gt;GitHub Actions integration guide&lt;/a&gt; and &lt;a href="https://depot.dev/start"&gt;get started with Depot&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>devops</category>
      <category>github</category>
    </item>
    <item>
      <title>Faster Docker builds for Arm without emulation</title>
      <dc:creator>Kyle Galbraith</dc:creator>
      <pubDate>Tue, 17 Oct 2023 08:40:16 +0000</pubDate>
      <link>https://forem.com/depot/faster-docker-builds-for-arm-without-emulation-ep0</link>
      <guid>https://forem.com/depot/faster-docker-builds-for-arm-without-emulation-ep0</guid>
      <description>&lt;p&gt;Building a Docker image for the Arm architecture is loaded with inefficiencies. With the adoption of Arm-based devices like M1 / M2 MacBooks, and the growing popularity of Arm-based servers like AWS Graviton, it is becoming more important to build Arm and multi-platform containers. It can be a challenge to build these containers efficiently.&lt;/p&gt;

&lt;h2&gt;
  
  
  Emulation is painfully slow
&lt;/h2&gt;

&lt;p&gt;Today, most people build Arm images using emulation. Why? Because emulation is built into Docker and &lt;code&gt;buildx&lt;/code&gt; out of the box. By passing the &lt;code&gt;--platform linux/arm64&lt;/code&gt; flag to &lt;code&gt;docker buildx build&lt;/code&gt;, Docker will use emulation to build the image for Arm if the host architecture is Intel.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker buildx build &lt;span class="nt"&gt;--platform&lt;/span&gt; linux/arm64 &lt;span class="nt"&gt;-t&lt;/span&gt; org/repo:tag &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or, to build an image for multiple architectures, also known as a multi-platform image, you can pass multiple platforms. Here we tell Docker to build an image for both Intel &amp;amp; Arm in parallel:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker buildx build &lt;span class="nt"&gt;--platform&lt;/span&gt; linux/arm64,linux/amd64 &lt;span class="nt"&gt;-t&lt;/span&gt; org/repo:tag &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Side note on multi-platform images
&lt;/h3&gt;

&lt;p&gt;Building multi-architecture Docker images like the example above results in one half of the build happening on the native host platform and the other half on an emulated platform. But multiple container images aren't produced. It's one image that contains an image manifest stating which platforms the Docker container image can run on. You can use tools like &lt;code&gt;docker buildx imagetools&lt;/code&gt; or &lt;code&gt;docker manifest&lt;/code&gt; to inspect these manifests (note: &lt;code&gt;docker manifest&lt;/code&gt; is still an experimental feature).&lt;/p&gt;

&lt;p&gt;So if you were to &lt;code&gt;docker run --rm org/repo:tag&lt;/code&gt; on an Arm server, the daemon would ask the Docker registry for the image manifest and select the image with a matching platform for the launched container.&lt;/p&gt;

&lt;p&gt;Emulation is a logical place to start, as it's what Docker Desktop supports out of the box. But it's slow, really slow, and it gets dramatically worse for more complex applications:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://depot.dev/benchmark/mastodon" rel="noopener noreferrer"&gt;Mastodon's emulated builds&lt;/a&gt; take around 55 minutes to complete.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://depot.dev/benchmark/temporal" rel="noopener noreferrer"&gt;Temporal's emulated builds&lt;/a&gt; take as many as 80 minutes to complete!&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The benchmarks shown above are happening in GitHub Actions with Intel runners and asking for multi-platform images. So, when we need to build the Arm image (&lt;code&gt;linux/arm64&lt;/code&gt;), we have to use emulation during the Docker build of that architecture.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building Docker images for Arm natively
&lt;/h2&gt;

&lt;p&gt;You can use other tricks like &lt;a href="https://depot.dev/blog/building-arm-containers#alternative-cross-compilation" rel="noopener noreferrer"&gt;cross-compilation&lt;/a&gt; in your Dockerfile to try and work around the slowness of emulation. But it's not a great experience. You have to get crafty with multi-stage builds and maintain cross-compilation toolchains.&lt;/p&gt;

&lt;p&gt;The far better option is to build Docker images for Arm natively by running the builds on real Arm hardware.&lt;/p&gt;

&lt;p&gt;Unfortunately, this isn't a great experience if you're trying to do it yourself. You must &lt;a href="https://depot.dev/blog/building-arm-containers#option-3-running-your-own-builder-instances" rel="noopener noreferrer"&gt;run your own builder instances&lt;/a&gt;, maintain them, keep them up to date, and ensure they are always available. It's a lot of DevOps work.&lt;/p&gt;

&lt;h2&gt;
  
  
  The fastest way to build Docker images for Arm
&lt;/h2&gt;

&lt;p&gt;With &lt;a href="https://depot.dev/start" rel="noopener noreferrer"&gt;Depot&lt;/a&gt;, you get native Intel &amp;amp; Arm builders right out of the box—no emulation, no complicated cross-compilation, and no running your own builders. Just fast builds on native hardware.&lt;/p&gt;

&lt;p&gt;It's as simple as installing our &lt;a href="https://depot.dev/docs/cli/installation" rel="noopener noreferrer"&gt;&lt;code&gt;depot&lt;/code&gt; CLI&lt;/a&gt; and running our &lt;a href="https://depot.dev/docs/cli/reference#depot-configure-docker" rel="noopener noreferrer"&gt;&lt;code&gt;configure-docker&lt;/code&gt;&lt;/a&gt; command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;depot configure-docker
docker buildx build &lt;span class="nt"&gt;--platform&lt;/span&gt; linux/amd64,linux/arm64 &lt;span class="nt"&gt;-t&lt;/span&gt; org/repo:tag &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;+] Building 0.9s &lt;span class="o"&gt;(&lt;/span&gt;32/32&lt;span class="o"&gt;)&lt;/span&gt; FINISHED                                                                                                                                                     docker-container:depot_456
 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;depot] build: https://depot.dev/orgs/123/projects/456/builds/dw0n0x4b4g                                                                                                          0.0s
 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;depot] build: https://depot.dev/orgs/123/projects/456/builds/ttcb3q4ss5                                                                                                          0.0s
 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;depot] launching arm64 machine                                                                                                                                                                                                 0.4s
 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;depot] launching amd64 machine                                                                                                                                                                                                 0.3s
 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;depot] connecting to arm64 machine                                                                                                                                                                                             0.1s
 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;depot] connecting to amd64 machine
 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;internal] load .dockerignore                                                                                                                                                                                                   0.1s
 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; transferring context: 116B                                                                                                                                                                                                   0.1s
 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;internal] load build definition from Dockerfile                                                                                                                                                                                0.1s
 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; transferring dockerfile: 435B                                                                                                                                                                                                0.1s
 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;internal] load .dockerignore                                                                                                                                                                                                   0.1s
 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; transferring context: 116B                                                                                                                                                                                                   0.1s
 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;internal] load build definition from Dockerfile                                                                                                                                                                                0.1s
 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; transferring dockerfile: 435B                                                                                                                                                                                                0.1s
 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;linux/amd64 internal] load metadata &lt;span class="k"&gt;for &lt;/span&gt;docker.io/library/node:16-alpine                                                                                                                                                       0.5s
 &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;linux/arm64 internal] load metadata &lt;span class="k"&gt;for &lt;/span&gt;docker.io/library/node:16-alpine
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With a Depot project configured, you can now build native multi-platform Docker images for Arm without the pain of emulation, cross-compilation, or running your own builders.&lt;/p&gt;

&lt;p&gt;The results of building Docker images for Arm with Depot speak for themselves:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The &lt;a href="https://depot.dev/benchmark/mastodon" rel="noopener noreferrer"&gt;Mastodon benchmark&lt;/a&gt; went from &lt;strong&gt;55 minutes&lt;/strong&gt; with emulation, down to &lt;strong&gt;3 minutes&lt;/strong&gt; with native CPUs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;a href="https://depot.dev/benchmark/temporal" rel="noopener noreferrer"&gt;Temporal benchmark&lt;/a&gt; went from &lt;strong&gt;80 minutes&lt;/strong&gt; with emulation, down to &lt;strong&gt;2 minutes&lt;/strong&gt; with native CPUs.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Try it out
&lt;/h2&gt;

&lt;p&gt;Depot launches on-demand builders for both Intel &amp;amp; Arm with 16 CPUs, 32GB of memory, and up to 500GB of persistent cache storage that is shared across all your builds and teammates.&lt;/p&gt;

&lt;p&gt;If you're looking for the fastest way to build Docker images for Arm, &lt;a href="https://depot.dev/start" rel="noopener noreferrer"&gt;sign up for Depot&lt;/a&gt; and try it yourself.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>arm</category>
      <category>devops</category>
    </item>
    <item>
      <title>The complete guide to getting started with building Docker images</title>
      <dc:creator>Kyle Galbraith</dc:creator>
      <pubDate>Wed, 27 Sep 2023 08:09:42 +0000</pubDate>
      <link>https://forem.com/kylegalbraith/the-complete-guide-to-getting-started-with-building-docker-images-4cch</link>
      <guid>https://forem.com/kylegalbraith/the-complete-guide-to-getting-started-with-building-docker-images-4cch</guid>
      <description>&lt;p&gt;Packaging applications and services into containers has been around for a while. Docker was a technology that came out of another idea called DotCloud in 2013. So, even the Docker containers we know and love today are a decade old. But it's important to remember that the underlying technology of a Docker container is even older.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Docker?
&lt;/h2&gt;

&lt;p&gt;The underlying technologies backing Docker containers are low-level Linux kernel features like cgroups, namespaces, and a union-capable file system like OverlayFS. These technologies are what allow Docker containers to be so lightweight and portable. Combined, they allow a single Linux host to run multiple isolated containers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Installing Docker
&lt;/h3&gt;

&lt;p&gt;To get started with Docker, you need to install it first. Depending on what you're running containers on, there are multiple ways to do that. Here are three Docker installation guides:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/desktop/install/linux-install/"&gt;Install Docker for Linux&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/desktop/install/mac-install/"&gt;Install Docker for Mac&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/desktop/install/windows-install/"&gt;Install Docker for Windows&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each Docker installation guide ultimately installs Docker Desktop and configures the Docker engine on the given operating system.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a Docker image vs. a Docker container?
&lt;/h2&gt;

&lt;p&gt;When getting started with Docker, a common question is, what is the difference between a Docker image and a Docker container? A Docker image is a series of layers stacked on each other that form the dependencies and source code needed to run your application. During a Docker image build, all those layers get packaged together to produce a final Docker image.&lt;/p&gt;

&lt;p&gt;A Docker container is a runnable instance of a Docker image. You can run multiple containers with the same image to run multiple copies of your application or service.&lt;/p&gt;

&lt;p&gt;A Docker image is the source code and dependencies packaged together, and the Docker container is the running instance of that image.&lt;/p&gt;

&lt;h2&gt;
  
  
  So, what is a Dockerfile?
&lt;/h2&gt;

&lt;p&gt;A Dockerfile is a text file containing a series of instructions that are executed in order to build a Docker image. The Dockerfile is the recipe that produces our Docker image.&lt;/p&gt;

&lt;p&gt;As we will see in a minute, a Dockerfile is executed from top to bottom during a Docker image build. Instructions are invoked in order, and each instruction generally maps to an image layer. Those layers, stacked one on top of another, form our final Docker image.&lt;/p&gt;
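&lt;p&gt;As a minimal sketch (the file names are hypothetical), a Dockerfile for a small Node.js service might look like this, with each instruction generally producing one layer:&lt;/p&gt;

```dockerfile
# Base image layer
FROM node:20-alpine
# Set the working directory for subsequent instructions
WORKDIR /app
# Copy dependency manifests and install dependencies
COPY package.json package-lock.json ./
RUN npm ci
# Copy the application source
COPY . .
# Default command when a container is started from this image
CMD ["node", "index.js"]
```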

&lt;h3&gt;
  
  
  Dockerfile instructions and what they do
&lt;/h3&gt;

&lt;p&gt;Several different instructions can be used in a Dockerfile. Each instruction is a command that is executed during the build process. The most common instructions are:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Instruction&lt;/th&gt;
&lt;th&gt;What it does&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;FROM&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Defines a new build stage and sets the base image for that stage&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;RUN&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Executes any commands it is given in a new layer on top of the current image that has been built up to that point&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;COPY&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Copies files and directories from the build context into the image filesystem at the given destination path, creating a new layer&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;ADD&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;A more advanced version of &lt;code&gt;COPY&lt;/code&gt; that supports things like local tar extraction and remote URLs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;CMD&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Defines the defaults for when this image is launched as a container, usually includes an executable to invoke, but that's &lt;em&gt;not&lt;/em&gt; required&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;ENTRYPOINT&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Configures the executables or commands that will run once the container is initialized&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;USER&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Sets the user that the container is run under, often used to run containers as non-root&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;LABEL&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Adds key-value labels to the image being built; note that labels are passed down from base images&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;ARG&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Define build-time only variables that can be used during the Docker image build&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;ENV&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Sets environment variables from within the Docker image that can be used during the build process or when the container is run&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;EXPOSE&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Documents a port the container listens on at runtime; it does not actually publish the port&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;WORKDIR&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Sets the working directory for the commands that follow it&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;VOLUME&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Creates a mount point with a specific name that is bound to a mounted volume from the underlying host or another container&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
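As a quick illustration, here is a hypothetical Dockerfile that combines several of the instructions above. The base image is real, but the file names, label key, and commands are placeholders, not the example project we build later:

```dockerfile
# Defines a build stage starting from an official Node base image.
FROM node:20

# Build-time variable, only available during the build.
ARG APP_VERSION=dev

# Environment variable baked into the image, available at runtime.
ENV NODE_ENV=production

# Metadata label attached to the image.
LABEL org.example.version=$APP_VERSION

# All following instructions run relative to /app.
WORKDIR /app

# Copy the build context into the image.
COPY . .

# Execute a command in a new layer on top of the current image.
RUN npm install

# Document the port the app listens on and run as a non-root user.
EXPOSE 3000
USER node

# Default command when a container is launched from this image.
CMD ["node", "index.js"]
```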

&lt;h2&gt;
  
  
  Building a Docker image
&lt;/h2&gt;

&lt;p&gt;Now that we have a solid foundation, we can build a Docker image and see how a Docker image build works with a sample application. For this example, we will use an example &lt;a href="https://fastify.dev/"&gt;Fastify API&lt;/a&gt; that uses TypeScript and &lt;code&gt;pnpm&lt;/code&gt; for package management. The example project can be cloned via &lt;code&gt;git clone&lt;/code&gt; from our &lt;a href="https://github.com/depot/examples/tree/main/node/pnpm-fastify"&gt;GitHub repository&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;After cloning the project, we can run &lt;code&gt;pnpm install&lt;/code&gt; and &lt;code&gt;pnpm build&lt;/code&gt; from the root of the example to install our dependencies and build our TypeScript source code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pnpm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; pnpm build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After building the code, we should see a &lt;code&gt;dist&lt;/code&gt; directory with our compiled code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;ls &lt;/span&gt;dist/
  index.js
  index.js.map
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can now run the example API outside a Docker container to see if it works as expected. We use &lt;code&gt;curl&lt;/code&gt; to hit a &lt;code&gt;/health&lt;/code&gt; endpoint on that API that returns a simple JSON response.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pnpm start
curl localhost:3000/health
&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"alive"&lt;/span&gt;:true&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Keeping our Docker image size down
&lt;/h3&gt;

&lt;p&gt;Before jumping straight into writing a Dockerfile for our example project, we need to start with a &lt;code&gt;.dockerignore&lt;/code&gt; first.&lt;/p&gt;

&lt;p&gt;A &lt;code&gt;.dockerignore&lt;/code&gt; file tells the build what files and directories to ignore and exclude from the Docker build context when we run &lt;code&gt;docker build&lt;/code&gt;. Our project git repositories often contain many files and folders that we don't need in our final image or the build context itself.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;node_modules
Dockerfile
.git
.gitignore
dist/**
README.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This &lt;code&gt;.dockerignore&lt;/code&gt; file tells the Docker build to ignore all of these files and directories during the build. These files will be excluded from the Docker build context and thus won't be copied via any &lt;code&gt;COPY&lt;/code&gt; or &lt;code&gt;ADD&lt;/code&gt; instructions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Writing a Dockerfile
&lt;/h3&gt;

&lt;p&gt;Now that we have a &lt;code&gt;.dockerignore&lt;/code&gt; file, we can write our Dockerfile. For a more advanced &lt;code&gt;Dockerfile&lt;/code&gt; that is highly optimized, we can use our &lt;a href="https://depot.dev/docs/languages/node-pnpm-dockerfile"&gt;best-practice Dockerfile for Node.js &amp;amp; &lt;code&gt;pnpm&lt;/code&gt;&lt;/a&gt;. The optimized Dockerfile uses multi-stage builds, optimized Docker layer caching, and BuildKit cache mounts to optimize the docker build image process.&lt;/p&gt;

&lt;h4&gt;
  
  
  Simple example Dockerfile
&lt;/h4&gt;

&lt;p&gt;For this post, we will use a more straightforward example Dockerfile to walk through core concepts.&lt;/p&gt;

&lt;p&gt;First, we need a base image for our Docker image to be built from. Since our example project is in Node, an official Node base image like &lt;code&gt;node:20&lt;/code&gt; is an excellent place to start.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; node:20&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once we have a base image, we can install our dependencies and build our application. We first enable &lt;a href="https://nodejs.org/api/corepack.html"&gt;corepack&lt;/a&gt;, an experimental tool for managing versions of package managers. We then copy in our &lt;code&gt;package.json&lt;/code&gt; and &lt;code&gt;pnpm-lock.yaml&lt;/code&gt; files and install our dependencies. Finally, we copy in our source code and build our application.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;RUN &lt;/span&gt;corepack &lt;span class="nb"&gt;enable&lt;/span&gt;

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; package.json pnpm-lock.yaml ./&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;pnpm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--frozen-lockfile&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;pnpm build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The final thing to do is to set our &lt;code&gt;CMD&lt;/code&gt; instruction, which tells the container what to run when it's launched. In our case, we want to run our compiled &lt;code&gt;index.js&lt;/code&gt; file, our API.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; NODE_ENV production&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["node", "./dist/index.js"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note: This Dockerfile is not optimized for size or build performance. It's meant to be an example to follow along with. For an optimized version that uses multi-stage builds and Docker layer caching, see our &lt;a href="https://depot.dev/docs/languages/node-pnpm-dockerfile"&gt;best-practice Dockerfile for Node.js &amp;amp; &lt;code&gt;pnpm&lt;/code&gt;&lt;/a&gt;.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Building our Docker image
&lt;/h3&gt;

&lt;p&gt;Now that we have a Dockerfile, we can start building our Docker image with &lt;code&gt;docker build&lt;/code&gt;. We can run this command from the root of our example project. We tag our resulting image with the name &lt;code&gt;fastify-example&lt;/code&gt; via the &lt;code&gt;--tag&lt;/code&gt; flag.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker build &lt;span class="nt"&gt;--tag&lt;/span&gt; fastify-example &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If we run the &lt;code&gt;docker images&lt;/code&gt; command, we should see our new image in our list of container images.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker images
REPOSITORY                                TAG       IMAGE ID       CREATED          SIZE
fastify-example                           latest    7e3f51733ddd   8 seconds ago    1.18GB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Running our Docker image
&lt;/h3&gt;

&lt;p&gt;Now that we have built our Docker image with the &lt;code&gt;fastify-example&lt;/code&gt; tag, we can try running it locally. We can run a Docker container of our image via the &lt;code&gt;docker run&lt;/code&gt; command.&lt;/p&gt;

&lt;p&gt;We run our Docker container with the &lt;code&gt;-p&lt;/code&gt; (i.e., &lt;code&gt;--publish&lt;/code&gt;) flag to specify that we want to forward traffic from port 8080 on our host machine to port 3000 in the container, since that is the port our API listens on. We also run our container with the &lt;code&gt;-d&lt;/code&gt; flag, which tells the Docker daemon to run the container detached in the background.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-p&lt;/span&gt; 8080:3000 &lt;span class="nt"&gt;-d&lt;/span&gt; fastify-example
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can verify our container is running via the &lt;code&gt;docker ps&lt;/code&gt; command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker ps
CONTAINER ID   IMAGE             COMMAND                  CREATED         STATUS         PORTS                    NAMES
5595944ea42b   fastify-example   &lt;span class="s2"&gt;"docker-entrypoint.s…"&lt;/span&gt;   3 seconds ago   Up 2 seconds   0.0.0.0:8080-&amp;gt;3000/tcp   peaceful_brahmagupta
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can also verify our Docker container is up and working by hitting the &lt;code&gt;/health&lt;/code&gt; endpoint with &lt;code&gt;curl&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl localhost:8080/health
&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"alive"&lt;/span&gt;:true&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can also use other Docker CLI commands like &lt;code&gt;docker logs&lt;/code&gt; to see the logs from our container. Note that the &lt;code&gt;logs&lt;/code&gt; command expects the container ID or name, not the image name. From our example above, the name of our container is &lt;code&gt;peaceful_brahmagupta&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker logs peaceful_brahmagupta
&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"level"&lt;/span&gt;:30,&lt;span class="s2"&gt;"time"&lt;/span&gt;:1695637221083,&lt;span class="s2"&gt;"pid"&lt;/span&gt;:1,&lt;span class="s2"&gt;"hostname"&lt;/span&gt;:&lt;span class="s2"&gt;"6e8107cd9149"&lt;/span&gt;,&lt;span class="s2"&gt;"msg"&lt;/span&gt;:&lt;span class="s2"&gt;"Server listening at http://0.0.0.0:3000"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can use the &lt;code&gt;docker inspect&lt;/code&gt; command to get a low-level description of our container. The JSON output can be helpful for debugging and troubleshooting.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker inspect peaceful_brahmagupta
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, we can call &lt;code&gt;docker stop&lt;/code&gt; to stop our container with a graceful shutdown, or &lt;code&gt;docker kill&lt;/code&gt; to terminate it immediately.&lt;/p&gt;
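For example, using the container name from the earlier `docker ps` output (your container name will differ):

```shell
# Graceful shutdown: sends SIGTERM, then SIGKILL after a grace period.
docker stop peaceful_brahmagupta

# Immediate termination: sends SIGKILL right away.
docker kill peaceful_brahmagupta
```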

&lt;h3&gt;
  
  
  Pushing our Docker image to a registry
&lt;/h3&gt;

&lt;p&gt;When we build a Docker image locally or via our &lt;a href="https://depot.dev/start"&gt;remote builders in Depot&lt;/a&gt;, the container image, by default, is kept on the machine that ran the Docker image build. That's exactly what we want when running the image locally, as we did in the earlier step.&lt;/p&gt;

&lt;p&gt;But, most of the time, we want to push our image to a Docker container registry so we can share it with other developers, deploy it to our production environments, etc.&lt;/p&gt;

&lt;p&gt;There are numerous container registries like Docker Hub, Amazon Elastic Container Registry (ECR), GCP Artifact Registry, and GitHub Container Registry. For this example, we will assume we are using GitHub Container Registry.&lt;/p&gt;

&lt;p&gt;To push to a Docker container registry, we generally need to call &lt;code&gt;docker login&lt;/code&gt; to authenticate to our registry. For GitHub Container Registry, we can use the &lt;code&gt;ghcr.io&lt;/code&gt; hostname, our GitHub username, and a &lt;a href="https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry#authenticating-with-a-personal-access-token-classic"&gt;personal access token (PAT) to authenticate&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker login ghcr.io &lt;span class="nt"&gt;-u&lt;/span&gt; GITHUB_USERNAME &lt;span class="nt"&gt;--password&lt;/span&gt; GITHUB_PAT
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Login succeeded
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
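Note that passing the token directly on the command line exposes it to your shell history; a safer pattern is to pipe it in via `--password-stdin`. A sketch, assuming the token is stored in a `GITHUB_PAT` environment variable:

```shell
# Read the token from stdin instead of placing it on the command line.
echo "$GITHUB_PAT" | docker login ghcr.io -u GITHUB_USERNAME --password-stdin
```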



&lt;p&gt;After logging into our container registry, we can build our image with a tag that includes the registry hostname, our GitHub username, and the image name. We also specify the &lt;code&gt;--push&lt;/code&gt; flag, which will push our image to the registry we've tagged it with.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker build &lt;span class="nt"&gt;-t&lt;/span&gt; ghcr.io/GITHUB_USERNAME/fastify-example:latest &lt;span class="nt"&gt;--push&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Alternatively, we can use &lt;code&gt;docker tag&lt;/code&gt; and &lt;code&gt;docker push&lt;/code&gt; to push an image we've built locally to a registry.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker tag fastify-example ghcr.io/GITHUB_USERNAME/fastify-example:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This tags our &lt;code&gt;fastify-example&lt;/code&gt; Docker image with &lt;code&gt;ghcr.io/GITHUB_USERNAME/fastify-example:latest&lt;/code&gt;, and then we can push it to our registry.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker push ghcr.io/GITHUB_USERNAME/fastify-example:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this post, we've covered how to get started with Docker: building a Docker image from a Dockerfile, running a Docker container from that image, and pushing that image to a container registry. All of these are handy to know when working with containers locally and in production.&lt;/p&gt;

&lt;p&gt;With Depot, we build the Docker image up to 20x faster and provide critical insights about how to rewrite your Dockerfile to build faster, leverage caching, and more. We remove the need to think about the artifacts Docker produces, allowing you to focus on writing your own code and getting it into production faster.&lt;/p&gt;

&lt;p&gt;You can &lt;a href="https://depot.dev/start"&gt;sign up for an account&lt;/a&gt; and get your first 60 minutes of build time free. If you have questions, comments, or want to chat more about containers, check out our &lt;a href="https://discord.gg/MMPqYSgDCg"&gt;Community Discord&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>docker</category>
      <category>containers</category>
    </item>
    <item>
      <title>Top 10 common Dockerfile linting issues</title>
      <dc:creator>Kyle Galbraith</dc:creator>
      <pubDate>Fri, 15 Sep 2023 12:46:23 +0000</pubDate>
      <link>https://forem.com/kylegalbraith/top-10-common-dockerfile-linting-issues-29mh</link>
      <guid>https://forem.com/kylegalbraith/top-10-common-dockerfile-linting-issues-29mh</guid>
      <description>&lt;p&gt;We recently announced the ability to lint Dockerfiles on build in our recent &lt;a href="https://depot.dev/blog/dockerfile-lint-on-build"&gt;lint &amp;amp; build blog post&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Running a Dockerfile linter on a Docker image we want to build can allow us to follow some of the best practices around writing efficient Docker images. Efficient could mean faster builds or smaller image sizes.&lt;/p&gt;

&lt;p&gt;This post covers the ten most common Dockerfile linting issues we've seen flowing through Depot to date. We expect these to change over time, but hopefully they can give everyone a good starting point for improving their Dockerfiles. We'll cover each issue, why it's a problem, and how to fix it.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to lint a Dockerfile
&lt;/h2&gt;

&lt;p&gt;With Depot, we make use of two Dockerfile linters, &lt;a href="https://github.com/hadolint/hadolint"&gt;&lt;code&gt;hadolint&lt;/code&gt;&lt;/a&gt; and a set of Dockerfile linter rules that &lt;a href="https://semgrep.dev/p/dockerfile"&gt;Semgrep has written&lt;/a&gt; to make a bit of a smarter Dockerfile linter.&lt;/p&gt;

&lt;p&gt;To lint a Dockerfile on-demand with Depot, we can pass the &lt;a href="https://depot.dev/docs/cli/reference#depot-build"&gt;--lint flag&lt;/a&gt; during a build, which runs the linters before the build starts.&lt;/p&gt;

&lt;p&gt;Of course, we can also run &lt;code&gt;hadolint&lt;/code&gt; ourselves locally without Depot with our own specific rules and config file. Or even use the &lt;a href="https://hadolint.github.io/hadolint/"&gt;hadolint Dockerfile linter UI&lt;/a&gt;. To run hadolint locally, you can either install it via brew or use the Docker image and pipe your Dockerfile into it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;hadolint Dockerfile
&lt;span class="c"&gt;# or use the Docker image&lt;/span&gt;
docker run &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; ghcr.io/hadolint/hadolint &amp;lt; Dockerfile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  1. Multiple consecutive &lt;code&gt;RUN&lt;/code&gt; instructions
&lt;/h2&gt;

&lt;p&gt;Also known as lint error &lt;code&gt;DL3059&lt;/code&gt; from hadolint.&lt;/p&gt;

&lt;p&gt;This is the most common issue we see with Dockerfiles flowing through Depot. It's present in &lt;strong&gt;nearly 30% of all Dockerfiles&lt;/strong&gt; we've seen. The problem is having multiple consecutive &lt;code&gt;RUN&lt;/code&gt; instructions that could be combined into one. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;RUN &lt;/span&gt;download_a_really_big_file
&lt;span class="k"&gt;RUN &lt;/span&gt;remove_the_really_big_file
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It's helpful to know how &lt;a href="https://depot.dev/blog/fast-dockerfiles-theory-and-practice"&gt;Docker layer caching works&lt;/a&gt; to understand why this might be problematic. In short, each new &lt;code&gt;RUN&lt;/code&gt; statement in a Dockerfile results in a new layer in the final image.&lt;/p&gt;

&lt;p&gt;In this example, we create a new layer when we download the big file and another layer when we remove it. Both layers will be present in the final image. So, the final image will contain the big file in the first layer, making the final image larger than it needs to be.&lt;/p&gt;
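We can see this in practice with `docker history`, which shows how much size each layer contributes to an image. A sketch, assuming an image tagged `my-image` built from the Dockerfile above:

```shell
# Each RUN instruction appears as its own layer with its own size.
# The layer that downloaded the big file keeps its full size even
# though a later layer removed the file.
docker history my-image
```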

&lt;p&gt;However, &lt;code&gt;DL3059&lt;/code&gt; can also be problematic if we use two different &lt;code&gt;RUN&lt;/code&gt; statements to install packages. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;RUN &lt;/span&gt;fetch_package_registry_list
&lt;span class="k"&gt;RUN &lt;/span&gt;install_some_package
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first &lt;code&gt;RUN&lt;/code&gt; statement will fetch the package registry list in this example. The second &lt;code&gt;RUN&lt;/code&gt; statement will install the package. But if the package registry list changes between the first and second &lt;code&gt;RUN&lt;/code&gt; statements, then the package registry list will be out of date when we install the package.&lt;/p&gt;

&lt;h3&gt;
  
  
  Solution to &lt;code&gt;DL3059&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;When working with large files that we add and remove during a &lt;code&gt;docker build&lt;/code&gt;, combining those operations into one atomic &lt;code&gt;RUN&lt;/code&gt; statement is helpful.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;RUN &lt;/span&gt;download_a_really_big_file &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    remove_the_really_big_file
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This reduces the final image size by removing the intermediate layer that contains the big file, as we download and remove it in the same &lt;code&gt;RUN&lt;/code&gt; statement. Note that combining &lt;code&gt;RUN&lt;/code&gt; statements can have cache implications if you merge commands whose results rarely change with commands that frequently invalidate the cache. In those situations, you likely want to keep the cacheable portion in its own &lt;code&gt;RUN&lt;/code&gt; statement.&lt;/p&gt;

&lt;p&gt;For the package registry example, we want to combine the fetch registry list with the install package into one &lt;code&gt;RUN&lt;/code&gt; statement.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;RUN &lt;/span&gt;fetch_package_registry_list &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    install_some_package
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This ensures that the package registry list is updated when we install the package instead of potentially being outdated.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Pin versions during &lt;code&gt;apt-get install&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;A more controversial Dockerfile linting issue is &lt;code&gt;DL3008&lt;/code&gt; from hadolint. This issue is also present in &lt;strong&gt;30% of all Dockerfiles&lt;/strong&gt;. The problem arises when not pinning versions during &lt;code&gt;apt-get install&lt;/code&gt;. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; ubuntu:22.04&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; some-package
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you don't version pin, you're not forcing the &lt;code&gt;docker build&lt;/code&gt; to install a specific, known version of each package you need. This can lead to unexpected behavior when you build your Dockerfile or run the resulting image if you inadvertently install a newer version of a package than you expected.&lt;/p&gt;

&lt;h3&gt;
  
  
  Solution to &lt;code&gt;DL3008&lt;/code&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; ubuntu:22.04&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; some-package&lt;span class="o"&gt;=&lt;/span&gt;1.2.&lt;span class="k"&gt;*&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By pinning the version of &lt;code&gt;some-package&lt;/code&gt;, the build is forced to retrieve the particular version. This allows you to build up guarantees about the packages you're installing in your Dockerfile and the dependencies of those packages.&lt;/p&gt;
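To find which versions are available to pin, we can query apt itself on the target base image. A sketch; `some-package` is a placeholder:

```shell
# List the candidate versions of a package available in the
# configured apt repositories.
apt-cache madison some-package

# Or show the installed and candidate versions with their origins.
apt-cache policy some-package
```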

&lt;p&gt;The reason it's controversial is that version pinning can delay security updates. For example, suppose you pin a package version that has a security vulnerability. In that case, you won't pick up the security fix when you build your Dockerfile until you change the pin to a new version. This is why it's essential to understand the packages you're installing and the security implications of pinning versions.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Use &lt;code&gt;--no-install-recommends&lt;/code&gt; to avoid installing unnecessary packages
&lt;/h2&gt;

&lt;p&gt;Another widespread linter error is &lt;code&gt;DL3015&lt;/code&gt;, installing unnecessary packages with &lt;code&gt;apt-get&lt;/code&gt;. This is present in &lt;strong&gt;22% of all Dockerfiles&lt;/strong&gt;. The issue arises when we're not using the &lt;code&gt;--no-install-recommends&lt;/code&gt; flag during &lt;code&gt;apt-get install&lt;/code&gt;. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; ubuntu:22.04&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; some-package
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you don't use the &lt;code&gt;--no-install-recommends&lt;/code&gt; flag, you install the package itself plus all of its recommended packages, potentially increasing the final size of your Docker image with packages you don't need.&lt;/p&gt;

&lt;h3&gt;
  
  
  Solution to &lt;code&gt;DL3015&lt;/code&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; ubuntu:22.04&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; some-package &lt;span class="nt"&gt;--no-install-recommends&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The solution is to pass the &lt;code&gt;--no-install-recommends&lt;/code&gt; flag to &lt;code&gt;apt-get install&lt;/code&gt;. This prevents the installation of recommended packages and reduces the final size of your container image. It's essential to understand which recommended packages you're skipping so you don't exclude a dependency you actually need.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Avoid using the cache directory when using &lt;code&gt;pip install&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;Docker layer caching comes in again when we're talking about &lt;code&gt;pip install&lt;/code&gt; during a Docker build. Hadolint error &lt;code&gt;DL3042&lt;/code&gt; is present in &lt;strong&gt;18% of all Dockerfiles&lt;/strong&gt;. The issue arises when we're not telling &lt;code&gt;pip install&lt;/code&gt; not to use a cache directory in our Dockerfile. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; python:3.11&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;pip3 &lt;span class="nb"&gt;install &lt;/span&gt;mysql-connector-python
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you don't tell &lt;code&gt;pip install&lt;/code&gt; to skip its cache directory, it installs the package and also writes a cache entry for it into the layer, one for every package installed via &lt;code&gt;pip&lt;/code&gt; in that layer. When you have lots of packages, this can noticeably increase your final Docker image size.&lt;/p&gt;

&lt;h3&gt;
  
  
  Solution to &lt;code&gt;DL3042&lt;/code&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; python:3.11&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;pip3 &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--no-cache-dir&lt;/span&gt; mysql-connector-python
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We don't need a cache directory for our &lt;code&gt;pip&lt;/code&gt; packages because we don't need to reinstall packages when building a Docker image. The Docker layer cache can be used instead. Turning off the cache directory makes our final image smaller.&lt;/p&gt;
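An alternative to disabling the cache entirely is a BuildKit cache mount, which keeps pip's cache out of the image layers while still reusing it across builds. A sketch; this requires BuildKit:

```dockerfile
FROM python:3.11

# The cache mount is available during this RUN step but is not
# written into the image layer, so the final image stays small
# while repeated builds still reuse downloaded wheels.
RUN --mount=type=cache,target=/root/.cache/pip \
    pip3 install mysql-connector-python
```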

&lt;h2&gt;
  
  
  5. Remove the &lt;code&gt;apt-get&lt;/code&gt; lists after installing packages
&lt;/h2&gt;

&lt;p&gt;As we explored in our post around &lt;a href="https://depot.dev/blog/how-to-reduce-your-docker-image-size"&gt;reducing Docker image sizes&lt;/a&gt;, keeping container image sizes down often comes back to the actual &lt;code&gt;docker build&lt;/code&gt; process. Hadolint error &lt;code&gt;DL3009&lt;/code&gt; is present in &lt;strong&gt;16% of all Dockerfiles&lt;/strong&gt;. The issue arises when we're not removing the &lt;code&gt;apt-get&lt;/code&gt; lists after installing packages. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; ubuntu:22.04&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; some-package &lt;span class="nt"&gt;--no-install-recommends&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our earlier example for &lt;code&gt;DL3015&lt;/code&gt;, shown here, can be optimized further to keep the final image size down. If we don't clean up the &lt;code&gt;apt-get&lt;/code&gt; cache, it gets written into the layer for that &lt;code&gt;RUN&lt;/code&gt; statement, taking up valuable space in our final image.&lt;/p&gt;

&lt;h3&gt;
  
  
  Solution to &lt;code&gt;DL3009&lt;/code&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; ubuntu:22.04&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; some-package &lt;span class="nt"&gt;--no-install-recommends&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    apt-get clean &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; /var/lib/apt/lists/&lt;span class="k"&gt;*&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, we are combining the installation of &lt;code&gt;some-package&lt;/code&gt; with the clean-up of the &lt;code&gt;apt-get&lt;/code&gt; cache so that installation and clean-up happen in a single atomic &lt;code&gt;RUN&lt;/code&gt; statement. This keeps the final image size down by keeping the &lt;code&gt;apt-get&lt;/code&gt; cache out of the final image, without introducing another layer.&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Make use of &lt;code&gt;WORKDIR&lt;/code&gt; instead of &lt;code&gt;RUN cd some-path&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;Another common Dockerfile linter issue is &lt;code&gt;DL3003&lt;/code&gt;, using &lt;code&gt;RUN cd&lt;/code&gt; instead of the &lt;code&gt;WORKDIR&lt;/code&gt; statement. This is present in &lt;strong&gt;14% of all Dockerfiles&lt;/strong&gt;. Here is a typical example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; ubuntu:22.04&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; /usr/src/app &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; git clone git@github.com:depot/some-repo.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each &lt;code&gt;RUN&lt;/code&gt; statement executes inside its own shell, so a &lt;code&gt;cd&lt;/code&gt; in one &lt;code&gt;RUN&lt;/code&gt; doesn't carry over to the next, and most commands can work with absolute paths anyway.&lt;/p&gt;

&lt;h3&gt;
  
  
  Solution to &lt;code&gt;DL3003&lt;/code&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; ubuntu:22.04&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /usr/src/app&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;git clone git@github.com:depot/some-repo.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When changing directories, you can use the &lt;code&gt;WORKDIR&lt;/code&gt; statement, which spawns the shell in your specified directory. The only exception is when you need to do something inside the subshell; in that scenario, you still need to use &lt;code&gt;cd&lt;/code&gt;.&lt;/p&gt;
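&lt;p&gt;As a sketch of that exception, a &lt;code&gt;cd&lt;/code&gt; can be scoped to a subshell for a single command while &lt;code&gt;WORKDIR&lt;/code&gt; still governs the rest of the &lt;code&gt;RUN&lt;/code&gt; statement (the repository is the same illustrative one as above):&lt;/p&gt;

```dockerfile
FROM ubuntu:22.04
WORKDIR /usr/src/app
# The cd only applies inside the ( ... ) subshell;
# the RUN itself stays in /usr/src/app
RUN git clone git@github.com:depot/some-repo.git &amp;&amp; \
    (cd some-repo &amp;&amp; git log -1 --oneline)
```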

&lt;h2&gt;
  
  
  7. Pin versions when installing packages via &lt;code&gt;pip&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;Like &lt;code&gt;DL3008&lt;/code&gt;, the Dockerfile linter issue &lt;code&gt;DL3013&lt;/code&gt; is the same idea but applied to &lt;code&gt;pip install&lt;/code&gt; instead of &lt;code&gt;apt-get install&lt;/code&gt;. This is present in &lt;strong&gt;13% of all Dockerfiles&lt;/strong&gt;. Here is a typical example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; python:3.11&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;pip3 &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--no-cache-dir&lt;/span&gt; mysql-connector-python
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you don't pin a version, the &lt;code&gt;docker build&lt;/code&gt; isn't forced to fetch a specific, known version of the package. As we saw with &lt;code&gt;DL3008&lt;/code&gt;, this can lead to unexpected behavior if a build installs a different version than the one we originally developed against when we created the Dockerfile.&lt;/p&gt;

&lt;h3&gt;
  
  
  Solution to &lt;code&gt;DL3013&lt;/code&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; python:3.11&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;pip3 &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--no-cache-dir&lt;/span&gt; mysql-connector-python&lt;span class="o"&gt;==&lt;/span&gt;8.1.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By pinning the version of &lt;code&gt;mysql-connector-python&lt;/code&gt;, the &lt;code&gt;docker build&lt;/code&gt; is forced to retrieve the particular version regardless of what may be in the Docker layer cache.&lt;/p&gt;

&lt;h2&gt;
  
  
  8. Use JSON notation for &lt;code&gt;CMD&lt;/code&gt; and &lt;code&gt;ENTRYPOINT&lt;/code&gt; arguments
&lt;/h2&gt;

&lt;p&gt;This Dockerfile lint error, &lt;code&gt;DL3025&lt;/code&gt;, comes down to correctness when running the image. It's present in &lt;strong&gt;12% of all Dockerfiles&lt;/strong&gt;. Here are typical examples for both statements where this comes up:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; ubuntu:22.04&lt;/span&gt;
&lt;span class="k"&gt;ENTRYPOINT&lt;/span&gt;&lt;span class="s"&gt; foo run-server&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; ubuntu:22.04&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; foo run-server&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When we don't use JSON notation for &lt;code&gt;CMD&lt;/code&gt; and &lt;code&gt;ENTRYPOINT&lt;/code&gt; arguments, Docker wraps the command in &lt;code&gt;/bin/sh -c&lt;/code&gt;, so the shell, rather than the referenced executable, becomes PID 1, and your process won't receive signals from the OS correctly. This is particularly relevant when a running container is told it is being shut down (i.e., via a &lt;code&gt;SIGTERM&lt;/code&gt;).&lt;/p&gt;

&lt;h3&gt;
  
  
  Solution to &lt;code&gt;DL3025&lt;/code&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; ubuntu:22.04&lt;/span&gt;
&lt;span class="k"&gt;ENTRYPOINT&lt;/span&gt;&lt;span class="s"&gt; ["foo", "run-server"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; ubuntu:22.04&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["foo", "run-server"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By using JSON notation, the executable is the container's PID 1 and, therefore, receives signals from the OS. Two additional things to note about this notation:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In JSON notation, &lt;code&gt;CMD&lt;/code&gt; doesn't expand environment variables (i.e., &lt;code&gt;$FOO_BAR&lt;/code&gt;), because no &lt;code&gt;sh -c&lt;/code&gt; shell is invoked the way it is in shell form. So, we must handle environment variables ourselves outside the &lt;code&gt;CMD&lt;/code&gt; statement.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;code&gt;CMD&lt;/code&gt; statement is parsed as a JSON array, so we &lt;strong&gt;must use double quotes ("") instead of single quotes ('')&lt;/strong&gt; to correctly pass our arguments.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
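&lt;p&gt;If you do need environment variable expansion at start time, one common pattern is to invoke the shell explicitly in the exec form and &lt;code&gt;exec&lt;/code&gt; the real process so it still becomes PID 1. A sketch, where &lt;code&gt;foo&lt;/code&gt; and &lt;code&gt;PORT&lt;/code&gt; are placeholders:&lt;/p&gt;

```dockerfile
FROM ubuntu:22.04
ENV PORT=8080
# The shell expands $PORT, then exec replaces the shell with foo, so foo is PID 1
ENTRYPOINT ["/bin/sh", "-c", "exec foo --port $PORT"]
```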

&lt;h2&gt;
  
  
  9. Use &lt;code&gt;apt-get&lt;/code&gt; or &lt;code&gt;apt-cache&lt;/code&gt; instead of the user facing &lt;code&gt;apt&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;apt&lt;/code&gt; command is meant to be an end-user tool and isn't intended for use in Dockerfile &lt;code&gt;RUN&lt;/code&gt; statements. So, &lt;code&gt;DL3027&lt;/code&gt; flags this Dockerfile lint error when we use &lt;code&gt;apt&lt;/code&gt; instead of &lt;code&gt;apt-get&lt;/code&gt; or &lt;code&gt;apt-cache&lt;/code&gt;. This is present in &lt;strong&gt;9% of all Dockerfiles&lt;/strong&gt;. Here is a typical example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; ubuntu:22.04&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;apt &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; some-package&lt;span class="o"&gt;=&lt;/span&gt;1.2.&lt;span class="k"&gt;*&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Solution to &lt;code&gt;DL3027&lt;/code&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; ubuntu:22.04&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; some-package&lt;span class="o"&gt;=&lt;/span&gt;1.2.&lt;span class="k"&gt;*&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The interface of &lt;code&gt;apt&lt;/code&gt; is not guaranteed across versions by Linux distributions. So it's better to use &lt;code&gt;apt-get&lt;/code&gt; or &lt;code&gt;apt-cache&lt;/code&gt;, which are more stable.&lt;/p&gt;

&lt;h2&gt;
  
  
  10. Pin versions when installing packages via &lt;code&gt;apk add&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;As we've seen in &lt;code&gt;DL3008&lt;/code&gt; and &lt;code&gt;DL3013&lt;/code&gt;, pinning versions is also important for &lt;code&gt;apk add&lt;/code&gt; in Alpine-based Dockerfiles. This is present in &lt;strong&gt;8% of all Dockerfiles&lt;/strong&gt;. Here is a typical example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; alpine:3.7&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;apk &lt;span class="nt"&gt;--no-cache&lt;/span&gt; add some-package
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Solution to &lt;code&gt;DL3018&lt;/code&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; alpine:3.7&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;apk &lt;span class="nt"&gt;--no-cache&lt;/span&gt; add some-package&lt;span class="o"&gt;=&lt;/span&gt;~1.2.3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The rationale is the same: version pinning forces the &lt;code&gt;docker build&lt;/code&gt; to fetch the pinned version regardless of what may be in the Docker layer cache. An important thing to note for Alpine-based images is that we are using partial pinning here via the &lt;code&gt;~&lt;/code&gt; syntax. We could pin to an exact version via &lt;code&gt;some-package=1.2.3&lt;/code&gt;, but that will fail the build if that exact version is removed from the Alpine package repositories.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this post, we looked at the top 10 most common Dockerfile linting issues we see as builds flow through Depot. As we saw, they vary in severity and impact, but fixing any of them can improve your Dockerfiles and your builds. Each fix also comes with its own trade-offs.&lt;/p&gt;

&lt;p&gt;For example, pinning versions can guarantee a specific state when building Docker images but have the downside of potentially missing security updates. Or using &lt;code&gt;--no-install-recommends&lt;/code&gt; can avoid making your image bigger for dependencies you don't need or use. But it can also mean you miss a dependency that you need.&lt;/p&gt;

&lt;p&gt;This post has given you some ideas on improving your Dockerfiles and your builds via linting. If you want to learn more about how Depot can help you improve your Dockerfiles on-demand, check out our &lt;a href="https://depot.dev/blog/dockerfile-lint-on-build"&gt;recent post on linting and building Dockerfiles&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you're looking to make your Docker image build process faster either for native Intel or Arm images, &lt;a href="https://depot.dev/start"&gt;sign up for an account&lt;/a&gt; and give things a try. We make it easy to run your first build with either &lt;code&gt;docker build&lt;/code&gt; or &lt;code&gt;depot build&lt;/code&gt; via our &lt;a href="https://depot.dev/docs/quickstart"&gt;quickstart guide&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>devops</category>
    </item>
    <item>
      <title>How to clear Docker cache and free up space on your system</title>
      <dc:creator>Kyle Galbraith</dc:creator>
      <pubDate>Tue, 08 Aug 2023 14:33:59 +0000</pubDate>
      <link>https://forem.com/kylegalbraith/how-to-clear-docker-cache-and-free-up-space-on-your-system-72f</link>
      <guid>https://forem.com/kylegalbraith/how-to-clear-docker-cache-and-free-up-space-on-your-system-72f</guid>
      <description>&lt;p&gt;Docker persists build cache, containers, images, and volumes to disk. Over time, these things can take up a lot of space on your system, either locally or in CI. In this post, we'll look at the different Docker artifacts that can take up space on your system, how to clear them individually, and how to use &lt;code&gt;docker system prune&lt;/code&gt; to clear Docker cache.&lt;/p&gt;

&lt;h2&gt;
  
  
  A short refresher on Docker caching
&lt;/h2&gt;

&lt;p&gt;Docker uses &lt;strong&gt;layer caching&lt;/strong&gt; to reuse previously computed build results. Each instruction in a Dockerfile is associated with a layer that contains the changes caused by executing that instruction. If previous layers, as well as any inputs to an instruction, haven't changed, and the instruction has already been run and cached previously, Docker will use the cached layer for it. Otherwise, Docker will rebuild that layer and all layers that follow it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdepot.dev%2Fimages%2Fdocker-cache-image1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdepot.dev%2Fimages%2Fdocker-cache-image1.png" alt="Dockerfile lines map to hashes that are either present in the cache or will need to be recomputed."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The Docker layers for which the hash of the inputs (such as source code files on disk or the parent layer) haven't changed get loaded from the cache and reused. For layers where the hash of inputs has changed, the layers get recomputed.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Using a cached layer is much faster than recomputing an instruction from scratch. So, generally, you want as much of your Docker build as possible to come from the cache and to only rebuild layers that have changed since the last build.&lt;/p&gt;

&lt;p&gt;One of the main factors that affects how many of the layers in your image need to be rebuilt is the &lt;a href="https://depot.dev/blog/fast-dockerfiles-theory-and-practice#an-example-with-nodejs" rel="noopener noreferrer"&gt;ordering of operations&lt;/a&gt; in your Dockerfile.&lt;/p&gt;
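&lt;p&gt;As a sketch of cache-friendly ordering (assuming a Node.js app; the file names are illustrative), copying only the dependency manifests before the rest of the source keeps the expensive install layer cached when only application code changes:&lt;/p&gt;

```dockerfile
FROM node:20
WORKDIR /app
# Manifests change rarely, so this layer and the npm ci layer below stay cached
COPY package.json package-lock.json ./
RUN npm ci
# Application code changes often; only the layers from here down are rebuilt
COPY . .
CMD ["node", "server.js"]
```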

&lt;h2&gt;
  
  
  How much disk space is Docker using?
&lt;/h2&gt;

&lt;p&gt;The first step is knowing the disk usage of Docker. We can use the &lt;code&gt;docker system df&lt;/code&gt; command to get a breakdown of how much disk space is being taken up by various artifacts.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker system &lt;span class="nb"&gt;df
&lt;/span&gt;TYPE            TOTAL     ACTIVE    SIZE      RECLAIMABLE
Images          138       34        36.18GB   34.15GB &lt;span class="o"&gt;(&lt;/span&gt;94%&lt;span class="o"&gt;)&lt;/span&gt;
Containers      74        18        834.8kB   834.6kB &lt;span class="o"&gt;(&lt;/span&gt;99%&lt;span class="o"&gt;)&lt;/span&gt;
Local Volumes   118       6         15.31GB   15.14GB &lt;span class="o"&gt;(&lt;/span&gt;98%&lt;span class="o"&gt;)&lt;/span&gt;
Build Cache     245       0         1.13GB    1.13GB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Docker uses 36.18 GB for images, 834.8 kB for containers, 15.31 GB for local volumes, and 1.13 GB for the Docker build cache. This comes to roughly 52 GB of space in total, and a large chunk of it is reclaimable.&lt;/p&gt;

&lt;h2&gt;
  
  
  What space can we claim back without affecting Docker build performance?
&lt;/h2&gt;

&lt;p&gt;It's generally quite safe to remove unused Docker images and layers — unless you are building in CI. For CI, clearing the layers might affect performance, so it's better not to do it. Instead, jump to our CI-focused section.&lt;/p&gt;

&lt;h2&gt;
  
  
  Removing containers from the Docker cache
&lt;/h2&gt;

&lt;p&gt;We can use the &lt;code&gt;docker container prune&lt;/code&gt; command to clear the disk space used by containers. This command will remove all stopped containers from the system.&lt;/p&gt;

&lt;p&gt;We can omit the &lt;code&gt;-f&lt;/code&gt; flag here and in subsequent examples to get a confirmation prompt before artifacts are removed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker container prune &lt;span class="nt"&gt;-f&lt;/span&gt;
Deleted Containers:
399d7e3679bf9b14a1c7045cc89c056f2efe31d0a32f186c5e9cb6ebbbf42c8e

Total reclaimed space: 834.6kB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Which containers are unused?
&lt;/h3&gt;

&lt;p&gt;We can see the IDs of unused containers by running the &lt;code&gt;docker ps&lt;/code&gt; command with filters on the status of the container. A container is unused if it has a status of &lt;code&gt;exited&lt;/code&gt; or &lt;code&gt;dead&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker ps &lt;span class="nt"&gt;--filter&lt;/span&gt; &lt;span class="nv"&gt;status&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;exited &lt;span class="nt"&gt;--filter&lt;/span&gt; &lt;span class="nv"&gt;status&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;dead &lt;span class="nt"&gt;-q&lt;/span&gt;
11bc2aa92622
355901f38ecb
263e9bde1f24
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If we want to know the size of the unused container, we can replace the &lt;code&gt;-q&lt;/code&gt; flag with &lt;code&gt;-s&lt;/code&gt; to get the size and other metadata about the container.&lt;/p&gt;

&lt;h3&gt;
  
  
  Removing all containers
&lt;/h3&gt;

&lt;p&gt;If we want to remove all containers from the system, we can stop any running containers and then use the same prune command. Feed the output of &lt;code&gt;docker ps -q&lt;/code&gt; into the &lt;code&gt;docker stop&lt;/code&gt; command, or into &lt;code&gt;docker kill&lt;/code&gt; if you want to stop the containers forcibly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker stop &lt;span class="si"&gt;$(&lt;/span&gt;docker ps &lt;span class="nt"&gt;-q&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
docker container prune
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Another option is the &lt;code&gt;docker rm&lt;/code&gt; command, which can be used with &lt;code&gt;docker ps -a -q&lt;/code&gt; to remove all containers.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;docker ps &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="nt"&gt;-q&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The &lt;code&gt;docker rm&lt;/code&gt; command only removes stopped containers; with the &lt;code&gt;-f&lt;/code&gt; flag, it forces the removal of a running container via a &lt;code&gt;SIGKILL&lt;/code&gt; signal, the same signal &lt;code&gt;docker kill&lt;/code&gt; sends. The &lt;code&gt;docker ps -a -q&lt;/code&gt; command lists the IDs of all containers on the system, including running ones, and feeds them into the &lt;code&gt;docker rm&lt;/code&gt; command.&lt;/p&gt;

&lt;h2&gt;
  
  
  Removing images
&lt;/h2&gt;

&lt;p&gt;Docker images can take up a significant amount of disk space. We accumulate new images when base images change or build new ones via &lt;code&gt;docker build&lt;/code&gt;, for example. We can use the &lt;code&gt;docker image prune&lt;/code&gt; command to remove unused images from the system.&lt;/p&gt;

&lt;p&gt;By default, it only removes dangling images, which are not associated with any container and don't have tags.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker image prune &lt;span class="nt"&gt;-f&lt;/span&gt;
Deleted Images:
deleted: sha256:6f096c9fa1568f7566d4acaf57d20383851bcc433853df793f404375c8d975d6
...

Total reclaimed space: 2.751GB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We reclaimed over 2.7 GB of space by removing dangling images. But, if we recall from the output of our &lt;code&gt;docker system df&lt;/code&gt; command, we have 34.15 GB of reclaimable images.&lt;/p&gt;

&lt;p&gt;Where is the rest of that space coming from? These are images on our system that are tagged or associated with a container. We can run the &lt;code&gt;docker image prune -a&lt;/code&gt; command to force the removal of these images as well, assuming they're unused.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker image prune &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt;
Deleted Images:
untagged: k8s.gcr.io/etcd:3.4.13-0
untagged: k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2
deleted: sha256:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934
deleted: sha256:f3cecccfe2bea1cbd18db5eae847c3a9c8253663bf30a41288f541dc1470b41e

Total reclaimed space: 22.66GB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this way, we remove all unused images not associated with a container, not just the dangling ones.&lt;/p&gt;

&lt;h2&gt;
  
  
  Removing volumes
&lt;/h2&gt;

&lt;p&gt;Volumes are never cleaned up automatically in Docker because they could contain valuable data. But, if we know that we no longer need the data in a volume, we can remove it with the &lt;code&gt;docker volume prune&lt;/code&gt; command. This removes all anonymous volumes not used by any containers.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker volume prune &lt;span class="nt"&gt;-f&lt;/span&gt;
Total reclaimed space: 0B
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Interestingly, we see that we didn't reclaim any space. This is because we have volumes that are associated with containers. We can see these volumes by running the &lt;code&gt;docker volume ls&lt;/code&gt; command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;DRIVER    VOLUME NAME
&lt;span class="nb"&gt;local     &lt;/span&gt;0a44f085adc881ac9bb9cdcd659c28910b11fdf4c07aa4c38d0cca21c76d4ac4
&lt;span class="nb"&gt;local     &lt;/span&gt;0d3ee99b36edfada7834044f2caa063ac8eaf82b0dda8935ae9d8be2bffe404c
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We get an output that shows the driver and the volume name. The &lt;code&gt;docker volume prune&lt;/code&gt; command only removes anonymous volumes — those that aren't named and don't have a specific source outside the container. We can add the &lt;code&gt;-a&lt;/code&gt; flag, &lt;code&gt;docker volume prune -a&lt;/code&gt;, to remove named volumes as well.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker volume prune &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt;
Deleted Volumes:
c0c240b680d70fffef420b8699eeee3f0a49eec4cc55706036f38135ae121be0
2ce324adb91e2e6286d655b7cdaaaba4b6b363770d01ec88672e26c3e2704d9e

Total reclaimed space: 15.31GB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Removing build cache
&lt;/h2&gt;

&lt;p&gt;To remove the Docker build cache, we can run the &lt;code&gt;docker buildx prune&lt;/code&gt; command to clear the build cache of the default builder.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker buildx prune &lt;span class="nt"&gt;-f&lt;/span&gt;
ID                                        RECLAIMABLE   SIZE        LAST ACCESSED
pw11qgl0xs4zwy533i2x61pef&lt;span class="k"&gt;*&lt;/span&gt;                &lt;span class="nb"&gt;true          &lt;/span&gt;54B         12 days ago
y37tt0kfwn1px9fnjqwxk7dnk                 &lt;span class="nb"&gt;true          &lt;/span&gt;0B          12 days ago
sq3f8r0qrqh4rniemd396s5gq&lt;span class="k"&gt;*&lt;/span&gt;                &lt;span class="nb"&gt;true          &lt;/span&gt;154.1kB     12 days ago

Total:  5.806GB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If we want to remove the build cache for a specific builder, we can use the &lt;code&gt;--builder&lt;/code&gt; flag to specify the builder name.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker buildx prune &lt;span class="nt"&gt;--builder&lt;/span&gt; builder-name &lt;span class="nt"&gt;-f&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Removing networks
&lt;/h2&gt;

&lt;p&gt;While Docker networks don't take up disk space on our machine, they do create network bridges, iptables rules, and routing table entries. So, similarly to the other artifacts, we can clean these up by removing unused networks with the &lt;code&gt;docker network prune&lt;/code&gt; command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker network prune &lt;span class="nt"&gt;-f&lt;/span&gt;
Deleted Networks:
test-network-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Removing everything with &lt;code&gt;docker system prune&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;We can remove all unused artifacts Docker has produced by running &lt;code&gt;docker system prune&lt;/code&gt;. This will remove all unused containers, images, networks, and build cache.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker system prune &lt;span class="nt"&gt;-f&lt;/span&gt;
Deleted Images:
deleted: sha256:93477d5bde9ef0d3d7d6d2054dc58cbce1c1ca159a7b33a7b9d23cd1fe7436a3

Deleted build cache objects:
6mm1daa19k1gdijlde3l2bidb
vq294gub98yx8mjgwila989k1
xd2x5q3s6c6dh5y9ruazo4dlm

Total reclaimed space: 419.6MB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By default, this command will not remove volumes and only removes dangling Docker images. We can use the &lt;code&gt;--volumes&lt;/code&gt; flag to remove volumes as well. We can also add the &lt;code&gt;-a&lt;/code&gt; flag again to remove all images not associated with a container.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker system prune &lt;span class="nt"&gt;--volumes&lt;/span&gt; &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Managing Docker build cache in CI
&lt;/h2&gt;

&lt;p&gt;If you are building Docker images in a CI environment, you can, of course, use the above commands as well. However, your builds might not be fully taking advantage of the Docker build cache for the following structural reasons:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In a CI environment with ephemeral runners, such as GitHub Actions or GitLab CI, the build cache isn't persisted across builds unless you save and load it over the network to somewhere off the runner. That transfer is often slow — sometimes slow enough to cancel out the benefit of caching.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Hosted CI runners are ephemeral by default unless you run your own.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If there is disk space, it's usually capped at 10 to 15 GB. Thus, if you're building large images with large layers, you will likely exhaust it.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Even if you are building very small images and keep only essential layers in the Docker cache, your builds on ephemeral runners will likely not use the cache and thus be quite slow, as recomputing each layer on every build can take a while.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdepot.dev%2Fimages%2Fdocker-cache-image2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdepot.dev%2Fimages%2Fdocker-cache-image2.png" alt="A diagram showing how, with ephemeral CI runners, the total length of loading cache, Docker build with cache, and saving cache can be similar to running a Docker build without cache."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Loading and saving cache from ephemeral CI runners via the network can take a considerable amount of time, negating the benefits of caching compared with always rebuilding all layers from scratch.&lt;/em&gt;&lt;/p&gt;
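&lt;p&gt;One partial mitigation on ephemeral runners is BuildKit's remote cache backends, which push the layer cache to a registry or to the CI provider's cache service, though they still pay the network transfer cost. A sketch of a GitHub Actions step using the &lt;code&gt;gha&lt;/code&gt; cache backend (the image tag is a placeholder):&lt;/p&gt;

```yaml
# Build with docker/build-push-action, persisting layer cache in the GitHub Actions cache
- uses: docker/build-push-action@v6
  with:
    push: false
    tags: app:latest
    cache-from: type=gha
    cache-to: type=gha,mode=max
```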

&lt;p&gt;Then how can you optimize the use of the Docker cache in CI systems?&lt;/p&gt;

&lt;p&gt;Consider using Depot. Depot automatically persists the cache across builds on a real SSD disk. This makes Docker builds that use Depot up to twenty times faster than building Docker images on CI without it. With Depot, you build with a full Docker cache without the network overhead.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdepot.dev%2Fimages%2Fdocker-cache-image3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdepot.dev%2Fimages%2Fdocker-cache-image3.png" alt="A diagram showing how, with Depot, the time it takes to load and save the cache is zero, and the Docker build part is still fast."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Compared with network-based caching systems, Depot relies on fast SSDs to make the cache instantly available for Docker builds.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Adding Depot to your build only takes a few minutes — use the &lt;em&gt;depot&lt;/em&gt; CLI as a drop-in replacement for the &lt;em&gt;docker&lt;/em&gt; CLI, or use an environment variable without changing the rest of the configuration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://depot.dev/docs/guides/continuous-integration" rel="noopener noreferrer"&gt;Learn more about using Depot in a CI environment →&lt;/a&gt;&lt;/p&gt;

</description>
      <category>docker</category>
      <category>containers</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
